This contains various commands and information that I find useful for AWS work.
UI
Backup instance manually
- Go to your instance
- Right click and select Image from the dropdown
- Click Create Image
- Give your backup a name and description
- Click No reboot if you want your instance to stay in a running state
- Click Create Image
- At this point you should be able to find the AMI that is associated with your backup under AMIs. Give the AMI a more descriptive name if you'd like.
Resource: https://n2ws.com/blog/how-to-guides/automate-amazon-ec2-instance-backup
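If you'd rather script this, the same backup can be taken with the CLI; the instance ID, image name, and description below are placeholders:
aws ec2 create-image --instance-id i-xxxxxxxx --name "my-backup" --description "Manual backup" --no-reboot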
Parameter Store location
- Login
- Search for Systems Manager
- Click on Parameter Store in the menu on the left-hand side
EC2
Use env vars
Run the following with the proper values:
export AWS_ACCESS_KEY_ID=AKIAI44QH8DHBEXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Set up credentials file
You can run aws configure if you want a guided setup. Alternatively, you can add the following to ~/.aws/credentials:
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
If you don't opt for the guided setup, don't forget to set the region in ~/.aws/config:
[default]
region = <YOUR_REGION>
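Once that's in place, a quick way to verify the credentials work is:
aws sts get-caller-identity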
Multiple profiles
Your credentials file will probably look something like this:
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
[notdefault]
aws_access_key_id = <YOUR_OTHER_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_OTHER_SECRET_ACCESS_KEY>
To use the notdefault profile, run the following command:
export AWS_PROFILE=notdefault
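Alternatively, most commands accept a --profile flag directly, for example:
aws s3 ls --profile notdefault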
Use temp credentials
Add the following to your credentials file:
[temp]
aws_access_key_id = <YOUR_TEMP_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_TEMP_SECRET_ACCESS_KEY>
aws_session_token = <YOUR_SESSION_TOKEN>
Then run this command:
export AWS_PROFILE=temp
Resource: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html
Use env vars for temp credentials
Run the following with the proper values:
export AWS_ACCESS_KEY_ID=AKIAI44QH8DHBEXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_SESSION_TOKEN=AQoDYXdzEJr...<remainder of security token>
Resource: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html
List instances
aws ec2 describe-instances
Get number of instances
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId]' --output text | wc -l
Resource: https://stackoverflow.com/questions/40164786/determine-how-many-aws-instances-are-in-a-zone
Reboot all instances in a region
aws ec2 reboot-instances --instance-ids $(aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId" | jq '.[]|.[0]' -r)
Assign an elastic IP to an instance
aws ec2 associate-address --allocation-id eipalloc-<eip id> --instance-id <the instance id>
Create instance with a tag
aws ec2 run-instances --image-id ami-xxxxxxx --count 1 --instance-type t2.medium --key-name MyKeyPair --security-group-ids sg-xxxxxx --subnet-id subnet-xxxxxx --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-test-instance}]'
Create instance using security group name
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups MySecurityGroup
List instances with filtering
This example in particular will get you all your m1.micro instances.
aws ec2 describe-instances --filters "Name=instance-type,Values=m1.micro"
List instance by instance id
aws ec2 describe-instances --instance-ids i-xxxxx
Destroy instance
aws ec2 terminate-instances --instance-ids <instance id(s)>
If you want to terminate multiple instances, separate the IDs with spaces:
id1 id2 id3
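For example, to terminate three instances at once (hypothetical IDs):
aws ec2 terminate-instances --instance-ids i-aaaaaaaa i-bbbbbbbb i-cccccccc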
Get info about a specific AMI by name and output to JSON
aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --output json
Get AMI id with some python
This uses the run_cmd() function found in /python-notes/.
import json

def get_ami_id(ec2_output):
    # Parse the describe-images output and grab the first image's ID
    return json.loads(ec2_output.decode('utf-8'))['Images'][0]['ImageId']

ec2_output = run_cmd('aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --output json')
ami_id = get_ami_id(ec2_output)
print(ami_id)
Deregister an AMI
aws ec2 deregister-image --image-id <ami id>
Get list of all instances with the state terminated
aws ec2 describe-instances --filters "Name=instance-state-name,Values=terminated"
Alternatively, if you want running instances, change Values=terminated to Values=running.
Get info about an AMI by product-code
aws --region <region> ec2 describe-images --owners aws-marketplace --filters Name=product-code,Values=<product code>
This is useful if you have the product code and want more information (like the image ID). For CentOS, you can get the product code here. I started down this path when I was messing around with the code in this gist for automatically creating encrypted AMIs.
Resize ec2 instance
https://medium.com/@kenichishibata/resize-aws-ebs-4d6e2bf00feb
Show available subnets
aws ec2 describe-subnets
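If you only want a few fields, you can trim the output with a --query; for example, subnet ID, VPC, and CIDR block:
aws ec2 describe-subnets --query 'Subnets[*].[SubnetId,VpcId,CidrBlock]' --output table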
CodeBuild
Pretty decent, relatively up-to-date tutorial on using CodeBuild and CodeCommit to autobuild AMIs: https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer/
If you want to use the gist mentioned above to create encrypted AMIs, be sure to specify aws_region, aws_vpc, aws_subnet, and ssh_username in the variables section.
CodeCommit
You like the idea of CodeCommit? You know, having git repos that are accessible via IAM?
How about using it in your EC2 instances without needing to store credentials? Really cool idea, right? Bet it's pretty easy to set up too, huh? Ha!
Well, it actually is, as long as you know what to do. Here we go:
Build the proper IAM role
- Login to the UI
- Click on IAM
- Click Roles
- Click Create role
- Click EC2, then click Next: Permissions
- Search for CodeCommit, check the box next to AWSCodeCommitReadOnly
- Click Next: Tags
- Give it some tags if you'd like, click Next: Review
- Specify a Role name, like CodeCommit-Read
- Click Create role
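If you'd rather create the role from the CLI, something like the following should work; trust-policy.json here is just the standard EC2 trust policy written to a local file:
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name CodeCommit-Read --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name CodeCommit-Read --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitReadOnly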
Now we're cooking. Let's test it out by building an instance and not forgetting to assign it the CodeCommit-Read IAM role. You can figure this part out.
Cloning into a repo
Once you've got a working instance:
- SSH into the instance and become root:
sudo su
- Install the awscli with pip:
pip install awscli
- Run the following commands, being sure to change the region to match the one you're working with:
git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.helper '!aws --profile default codecommit credential-helper $@'
git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.UseHttpPath true
aws configure set region us-west-2
At this point, you should be able to clone your repo: git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/GREATREPONAME
Resources:
https://jameswing.net/aws/codecommit-with-ec2-role-credentials.html
https://stackoverflow.com/questions/46164223/aws-pull-latest-code-from-codecommit-on-ec2-instance-startup - This site got me to the above site, but had incomplete information for their proposed solution.
Integrating this in with CodeBuild
To get this to work with CodeBuild for automated and repeatable builds, I needed to do a few other things. Primarily, I had to take advantage of the Parameter Store. When I was trying to build initially, my buildspec.yml looked something like this (basically emulating the one found here):
---
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.1.1/packer_1.1.1_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating kali.json"
      - ./packer validate kali.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI > aws_credentials.json
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, kali.json"
      - ./packer build kali.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"
However, I was getting an obscure error message about authentication, and spent several hours messing around with IAM roles without any luck. Eventually, I decided to try throwing a "parameter" in for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. This worked great, but I noticed that whenever I tried the build again, I would run into the same issue as before. To fix it, I had to modify the buildspec.yml to look like this (obviously the values you have for your parameter store may vary depending on what you set for them):
---
version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID: "/CodeBuild/AWS_ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "/CodeBuild/AWS_SECRET_ACCESS_KEY"

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.1.1/packer_1.1.1_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating kali.json"
      - ./packer validate kali.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI > aws_credentials.json
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, kali.json"
      - ./packer build kali.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"
At this point, everything is working consistently with the IAM role mentioned previously being specified in the packer file (this is a snippet):
"variables": {
    "iam_role": "CodeCommit-Read"
},
"builders": [{
    "iam_instance_profile": "{{user `iam_role` }}",
}],
SSM
Get the information for a particular parameter (this will give you the encrypted value if the parameter is a SecureString):
aws ssm get-parameter --name <nameofparam>
List parameters
aws ssm describe-parameters
Access a parameter
aws ssm get-parameter --name /path/to/parameter
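If the parameter is a SecureString and you want the decrypted value, add the --with-decryption flag:
aws ssm get-parameter --name /path/to/parameter --with-decryption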
Miscellaneous
Encrypt your pem file:
openssl rsa -des3 -in key.pem -out encrypted-key.pem
# Enter the pass phrase you've selected
mv encrypted-key.pem key.pem
chmod 400 key.pem
Remove the encryption:
openssl rsa -in key.pem -out key.open.pem
# Enter the pass phrase you've selected
mv key.open.pem key.pem
Resource: https://security.stackexchange.com/questions/59136/can-i-add-a-password-to-an-existing-private-key
Set up aws cli with pipenv on OSX
https://duseev.com/articles/perfect-aws-cli-setup/
S3
List buckets
aws s3 ls
List the contents of a bucket
aws s3 ls s3://target/
Download bucket
aws s3 sync s3://mybucket .
Resource: https://stackoverflow.com/questions/8659382/downloading-an-entire-s3-bucket/55061863
Copy file down
aws s3 cp s3://target/file.html file.html
Copy file up
aws s3 cp TEST s3://target
Copy folder up
aws s3 cp foldertocopy s3://bucket/foldertocopy --recursive
Resource: https://coderwall.com/p/rckamw/copy-all-files-in-a-folder-from-google-drive-to-aws-s3
Cheatsheet
https://linuxacademy.com/blog/amazon-web-services-2/aws-s3-cheat-sheet/
Set up S3 IAM for backup/restore
Storing AWS credentials on an instance to access an S3 bucket can be a bad idea. Let's talk about what we need to do in order to back up and restore stuff from an S3 bucket safely:
Create Policy
- Go to IAM
- Policies
- Create Policy
- Policy Generator, or copy and paste JSON from the interwebs into Create Your Own Policy. This is the one I used:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket name>"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket name>/*"
      ]
    }
  ]
}
Create a Role
- Go to Roles in IAM
- Click Create role
- Select EC2
- Select EC2 again and click Next: Permissions
- Find the policy you created previously
- Click Next: Review
- Give the Role a name and a description, click Create role
Assign the role to your instance
This will be the instance that houses the service that needs to back up to and restore from your S3 bucket.
- In EC2, if the instance is already created, right click it, Instance Settings, Attach/Replace IAM Role
- Specify the IAM role you created previously, click Apply.
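The same attachment can be done from the CLI; this assumes an instance profile with the same name as the role, which the console creates for you automatically:
aws ec2 associate-iam-instance-profile --instance-id i-xxxxxxxx --iam-instance-profile Name=<role name>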
Set up automated expiration of objects
This will ensure that backups don't stick around longer than they need to. You can also set up rules to transfer them to long term storage during this process, but we're not going to cover that here.
From the bucket overview screen:
- Click Management
- Click Add lifecycle rule
- Specify a name, click Next
- Click Next
- Check Current version and Previous versions
- Specify a desired number of days to expiration for both the current version and the previous versions, click Next
- Click Save
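The same kind of rule can also be applied from the CLI with put-bucket-lifecycle-configuration; the bucket name, rule ID, and the 30-day windows below are placeholders:
aws s3api put-bucket-lifecycle-configuration --bucket <bucket name> --lifecycle-configuration '{
  "Rules": [
    {
      "ID": "expire-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 30},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
    }
  ]
}'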
Mount bucket as local directory
Warning, this is painfully slow once you have it set up.
Follow the instructions found on this site.
Then, run this script:
#!/bin/bash
folder="/tmp/folder"

if [ ! -d $folder ]; then
    mkdir $folder
fi

s3fs bucket_name $folder -o passwd_file=${HOME}/.passwd-s3fs -o volname="S3-Bucket"
Copy multiple folders to bucket
aws s3 cp /path/to/dir/with/folders/to/copy s3://bucket/ --recursive --exclude ".git/*"
Resource: https://superuser.com/questions/1497268/selectively-uploading-multiple-folders-to-aws-s3-using-cli
Boto
Create session
from boto3.session import Session

def create_session():
    session = Session(aws_access_key_id=access_key,
                      aws_secret_access_key=secret_key,
                      aws_session_token=session_token)
    return session
Resource: https://stackoverflow.com/questions/30249069/listing-contents-of-a-bucket-with-boto3
List buckets with boto3
def get_s3_buckets(session):
    s3 = session.resource('s3')
    print("Bucket List:")
    for bucket in s3.buckets.all():
        print(bucket.name)
Resource: https://stackoverflow.com/questions/36042968/get-all-s3-buckets-given-a-prefix
Show items in an s3 bucket
def list_s3_bucket_items(session, bucket):
    s3 = session.resource('s3')
    my_bucket = s3.Bucket(bucket)
    for file in my_bucket.objects.all():
        print(file.key)
List Users
import boto3

def get_users(session):
    client = boto3.client('iam', aws_access_key_id=access_key,
                          aws_secret_access_key=secret_key,
                          aws_session_token=session_token)
    users = client.list_users()
    for key in users['Users']:
        print(key['UserName'])
Resource: https://stackoverflow.com/questions/46073435/how-can-we-fetch-iam-users-their-groups-and-policies
Get account id
def sts(session):
    sts_client = boto3.client('sts', aws_access_key_id=access_key,
                              aws_secret_access_key=secret_key,
                              aws_session_token=session_token)
    print(sts_client.get_caller_identity()['Account'])
Create ec2 instance with name
import boto3

EC2_RESOURCE = boto3.resource('ec2')

def create_ec2_instance():
    instance = EC2_RESOURCE.create_instances(
        ImageId='ami-ID_GOES_HERE',
        MinCount=1,
        MaxCount=1,
        InstanceType='t2.micro',
        SecurityGroupIds=["sg-ID_GOES_HERE"],
        KeyName='KEY_NAME_GOES_HERE',
        TagSpecifications=[
            {
                'ResourceType': 'instance',
                'Tags': [
                    {
                        'Key': 'Name',
                        'Value': 'INSTANCE_NAME_HERE'
                    }
                ]
            }
        ]
    )
    return instance[0]
Resources:
https://blog.ipswitch.com/how-to-create-an-ec2-instance-with-python
https://stackoverflow.com/questions/52436835/how-to-set-tags-for-aws-ec2-instance-in-boto3
http://blog.conygre.com/2017/03/27/boto-script-to-launch-an-ec2-instance-with-an-elastic-ip-and-a-route53-entry/
Allocate and associate an elastic IP
import boto3
from botocore.exceptions import ClientError

# Wait for the instance to finish launching before assigning the elastic IP address
print('Waiting for instance to get to a running state, please wait...')
instance.wait_until_running()

EC2_CLIENT = boto3.client('ec2')

try:
    # Allocate an elastic IP
    eip = EC2_CLIENT.allocate_address(Domain='vpc')

    # Associate the elastic IP address with an instance launched previously
    response = EC2_CLIENT.associate_address(
        AllocationId=eip['AllocationId'],
        InstanceId='INSTANCE_ID_GOES_HERE'
    )
    print(response)
except ClientError as e:
    print(e)
Associate an existing elastic IP
EC2_CLIENT.associate_address(
    AllocationId='eipalloc-EXISTING_EIP_ID_GOES_HERE',
    InstanceId='INSTANCE_ID_GOES_HERE'
)
Resource:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/ec2-example-elastic-ip-addresses.html
http://blog.conygre.com/2017/03/27/boto-script-to-launch-an-ec2-instance-with-an-elastic-ip-and-a-route53-entry/
Wait for instance to finish starting
import socket
import time

retries = 10
retry_delay = 10
retry_count = 0

instance[0].wait_until_running()
instance[0].reload()

while retry_count <= retries:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    result = sock.connect_ex((instance[0].public_ip_address, 22))
    if result == 0:
        print(f"The instance is up and accessible on port 22 at {instance[0].public_ip_address}")
        break
    else:
        print("Instance is still coming up, retrying . . . ")
        time.sleep(retry_delay)
        retry_count += 1
Resource:
https://stackoverflow.com/questions/46379043/boto3-wait-until-running-doesnt-work-as-desired
Metadata
Get Credentials
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
Resource:
https://gist.github.com/quiver/87f93bc7df6da7049d41
Get region
curl http://169.254.169.254/latest/dynamic/instance-identity/document
Get role-name
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
The role name will be listed here.
Resource: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
Get Account ID
curl http://169.254.169.254/latest/meta-data/identity-credentials/ec2/info/
Get public hostname
curl 169.254.169.254/latest/meta-data/public-hostname
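Note that if the instance enforces IMDSv2, you'll need to request a session token first and pass it as a header with each metadata request:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/public-hostname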
Show configuration
aws configure list
Get account id
aws sts get-caller-identity | jq '.Account'
Resource: https://shapeshed.com/jq-json/#how-to-find-a-key-and-value
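The same thing without jq, using the CLI's built-in query support:
aws sts get-caller-identity --query Account --output text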
Go SDK
Stand up EC2 Instance
This accounts for the exceptionally annoying error message An error occurred (VPCIdNotSpecified) when calling the RunInstances operation: No default VPC for this user, which has no solutions in sight unless you do some deep diving. Essentially, it means that a default VPC isn't defined, so you need to provide a subnet ID:
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Get credentials from ~/.aws/credentials
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"),
	})
	if err != nil {
		log.Fatal("Could not create session", err)
	}

	// Create EC2 service client
	svc := ec2.New(sess)

	// Specify the details of the instance that you want to create.
	runResult, err := svc.RunInstances(&ec2.RunInstancesInput{
		ImageId:          aws.String("ami-id-here"),
		InstanceType:     aws.String("t2.small"),
		MinCount:         aws.Int64(1),
		MaxCount:         aws.Int64(1),
		SecurityGroupIds: aws.StringSlice([]string{"sg-id-here"}),
		KeyName:          aws.String("keypairname-here"),
		SubnetId:         aws.String("subnet-id-here"),
	})
	if err != nil {
		fmt.Println("Could not create instance", err)
		return
	}

	fmt.Println("Created instance", *runResult.Instances[0].InstanceId)

	// Add tags to the created instance
	_, errtag := svc.CreateTags(&ec2.CreateTagsInput{
		Resources: []*string{runResult.Instances[0].InstanceId},
		Tags: []*ec2.Tag{
			{
				Key:   aws.String("Name"),
				Value: aws.String("GoInstance"),
			},
		},
	})
	if errtag != nil {
		log.Println("Could not create tags for instance", runResult.Instances[0].InstanceId, errtag)
		return
	}

	fmt.Println("Successfully tagged instance")
}
Resources:
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/ec2-example-create-images.html - starting point
https://gist.github.com/stephen-mw/9f289d724c4cfd3c88f2
https://stackoverflow.com/questions/50289221/unable-to-create-ec2-instance-using-boto3 - where I found some of the solution (for boto, which translated to this fortunately)
https://docs.aws.amazon.com/sdk-for-go/api/aws/#StringSlice
Stand up EC2 Instance with lambda
Modify the previous code to make it into a lambda function - find it here.
Next you'll need to get the binary for the function:
env GOOS=linux GOARCH=amd64 go build -o /tmp/main
With that, you'll need to zip it up:
zip -j /tmp/main.zip /tmp/main
At this point, you need to create the iam role:
- Navigate to https://console.aws.amazon.com/iam/home#/roles
- Click Create role
- Click Lambda
- Click Next: Permissions
- Add the following policies:
AmazonEC2FullAccess
AWSLambdaBasicExecutionRole
AWSLambdaVPCAccessExecutionRole
- Click Next: Tags
- Give it a Name tag and click Next: Review
- Give it a Role name such as "LambdaCreateEc2Instance"
- Click Create role
- Once it's completed, click the role and copy the Role ARN
Now, you'll need to run the following command to create the lambda function:
aws lambda create-function --function-name createEc2Instance --runtime go1.x \
--zip-file fileb:///tmp/main.zip --handler main \
--role <Role ARN copied previously>
Lastly, you'll need to populate all of the environment variables. To do this, you can use this script:
aws lambda update-function-configuration --function-name createEc2Instance \
--environment "Variables={AMI=ami-id-here, INSTANCE_TYPE=t2.small, SECURITY_GROUP=sg-id-here, KEYNAME=keypairname-here, SUBNET_ID=subnet-id-here}"
Alternatively, you can set the values in the Lambda UI by clicking Manage environment variables, but this gets very tedious very quickly.
If you want to throw all of this into a Makefile to streamline testing, you could do something like this:
build:
	env GOOS=linux GOARCH=amd64 go build -o /tmp/main

deploy:
	zip -j /tmp/main.zip /tmp/main
	bash scripts/create_function.sh
	bash scripts/create_env_vars.sh

run:
	aws lambda invoke --function-name createEc2Instance /tmp/output.json
Run the whole thing with this command:
make build && make deploy && make run
At this point, you can go ahead and invoke the lambda function to see if everything is working as expected:
aws lambda invoke --function-name createEc2Instance /tmp/output.json
You can confirm this worked by looking at your running EC2 instances and checking whether a new one is spinning up after you invoke the lambda function.
Resources:
https://www.alexedwards.net/blog/serverless-api-with-go-and-aws-lambda#setting-up-the-https-api - doing all the lambda cli stuff and making a lambda function with golang
https://medium.com/appgambit/aws-lambda-launch-ec2-instances-40d32d93fb58 - doing the web UI stuff
https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html - setting the env vars programmatically
https://www.softkraft.co/aws-lambda-in-golang/ - fantastic in-depth guide for using Go with Lambda
CORS with lambda and API Gateway
Want to do AJAX stuff with your lambda function(s)? Cool, you're in the right place.
- Open your gateway
- Click Actions -> Enable CORS
- Check the boxes for POST, GET, and OPTIONS
- Input the following for Access-Control-Allow-Headers:
'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'
- Input the following for Access-Control-Allow-Origin:
'*'
- Click Enable CORS and replace existing CORS headers
For Options Method
Open the Method Response and click the arrow next to 200. Add the following headers: Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Origin.
For GET Method
Be sure to add the appropriate headers to your APIGatewayProxyResponse:
Headers: map[string]string{
	"Access-Control-Allow-Origin":      "*",
	"Access-Control-Allow-Credentials": "true",
},
Next, open the Method Response and click the arrow next to 200. Add the Access-Control-Allow-Origin header here as well.
For POST Method
Open the Method Response and click the arrow next to 200. Add the Access-Control-Allow-Origin header.
Finishing touches
Finally, be sure to click Actions and Deploy API when you're done.
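To confirm CORS is actually working, you can send a preflight request to the deployed endpoint and check for the Access-Control-Allow-* headers in the response; the URL here is a placeholder:
curl -i -X OPTIONS https://<api-id>.execute-api.us-west-2.amazonaws.com/<stage>/deployer \
  -H "Origin: https://example.com" \
  -H "Access-Control-Request-Method: GET"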
Resource: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors-console.html
Return Response for API Gateway
You have two options here:
return events.APIGatewayProxyResponse{
	StatusCode: http.StatusBadGateway,
	Headers: map[string]string{
		"Access-Control-Allow-Origin":      "*",
		"Access-Control-Allow-Credentials": "true",
	},
	Body: string("Method not Allowed"),
}, nil
or alternatively:
resp := events.APIGatewayProxyResponse{Headers: make(map[string]string)}
resp.Headers["Access-Control-Allow-Origin"] = "*"
resp.Headers["Access-Control-Allow-Credentials"] = "true"
resp.StatusCode = http.StatusOK
resp.Body = string(publicInstanceIp)
return resp, nil
Resources:
https://github.com/serverless/examples/blob/master/aws-golang-simple-http-endpoint/hello/main.go - used to figure out the first option
Update function via CLI
This is useful to run after updating your code. This will grab main.zip in the current directory:
env GOOS=linux GOARCH=amd64 go build -o main
zip -j main.zip main
aws lambda update-function-code --function-name <lambda function name> --zip-file fileb:///${PWD}/main.zip
Use serverless framework
This framework makes it easier to develop and deploy serverless resources, such as AWS Lambda Functions.
To start we'll need to install the Serverless Framework:
npm install -g serverless
Then we will need to create the project with a boilerplate template. A couple of examples:
# Nodejs
serverless create --template aws-nodejs --path myservice
# Golang
cd $GOPATH/src && serverless create -t aws-go-dep -p myservice
From here, you need to populate the serverless.yml template. This will use the lambda code from above that deploys ec2 instances:
service: lambdainstancedeployer
frameworkVersion: '2'

provider:
  name: aws
  runtime: go1.x
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-west-2'}
  environment:
    DYNAMO_TABLE: ${self:service}-${opt:stage, self:provider.stage}
  memorySize: 3008
  timeout: 30 # API Gateway max timeout
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMO_TABLE}"
    - Effect: Allow
      Action:
        - ec2:RunInstances
        - ec2:DescribeInstances
        - ec2:DescribeInstanceStatus
        - ec2:TerminateInstances
        - ec2:StopInstances
        - ec2:StartInstances
        - ec2:CreateTags
        - ec2:DeleteTags
      Resource: "*"

package:
  exclude:
    - ./**
  include:
    - ./bin/**

functions:
  lambdaMop:
    handler: bin/lambdaMop
    events:
      - http:
          path: /deployer
          method: post
          cors: true
      - http:
          path: /deployer
          method: get
          cors: true
    environment:
      AMI: ami-xxxxxx
      INSTANCE_TYPE: t2.small
      REGION: us-west-2

resources:
  Resources:
    InstanceDeployerDynamoDbTable:
      Type: 'AWS::DynamoDB::Table'
      # Uncomment if you want to ensure the table isn't deleted
      # DeletionPolicy: Retain
      DeletionPolicy: Delete
      Properties:
        AttributeDefinitions:
          - AttributeName: email
            AttributeType: S
        KeySchema:
          - AttributeName: email
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
        TableName: ${self:provider.environment.DYNAMO_TABLE}
Deploying this will also create the API Gateway, IAM role, and DynamoDB table.
Modify the Makefile if you'd like. The one I like to use can be found right below.
Next, build and deploy the function:
cd myservice && make build
Resources:
https://www.serverless.com/blog/framework-example-golang-lambda-support - Lambda + Golang + Serverless walkthrough
https://marcelog.github.io/articles/aws_lambda_start_stop_ec2_instance.html - useful information for IAM actions needed for ec2 operations
https://forum.serverless.com/t/missing-required-key-tablename-in-params-error/4492/5 - how to set the dynamodb iam permissions
https://forum.serverless.com/t/deleting-table-from-dynamodb/3837 - how to delete a database or retain it
More useful Makefile
Move your functions into a functions folder in the repo for the serverless work.
Next, change the Makefile to the following:
functions := $(shell find functions -name \*main.go | awk -F'/' '{print $$2}')

build: ## Build golang binary
	@for function in $(functions) ; do \
		cd functions/$$function ; \
		env GOOS=linux go build -ldflags="-s -w" -o ../../bin/$$function ; \
		cd ../.. ; \
	done
	serverless deploy

destroy:
	serverless remove
This will output all function binaries into the bin/ directory at the top level of your project.
Resources:
https://github.com/serverless/examples/blob/master/aws-golang-auth-examples/Makefile - super useful Makefile example
Decode Error Message from CloudWatch Logs
msg="themessage"
aws sts decode-authorization-message --encoded-message $msg --query DecodedMessage --output text | jq '.'
Resource: https://aws.amazon.com/premiumsupport/knowledge-center/aws-backup-encoded-authorization-failure/