This contains various commands and information that I find useful for AWS work.

Install latest version of AWS CLI on linux

curl "" -o ""
sudo ./aws/install


Backup instance via UI

  1. Go to your instance
  2. Right click and select Image from the dropdown
  3. Click Create Image
  4. Give your backup a name and description
  5. Click No reboot if you want your instance to stay in a running state
  6. Click Create Image
  7. At this point you should be able to find the AMI that is associated with your backup under AMIs. Give the AMI a more descriptive name if you’d like.


Backup instance via CLI

aws ec2 create-image --instance-id ${INSTANCE_ID} --name "backup_of_${INSTANCE_ID}" --description "an AMI"

You can also add the --no-reboot parameter to stop the instance from being restarted, although this may not be a good idea if your instance runs a lot of write-heavy actions.


Parameter Store UI Location

  1. Login
  2. Search for Systems Manager
  3. Click on Parameter Store in the menu on the left-hand side
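Once you know where it lives in the UI, the same lookup works from the CLI. A sketch; the parameter name is a made-up example:

```shell
# Hypothetical parameter name; substitute one from your Parameter Store.
PARAM_NAME=/myapp/prod/db_password
aws ssm get-parameter --name "$PARAM_NAME" --with-decryption --query Parameter.Value --output text
```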


Use env vars

Run the following with the proper values:
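A sketch with the standard variables the CLI reads; the key values below are the well-known placeholder examples from the AWS docs, not real credentials:

```shell
# Placeholder values; substitute your own credentials and region.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
```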


Set up named profile

You can run aws configure if you want a guided setup. Alternatively, you can add the following to ~/.aws/credentials:
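A sketch of what that credentials file entry looks like; the profile name myenv matches the config example in this section, and the key values are placeholders:

```ini
[myenv]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
```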


If you don’t opt for the guided setup, don’t forget to set the region in ~/.aws/config:

[profile myenv]
region = us-west-2
output = json


Populate env vars with credentials file

export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)


Populate config file with env vars

aws configure set region $AWS_REGION \
--profile default

Multiple profiles

Your credentials file will probably look something like this:

[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>

[notdefault]
aws_access_key_id = <YOUR_OTHER_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_OTHER_SECRET_ACCESS_KEY>

To use the notdefault profile, run the following command:

export AWS_PROFILE=notdefault

Use temp credentials

Add the following to your credentials file:

[temp]
aws_access_key_id = <YOUR_TEMP_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_TEMP_SECRET_ACCESS_KEY>
aws_session_token = <YOUR_SESSION_TOKEN>

Then run this command:

export AWS_PROFILE=temp


Use env vars for temp credentials

Run the following with the proper values:

export AWS_ACCESS_KEY_ID=<YOUR_TEMP_ACCESS_KEY_ID>
export AWS_SECRET_ACCESS_KEY=<YOUR_TEMP_SECRET_ACCESS_KEY>
export AWS_SESSION_TOKEN=AQoDYXdzEJr...<remainder of security token>


Show configuration

aws configure list

List instances

aws ec2 describe-instances

Get number of instances

aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId]' --output text | wc -l


Get running instances

aws ec2 describe-instances --filters Name=instance-state-name,Values=running

Get Name and public IP of running instances

aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIP:PublicIpAddress,Name:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running


Reboot all instances in a region

aws ec2 reboot-instances --instance-ids $(aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId" | jq '.[]|.[0]' -r)

Assign an elastic IP to an instance

aws ec2 associate-address --allocation-id eipalloc-<eip id> --instance-id <the instance id>

Create instance with a tag

aws ec2 run-instances --image-id ami-xxxxxxx --count 1 --instance-type t2.medium --key-name MyKeyPair --security-group-ids sg-xxxxxx --subnet-id subnet-xxxxxx --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-test-instance}]'


Create instance using security group name

aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups MySecurityGroup

Get Active Regions


get_active_regions() {
  ACTIVE_REGIONS=($(aws ec2 describe-regions --all-regions | jq -r '.Regions | .[] | .RegionName + " " + .OptInStatus' | grep -v not-opted-in | cut -d' ' -f1))
}

get_active_regions
for region in "${ACTIVE_REGIONS[@]}"; do
  echo "${region}"
done


Get security group id from group name

aws ec2 describe-security-groups --filters Name=group-name,Values=$sg_name --query "SecurityGroups[*].[GroupId]" --output text


Get ingress TCP ports from a group

ports=($(aws ec2 describe-security-groups --group-ids ${sg} --query 'SecurityGroups[*].IpPermissions[]' | jq '.[] | select(.IpProtocol=="tcp").ToPort'))

for port in "${ports[@]}"; do
    echo "${port}"
done


Add new ingress rule to security group

Using the value from the example above, we can get the id of a security group and update it to do things such as allowing a codebuild instance temporary SSH access to an instance:

aws ec2 authorize-security-group-ingress \
    --group-id $sg_id \
    --protocol tcp \
    --port 22 \
    --cidr "$(curl -s https://checkip.amazonaws.com)/32"


List instances with filtering

This example in particular will get you all of your t2.micro instances.

aws ec2 describe-instances --filters "Name=instance-type,Values=t2.micro"

List instance by instance id

aws ec2 describe-instances --instance-ids i-xxxxx

Destroy instance

aws ec2 terminate-instances --instance-ids <instance id(s)>

If you want to terminate multiple instances, be sure to use this format:

id1 id2 id3
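That is, the ids are space-separated, not comma-separated. A quick sketch with made-up ids, building the list in a bash array (the command is echoed so nothing is actually terminated; drop the echo to run it):

```shell
# Hypothetical instance ids; --instance-ids takes them space-separated.
ids=(i-0aaa1111 i-0bbb2222 i-0ccc3333)
echo aws ec2 terminate-instances --instance-ids "${ids[@]}"
```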

Get info about a specific AMI by name and output to JSON

aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --output json

Get AMI id from AMI name

AMI_ID=$(aws ec2 describe-images --filters "Name=name,Values=THEAMINAME" --query 'sort_by(Images, &CreationDate)[-1].[ImageId]' --output text)
echo $AMI_ID


Find latest ubuntu 20.04 AMI

# Get the AMI id for Ubuntu 20.04 AMI
ubuntu_ami_id=$(aws ec2 describe-images --filters "Name=name,Values=ubuntu*20.04-arm64-server*" --query "sort_by(Images, &CreationDate)[-1:].[Name, ImageId]" --output text | awk '{print $2}')

Get AMI id with some python

This uses the run_cmd() function found in /python-notes/.

import json

def get_ami_id(ec2_output):
    # Parse the describe-images JSON and return the newest ImageId.
    images = json.loads(ec2_output)['Images']
    return sorted(images, key=lambda i: i['CreationDate'])[-1]['ImageId']

ec2_output = run_cmd('aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --output json')
ami_id = get_ami_id(ec2_output)

Deregister an AMI

aws ec2 deregister-image --image-id <ami id>

Get list of all instances with the state terminated

aws ec2 describe-instances --filters "Name=instance-state-name,Values=terminated"

List all instances that match a tag name and are running

aws ec2 describe-instances --filters "Name=tag:Name,Values=*somename*" "Name=instance-state-name,Values=running" | jq


For the terminated-instances query above, change Values=terminated to Values=running if you want running instances instead.

Get info about an AMI by product-code

aws --region <region> ec2 describe-images --owners aws-marketplace --filters Name=product-code,Values=<product code>

This is useful if you have the product code and want more information (like the image ID). For CentOS, the product codes are published on the CentOS wiki. I started down this path when I was messing around with code for automatically creating encrypted AMIs.

Resize ec2 instance
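A sketch of the resize flow, assuming $INSTANCE_ID is already set and with a hypothetical target type; the instance type can only be changed while the instance is stopped:

```shell
# Hypothetical target type; resizing only works on a stopped instance.
NEW_TYPE=t3.large
aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" --instance-type "{\"Value\": \"$NEW_TYPE\"}"
aws ec2 start-instances --instance-ids "$INSTANCE_ID"
```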

Show available subnets

aws ec2 describe-subnets

Attach volume at root

aws ec2 attach-volume --volume-id vol-xxxx --instance-id i-xxxx --device /dev/sda1


List snapshots

aws ec2 describe-snapshots --output json --query 'Snapshots[*].SnapshotId' --max-items 10 | head

Add custom tag for cloud-init finishing

aws ec2 create-tags --resources `ec2metadata --instance-id` --tags Key=BootstrapStatus,Value=complete



Pretty decent, relatively up-to-date tutorial on using CodeBuild and CodeCommit to autobuild AMIs.

If you want to use the encrypted-AMI gist mentioned above, be sure to specify aws_region, aws_vpc, aws_subnet, and ssh_username in the variables section.

Create the proper IAM role

  1. Login to the UI
  2. Click on IAM
  3. Click Roles
  4. Click Create role
  5. Click EC2, then click Next: Permissions
  6. Search for CodeCommit, check the box next to AWSCodeCommitReadOnly
  7. Click Next: Tags
  8. Give it some tags if you’d like, click Next: Review
  9. Specify a Role name, like CodeCommit-Read
  10. Click Create role

Now we’re cooking. Let’s test it out by building an instance; don’t forget to assign it the CodeCommit-Read IAM role. You can figure this part out.

Cloning into a repo

Once you’ve got a working instance:

  1. SSH into it
  2. Escalate privileges:
sudo su
  3. Install the awscli with pip:
pip install awscli
  4. Set up the CodeCommit credential helper:
git config --system credential.helper '!aws codecommit credential-helper $@'
git config --system credential.UseHttpPath true
  5. Run this command and be sure to change the region to match the one you’re working with:
aws configure set region us-west-2
  6. Clone your repo:
git clone <your repo's HTTPS clone URL>


Using this role with CodeBuild

To get this to work with CodeBuild for automated and repeatable builds, I needed to do a few other things. Primarily, take advantage of the Parameter Store. When I was trying to build initially, my buildspec.yml looked something like this (basically emulating the AWS sample):

version: 0.2

phases:
  build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip <packer release zip URL> && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq <jq binary URL> && chmod +x ./jq
      - echo "Validating kali.json"
      - ./packer validate kali.json
      ## HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ## Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI > aws_credentials.json
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, kali.json"
      - ./packer build kali.json
      - echo "HashiCorp Packer build completed on `date`"

However, I was getting this obscure error message about authentication, and spent several hours messing around with IAM roles, but didn’t have any luck. At some point, I eventually decided to try throwing a “parameter” in for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. This worked great, but I noticed that whenever I tried the build again, I would run into the same issue as before. To fix it, I had to modify the buildspec.yml to look like this (obviously the values you have for your parameter store may vary depending on what you set for them):

version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID: <your access key id parameter name>
    AWS_SECRET_ACCESS_KEY: <your secret access key parameter name>

phases:
  build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip <packer release zip URL> && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq <jq binary URL> && chmod +x ./jq
      - echo "Validating kali.json"
      - ./packer validate kali.json
      ## HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ## Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI > aws_credentials.json
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, kali.json"
      - ./packer build kali.json
      - echo "HashiCorp Packer build completed on `date`"

At this point, everything is working consistently with the IAM role mentioned previously being specified in the packer file (this is a snippet):

"variables": {
    "iam_role": "CodeCommit-Read"

  "builders": [{
    "iam_instance_profile": "{{user `iam_role` }}",

Validate buildspec

python3 -c 'import yaml, sys; yaml.safe_load(sys.stdin)' < buildspec.yml


Debug Codebuild

You can get a shell to your codebuild system, which is incredibly helpful when it comes to debugging build problems.

  1. Add the AmazonSSMFullAccess policy to your codebuild service role
  2. Add a breakpoint to buildspec.yml by adding the command - codebuild-breakpoint at the point where you want the build to pause
  3. Click Start build with overrides -> Advanced build overrides
  4. Under environment, click the checkbox next to Enable session connection
  5. Click Start build
  6. Click the AWS Session Manager link that appears under build status to access the system

Once you’re done debugging, type in codebuild-resume



Create bucket

# if you need a random name:
BUCKET_NAME=$(head /dev/urandom | tr -dc a-z0-9 | head -c 25 ; echo '')
aws s3 mb s3://$BUCKET_NAME


List buckets

aws s3 ls

List files in a bucket

aws s3 ls s3://target/

Download bucket

aws s3 sync s3://mybucket .


Copy file down

aws s3 cp s3://target/file.html file.html

Copy file up

aws s3 cp TEST s3://target


Copy folder up

aws s3 cp foldertocopy s3://bucket/foldertocopy --recursive


Copy folder down

aws s3 cp s3://bucket/foldertocopy ./foldertocopy --recursive

Copy all files in a bucket down

aws s3 cp s3://bucket/foldertocopy ./ --recursive

Read buckets into an array

buckets=($(aws s3 ls |grep tf | awk '{print $3}' | tr " " "\n"))
# Print first element
echo ${buckets[0]}

Iterate over buckets

for b in "${buckets[@]}"; do echo "Bucket: $b"; done


Empty bucket

Delete objects in the bucket:

aws s3api delete-objects --bucket ${bucket} --delete "$(aws s3api list-object-versions --bucket ${bucket} --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"

Delete markers in the bucket:

aws s3api delete-objects --bucket ${bucket} --delete "$(aws s3api list-object-versions --bucket ${bucket} --query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')"


Delete bucket

aws s3 rb s3://bucketname --force


Copy multiple folders to bucket

aws s3 cp /path/to/dir/with/folders/to/copy s3://bucket/ --recursive --exclude ".git/*"


Set up S3 IAM for backup/restore

Storing aws credentials on an instance to access an S3 bucket can be a bad idea. Let’s talk about what we need to do in order to backup/restore stuff from an S3 bucket safely:

Create Policy

  1. Go to IAM
  2. Policies
  3. Create Policy
  4. Policy Generator, or copy and paste JSON (YOLO) into Create Your Own Policy. This is the one I used:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::<bucket name>"]
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": ["arn:aws:s3:::<bucket name>/*"]
        }
    ]
}

Create a Role

  1. Go to Roles in IAM
  2. Click Create role
  3. Select EC2
  4. Select EC2 again and click Next: Permissions
  5. Find the policy you created previously
  6. Click Next: Review
  7. Give the Role a name and a description, click Create role

Assign the role to your instance

This will be the instance that houses the service that needs to back up to and restore from your S3 bucket.

  1. In EC2, if the instance is already created, right click it, Instance Settings, Attach/Replace IAM Role
  2. Specify the IAM role you created previously, click Apply.

Set up automated expiration of objects

This will ensure that backups don’t stick around longer than they need to. You can also set up rules to transfer them to long term storage during this process, but we’re not going to cover that here. From the bucket overview screen:

  1. Click Management
  2. Click Add lifecycle rule
  3. Specify a name, click Next
  4. Click Next
  5. Check Current version and Previous versions
  6. Specify a desired number of days to expiration for both the current version and the previous versions, click Next
  7. Click Save
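The same lifecycle rule can be applied from the CLI. A sketch, with a placeholder rule name and day counts:

```shell
# Placeholder rule: expire current and previous versions after 30 days.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Expiration": {"Days": 30},
      "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket "$bucket" --lifecycle-configuration file://lifecycle.json
```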

Mount bucket as local directory

Warning, this is painfully slow once you have it set up.

Follow the s3fs-fuse setup instructions to install s3fs and create your ${HOME}/.passwd-s3fs credentials file.

Then, run this script:

folder="${HOME}/s3-bucket" # pick any mount point

if [ ! -d $folder ]; then
    mkdir $folder
fi

s3fs bucket_name $folder -o passwd_file=${HOME}/.passwd-s3fs -o volname="S3-Bucket"

Create IAM role to grant read access to an s3 bucket

  1. If accessing from an ec2 instance, find your ec2 instance in the web UI, right click it -> Security -> Modify IAM Role. Otherwise, just open the IAM console
  2. Click Roles -> Create role
  3. Click EC2
  4. Click Next: Permissions
  5. Click Create policy
  6. Click JSON
  7. Copy the json from here:
{
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::awsexamplebucket"},
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::awsexamplebucket/*"}
    ]
}
  8. Change awsexamplebucket to the name of your bucket and click Review policy
  9. Specify a Name for the policy and click Create policy

Get id of KMS key associated with a bucket

aws s3api get-bucket-encryption --bucket $(aws s3 ls | grep -i bucketname | awk '{print $3}') | jq '.ServerSideEncryptionConfiguration.Rules[0].ApplyServerSideEncryptionByDefault.KMSMasterKeyID' | awk -F '/' '{print $2}' | tr -d '"'



Create session

from boto3.session import Session

def create_session():
  session = Session(aws_access_key_id=access_key,aws_secret_access_key=secret_key,aws_session_token=session_token)
  return session


List buckets with boto3

def get_s3_buckets(session):
  s3 = session.resource('s3')
  print("Bucket List:")
  for bucket in s3.buckets.all():
    print(bucket.name)


Show items in an s3 bucket

def list_s3_bucket_items(session, bucket):
  s3 = session.resource('s3')
  my_bucket = s3.Bucket(bucket)

  for file in my_bucket.objects.all():
    print(file.key)

List Users

def get_users(session):
  client = boto3.client('iam', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token)
  users = client.list_users()
  for key in users['Users']:
    print(key['UserName'])


Get account id

def sts(session):
  sts_client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token)
  print(sts_client.get_caller_identity()['Account'])

Create ec2 instance with name

EC2_RESOURCE = boto3.resource('ec2')

def create_ec2_instance():
    # ImageId, InstanceType, MinCount, and MaxCount are required; values are placeholders.
    instance = EC2_RESOURCE.create_instances(
        ImageId = "ami-ID_GOES_HERE",
        InstanceType = "t2.micro",
        MinCount = 1,
        MaxCount = 1,
        SecurityGroupIds = ["sg-ID_GOES_HERE"],
        TagSpecifications = [{
            'ResourceType': 'instance',
            'Tags': [{
                'Key': 'Name',
                'Value': 'INSTANCE_NAME_HERE'
            }]
        }]
    )
    return instance[0]


Allocate and associate an elastic IP

import boto3
from botocore.exceptions import ClientError

# Wait for instance to finish launching before assigning the elastic IP address
print('Waiting for instance to get to a running state, please wait...')

EC2_CLIENT = boto3.client('ec2')

try:
    # Allocate an elastic IP
    eip = EC2_CLIENT.allocate_address(Domain='vpc')
    # Associate the elastic IP address with an instance launched previously
    response = EC2_CLIENT.associate_address(
        AllocationId=eip['AllocationId'],
        InstanceId='i-INSTANCE_ID_HERE')
except ClientError as e:
    print(e)

Allocate existing elastic IP

response = EC2_CLIENT.associate_address(
    AllocationId='eipalloc-EXISTING_ALLOCATION_ID',
    InstanceId='i-INSTANCE_ID_HERE')

Wait for instance to finish starting

import socket
import time

retries = 10
retry_delay = 10
retry_count = 0
while retry_count <= retries:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    result = sock.connect_ex((instance[0].public_ip_address, 22))
    if result == 0:
        print(f"The instance is up and accessible on port 22 at {instance[0].public_ip_address}")
        break
    else:
        print("Instance is still coming up, retrying . . . ")
        time.sleep(retry_delay)
        retry_count += 1



Query v2

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# Query the service
curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/


Get Credentials

curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE_NAME}



Get region

curl --silent http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region


Get role-name

curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/


The role name will be listed here.


Get Account ID

curl --silent http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .accountId


Get public hostname

curl -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/public-hostname


Get account id

aws sts get-caller-identity | jq '.Account'



Stand up EC2 Instance

This accounts for the exceptionally annoying message An error occurred (VPCIdNotSpecified) when calling the RunInstances operation: No default VPC for this user, which does not have any solutions in sight unless you do some deep diving. Essentially it means that a default VPC isn’t defined, so you need to provide a subnet id:

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {

    // Get credentials from ~/.aws/credentials
    sess, err := session.NewSession(&aws.Config{
        Region: aws.String("us-west-2")},
    )
    if err != nil {
        log.Fatal(err)
    }

    // Create EC2 service client
    svc := ec2.New(sess)

    // Specify the details of the instance that you want to create.
    runResult, err := svc.RunInstances(&ec2.RunInstancesInput{
        ImageId:          aws.String("ami-id-here"),
        InstanceType:     aws.String("t2.small"),
        MinCount:         aws.Int64(1),
        MaxCount:         aws.Int64(1),
        SecurityGroupIds: aws.StringSlice([]string{"sg-id-here"}),
        KeyName:          aws.String("keypairname-here"),
        SubnetId:         aws.String("subnet-id-here"),
    })

    if err != nil {
        fmt.Println("Could not create instance", err)
        return
    }

    fmt.Println("Created instance", *runResult.Instances[0].InstanceId)

    // Add tags to the created instance
    _, errtag := svc.CreateTags(&ec2.CreateTagsInput{
        Resources: []*string{runResult.Instances[0].InstanceId},
        Tags: []*ec2.Tag{
            {
                Key:   aws.String("Name"),
                Value: aws.String("GoInstance"),
            },
        },
    })
    if errtag != nil {
        log.Println("Could not create tags for instance", runResult.Instances[0].InstanceId, errtag)
        return
    }

    fmt.Println("Successfully tagged instance")
}


Stand up EC2 Instance with lambda

Modify the previous code to make it into a lambda function.

Next you’ll need to get the binary for the function:

env GOOS=linux GOARCH=amd64 go build -o /tmp/main

With that, you’ll need to zip it up:

zip -j /tmp/main.zip /tmp/main

At this point, you need to create the iam role:

  1. Navigate to the IAM console
  2. Click Create role
  3. Click Lambda
  4. Click Next: Permissions
  5. Add the policies your function needs (for this example, EC2 access)
  6. Click Next: Tags
  7. Give it a Name tag and click Next: Review
  8. Give it a Role name such as “LambdaCreateEc2Instance”
  9. Click Create role
  10. Once it’s completed, click the role and copy the Role ARN

Now, you’ll need to run the following command to create the lambda function:

aws lambda create-function --function-name createEc2Instance --runtime go1.x \
  --zip-file fileb:///tmp/main.zip --handler main \
  --role <Role ARN copied previously>

Lastly, you’ll need to populate all of the environment variables. To do this, you can use this script:

aws lambda update-function-configuration --function-name createEc2Instance \
    --environment "Variables={AMI=ami-id-here, INSTANCE_TYPE=t2.small, SECURITY_GROUP=sg-id-here, KEYNAME=keypairname-here, SUBNET_ID=subnet-id-here}"

Alternatively, you can set the values in the lambda UI by clicking Manage environment variables, but this gets very tedious very quickly.

If you want to throw all of this into a Makefile to streamline testing, you could do something like this:

build:
	env GOOS=linux GOARCH=amd64 go build -o /tmp/main

deploy:
	zip -j /tmp/main.zip /tmp/main
	bash scripts/create_function.sh # placeholder name: script wrapping the create-function command above
	bash scripts/set_env_vars.sh # placeholder name: script wrapping update-function-configuration

run:
	aws lambda invoke --function-name createEc2Instance /tmp/output.json

Run the whole thing with this command:

make build && make deploy && make run

At this point, you can go ahead and invoke the lambda function to see if everything is working as expected:

aws lambda invoke --function-name createEc2Instance /tmp/output.json

This can of course be determined by looking at your running EC2 instances and seeing if there’s a new one that’s spinning up from your invoking the lambda function.


CORS with lambda and API Gateway

Want to do AJAX stuff with your lambda function(s)? Cool, you’re in the right place.

  1. Open your gateway
  2. Click Actions -> Enable CORS
  3. Check the boxes for POST, GET, and OPTIONS
  4. Input the following for Access-Control-Allow-Headers: 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'
  5. Input the following for Access-Control-Allow-Origin: '*'
  6. Click Enable CORS and replace existing CORS headers

For Options Method

Open the Method Response and click the arrow next to 200. Add the following headers: Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Origin.


For GET Method

Be sure to add the appropriate headers to your APIGatewayProxyResponse:

Headers: map[string]string{
    "Access-Control-Allow-Origin":      "*",
    "Access-Control-Allow-Credentials": "true",
},

Next, open the Method Response and click the arrow next to 200. Add the Access-Control-Allow-Origin and Access-Control-Allow-Credentials headers.


For POST Method

Open the Method Response and click the arrow next to 200. Add the Access-Control-Allow-Origin header.

Finishing touches

Finally, be sure to click Actions and Deploy API when you’re done.


Return Response for API Gateway

You have two options here:

return events.APIGatewayProxyResponse{
    StatusCode: http.StatusBadGateway,
    Headers: map[string]string{
        "Access-Control-Allow-Origin":      "*",
        "Access-Control-Allow-Credentials": "true",
    },
    Body: string("Method not Allowed"),
}, nil

or alternatively:

resp := events.APIGatewayProxyResponse{Headers: make(map[string]string)}
resp.Headers["Access-Control-Allow-Origin"] = "*"
resp.Headers["Access-Control-Allow-Credentials"] = "true"
resp.StatusCode = http.StatusOK
resp.Body = string(publicInstanceIp)
return resp, nil


Update function via CLI

This is useful to run after updating your code. It will build the binary in the current directory, zip it up, and push the update:

env GOOS=linux GOARCH=amd64 go build -o main
zip -j main.zip main
aws lambda update-function-code --function-name <lambda function name> --zip-file fileb://${PWD}/main.zip


Use serverless framework

This framework makes it easier to develop and deploy serverless resources, such as AWS Lambda Functions.

To start we’ll need to install the Serverless Framework:

npm install -g serverless

Then we will need to create the project with a boilerplate template. A couple of examples:

# Nodejs
serverless create --template aws-nodejs --path myservice
# Golang
cd $GOPATH/src && serverless create -t aws-go-dep -p myservice

From here, you need to populate the serverless.yml template. This will use the lambda code from above that deploys ec2 instances:

service: lambdainstancedeployer

frameworkVersion: '2'

provider:
  name: aws
  runtime: go1.x
  stage: ${opt:stage, 'dev'}
  region: ${opt:region, 'us-west-2'}
  environment:
    DYNAMO_TABLE: ${self:service}-${opt:stage, self:provider.stage}
  memorySize: 3008
  timeout: 30 # API Gateway max timeout
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: "arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.DYNAMO_TABLE}"
    - Effect: Allow
      Action:
        - ec2:RunInstances
        - ec2:DescribeInstances
        - ec2:DescribeInstanceStatus
        - ec2:TerminateInstances
        - ec2:StopInstances
        - ec2:StartInstances
        - ec2:CreateTags
        - ec2:DeleteTags
      Resource: "*"

package:
  exclude:
    - ./**
  include:
    - ./bin/**

functions:
  deployer: # function name is a placeholder
    handler: bin/myLambdaService
    events:
      - http:
          path: /deployer
          method: post
          cors: true
      - http:
          path: /deployer
          method: get
          cors: true
    environment:
      AMI: ami-xxxxxx
      INSTANCE_TYPE: t2.small
      REGION: us-west-2

resources:
  Resources:
    DeployerTable: # resource name is a placeholder
      Type: 'AWS::DynamoDB::Table'
      # Uncomment if you want to ensure the table isn't deleted
      # DeletionPolicy: Retain
      DeletionPolicy: Delete
      Properties:
        AttributeDefinitions:
          - AttributeName: email
            AttributeType: S
        KeySchema:
          - AttributeName: email
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
        TableName: ${self:provider.environment.DYNAMO_TABLE}

It will also create the API Gateway, IAM role, and DynamoDB table.

Next, compile the function and build it:

cd myservice && make build


More useful Makefile

Move your functions into a functions folder in the repo for the serverless work.

Next, change the Makefile to the following:

functions := $(shell find functions -name \*main.go | awk -F'/' '{print $$2}')

build: # Build golang binaries
	@for function in $(functions) ; do \
		cd functions/$$function ; \
		env GOOS=linux go build -ldflags="-s -w" -o ../../bin/$$function ; \
		cd ../.. ; \
	done

deploy: # Deploy with the serverless framework
	serverless deploy

destroy: # Tear everything down
	serverless remove

This will output all function binaries into the bin/ directory at the top level of your project.


Decode Error Message from CloudWatch Logs

aws sts decode-authorization-message --encoded-message $msg --query DecodedMessage --output text | jq '.'


Secrets Manager

Create IAM role to grant read access to a secret

  1. If accessing from an ec2 instance, find your ec2 instance in the web UI, right click it -> Security -> Modify IAM Role. Otherwise, just open the IAM console
  2. Click Roles -> Create role
  3. Click EC2
  4. Click Next: Permissions
  5. Click Create policy
  6. Click JSON
  7. Copy the json from here:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:GetSecretValue"
            ],
            "Resource": "<your secret ARN>"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "secretsmanager:ListSecrets",
            "Resource": "*"
        }
    ]
}
  8. Change <your secret ARN> to the proper value of your secret, which you can find in the Secrets Manager UI, and click Review policy
  9. Specify a Name for the policy and click Create policy


Get secret from secrets manager and output to file

aws secretsmanager get-secret-value --secret-id <your secrets id> --query SecretString --output text | tee <file to output secret to> 


Get several secrets

users=(user1 user2 user3)
for user in "${users[@]}"; do
  sec=$(aws secretsmanager get-secret-value --secret-id $environment-$user \
    --query SecretString \
    --output text)
  echo "Secret for $environment-$user is $sec"
done

Create new secret from a file

aws secretsmanager create-secret --name MyTestDatabaseSecret \
    --description "My test database secret created with the CLI" \
    --secret-string file://mycreds.json \
    --output text


Add access key and secret access key as secrets

aws secretsmanager create-secret --name "prod/someuser_aws_access_key_id" \
    --description "someuser prod aws_access_key_id" \
    --secret-string "$(sed '2q;d' ~/.aws/credentials | awk '{print $3}')" \
    --output text
aws secretsmanager create-secret --name "prod/someuser_aws_secret_access_key" \
    --description "someuser prod aws_secret_access_key" \
    --secret-string "$(sed '3q;d' ~/.aws/credentials | awk '{print $3}')" \
    --output text

List secrets

aws secretsmanager list-secrets --output text

Update secret from a file

aws secretsmanager update-secret --secret-id <name of secret or ARN> \
--description "<what the secret is>" \
--secret-string "file://somesecret" \
--output text

Delete secret without waiting period

aws secretsmanager delete-secret --secret-id <name of secret or ARN> --force-delete-without-recovery


One liner for ssh secret

If you have an SSH key in Secrets Manager, you can run the following to grab it and put it into a file on your local system:

aws secretsmanager get-secret-value --secret-id ssh_key | jq '.SecretString' | sed 's/\\n/\n/g' | sed 's/"//g' | tee ~/.ssh/ssh_key && chmod 400 ~/.ssh/ssh_key

Resource: - clean up the JSON so the file works
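To see what the `sed` cleanup in the one-liner above actually does, here is the same transformation run on a fake secret string (without `-r`, `jq` leaves the value quoted and with literal `\n` escapes):

```shell
# Fake jq output: quoted, with literal backslash-n escapes
raw='"-----BEGIN KEY-----\nabc123\n-----END KEY-----"'

# Turn the \n escapes into real newlines, then strip the quotes
cleaned=$(printf '%s\n' "$raw" | sed 's/\\n/\n/g' | sed 's/"//g')
printf '%s\n' "$cleaned"
```

Using `jq -r '.SecretString'` would remove the quotes for you, but the `\n` escapes still need the first `sed`.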


Find when an ec2 instance was terminated

This will require you to have the instance id of the terminated instance and a rough sense of the day that it was terminated.

  1. Open the CloudTrail service
  2. Click Event history
  3. Select Event name from the dropdown
  4. Input TerminateInstances
  5. Search for the terminated instance id under the Resource name column



Create user

aws iam create-user --user-name "${USERNAME}" --output json

Delete user

aws iam delete-user --user-name "${USERNAME}" --output json

Create access keys for a user

aws iam create-access-key --user-name "${USERNAME}" --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text


Get credentials as vars

credentials=$(aws iam create-access-key --user-name "${USERNAME}" --query 'AccessKey.[AccessKeyId,SecretAccessKey]' --output text)
access_key_id=$(echo ${credentials} | cut -d " " -f 1)
secret_access_key=$(echo ${credentials} | cut --complement -d " " -f 1)

echo "The access key ID of ${USERNAME} is ${access_key_id}"
echo "The secret access key of ${USERNAME} is ${secret_access_key}"
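The splitting above is just `cut` on a two-field, space-separated string. A self-contained sketch with fake values (no AWS call needed):

```shell
# Fake "AccessKeyId SecretAccessKey" pair, as --output text would return it
credentials="AKIAFAKEID fakeSecretKey42"

# Field 1 is the access key ID
access_key_id=$(echo "${credentials}" | cut -d " " -f 1)
# --complement (GNU cut) keeps everything EXCEPT the listed fields
secret_access_key=$(echo "${credentials}" | cut --complement -d " " -f 1)

echo "${access_key_id}"     # AKIAFAKEID
echo "${secret_access_key}" # fakeSecretKey42
```

`--complement` is GNU-specific; on macOS/BSD `cut`, use `cut -d " " -f 2` instead.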


List users

aws iam list-users
usernames=($(aws iam list-users --output text | cut -f 7))

for user in "${usernames[@]}"; do
  echo "$user"
done


List policies

aws iam list-policies

List managed policies attached to a role

aws iam list-attached-role-policies --role-name $ROLE_NAME


List inline policies embedded in a role

aws iam list-role-policies --role-name $ROLE_NAME


Delete policy

aws iam delete-policy --policy-arn $ARN

Delete policies with word terraform in them

aws iam list-policies | grep terraform | grep arn | awk '{print $2}' | tr -d '"' | tr -d ',' | xargs -I{} aws iam delete-policy --policy-arn {}
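The grep/awk/tr stage of that pipeline is easy to test in isolation. Here it runs against a canned fragment of `list-policies`-style JSON (the ARNs are fake), showing exactly what ends up being handed to `xargs`:

```shell
# Fake fragment of aws iam list-policies output
input='        "Arn": "arn:aws:iam::123456789012:policy/terraform-foo",
        "Arn": "arn:aws:iam::123456789012:policy/other-policy",'

# Keep lines mentioning terraform, take the value field, strip quotes and commas
arns=$(echo "$input" | grep terraform | grep arn | awk '{print $2}' | tr -d '"' | tr -d ',')
echo "$arns"   # arn:aws:iam::123456789012:policy/terraform-foo
```

A `jq`-based version (`jq -r '.Policies[].Arn' | grep terraform`) would be more robust than grepping raw JSON, but the text pipeline works fine for this output shape.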

Create instance profile

aws iam create-instance-profile --instance-profile-name $PROFILE_NAME


List instance profiles

aws iam list-instance-profiles

Associate role with instance profile

aws iam add-role-to-instance-profile --role-name YourNewRole --instance-profile-name YourNewRole-Instance-Profile

Delete instance profile

aws iam delete-instance-profile --instance-profile-name $PROFILE_NAME

Associate Instance Profile with instance you want to use

aws ec2 associate-iam-instance-profile --instance-id YourInstanceId --iam-instance-profile Name=YourNewRole-Instance-Profile

Attach IAM instance profile to ec2 instance via UI

  1. Open the Amazon EC2 console
  2. Click Instances
  3. Click the instance you want to access the s3 bucket from
  4. Click Actions in the upper right-hand side of the screen
  5. Click Security -> Modify IAM role
  6. Enter the name of the IAM role created previously
  7. Click Save

To download files from the S3 bucket, follow the steps at the top of the page under INSTALL LATEST VERSION OF AWS CLI ON LINUX to get the AWS cli utils in order to grab stuff from the bucket.

Resources: - Set up IAM and attach it to ec2 instance - IAM policy used

Get assumed roles in instance

aws --profile test sts get-caller-identity

Use instance profile credentials in ec2 instance

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

ROLE_NAME=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/")

export AWS_ACCESS_KEY_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE_NAME}" | jq -r .AccessKeyId)

export AWS_SECRET_ACCESS_KEY=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE_NAME}" | jq -r .SecretAccessKey)

export AWS_SESSION_TOKEN=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE_NAME}" | jq -r .Token)


Validate cloud-init

cloud-init devel schema --config-file bob.yaml


Delete cloud-init logs

cloud-init clean --logs

Log locations for cloud-init

  • /var/log/cloud-init.log
  • /var/log/cloud-init-output.log

cloud-init collect-logs will bundle these up in a tarball, along with the cloud-init package version, dmesg output, and journalctl output.


Location of userdata script

View what’s provided as userdata by running this command:

cat /var/lib/cloud/instance/cloud-config.txt



List Tables

aws dynamodb list-tables


Delete Table

aws dynamodb delete-table --table-name $TABLE


Install session manager plugin on MacOS

brew install --cask session-manager-plugin --no-quarantine

Resource: - avoid annoying MacOS malware message with the --no-quarantine parameter

Set default shell and script to run for instances

  1. Go to
  2. Scroll down to Linux shell profile
  3. Input the following to run zsh if it is installed:
    if [[ "$(which zsh)" ]]; then
      cd "${HOME}"
      exec "$(which zsh)"
    fi
  4. Click Save


Show managed SSM instances

aws ssm describe-instance-information

List parameters

aws ssm describe-parameters

Access a parameter

aws ssm get-parameter --name /path/to/parameter

Install SSM Agent Manually on Ubuntu ec2 instance

sudo snap install amazon-ssm-agent --classic
sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service


SSH over SSM

Add your ssh public key to your instance’s authorized_keys file.

Add this to your local system’s ~/.ssh/config:

# SSH over Session Manager

host i-* mi-*

ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

Access the instance via:

ssh -i ~/.ssh/instance-key.pem ubuntu@$INSTANCE_ID


KMS Encryption

Create KMS key for session encryption

It’s worth noting that sessions already have encryption in place for SSM connection data (TLS 1.2 by default). However, if you want to use fleet manager, then you’ll need to enable KMS encryption.

  1. Navigate to
  2. Leave the default (Symmetric)
  3. Click Next
  4. Input an alias, provide a Name tag if you choose -> Next
  5. Specify the role you use for the SSM IAM Instance Profile - if you don’t have one yet, it’s the name of the role you create at step 4 of the guide below
  6. Click Next
  7. Click Finish

Resources: - how to create the key - explains existing encryption

Enable KMS Encryption

  1. Navigate to
  2. Click Preferences -> Edit
  3. Check the box next to Enable KMS encryption
  4. Click Select a KMS key -> select the key we created previously from the dropdown
  5. Scroll all the way down and click Save

Access EC2 instance

  1. Create the SSM Service Linked role:
aws iam create-service-linked-role --aws-service-name ssm.amazonaws.com --description "Provides access to AWS Resources managed or used by Amazon SSM"
  2. Create an instance profile for SSM:
aws iam create-instance-profile --instance-profile-name AmazonSSMInstanceProfileForInstances
  3. Create a trust relationship JSON file:
cat > trust_policy.json <<- EOM
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
EOM
  4. Create the SSM IAM role:
aws iam create-role --role-name "AmazonSSMRoleForInstances" --assume-role-policy-document file://trust_policy.json
  5. Attach the required IAM policy for SSM:
aws iam attach-role-policy --role-name "AmazonSSMRoleForInstances" --policy-arn "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"

5a. If you are using KMS encryption, you’ll need to add an inline policy as well:

cat > kms_ssm_policy.json <<- EOM
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": "YOURKEYARN"
        }
    ]
}
EOM
Note: Be sure to replace YOURKEYARN with your KMS key’s ARN.

5b. Add the policy to your existing role:

aws iam put-role-policy --role-name "AmazonSSMRoleForInstances" --policy-name KMSSSM --policy-document file://kms_ssm_policy.json
  6. Attach the role to the instance profile:
aws iam add-role-to-instance-profile --instance-profile-name "AmazonSSMInstanceProfileForInstances" --role-name "AmazonSSMRoleForInstances"
  7. Attach the instance profile to an EC2 instance:
aws ec2 associate-iam-instance-profile --instance-id $INSTANCE_ID --iam-instance-profile "Name=AmazonSSMInstanceProfileForInstances"
  8. Access the instance with SSM:
aws ssm start-session --target $INSTANCE_ID

Resources: - provided all of the commands - add inline policies to roles programmatically


SSH Key Encryption


openssl rsa -des3 -in key.pem -out encrypted-key.pem
# Enter the pass phrase you've selected
mv encrypted-key.pem key.pem
chmod 400 key.pem


openssl rsa -in key.pem -out decrypted-key.pem
# Enter the pass phrase you've selected
mv decrypted-key.pem key.pem
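A non-interactive roundtrip sketch of the encrypt/decrypt flow above, using a throwaway key. The `-passout`/`-passin` flags supply the passphrase on the command line so there is no prompt; that is fine for a demo but defeats the purpose for a real key, where you should let openssl prompt you:

```shell
# Generate a throwaway key (demo only - never pass a real passphrase like this)
openssl genrsa -out /tmp/demo-key.pem 2048 2>/dev/null
# Encrypt it with DES3, then decrypt it again
openssl rsa -des3 -in /tmp/demo-key.pem -out /tmp/demo-encrypted.pem -passout pass:demo-passphrase 2>/dev/null
openssl rsa -in /tmp/demo-encrypted.pem -out /tmp/demo-decrypted.pem -passin pass:demo-passphrase 2>/dev/null

# The decrypted key carries the same modulus as the original
openssl rsa -in /tmp/demo-decrypted.pem -noout -modulus 2>/dev/null | head -c 16
```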


View information for all VPCs

  1. Open the VPC dashboard
  2. Click on Running instances -> See all regions


Multiple filters

You need to separate each filter with spaces and put each one in quotes. This particular example will find tcp and udp security groups open to 0.0.0.0/0:

aws ec2 describe-security-groups --filters "Name=ip-permission.cidr,Values='0.0.0.0/0'" "Name=ip-permission.protocol,Values='tcp'" "Name=ip-permission.protocol,Values='udp'" --query "SecurityGroups[*].[GroupId]" | jq -r .[][0]



List repositories

aws ecr describe-repositories | jq

Create repository

aws ecr create-repository --repository-name ${REPO_NAME} | jq


Delete repository

aws ecr delete-repository --repository-name ${REPO_NAME} | jq


Delete repository with images

aws ecr delete-repository --repository-name ${REPO_NAME} --force | jq


Delete all repositories

Obviously this is incredibly destructive, so be extremely careful if you use this, it will delete ALL of the repos in your region!!!

repos=$(aws ecr describe-repositories | jq -c .repositories)

delete_repo() {
    local repo_name=$1
    aws ecr delete-repository --repository-name "${repo_name}" --force | jq
}

for repo in $(echo "${repos}" | jq -r '.[] | @base64'); do
    repo_name=$(echo "${repo}" | base64 --decode | jq -r '.repositoryName')
    delete_repo "${repo_name}"
done
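The `@base64` dance in that loop exists because `for` word-splits on whitespace: base64-encoding each JSON element lets the loop safely carry values containing spaces or newlines. A self-contained sketch with fake repository JSON:

```shell
# Fake describe-repositories-style JSON; note the space in the first name
repos='[{"repositoryName":"my repo one"},{"repositoryName":"repo-two"}]'

names=""
# Each element becomes one whitespace-free base64 token, so the loop is safe
for repo in $(echo "${repos}" | jq -r '.[] | @base64'); do
    name=$(echo "${repo}" | base64 --decode | jq -r '.repositoryName')
    names="${names}${name};"
done
echo "${names}"   # my repo one;repo-two;
```

On newer jq you can often skip this entirely with `jq -r '.[].repositoryName'` and `while read -r`, but the base64 pattern also survives embedded newlines.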


Grab cert from ACM

#!/usr/bin/env bash
set -e

# Set TARGET_DOMAIN to the domain whose cert you want to grab
TARGET_DOMAIN="${TARGET_DOMAIN:?set to the cert domain you want}"

# Get the certificate ARN
aws_certs=$(aws acm list-certificates | jq .CertificateSummaryList)
for row in $(echo "${aws_certs}" | jq -r '.[] | @base64'); do
    # Get the cert domain and ARN
    cert_domain=$(echo "${row}" | base64 --decode | jq -r '.DomainName')
    cert_arn=$(echo "${row}" | base64 --decode | jq -r '.CertificateArn')
    if [[ "${cert_domain}" == "${TARGET_DOMAIN}" ]]; then
        echo "Got the ARN associated with ${cert_domain} - ${cert_arn}"
        break
    fi
done

aws acm get-certificate --certificate-arn "${cert_arn}" | jq -r .Certificate > "${cert_domain}.pem"
aws acm get-certificate --certificate-arn "${cert_arn}" | jq -r .CertificateChain > "${cert_domain}-fullchain.pem"

Resources: - official docs - let me know it was possible

Create LetsEncrypt Cert using Route 53 plugin

This particular function has been tested to work with Ubuntu 20.04:

get_cert() {
    snap install core; snap refresh core
    apt-get remove -y certbot
    snap install --classic certbot
    ln -s /snap/bin/certbot /usr/bin/certbot
    snap set certbot trust-plugin-with-root=ok
    snap install certbot-dns-route53
    if [[ ${CERT_MODE} == 'prod' ]]; then
        # Prod certs have a rate limit, so you want to be judicious
        # with the number of times you deploy with a prod cert
        certbot certonly --dns-route53 -d "${SERVER_DOMAIN}"
    else
        # Deploy with staging cert if prod isn't specified
        certbot certonly --dns-route53 --staging -d "${SERVER_DOMAIN}"
    fi
}


Resource: - official docs


Delete all task definitions

#!/usr/bin/env bash
REGION=us-west-2 # if AWS_DEFAULT_REGION isn't already set

get_task_definition_arns() {
    aws ecs list-task-definitions --region "${REGION}" \
        | jq -M -r '.taskDefinitionArns | .[]'
}

delete_task_definition() {
    local arn=$1

    aws ecs deregister-task-definition \
        --region "${REGION}" \
        --task-definition "${arn}" > /dev/null
}

for arn in $(get_task_definition_arns); do
    echo "Deregistering ${arn}..."
    delete_task_definition "${arn}"
    # Speed things up with concurrency:
    #delete_task_definition "${arn}" &
done



CodeCommit is a service in AWS that provides an option for private git repos. Access can be dictated by IAM, which is nice.