AWS Cheatsheet

This contains various commands and information that I find useful for AWS work.

UI

Backup instance manually

  1. Go to your instance
  2. Right click and select Image from the dropdown
  3. Click Create Image
  4. Give your backup a name and description
  5. Click No reboot if you want your instance to stay in a running state
  6. Click Create Image
  7. At this point you should be able to find the AMI that is associated with your backup under AMIs. Give the AMI a more descriptive name if you'd like.
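
If you'd rather script this, the same backup can be taken from the CLI; a rough sketch (the instance id, name, and description are placeholders):

aws ec2 create-image --instance-id <instance id> --name "my-backup" --description "manual backup" --no-reboot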

Resource: https://n2ws.com/blog/how-to-guides/automate-amazon-ec2-instance-backup

Parameter Store location

  1. Log in
  2. Search for Systems Manager
  3. Click on Parameter Store in the menu on the left-hand side
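
If you'd rather use the CLI, the same parameters can be listed with (see the SSM section below):

aws ssm describe-parameters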

EC2

List instances

aws ec2 describe-instances

Get number of instances

aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId]' --output text | wc -l

Resource: https://stackoverflow.com/questions/40164786/determine-how-many-aws-instances-are-in-a-zone

Assign an elastic IP to an instance

aws ec2 associate-address --allocation-id eipalloc-<eip id> --instance-id <the instance id>

Create instance with a tag

aws ec2 run-instances --image-id ami-xxxxxxx --count 1 --instance-type t2.medium --key-name MyKeyPair --security-group-ids sg-xxxxxx --subnet-id subnet-xxxxxx --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-test-instance}]'

Resource: https://serverfault.com/questions/724501/how-to-add-a-tag-when-launching-an-ec2-instance-using-aws-clis

Create instance using security group name

aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups MySecurityGroup

List instances with filtering

This example in particular will get you all your m1.micro instances.

aws ec2 describe-instances --filters "Name=instance-type,Values=m1.micro"

List instance by instance id

aws ec2 describe-instances --instance-ids i-xxxxx

Destroy instance

aws ec2 terminate-instances --instance-ids <instance id(s)>

If you want to terminate multiple instances, be sure to use this format:

id1 id2 id3
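
For example (hypothetical instance ids):

aws ec2 terminate-instances --instance-ids i-xxxxx1 i-xxxxx2 i-xxxxx3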

Get info about a specific AMI by name and output to JSON

aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --output json

Get AMI id with some python

This uses the run_cmd() function found in /python-notes/.

import json

def get_ami_id(ec2_output):
    return json.loads(ec2_output.decode('utf-8'))['Images'][0]['ImageId']

ec2_output = run_cmd('aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --output json')
ami_id = get_ami_id(ec2_output)
print(ami_id)

Deregister an AMI

aws ec2 deregister-image --image-id <ami id>

Get list of all instances with the state terminated

aws ec2 describe-instances --filters "Name=instance-state-name,Values=terminated"

Alternatively, if you want running instances, change Values=terminated to Values=running.
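
For example:

aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"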

Get info about an AMI by product-code

aws --region <region> ec2 describe-images --owners aws-marketplace --filters Name=product-code,Values=<product code>

This is useful if you have the product code and want more information (like the image ID). For CentOS, you can get the product code here. I started down this path when I was messing around with the code in this gist for automatically creating encrypted AMIs.

Resize ec2 instance

https://medium.com/@kenichishibata/resize-aws-ebs-4d6e2bf00feb

Show available subnets

aws ec2 describe-subnets

CodeBuild

Pretty decent, relatively up-to-date tutorial on using CodeBuild and CodeCommit to autobuild AMIs: https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer/

If you want to use the gist mentioned above to create encrypted AMIs, be sure to specify aws_region, aws_vpc, aws_subnet, and ssh_username in the variables section.

CodeCommit

You like the idea of CodeCommit? You know, having git repos that are accessible via IAM?

How about using it in your EC2 instances without needing to store credentials? Really cool idea, right? Bet it's pretty easy to set up too, huh? Ha!

Well, it actually is, as long as you know what to do. Here we go:

Build the proper IAM role

  1. Log in to the UI
  2. Click on IAM
  3. Click Roles
  4. Click Create role
  5. Click EC2, then click Next: Permissions
  6. Search for CodeCommit, check the box next to AWSCodeCommitReadOnly
  7. Click Next: Tags
  8. Give it some tags if you'd like, click Next: Review
  9. Specify a Role name, like CodeCommit-Read
  10. Click Create role
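
If you'd rather script it, roughly the same role can be built from the CLI; a sketch (the role name matches the one above, and the trust policy is the standard EC2 assume-role document):

# Trust policy allowing EC2 to assume the role
cat > trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the managed read-only CodeCommit policy
aws iam create-role --role-name CodeCommit-Read --assume-role-policy-document file://trust.json
aws iam attach-role-policy --role-name CodeCommit-Read --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitReadOnly

# EC2 needs an instance profile wrapping the role
aws iam create-instance-profile --instance-profile-name CodeCommit-Read
aws iam add-role-to-instance-profile --instance-profile-name CodeCommit-Read --role-name CodeCommit-Read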

Now we're cooking. Let's test it out by building an instance; don't forget to assign it the CodeCommit-Read IAM role. You can figure this part out.

Cloning into a repo

Once you've got a working instance:

  1. SSH into it
  2. sudo su
  3. Install the awscli with pip: pip install awscli
  4. Run the following commands, changing us-west-2 to match the region you're working with:
git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.helper '!aws --profile default codecommit credential-helper $@'
git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.UseHttpPath true
aws configure set region us-west-2

At this point, you should be able to clone your repo: git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/GREATREPONAME

Resources:
https://jameswing.net/aws/codecommit-with-ec2-role-credentials.html
https://stackoverflow.com/questions/46164223/aws-pull-latest-code-from-codecommit-on-ec2-instance-startup - This site got me to the above site, but had incomplete information for their proposed solution.

Integrating this in with CodeBuild

To get this to work with CodeBuild for automated and repeatable builds, I needed to do a few other things, primarily taking advantage of the Parameter Store. When I was trying to build initially, my buildspec.yml looked something like this (basically emulating the one found here):

---
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.1.1/packer_1.1.1_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating kali.json"
      - ./packer validate kali.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, kali.json"
      - ./packer build kali.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

However, I was getting an obscure error message about authentication, and spent several hours messing around with IAM roles without any luck. At some point, I decided to try throwing a "parameter" in for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. This worked great, but whenever I ran the build again, I would hit the same issue as before. To fix it, I had to modify the buildspec.yml to look like this (obviously the values you have for your parameter store may vary depending on what you set for them):

---
version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID: "/CodeBuild/AWS_ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "/CodeBuild/AWS_SECRET_ACCESS_KEY"

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.1.1/packer_1.1.1_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating kali.json"
      - ./packer validate kali.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, kali.json"
      - ./packer build kali.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

At this point, everything is working consistently with the IAM role mentioned previously being specified in the packer file (this is a snippet):

"variables": {
    "iam_role": "CodeCommit-Read"
  },

  "builders": [{
    "iam_instance_profile": "{{user `iam_role` }}",
  }],
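
For the parameter-store lookup in that buildspec to work, the parameters have to exist first. A sketch of creating them (the names match the buildspec above; the values are your own):

aws ssm put-parameter --name "/CodeBuild/AWS_ACCESS_KEY_ID" --value "<access key id>" --type SecureString
aws ssm put-parameter --name "/CodeBuild/AWS_SECRET_ACCESS_KEY" --value "<secret access key>" --type SecureString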

SSM

Get the information for a particular parameter (this will give you the encrypted value if the parameter is a SecureString): aws ssm get-parameter --name <nameofparam>

List parameters

aws ssm describe-parameters

Access a parameter

aws ssm get-parameter --name /path/to/parameter
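
If the parameter is a SecureString and you want the plaintext value, add --with-decryption:

aws ssm get-parameter --name /path/to/parameter --with-decryption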

Miscellaneous

Encrypt your pem file:

openssl rsa -des3 -in key.pem -out encrypted-key.pem
# Enter the pass phrase you've selected
mv encrypted-key.pem key.pem
chmod 400 key.pem

Remove the encryption:

openssl rsa -in key.pem -out key.open.pem
# Enter the pass phrase you've selected
mv key.open.pem key.pem

Resource: https://security.stackexchange.com/questions/59136/can-i-add-a-password-to-an-existing-private-key

Set up aws cli with pipenv on OSX

https://duseev.com/articles/perfect-aws-cli-setup/

S3

List bucket contents

aws s3 ls s3://target/

Download bucket

aws s3 sync s3://mybucket .

Resource: https://stackoverflow.com/questions/8659382/downloading-an-entire-s3-bucket/55061863

Copy file down

aws s3 cp s3://target/file.html file.html

Copy file up

aws s3 cp TEST s3://target

Copy folder up

aws s3 cp foldertocopy s3://bucket/foldertocopy --recursive

Resource: https://coderwall.com/p/rckamw/copy-all-files-in-a-folder-from-google-drive-to-aws-s3

Cheatsheet

https://linuxacademy.com/blog/amazon-web-services-2/aws-s3-cheat-sheet/

Set up S3 IAM for backup/restore

Storing AWS credentials on an instance to access an S3 bucket can be a bad idea. Let's talk about what we need to do in order to safely back up to and restore from an S3 bucket:

Create Policy

  1. Go to IAM
  2. Policies
  3. Create Policy
  4. Policy Generator, or copy and paste JSON from the interwebs into Create Your Own Policy. This is the one I used:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::techvomit"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket name>/*"
            ]
        }
    ]
}
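
If you save the JSON above as policy.json, the policy can also be created from the CLI (the policy name is arbitrary):

aws iam create-policy --policy-name s3-backup-restore --policy-document file://policy.json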

Create a Role

  1. Go to Roles in IAM
  2. Click Create role
  3. Select EC2
  4. Select EC2 again and click Next: Permissions
  5. Find the policy you created previously
  6. Click Next: Review
  7. Give the Role a name and a description, click Create role

Assign the role to your instance

This is the instance that houses the service that needs to back up to and restore from your S3 bucket.

  1. In EC2, if the instance is already created, right click it, Instance Settings, Attach/Replace IAM Role
  2. Specify the IAM role you created previously, click Apply.
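
The CLI equivalent, in case the instance is created programmatically (instance id and role name are placeholders):

aws ec2 associate-iam-instance-profile --instance-id <instance id> --iam-instance-profile Name=<role name>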

Set up automated expiration of objects

This will ensure that backups don't stick around longer than they need to. You can also set up rules to transfer them to long term storage during this process, but we're not going to cover that here.
From the bucket overview screen:

  1. Click Management
  2. Click Add lifecycle rule
  3. Specify a name, click Next
  4. Click Next
  5. Check Current version and Previous versions
  6. Specify a desired number of days to expiration for both the current version and the previous versions, click Next
  7. Click Save
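
The same rule can be pushed from the CLI; a sketch assuming a 30-day expiration for both current and previous versions (adjust to taste):

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 30 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration --bucket <bucket name> --lifecycle-configuration file://lifecycle.json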

Mount bucket as local directory

Warning, this is painfully slow once you have it set up.

Follow the instructions found on this site.

Then, run this script:

#!/bin/bash
folder="/tmp/folder"
if [ ! -d "$folder" ]; then
    mkdir "$folder"
fi

s3fs bucket_name "$folder" -o passwd_file="${HOME}/.passwd-s3fs" -o volname="S3-Bucket"

Copy multiple folders to bucket

aws s3 cp /path/to/dir/with/folders/to/copy s3://bucket/ --recursive --exclude ".git/*"

Resource: https://superuser.com/questions/1497268/selectively-uploading-multiple-folders-to-aws-s3-using-cli

Boto

Create session

from boto3.session import Session

# Assumes access_key, secret_key, and session_token are already defined
def create_session():
  session = Session(aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key,
                    aws_session_token=session_token)
  return session

Resource: https://stackoverflow.com/questions/30249069/listing-contents-of-a-bucket-with-boto3

List buckets with boto3

def get_s3_buckets(session):
  s3 = session.resource('s3')
  print("Bucket List:")
  for bucket in s3.buckets.all():
    print(bucket.name)

Resource: https://stackoverflow.com/questions/36042968/get-all-s3-buckets-given-a-prefix

Show items in an s3 bucket

def list_s3_bucket_items(session, bucket):
  s3 = session.resource('s3')
  my_bucket = s3.Bucket(bucket)

  for file in my_bucket.objects.all():
    print(file.key)

List Users

import boto3

# Assumes access_key, secret_key, and session_token are already defined
def get_users(session):
  client = boto3.client('iam', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token)
  users = client.list_users()
  for key in users['Users']:
    print(key['UserName'])

Resource: https://stackoverflow.com/questions/46073435/how-can-we-fetch-iam-users-their-groups-and-policies

Get account id

def sts(session):
  sts_client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, aws_session_token=session_token)
  print(sts_client.get_caller_identity()['Account'])

Create ec2 instance with name

import boto3

EC2_RESOURCE = boto3.resource('ec2')

def create_ec2_instance():
    instance = EC2_RESOURCE.create_instances(
        ImageId='ami-ID_GOES_HERE',
        MinCount=1,
        MaxCount=1,
        InstanceType='t2.micro',
        SecurityGroupIds = ["sg-ID_GOES_HERE"]
        KeyName='KEY_NAME_GOES_HERE',
        TagSpecifications=[
            {
                'ResourceType': 'instance',
                'Tags': [
                    {
                        'Key': 'Name',
                        'Value': 'INSTANCE_NAME_HERE'
                    }
                ]
            }
        ]
    )
    return instance[0]

Resources:
https://blog.ipswitch.com/how-to-create-an-ec2-instance-with-python
https://stackoverflow.com/questions/52436835/how-to-set-tags-for-aws-ec2-instance-in-boto3
http://blog.conygre.com/2017/03/27/boto-script-to-launch-an-ec2-instance-with-an-elastic-ip-and-a-route53-entry/

Allocate and associate an elastic IP

import boto3
from botocore.exceptions import ClientError

# Wait for the instance to finish launching before assigning the elastic IP address
print('Waiting for instance to get to a running state, please wait...')
instance.wait_until_running()

EC2_CLIENT = boto3.client('ec2')

try:
    # Allocate an elastic IP
    eip = EC2_CLIENT.allocate_address(Domain='vpc')
    
    # Associate the elastic IP address with an instance launched previously
    response = EC2_CLIENT.associate_address(
                   AllocationId=eip['AllocationId'],
                   InstanceId='INSTANCE_ID_GOES_HERE'
               )
    print(response)
except ClientError as e:
    print(e)

Associate an existing elastic IP

EC2_CLIENT.associate_address(
    AllocationId='eipalloc-EXISTING_EIP_ID_GOES_HERE',
    InstanceId='INSTANCE_ID_GOES_HERE'
)

Resource:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/ec2-example-elastic-ip-addresses.html
http://blog.conygre.com/2017/03/27/boto-script-to-launch-an-ec2-instance-with-an-elastic-ip-and-a-route53-entry/

Wait for instance to finish starting

import socket
import time

# 'instance' is the list returned by create_instances() above
retries = 10
retry_delay = 10
retry_count = 0
instance[0].wait_until_running()
instance[0].reload()
while retry_count <= retries:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    result = sock.connect_ex((instance[0].public_ip_address, 22))
    sock.close()
    if result == 0:
        print(f"The instance is up and accessible on port 22 at {instance[0].public_ip_address}")
        break
    else:
        print("Instance is still coming up, retrying . . . ")
        retry_count += 1
        time.sleep(retry_delay)

Resource:
https://stackoverflow.com/questions/46379043/boto3-wait-until-running-doesnt-work-as-desired

Metadata

Get Credentials

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
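
That call returns the name of the attached role; append it to the same URL to get the actual temporary keys:

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role name>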

Resource:
https://gist.github.com/quiver/87f93bc7df6da7049d41

Get region

curl http://169.254.169.254/latest/dynamic/instance-identity/document

Get role-name

curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

The role name will be listed here.

Resource: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

Get Account ID

curl http://169.254.169.254/latest/meta-data/identity-credentials/ec2/info/

Get public hostname

curl http://169.254.169.254/latest/meta-data/public-hostname

Show configuration

aws configure list

Get account id

aws sts get-caller-identity | jq '.Account'

Resource: https://shapeshed.com/jq-json/#how-to-find-a-key-and-value

Go SDK

Stand up EC2 Instance

This accounts for the exceptionally annoying error An error occurred (VPCIdNotSpecified) when calling the RunInstances operation: No default VPC for this user, which does not have any solutions in sight unless you do some deep diving. Essentially, it means that a default VPC isn't defined, so you need to provide a subnet id:

package main

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"

    "fmt"
    "log"
)

func main() {

    // Get credentials from ~/.aws/credentials
    sess, err := session.NewSession(&aws.Config{
        Region: aws.String("us-west-2"),
    })
    if err != nil {
        log.Fatal("Could not create session: ", err)
    }

    // Create EC2 service client
    svc := ec2.New(sess)

    // Specify the details of the instance that you want to create.
    runResult, err := svc.RunInstances(&ec2.RunInstancesInput{
        ImageId:      aws.String("ami-id-here"),
        InstanceType: aws.String("t2.small"),
        MinCount:     aws.Int64(1),
        MaxCount:     aws.Int64(1),
        SecurityGroupIds: aws.StringSlice([]string{"sg-id-here"}),
        KeyName: aws.String("keypairname-here"),
        SubnetId:     aws.String("subnet-id-here"),
    })

    if err != nil {
        fmt.Println("Could not create instance", err)
        return
    }

    fmt.Println("Created instance", *runResult.Instances[0].InstanceId)

    // Add tags to the created instance
    _, errtag := svc.CreateTags(&ec2.CreateTagsInput{
        Resources: []*string{runResult.Instances[0].InstanceId},
        Tags: []*ec2.Tag{
            {
                Key:   aws.String("Name"),
                Value: aws.String("GoInstance"),
            },
        },
    })
    if errtag != nil {
        log.Println("Could not create tags for instance", runResult.Instances[0].InstanceId, errtag)
        return
    }

    fmt.Println("Successfully tagged instance")
}

Resources:
https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/ec2-example-create-images.html - starting point
https://gist.github.com/stephen-mw/9f289d724c4cfd3c88f2
https://stackoverflow.com/questions/50289221/unable-to-create-ec2-instance-using-boto3 - where I found some of the solution (for boto, which translated to this fortunately)
https://docs.aws.amazon.com/sdk-for-go/api/aws/#StringSlice

Stand up EC2 Instance with lambda

Modify the previous code to make it into a lambda function - find it here.

Next you'll need to get the binary for the function:

env GOOS=linux GOARCH=amd64 go build -o /tmp/main

With that, you'll need to zip it up:

zip -j /tmp/main.zip /tmp/main

At this point, you need to create the iam role:

  1. Navigate to https://console.aws.amazon.com/iam/home#/roles
  2. Click Create role
  3. Click Lambda
  4. Click Next: Permissions
  5. Add the following policies:
AmazonEC2FullAccess
AWSLambdaBasicExecutionRole
AWSLambdaVPCAccessExecutionRole
  6. Click Next: Tags
  7. Give it a Name tag and click Next: Review
  8. Give it a Role name such as "LambdaCreateEc2Instance"
  9. Click Create role
  10. Once it's completed, click the role and copy the Role ARN

Now, you'll need to run the following command to create the lambda function:

aws lambda create-function --function-name createEc2Instance --runtime go1.x \
  --zip-file fileb:///tmp/main.zip --handler main \
  --role <Role ARN copied previously>

Lastly, you'll need to populate all of the environment variables. To do this, you can use this script:

aws lambda update-function-configuration --function-name createEc2Instance \
    --environment "Variables={AMI=ami-id-here, INSTANCE_TYPE=t2.small, SECURITY_GROUP=sg-id-here, KEYNAME=keypairname-here, SUBNET_ID=subnet-id-here}"

Alternatively, you can set the values in the lambda UI by clicking Manage environment variables, but this gets very tedious very quickly.

If you want to throw all of this into a Makefile to streamline testing, you could do something like this:

build:
	env GOOS=linux GOARCH=amd64 go build -o /tmp/main

deploy:
	zip -j /tmp/main.zip /tmp/main
	bash scripts/create_function.sh
	bash scripts/create_env_vars.sh

run:
	aws lambda invoke --function-name createEc2Instance /tmp/output.json

Run the whole thing with this command:

make build && make deploy && make run

At this point, you can go ahead and invoke the lambda function to see if everything is working as expected:

aws lambda invoke --function-name createEc2Instance /tmp/output.json

You can confirm this by looking at your running EC2 instances and checking that a new one spins up after you invoke the lambda function.

Resources:
https://www.alexedwards.net/blog/serverless-api-with-go-and-aws-lambda#setting-up-the-https-api - doing all the lambda cli stuff and making a lambda function with golang
https://medium.com/appgambit/aws-lambda-launch-ec2-instances-40d32d93fb58 - doing the web UI stuff
https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html - setting the env vars programmatically
https://www.softkraft.co/aws-lambda-in-golang/ - fantastic in-depth guide for using Go with Lambda

CORS with lambda and API Gateway

Want to do AJAX stuff with your lambda function(s)? Cool, you're in the right place.

  1. Open your gateway
  2. Click Actions -> Enable CORS
  3. Check the boxes for POST, GET, and OPTIONS
  4. Input the following for Access-Control-Allow-Headers:
'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'
  5. Input the following for Access-Control-Allow-Origin:
'*'
  6. Click Enable CORS and replace existing CORS headers

For Options Method

Open the Method Response and click the arrow next to 200. Add the following headers:

(screenshot of the Method Response headers omitted)

For GET Method

Be sure to add the appropriate headers to your APIGatewayProxyResponse:

Headers: map[string]string{
    "Access-Control-Allow-Origin":      "*",
    "Access-Control-Allow-Credentials": "true",
},

Next, open the Method Response and click the arrow next to 200. Add the following headers:

(screenshot of the Method Response headers omitted)

For POST Method

Open the Method Response and click the arrow next to 200. Add the following header:
(screenshot of the Method Response header omitted)

Finishing touches

Finally, be sure to click Actions -> Deploy API when you're done.

Resource: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors-console.html

Return Response for API Gateway

You have two options here:

return events.APIGatewayProxyResponse{
    StatusCode: http.StatusBadGateway,
    Headers: map[string]string{
        "Access-Control-Allow-Origin":      "*",
        "Access-Control-Allow-Credentials": "true",
    },
    Body: "Method not Allowed",
}, nil

or alternatively:

resp := events.APIGatewayProxyResponse{Headers: make(map[string]string)}
resp.Headers["Access-Control-Allow-Origin"] = "*"
resp.Headers["Access-Control-Allow-Credentials"] = "true"
resp.StatusCode = http.StatusOK
resp.Body = string(publicInstanceIp)
return resp, nil

Resources:
https://github.com/serverless/examples/blob/master/aws-golang-simple-http-endpoint/hello/main.go - used to figure out the first option

Update function via CLI

This is useful to run after updating your code. This will grab main.zip in the current directory:

env GOOS=linux GOARCH=amd64 go build -o main
zip -j main.zip main
aws lambda update-function-code --function-name <lambda function name> --zip-file fileb:///${PWD}/main.zip

Resource: https://stackoverflow.com/questions/49611739/aws-lambda-update-function-code-with-jar-package-via-aws-cli