AWS Cheatsheet

This contains various commands and information that I find useful for AWS work.

UI

Backup instance manually

  1. Go to your instance
  2. Right-click it and select Image from the dropdown menu
  3. Click Create Image
  4. Give your backup a name and description
  5. Check No reboot if you want your instance to stay in a running state
  6. Click Create Image
  7. At this point you should be able to find the AMI associated with your backup under AMIs. Give the AMI a more descriptive name if you'd like.
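The same backup can be created from the CLI if you'd rather script it. A minimal sketch, with placeholder values for the instance ID, name, and description:

# --no-reboot is the CLI equivalent of checking No reboot in the console
aws ec2 create-image --instance-id <instance id> --name "my-backup" --description "manual backup" --no-reboot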

Resource: https://n2ws.com/blog/how-to-guides/automate-amazon-ec2-instance-backup

Parameter Store location

  1. Login
  2. Search for Systems Manager
  3. Click on Parameter Store in the menu on the left-hand side

EC2

Assign an elastic IP to an instance

aws ec2 associate-address --allocation-id eipalloc-<eip id> --instance-id <the instance id>
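If you don't have the allocation ID handy, this will list your elastic IPs along with their allocation IDs:

aws ec2 describe-addresses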

Create instance with a tag

aws ec2 run-instances --image-id ami-xxxxxxx --count 1 --instance-type t2.medium --key-name MyKeyPair --security-group-ids sg-xxxxxx --subnet-id subnet-xxxxxx --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=my-test-instance}]'

Resource: https://serverfault.com/questions/724501/how-to-add-a-tag-when-launching-an-ec2-instance-using-aws-clis

Create instance using security group name

aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t1.micro --key-name MyKeyPair --security-groups MySecurityGroup

List instances with filtering

This particular example will return all of your m1.micro instances.

aws ec2 describe-instances --filters "Name=instance-type,Values=m1.micro"

List instance by instance id

aws ec2 describe-instances --instance-ids i-xxxxx

Destroy instance

aws ec2 terminate-instances --instance-ids <instance id(s)>

If you want to terminate multiple instances, provide their IDs as a space-separated list:

id1 id2 id3
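For example, with hypothetical instance IDs:

aws ec2 terminate-instances --instance-ids i-aaaaaaaa i-bbbbbbbb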

Get info about a specific AMI by name and output to JSON

aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --output json
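If all you need is the image ID, a --query expression can pull it out directly; a sketch using the same placeholder AMI name:

aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --query 'Images[0].ImageId' --output text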

Get AMI ID with some Python

This uses the run_cmd() function found in https://techvomit.net/python-notes/.

import json

def get_ami_id(ec2_output):
    # Parse the describe-images JSON and return the first image's ID
    return json.loads(ec2_output.decode('utf-8'))['Images'][0]['ImageId']

ec2_output = run_cmd('aws ec2 describe-images --filters "Name=name,Values=<AMI Name>" --output json')
ami_id = get_ami_id(ec2_output)
print(ami_id)

Deregister an AMI

aws ec2 deregister-image --image-id <ami id>

Get list of all instances with the state terminated

aws ec2 describe-instances --filters "Name=instance-state-name,Values=terminated"

Alternatively, if you want running instances, change Values=terminated to Values=running.
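If you just want the instance IDs rather than the full JSON, a --query expression like this one (my own addition) works:

aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query 'Reservations[].Instances[].InstanceId' --output text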

Get info about an AMI by product-code

aws --region <region> ec2 describe-images --owners aws-marketplace --filters Name=product-code,Values=<product code>

This is useful if you have the product code and want more information (like the image ID). For CentOS, you can get the product code here. I started down this path when I was messing around with the code in this gist for automatically creating encrypted AMIs.

Resize ec2 instance

https://medium.com/@kenichishibata/resize-aws-ebs-4d6e2bf00feb
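That article covers resizing the EBS volume. For changing the instance type itself, a rough sketch (the instance has to be stopped first; t2.large is just an example target):

aws ec2 stop-instances --instance-ids <instance id>
aws ec2 wait instance-stopped --instance-ids <instance id>
aws ec2 modify-instance-attribute --instance-id <instance id> --instance-type "{\"Value\": \"t2.large\"}"
aws ec2 start-instances --instance-ids <instance id>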

Show available subnets

aws ec2 describe-subnets

CodeBuild

Pretty decent, relatively up-to-date tutorial on using CodeBuild and CodeCommit to autobuild AMIs: https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer/

If you want to use the encrypted-AMI gist mentioned above, be sure to specify aws_region, aws_vpc, aws_subnet, and ssh_username in the variables section.

CodeCommit

You like the idea of CodeCommit? You know, having git repos that are accessible via IAM?

How about using it on your EC2 instances without needing to store credentials? Really cool idea, right? Bet it's pretty easy to set up too, huh? Ha!

Well, it isn't, unless you know what to do; then it actually is. Here we go:

Build the proper IAM role

  1. Login to the UI
  2. Click on IAM
  3. Click Roles
  4. Click Create role
  5. Click EC2, then click Next: Permissions
  6. Search for CodeCommit, check the box next to AWSCodeCommitReadOnly
  7. Click Next: Tags
  8. Give it some tags if you'd like, click Next: Review
  9. Specify a Role name, like CodeCommit-Read
  10. Click Create role
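If you'd rather do this from the CLI, something like the following should be equivalent (a sketch; the trust policy file name is mine, and AWSCodeCommitReadOnly is the same managed policy checked above):

# Trust policy that lets EC2 assume the role
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }]
}
EOF
aws iam create-role --role-name CodeCommit-Read --assume-role-policy-document file://ec2-trust.json
aws iam attach-role-policy --role-name CodeCommit-Read --policy-arn arn:aws:iam::aws:policy/AWSCodeCommitReadOnly
# EC2 attaches roles via an instance profile
aws iam create-instance-profile --instance-profile-name CodeCommit-Read
aws iam add-role-to-instance-profile --instance-profile-name CodeCommit-Read --role-name CodeCommit-Read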

Now we're cooking. Let's test it out by building an instance, making sure to assign it the CodeCommit-Read IAM role. You can figure this part out.

Cloning into a repo

Once you've got a working instance:

  1. SSH into the instance
  2. sudo su
  3. Install the awscli with pip: pip install awscli
  4. Run this command, changing the region to match the one you're working with:
git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.helper '!aws --profile default codecommit credential-helper $@'
  5. Run this command, again changing the region as needed: git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.UseHttpPath true
  6. Set the default region for the CLI to match: aws configure set region us-west-2

At this point, you should be able to clone your repo: git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/GREATREPONAME

Resources:
https://jameswing.net/aws/codecommit-with-ec2-role-credentials.html
https://stackoverflow.com/questions/46164223/aws-pull-latest-code-from-codecommit-on-ec2-instance-startup - This got me to the site above, but its proposed solution was incomplete.

Integrating this in with CodeBuild

To get this to work with CodeBuild for automated and repeatable builds, I needed to do a few other things, primarily taking advantage of the Parameter Store. When I was initially trying to build, my buildspec.yml looked something like this (basically emulating the one from the AWS blog post linked above):

---
version: 0.2

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.1.1/packer_1.1.1_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating kali.json"
      - ./packer validate kali.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, kali.json"
      - ./packer build kali.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

However, I was getting an obscure error message about authentication, and after several hours of messing around with IAM roles I still didn't have any luck. Eventually I decided to try throwing a "parameter" in for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. This worked great, but whenever I retried the build, I ran into the same issue as before. To fix it, I had to modify the buildspec.yml to look like this (the parameter store paths may vary depending on what you named yours):

---
version: 0.2

env:
  parameter-store:
    AWS_ACCESS_KEY_ID: "/CodeBuild/AWS_ACCESS_KEY_ID"
    AWS_SECRET_ACCESS_KEY: "/CodeBuild/AWS_SECRET_ACCESS_KEY"

phases:
  pre_build:
    commands:
      - echo "Installing HashiCorp Packer..."
      - curl -qL -o packer.zip https://releases.hashicorp.com/packer/1.1.1/packer_1.1.1_linux_amd64.zip && unzip packer.zip
      - echo "Installing jq..."
      - curl -qL -o jq https://stedolan.github.io/jq/download/linux64/jq && chmod +x ./jq
      - echo "Validating kali.json"
      - ./packer validate kali.json
  build:
    commands:
      ### HashiCorp Packer cannot currently obtain the AWS CodeBuild-assigned role and its credentials
      ### Manually capture and configure the AWS CLI to provide HashiCorp Packer with AWS credentials
      ### More info here: https://github.com/mitchellh/packer/issues/4279
      - echo "Configuring AWS credentials"
      - curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
      - aws configure set region $AWS_REGION
      - aws configure set aws_access_key_id `./jq -r '.AccessKeyId' aws_credentials.json`
      - aws configure set aws_secret_access_key `./jq -r '.SecretAccessKey' aws_credentials.json`
      - aws configure set aws_session_token `./jq -r '.Token' aws_credentials.json`
      - echo "Building HashiCorp Packer template, kali.json"
      - ./packer build kali.json
  post_build:
    commands:
      - echo "HashiCorp Packer build completed on `date`"

At this point, everything works consistently, with the IAM role mentioned previously specified in the Packer template (this is a snippet):

"variables": {
    "iam_role": "CodeCommit-Read"
  },

  "builders": [{
    "iam_instance_profile": "{{user `iam_role` }}",
  }],

SSM

Get the information for a particular parameter: aws ssm get-parameter --name <nameofparam>. Note that this returns the encrypted value if the parameter is a SecureString; add --with-decryption to get the plaintext value.
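Parameters like the /CodeBuild/* ones referenced in the buildspec above can be created with put-parameter. A sketch of creating and reading one (the value is a placeholder):

aws ssm put-parameter --name "/CodeBuild/AWS_ACCESS_KEY_ID" --type SecureString --value "<access key id>"
aws ssm get-parameter --name "/CodeBuild/AWS_ACCESS_KEY_ID" --with-decryption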

Miscellaneous

Encrypt your pem file:

openssl rsa -des3 -in key.pem -out encrypted-key.pem
# Enter the pass phrase you've selected
mv encrypted-key.pem key.pem
chmod 400 key.pem

Remove the encryption:

openssl rsa -in key.pem -out key.open.pem
# Enter the pass phrase you've selected
mv key.open.pem key.pem

Resource: https://security.stackexchange.com/questions/59136/can-i-add-a-password-to-an-existing-private-key

Set up aws cli with pipenv on OSX

https://duseev.com/articles/perfect-aws-cli-setup/

S3

List bucket contents

aws s3 ls s3://target/

Running aws s3 ls with no arguments will list all of your buckets instead.

Copy file down

aws s3 cp s3://target/file.html file.html
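Copying up and syncing work the same way (the destination paths here are examples):

aws s3 cp file.html s3://target/file.html
aws s3 sync ./local-dir s3://target/backups/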

Cheatsheet

https://linuxacademy.com/blog/amazon-web-services-2/aws-s3-cheat-sheet/

Set up S3 IAM for backup/restore

Storing AWS credentials on an instance just to access an S3 bucket can be a bad idea. Let's talk about what we need to do in order to back up to and restore from an S3 bucket safely:

Create Policy

  1. Go to IAM
  2. Policies
  3. Create Policy
  4. Use the Policy Generator, or copy and paste JSON from the interwebs into Create Your Own Policy. This is the one I used:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::techvomit"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket name>/*"
            ]
        }
    ]
}
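If you'd prefer to create the policy from the CLI, save the JSON above to a file and run something like this (the policy name and file name are mine):

aws iam create-policy --policy-name s3-backup-restore --policy-document file://policy.json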

Create a Role

  1. Go to Roles in IAM
  2. Click Create role
  3. Select EC2
  4. Select EC2 again and click Next: Permissions
  5. Find the policy you created previously
  6. Click Next: Review
  7. Give the Role a name and a description, click Create role

Assign the role to your instance

This will be the instance that houses the service needing to back up to and restore from your S3 bucket.

  1. In EC2, if the instance is already created, right click it, Instance Settings, Attach/Replace IAM Role
  2. Specify the IAM role you created previously, click Apply.
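The same attachment can be done from the CLI. A sketch, assuming the instance profile has the same name as the role (which is what the console does when it creates a role for EC2):

aws ec2 associate-iam-instance-profile --instance-id <instance id> --iam-instance-profile Name=<role name>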

Set up automated expiration of objects

This will ensure that backups don't stick around longer than they need to. You can also set up rules to transition them to long-term storage during this process, but we're not going to cover that here.
From the bucket overview screen:

  1. Click Management
  2. Click Add lifecycle rule
  3. Specify a name, click Next
  4. Click Next
  5. Check Current version and Previous versions
  6. Specify a desired number of days to expiration for both the current version and the previous versions, click Next
  7. Click Save
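The same expiration rule can be set from the CLI with the s3api commands. A sketch, assuming a 30-day expiration and a rule ID of my own choosing:

# Lifecycle config expiring current and previous object versions after 30 days
cat > lifecycle.json <<'EOF'
{
  "Rules": [{
    "ID": "expire-backups",
    "Status": "Enabled",
    "Filter": {},
    "Expiration": {"Days": 30},
    "NoncurrentVersionExpiration": {"NoncurrentDays": 30}
  }]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket <bucket name> --lifecycle-configuration file://lifecycle.json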