In this comprehensive guide, we’ll walk through setting up HashiCorp Packer to build AMIs over AWS Systems Manager (SSM), using two different approaches: one with Ansible and one without. We’ll also cover the setup of the necessary IAM instance profile.
Prerequisites
Before we begin, ensure you have the following:
- An AWS account with appropriate permissions
- AWS CLI installed and configured
- AWS Session Manager plugin installed on the machine running Packer (install sketch after this list)
- An S3 bucket for storing files transferred by Packer when using SSM
- Packer installed
- Ansible installed (for the Ansible example)
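Packer’s session_manager interface and Ansible’s aws_ssm connection both rely on AWS’s Session Manager plugin being installed on the machine running the build. On Debian/Ubuntu, the install looks roughly like this (the URL follows the pattern in AWS’s documentation; check the docs for your platform and architecture):

# Download and install the Session Manager plugin (Debian/Ubuntu, x86_64)
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" -o session-manager-plugin.deb
sudo dpkg -i session-manager-plugin.deb

# Verify the installation
session-manager-plugin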
Setting Up SSM and IAM Instance Profile
AWS Systems Manager (SSM) is a management tool that provides a unified user interface to view and control your AWS infrastructure. To use SSM with Packer, we need to set up an IAM instance profile with the necessary permissions.
Here’s a bash script that creates the required IAM role, attaches the necessary policies, and creates an instance profile:
#!/bin/bash
set -euo pipefail

# Set variables
PROFILE_NAME="PackerInstanceProfile"
ROLE_NAME="PackerInstanceRole"
BUCKET_NAME="my-awesome-bucket"

# Create an IAM role with a trust policy that allows EC2 to assume it
aws iam create-role --role-name "$ROLE_NAME" --assume-role-policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}'

# Attach the AmazonSSMManagedInstanceCore managed policy
aws iam attach-role-policy --role-name "$ROLE_NAME" --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# Add an inline policy granting access to the S3 bucket
aws iam put-role-policy --role-name "$ROLE_NAME" --policy-name "S3AccessPolicy" --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::'"$BUCKET_NAME"'",
        "arn:aws:s3:::'"$BUCKET_NAME"'/*"
      ]
    }
  ]
}'

# Create the instance profile
aws iam create-instance-profile --instance-profile-name "$PROFILE_NAME"

# Add the role to the instance profile
aws iam add-role-to-instance-profile --instance-profile-name "$PROFILE_NAME" --role-name "$ROLE_NAME"

# Wait for IAM changes to propagate
sleep 60

echo "Instance profile $PROFILE_NAME created successfully with required permissions."
Save this script as setup_iam_profile.sh and run it with:
chmod +x setup_iam_profile.sh
./setup_iam_profile.sh
This script does the following:
- Creates an IAM role named “PackerInstanceRole”
- Attaches the AmazonSSMManagedInstanceCore managed policy to the role
- Adds an inline policy granting S3 access to the role
- Creates an instance profile named “PackerInstanceProfile”
- Adds the role to the instance profile
Make sure to replace “my-awesome-bucket” with the actual name of your S3 bucket.
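Once the script completes, you can confirm that everything was created as expected (names assume the defaults above):

# Confirm the instance profile exists and contains the role
aws iam get-instance-profile --instance-profile-name PackerInstanceProfile

# List the managed and inline policies on the role
aws iam list-attached-role-policies --role-name PackerInstanceRole
aws iam list-role-policies --role-name PackerInstanceRole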
Example 1: Packer with SSM (Without Ansible)
Now, let’s set up Packer to use SSM without Ansible.
Packer Template (test.pkr.hcl)
#########################################################################################
# test packer template
#
# Author: Jayson Grace <jayson.e.grace@gmail.com>
#
# Description: Create an Ubuntu AMI provisioned with a basic bash script
# and SSM support.
#########################################################################################
locals {
  timestamp = formatdate("YYYY-MM-DD-hh-mm-ss", timestamp())
}

source "amazon-ebs" "ubuntu" {
  ami_name      = "${var.blueprint_name}-${local.timestamp}"
  instance_type = var.instance_type
  region        = var.ami_region

  source_ami_filter {
    filters = {
      name                = "${var.os}/images/*${var.os}-${var.os_version}-${var.ami_arch}-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] // Canonical's owner ID for Ubuntu images
  }

  communicator   = var.communicator
  run_tags       = var.run_tags
  user_data_file = var.user_data_file

  #### SSH Configuration ####
  ssh_username             = var.ssh_username
  ssh_file_transfer_method = var.communicator == "ssh" ? "sftp" : null
  ssh_timeout              = var.communicator == "ssh" ? var.ssh_timeout : null

  #### SSM and IP Configuration ####
  associate_public_ip_address = true
  ssh_interface               = "session_manager"
  iam_instance_profile        = var.iam_instance_profile

  tags = {
    Name      = "${var.blueprint_name}-${local.timestamp}"
    BuildTime = local.timestamp
  }
}

build {
  sources = ["source.amazon-ebs.ubuntu"]

  provisioner "shell" {
    inline = [
      "mkdir -p ${var.pkr_build_dir}",
    ]
  }

  provisioner "file" {
    source      = var.provision_script_path
    destination = "${var.pkr_build_dir}/provision.sh"
  }

  provisioner "shell" {
    environment_vars = [
      "PKR_BUILD_DIR=${var.pkr_build_dir}",
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ]
    inline = [
      "chmod +x ${var.pkr_build_dir}/provision.sh",
      "${var.pkr_build_dir}/provision.sh"
    ]
  }
}
Variables File (variables.pkr.hcl)
The template above references a number of input variables, each of which must be declared in variables.pkr.hcl.
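If you don’t already have this file, here is a minimal sketch covering every variable the two templates use; the defaults are assumptions for illustration, so adjust them to your environment:

variable "blueprint_name" {
  type        = string
  description = "Name prefix for the resulting AMI."
}

variable "instance_type" {
  type        = string
  description = "EC2 instance type used for the build."
  default     = "t3.micro"
}

variable "ami_region" {
  type        = string
  description = "AWS region to build the AMI in."
}

variable "os" {
  type        = string
  description = "OS name used in the source AMI filter."
  default     = "ubuntu"
}

variable "os_version" {
  type        = string
  description = "OS version used in the source AMI filter."
  default     = "22.04"
}

variable "ami_arch" {
  type        = string
  description = "Architecture used in the source AMI filter."
  default     = "amd64"
}

variable "communicator" {
  type        = string
  description = "Packer communicator to use."
  default     = "ssh"
}

variable "run_tags" {
  type        = map(string)
  description = "Tags applied to the build instance."
  default     = {}
}

variable "user_data_file" {
  type        = string
  description = "Optional user data file passed to the build instance."
  default     = null
}

variable "ssh_username" {
  type        = string
  description = "Username Packer connects as."
  default     = "ubuntu"
}

variable "ssh_timeout" {
  type        = string
  description = "How long to wait for the connection to come up."
  default     = "10m"
}

variable "iam_instance_profile" {
  type        = string
  description = "IAM instance profile attached to the build instance."
  default     = "PackerInstanceProfile"
}

variable "pkr_build_dir" {
  type        = string
  description = "Directory on the build instance used for provisioning files."
  default     = "/tmp/packer-build"
}

variable "provision_script_path" {
  type        = string
  description = "Local path to the provisioning script (non-Ansible example)."
  default     = ""
}

variable "provision_repo_path" {
  type        = string
  description = "Local path to the provisioning repo (Ansible example)."
  default     = ""
}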
Running the Packer Build (Without Ansible)
To run the Packer build without Ansible, run the following from the directory containing test.pkr.hcl and variables.pkr.hcl (pointing packer build at the directory rather than a single file ensures the variables file is loaded as well):

packer build \
  -var instance_type=t3.micro \
  -var blueprint_name=test \
  -var provision_script_path=/path/to/your/provision.sh \
  -var ami_region=$AWS_DEFAULT_REGION \
  .
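The build expects a script at provision_script_path, which the file provisioner uploads and the final shell provisioner executes. Any executable script works; here is a trivial placeholder (the package commands are illustrative, not part of the original setup):

#!/bin/bash
# provision.sh - placeholder provisioning logic
set -euo pipefail

# PKR_BUILD_DIR is exported by the shell provisioner in the template
echo "Provisioning from ${PKR_BUILD_DIR}"

# Illustrative example: install a utility on the Ubuntu build instance
sudo apt-get update -y
sudo apt-get install -y curl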
Example 2: Packer with SSM and Ansible
Now, let’s look at an example that incorporates Ansible into our Packer and SSM setup.
Packer Template (test_ansible.pkr.hcl)
#########################################################################################
# test packer template with Ansible
#
# Author: Jayson Grace <jayson.e.grace@gmail.com>
#
# Description: Create an Ubuntu AMI provisioned with Ansible and SSM support.
#########################################################################################
locals {
  timestamp = formatdate("YYYY-MM-DD-hh-mm-ss", timestamp())
}

source "amazon-ebs" "ubuntu" {
  ami_name      = "${var.blueprint_name}-${local.timestamp}"
  instance_type = var.instance_type
  region        = var.ami_region

  source_ami_filter {
    filters = {
      name                = "${var.os}/images/*${var.os}-${var.os_version}-${var.ami_arch}-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
    most_recent = true
    owners      = ["099720109477"] // Canonical's owner ID for Ubuntu images
  }

  communicator   = var.communicator
  run_tags       = var.run_tags
  user_data_file = var.user_data_file

  #### SSH Configuration ####
  ssh_username             = var.ssh_username
  ssh_file_transfer_method = var.communicator == "ssh" ? "sftp" : null
  ssh_timeout              = var.communicator == "ssh" ? var.ssh_timeout : null

  #### SSM and IP Configuration ####
  associate_public_ip_address = true
  ssh_interface               = "session_manager"
  iam_instance_profile        = var.iam_instance_profile

  tags = {
    Name      = "${var.blueprint_name}-${local.timestamp}"
    BuildTime = local.timestamp
  }
}

build {
  sources = ["source.amazon-ebs.ubuntu"]

  provisioner "file" {
    source      = "ansible.cfg"
    destination = "/tmp/ansible.cfg"
  }

  provisioner "ansible" {
    playbook_file           = "${var.provision_repo_path}/playbooks/workstation/workstation.yml"
    inventory_file_template = "{{ .HostAlias }} ansible_host={{ .ID }} ansible_user={{ .User }} ansible_ssh_common_args='-o StrictHostKeyChecking=no -o ProxyCommand=\"sh -c \\\"aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters portNumber=%p\\\"\"'\n"
    user                    = var.ssh_username
    galaxy_file             = "${var.provision_repo_path}/requirements.yml"
    ansible_env_vars = [
      "ANSIBLE_CONFIG=/tmp/ansible.cfg",
      "AWS_DEFAULT_REGION=${var.ami_region}",
      "PACKER_BUILD_NAME={{ build_name }}",
    ]
    extra_arguments = [
      "--connection", "packer",
      "-e", "ansible_aws_ssm_bucket_name=${var.ansible_aws_ssm_bucket_name}",
      "-e", "ansible_connection=aws_ssm",
      "-vvv",
    ]
  }
}
Updated Variables File (variables.pkr.hcl)
The variables file remains the same as in the previous example, with one addition:
variable "ansible_aws_ssm_bucket_name" {
type = string
description = "Name of the S3 bucket to store ansible artifacts."
}
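The galaxy_file referenced by the Ansible provisioner also needs to exist in your provisioning repo. At a minimum it must pull in the community.aws collection, which provides the aws_ssm connection plugin (a minimal sketch; add whatever roles and collections your playbooks actually use):

---
collections:
  - name: amazon.aws # dependency of community.aws
  - name: community.aws # provides the aws_ssm connection plugin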
Ansible Playbook (workstation.yml)
---
- name: Workstation
  become: true # Necessary for SSM to work with Ansible
  hosts: all
  remote_user: root # Set this to an existing elevated user on the target system or Ansible will fail
  vars:
    ansible_connection: community.aws.aws_ssm
    ansible_aws_ssm_bucket_name: my-awesome-bucket
    ansible_aws_ssm_region: us-west-1
    # When the S3 bucket isn't in the same region as the instance,
    # explicitly setting the addressing style to 'virtual' may be necessary:
    # https://repost.aws/knowledge-center/s3-http-307-response
    ansible_aws_ssm_s3_addressing_style: virtual
  tasks:
    - name: Get primary group name of the current user
      ansible.builtin.command: id -gn "{{ ansible_user_id }}"
      changed_when: false
      register: primary_group_name

    - name: Check if user exists
      ansible.builtin.command: id "{{ ansible_user_id }}"
      register: user_exists
      changed_when: false
      failed_when: user_exists.rc not in [0, 1]

    - name: Check if user home directory exists
      ansible.builtin.stat:
        path: "{{ '/Users' if ansible_distribution == 'MacOSX' else '/home' }}/{{ ansible_user_id }}"
      register: home_dir_exists
      changed_when: false
Creating the ansible.cfg File
Create an ansible.cfg file with the configuration needed to set the remote temporary directory:
[defaults]
remote_tmp = /tmp/.ansible/tmp
Running the Packer Build (With Ansible)
To run the Packer build with Ansible, run the following from the directory containing test_ansible.pkr.hcl and its variables.pkr.hcl (this assumes each example lives in its own directory, so the two templates don’t collide):

packer build \
  -var instance_type=t3.micro \
  -var ansible_aws_ssm_bucket_name=your-s3-bucket-name \
  -var blueprint_name=test \
  -var provision_repo_path=/path/to/your/ansible/repo \
  -var ami_region=$AWS_DEFAULT_REGION \
  .
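If the build hangs while establishing the session, a common culprit is the build instance never registering with SSM (for example, because the instance profile is missing or lacks AmazonSSMManagedInstanceCore). While the build instance is still up, you can check its registration with the AWS CLI:

# The build instance should appear here with a PingStatus of Online
aws ssm describe-instance-information \
  --query 'InstanceInformationList[].[InstanceId,PingStatus]' \
  --output table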
Conclusion
We’ve explored two approaches to using Packer with AWS Systems Manager (SSM): one without Ansible and one with Ansible integration. Both methods allow for secure, automated AMI creation without opening inbound SSH ports or manually managing SSH keys.
The non-Ansible approach is simpler and relies on shell scripts for provisioning. It’s suitable for straightforward setups or when Ansible isn’t required.
The Ansible-integrated approach offers more powerful and flexible provisioning capabilities. It’s ideal for complex setups or when you want to leverage Ansible’s extensive module library and playbook structure.
By using the provided IAM instance profile setup script, you ensure that your EC2 instances have the necessary permissions to communicate with SSM and access the required S3 bucket.