Installation on Ubuntu 20.04

curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install packer

# Verify the install
packer version

Resource: https://learn.hashicorp.com/tutorials/packer/getting-started-install

File transfer

This snippet copies a directory called scripts into the /tmp directory of the AMI being built. It then runs ls -l on /tmp from within the build so we can confirm the directory transferred as expected.

"provisioners": [{
    "type": "file",
    "source": "./scripts",
    "destination": "/tmp/"
  },
  {
    "type": "shell",
    "inline": ["ls -l /tmp"]
  }]
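A local stand-in for what those two provisioners do (the directory name scripts is just the example from above; the mktemp scratch directories are stand-ins for the build instance's filesystem):

```shell
# Local stand-in for the file + shell provisioner pair: copy a scripts/
# directory into a scratch "remote /tmp", then list it to confirm the transfer.
workdir=$(mktemp -d)
mkdir -p "$workdir/scripts"
echo 'echo hello' > "$workdir/scripts/example.sh"

dest=$(mktemp -d)        # stands in for /tmp on the instance
cp -r "$workdir/scripts" "$dest/"
ls -l "$dest/scripts"
```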

Run multiple shell script provisioners as root

"provisioners": [{
    "type": "shell",
    "execute_command": "sudo -u root /bin/bash -c '{{.Path}}'",
    "scripts": [
      "scripts/script1.sh",
      "scripts/script2.sh"
    ]
  }]
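What execute_command does can be sketched locally: Packer uploads each script to the instance, substitutes its remote path for {{.Path}}, and runs the resulting command line. A minimal simulation (sudo -u root is dropped here so the sketch runs as any user):

```shell
# Stand-in for one uploaded script
script=$(mktemp)
printf '#!/bin/bash\necho "script ran as user: $(whoami)"\n' > "$script"
chmod +x "$script"

# Packer would run: sudo -u root /bin/bash -c '<remote path>'
/bin/bash -c "$script"
```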

Specify IAM Build Role

"variables": {
"iam_role": "SomeRole"
},

"builders": [{
"iam_instance_profile": "{{user `iam_role` }}",
}],

Resource: https://stackoverflow.com/questions/36311048/how-to-use-aws-roles-with-packer-to-create-amis

Ubuntu 20.04 template

{
  "description": "some awesome AMI image that will be used to do great things.",
  "variables": {
    "subnet_id": "subnet-id-goes-here",
    "security_group_id": "sg-id-goes-here",
    "vpc_id": "vpc-id-goes-here",
    "instance_type": "t2.small",
    "region": "us-west-2",
    "source_ami": "ami-source-id-goes-here",
    "ssh_username": "ubuntu"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "ami_name": "name_for_ami",
      "ami_description": "Some kind of AMI",
      "instance_type": "{{user `instance_type`}}",
      "force_deregister": true,
      "force_delete_snapshot": true,
      "region": "{{user `region`}}",
      "vpc_id": "{{user `vpc_id`}}",
      "subnet_id": "{{user `subnet_id`}}",
      "security_group_id": "{{user `security_group_id`}}",
      "source_ami": "{{user `source_ami`}}",
      "ssh_username": "{{user `ssh_username`}}",
      "associate_public_ip_address": true,
      "tags": {
        "Name": "some awesome ubuntu based image",
        "OS": "Ubuntu",
        "OSVER": "20.04"
      }
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "source_file",
      "destination": "/place/to/put/source/file/on/target/system"
    },
    {
      "type": "shell",
      "script": "scripts/provision_system.sh"
    }
  ]
}

Terraform Ubuntu 20.04 Template

The idea is to integrate your Packer build into your orchestration process, letting Terraform supply a number of the variables (region, VPC, subnet, source AMI) that you would otherwise hard-code.

ubuntu_terra.json:

{
  "description": "some awesome AMI image that will be used to do great things.",
  "variables": {
    "instance_type": "t2.small",
    "ssh_username": "ubuntu"
  },
  "builders": [
    {
      "type": "amazon-ebs",
      "ami_name": "name_for_ami",
      "ami_description": "Some kind of AMI",
      "instance_type": "{{user `instance_type`}}",
      "force_deregister": true,
      "force_delete_snapshot": true,
      "region": "{{user `region`}}",
      "vpc_id": "{{user `vpc_id`}}",
      "subnet_id": "{{user `subnet_id`}}",
      "security_group_id": "{{user `security_group_id`}}",
      "source_ami": "{{user `source_ami`}}",
      "ssh_username": "{{user `ssh_username`}}",
      "associate_public_ip_address": true,
      "tags": {
        "Name": "some awesome ubuntu based image",
        "OS": "Ubuntu",
        "OSVER": "20.04"
      }
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "source_file",
      "destination": "/place/to/put/source/file/on/target/system"
    },
    {
      "type": "shell",
      "script": "scripts/provision_system.sh"
    }
  ]
}

main.tf:

resource "null_resource" "removeAmiReady" {
  provisioner "local-exec" {
    command: "rm packer.done"
    when   : destroy
  }
}

resource "null_resource" "packer" {
  triggers: {
    ami_name: "name-for-your-AMI"
    # This will rebuild each time you run - remove this line if that's undesirable; I tend to use this for debugging
    build_number: "${timestamp()}"
  }

  provisioner "local-exec" {
    working_dir: "../../packer"
    command: <<EOF
RED='\033[0;31m' # Red Text
GREEN='\033[0;32m' # Green Text
BLUE='\033[0;34m' # Blue Text
NC='\033[0m' # No Color

packer build \
  -var region=${var.region} \
  -var vpc_id=${aws_vpc.your_vpc.id} \
  -var subnet_id=${aws_subnet.your_subnet.id} \
  -var source_ami=${var.source-ami} \
  -var id=${self.id} \
  # packer output goes to a file
  ubuntu_terra.json | tee packer.log ; \
  if grep -q "Build 'amazon-ebs' finished." packer.log; \
    # Output the AMI ID to ami_id.txt so that we can read and use it later in our terraform code
  then cat packer.log| \
  tail -2 |\
   awk '{print $2}'| xargs > ami_id.txt; fi

if [ $? -eq 0 ]; then
  printf "\n $GREEN Packer Succeeded $NC \n"
else
  printf "\n $RED Packer Failed $NC \n" >&2
  exit 1
fi
EOF
  }
}
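The grep/tail/awk extraction in that heredoc can be exercised locally against a fabricated packer.log. The log content below only approximates Packer's real output format, so treat this as a sketch of the extraction logic, not of the exact log:

```shell
cd "$(mktemp -d)"
# Fabricated log: the final lines mimic the shape of Packer's build summary
cat > packer.log <<'LOG'
Build 'amazon-ebs' finished.

==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:

us-west-2: ami-0abc1234def567890
LOG

if grep -q "Build 'amazon-ebs' finished." packer.log; then
  # Second field of the last two lines (the blank line contributes nothing)
  tail -2 packer.log | awk '{print $2}' | xargs > ami_id.txt
fi
cat ami_id.txt   # -> ami-0abc1234def567890
```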

Deploy with the following commands:

terraform plan -out plan
terraform apply "plan"

Resource: https://austincloud.guru/2020/02/27/building-packer-image-with-terraform/

Verbose mode

PACKER_LOG=1 packer build <your packer file>.json
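PACKER_LOG=1 here is an environment variable scoped to a single command; it does not persist in your shell. The mechanism, demonstrated without packer installed:

```shell
# VAR=value cmd sets VAR only for that one child process
PACKER_LOG=1 sh -c 'echo "inside: PACKER_LOG=$PACKER_LOG"'   # -> inside: PACKER_LOG=1
echo "after:  PACKER_LOG=${PACKER_LOG:-unset}"
```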

Resource: https://github.com/hashicorp/packer/issues/4870


Multiple security groups

This will go under the builders section like so:

  "builders": [{
      "type": "amazon-ebs",
      "security_group_ids": ["sg-FOO", "sg-BAR"],
      ...

Resource: https://github.com/hashicorp/packer/issues/4870


Debug a packer build

This will pause at each step of the run. It will also drop the associated SSH key into the current working directory.

PACKER_FILE=mypackerfile.json
packer build -debug $PACKER_FILE

Resource: https://devops.stackexchange.com/questions/7975/is-it-possible-to-locate-the-temp-keypair-generated-by-packer

Debug provisioner

If you don’t want to step through every stage, you can put a breakpoint provisioner in your Packer template and the build will only pause there. Run the packer command as you normally would (without the -debug flag).

"provisioners": [
    {
      "type": "shell-local",
      "inline": "echo hi"
    },
    {
      "type": "breakpoint",
      "disable": false,
      "note": "this is a breakpoint"
    },
    {
      "type": "shell-local",
      "inline": "echo hi 2"
    }

Resource: https://www.packer.io/docs/provisioners/breakpoint

Inconsistent apt-get behavior on Ubuntu

This can happen because cloud-init hasn’t had a chance to finish before you run apt-get update or apt-get install -y <package>. Resolution: add a shell provisioner that waits for cloud-init to finish:

{
  "type": "shell",
  "inline": [
    "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done"
  ]
},
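The wait loop can be tried locally by pointing it at a file you create yourself; the real marker path /var/lib/cloud/instance/boot-finished only exists on hosts that run cloud-init:

```shell
marker="$(mktemp -d)/boot-finished"
( sleep 1; touch "$marker" ) &   # stand-in for cloud-init finishing

while [ ! -f "$marker" ]; do echo 'Waiting for cloud-init...'; sleep 1; done
echo 'cloud-init finished; safe to run apt-get'
```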

Resource: https://serverfault.com/questions/904080/inconsistent-apt-get-update-behaviour-on-official-ubuntu-aws-ami

Encrypt boot volume

Put this under the builders section to encrypt the boot volume of EC2 instances created from the Packer-built AMI:

"encrypt_boot": true,

Resource: http://site.clairvoyantsoft.com/encrypting-amazon-ec2-boot-volumes-via-packer/

Run repo with ansible playbook

If you have requirements that aren’t met by Packer’s Ansible provisioner, you can do the following:

"provisioners": [{
  "type": "file",
  "source": "ansible",
  "destination": "/tmp"
 },
 {
  "type": "shell",
  "inline": [
   "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
   "sudo apt-get -y autoremove && sudo apt-get clean && sudo apt-get update && sudo apt-get install -y ansible",
   "cd /tmp/ansible && ansible-playbook site.yml -e 'ansible_python_interpreter=/usr/bin/python3'"
  ]
 }]

As an added bonus, this also shows how to run multiple inline bash commands with a single shell provisioner.
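Those inline entries are joined into a single script on the instance, which is why state such as the working directory carries across them and why cd /tmp/ansible && ansible-playbook works as one entry. A local sketch of that joining behavior:

```shell
# Join several "inline"-style commands into one script, as the shell
# provisioner does, then run it
tmpdir=$(mktemp -d)
cat > "$tmpdir/inline.sh" <<'EOS'
cd /tmp
pwd
echo "second command sees the same shell state"
EOS
sh "$tmpdir/inline.sh"
```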

Resource: Run multiple inline commands