Terraform Cheatsheet

Installation on Ubuntu 20.04

sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install terraform
# Verify it works
terraform -v

Resource: https://learn.hashicorp.com/tutorials/terraform/install-cli

Commands

This initializes the working directory and downloads the providers referenced in your terraform code:

terraform init

Resource: https://learn.hashicorp.com/tutorials/terraform/eks

Reconfigure the backend

If you need to reconfigure your backend settings (ignoring any previously saved configuration), run the following:

terraform init -reconfigure
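If you're instead moving state to a different backend and want the existing state copied over, newer Terraform versions (v0.15+) provide a dedicated flag for that:

```shell
terraform init -migrate-state
```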

Run the terraform code

terraform apply

Destroy all terraform resources

terraform destroy
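If you only want to destroy a specific resource rather than everything, the -target flag narrows the operation (the resource address below is hypothetical; use sparingly, as targeting bypasses Terraform's full dependency graph):

```shell
terraform destroy -target=aws_instance.my_system
```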

List all resources

terraform state list

Resource: https://github.com/hashicorp/terraform/issues/12917
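To inspect the attributes of a single resource from that list, use terraform state show (hypothetical resource address below):

```shell
terraform state show aws_instance.my_system
```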

Remove something from state

This removes the packet_device resource named worker from your state; the underlying infrastructure is left untouched:

terraform state rm 'packet_device.worker'

Resource: https://www.terraform.io/docs/cli/commands/state/rm.html

Cause rebuild

terraform taint $RESOURCE_NAME
# example:
terraform taint aws_security_group.allow_all

Resource: https://www.terraform.io/docs/cli/commands/taint.html
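Note that taint is deprecated as of Terraform v0.15.2; the recommended replacement is the -replace planning option, which plans and forces recreation in a single step (resource address reused from the example above):

```shell
terraform apply -replace=aws_security_group.allow_all
```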

Makefile Template

all: init plan apply

init:
	@echo Initializing the terraform project, please wait...
	terraform init

plan:
	@echo Generating an execution plan, please wait...
	terraform plan

apply:
	@echo Deploying terraform code, please wait...
	terraform apply

destroy:
	@echo "Tearing down the entire terraform deployment; expect an 'are you absolutely sure you want to do this?' prompt..."
	terraform destroy

Import existing resources

This particular example will import the OPTIONS method from an API gateway.

Put the following in main.tf:

resource "aws_api_gateway_method" "options_method" {
}

Then run this command to import it:

terraform import aws_api_gateway_method.options_method <api_gateway_id>/<api_resource_id>/OPTIONS

You can find the output by running this command:

terraform show

Another example (import the POST gateway method). Put the following in main.tf:

# POST
resource "aws_api_gateway_method" "post_method" {
}

Command to import:

terraform import aws_api_gateway_method.post_method <api_gateway_id>/<api_resource_id>/POST

One last example (import a stage). Put the following in main.tf:

resource "aws_api_gateway_stage" "<stage_name>" {
}

Command to import:

terraform import aws_api_gateway_stage.<stage_name> <api_gateway_id>/<stage_name>

Example with security group

Terraform code:

resource "aws_security_group" "my_sg" {
}

Command to import:

terraform import aws_security_group.my_sg sg-xxxxxxxxx

To see the changes:

terraform show

AWS

Secrets Manager

Create blank secret:

resource "aws_secretsmanager_secret" "IRCSecrets" {
  name = "irc/client/credentials"
  description = "My IRC client credentials"
}

Resource: https://gist.github.com/anttu/6995f20e641d4f30a6003520f70608b3
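To read the secret back elsewhere in your terraform code once a value has been stored, there's a matching data source (sketch; assumes a secret version already exists for the secret above):

```hcl
data "aws_secretsmanager_secret_version" "irc" {
  secret_id = aws_secretsmanager_secret.IRCSecrets.id
}

# The stored value is then available (as a sensitive string) via:
# data.aws_secretsmanager_secret_version.irc.secret_string
```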

Create IAM role to run on an instance and attach it

iam.tf:

# Policy for role that uses STS to get credentials to access ec2 instances
resource "aws_iam_role" "ec2_iam_role" {
  name               = "ec2_iam_role"
  assume_role_policy = file("iam_role_policy.json")

  tags = {
    Name = "ec2_iam_role"
  }
}

# Group together roles that apply to an instance
resource "aws_iam_instance_profile" "ec2_iam_instance_profile" {
  name = "ec2_iam_instance_profile"
  role = aws_iam_role.ec2_iam_role.name
}

resource "aws_iam_role_policy" "ec2_iam_role_policy" {
  name   = "ec2_iam_role_policy"
  role   = aws_iam_role.ec2_iam_role.id
  policy = file("ec2_iam_role_policy.json")
}

Create iam_role_policy.json to be used to get credentials to access ec2 instances:

{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {
          "Service": "ec2.amazonaws.com"
        },
        "Effect": "Allow",
        "Sid": ""
      }
    ]
}
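If you'd rather not keep the policy in a separate file, the same assume-role policy can be inlined with jsonencode() (a sketch equivalent to the file() approach above):

```hcl
resource "aws_iam_role" "ec2_iam_role" {
  name = "ec2_iam_role"

  # Same document as iam_role_policy.json, built in HCL
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Sid       = ""
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })

  tags = {
    Name = "ec2_iam_role"
  }
}
```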

ec2_iam_role_policy.json - this will vary based on what you want your ec2 instance to do. Here's an example that allows a bunch of logging actions and cloning CodeCommit repos:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:ap-southeast-1:0000:log-group:*",
                "arn:aws:logs:ap-southeast-1:0000:log-group:production:*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "codecommit:Get*",
                "sns:ListSubscriptionsByTopic",
                "lambda:ListFunctions",
                "sns:GetTopicAttributes",
                "codestar-notifications:ListNotificationRules",
                "codecommit:BatchGet*",
                "sns:ListTopics",
                "codecommit:GitPull",
                "codestar-notifications:ListEventTypes",
                "codecommit:EvaluatePullRequestApprovalRules",
                "codestar-notifications:ListTargets",
                "codeguru-reviewer:ListRepositoryAssociations",
                "codeguru-reviewer:ListCodeReviews",
                "codeguru-reviewer:DescribeRepositoryAssociation",
                "iam:ListUsers",
                "codecommit:List*",
                "codecommit:Describe*",
                "codeguru-reviewer:DescribeCodeReview",
                "codecommit:BatchDescribe*"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "codestar-notifications:DescribeNotificationRule",
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "codestar-notifications:NotificationsForResource": "arn:aws:codecommit:*"
                }
            }
        },
        {
            "Sid": "VisualEditor3",
            "Effect": "Allow",
            "Action": [
                "events:DescribeRule",
                "iam:ListSSHPublicKeys",
                "iam:GetSSHPublicKey",
                "codestar-connections:GetConnection",
                "iam:ListServiceSpecificCredentials",
                "events:ListTargetsByRule",
                "iam:ListAccessKeys"
            ],
            "Resource": [
                "arn:aws:codestar-connections:*:*:connection/*",
                "arn:aws:iam::*:user/${aws:username}",
                "arn:aws:events:*:*:rule/codecommit*"
            ]
        },
        {
            "Sid": "VisualEditor4",
            "Effect": "Allow",
            "Action": "codestar-connections:ListConnections",
            "Resource": "arn:aws:codestar-connections:*:*:connection/*"
        }
    ]
}

ec2.tf:

resource "aws_instance" "ec2_node" {
  ami                         = "ami-07dd19a7900a1f049"
  instance_type               = "t3.medium"
  key_name                    = "ec2-key"
  # Enable termination protection
  disable_api_termination     = true
  vpc_security_group_ids      = [aws_security_group.name1.id, aws_security_group.name2.id]
  subnet_id                   = "your_subnet_id"
  associate_public_ip_address = true

  root_block_device {
    volume_size           = 100
    delete_on_termination = true
  }

  tags = {
    Name = "ec2_node"
  }
  iam_instance_profile = aws_iam_instance_profile.ec2_iam_instance_profile.name
}

Resources:
https://adrianhesketh.com/2016/06/27/creating-aws-instance-roles-with-terraform/
https://devopslearning.medium.com/aws-iam-ec2-instance-role-using-terraform-fa2b21488536
https://stackoverflow.com/questions/62953164/create-and-attach-iam-role-to-ec2-using-terraform

Import existing IAM role

  1. Create a directory and run terraform init
  2. Create a placeholder like so:
resource "aws_iam_role" "yourrolename" {
  name = "yourrolename"
  assume_role_policy = "{}"
}
  3. Run this command to import the existing role:
terraform import aws_iam_role.yourrolename <the name of the existing role>
  4. Run terraform show to get the block of terraform code that you'll want to implement

Resource: https://mklein.io/2019/09/30/terraform-import-role-policy/

Create ansible hosts file

ansible_hosts_file_builder.tf:

resource "local_file" "ansible_hosts" {
  content = templatefile("templates/hosts.tmpl",
    {
      # Splat expressions - assumes aws_instance.managed_system uses count,
      # since the template iterates over these as lists
      private-ip = aws_instance.managed_system[*].private_ip,
      public-id  = aws_instance.managed_system[*].id
    }
  )
  filename = "${path.module}/hosts"
}

templates/hosts.tmpl:

[some_group]
%{ for index, ip in private-ip ~}
${ip} ansible_user=ansible ansible_ssh_private_key_file=/home/ubuntu/.ssh/key_file ansible_python_interpreter=/usr/bin/python3 # ${public-id[index]}
%{ endfor ~}

Resource:
https://www.linkbynet.com/produce-an-ansible-inventory-with-terraform

Create security group with instance's public ip

If you need to specify a security group that relies on an instance's public IP address and you don't want to use an EIP, you can do the following:

resource "aws_instance" "my_system" {
  ami                         = var.my_ami
  instance_type               = var.instance_type
  key_name                    = "my-key"
  subnet_id                   = module.vpc.public_subnets[0]
  associate_public_ip_address = true

  root_block_device {
    volume_size           = var.disk_size
    delete_on_termination = true
  }

  tags = {
    Name = "My System"
  }
  vpc_security_group_ids = [ aws_security_group.service_sg.id ]
}

resource "aws_security_group" "service_sg" {
  name    = "my_service"
  description = "Some great description"
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    ipv6_cidr_blocks = ["::/0"]
    cidr_blocks      = ["0.0.0.0/0"]
    description      = "Allow egress everywhere"
  }
  vpc_id      = module.vpc.vpc_id

  tags = {
    Name = "service_sg"
  }
}

resource "aws_security_group_rule" "instance_to_itself" {
  type = "ingress"
  from_port        = 22
  to_port          = 22
  protocol         = "tcp"
  cidr_blocks      = ["${aws_instance.my_system.public_ip}/32"]
  security_group_id = aws_security_group.service_sg.id
}

Resource: https://stackoverflow.com/questions/38246326/cycle-error-when-trying-to-create-aws-vpc-security-groups-using-terraform - discovered aws_security_group_rule from here

Add multiple security groups to instance

resource "aws_instance" "my_system" {
  ami                         = var.my_ami
  iam_instance_profile        = aws_iam_instance_profile.myprofile.name
  instance_type               = var.instance_type
  key_name                    = "my-key"
  subnet_id                   = module.vpc.public_subnets[0]
  associate_public_ip_address = true

  root_block_device {
    volume_size           = var.disk_size
    delete_on_termination = true
  }

  tags = {
    Name = "My System"
  }
  
  vpc_security_group_ids = [
    aws_security_group.sg1.id,
    aws_security_group.sg2.id,
    aws_security_group.sg3.id,
  ]
}

Provide script to instance user-data

Terraform code:

data "template_file" "user_data" {
  template = file("templates/user_data.yaml")
}

resource "aws_instance" "my_system" {
  ami                         = var.my_ami
  iam_instance_profile        = aws_iam_instance_profile.myprofile.name
  instance_type               = var.instance_type
  key_name                    = "my-key"
  subnet_id                   = module.vpc.public_subnets[0]
  associate_public_ip_address = true
  user_data                   = data.template_file.user_data.rendered

  root_block_device {
    volume_size           = var.disk_size
    delete_on_termination = true
  }

  tags = {
    Name = "My System"
  }
  
  vpc_security_group_ids = [
    aws_security_group.sg1.id,
    aws_security_group.sg2.id,
    aws_security_group.sg3.id,
  ]
}

templates/user_data.yaml:

#cloud-config
write_files:
  - path: /root/boot.sh
    content: |
      #!/bin/bash
      # Wait for various functionality to finish spinning up
      sleep 90;
      pushd /root
      # Config to clone code onto the system - this will be facilitated using an instance profile (`aws_iam_instance_profile.myprofile.name` in this case). 
      # See https://techvomit.net/terraform-cheatsheet/#createiamroletorunonaninstanceandattachit for how to create that.
      git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.helper '!aws codecommit credential-helper $@'
      git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.UseHttpPath true
      # Clone code onto the system
      git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/myrepo
      cd myrepo
      # do stuff in the repo
    # Folder ownership and permissions
    owner: root:root
    permissions: '0755'
runcmd:
  - |
    set -x
    (
      while [ ! -f /root/boot.sh ]; do
        sleep 1
      done
      /root/boot.sh
      # Clean up repo once boot.sh has finished
      rm -rf /root/myrepo
      # Delete the cloud-init logs - not necessary but if you want to do it, this is how
      cloud-init clean --logs
    ) &

Resources:
https://www.digitalocean.com/community/questions/cloud-init-change-order-of-module-execution - how to do the runcmd part of it

GCP

GCS Backend

If you want to manage your terraform state with a remote backend (you do if you have multiple people managing the infrastructure), you will need to run a couple of commands before your first terraform init.

Create the bucket you'll be storing the state in:

REGION=us-west1
gsutil mb -p $(gcloud projects list --format="value(project_id)" --filter="yourprojectname") -l $REGION gs://name-of-bucket-to-store-state

Next, enable object versioning to avoid any corruption with your state file:

gsutil versioning set on gs://name-of-bucket-to-store-state

Finally, create a backend.tfvars with the following commands:

echo -e "bucket         = \"$(gsutil ls | grep yourprojectname | awk -F '[/:]' '{print $4}')\"" | tee backend.tfvars
echo -e "prefix         = \"terraform/state\"" | tee -a backend.tfvars

Add this block to your terraform code:

terraform {
  backend "gcs" {}
}

At this point, you can run the following to init your terraform:

terraform init -backend-config backend.tfvars

This will take the variables we defined in the backend.tfvars file we created previously and apply them to the gcs backend in the above terraform code.

From here, feel free to run plan and then apply.

Resources:
https://betterprogramming.pub/effective-ways-of-managing-your-terraform-state-44bc53043d5 - great introduction to the concept of terraform state
https://medium.com/swlh/terraform-securing-your-state-file-f6c4e13f02a9 - walkthrough of how to set things up with gsutil

Packer

Create packer file

You will also need to check whether the AMI already exists and use that to decide whether to rebuild it. This can be done by tracking state in a file or in something like DynamoDB.

packer_builder.tf:

resource "local_file" "ami_name_to_use" {
  content = templatefile("templates/ami_name_to_use.json.tmpl", {
    ami_name = var.ami_name,
    ansible_path = var.ansible_path,
    iam_instance_profile = aws_iam_instance_profile.yourprofile.name,
    instance_type = var.instance_type,
    profile = var.profile,
    region = var.region,
    sg_1 = aws_security_group.sg_1.id,
    sg_2 = aws_security_group.sg_2.id,
    size = var.disk_size,
    source_ami = var.source_ami,
    ssh_username = var.ssh_username,
    subnet_id = module.vpc.public_subnets[0],
    vpc_id = module.vpc.vpc_id,
  })
  
  filename        = "${var.packer_code_path}/ami_name_to_use.json"
  file_permission = "0644"

  provisioner "local-exec" {
    when    = destroy
    command = "rm ${self.filename}"
  }
}

templates/ami_name_to_use.json.tmpl (launch_block_device_mappings holds the volume settings applied when an instance is launched from the AMI):

{
    "description": "Description of the AMI image purpose.",
    "builders": [{
      "ami_name": "${ami_name}",
      "ami_description": "My Awesome AMI",
      "associate_public_ip_address": true,
      "encrypt_boot": true,
      "force_deregister": true,
      "force_delete_snapshot": true,
      "iam_instance_profile": "${iam_instance_profile}",
      "instance_type": "${instance_type}",
      "launch_block_device_mappings": [
          {
              "delete_on_termination": true,
              "device_name": "/dev/sda1",
              "encrypted": true,
              "volume_size": "${size}",
              "volume_type": "gp2"
          }
      ],
      "region": "${region}",
      "security_group_ids": ["${sg_1}", "${sg_2}"],
      "source_ami": "${source_ami}",
      "ssh_username": "${ssh_username}",
      "subnet_id": "${subnet_id}",
      "type": "amazon-ebs",
      "tags": {
        "Name" : "Some name for the AMI",
        "OS":"Ubuntu",
        "OSVER": "20.04"
      },
      "vpc_id": "${vpc_id}"
    }],
    "provisioners": [{
      "type": "file",
      "source": "../ansible-code",
      "destination": "/tmp"
    },
    {
      "type": "shell",
      "inline": [
        "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
        "sudo apt-get -y autoremove && sudo apt-get clean && sudo apt-get update && sudo apt-get install -y ansible",
        "cd /tmp/ansible-code && ansible-galaxy collection install -r requirements.yml && ansible-galaxy install -r requirements.yml && ansible-playbook site.yml -e 'ansible_python_interpreter=/usr/bin/python3'"
      ]
    }]
}

Get output from bash script

The script must write a single JSON object to stdout for this to work, so keep that in mind.

scripts/gen_s3_folder_name.sh:

#!/bin/bash

# Exit if any of the intermediate steps fail
set -e

FOLDER_NAME=$(echo $(date +"%c") | tr -s ' ' | tr ' ' '_')

jq -n --arg folder_name "$FOLDER_NAME" '{"folder":$folder_name}'

Set execute permissions on this script, or Terraform's external data source will fail with a confusing error:

chmod +x scripts/gen_s3_folder_name.sh
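To sanity-check the transformation before wiring the script into terraform, you can run the same pipeline locally on a fixed date string:

```shell
# Same pipeline as the script, but with a fixed input so the result is predictable
FOLDER_NAME=$(echo "Mon Jan  1 00:00:00 2024" | tr -s ' ' | tr ' ' '_')
echo "$FOLDER_NAME"  # Mon_Jan_1_00:00:00_2024
```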

The terraform code that will run this script locally and get the output - example.tf:

data "external" "folder_name" {
  program = [
    "${path.cwd}/scripts/gen_s3_folder_name.sh",
  ]
}

output "s3_folder_name" {
  value = data.external.folder_name.result["folder"]
}

Resources:
https://stackoverflow.com/questions/55592292/using-output-from-bash-script-as-variable-within-terraform - the tf solution
https://registry.terraform.io/providers/hashicorp/external/latest/docs/data-sources/data_source - output json with bash
https://github.com/hashicorp/terraform-provider-external/issues/14 - how to fix the issue that comes from not making the script executable
https://stackoverflow.com/questions/12524437/output-json-from-bash-script/12524510 - old way to output json with bash

Create s3 bucket with folder

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket"
  acl    = "private"

  tags = {
    Name = "My Bucket"
  }
}

resource "aws_s3_bucket_object" "folder1" {
    bucket = "${aws_s3_bucket.my_bucket.id}"
    acl    = "private"
    # Using output from the bash script above
    key    = "${data.external.folder_name.result["folder"]}/"
    # simpler key example:
    #key    = "Folder1/"
}

Resource: https://stackoverflow.com/questions/37491893/how-to-create-a-folder-in-an-amazon-s3-bucket-using-terraform

Ensure public access is not allowed to bucket

resource "aws_s3_bucket" "my_bucket" {
  bucket = "my-bucket"
  acl    = "private"

  tags = {
    Name = "My Bucket"
  }
}

resource "aws_s3_bucket_public_access_block" "build_artifacts" {
  bucket                  = aws_s3_bucket.my_bucket.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

Resource: https://www.edureka.co/community/84360/how-to-block-public-access-to-s3-bucket-using-terraform

Debugging

Run this command to enable detailed logging:

export TF_LOG=trace

If you just want debug output:

export TF_LOG=debug
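Logs go to stderr by default; to capture them in a file instead, set TF_LOG_PATH alongside TF_LOG (both are documented terraform environment variables):

```shell
# Write the detailed logs to a file instead of stderr
export TF_LOG=trace
export TF_LOG_PATH=./terraform.log
```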

Resources:
https://www.terraform.io/docs/cli/config/environment-variables.html
https://stackoverflow.com/questions/59583711/error-launching-source-instance-unauthorizedoperation-you-are-not-authorized-t - debug

Missing IAM Permissions for CodeBuild Service Role

A great way to approach this is to review the CloudWatch logs associated with your run and filter on "is not authorized to perform".

Resource: https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html

Create secretsmanager secret and set a secret

resource "aws_secretsmanager_secret" "codebuild_credentials" {
  name = "codebuild_credentials"
  description = "Codebuild credentials"
}

resource "aws_secretsmanager_secret_version" "codebuild_credentials" {
  secret_id     = "${aws_secretsmanager_secret.codebuild_credentials.id}"
  secret_string = jsonencode({"access_key" = aws_iam_access_key.codebuild.id, "secret_access_key" = aws_iam_access_key.codebuild.secret})
}

Resource: https://github.com/rhythmictech/terraform-aws-secretsmanager-keypair/blob/master/main.tf - how to create an SSH key and upload it to secrets manager

TF gitignore file

Template from GitHub: https://github.com/github/gitignore/blob/master/Terraform.gitignore

Template from Hashicorp: https://github.com/hashicorp/terraform-guides/blob/master/.gitignore

Read env var in terraform

In bash (note that terraform only reads environment variables prefixed with TF_VAR_; the variable name is whatever comes after the prefix):

export TF_VAR_NAME=bla

In terraform:

variable "NAME" {
  type = string
}
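To confirm the value actually made it into terraform, you can echo it back through an output (sketch, assuming the variable declaration above):

```hcl
output "name_from_env" {
  value = var.NAME
}
```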

Resource: https://stackoverflow.com/questions/36629367/getting-an-environment-variable-in-terraform-configuration