Installation on Ubuntu 20.04
sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] \
https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install -y terraform
# Verify it works
terraform -v
Resource: https://learn.hashicorp.com/tutorials/terraform/install-cli
Commands
Downloads and installs the providers (and modules) referenced in your terraform code and configures the backend:
terraform init
Resource: https://learn.hashicorp.com/tutorials/terraform/eks
Reconfigure state
If you need to reconfigure your backend, run the following:
terraform init -reconfigure
Run the terraform code
terraform apply
Destroy all terraform resources
terraform destroy
List all resources
terraform state list
Resource: https://github.com/hashicorp/terraform/issues/12917
Remove something from state
This will remove the packet_device called worker from your existing state:
terraform state rm 'packet_device.worker'
Resource: https://www.terraform.io/docs/cli/commands/state/rm.html
Remove every resource of a given type from state
RESOURCE='firewall_rule'
terraform state list | grep "${RESOURCE}" | xargs -I {} terraform state rm {}
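Before running a destructive pipeline like the one above, it can be worth previewing exactly what xargs will execute. A minimal sketch of the same pattern, using printf to stand in for `terraform state list` output (the resource addresses here are made up):

```shell
# Simulated `terraform state list` output; in practice, pipe the real command instead of printf.
printf '%s\n' \
  'aws_instance.web' \
  'firewall_rule.allow_http' \
  'firewall_rule.allow_ssh' |
  grep 'firewall_rule' |
  xargs -I {} echo terraform state rm {}
```

Once the echoed commands look right, drop the `echo` to actually run them.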
Trigger rebuild
terraform taint $RESOURCE_NAME
# example:
terraform taint aws_security_group.allow_all
Resource: https://www.terraform.io/docs/cli/commands/taint.html
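Note that taint is deprecated in newer Terraform releases (v0.15.2+) in favor of the -replace plan option, which achieves the same rebuild as part of a normal plan/apply:

```shell
terraform apply -replace="aws_security_group.allow_all"
```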
Terragrunt
Init
There is no need for init with terragrunt since auto-init is on by default: https://terragrunt.gruntwork.io/docs/features/auto-init/
List all resources in the terragrunt state
terragrunt state list
Remove resource from state
This example removes the aws_auth config map resource from the state:
terragrunt state rm 'module.eks_blueprints_addons.kubernetes_config_map.aws_auth[0]'
Remove all matched resources from state
This example removes all resources that match the module.eks_blueprints_addons pattern from the state:
terragrunt state list | grep 'module\.eks_blueprints_addons\..*' | \
while read -r line; do
terragrunt state rm -lock=false "$line"
done
Remove all module resources from state
terragrunt state list | grep 'module.eks_blueprints_addons' | while read -r line ; do
terragrunt state rm "$line"
done
Use PAT
In terragrunt.hcl for a module (the PAT is a GitHub Personal Access Token read from the environment):
locals {
pat = get_env("PAT")
}
terraform {
source = "git::https://${local.pat}@github.com/username/private-repo//?ref=main"
}
Avoid annoying terragrunt warning for modules without submodules
terraform {
source = "git::git@github.com:terraform-aws-modules/terraform-aws-vpc.git//.?ref=v3.7.0"
}
Resource: https://githubmemory.com/repo/gruntwork-io/terragrunt/issues/1675
Yes to all Terragrunt prompts
Pass the --terragrunt-non-interactive flag, e.g.:
terragrunt apply --terragrunt-non-interactive
Debugging Terragrunt remote backend
terragrunt init --terragrunt-non-interactive --terragrunt-log-level debug --terragrunt-debug
Resources: https://terragrunt.gruntwork.io/docs/features/debugging/
Delete resource out of terragrunt state
# List state items
terragrunt state list
# Delete state item with name module.eks_blueprints.kubernetes_config_map.aws_auth[0]
terragrunt state rm 'module.eks_blueprints.kubernetes_config_map.aws_auth[0]'
Resource: https://github.com/terraform-aws-modules/terraform-aws-eks/issues/911
Remove all matching resources from terragrunt state
This particular example will remove all eks resources from the state:
RES=eks
terragrunt state list | grep "$RES" | xargs -I {} terragrunt state rm {}
Force unlock state lock
# Get this value from the associated error message
LOCK_ID=9db590f1-b6fe-c5f2-2678-8804f089deba
terragrunt force-unlock $LOCK_ID
Create module dependencies
dependency "vpc" {
config_path = "../terraform-aws-vpc"
}
Resource: https://terragrunt.gruntwork.io/docs/reference/config-blocks-and-attributes/#dependency
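The dependency block on its own only wires up ordering; the dependency's outputs are consumed via dependency.<name>.outputs. A sketch, assuming the vpc module exposes vpc_id and private_subnets outputs:

```hcl
dependency "vpc" {
  config_path = "../terraform-aws-vpc"
}

inputs = {
  # Assumed output names; use whatever the vpc module actually exports
  vpc_id  = dependency.vpc.outputs.vpc_id
  subnets = dependency.vpc.outputs.private_subnets
}
```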
Deployment Examples
Several ideas for how you can architect a terragrunt deployment:
- https://github.com/antonbabenko/terragrunt-reference-architecture
- https://github.com/cogini/multi-env-deploy/blob/master/terraform
- https://www.reddit.com/r/Terraform/comments/hk6y05/terragrunt_directory_structure_question/
Reference local module
terraform {
source = "${get_parent_terragrunt_dir()}/../modules//terraform-aws-vpc"
}
Resource: https://github.com/antonbabenko/terragrunt-reference-architecture/tree/master/modules/aws-data
Run init -upgrade on all repos with a lock file
#!/bin/bash
# Find directories containing .terraform.lock.hcl
find_dirs() {
find . -type f -name ".terraform.lock.hcl" -exec dirname {} \;
}
# Run terragrunt init -upgrade in the given directory
run_terragrunt_init() {
local dir="$1"
echo "Running terragrunt init -upgrade in $dir"
(cd "$dir" && terragrunt init -upgrade)
}
main() {
local dirs
dirs=$(find_dirs)
for dir in $dirs; do
run_terragrunt_init "$dir"
done
}
main
AWS
Import existing resources into tf file
This particular example will import the OPTIONS method from an API gateway.
Put the following in main.tf:
resource "aws_api_gateway_method" "options_method" {
}
Then run this command to import it:
/usr/local/bin/terraform import aws_api_gateway_method.options_method <api_gateway_id>/<api_resource_id>/OPTIONS
You can find the output by running this command:
terraform show
Another example (import the POST gateway method): put the following in main.tf:
# POST
resource "aws_api_gateway_method" "post_method" {
}
command to import:
/usr/local/bin/terraform import aws_api_gateway_method.post_method <api_gateway_id>/<api_resource_id>/POST
One last example (import stage): put the following in main.tf:
resource "aws_api_gateway_stage" "<stage_name>" {
}
command to import:
/usr/local/bin/terraform import aws_api_gateway_stage.<stage_name> <api_gateway_id>/<stage_name>
Example with security group
Terraform code:
resource "aws_security_group" "my_sg" {
}
Command to import:
terraform import aws_security_group.my_sg sg-xxxxxxxxx
To see the changes:
terraform show
Import existing IAM role
Create a directory and run terraform init.
Create a placeholder like so:
resource "aws_iam_role" "yourrolename" {
  name               = "yourrolename"
  assume_role_policy = "{}"
}
Run this command to import the existing role:
terraform import aws_iam_role.yourrolename <the name of the existing role>
Run terraform show to get the block of terraform code that you’ll want to implement.
Resource: https://mklein.io/2019/09/30/terraform-import-role-policy/
Import role without creating file
terraform import -var "region=$AWS_DEFAULT_REGION" aws_iam_role.yourrolename $YOUR_ROLE_NAME
Secrets Manager
Create blank secret:
resource "aws_secretsmanager_secret" "IRCSecrets" {
name = "irc/client/credentials"
description = "My IRC client credentials"
}
Resource: https://gist.github.com/anttu/6995f20e641d4f30a6003520f70608b3
Create IAM role to run on an instance and attach it
iam.tf:
# Policy for role that uses STS to get credentials to access ec2 instances
resource "aws_iam_role" "ec2_iam_role" {
name = "ec2_iam_role"
assume_role_policy = file("iam_role_policy.json")
tags = {
Name = "ec2_iam_role"
}
}
# Group together roles that apply to an instance
resource "aws_iam_instance_profile" "ec2_iam_instance_profile" {
name = "ec2_iam_instance_profile"
role = aws_iam_role.ec2_iam_role.name
}
resource "aws_iam_role_policy" "ec2_iam_role_policy" {
name = "ec2_iam_role_policy"
role = aws_iam_role.ec2_iam_role.id
policy = file("ec2_iam_role_policy.json")
}
Create iam_role_policy.json, to be used to get credentials to access ec2 instances:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
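As an alternative to a separate JSON file, the same trust policy can be expressed inline with jsonencode(), which keeps everything in one place. A sketch of the equivalent role resource:

```hcl
resource "aws_iam_role" "ec2_iam_role" {
  name = "ec2_iam_role"
  # Equivalent to the iam_role_policy.json trust policy above
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}
```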
ec2_iam_role_policy.json - this will vary depending on the permissions required for a given ec2 instance. This example provides permissions for logging and cloning CodeCommit repos:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:ap-southeast-1:0000:log-group:*",
"arn:aws:logs:ap-southeast-1:0000:log-group:production:*"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"codecommit:Get*",
"sns:ListSubscriptionsByTopic",
"lambda:ListFunctions",
"sns:GetTopicAttributes",
"codestar-notifications:ListNotificationRules",
"codecommit:BatchGet*",
"sns:ListTopics",
"codecommit:GitPull",
"codestar-notifications:ListEventTypes",
"codecommit:EvaluatePullRequestApprovalRules",
"codestar-notifications:ListTargets",
"codeguru-reviewer:ListRepositoryAssociations",
"codeguru-reviewer:ListCodeReviews",
"codeguru-reviewer:DescribeRepositoryAssociation",
"iam:ListUsers",
"codecommit:List*",
"codecommit:Describe*",
"codeguru-reviewer:DescribeCodeReview",
"codecommit:BatchDescribe*"
],
"Resource": "*"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": "codestar-notifications:DescribeNotificationRule",
"Resource": "*",
"Condition": {
"StringLike": {
"codestar-notifications:NotificationsForResource": "arn:aws:codecommit:*"
}
}
},
{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": [
"events:DescribeRule",
"iam:ListSSHPublicKeys",
"iam:GetSSHPublicKey",
"codestar-connections:GetConnection",
"iam:ListServiceSpecificCredentials",
"events:ListTargetsByRule",
"iam:ListAccessKeys"
],
"Resource": [
"arn:aws:codestar-connections:*:*:connection/*",
"arn:aws:iam::*:user/${aws:username}",
"arn:aws:events:*:*:rule/codecommit*"
]
},
{
"Sid": "VisualEditor4",
"Effect": "Allow",
"Action": "codestar-connections:ListConnections",
"Resource": "arn:aws:codestar-connections:*:*:connection/*"
}
]
}
ec2.tf:
resource "aws_instance" "ec2_node" {
ami = "ami-07dd19a7900a1f049"
instance_type = "t3.medium"
key_name = "ec2-key"
# Enable termination protection
disable_api_termination = true
vpc_security_group_ids = [aws_security_group.name1.id, aws_security_group.name2.id]
subnet_id = "your_subnet_id"
associate_public_ip_address = true
root_block_device {
volume_size = 100
delete_on_termination = true
}
tags = {
Name = "ec2_node"
}
iam_instance_profile = aws_iam_instance_profile.ec2_iam_instance_profile.name
}
Resources:
- https://adrianhesketh.com/2016/06/27/creating-aws-instance-roles-with-terraform/
- https://devopslearning.medium.com/aws-iam-ec2-instance-role-using-terraform-fa2b21488536
- https://stackoverflow.com/questions/62953164/create-and-attach-iam-role-to-ec2-using-terraform
Create security group with instance’s public ip
If you need to specify a security group that relies on an instance’s public IP address and you don’t want to use an EIP, you can do the following:
resource "aws_instance" "my_system" {
ami = var.my_ami
instance_type = var.instance_type
key_name = "my-key"
subnet_id = module.vpc.public_subnets[0]
associate_public_ip_address = true
root_block_device {
volume_size = var.disk_size
delete_on_termination = true
}
tags = {
Name = "My System"
}
vpc_security_group_ids = [ aws_security_group.service_sg.id ]
}
resource "aws_security_group" "service_sg" {
name = "my_service"
description = "Some great description"
egress {
from_port = 0
to_port = 0
protocol = "-1"
ipv6_cidr_blocks = ["::/0"]
cidr_blocks = ["0.0.0.0/0"]
description = "Allow egress everywhere"
}
vpc_id = module.vpc.vpc_id
tags = {
Name = "service_sg"
}
}
resource "aws_security_group_rule" "instance_to_itself" {
type = "ingress"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["${aws_instance.my_system.public_ip}/32"]
security_group_id = aws_security_group.service_sg.id
}
Resource: Source for aws_security_group_rule
Add multiple security groups to instance
resource "aws_instance" "my_system" {
ami = var.my_ami
iam_instance_profile = aws_iam_instance_profile.myprofile.name
instance_type = var.instance_type
key_name = "my-key"
subnet_id = module.vpc.public_subnets[0]
associate_public_ip_address = true
root_block_device {
volume_size = var.disk_size
delete_on_termination = true
}
tags = {
Name = "My System"
}
vpc_security_group_ids = [
aws_security_group.sg1.id,
aws_security_group.sg2.id,
aws_security_group.sg3.id ]
}
Provide script to instance user-data
Terraform code:
data "template_file" "user_data" {
template = file("templates/user_data.yaml")
}
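The template_file data source comes from the legacy hashicorp/template provider, which is deprecated; on Terraform 0.12+ the built-in templatefile() function can be used instead. A sketch, assuming the same template with no interpolation variables:

```hcl
resource "aws_instance" "my_system" {
  # ...other arguments as in the resource below...
  user_data = templatefile("${path.module}/templates/user_data.yaml", {})
}
```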
resource "aws_instance" "my_system" {
ami = var.my_ami
iam_instance_profile = aws_iam_instance_profile.myprofile.name
instance_type = var.instance_type
key_name = "my-key"
subnet_id = module.vpc.public_subnets[0]
associate_public_ip_address = true
user_data = data.template_file.user_data.rendered
root_block_device {
volume_size = var.disk_size
delete_on_termination = true
}
tags = {
Name = "My System"
}
vpc_security_group_ids = [
aws_security_group.sg1.id,
aws_security_group.sg2.id,
aws_security_group.sg3.id ]
}
templates/user_data.yaml:
#cloud-config
write_files:
  - path: /root/boot.sh
    content: |
      #!/bin/bash
      # Wait for various functionality to finish spinning up
      sleep 90
      pushd /root
      # Config to clone code onto the system - this will be facilitated using an instance profile (`aws_iam_instance_profile.myprofile.name` in this case).
      # See https://techvomit.net/terraform-cheatsheet/#createiamroletorunonaninstanceandattachit for how to create that.
      git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.helper '!aws codecommit credential-helper $@'
      git config --system credential.https://git-codecommit.us-west-2.amazonaws.com.UseHttpPath true
      # Clone code onto the system
      git clone https://git-codecommit.us-west-2.amazonaws.com/v1/repos/myrepo
      cd myrepo
      # do stuff in the repo
      popd
      # Clean up repo
      rm -rf /root/myrepo
      # Delete the cloud-init logs - not necessary, but if you want to do it, this is how
      cloud-init clean --logs
    # File ownership and permissions
    owner: root:root
    permissions: "0755"
runcmd:
  - |
    set -x
    (
      while [ ! -f /root/boot.sh ]; do
        sleep 1
      done
      # runcmd already runs as root, so no sudo is needed
      /root/boot.sh
    ) &
Resources: https://www.digitalocean.com/community/questions/cloud-init-change-order-of-module-execution
Create s3 bucket with folder
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-bucket"
acl = "private"
tags = {
Name = "My Bucket"
}
}
resource "aws_s3_bucket_object" "folder1" {
bucket = "${aws_s3_bucket.my_bucket.id}"
acl = "private"
# Key taken from an external data source (the script backing it is not shown here)
key = "${data.external.folder_name.result["folder"]}/"
# simpler static key example:
# key = "Folder1/"
}
Ensure public access is not allowed to bucket
resource "aws_s3_bucket" "my_bucket" {
bucket = "my-bucket"
acl = "private"
tags = {
Name = "My Bucket"
}
}
resource "aws_s3_bucket_public_access_block" "build_artifacts" {
bucket = aws_s3_bucket.my_bucket.id
block_public_acls = true
block_public_policy = true
restrict_public_buckets = true
}
Resource: https://www.edureka.co/community/84360/how-to-block-public-access-to-s3-bucket-using-terraform
Missing IAM Permissions for CodeBuild Service Role
A great way to discover this is to review the CloudWatch logs associated with your run and filter on is not authorized to perform.
Resource: https://docs.aws.amazon.com/codebuild/latest/userguide/setting-up.html
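To make the filtering concrete, here's a minimal sketch of grepping for that string over sample log lines (the log content here is made up for illustration; in practice pipe the output of `aws logs filter-log-events` or the CodeBuild log instead):

```shell
printf '%s\n' \
  'Phase complete: BUILD State: SUCCEEDED' \
  'AccessDenied: user is not authorized to perform: s3:PutObject' |
  grep 'is not authorized to perform'
```

Only the AccessDenied line survives the filter, pointing directly at the missing permission.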
Create secretsmanager secret and set a secret
resource "aws_secretsmanager_secret" "codebuild_credentials" {
name = "codebuild_credentials"
description = "Codebuild credentials"
}
resource "aws_secretsmanager_secret_version" "codebuild_credentials" {
secret_id = "${aws_secretsmanager_secret.codebuild_credentials.id}"
secret_string = jsonencode({"access_key" = aws_iam_access_key.codebuild.id, "secret_access_key" = aws_iam_access_key.codebuild.secret})
}
Resource: Create SSH key and upload to secrets manager
Solving dependency cycles in security groups
# Create an empty security group:
resource "aws_security_group" "bastion" {
name = "bastion"
description = "Bastion security group"
}
# Create a group rule separately to provide the logic:
resource "aws_security_group_rule" "private-from-bastion-ssh-ingress" {
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  # A traffic source is also required here, e.g. cidr_blocks or source_security_group_id
  security_group_id = "${aws_security_group.bastion.id}"
}
GCP
GCS Backend
If you want to manage your terraform state with a remote backend (you do if you have multiple people managing the infrastructure), you will need to run a couple of commands before your first terraform init.
Create the bucket you’ll be storing the state in:
REGION=us-west1
gsutil mb -p $(gcloud projects list --format="value(project_id)" --filter="yourprojectname") -l $REGION gs://name-of-bucket-to-store-state
Next, enable object versioning to avoid any corruption with your state file:
gsutil versioning set on gs://name-of-bucket-to-store-state
Finally, create a backend.tfvars with the following commands:
echo -e "bucket = \"$(gsutil ls | grep yourprojectname | awk -F '[/:]' '{print $4}')"\" | tee backend.tfvars
echo -e "prefix = \"terraform/state\"" | tee -a backend.tfvars
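The resulting backend.tfvars should end up looking something like this (the bucket name is illustrative):

```
bucket = "name-of-bucket-to-store-state"
prefix = "terraform/state"
```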
Add this block to your terraform code:
terraform {
backend "gcs" {}
}
At this point, you can run the following to init your terraform:
terraform init -backend-config backend.tfvars
This will take the variables we defined in the backend.tfvars we created previously and apply them to the gcs backend in the above terraform code.
From here, feel free to run plan and then apply.
Read env var in terraform
In bash (terraform only picks up environment variables prefixed with TF_VAR_):
export TF_VAR_NAME=bla
In terraform:
variable "NAME" {
  type = string
}
Create ansible hosts file
ansible_hosts_file_builder.tf:
resource "local_file" "ansible_hosts" {
  content = templatefile("templates/hosts.tmpl",
    {
      # Splat expressions yield lists, which the template's for loop expects
      private-ip = aws_instance.managed_system[*].private_ip,
      public-id  = aws_instance.managed_system[*].id
    }
  )
  filename = "${path.module}/hosts"
}
templates/hosts.tmpl:
[some_group]
%{ for index, ip in private-ip ~}
${ip} ansible_user=ansible ansible_ssh_private_key_file=/home/ubuntu/.ssh/key_file ansible_python_interpreter=/usr/bin/python3 # ${public-id[index]}
%{ endfor ~}
Resource: https://www.linkbynet.com/produce-an-ansible-inventory-with-terraform
Populate Packer File with Templating
You will also need to check whether the AMI already exists and use that to decide whether to rebuild. This can be done by creating a file to track it or through something like DynamoDB.
packer_builder.tf:
resource "local_file" "ami_name_to_use" {
content = templatefile("templates/ami_name_to_use.json.tmpl", {
ami_name = var.ami_name,
ansible_path = var.ansible_path,
iam_instance_profile = aws_iam_instance_profile.yourprofile.name,
instance_type = var.instance_type,
profile = var.profile,
region = var.region,
sg_1 = aws_security_group.sg_1.id,
sg_2 = aws_security_group.sg_2.id,
size = var.disk_size,
source_ami = var.source_ami,
ssh_username = var.ssh_username,
subnet_id = module.vpc.public_subnets[0],
vpc_id = module.vpc.vpc_id,
})
filename = "${var.packer_code_path}/ami_name_to_use.json"
file_permission = "0644"
provisioner "local-exec" {
when = destroy
command = "rm ${self.filename}"
}
}
templates/ami_name_to_use.json.tmpl (note that JSON does not support comments, so annotations need to live outside the template; the launch_block_device_mappings section controls device settings when an instance is launched from the AMI):
{
"description": "Description of the AMI image purpose.",
"builders": [{
"ami_name": "${ami_name}",
"ami_description": "My Awesome AMI",
"associate_public_ip_address": true,
"encrypt_boot": true,
"force_deregister": true,
"force_delete_snapshot": true,
"iam_instance_profile": "${iam_instance_profile}",
"instance_type": "${instance_type}",
"launch_block_device_mappings": [
{
"delete_on_termination": true,
"device_name": "/dev/sda1",
"encrypted": true,
"volume_size": "${size}",
"volume_type": "gp2"
}
],
"region": "${region}",
"security_group_ids": ["${sg_1}", "${sg_2}"],
"source_ami": "${source_ami}",
"ssh_username": "${ssh_username}",
"subnet_id": "${subnet_id}",
"type": "amazon-ebs",
"tags": {
"Name" : "Some name for the AMI",
"OS":"Ubuntu",
"OSVER": "20.04"
},
"vpc_id": "${vpc_id}"
}],
"provisioners": [{
"type": "file",
"source": "../ansible-code",
"destination": "/tmp"
},
{
"type": "shell",
"inline": [
"while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
"sudo apt-get -y autoremove && sudo apt-get clean && sudo apt-get update && sudo apt-get install -y ansible",
"cd /tmp/ansible-code && ansible-galaxy collection install -r requirements.yml && ansible-galaxy install -r requirements.yml && ansible-playbook site.yml -e 'ansible_python_interpreter=/usr/bin/python3'"
]
}]
}
TF gitignore file
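A typical Terraform .gitignore (modeled on the widely used github/gitignore Terraform template) looks something like:

```
# Local .terraform directories
**/.terraform/*

# State files, which may contain secrets
*.tfstate
*.tfstate.*

# Crash logs
crash.log

# Variable files that likely contain sensitive values
*.tfvars

# Local override files
override.tf
override.tf.json
*_override.tf
*_override.tf.json

# CLI config files
.terraformrc
terraform.rc
```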
Debugging
Run this command to enable detailed logging:
export TF_LOG=trace
If you just want debug output:
export TF_LOG=debug
Resources:
- https://www.terraform.io/docs/cli/config/environment-variables.html
- https://stackoverflow.com/questions/59583711/error-launching-source-instance-unauthorizedoperation-you-are-not-authorized-t
Foreach examples with a map
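A minimal sketch of for_each over a map (the bucket names and tag values here are hypothetical):

```hcl
variable "buckets" {
  type = map(string)
  default = {
    logs    = "dev"
    backups = "prod"
  }
}

resource "aws_s3_bucket" "this" {
  # each.key is the map key, each.value its value
  for_each = var.buckets

  bucket = "mycompany-${each.key}"
  tags = {
    Environment = each.value
  }
}
```

The resulting resources are then addressed by key, e.g. aws_s3_bucket.this["logs"].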
Wait for cloud-init in SSM-enabled instance to finish
resource "aws_ssm_document" "cloud_init_wait" {
name = "cloud-init-wait"
document_type = "Command"
document_format = "YAML"
content = <<-DOC
schemaVersion: '2.2'
description: Wait for cloud init to finish
mainSteps:
- action: aws:runShellScript
name: StopOnLinux
precondition:
StringEquals:
- platformType
- Linux
inputs:
runCommand:
- cloud-init status --wait
DOC
}
resource "null_resource" "wait_for_instance" {
provisioner "local-exec" {
command = <<-EOF
#!/bin/bash
set -Ee -o pipefail
# Wait for instance to finish initializing
sleep 60
instance_status="initializing"
while [[ "$instance_status" == "initializing" ]]; do
instance_status=$(aws ec2 describe-instance-status --instance-id ${aws_instance.instance.id} | jq -r ".InstanceStatuses[0].InstanceStatus.Status")
sleep 10
done
# Wait for cloud-init to complete
command_id=$(aws ssm send-command --document-name ${aws_ssm_document.cloud_init_wait.arn} --instance-ids ${aws_instance.instance.id} --output text --query "Command.CommandId")
if ! aws ssm wait command-executed --command-id $command_id --instance-id ${aws_instance.instance.id}; then
echo "Failed to start services on instance ${aws_instance.instance.id}!";
echo "stdout:";
aws ssm get-command-invocation --command-id $command_id --instance-id ${aws_instance.instance.id} --query StandardOutputContent;
echo "stderr:";
aws ssm get-command-invocation --command-id $command_id --instance-id ${aws_instance.instance.id} --query StandardErrorContent;
exit 1;
fi
cloud_init_state="running"
while [[ "$cloud_init_state" != "done" ]]; do
cloud_init_state=$(aws ssm get-command-invocation --command-id $command_id --instance-id ${aws_instance.instance.id} | \
jq -r .StandardOutputContent | tr -d '\n\t' | awk -F ': ' '{print $2}')
sleep 5
done
EOF
}
triggers = {
"after" = "${aws_instance.instance.id}"
}
depends_on = [aws_ssm_document.cloud_init_wait]
}
Resource: https://stackoverflow.com/questions/62116684/how-to-make-terraform-wait-for-cloudinit-to-finish
Create ami from instance
resource "aws_ami_from_instance" "ami" {
name = var.ami_name
source_instance_id = aws_instance.instance.id
tags = {
Name = "${var.ami_name}"
}
depends_on = [null_resource.wait_for_instance]
}
Get CloudFormation Output
templates/cloudformation.yml.tpl:
Resources:
ImageBuildComponent:
Type: AWS::ImageBuilder::Component
# Retaining each component when updated because the old component can't be removed until the recipe is updated
UpdateReplacePolicy: Retain
Properties:
Name: ${name}
Version: ${version}
%{~ if change_description != null ~}
ChangeDescription: ${change_description}
%{~ endif ~}
%{~ if description != null ~}
Description: ${description}
%{~ endif ~}
Platform: ${platform}
Tags:
${ indent(8, chomp(yamlencode(tags))) }
%{~ if uri != null ~}
Uri: ${uri}
%{~ endif ~}
%{~ if data != null ~}
Data: |
${indent(8, data)}
%{~ endif ~}
Outputs:
ComponentArn:
Description: ARN of the created component
Value: !Ref "ImageBuildComponent"
EC2 Image Builder recipe block that leverages the CloudFormation output:
resource "aws_imagebuilder_image" "this" {
distribution_configuration_arn = aws_imagebuilder_distribution_configuration.this.arn
image_recipe_arn = aws_imagebuilder_image_recipe.this.arn
infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.this.arn
depends_on = [
aws_iam_policy.image_builder,
aws_imagebuilder_image_recipe.this,
aws_imagebuilder_distribution_configuration.this,
aws_imagebuilder_infrastructure_configuration.this
]
}
resource "aws_imagebuilder_image_recipe" "this" {
block_device_mapping {
device_name = "/dev/sda1"
ebs {
delete_on_termination = true
volume_size = var.vol_size
volume_type = "gp3"
}
}
component {
component_arn = "arn:aws:imagebuilder:${var.region}:aws:component/simple-boot-test-linux/1.0.0/1"
}
# CloudFormation output:
component {
component_arn = aws_cloudformation_stack.this.outputs.ComponentArn
}
name = "${var.ami_name}-recipe"
parent_image = var.base_ami_id
version = var.image_recipe_version
lifecycle {
create_before_destroy = true
}
}
Resources:
- https://stackoverflow.com/questions/64541206/terraform-resolving-cloudformation-outputs
- https://github.com/rhythmictech/terraform-aws-imagebuilder-component-ansible
- https://cloudly.engineer/2021/get-started-with-ec2-image-builder-in-terraform/aws/
Fix terraform in gh actions parsing error
If your terratest works for your module locally, but not in your github action
and you’re getting this error:
invalid character 'c' looking for beginning of value
Try this:
- name: Setup Terraform
uses: hashicorp/setup-terraform@v1
with:
terraform_wrapper: false
Resource: https://github.com/gruntwork-io/terragrunt/issues/1202