This workshop assumes you’re already familiar with the techniques and concepts in the Introduction & Key Concepts section. If you haven’t reviewed it yet, start with AWS Architecture and work your way through the rest of the section to fully understand the mechanics and philosophy behind our setup.
Overview
This workshop walks you through deploying a pre-built wrapper module built on top of the official terraform-aws-modules/ec2-instance. The wrapper extends the base EC2 functionality with additional AWS components, such as an Application Load Balancer (ALB), to create a complete, production-ready web application stack.
Why a wrapper? It standardizes patterns (networking, security, tagging) across teams while keeping the underlying upstream module up to date. This example uses a remote module from Git, demonstrating how teams can share reusable infrastructure patterns across multiple projects.
Prerequisites
- AWS SSO signed in: run aws sso login (see Getting Started with AWS SSO if you need help setting this up)
- Fast Foundation infrastructure repository cloned
- You know the target account / environment / region you’ll deploy to
- You have a VPC with at least one private subnet (for the web instance) and at least two public subnets (for the ALB), tagged with Type: Private or Type: Public accordingly
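The Type tags matter because infrastructure code commonly discovers subnets by tag rather than by hard-coded ID. Below is a minimal, hypothetical sketch of that lookup pattern in Terraform (illustrative only; it is not necessarily what the wrapper does internally, and in this workshop you will pass subnet IDs explicitly via inputs.hcl):

data "aws_subnets" "private" {
  # Find all subnets in the given VPC tagged Type: Private
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
  tags = {
    Type = "Private"
  }
}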
Module Structure
Remote Module Benefits: By referencing the module from Git, multiple teams can use
the same standardized wrapper while the module maintainers can update it centrally.
The ref parameter pins the module source; note that ref=main tracks the main branch rather than a fixed version, so pin to a tag or commit SHA when you need reproducible deployments. Check the official documentation for more details.
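For example, to pin the source to a release tag instead of the branch (v1.2.0 is a hypothetical tag, not an actual release of the examples repository):

source = "git@github.com:Nimble-la/fast-foundation-examples.git//terragrunt/modules/ec2-wrapper?ref=v1.2.0"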
The wrapper module is already created and available in the fast-foundation-examples repository. Instead of creating local files, you’ll reference it directly from Git.
👉 Take a moment to explore the repository files to understand the structure and configuration; you’ll be using them throughout this workshop.
Target folder (where to place your unit)
If your ec2 folder does not already exist, create it and add an _service.hcl file:

locals {
  service = basename(get_terragrunt_dir())
}
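This way each unit under the ec2 folder derives its service name from the directory structure; here service should resolve to ec2, consistent with how the rest of this workshop names resources from folder names.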
Now you’ll deploy this wrapper module into your infrastructure. To do this, you need to create two files in the appropriate folder:
- the deployment unit file → terragrunt.hcl
- the parameters file → inputs.hcl
For this workshop, we’ll work in the development_workload_account account.
You can copy the examples below — terragrunt.hcl and inputs.hcl — into your module folder, following the structure shown in this file system tree:
fast-foundation-infrastructure/
└── Workloads/
    └── Development/
        └── <development_workload_account>/
            └── development/
                └── us-west-1/
                    └── ec2/
                        ├── _service.hcl
                        └── web-server/
                            ├── terragrunt.hcl
                            └── inputs.hcl
If inputs.hcl isn’t there, your before-hooks will generate/sync it on terragrunt init.
Step 1 — Create terragrunt.hcl
This references the wrapper module from the fast-foundation-examples repository and wires in Fast Foundation’s parameter sync.
terraform {
  # Reference the wrapper module from the examples repository
  source = "git@github.com:Nimble-la/fast-foundation-examples.git//terragrunt/modules/ec2-wrapper?ref=main"

  before_hook "secrets_management" {
    commands = ["init", "plan", "apply"]
    execute = [
      "${local.parameter_script}",
      "${local.local_file_path}",
      "${include.root.locals.project_name}-terragrunt-states",
      "${local.s3_key}",
      "${local.aws_profile_infrastructure}"
    ]
    run_on_error = false
  }
}
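# In plain terms: before init, plan, and apply, the hook runs the
# parameter-management script with the local inputs.hcl path, the
# project's terragrunt-states S3 bucket, the object key, and the
# infrastructure AWS profile; this is what keeps the local and S3
# copies of the parameters file in sync.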
include "root" {
path = find_in_parent_folders("root.hcl")
expose = true
}
# Optional: Use dependencies instead of data sources to fetch VPC information
# dependency "vpc" {
#   config_path = "../../networking/vpc"
#
#   mock_outputs = {
#     vpc_id             = "vpc-mock"
#     private_subnet_ids = ["subnet-mock1", "subnet-mock2"]
#     public_subnet_ids  = ["subnet-mock3", "subnet-mock4"]
#   }
#   mock_outputs_allowed_terraform_commands = ["validate", "plan"]
# }
locals {
  # Base variables from parameter store
  region_vars      = include.root.locals.region_vars.locals
  environment_vars = include.root.locals.environment_vars.locals
  service_vars     = include.root.locals.service_vars.locals

  inputs = try(read_terragrunt_config("${get_terragrunt_dir()}/inputs.hcl").locals, {})

  # Parameter management
  s3_key                     = "${path_relative_to_include("root")}/inputs.hcl"
  local_file_path            = "${get_terragrunt_dir()}/inputs.hcl"
  aws_profile_infrastructure = "${include.root.locals.project_name}-infrastructure"
  project_name               = include.root.locals.project_name

  # Cross-platform script selection
  script_base_path  = "${dirname(find_in_parent_folders("root.hcl"))}/_scripts/parameter-management"
  script_preference = get_env("TG_SCRIPT_TYPE", "auto")
  parameter_script = (
    local.script_preference == "powershell" ? "${local.script_base_path}.ps1" :
    local.script_preference == "python" ? "${local.script_base_path}.py" :
    local.script_preference == "bash" ? "${local.script_base_path}.sh" :
    "${local.script_base_path}.sh" # Default fallback to bash
  )

  # Use directory name for resource naming
  instance_name = basename(get_terragrunt_dir())
}
inputs = {
  # Resource naming based on directory name
  name = "${local.instance_name}-${local.environment_vars.environment}"

  # Environment fetched from the repository structure
  environment = local.environment_vars.environment

  # Parameters from inputs file
  vpc_id              = try(local.inputs.vpc_id, null)
  vpc_private_subnets = try(local.inputs.vpc_private_subnets, [])
  vpc_public_subnets  = try(local.inputs.vpc_public_subnets, [])

  instance_type              = try(local.inputs.instance_type, null)
  app_port                   = try(local.inputs.app_port, null)
  health_check_path          = try(local.inputs.health_check_path, null)
  enable_deletion_protection = try(local.inputs.enable_deletion_protection, true)

  tags = {
    Name       = "${local.instance_name}-${local.environment_vars.environment}"
    Service    = local.instance_name
    DeployedBy = "terragrunt"
  }
}
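Note the try() fallbacks throughout: on a fresh checkout where inputs.hcl does not yet exist, the configuration still parses (the read_terragrunt_config call falls back to an empty map), and the before-hook can then sync the file from S3 before the plan runs.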
Step 2 — Create inputs.hcl
Parameter values are stored here and synced to S3. Update the VPC ID and subnet IDs with valid values from your environment.
locals {
  # Environment-specific configuration stored in parameter store
  vpc_id              = "vpc-0f7fff5a089d46f0a"                                  # Replace with a valid VPC ID from your environment
  vpc_private_subnets = ["subnet-0eb97763906e0b202"]                             # Replace with a valid private subnet for the web instance
  vpc_public_subnets  = ["subnet-0a47ba8a3d3c2470e", "subnet-0a7f55f161ed7a956"] # Replace with at least two valid public subnets for your ALB

  instance_type = "t3.small"

  # Application configuration
  app_port                   = 8080
  health_check_path          = "/health"
  enable_deletion_protection = false
}
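If you’re unsure which IDs are valid in your account, you can list the tagged subnets with the AWS CLI (a sketch; substitute your own VPC ID and profile, and swap Public for Private as needed):

aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=<your-vpc-id>" "Name=tag:Type,Values=Public" \
  --query "Subnets[].SubnetId" \
  --profile <your-profile>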
Step 3 — Deploy the custom module
Run these commands from your new folder:
terragrunt init
terragrunt plan
terragrunt apply
The before-hooks will sync inputs.hcl and ensure your AWS SSO session is valid.
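If the automatic script selection picks the wrong interpreter for your machine, you can override it through the TG_SCRIPT_TYPE environment variable read in the locals block (powershell, python, or bash), for example:

TG_SCRIPT_TYPE=python terragrunt plan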
Cleanup
Important: Don’t forget to clean up the resources you created during this workshop to avoid unnecessary AWS charges. This deployment creates multiple resources including EC2 instances, Load Balancers, and Security Groups.
To destroy all resources created by the custom module, run the following command from your deployment folder:

terragrunt destroy

When prompted, type yes to confirm the destruction of resources.
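Two caveats: if you changed enable_deletion_protection to true in inputs.hcl, AWS will refuse to delete the ALB until you set it back to false and re-apply; and in non-interactive contexts (CI, scripts) you can skip the confirmation prompt, since Terragrunt forwards extra arguments to Terraform:

terragrunt destroy -auto-approve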
Notes & Options
Terragrunt Dependencies vs. Data Sources
If you prefer stronger guarantees about resource ordering (and want to avoid relying on data-source lookups), you can switch to using Terragrunt dependencies instead:
- Uncomment the dependency block in terragrunt.hcl
- Update your module inputs to accept values like vpc_id and subnet IDs directly
- Pass outputs such as dependency.vpc.outputs.vpc_id into the module (see the sketch below)
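As a rough sketch of what that wiring could look like, assuming the vpc unit exposes the output names used in the mock_outputs above (those names are assumptions; match them to your actual vpc unit):

inputs = {
  vpc_id              = dependency.vpc.outputs.vpc_id
  vpc_private_subnets = dependency.vpc.outputs.private_subnet_ids
  vpc_public_subnets  = dependency.vpc.outputs.public_subnet_ids
}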