Write Terraform Like a Pro - Part 2
Terraform Modules: Reusable Infrastructure as Code
Table of Contents
- Introduction
- Prerequisites
- Project Structure
- Step One: Configuring the AWS Provider
- Step Two: Creating the Root Module – Defining How We Use the Module
- Step Three: Declaring Variables – Preparing Terraform to Accept Inputs
- Step Four: Storing Configurations in a .tfvars File
- Why is this the best option?
- Step Five: Deploying the Configuration
- The Result
- Conclusion
Introduction
You’ve built a well-structured Terraform module, published it to the Terraform Registry, and now it’s ready to be used. But the real question is: how do you effectively integrate it into your projects? How do you ensure smooth deployments while keeping your infrastructure code clean, efficient, and scalable?
This is where consuming Terraform modules comes into play. Instead of writing and managing every infrastructure component manually, you can simply pull a module, supply the necessary inputs, and let Terraform handle the rest. This approach streamlines development, ensures consistency across environments, and promotes best practices in Infrastructure as Code (IaC).
In this second part of our deep dive into Terraform modules, we’ll explore the complete process of integrating and consuming modules in your Terraform configuration. You’ll learn how to reference modules from the Terraform Registry, pass variables dynamically, apply configurations, and handle versioning effectively.
By the end of this guide, you’ll have a clear understanding of how to leverage Terraform modules to deploy scalable and reusable infrastructure effortlessly. Let’s dive in!
Prerequisites
Before diving in, make sure you have the following in place:
- Terraform CLI
- AWS CLI
- VS Code (or another code editor)
- An AWS IAM user
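You can quickly confirm the CLI tools are available before proceeding (the exact versions on your machine will differ):

terraform -version
aws --version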
In Part 1, we created and published a Terraform module, making it reusable and accessible via the Terraform Registry. But having a module alone isn’t enough. Now, we need to use it effectively to deploy infrastructure in a structured and scalable way.
This is where module consumption comes into play. Instead of manually defining every AWS resource in Terraform, we can leverage the module we built, passing in configuration values that dictate how our infrastructure should be set up.
But here’s the challenge: Where do we define these configuration values?
Should we hardcode them into Terraform files? No—that would make our code inflexible and unmanageable.
Should we pass them manually every time we run Terraform? No—that would be tedious and error-prone.
The best approach is to store configurations in a structured way so that Terraform can automatically pick them up and apply them seamlessly. This is where the .tfvars file comes in.
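For context, Terraform automatically loads terraform.tfvars, terraform.tfvars.json, and any file ending in .auto.tfvars or .auto.tfvars.json; any other variable file must be passed explicitly. A quick sketch (staging.tfvars is just an example name):

# terraform.auto.tfvars is picked up automatically:
terraform apply

# other variable files must be passed with -var-file:
terraform apply -var-file="staging.tfvars"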
Project Structure
Before we start writing code, we need to establish a clear folder structure to ensure the project remains well-organized. The following structure represents a root module that consumes the published Terraform module:
terraform-aws-s3/
│── main.tf
│── variables.tf
│── terraform.auto.tfvars
│── provider.tf
- main.tf: Defines the root configuration and calls the Terraform module.
- variables.tf: Declares the input variables used in main.tf.
- terraform.auto.tfvars: Stores the configuration values that Terraform loads automatically.
- provider.tf: Configures Terraform to communicate with AWS.
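If you are starting from an empty directory, one way to scaffold this layout (assuming a Unix-like shell) is:

mkdir terraform-aws-s3 && cd terraform-aws-s3
touch main.tf variables.tf terraform.auto.tfvars provider.tf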
Step One: Configuring the AWS Provider
Before defining our infrastructure, we must ensure Terraform can communicate with AWS. This is done by configuring the AWS provider inside the provider.tf file.
Creating the provider.tf File
provider "aws" {
region = var.region
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
}
Breaking It Down – What’s Happening Here?
provider "aws"
→ Declares that we are using AWS as our cloud provider.region = var.region
→ Referencesvar.region
, making the region configurable instead of hardcoded.terraform {}
→ Defines Terraform’s global settings.required_providers {}
→ Specifies that this configuration requires the AWS provider.source = "hashicorp/aws"
→ Tells Terraform to download the official AWS provider from the Terraform Registry.version = "~> 5.0"
→ Locks the AWS provider to version 5.x (e.g.,5.1
,5.2
) but prevents automatic upgrades to6.0
, ensuring stability.
This setup ensures Terraform correctly interacts with AWS while preventing unexpected changes due to provider version updates.
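If your team prefers a different constraint style, the same intent can be expressed in other ways; a sketch of alternatives (pick whichever release you have actually tested):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0, < 6.0" # explicit range, equivalent to "~> 5.0"
      # version = "5.0.0"       # or an exact pin for maximum reproducibility
    }
  }
}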
Step Two: Creating the Root Module – Defining How We Use the Module
With the AWS provider configured, we can now define our infrastructure by referencing the published Terraform module inside main.tf.
Creating the main.tf File
The root module is where we call our published Terraform module and provide configuration values. In your project directory, create a main.tf file and reference the published module:
module "s3_buckets" {
source = "terraform-aws-modules/s3-bucket/aws"
version = "3.2.1"
buckets = var.buckets
}
What’s Happening Here?
module "s3_buckets"
→ This tells Terraform we are using a module named s3_buckets
.source = "terraform-aws-modules/s3-bucket/aws"
→ Specifies where Terraform should fetch the module from (Terraform Registry).
version = "3.2.1"
: Ensures we use a specific, stable version of the modulebuckets = var.buckets
→ Passes the buckets
configuration as a variable instead of hardcoding it.
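Registry paths are not the only valid source. While a module is still under development (or before it is published), you can point source at a local path or a Git repository instead; the path and URL below are placeholders for illustration:

module "s3_buckets" {
  source  = "../modules/terraform-aws-s3" # local path while developing
  # source = "git::https://github.com/example-org/terraform-aws-s3.git?ref=v3.2.1" # or pin a Git tag

  buckets = var.buckets
}

Note that local and Git sources do not take a version argument; a Git source is pinned with the ref query parameter. For the rest of this guide we stick with the Registry source shown above.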
But wait: what is var.buckets? Hold that thought; we address it next.
Step Three: Declaring Variables – Preparing Terraform to Accept Inputs
To make our configuration flexible, we need to declare the input variables Terraform should accept: the region referenced in provider.tf and the buckets map passed to the module.
Creating the variables.tf file:
variable "buckets" {
description = "A map of S3 bucket configurations"
type = map(object({
bucket_name = string
force_destroy = bool
policy_identifiers = list(string)
policy_actions = list(string)
versioning_status = string
sse_algorithm = string
tags = map(string)
}))
}
What’s Happening Here?
variable "buckets"
→ Declares a variable named buckets
.description = "..."
→ Provides a short explanation of what the variable does.type = map(object({ ... }))
→
Specifies that
buckets
is a map, meaning it holds multiple key-value pairs.Each bucket is represented as an object with fields like
bucket_name
,force_destroy
, andtags
.
list(string)
→ Defines fields likepolicy_identifiers
andpolicy_actions
as lists of strings.
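As an optional hardening step (not something the module requires; the allowed values here are an assumption for illustration), you could attach a validation block so Terraform rejects bad input before it ever reaches AWS:

variable "buckets" {
  description = "A map of S3 bucket configurations"
  type = map(object({
    bucket_name        = string
    force_destroy      = bool
    policy_identifiers = list(string)
    policy_actions     = list(string)
    versioning_status  = string
    sse_algorithm      = string
    tags               = map(string)
  }))

  validation {
    condition = alltrue([
      for b in values(var.buckets) : contains(["Enabled", "Suspended", ""], b.versioning_status)
    ])
    error_message = "versioning_status must be \"Enabled\", \"Suspended\", or an empty string."
  }
}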
At this point, Terraform expects us to provide values for region and buckets, but we haven't done that yet.
Step Four: Storing Configurations in a .tfvars File
Now, let's define our actual bucket configurations inside terraform.auto.tfvars. This file lets us store the configuration in a structured way, keeping deployments simple and automated.
Create a terraform.auto.tfvars file and add the following:
buckets = {
  "bucket1" = {
    bucket_name        = "my-bucket-1gktoghk5hkht4jg"
    force_destroy      = true
    policy_identifiers = ["arn:aws:iam::123456789012:user/devops"] # Replace with the ARN of the IAM user you created
    policy_actions     = ["s3:GetObject", "s3:PutObject"]
    versioning_status  = ""
    sse_algorithm      = "AES256"
    tags = {
      Name        = "My Bucket 1"
      Environment = "Production"
    }
  },
  "bucket2" = {
    bucket_name        = "my-bucket-1vm4ficd0i0-df"
    force_destroy      = true
    policy_identifiers = ["arn:aws:iam::123456789012:user/devops"] # Replace with the ARN of the IAM user you created
    policy_actions     = ["s3:GetObject", "s3:PutObject"]
    versioning_status  = "Enabled"
    sse_algorithm      = "AES256"
    tags = {
      Name        = "My Bucket 2"
      Environment = "Production"
    }
  }
}

region = "us-west-1"
Why is this the best option?
- Terraform automatically loads it, so there is no need to pass extra arguments.
- It keeps configuration separate from the Terraform code, improving readability.
- It makes automation easier, especially in CI/CD pipelines.
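The same pattern scales to multiple environments: keep one auto-loaded file for local work, and select environment-specific files explicitly in your pipeline (the file names below are illustrative):

# loaded automatically on every run:
#   terraform.auto.tfvars
# selected explicitly per environment in CI/CD:
terraform plan  -var-file="environments/dev.tfvars"
terraform apply -var-file="environments/prod.tfvars"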
Step Five: Deploying the Configuration
Now that everything is set up, let’s deploy our infrastructure step by step.
Step 1: Initialize Terraform
terraform init
- Downloads the module and the AWS provider.
- Prepares the working directory for execution.
Step 2: Preview Changes
terraform plan
- Shows what Terraform will create before making any changes.
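Optionally, you can save the reviewed plan to a file and apply that exact artifact later, a common pattern in CI/CD pipelines:

terraform plan -out=tfplan
terraform apply tfplan

If you skip the -out file, terraform apply simply generates a fresh plan and prompts for confirmation, which is what we do next.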
Step 3: Apply the Configuration
terraform apply
- Terraform will ask for confirmation. Type yes to proceed.
- The S3 buckets are created based on the values in terraform.auto.tfvars.
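To verify the deployment, you can list what Terraform is now tracking or query AWS directly (the bucket names will match whatever you set in terraform.auto.tfvars):

terraform state list
aws s3 ls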
The Result
Conclusion
Throughout this section, we tackled a fundamental challenge: how can we consume Terraform modules effectively without sacrificing scalability or maintainability? By exploring practical strategies, we identified that using a .tfvars file to manage configurations dynamically is a game-changer for streamlining module integration.
This approach empowers us to keep our Terraform code clean, modular, and reusable. By relying on input variables and external configuration files, we avoid hardcoding values, making our infrastructure code more adaptable to changing requirements.
Ultimately, this method not only simplifies the consumption of Terraform modules but also ensures that our infrastructure remains scalable and easy to maintain. Whether you're managing a single project or a complex multi-environment setup, adopting these practices will help you build a more efficient and future-proof infrastructure workflow.