Imagine this: You’re building a complex cloud infrastructure with Terraform, and you find yourself copying and pasting the same chunks of code across multiple projects. Sound familiar? Not only is this approach tedious, but it’s also error-prone and difficult to maintain. What if there was a better way—a way to write your infrastructure code once and reuse it across environments, teams, and projects?
Terraform Modules—the building blocks of reusable, scalable, and maintainable Infrastructure as Code (IaC). Modules allow you to encapsulate your infrastructure logic into reusable components, making your code cleaner, more efficient, and easier to manage. Whether you’re provisioning a simple VPC, deploying a Kubernetes cluster, or setting up a multi-tier application, modules can save you time, reduce duplication, and ensure consistency across your infrastructure.
In this article, we’ll dive deep into how to create, version, and use Terraform modules, using Amazon S3 as a case study. By the end, you’ll have the tools and knowledge to transform your Terraform workflows, making your infrastructure code as modular and reusable as your application code. Let’s get started!
Table of Contents
- Prerequisites
- What is Terraform?
- What is a Terraform Module?
- What is Terraform Registry?
- The S3 Bucket Module
- Code Breakdown
- Publishing Module To Terraform Registry
- Key Takeaways
- Conclusion
Prerequisites
Before diving in, make sure you have these installed:
- Terraform CLI
- AWS CLI
- VS Code (or any other code editor)
What is Terraform?
Terraform is an Infrastructure as Code (IaC) tool that allows you to define and manage cloud infrastructure using configuration files. Instead of clicking around in the AWS console, you write declarative code that describes what your infrastructure should look like, and Terraform takes care of provisioning it for you.
Now, think of Terraform as your tool for building infrastructure. Instead of manually creating resources like S3 buckets, VPCs, and IAM roles, you use code to define what you need. Terraform takes that code and makes it real.
Terraform uses a language called HCL (HashiCorp Configuration Language), which is simple yet powerful. With HCL, you can describe your infrastructure in a way that’s both human-readable and machine-friendly.
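For example, a few lines of HCL are enough to describe a bucket. Here’s a minimal, hypothetical snippet (the provider configuration and bucket name are assumptions for illustration):

```hcl
# A minimal HCL example: one S3 bucket with a tag.
resource "aws_s3_bucket" "demo" {
  bucket = "my-demo-bucket-12345" # hypothetical globally unique name

  tags = {
    Environment = "dev"
  }
}
```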
What is a Terraform Module?
Modules are the heart of Terraform. They allow you to package and reuse code, making your infrastructure more modular and maintainable. Think of them as Lego blocks:
- Each module is a self-contained piece of infrastructure (e.g., an S3 bucket with versioning and encryption).
- You can reuse modules across projects, teams, or environments.
- Modules make your code DRY (Don’t Repeat Yourself).
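Using a module is just a matter of pointing a `module` block at its source. Here’s a minimal sketch, assuming a hypothetical local VPC module at `./modules/vpc` that accepts a `cidr_block` input:

```hcl
module "network" {
  source = "./modules/vpc" # hypothetical local module path

  cidr_block = "10.0.0.0/16" # hypothetical input variable defined by the module
}
```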
What is Terraform Registry?
Now that we understand the power of Terraform modules, the next question is—where do you find them? Just like developers rely on package managers for pre-built solutions, Terraform has its own treasure trove: the Terraform Registry.
Terraform Registry is like an app store for Terraform modules. It hosts thousands of pre-built modules for AWS, Azure, GCP, and more. Instead of building everything from scratch, you can use these modules to speed up your workflow.
- Public Modules: Free, community-driven modules for common use cases.
- Private Modules: Host your own modules for internal use.
For example, instead of writing your own S3 module, you could use the official AWS S3 module from the Terraform Registry.
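To illustrate, consuming a registry module takes a single block with a pinned version. This is a sketch using the community `terraform-aws-modules/s3-bucket/aws` module (the version constraint and bucket name here are assumptions; check the module’s registry page for its current inputs):

```hcl
module "s3_bucket" {
  source  = "terraform-aws-modules/s3-bucket/aws"
  version = "~> 4.0" # assumed major version; pin to what the registry shows

  bucket = "my-registry-demo-bucket" # hypothetical bucket name
}
```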
The S3 Bucket Module
In this article, we’ll use AWS S3 as our case study. We’ll build an S3 bucket module that creates AWS S3 buckets with policies, versioning, and encryption. This example will show you how modularization makes your infrastructure code reusable and scalable.
Directory Structure: Organizing Your Module
```
modules/
└── buckets/
    ├── main.tf
    ├── variables.tf
    ├── outputs.tf
    └── README.md
```
Before we dive into the code, let’s understand how we organize our Terraform files. Think of these files like different sections of a recipe book:
- `main.tf` – The main instructions (defines the actual S3 bucket and its settings).
- `variables.tf` – A list of ingredients (allows customization without changing the main code).
- `outputs.tf` – The final result (provides important details like bucket names and ARNs).
- `README.md` – The guidebook (explains what this module does and how to use it).
Key Features of the S3 Module
- Dynamic Bucket Creation: Uses `for_each` to create multiple buckets dynamically.
- Bucket Policies: Configures IAM policies for each bucket.
- Versioning and Encryption: Enables versioning and server-side encryption.
Code Breakdown
Add the code below to your `main.tf` file:
resource "aws_s3_bucket" "bucket" {
for_each = var.buckets
bucket = each.value.bucket_name
force_destroy = each.value.force_destroy
tags = each.value.tags
}
This Terraform code snippet defines an `aws_s3_bucket` resource using a `for_each` loop. Let’s break it down step by step:
Resource Block

```hcl
resource "aws_s3_bucket" "bucket" {
```

- This declares a Terraform resource of type `aws_s3_bucket`, which is used to create an S3 bucket in AWS.
- The resource is named `bucket` (this is the logical name used within Terraform to reference this resource).
`for_each` Meta-Argument

```hcl
for_each = var.buckets
```

- The `for_each` argument tells Terraform to create multiple instances of the `aws_s3_bucket` resource.
- It iterates over the `var.buckets` variable, which is expected to be a map or set of objects. Each key-value pair in `var.buckets` will create a separate S3 bucket.
`each.value`

```hcl
bucket        = each.value.bucket_name
force_destroy = each.value.force_destroy
tags          = each.value.tags
```

- Inside the `for_each` loop, `each.value` refers to the current iteration’s value from the `var.buckets` map or set.
- The `bucket` argument sets the name of the S3 bucket using `each.value.bucket_name`. This means each bucket will have a unique name defined in the `var.buckets` variable.
- The `force_destroy` argument determines whether the bucket should be deleted even if it contains objects. This is set using `each.value.force_destroy`.
- The `tags` argument assigns tags to the bucket using `each.value.tags`. Tags are key-value pairs used for organizing and identifying resources.
Adding Bucket Policies (`data "aws_iam_policy_document" "s3_policy"`)
Now that we have our buckets, we need to control who can access them.
data "aws_iam_policy_document" "s3_policy" {
for_each = var.buckets
statement {
principals {
type = "AWS"
identifiers = each.value.policy_identifiers
}
actions = each.value.policy_actions
resources = [
aws_s3_bucket.bucket[each.key].arn,
"${aws_s3_bucket.bucket[each.key].arn}/*",
]
}
}
Think of this as creating a "VIP guest list" for your S3 bucket.
- `principals` → Defines who has access (e.g., specific AWS users or services).
- `actions` → Defines what they can do (read files, upload files, etc.).
- `resources` → Lists which buckets the policy applies to (the bucket itself and all files inside).
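Note that this data source only renders the policy document; nothing is attached to the bucket yet. Here’s a minimal sketch of the attachment, assuming an `aws_s3_bucket_policy` resource alongside the code above (this resource isn’t part of the layout shown earlier):

```hcl
# Attach the rendered policy JSON to each bucket.
resource "aws_s3_bucket_policy" "bucket_policy" {
  for_each = var.buckets

  bucket = aws_s3_bucket.bucket[each.key].id
  policy = data.aws_iam_policy_document.s3_policy[each.key].json
}
```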
Enforcing Security: Versioning & Encryption
S3 bucket versioning is a matter of engineering discretion, and some teams prefer to keep it turned off, so in this module we make it optional.
resource "aws_s3_bucket_versioning" "s3_bucket_versioning" {
for_each = { for k, v in var.buckets : k => v if try(v.versioning_status, null) != "" }
bucket = aws_s3_bucket.bucket[each.key].id
versioning_configuration {
status = each.value.versioning_status
}
}
Code Breakdown
```hcl
for_each = { for k, v in var.buckets : k => v if try(v.versioning_status, null) != "" }
```
- The code iterates over the `var.buckets` variable, which is expected to be a map of objects. Each key-value pair in `var.buckets` represents an S3 bucket configuration.
- The `for` expression filters the buckets to only include those where `versioning_status` is defined and not an empty string (`""`).
- The `try(v.versioning_status, null)` function safely checks whether `versioning_status` exists in the object. If it doesn’t, it returns `null`.
- The `if` condition ensures that only buckets with a non-empty `versioning_status` are processed.
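As a quick illustration, with the hypothetical entries below, only the first bucket gets an `aws_s3_bucket_versioning` resource (the other required attributes are omitted here for brevity):

```hcl
# Hypothetical var.buckets entries (other required attributes omitted for brevity):
buckets = {
  "logs"   = { versioning_status = "Enabled" } # passes the filter: versioning created
  "assets" = { versioning_status = "" }        # filtered out: no versioning resource
}
```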
Now, let’s encrypt our data to prevent unauthorized access:
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_sse" {
for_each = var.buckets
bucket = aws_s3_bucket.bucket[each.key].id
rule {
apply_server_side_encryption_by_default {
sse_algorithm = each.value.sse_algorithm
}
}
}
Think of encryption as locking your storage box with a special key so only authorized people can access the contents.
- `sse_algorithm = each.value.sse_algorithm` → Defines the encryption method (e.g., `AES256`).
`variables.tf`: Making the Module Flexible
To keep our module reusable, we define variables that allow users to customize their S3 buckets.
variable "buckets" {
description = "A map of S3 bucket configurations."
type = map(object({
bucket_name = string
force_destroy = bool
policy_identifiers = list(string)
policy_actions = list(string)
versioning_status = string
sse_algorithm = string
tags = map(string)
}))
}
Instead of hardcoding values, we use variables to allow users to specify different bucket settings.
This makes our module scalable (can be used in multiple environments without rewriting code).
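For example, a root configuration could call this module with a concrete `buckets` map like the one below (the module path, bucket name, and account ID are hypothetical):

```hcl
module "s3_buckets" {
  source = "./modules/buckets" # hypothetical local path to the module

  buckets = {
    "app_logs" = {
      bucket_name        = "my-company-app-logs"              # hypothetical name
      force_destroy      = false
      policy_identifiers = ["arn:aws:iam::111122223333:root"] # hypothetical account
      policy_actions     = ["s3:GetObject", "s3:ListBucket"]
      versioning_status  = "Enabled"
      sse_algorithm      = "AES256"
      tags               = { Environment = "prod" }
    }
  }
}
```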
`outputs.tf`: Getting Useful Information
After Terraform creates the S3 buckets, we may want to retrieve their details.
output "bucket_names" {
description = "The names of the created S3 buckets."
value = { for k, v in aws_s3_bucket.bucket : k => v.bucket }
}
output "bucket_arns" {
description = "The ARNs of the created S3 buckets."
value = { for k, v in aws_s3_bucket.bucket : k => v.arn }
}
The snippet above exposes the name and ARN of each created bucket, keyed by the same keys used in `var.buckets`.
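A calling configuration can then surface or reuse these values. For instance, assuming the module was instantiated as `module "s3_buckets"` (as in the earlier hypothetical example):

```hcl
# Re-export the bucket ARNs from the root module.
output "all_bucket_arns" {
  value = module.s3_buckets.bucket_arns
}
```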
Publishing Module To Terraform Registry
Now that we've explored how Terraform modules are structured and configured, the next logical step is making them accessible beyond just your local environment. Publishing a module to the Terraform Registry allows for easy reuse, versioning, and collaboration across teams.
However, before we can publish a module, we need to ensure it is stored in a GitHub repository. Terraform Registry pulls modules from GitHub, so setting up a repository correctly is a crucial first step.
Creating and Pushing Your Module to GitHub
Terraform Registry requires modules to be stored in GitHub for public sharing. The repository must follow Terraform’s naming convention:
`terraform-<PROVIDER>-<MODULE-NAME>`
Example: If you're publishing a module for AWS S3 buckets, name it `terraform-aws-s3-bucket`. In this project, I used `terraform-aws-modules` as the GitHub repo name.
Steps to Create and Set Up a Repository:
1. Create a new repository on GitHub with the correct name (e.g., `terraform-aws-modules`).
2. Clone the repository to your local machine:

```bash
git clone https://github.com/your-username/terraform-aws-modules.git
cd terraform-aws-modules
```
Initialize Git and Add Terraform Files
Once inside your project directory, add and commit the Terraform module files. (If you cloned the repository, it is already a Git repository; `git init` is only needed when starting from a fresh, non-cloned directory.)

```bash
git init                                  # Only if you didn't clone the repository
git add .                                 # Stage all files for commit
git commit -m "My first Terraform module"
```
Ensure your module contains at least these files:
- `main.tf` (Defines the module resources)
- `variables.tf` (Defines input variables)
- `outputs.tf` (Defines module outputs)
Tag the Module Version
Terraform Registry identifies different versions of a module using Git tags. To create a versioned release, follow these steps:
```bash
git tag 1.0.0           # Create a version tag
git push origin main    # Push the code to GitHub
git push origin --tags  # Push the version tag to GitHub
```
Authenticate and Connect to GitHub
Before publishing the module, Terraform needs authentication to access GitHub. This is done by:
- Logging into Terraform Registry (registry.terraform.io)
- Connecting your GitHub account to Terraform Registry
- Authorizing Terraform to access your repositories
Once authenticated, Terraform Registry can scan your repositories and detect the module.
Publish the Module to Terraform Registry
Once authentication is complete:
1. Go to Terraform Registry → Publish Module
2. Select your GitHub repository (e.g., `terraform-aws-modules`)
3. Click "Publish"
4. Terraform will validate your module and publish it
Your module is now publicly available and can be used in Terraform projects!
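Consumers can then pull the module straight from the registry with a version constraint. A hypothetical example (the namespace and module name depend on how your listing appears in the registry):

```hcl
module "s3_buckets" {
  source  = "your-username/s3-bucket/aws" # hypothetical registry address
  version = "1.0.0"                       # pin to the tagged release

  buckets = { /* your bucket configurations */ }
}
```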
Conclusion
By now, you’ve gained a solid understanding of how to structure, version, and publish Terraform modules. Instead of duplicating infrastructure code across multiple projects, you can now encapsulate best practices into reusable modules, making your deployments consistent, scalable, and maintainable.
Key Takeaways
- Modular Infrastructure – Writing reusable Terraform modules simplifies cloud deployments.
- Versioning and Publishing – Storing modules in GitHub and Terraform Registry ensures easy sharing and collaboration.
- Automation and Consistency – Teams can now consume your modules without modifying core configurations.
With your module successfully published, the next step is learning how to consume and apply it in real-world projects. In Part two, we’ll explore the best practices for integrating modules into your Terraform workflows, ensuring smooth and efficient infrastructure provisioning. 🚀
See you in Part two