Before Infrastructure as Code, setting up a server meant logging into a web console, clicking through menus, filling in forms, and hoping you remembered every step when you needed to replicate it. One engineer could set up a dev environment completely differently from production. Changes were undocumented. Rollbacks were manual and scary. Infrastructure as Code (IaC) changed all of this. Terraform is the tool that made IaC accessible to everyone.
Infrastructure as Code means your servers, databases, networks, load balancers, and every other infrastructure component are described in code files. You commit those files to Git just like application code. You can review changes with pull requests, roll back to previous states, recreate identical environments on demand, and ensure dev and production are always consistent. The infrastructure becomes reproducible, auditable, and collaborative.
Terraform is an open-source IaC tool created by HashiCorp. It uses a declarative language called HCL (HashiCorp Configuration Language) to describe infrastructure. You write what you want — "I want an EC2 instance with these specifications" — and Terraform figures out how to create it, update it, or delete it. Terraform works with AWS, GCP, Azure, and hundreds of other providers.
```hcl
# Configure the AWS provider
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}
```
```hcl
# Create an EC2 instance
resource "aws_instance" "web_server" {
  ami           = "ami-0f5ee92e2d63afc18" # Ubuntu 22.04 in Mumbai
  instance_type = "t2.micro"

  tags = {
    Name        = "MyWebServer"
    Environment = "Development"
    ManagedBy   = "Terraform"
  }
}

# Create an S3 bucket (bucket names must be globally unique)
resource "aws_s3_bucket" "app_storage" {
  bucket = "my-app-storage-bucket-unique-name"

  tags = {
    Name = "AppStorage"
  }
}
```
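Output values let you surface attributes of created resources — such as the instance's public IP — after an apply. A minimal sketch matching the resource names above:

```hcl
# outputs.tf — expose attributes of the resources defined above
output "web_server_public_ip" {
  description = "Public IP of the web server"
  value       = aws_instance.web_server.public_ip
}

output "bucket_arn" {
  description = "ARN of the app storage bucket"
  value       = aws_s3_bucket.app_storage.arn
}
```

After `terraform apply`, running `terraform output web_server_public_ip` prints just that value, which is handy in scripts.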
```shell
# Initialize (download providers)
terraform init

# Preview changes (no actual changes made)
terraform plan

# Apply changes (creates/updates infrastructure)
terraform apply

# Apply without confirmation prompt (for CI/CD)
terraform apply -auto-approve

# Destroy all resources
terraform destroy

# Show current state
terraform show

# List resources in state
terraform state list
```
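A few more commands are worth folding into the routine: `terraform fmt` keeps files in canonical style, `terraform validate` catches syntax mistakes before a plan, and saving a plan file guarantees that what you reviewed is exactly what gets applied:

```shell
# Canonically format all .tf files in the current directory
terraform fmt

# Check the configuration for syntax and internal consistency
terraform validate

# Save a plan, then apply exactly that plan (no drift between review and apply)
terraform plan -out=tfplan
terraform apply tfplan
```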
```hcl
# variables.tf
variable "instance_type" {
  description = "EC2 instance type"
  type        = string
  default     = "t2.micro"
}

variable "environment" {
  description = "Deployment environment"
  type        = string
}

# Use the variables in main.tf
resource "aws_instance" "web" {
  ami           = "ami-0f5ee92e2d63afc18" # same Ubuntu AMI as above
  instance_type = var.instance_type

  tags = {
    Environment = var.environment
  }
}

# terraform.tfvars (never commit to Git if it contains secrets)
instance_type = "t2.small"
environment   = "production"
```
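Besides `terraform.tfvars`, variable values can be supplied at run time on the command line, through environment variables, or from an explicitly named variables file:

```shell
# Pass a variable on the command line
terraform apply -var="environment=staging"

# Or via an environment variable (TF_VAR_<variable name>)
export TF_VAR_environment=staging
terraform apply

# Or from an explicit variables file
terraform apply -var-file="staging.tfvars"
```

Command-line `-var` flags take precedence over environment variables and tfvars files, which is useful for one-off overrides in CI.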
By default Terraform stores state locally. This does not work for teams — two engineers running Terraform simultaneously can corrupt the state. Remote state in S3 with DynamoDB locking solves this:
```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "production/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-state-lock"
    encrypt        = true
  }
}
```
In Part 8, we will cover Ansible — configuration management and server automation, the tool DevOps teams use to keep server configurations consistent and apply changes across fleets of servers.
Terraform maintains a state file that maps your configuration to real infrastructure resources. This state file is critical — losing it means Terraform loses track of what it created. For team use, store state remotely in an S3 bucket (AWS), GCS bucket (GCP), or Terraform Cloud, never in local files or version control. Remote state also enables state locking — preventing two team members from running terraform apply simultaneously and creating conflicts. Use workspaces or separate state files for different environments (dev, staging, production) to isolate changes and reduce the blast radius of mistakes.
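Workspaces are managed from the CLI; each workspace gets its own state under the same backend, and the current workspace name is available in configuration as `terraform.workspace`:

```shell
# Create isolated state environments
terraform workspace new dev
terraform workspace new staging

# See all workspaces and switch between them
terraform workspace list
terraform workspace select dev
```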
Terraform modules are reusable packages of infrastructure configuration — similar to functions in programming. A well-designed module accepts input variables, creates a set of related resources, and exposes output values. Creating modules for common patterns — a standard VPC layout, an EC2 instance with monitoring, an RDS database with proper security groups — allows consistent, auditable infrastructure across multiple projects and environments. The Terraform Registry provides community modules for common use cases; evaluate them carefully before using in production.
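Calling a module looks like this — the local module path, variable names, and output name below are illustrative, while `terraform-aws-modules/vpc/aws` is a widely used community module from the Terraform Registry:

```hcl
# Call a local module (path and inputs are illustrative)
module "web_server" {
  source = "./modules/web_server"

  instance_type = var.instance_type
  environment   = var.environment
}

# Call a Registry module, pinned to a version range
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "app-vpc"
  cidr = "10.0.0.0/16"
}

# Consume a module's outputs elsewhere in the configuration
output "web_instance_id" {
  value = module.web_server.instance_id
}
```

Pinning module versions matters for the same reason as pinning provider versions: an unpinned `source` can silently pull in breaking changes on the next `terraform init`.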
Write a Terraform configuration that creates a VPC with public and private subnets, an EC2 instance in the public subnet with a security group allowing SSH from your IP, and outputs the instance's public IP. Use variables for configurable values like instance type and your IP address. Run terraform plan to preview changes before applying. After creating the infrastructure, verify you can SSH to the instance, then destroy everything with terraform destroy.
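If you want a starting point, here is a partial skeleton for the exercise — the AMI ID reuses the one from earlier in this article, the CIDR ranges are placeholders, and the internet gateway and route table needed to actually reach the instance are deliberately left for you to add:

```hcl
variable "instance_type" {
  type    = string
  default = "t2.micro"
}

variable "my_ip" {
  description = "Your public IP in CIDR form, e.g. 203.0.113.5/32"
  type        = string
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.2.0/24"
}

resource "aws_security_group" "ssh" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.my_ip]
  }
}

resource "aws_instance" "web" {
  ami                    = "ami-0f5ee92e2d63afc18" # AMI used earlier in this article
  instance_type          = var.instance_type
  subnet_id              = aws_subnet.public.id
  vpc_security_group_ids = [aws_security_group.ssh.id]
}

output "public_ip" {
  value = aws_instance.web.public_ip
}
```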
DevOps is not a destination but a continuous journey of improvement. The practices covered here — automation, monitoring, infrastructure as code, CI/CD pipelines — are tools in service of a deeper goal: enabling teams to deliver software changes to production quickly, safely, and reliably. The measurement that matters is not which tools you use but how long it takes to go from a committed code change to running in production, and how confident you are in that process. The best DevOps teams measure their deployment frequency, lead time for changes, change failure rate, and mean time to recovery (the DORA metrics), and treat these as engineering objectives to improve over time.