Docker Full Tutorial — Part 8: Multi-Stage Builds

By Suraj Ahir · November 29, 2025 · 6 min read


One of the most important Docker techniques for production deployments is multi-stage builds. Without this, your production images contain compilers, build tools, test frameworks, and development dependencies that your running application does not need. The result is bloated images that are slower to pull, take more storage, and have a larger attack surface. Multi-stage builds solve this elegantly.

The Problem — Fat Images

Consider building a Go application. The Go compiler alone is around 600MB. Your compiled application binary might be just 10MB. Why would you ship 600MB to production when the running application only needs 10MB? Before multi-stage builds, this was common. Developers would either use a bloated single-stage image or maintain separate Dockerfiles for building and running — which was messy.

Multi-Stage Build — How It Works

A multi-stage Dockerfile has multiple FROM statements, each starting a new build stage. Docker executes all stages but only keeps what you explicitly copy into the final stage. The intermediate stages and their layers are discarded. Your final image contains only what you need to run the application.

Multi-Stage — Go Application
# Stage 1: Build stage (uses large Go image)
FROM golang:1.21 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o myapp .

# Stage 2: Final stage (uses tiny Alpine image)
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy ONLY the compiled binary from builder stage
COPY --from=builder /app/myapp .
EXPOSE 8080
CMD ["./myapp"]

The first stage uses the full Go development image (about 800MB). The second stage uses Alpine Linux (about 5MB) and copies only the compiled binary. The resulting image is around 15MB instead of 800MB.
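Because the binary above is built with CGO_ENABLED=0, it is statically linked and can run on the empty scratch base image, which contains no operating system at all. A possible variant of the final stage, reusing the builder stage and paths from the Go example above:

```dockerfile
# Stage 2 (variant): scratch is completely empty — just your binary
FROM scratch
# Even static binaries need CA certificates for outbound TLS;
# copy them from the builder image instead of installing a package
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app/myapp /myapp
EXPOSE 8080
CMD ["/myapp"]
```

This shrinks the image to roughly the size of the binary itself, at the cost of having no shell for debugging inside the container.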

Multi-Stage for Python Applications

Multi-Stage — Python
# Stage 1: Build dependencies
FROM python:3.11 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user --no-cache-dir -r requirements.txt

# Stage 2: Production image
FROM python:3.11-slim
WORKDIR /app
# Copy installed packages from builder
COPY --from=builder /root/.local /root/.local
COPY . .
ENV PATH=/root/.local/bin:$PATH
EXPOSE 5000
CMD ["gunicorn", "app:app", "-b", "0.0.0.0:5000"]

Multi-Stage for Node.js / React

This pattern is very common for frontend applications where you build during CI but only need the compiled static files at runtime:

Multi-Stage — React App
# Stage 1: Build the React app
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
# Install all dependencies — devDependencies are needed for the build step
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Serve with Nginx (no Node.js needed!)
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

The final image contains only Nginx and your compiled React files. No Node.js, no npm, no source code. The size goes from potentially 1GB+ down to around 30MB.
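The Dockerfile above copies an nginx.conf from your project. If you do not have one yet, a minimal config for a single-page app might look like the following sketch (the port and root path are illustrative; adjust them to your setup). Since it replaces the main /etc/nginx/nginx.conf, it must include the events and http blocks:

```nginx
# nginx.conf — minimal config for serving a single-page app
events {}
http {
  include /etc/nginx/mime.types;
  server {
    listen 80;
    root /usr/share/nginx/html;
    # Fall back to index.html so client-side routes work on refresh
    location / {
      try_files $uri $uri/ /index.html;
    }
  }
}
```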

Named Stages and Selective Targeting

Named Stages
# Dockerfile with a shared base, a test stage, and a production stage
FROM python:3.11 AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

FROM base AS test
RUN pip install pytest
COPY . .
RUN pytest tests/

FROM base AS production
COPY . .
CMD ["python", "app.py"]

# Build only a specific stage (useful for running tests in CI)
docker build --target test -t my-app:test .
docker build --target production -t my-app:prod .

Copying from External Images

You can copy files from any image, not just stages in the same Dockerfile. Keep in mind that dynamically linked binaries also depend on shared libraries in their source image, so copying a lone binary like this works reliably only for static binaries or self-contained files:

Copy From External Image
FROM ubuntu:22.04
# Copy a binary from an official image
COPY --from=nginx:latest /usr/sbin/nginx /usr/sbin/nginx
COPY --from=postgres:15 /usr/bin/psql /usr/bin/psql

Coming Up in Part 9

In Part 9, we will cover Docker security — running containers safely, avoiding common vulnerabilities, and following security best practices for production environments.

Optimizing Multi-Stage Builds Further

Multi-stage builds can be further optimized by targeting specific stages during development. Use docker build --target builder . to build only up to the builder stage — useful for running tests or debugging in the build environment without producing the final runtime image. You can also use build arguments to pass configuration into the build process: ARG BUILD_ENV=production in the Dockerfile and docker build --build-arg BUILD_ENV=development . to override it. Build arguments are available only during the build process, not in the running container — use environment variables for runtime configuration.
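As a sketch, the build-argument pattern described above might look like this (the BUILD_ENV name and the build command's flag are illustrative and depend on your build tooling):

```dockerfile
FROM node:20 AS build
# Default value; override with: docker build --build-arg BUILD_ENV=development .
ARG BUILD_ENV=production
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# ARG values exist only at build time — the result is baked into the image here
RUN npm run build -- --mode $BUILD_ENV
```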

Distroless Images for Maximum Security

Google's distroless images take the minimal image philosophy further than Alpine. Distroless images contain only your application and its runtime dependencies — no package manager, no shell, no operating system utilities. This dramatically reduces the attack surface since there is no shell for an attacker to execute even if they gain code execution. The trade-off is that debugging becomes harder since you cannot exec into the container and run commands. A practical compromise: use distroless in production builds, Alpine-based images in development builds where debugging capability matters. The multi-stage pattern supports this easily by maintaining separate final-stage definitions.
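A hedged sketch of the distroless pattern applied to the Go example from earlier (the image tag comes from Google's distroless repository; verify the current tag names before relying on them):

```dockerfile
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o myapp .

# Distroless static image: no shell, no package manager, runs as non-root
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /app/myapp /myapp
EXPOSE 8080
CMD ["/myapp"]
```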

Practice Exercise

Take an existing application with a basic Dockerfile and optimize it through multiple iterations: first, measure the baseline image size with docker images. Then apply a multi-stage build and measure the improvement. Then try switching the runtime stage to an Alpine or slim base image and measure again. Finally, experiment with ordering Dockerfile instructions to maximize cache efficiency — move frequently changing layers (like copying application code) after less frequently changing layers (like installing dependencies). Document the size reduction at each step.
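For the cache-ordering step of the exercise, remember that Docker caches layers top to bottom and invalidates everything after the first changed instruction, so put the stable layers first. A sketch for a Python app (file names assumed):

```dockerfile
FROM python:3.11-slim
WORKDIR /app
# Dependencies change rarely — keep them in an early, cacheable layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code changes often — copy it last so edits
# invalidate only this layer and the ones after it
COPY . .
CMD ["python", "app.py"]
```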

Building Cloud Intuition Over Time

Cloud computing is a domain where deep intuition — the ability to make good architectural decisions quickly, to diagnose problems efficiently, and to anticipate how systems will behave under load — develops through accumulated hands-on experience. Every project you build on cloud infrastructure teaches you something that cannot be learned from documentation alone. The cost surprises, the permission errors, the networking debugging sessions, the performance investigations — these are not obstacles to learning, they are the learning. The engineers who have built genuinely deep cloud intuition have usually accumulated it through many projects over several years, not from any single course or certification. Start building things, make mistakes safely in learning environments, and accumulate that experience deliberately.

Disclaimer: This content is for educational purposes only. SRJahir Tech does not guarantee any specific outcome or job placement.