Running one container on one server is simple. Running a real production application means handling traffic spikes, server failures, rolling updates without downtime, and deploying across multiple machines. This is container orchestration. Docker Swarm is Docker's built-in orchestration tool — simpler than Kubernetes, but powerful enough for many real-world use cases.
Docker Swarm turns a group of Docker hosts (machines) into a single virtual cluster. Manager nodes make scheduling decisions and maintain the cluster state; worker nodes run the actual containers (by default, managers also run containers unless you drain them). When you deploy a service to the swarm, Docker distributes its containers across the nodes, monitors their health, restarts failed containers, and load-balances traffic between them automatically.
# On the manager node
docker swarm init --advertise-addr <MANAGER_IP>
# Output gives you a join token for workers
# Something like:
# docker swarm join --token SWMTKN-1-xxxx 192.168.1.100:2377
# On each worker node, run the join command
docker swarm join --token SWMTKN-1-xxxx 192.168.1.100:2377
# List all nodes in the swarm (from manager)
docker node ls
In Swarm mode you do not run containers directly — you create services. A service defines what image to run, how many replicas, port mappings, resource limits, and update policies.
# Create a web service with 3 replicas
docker service create --name web-service --replicas 3 --publish published=80,target=80 nginx:latest
# List services
docker service ls
# See which nodes are running the replicas
docker service ps web-service
# Scale service up or down
docker service scale web-service=5
# Update service image (rolling update)
docker service update --image nginx:1.25 --update-parallelism 1 --update-delay 10s web-service
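While an update like the one above is in progress, you can watch replicas being replaced and check the overall update state. A small sketch, using the service name from the example (note that the `UpdateStatus` field only appears after the service has been updated at least once):

```shell
# Watch old tasks shut down and new ones start, one replica at a time
docker service ps web-service

# Overall update state, e.g. "updating", "completed", or "paused"
docker service inspect --format '{{json .UpdateStatus}}' web-service
```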
Docker Stack lets you deploy a Compose-format file (the same syntax as docker-compose.yml, often saved as docker-stack.yml) to a swarm cluster. This is the recommended way to manage multi-service applications in Swarm:
version: '3.8'
services:
  web:
    image: yourusername/my-app:latest
    ports:
      - "80:5000"
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first   # start new task before stopping the old one
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    networks:
      - app-net
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    secrets:
      - db_password
    networks:
      - app-net
volumes:
  db-data:
networks:
  app-net:
    driver: overlay
secrets:
  db_password:
    external: true
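Because the stack file declares db_password as an external secret, it must exist in the swarm before the stack is deployed, or the deploy will fail. A minimal sketch (the password value here is a placeholder):

```shell
# Run on a manager node before `docker stack deploy`.
# Piping from stdin avoids writing the password to a file on disk.
printf 'changeme-example-password' | docker secret create db_password -

# Verify the secret exists (secret values are never displayed)
docker secret ls
```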
# Deploy stack
docker stack deploy -c docker-stack.yml my-app
# List stacks
docker stack ls
# List services in stack
docker stack services my-app
# Remove stack
docker stack rm my-app
Swarm supports rolling updates out of the box. When you update a service, Swarm updates replicas one by one (or in configurable batches), waiting between each update. This means at any moment, some replicas are running the old version and some are running the new version — but the service stays available throughout the entire update process.
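If a rolling update goes wrong, Swarm can also revert a service to its previous definition. A sketch using the service name from the earlier examples:

```shell
# Revert web-service to its previously deployed spec (image, env, etc.)
docker service rollback web-service

# Or have Swarm roll back automatically when an updated task fails to start
docker service update --update-failure-action rollback --image nginx:1.25 web-service
```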
This is a common question. Swarm is simpler to set up and manage — ideal for smaller teams and straightforward deployments. Kubernetes is more complex but offers more features and a richer ecosystem, and it is the industry standard for large-scale container orchestration. If you are running 10-50 containers and want simplicity, Swarm works great. If you are building a large microservices platform, Kubernetes is the right tool. Learning Swarm first is a good stepping stone to Kubernetes.
In Part 12, the final part of this series, we will build a complete real-world Docker project from scratch — bringing together everything you have learned in this tutorial series.
Docker Swarm and Kubernetes both orchestrate containers across multiple machines, but they serve different scales and complexity levels. Docker Swarm is simpler to set up and operate — if you know Docker Compose, Swarm uses nearly the same file format with minimal additions. It is appropriate for smaller deployments, teams without Kubernetes expertise, or situations where operational simplicity is the priority. Kubernetes has a steeper learning curve but offers more powerful capabilities: fine-grained resource management, a rich ecosystem of tools, better support for stateful applications, and the operational maturity that comes from running at much larger scale. In practice, most cloud-native deployments at any significant scale use Kubernetes (or managed services like EKS, GKE, AKS). Swarm is a pragmatic choice for smaller self-hosted deployments.
In Swarm mode, the cluster consists of manager nodes (which orchestrate services) and worker nodes (which run containers). Services define what to run and how many replicas to maintain — Swarm automatically distributes replicas across worker nodes and reschedules containers if a node fails. The docker service create command creates a service, docker service scale changes replica count, and docker service update performs rolling updates. Stacks in Swarm extend Docker Compose to deploy multi-service applications to the cluster using docker stack deploy — making the transition from single-machine Compose to multi-machine Swarm relatively straightforward.
Initialize a single-node Swarm with docker swarm init. Deploy a simple service: docker service create --name web --replicas 3 --publish 80:80 nginx. Verify three replicas are running with docker service ps web. Scale it: docker service scale web=5. Update the image: docker service update --image nginx:alpine web and watch the rolling update with docker service ps web. Finally, remove the service and leave the swarm. This exercise gives hands-on experience with all core Swarm operations.
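The exercise above, collected into one runnable sequence. This is a sketch assuming a fresh single-node Docker host; `--force` is needed when leaving a swarm from its only manager node:

```shell
docker swarm init                                  # single-node swarm; this node becomes the manager
docker service create --name web --replicas 3 --publish 80:80 nginx
docker service ps web                              # should list three running replicas
docker service scale web=5
docker service update --image nginx:alpine web
docker service ps web                              # old and new tasks visible during the rolling update
docker service rm web
docker swarm leave --force                         # --force because this is the only manager
```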
Cloud computing is a domain where deep intuition — the ability to make good architectural decisions quickly, to diagnose problems efficiently, and to anticipate how systems will behave under load — develops through accumulated hands-on experience. Every project you build on cloud infrastructure teaches you something that cannot be learned from documentation alone. The cost surprises, the permission errors, the networking debugging sessions, the performance investigations — these are not obstacles to learning, they are the learning. The engineers who have built genuinely deep cloud intuition have usually accumulated it through many projects over several years, not from any single course or certification. Start building things, make mistakes safely in learning environments, and accumulate that experience deliberately.