Table of Contents
- What is Docker Compose?
- Why Use Docker Compose?
- Installing Docker Compose
- Understanding docker-compose.yml
- Creating a Multi-Container Application
- Managing Services with Compose
- Networking in Docker Compose
- Volumes and Data Persistence
- Best Practices for Writing Compose Files
- Conclusion
What is Docker Compose?
Docker Compose is a tool that allows you to define and run multi-container Docker applications. Using a single YAML configuration file (docker-compose.yml), you can specify the services, networks, and volumes required for your app and spin them up with one command.
It is especially useful in microservice-oriented applications, where different components (e.g., API, database, cache) run in separate containers.
Why Use Docker Compose?
- Simplified Configuration: All container definitions in one file.
- Easy Environment Replication: Consistent development, staging, and production setups.
- One-Command Setup: Bring up all services using docker-compose up.
- Supports Volumes and Networks: Preconfigure how containers communicate and store data.
Installing Docker Compose
If you have Docker Desktop (on macOS or Windows), Docker Compose is already included. On recent Docker Engine installations, Compose also ships as a CLI plugin, invoked as docker compose (no hyphen).
On Linux (standalone CLI installation):
sudo curl -L "https://github.com/docker/compose/releases/download/v2.22.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
Understanding docker-compose.yml
A basic docker-compose.yml file looks like this:
version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
  redis:
    image: "redis:alpine"
Breakdown:
- version: Specifies the Compose file format version.
- services: Defines each container.
- build: Builds the image from the current directory's Dockerfile.
- image: Pulls an image from Docker Hub.
- ports: Maps a host port to a container port ("HOST:CONTAINER").
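A service definition supports many more keys than the ones above. As a hedged sketch (the values here are illustrative, not part of the app built below), environment variables and a restart policy are commonly added:

```yaml
services:
  web:
    build: .
    ports:
      - "3000:3000"
    # Illustrative values; adjust for your own app
    environment:
      - NODE_ENV=production
    # Restart the container automatically unless stopped manually
    restart: unless-stopped
```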
Creating a Multi-Container Application
Let’s build a basic app with a Node.js API and a Redis cache.
Directory Structure
/multi-app
├── app
│ ├── Dockerfile
│ ├── index.js
│ └── package.json
└── docker-compose.yml
index.js
const express = require('express');
const redis = require('redis');

const app = express();
const client = redis.createClient({ url: 'redis://redis:6379' });

// Log connection errors instead of letting them crash the process
client.on('error', (err) => console.error('Redis error:', err));
client.connect();

app.get('/', async (req, res) => {
  const count = await client.incr('visits');
  res.send(`Visit count: ${count}`);
});

app.listen(3000, () => {
  console.log('Server running on port 3000');
});
package.json
{
  "name": "docker-compose-app",
  "dependencies": {
    "express": "^4.18.2",
    "redis": "^4.6.7"
  }
}
Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
docker-compose.yml
version: '3.8'
services:
  web:
    build: ./app
    ports:
      - "3000:3000"
    depends_on:
      - redis
  redis:
    image: redis:alpine
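For local development, a bind mount keeps the container's code in sync with edits on the host so you don't have to rebuild after every change. A hedged sketch extending the web service (an optional variation, not required for the app above):

```yaml
services:
  web:
    build: ./app
    ports:
      - "3000:3000"
    volumes:
      # Mount the source directory for live edits during development
      - ./app:/app
      # Anonymous volume so the image's node_modules is not shadowed
      - /app/node_modules
```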
Run the App
docker-compose up --build
Navigate to http://localhost:3000. Each refresh increments the counter using Redis.
Managing Services with Compose
- Start services: docker-compose up -d
- Stop services: docker-compose down
- Rebuild services: docker-compose up --build
- View logs: docker-compose logs -f
Networking in Docker Compose
All services defined in a Compose file share a common default network. This allows containers to refer to each other by their service names (redis, web, etc.) without needing IP addresses.
You can define custom networks:
networks:
  backend:
Then assign services to them:
services:
  web:
    networks:
      - backend
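Putting both pieces together, a hedged sketch of the example app with traffic isolated on a custom backend network (both services must join the network to reach each other):

```yaml
services:
  web:
    build: ./app
    networks:
      - backend
  redis:
    image: redis:alpine
    networks:
      - backend

networks:
  backend:
```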
Volumes and Data Persistence
To persist Redis data:
services:
  redis:
    image: redis:alpine
    volumes:
      - redis-data:/data

volumes:
  redis-data:
This ensures Redis data survives when the container is removed and re-created.
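Named volumes are managed by Docker; a bind mount maps a host directory instead, which is handy for inspecting or backing up data directly. A hedged sketch (the host path is illustrative):

```yaml
services:
  redis:
    image: redis:alpine
    volumes:
      # Named volume, managed by Docker:
      - redis-data:/data
      # Bind mount alternative (illustrative host path):
      # - ./redis-data:/data

volumes:
  redis-data:
```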
Best Practices for Writing Compose Files
- Use .env Files for environment configuration.
- Use Specific Image Versions: Avoid using the latest tag blindly.
- Keep Services Modular: Break monolithic services into distinct containers.
- Use Health Checks to monitor container readiness.
- Avoid Hardcoding Secrets: Use secret management tools or Docker secrets.
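As a hedged sketch of the health-check practice applied to the example app, Compose can probe Redis with redis-cli ping and start the web service only once Redis reports healthy (the depends_on condition form is supported by the Compose Specification in modern docker compose; the timing values here are illustrative):

```yaml
services:
  web:
    build: ./app
    depends_on:
      redis:
        condition: service_healthy
  redis:
    image: redis:alpine
    healthcheck:
      # redis-cli ping returns PONG once the server is ready
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
```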
Conclusion
Docker Compose enables you to orchestrate and manage multi-container applications effortlessly. It’s foundational for microservices and essential for any DevOps pipeline involving local development, staging, or testing.