Table of Contents
- Overview
- Why Use Kubernetes for Deployment?
- Step-by-Step: Deploying Dockerized Applications
- Kubernetes YAML File Deep Dive
- Managing Pods, Deployments, and Services
- Best Practices
- Conclusion
Overview
In this module, we focus on deploying Dockerized applications to a Kubernetes cluster. You’ll learn how to:
- Package your application as a Docker image
- Write Kubernetes manifests (YAML)
- Create and manage deployments, pods, and services
This is the core workflow for running production-grade containerized applications in Kubernetes.
Why Use Kubernetes for Deployment?
Kubernetes offers a powerful deployment engine that abstracts underlying infrastructure complexity and provides:
- Declarative management: Define the desired state using YAML
- Self-healing: Kubernetes ensures that pods are restarted on failure
- Scalability: Easily scale your application up or down
- Rolling updates and rollbacks: Built-in support for zero-downtime deployments
- Service discovery and load balancing
Step-by-Step: Deploying Dockerized Applications
Let’s go through the complete workflow.
1. Dockerizing Your Application
Suppose you have a simple Node.js app. Create a Dockerfile:
```dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```
Build and tag the image:
```bash
docker build -t myapp:latest .
```
2. Pushing to a Container Registry
Before Kubernetes can pull your Docker image, it must be in a registry like Docker Hub or GitHub Container Registry:
```bash
docker tag myapp:latest yourusername/myapp:latest
docker push yourusername/myapp:latest
```
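If the registry is private, the cluster also needs pull credentials. A minimal sketch of how a pod template references them, assuming a docker-registry Secret named `regcred` has already been created with `kubectl create secret docker-registry` (the Secret name is an assumption for this example):

```yaml
# Pod template fragment: reference a registry-credential Secret.
# "regcred" is a hypothetical name; create the Secret first with
# kubectl create secret docker-registry.
spec:
  containers:
  - name: myapp
    image: yourusername/myapp:latest
  imagePullSecrets:
  - name: regcred
```

Public images on Docker Hub need no credentials, so this step is optional for the example app.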
3. Creating Kubernetes Manifests
You need to create YAML files that define your Kubernetes objects:
- Deployment: Specifies the pods and container configuration
- Service: Exposes the pods on a network endpoint
4. Applying the Manifests
Use `kubectl` to apply the configurations:
```bash
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
```
5. Exposing the Application with a Service
If you’re running on Minikube or using `NodePort`, you can expose your application:
```bash
minikube service myapp-service
```
Kubernetes YAML File Deep Dive
Let’s break down the essential manifest files.
Deployment Manifest
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: yourusername/myapp:latest
        ports:
        - containerPort: 3000
```
- `replicas`: Number of pod copies to run
- `selector`: Targets the pods to manage
- `containers.image`: Docker image to run
- `containerPort`: Port exposed by the container
Service Manifest
```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  type: NodePort
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30001
```
- `selector`: Connects the service to pods with matching labels
- `type: NodePort`: Exposes the service on a static port on each node
- `port`: Port exposed by the service
- `targetPort`: Port the container listens on
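`NodePort` is convenient on Minikube, but on managed cloud clusters a `LoadBalancer` Service is the more common choice. A minimal sketch using the same selector and target port as above (the external port of 80 is an illustrative choice):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  type: LoadBalancer   # the cloud provider provisions an external load balancer
  ports:
  - port: 80           # externally exposed port
    targetPort: 3000   # port the container listens on
```

With this type, Kubernetes asks the cloud provider for an external IP; check `kubectl get services` to see when it is assigned.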
Managing Pods, Deployments, and Services
Once deployed, you can inspect and manage your resources:
Check Deployments
```bash
kubectl get deployments
kubectl describe deployment myapp-deployment
```
Check Pods
```bash
kubectl get pods
kubectl logs <pod-name>
kubectl exec -it <pod-name> -- /bin/sh
```
Check Services
```bash
kubectl get services
```
Scaling
```bash
kubectl scale deployment myapp-deployment --replicas=5
```
Rolling Update
Just update the image in your deployment YAML and re-apply:
```bash
kubectl apply -f deployment.yaml
```
To watch rollout progress:
```bash
kubectl rollout status deployment myapp-deployment
```
Rollback if needed:
```bash
kubectl rollout undo deployment myapp-deployment
```
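Rolling-update behavior can also be tuned in the Deployment spec itself. A sketch of a conservative, zero-downtime-leaning strategy (the specific values are illustrative, not the only sensible choice):

```yaml
# Fragment of the Deployment spec controlling how pods are replaced.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow at most 1 extra pod above the desired count
      maxUnavailable: 0  # never drop below the desired replica count
```

With `maxUnavailable: 0`, new pods must become ready before old ones are terminated, which pairs well with the readiness probes discussed under Best Practices.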
Best Practices
- Use `readinessProbe` and `livenessProbe` for app health checks
- Keep manifests DRY using Kustomize or Helm (covered in later modules)
- Use namespaces to isolate environments (e.g., dev, staging, prod)
- Tag your images properly (`myapp:v1.0.0`) for better tracking
- Set resource requests and limits (`resources.requests`/`resources.limits`) to avoid overuse
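The probe and resource recommendations above can be sketched as a container fragment for the Deployment's pod template. The `/healthz` path is an assumed health endpoint for the example app, and the CPU/memory sizes are illustrative starting points, not tuned values:

```yaml
# Container fragment for the Deployment's pod template.
containers:
- name: myapp
  image: yourusername/myapp:v1.0.0   # pinned tag instead of :latest
  ports:
  - containerPort: 3000
  readinessProbe:                    # gates traffic until the app responds
    httpGet:
      path: /healthz                 # assumed health endpoint
      port: 3000
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:                     # restarts the container if it stops responding
    httpGet:
      path: /healthz
      port: 3000
    initialDelaySeconds: 15
    periodSeconds: 20
  resources:
    requests:                        # scheduler reserves this much per pod
      cpu: 100m
      memory: 128Mi
    limits:                          # container is throttled/killed beyond this
      cpu: 500m
      memory: 256Mi
```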
Conclusion
Deploying applications to Kubernetes requires an understanding of Docker, YAML manifests, and the core components like Pods, Deployments, and Services. Kubernetes makes the deployment process resilient, scalable, and cloud-native-ready.