Table of Contents
- What is Kubernetes?
- Why Kubernetes?
- Core Kubernetes Architecture
- Key Building Blocks in Kubernetes
- Setting Up a Kubernetes Cluster
- Interacting with Kubernetes using kubectl
- How Kubernetes Works: Behind the Scenes
- Conclusion
What is Kubernetes?
Kubernetes is an open-source container orchestration platform that automates many manual processes involved in deploying, managing, and scaling containerized applications.
Originally created at Google (inspired by their internal system “Borg”), Kubernetes is now maintained by the Cloud Native Computing Foundation (CNCF). It has become the de facto standard for container orchestration in cloud-native application development.
Why Kubernetes?
With microservices and containers dominating modern application architectures, Kubernetes helps address the following challenges:
- Managing hundreds/thousands of containers across machines
- Ensuring application reliability and uptime
- Rolling updates without downtime
- Self-healing of services (e.g., auto-restart failed pods)
- Load balancing and service discovery
- Storage orchestration
- Automated rollbacks
Kubernetes abstracts the underlying infrastructure and provides a consistent platform for running applications anywhere.
Core Kubernetes Architecture
Kubernetes uses a control plane / worker node architecture (historically called "master-worker").
Control Plane Components
These components make decisions about the cluster (e.g., scheduling), and detect/respond to cluster events:
- kube-apiserver: Frontend of the control plane. All REST operations go through it.
- etcd: A consistent and highly-available key-value store for cluster configuration and state.
- kube-scheduler: Assigns workloads (pods) to available nodes based on resource requirements, policies, etc.
- kube-controller-manager: Runs the built-in controllers, such as the node controller, replication controller, and endpoints controller.
- cloud-controller-manager (optional): Integrates cloud-specific APIs for provisioning resources like load balancers, volumes.
Node Components
These run on every worker node and maintain the lifecycle of pods:
- kubelet: Communicates with the API server and ensures containers are running.
- kube-proxy: Maintains network rules, implements service discovery and routing.
- Container Runtime: Responsible for running the containers (e.g., Docker, containerd, CRI-O).
Key Building Blocks in Kubernetes
Pods
A pod is the smallest unit in Kubernetes, representing a single instance of a running process. A pod can contain one or more tightly coupled containers that share:
- Network IP and port space
- Storage volumes
- Process namespace (optional)
Pods are ephemeral – if a pod dies, it is replaced by another pod with a different IP.
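A minimal single-container Pod manifest ties these ideas together. The name, labels, and image below are illustrative, not required values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod       # illustrative name
  labels:
    app: webapp
spec:
  containers:
    - name: nginx
      image: nginx:1.21 # any container image works here
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods like this; higher-level objects such as Deployments create and replace them for you.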
ReplicaSets
ReplicaSets ensure that a specified number of identical pod replicas are running at all times. If a pod crashes, the ReplicaSet automatically replaces it.
Deployments
A Deployment wraps ReplicaSets and manages them declaratively. It allows:
- Rolling updates
- Rollbacks
- Versioned releases
- Declarative configuration
Example Deployment YAML:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:1.21
          ports:
            - containerPort: 80
```
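Rolling updates and rollbacks can be driven with kubectl's rollout subcommands. A typical flow against the Deployment above (this assumes a running cluster) might look like:

```bash
# Trigger a rolling update by changing the container image
kubectl set image deployment/my-app webapp=nginx:1.22

# Watch the rollout progress until all replicas are updated
kubectl rollout status deployment/my-app

# Inspect revision history and roll back if something breaks
kubectl rollout history deployment/my-app
kubectl rollout undo deployment/my-app
```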
Services
A Service provides a stable endpoint for accessing a set of pods.
Types of Services:
- ClusterIP: Accessible only within the cluster.
- NodePort: Exposes the service on each node's IP at a static port.
- LoadBalancer: Provisions an external load balancer (cloud-only).
- Headless: Allows direct access to pods for stateful workloads.
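As a sketch, a ClusterIP Service that fronts the pods labeled app: webapp (matching the Deployment example above; the Service name is illustrative) could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service # illustrative name
spec:
  type: ClusterIP      # default; omitting type has the same effect
  selector:
    app: webapp        # targets pods carrying this label
  ports:
    - port: 80         # port the Service listens on
      targetPort: 80   # container port traffic is forwarded to
```

The selector is what links the Service to its pods: any pod whose labels match is added to the Service's endpoints automatically.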
Setting Up a Kubernetes Cluster
Local: Minikube & Kind
Minikube
Minikube runs a single-node Kubernetes cluster on your local machine using a VM or container.
```bash
minikube start
kubectl get nodes
```
You can deploy a sample app:
```bash
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment hello-minikube --type=NodePort --port=8080
minikube service hello-minikube
```
Kind (Kubernetes IN Docker)
Kind runs Kubernetes clusters inside Docker containers, making it great for CI pipelines and for simulating multi-node clusters locally.
```bash
kind create cluster
kubectl cluster-info
```
Managed Kubernetes Services
For production environments, you’ll typically use a managed Kubernetes service:
- GKE (Google Kubernetes Engine)
- EKS (Elastic Kubernetes Service by AWS)
- AKS (Azure Kubernetes Service)
These offer:
- Auto-scaling
- Built-in monitoring/logging
- Native cloud integrations
Interacting with Kubernetes using kubectl
kubectl is the command-line tool for interacting with your Kubernetes cluster.
Common Commands
Command | Description
---|---
kubectl get nodes | List all nodes in the cluster
kubectl get pods | List running pods
kubectl describe pod <name> | Show detailed information about a pod
kubectl logs <name> | Show logs of a container
kubectl exec -it <pod> -- bash | Open a shell inside a pod
kubectl apply -f file.yaml | Create or update resources from a manifest
kubectl delete -f file.yaml | Delete the resources defined in a manifest
How Kubernetes Works: Behind the Scenes
1. You submit a Deployment manifest to the API server.
2. The scheduler assigns the new pods to suitable nodes.
3. The kubelet on each chosen node pulls the image and starts the containers.
4. The Deployment's ReplicaSet monitors pod health and creates replacements if pods fail.
5. A Service provides a stable endpoint for reaching the pods.
6. An Ingress or LoadBalancer routes external traffic to the Service.
All this happens declaratively, managed by the control loop design pattern built into Kubernetes.
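The control loop itself is simple to sketch. The toy reconcile function below (plain Python, not real Kubernetes code) illustrates the observe/compare/act cycle a controller runs: compare desired state with observed state and emit whatever actions close the gap.

```python
def reconcile(desired_replicas, running_pods, next_id):
    """One pass of a toy control loop: compare desired vs. observed
    state and return the actions needed to converge."""
    actions = []
    observed = len(running_pods)  # observed state: pods currently running
    if observed < desired_replicas:
        # Too few pods: create replacements with fresh names
        for _ in range(desired_replicas - observed):
            actions.append(("create", f"pod-{next_id}"))
            next_id += 1
    elif observed > desired_replicas:
        # Too many pods: delete the surplus
        for pod in running_pods[desired_replicas:]:
            actions.append(("delete", pod))
    return actions, next_id

# A pod has crashed: desired state says 3 replicas, only 2 are running.
actions, _ = reconcile(3, ["pod-0", "pod-1"], next_id=2)
print(actions)  # [('create', 'pod-2')]
```

Real controllers run this loop continuously against the API server, which is why a deleted pod reappears moments later: the ReplicaSet controller observes the gap and acts to close it.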
Conclusion
Kubernetes is a powerful tool for orchestrating containerized applications. It brings resilience, scalability, and flexibility to the modern DevOps stack. By understanding its architecture and components, you can harness the full power of cloud-native infrastructure.