Hanso Group

Kubernetes Fundamentals

Julian Lindner · 17 minute read

In today’s cloud-native landscape, Kubernetes has emerged as the de facto standard for container orchestration. Whether you’re running a small startup or managing enterprise infrastructure, understanding Kubernetes is increasingly essential for modern application deployment. This article provides a practical introduction to Kubernetes fundamentals, focusing on core concepts and hands-on examples.

What Is Kubernetes?

Kubernetes (often abbreviated as K8s) is an open-source platform designed to automate deploying, scaling, and operating containerized applications. Originally developed by Google based on their internal system called Borg, Kubernetes provides a container-centric management environment.

The name “Kubernetes” comes from Greek, meaning “helmsman” or “pilot,” which is fitting for a system that steers the complex world of container orchestration. Its key capabilities include:

  • Service discovery and load balancing
  • Storage orchestration
  • Automated rollouts and rollbacks
  • Self-healing
  • Secret and configuration management
  • Batch execution
  • Horizontal scaling

Core Kubernetes Architecture

Before diving into practical examples, let’s understand the fundamental components that make up a Kubernetes cluster:

Control Plane Components

The control plane manages the worker nodes and the Pods in the cluster. These components make global decisions about the cluster and detect and respond to cluster events.

  • kube-apiserver: Exposes the Kubernetes API, serving as the front end for the Kubernetes control plane
  • etcd: Consistent and highly-available key-value store used as Kubernetes’ backing store for all cluster data
  • kube-scheduler: Watches newly created Pods with no assigned node and selects a node for them to run on
  • kube-controller-manager: Runs controller processes such as node controller, job controller, and service account controller
  • cloud-controller-manager: Links your cluster into your cloud provider’s API

Node Components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

  • kubelet: Agent that runs on each node, ensuring containers are running in a Pod
  • kube-proxy: Network proxy that maintains network rules on nodes
  • Container runtime: Software responsible for running containers (Docker, containerd, CRI-O, etc.)

Getting Started: Setting Up a Local Kubernetes Cluster

For learning purposes, setting up a local Kubernetes environment is a practical way to experiment without cloud costs. Several tools facilitate this process, but we’ll focus on Kind (Kubernetes IN Docker) for its simplicity and lightweight nature.

Installing Kind

First, ensure you have Docker installed, then install Kind:

# For macOS (using Homebrew)
brew install kind

# For Linux
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# For Windows (using Chocolatey)
choco install kind

Creating Your First Cluster

Once Kind is installed, create a cluster with a simple command:

kind create cluster --name my-cluster

This command creates a Kubernetes cluster running inside Docker containers instead of virtual machines, making it fast and efficient.
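By default, Kind creates a single-node cluster. It can also stand up multi-node clusters from a small config file — a minimal sketch (the file name kind-config.yaml is just a convention):

```yaml
# kind-config.yaml — one control-plane node and two workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```

Pass it at creation time with kind create cluster --name my-cluster --config kind-config.yaml.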

Verify your cluster is running:

kubectl cluster-info

If successful, you should see information about the Kubernetes control plane and CoreDNS.

Kubernetes Objects: The Building Blocks

Kubernetes objects are persistent entities that represent the state of your cluster. Let’s explore the fundamental objects you’ll work with:

Pods

Pods are the smallest deployable units in Kubernetes, representing a single instance of a running process in your cluster. A Pod encapsulates one or more containers, storage resources, a unique network IP, and options that govern how the container(s) should run.

Here’s a simple example of a Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17.3
    ports:
    - containerPort: 80

Create the Pod:

kubectl apply -f nginx-pod.yaml

Verify the Pod is running:

kubectl get pods
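Since a Pod can wrap more than one container, a common pattern is to run a helper alongside the main process. A minimal two-container sketch (the busybox sidecar here is purely illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-sidecar
spec:
  containers:
  - name: nginx
    image: nginx:1.17.3
    ports:
    - containerPort: 80
  - name: sidecar               # illustrative helper container
    image: busybox:1.36
    command: ["sh", "-c", "sleep infinity"]
```

All containers in a Pod share the same network namespace, so the sidecar can reach nginx on localhost:80.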

Deployments

While Pods are the basic unit of computation, Deployments provide declarative updates for Pods and ReplicaSets. They allow you to:

  • Describe a desired state
  • Change the actual state to the desired state at a controlled rate
  • Roll back to previous Deployment revisions

Here’s an example Deployment manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.3
        ports:
        - containerPort: 80

Create the Deployment:

kubectl apply -f nginx-deployment.yaml

Verify the Deployment created the desired number of Pods:

kubectl get deployments
kubectl get pods

Services

Services define a logical set of Pods and a policy to access them. As Pods are ephemeral (they can be created, destroyed, and moved), direct communication with specific Pods isn’t reliable. Services provide:

  • A stable endpoint to connect to Pods
  • Load balancing across multiple Pods
  • Service discovery via DNS (e.g. nginx-service.default.svc.cluster.local for a Service named nginx-service in the default namespace)

Here’s an example Service manifest:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: ClusterIP  # Default type, accessible within the cluster

Create the Service:

kubectl apply -f nginx-service.yaml

To expose your application externally, change the service type to NodePort or LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-external
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
  type: NodePort  # Exposes the Service on each Node's IP at a static port
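With type NodePort, Kubernetes assigns a port from the cluster's NodePort range (30000–32767 by default). If you need a predictable port, you can pin it explicitly — a fragment, assuming 30080 is free:

```yaml
ports:
- port: 80
  targetPort: 80
  nodePort: 30080  # must lie within the cluster's NodePort range (default 30000-32767)
```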

ConfigMaps and Secrets

ConfigMaps and Secrets help decouple configuration from container images, making applications more portable.

ConfigMaps store non-confidential data in key-value pairs:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  environment: "development"
  app.properties: |
    environment=development
    logging.level=INFO
  ui.properties: |
    color.theme=blue

Secrets store sensitive information like passwords, OAuth tokens, and SSH keys:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: YWRtaW4=  # Base64 encoded "admin"
  password: cGFzc3dvcmQxMjM=  # Base64 encoded "password123"
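Note that base64 is an encoding, not encryption — anyone who can read the Secret can decode it. The values above can be produced from any shell (printf '%s' avoids accidentally encoding a trailing newline):

```shell
# Encode values for a Secret manifest (base64 is encoding, not encryption)
printf '%s' 'admin' | base64          # YWRtaW4=
printf '%s' 'password123' | base64    # cGFzc3dvcmQxMjM=

# Decode to double-check
printf '%s' 'YWRtaW4=' | base64 --decode   # admin
```

Alternatively, kubectl create secret generic db-credentials --from-literal=username=admin --from-literal=password=password123 builds the same Secret and handles the encoding for you.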

To use a ConfigMap or Secret in a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
  - name: web-app
    image: my-web-app:1.0
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: ENVIRONMENT
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: environment
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
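Besides environment variables, a Secret (or ConfigMap) can be mounted as files, letting the application read credentials from disk — a fragment to splice into the Pod spec above:

```yaml
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials   # each key becomes a file, e.g. /etc/creds/username
```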

Practical Kubernetes Workflows

Now let’s explore some common workflows you’ll encounter when working with Kubernetes.

Deploying an Application

Let’s deploy a complete web application with a database using Kubernetes:

  1. First, create a namespace to organize resources:
kubectl create namespace web-app
  2. Create a ConfigMap for application settings:
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-app-config
  namespace: web-app
data:
  db.host: "postgres-service"
  db.name: "myapp"
  app.mode: "production"
  3. Create a Secret for database credentials:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
  namespace: web-app
type: Opaque
data:
  username: cG9zdGdyZXM=  # "postgres"
  password: c2VjdXJlcGFzc3dvcmQ=  # "securepassword"
  4. Deploy the database:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: web-app-config
              key: db.name
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-data
        emptyDir: {}  # In production, use a persistent volume
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
  namespace: web-app
spec:
  selector:
    app: postgres
  ports:
  - port: 5432
    targetPort: 5432
  type: ClusterIP
  5. Deploy the web application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: my-web-app:1.0  # Replace with your actual image
        ports:
        - containerPort: 3000
        env:
        - name: DB_HOST
          valueFrom:
            configMapKeyRef:
              name: web-app-config
              key: db.host
        - name: DB_NAME
          valueFrom:
            configMapKeyRef:
              name: web-app-config
              key: db.name
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        - name: APP_MODE
          valueFrom:
            configMapKeyRef:
              name: web-app-config
              key: app.mode
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
  namespace: web-app
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 3000
  type: LoadBalancer  # Exposes the service externally

Apply all these manifests:

kubectl apply -f web-app-config.yaml
kubectl apply -f db-credentials.yaml
kubectl apply -f postgres.yaml
kubectl apply -f web-app.yaml

Scaling Your Application

One of Kubernetes’ strengths is the ability to scale applications horizontally:

# Scale up to 5 replicas
kubectl scale deployment web-app -n web-app --replicas=5

# Verify the scaling operation
kubectl get pods -n web-app

Alternatively, update the deployment manifest and apply it:

spec:
  replicas: 5  # Changed from 3 to 5

Then apply the change:

kubectl apply -f web-app.yaml

Rolling Updates

To update your application to a new version:

# Update the container image
kubectl set image deployment/web-app web-app=my-web-app:1.1 -n web-app

Or update the manifest and apply it:

spec:
  template:
    spec:
      containers:
      - name: web-app
        image: my-web-app:1.1  # Updated from 1.0 to 1.1

Then apply the change:

kubectl apply -f web-app.yaml

Monitor the rollout:

kubectl rollout status deployment/web-app -n web-app

If there’s an issue, you can roll back:

kubectl rollout undo deployment/web-app -n web-app

Advanced Kubernetes Features

As you become more comfortable with Kubernetes basics, you’ll want to explore more advanced features:

Horizontal Pod Autoscaling

Horizontal Pod Autoscaling automatically scales the number of Pods in a deployment based on observed CPU utilization or other metrics:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
  namespace: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

Apply the autoscaler (note that the HPA needs a metrics source, such as metrics-server, running in the cluster to read CPU utilization):

kubectl apply -f web-app-hpa.yaml

Persistent Volumes

For data that needs to persist beyond the lifecycle of a Pod, use Persistent Volumes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: web-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: web-app
spec:
  # ... other fields
  template:
    spec:
      containers:
      - name: postgres
        # ... other fields
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: postgres-pvc
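If the cluster offers several StorageClasses, the claim can request one explicitly; otherwise the cluster's default class is used (Kind, for instance, ships a default class named standard backed by a local-path provisioner). A fragment for the claim above:

```yaml
spec:
  storageClassName: standard   # omit to fall back to the cluster's default StorageClass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```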

Ingress

To enable sophisticated HTTP routing, use Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  namespace: web-app
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app-service
            port:
              number: 80

Note that you need an Ingress controller like NGINX or Traefik running in your cluster for Ingress resources to work.
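Ingress can also terminate TLS. A fragment under the same spec, assuming the certificate and key have already been stored in a Secret named myapp-tls (the Secret name is an assumption):

```yaml
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: myapp-tls   # a kubernetes.io/tls Secret holding tls.crt and tls.key
  rules:
  # ... rules as above
```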

Monitoring and Troubleshooting

As your applications run in Kubernetes, monitoring and troubleshooting become essential tasks.

Basic Troubleshooting Commands

# Get pod status
kubectl get pods -n web-app

# View pod details
kubectl describe pod <pod-name> -n web-app

# View container logs
kubectl logs <pod-name> -n web-app

# Execute commands in a container
kubectl exec -it <pod-name> -n web-app -- /bin/bash

# View resource usage (requires metrics-server)
kubectl top pods -n web-app
kubectl top nodes

Observing Pod Lifecycle Events

# Watch pods in real-time
kubectl get pods -n web-app --watch

# Check events in the namespace
kubectl get events -n web-app

Best Practices for Kubernetes Deployments

As you continue your Kubernetes journey, consider these best practices:

  1. Resource Requests and Limits: Always specify resource requests and limits for your containers to ensure efficient scheduling and prevent resource starvation:
resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "256Mi"
    cpu: "500m"
  2. Health Checks: Implement liveness and readiness probes to help Kubernetes manage the lifecycle of your applications:
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
  3. Use Namespaces: Organize your resources into namespaces to provide isolation and better resource management.

  4. Label Everything: Use consistent labeling to organize and select your Kubernetes resources effectively.

  5. Use Deployments over Pods: Direct Pod creation should be rare; Deployments provide better management capabilities.

  6. GitOps Workflow: Store your Kubernetes manifests in git repositories and use tools like Flux or ArgoCD for continuous delivery.

  7. Security Context: Configure security contexts to restrict container capabilities:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  capabilities:
    drop:
      - ALL
  8. Network Policies: Implement network policies to control traffic flow between Pods:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-network-policy
  namespace: web-app
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web-app
    ports:
    - protocol: TCP
      port: 5432

Conclusion

This article has provided a practical introduction to Kubernetes fundamentals. We’ve explored core concepts, set up a local development environment, and examined common workflows for deploying, scaling, and updating applications.

Kubernetes has a steep learning curve, but the investment in understanding its principles and practices pays dividends in application reliability, scalability, and operational efficiency. As container orchestration becomes increasingly central to modern application deployment, Kubernetes skills are invaluable for developers and operations teams alike.

As you continue your Kubernetes journey, remember that the ecosystem is vast and constantly evolving. Stay curious, experiment with new features, and engage with the vibrant Kubernetes community.

References

  1. Kubernetes Official Documentation. https://kubernetes.io/docs/home/

  2. Kind Documentation. https://kind.sigs.k8s.io/docs/user/quick-start/

  3. Burns, B., Beda, J., & Hightower, K. (2019). Kubernetes Up & Running: Dive into the Future of Infrastructure. O’Reilly Media.

  4. Ibryam, B., & Huß, R. (2019). Kubernetes Patterns: Reusable Elements for Designing Cloud-Native Applications. O’Reilly Media.

  5. Dobies, J., & Wood, J. (2020). Kubernetes Operators: Automating the Container Orchestration Platform. O’Reilly Media.
