
Cloud & DevOps

Kubernetes (K8s): Pods, Deployments, Services and Running Applications

K8s fundamentals – Pods as workload units, controllers, configuration, secrets, Ingress and resource limits.

Kubernetes (K8s) is a container orchestration platform: it decides where and how applications run, balances load, recovers from failures, and scales resources on demand. This article reviews the basic resources – Pods, Deployments and Services – enough to run applications in production.

A Pod is the smallest deployable unit in K8s: one container, or several that run together sharing a network namespace and, sometimes, volumes. The usual pattern is one container per Pod; multiple containers in a Pod suit sidecars (e.g. a proxy or log shipper). A Pod is ephemeral – if it fails or is deleted, K8s does not recreate it automatically unless it is managed by a controller.
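As an illustration, a minimal Pod manifest might look like this (names, labels and the image are placeholders, not from the article):

```yaml
# Minimal Pod: one container, one exposed port.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: app
      image: nginx:1.25       # placeholder image
      ports:
        - containerPort: 80
```

Note that a bare Pod like this is not rescheduled if its node fails – which is exactly why the next section moves to Deployments.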

A Deployment is the most common controller for stateless applications: you declare how many replicas, which image, resource requests/limits (CPU, memory), and an update strategy (rolling update or recreate). Kubernetes maintains the replica count you specified – if a Pod fails, it creates a new one; if you update the image, it performs a rolling update. Defining a readinessProbe and a livenessProbe lets K8s know when to route traffic to a Pod and when to restart it.
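A sketch of such a Deployment, with a rolling-update strategy and both probes (the app name, image, port and health path are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica down during an update
      maxSurge: 1         # at most one extra replica during an update
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
          readinessProbe:          # gate traffic until the app is ready
            httpGet:
              path: /healthz      # assumed health endpoint
              port: 8080
            initialDelaySeconds: 5
          livenessProbe:           # restart the container if it hangs
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
```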

A Service provides a stable endpoint for a set of Pods. Without a Service, Pods have dynamic IPs that disappear when a Pod is replaced. A Service (ClusterIP, the common type) gets a fixed IP and a DNS name (service-name.namespace.svc.cluster.local) and routes traffic to the Pods matching its selector (e.g. the label app=api). Internal consumers call the service name, not a specific Pod IP.
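A minimal ClusterIP Service for Pods labeled app=api might look like this (names and ports are illustrative; targetPort should match the container's port):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  type: ClusterIP        # the default; internal cluster IP only
  selector:
    app: api             # routes to every Pod carrying this label
  ports:
    - port: 80           # port consumers call: http://api.<namespace>
      targetPort: 8080   # port the container actually listens on
```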

Config and secrets: a ConfigMap holds configuration data (key-value pairs or files); a Secret holds sensitive data. Note that Secrets are only base64-encoded in etcd by default – enable encryption at rest if you need it. Both can be injected into a Pod as environment variables or mounted files. Do not commit real secrets in YAML to git; use a secrets management system (Vault, External Secrets) that syncs them into K8s.
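For example, a ConfigMap and the two common ways a container consumes it (keys and values here are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  LOG_LEVEL: info
  TIMEOUT_SECONDS: "30"   # values are always strings in a ConfigMap
# Consumed in a container spec either as environment variables:
#   envFrom:
#     - configMapRef:
#         name: api-config
# or mounted as files, via a volume of type configMap.
```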

Ingress is the entry point for HTTP/HTTPS traffic from outside the cluster to Services. The Ingress resource defines hosts, paths and backend services; an Ingress Controller (e.g. nginx, Traefik) implements it and handles TLS termination. The alternative – a LoadBalancer or NodePort Service per application – is simpler but less flexible.
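A sketch of an Ingress routing one host to the Service above (the hostname, TLS secret and class name are assumptions; the controller must already be installed in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: nginx      # which Ingress Controller handles this resource
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls      # Secret holding the TLS certificate and key
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api      # the ClusterIP Service
                port:
                  number: 80
```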

Resource requests and limits: requests affect scheduling (the scheduler places a Pod on a node with enough free resources); limits cap consumption – exceeding the CPU limit causes throttling, while exceeding the memory limit gets the container OOM-killed. Proper configuration prevents starvation between Pods and limits the blast radius of a memory-hungry bug.
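In manifest terms this is a fragment of the container spec (the values below are illustrative, not a recommendation):

```yaml
# Inside spec.containers[*] of a Pod or Deployment template:
resources:
  requests:          # what the scheduler reserves for placement
    cpu: 250m        # a quarter of a CPU core
    memory: 256Mi
  limits:            # hard caps enforced at runtime
    cpu: "1"         # exceeding this → CPU throttling
    memory: 512Mi    # exceeding this → container is OOM-killed
```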

Managing manifests: a YAML file per resource, or a combined file. Helm (charts) and Kustomize (overlays) enable parameterization and per-environment assembly. Prefer storing manifests in git and using GitOps (Argo CD, Flux) to sync the desired state from git to the cluster.
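As a sketch of the Kustomize approach, a production overlay might reference a shared base and patch only what differs (the directory layout, patch file and image name are assumptions):

```yaml
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base           # shared manifests (Deployment, Service, ...)
patches:
  - path: replicas.yaml  # e.g. a patch bumping replicas for production
images:
  - name: myapp          # override the image tag per environment
    newTag: "1.4.2"
# Render without applying: kubectl kustomize overlays/prod
```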

HPA (Horizontal Pod Autoscaler) defines auto-scaling by CPU, memory or custom metrics – it adds replicas under load and removes them when load drops. Set min/max replica counts and a target utilization to avoid runaway scaling.
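A minimal HPA targeting the Deployment sketched earlier could look like this (the name and thresholds are illustrative; CPU-based scaling requires the metrics server):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:            # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2             # floor – never scale below this
  maxReplicas: 10            # ceiling – never scale above this
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target average CPU vs. requests
```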

In summary: Pods, Deployments and Services are the foundation for running applications in K8s. From there, add ConfigMap/Secret, Ingress and resource limits, and manage your configuration as code in git. Understanding these concepts is enough to get started and to progress to the advanced features (HPA, NetworkPolicies, Operators).
