Introduction to Kubernetes (K8s)
![](/../assets/images/featured/cncf_hu997114242220090775.webp)
In the evolving landscape of cloud computing, Kubernetes (K8s) has emerged as a pivotal force in container orchestration, enabling businesses to deploy, manage, and scale applications with unprecedented efficiency and flexibility. Developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes orchestrates computing, networking, and storage infrastructure on behalf of user workloads.
What is Kubernetes?
Kubernetes is an open-source platform designed to automate the deployment, scaling, and operation of application containers across clusters of hosts. It provides the framework to run distributed systems resiliently, taking care of scaling and failover for your application, providing deployment patterns, and more.
Key Features of Kubernetes
- Automated Scheduling: Kubernetes automatically schedules containers based on resource requirements and other constraints, without sacrificing availability.
- Self-Healing Capabilities: It restarts failed containers, replaces and reschedules containers when nodes die, kills containers that don’t respond to user-defined health checks, and doesn’t advertise them to clients until they are ready to serve.
- Automated Rollouts and Rollbacks: Kubernetes progressively rolls out changes to your application or its configuration, monitoring application health to ensure it doesn’t kill all your instances at the same time.
- Horizontal Scaling: With Kubernetes, you can scale your application up and down with a simple command, a UI, or automatically based on CPU usage.
- Service Discovery and Load Balancing: Kubernetes assigns containers their own IP addresses and a single DNS name for a set of containers and can balance the load between them.
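To illustrate service discovery and load balancing, a minimal Service manifest might look like the sketch below; the name, label, and port values are assumptions for the example, not taken from a real cluster.

```yaml
# A Service that load-balances traffic across all Pods labeled app: web
apiVersion: v1
kind: Service
metadata:
  name: web-service        # becomes the DNS name web-service.<namespace>.svc.cluster.local
spec:
  selector:
    app: web               # matches Pods carrying this label
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 8080     # port the containers listen on
```

Kubernetes assigns this Service a stable cluster IP and DNS name, and kube-proxy distributes connections across the matching Pods as they come and go.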
Architecture and Components
Kubernetes follows a client-server architecture. At its core, the system can be divided into two main components: the Master node and Worker nodes.
Master Node
The Master node is responsible for managing the cluster. It makes global decisions about the cluster (e.g., scheduling), and detects and responds to cluster events (e.g., starting up a new container when a deployment’s replicas field is unsatisfied).
Key components of the Master node include:
- API Server: Acts as the front end for Kubernetes; users, management tools, and cluster components all communicate through it.
- etcd: A key-value store used as Kubernetes’ backing store for all cluster data.
- Scheduler: Watches for newly created Pods with no assigned node and selects a node for them to run on.
- Controller Manager: Runs controller processes, which are background threads that handle routine tasks in the cluster.
```yaml
# demo configuration for the master node
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: "1.20.0"
controlPlaneEndpoint: "master-node.example.com:6443"
networking:
  podSubnet: "192.168.0.0/16"
  serviceSubnet: "10.96.0.0/12"
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"
controllerManager:
  extraArgs:
    node-cidr-mask-size: "24"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "iptables"
```
- apiVersion and kind: Specify the API version and resource type the configuration file adheres to.
- kubernetesVersion: The version of Kubernetes to install.
- controlPlaneEndpoint: The address of the cluster’s control plane (Master node); replace it with your actual address or domain name.
- networking: Configures the cluster’s networking, including the pod and service subnets.
- podSubnet: The CIDR range for the pod network; it must be consistent with your network plugin configuration (e.g., Calico or Flannel).
- serviceSubnet: The CIDR range for the service network.
- apiServer and controllerManager: Pass additional arguments to the API server and controller manager.
- KubeProxyConfiguration: Configures the behavior of kube-proxy, for example using iptables mode.
Worker Nodes
Worker nodes host the Pods that make up the application workload. Each worker node is managed by the Master node and contains the services necessary to run Pods, including a container runtime (such as containerd or Docker) and the kubelet.
Key components of Worker nodes include:
- kubelet: An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
- kube-proxy: Maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
- Container Runtime: The software that is responsible for running containers.
```shell
# Obtain the kubeadm join command from the Master node
$ kubeadm join <control-plane-host>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```

Here, `<control-plane-host>:<port>` is the address and port of the Master node (usually 6443), `<token>` is the token used to join the cluster, and `<hash>` is the hash of the CA certificate used during node discovery.
```shell
# Execute the kubeadm join command on the Worker node
$ sudo kubeadm join <control-plane-host>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```
On the Worker node you are preparing to join the cluster, execute the kubeadm join command obtained in the previous step as the root user (or using sudo).
Join tokens have a limited lifetime. If the kubeadm join command reports that the token is invalid, generate a new one on the Master node with kubeadm token create --print-join-command, which prints a fresh kubeadm join command including a new token and hash.
Ensure that the Worker node’s network configuration allows it to reach the Master node: the appropriate ports (such as TCP 6443) must be open, and there must be no network isolation between the two.
In this way, configuring and joining a Worker node is relatively simple, relying mainly on the kubeadm join command provided by the Master node.
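Putting the token-renewal step together with a quick verification, the workflow might look like the following sketch; these commands assume a running kubeadm-based cluster, so the host names and output are illustrative only.

```shell
# On the Master node: print a fresh join command (tokens expire after 24 hours by default)
$ kubeadm token create --print-join-command

# On the Worker node: run the printed command as root to join the cluster
$ sudo kubeadm join <control-plane-host>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

# Back on the Master node: confirm the new node appears and eventually reports Ready
$ kubectl get nodes
```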
Key Concepts
- Pods: The smallest deployable units of computing that can be created and managed in Kubernetes.
- Services: An abstraction which defines a logical set of Pods and a policy by which to access them.
- Deployments: Provides declarative updates for Pods and ReplicaSets.
- Volumes: Provides a directory, possibly with data in it, which is accessible to the containers in a pod.
- Namespaces: Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
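Several of these concepts can be seen together in a single manifest. The sketch below is a hypothetical Deployment (the name, labels, and image are assumptions for the example): it declares three replica Pods in the default Namespace, and a Service like the one shown earlier could select them by label.

```yaml
# A Deployment that keeps three replicas of a Pod template running
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default       # Namespaces partition one physical cluster
spec:
  replicas: 3              # desired number of Pods
  selector:
    matchLabels:
      app: web             # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21    # example image; substitute your own
          ports:
            - containerPort: 80
```

Applying this with kubectl apply creates a ReplicaSet that maintains the desired Pod count, and updating the template (for example, the image tag) triggers the progressive rollout behavior described above.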
Conclusion
Kubernetes has revolutionized the way organizations deploy, manage, and scale their applications. With its robust framework for container orchestration, Kubernetes provides the tools needed to build a more efficient, resilient, and scalable cloud-native infrastructure. As businesses continue to move towards microservices architectures, the role of Kubernetes in enabling this transition cannot be overstated. Understanding the fundamentals of Kubernetes is crucial for developers, system administrators, and IT professionals aiming to leverage the full potential of cloud-native technologies.