Kubernetes Fundamental – Part 6

In Part 1, we have gone through what Kubernetes is and its architecture.

In Part 2, we have gone through the key components such as Nodes, Namespaces and Pods.

In Part 3, we have gone through the components such as Service, Job and Ingress.

In Part 4, we have gone through the components such as ConfigMap, Secret, Volume, Deployment and StatefulSet.

In Part 5, we have gone through the definition and configuration of Kubernetes resources.

In this part, let us delve into how we can set up a Kubernetes cluster locally using Minikube and kubectl.

Install Minikube and kubectl

  • First, we need to ensure that a hypervisor (VirtualBox, Hyper-V or KVM) is installed on the machine, as it is required by Minikube to run the virtual machine.
  • Install Minikube: Download and install Minikube by following the instructions from this link.
  • Ensure Minikube is installed by running the following command in your terminal.
minikube version

It should display the installed version of Minikube.

  • Install kubectl: Download and install kubectl using this link. It is a client tool used to interact with the Kubernetes cluster.
  • Verify the installation using the following command; it returns the installed version.
kubectl version --client

Start the Minikube cluster

  • Start the Kubernetes cluster locally using the following command.
minikube start

Minikube will start a virtual machine and set up a Kubernetes cluster inside it.

  • Once the cluster is up and running, we can verify it using the following command.
kubectl cluster-info

This will show the URL for accessing the Kubernetes Cluster.

  • Verify the Kubernetes nodes.
kubectl get nodes

We should see a single node (the Minikube virtual machine) listed as Ready.

Deploy a test application

  • To make sure the setup is configured correctly, let us verify it by deploying a sample application. Let us create a simple YAML file (test-app.yaml) as below.
apiVersion: v1
kind: Pod
metadata:
  name: test-app-pod
spec:
  containers:
  - name: test-app-container
    image: nginx:latest
    ports:
    - containerPort: 80
  • Apply the configuration to create a test pod
kubectl apply -f test-app.yaml
  • Check the status of the pod
kubectl get pods

It will show the current status of the pod as Running.

  • Access the test application
kubectl port-forward test-app-pod 8080:80

We can test the application by opening the URL http://localhost:8080 in a web browser. We should see the default Nginx welcome page.

Commonly used commands

We can interact with the Kubernetes cluster using kubectl commands to inspect and manage our application pods and other resources in the cluster.

  • To check the status of the cluster and its components
kubectl cluster-info
  • To get the list of resources (pods, services and deployments) in our namespace.
kubectl get <resource>

resource: The resource type, such as pod, service or deployment.

  • To get detailed information about a specific resource
kubectl describe <resource> <resource_name>

resource: The resource type, such as pod, service or deployment.

resource_name: Name of the specific resource instance.

  • To create or update a Kubernetes resource from a YAML configuration file.
kubectl apply -f <filename>

filename: Name of the YAML configuration file used to create/update the resource.

  • To delete a resource
kubectl delete <resource> <resource_name>

resource: The resource type, such as pod, service or deployment.

resource_name: Name of the specific resource instance.

  • To view the pods in our namespace
kubectl get pods
  • To view the logs of a pod
kubectl logs <pod_name>

pod_name: Name of the pod.

  • To access an interactive shell inside a pod
kubectl exec -it <pod_name> -- /bin/bash

pod_name: Name of the pod.

  • To view the services in our namespace
kubectl get services
  • To access a service from our local machine
kubectl port-forward service/<service_name> <local_port>:<service_port>

service_name: Name of the service.

local_port: Port on our local machine.

service_port: Port on the service we want to access.

  • To scale the pods of a deployment
kubectl scale --replicas=2 deployment/<deployment_name>

deployment_name: Name of the deployment.

It will scale the deployment to two running instances of the application.
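Scaling can also be expressed declaratively by editing the replicas field in the deployment's YAML and re-applying the file. A sketch of the relevant fragment, assuming a deployment named webapp-deployment like the one defined in Part 5:

```yaml
# Fragment of webapp-deployment.yaml: set replicas to the desired count,
# then run `kubectl apply -f webapp-deployment.yaml` to reconcile the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 2
```

The declarative route keeps the file in version control as the source of truth, whereas `kubectl scale` changes the live object only.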

These are the basic kubectl commands that allow you to interact with your Kubernetes cluster. In the next post, we will explore additional advanced commands to manage and monitor our application effectively.

Happy Container’ising 🙂

Kubernetes Fundamental – Part 5

In Part 1, we have gone through what Kubernetes is and its architecture.

In Part 2, we have gone through the key components such as Nodes, Namespaces and Pods.

In Part 3, we have gone through the components such as Service, Job and Ingress.

In Part 4, we have gone through the components such as ConfigMap, Secret, Volume, Deployment and StatefulSet.

In this part, let us delve into Kubernetes configuration: how we can create and configure Kubernetes components using YAML files. Kubernetes resources are usually written in YAML format. So, let us see how we can define and configure Pods, Services, Deployments, StatefulSets, ConfigMaps and Secrets.

We can reuse the same example from the previous parts of this series and create it.

How to apply configuration to Kubernetes?
kubectl apply -f <file_name>
  • file_name – Name of the configuration file.

The file specifies the desired state of the Kubernetes objects. Kubernetes will create or update the resources accordingly to match the specified state.

Note: In the next part, I will explain the kubectl CLI, which is a client used to interact with the Kubernetes cluster.

Deployment configuration

# webapp-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-webapp
  template:
    metadata:
      labels:
        app: my-webapp
    spec:
      containers:
      - name: webapp
        image: nginx:latest
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: webapp-config
      - name: database
        image: mongo:latest
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        volumeMounts:
        - name: db-data
          mountPath: /data/db
      volumes:
      - name: db-data
        persistentVolumeClaim:
          claimName: database-pvc

Here, we have defined the Deployment with its containers, a ConfigMap reference and a volume.

We can apply the above configuration using the below command.

kubectl apply -f webapp-deployment.yaml

Service configuration

# webapp-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: my-webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Command

kubectl apply -f webapp-service.yaml

ConfigMap configuration

# webapp-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  WEBAPP_ENV: "production"
  DATABASE_URL: "mongodb://database-service:27017/mydb"
Command

kubectl apply -f webapp-config.yaml

Secret configuration

# db-credentials-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: <base64-encoded-username>
  password: <base64-encoded-password>
Command

kubectl apply -f db-credentials-secret.yaml
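The data values of a Secret must be base64-encoded before they go into the YAML. A quick way to produce them on the command line (the credentials below are placeholders for illustration only):

```shell
# Encode the values; -n is important, otherwise the trailing
# newline emitted by echo would become part of the encoded value.
echo -n 'myuser' | base64            # prints: bXl1c2Vy
echo -n 'S3cr3tPassw0rd' | base64

# Verify the round-trip by decoding:
echo -n 'bXl1c2Vy' | base64 --decode # prints: myuser
```

Note that base64 is an encoding, not encryption, so Secret YAML files should still be kept out of version control or encrypted at rest.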

StatefulSet configuration

# database-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database-statefulset
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: database
        image: mongo:latest
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        volumeMounts:
        - name: database-data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: database-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
Command

kubectl apply -f database-statefulset.yaml

Separating out the configuration files like this allows us to manage them easily under version control and to apply changes to our application consistently and reproducibly across different environments.

Hope this makes sense on how to create and define manageable configuration files for Kubernetes components.

In the next part, let us walk through how to set up and access a Kubernetes cluster locally.

Happy Container’ising 🙂

Kubernetes Fundamental – Part 4

In Part 1, we have gone through what Kubernetes is and its architecture.

In Part 2, we have gone through the key components such as Nodes, Namespaces and Pods.

In Part 3, we have gone through the components such as Service, Job and Ingress.

In this part, let us talk through the components ConfigMap, Secret and Volume, which are used to store configuration data and sensitive information in an organised, secure and persistent way.

We will see each component with the examples that we used in the previous parts.

ConfigMap

The ConfigMap is used to store configuration data (key-value pairs) which can be accessed by the pods in the cluster. It provides separation of concerns, as the configuration data is stored separately from the pods. So, we can change the configuration data without rebuilding the application image.

Example
# webapp-with-db-pod-service-ingress-configmap.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-db
  labels:
    app: my-webapp
spec:
  containers:
  - name: webapp
    image: nginx:latest
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
        name: webapp-config
  - name: database
    image: mongo:latest
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: my-webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  WEBAPP_ENV: "production"
  DATABASE_URL: "mongodb://database-service:27017/mydb"
  • On top of the example that we used while going through the Pod and Service components, we have created a ConfigMap: webapp-config at the end.
  • The data section of the ConfigMap: webapp-config contains two key-value pairs for the web application.
  • In the pod: webapp-with-db, we have defined the envFrom field, which references the ConfigMap: webapp-config mentioned above.
  • Now the pod can access the configuration values WEBAPP_ENV and DATABASE_URL for use by the application.
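As an aside, a ConfigMap can also be mounted as files instead of environment variables. A sketch of the pod spec fragment, assuming the same webapp-config ConfigMap (the mount path /etc/webapp-config is an arbitrary choice for illustration):

```yaml
# Each key in webapp-config becomes a file under the mount path,
# e.g. /etc/webapp-config/WEBAPP_ENV containing "production".
spec:
  containers:
  - name: webapp
    image: nginx:latest
    volumeMounts:
    - name: config-volume
      mountPath: /etc/webapp-config
  volumes:
  - name: config-volume
    configMap:
      name: webapp-config
```

Mounted ConfigMap files are refreshed in place when the ConfigMap changes, whereas environment variables are only read at container start.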

Secret

The Secret is used to store sensitive information such as usernames, passwords, API keys or certificates, which we should not store in a ConfigMap. It also stores data as key-value pairs. The data is encoded in base64 format and can be mounted as files or exposed as environment variables in a pod.

Example
# webapp-with-db-pod-service-ingress-configmap-secret.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-db
  labels:
    app: my-webapp
spec:
  containers:
  - name: webapp
    image: nginx:latest
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
        name: webapp-config
  - name: database
    image: mongo:latest
    env:
    - name: MONGO_INITDB_ROOT_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: MONGO_INITDB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: my-webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  WEBAPP_ENV: "production"
  DATABASE_URL: "mongodb://database-service:27017/mydb"
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: <base64-encoded-username>
  password: <base64-encoded-password>
  • We have added a new block for the Secret: db-credentials.
  • In the block, we defined the data section, which contains two sensitive values: username and password.
  • In the pod definition, we have added environment variables that reference the secrets using the secretKeyRef field.
  • In the secretKeyRef field, we specify the name (db-credentials) of the Secret, and key is the name of the key specified in the data section of the Secret: db-credentials block.
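If hand-encoding base64 feels error-prone, Kubernetes also accepts a stringData field, where values are given in plain text and encoded on the server side. A sketch (the credentials shown are placeholders):

```yaml
# Equivalent Secret using stringData: values are written as-is and
# stored as base64-encoded data entries when the Secret is created.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  username: myuser
  password: S3cr3tPassw0rd
```

stringData is a write-only convenience; when you read the Secret back, the values appear under data in base64 form.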

Volume

A Volume is a directory used to store data that is accessible to the containers in a pod. When it is backed by persistent storage separate from the pod, it retains the data even if the pod/container is restarted or rescheduled.

Example

In this example, we will use the PersistentVolumeClaim (PVC) to dynamically provision a PersistentVolume (PV) and attach it to the database container.

# webapp-with-db-pod-service-ingress-configmap-secret-volume.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-db
  labels:
    app: my-webapp
spec:
  containers:
  - name: webapp
    image: nginx:latest
    ports:
    - containerPort: 80
    envFrom:
    - configMapRef:
        name: webapp-config
  - name: database
    image: mongo:latest
    env:
    - name: MONGO_INITDB_ROOT_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    - name: MONGO_INITDB_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
    volumeMounts:
    - name: db-data
      mountPath: /data/db
  volumes:
  - name: db-data
    persistentVolumeClaim:
      claimName: database-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: my-webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  WEBAPP_ENV: "production"
  DATABASE_URL: "mongodb://database-service:27017/mydb"
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: <base64-encoded-username>
  password: <base64-encoded-password>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

  • Here we have created a block for a volume claim, PersistentVolumeClaim: database-pvc.
  • In the PersistentVolumeClaim, we specify the storage requirement of 1Gi.
  • In the pod definition, we created a volume: db-data, which is dynamically provisioned using the database-pvc PVC.
  • The database container is configured to mount this volume at /data/db. It ensures the data written inside the container is saved to the specified volume (db-data).
  • With this setup, MongoDB data is stored in the db-data volume and backed by a dynamically provisioned PersistentVolume. So, the MongoDB data will be retained even if the container is restarted or rescheduled on a different node.
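Which storage backend actually fulfils the claim is decided by the cluster's StorageClass; a PVC can request one explicitly. A sketch, assuming a class named standard exists in the cluster (Minikube ships a default class by that name):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  storageClassName: standard   # assumption: the cluster defines this class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

If storageClassName is omitted, as in the example above, the cluster's default StorageClass is used for dynamic provisioning.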

Next, let us discuss Deployment and StatefulSet, which ensure high availability, scalability and persistent storage. They are responsible for running stateless and stateful applications respectively.

Deployment

The Deployment is a high-level abstraction in Kubernetes that manages a group of identical pods. It works well for stateless applications where individual pods are interchangeable. It provides features such as rolling updates, rollbacks and scaling, which make it fit well for web servers, APIs and microservices.

Example
# webapp-with-db-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-webapp
  template:
    metadata:
      labels:
        app: my-webapp
    spec:
      containers:
      - name: webapp
        image: nginx:latest
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: webapp-config
      - name: database
        image: mongo:latest
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        volumeMounts:
        - name: db-data
          mountPath: /data/db
      volumes:
      - name: db-data
        persistentVolumeClaim:
          claimName: database-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: my-webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  WEBAPP_ENV: "production"
  DATABASE_URL: "mongodb://database-service:27017/mydb"
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  username: <base64-encoded-username>
  password: <base64-encoded-password>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  • We have replaced the Pod resource with a Deployment: webapp-deployment.
  • The replicas field is set to 3; when the Deployment is applied, it will create 3 instances of the web application and database running inside the cluster.
  • The Deployment ensures the availability of pods based on the specified replicas by automatically creating a new pod when a pod fails or crashes. So, it facilitates high availability.
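The rolling-update behaviour mentioned earlier can be tuned on the Deployment spec. A sketch of the relevant fragment (the maxSurge/maxUnavailable values are illustrative choices, not requirements):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above replicas during an update
      maxUnavailable: 1    # at most 1 pod may be unavailable during an update
```

With these settings, Kubernetes replaces pods gradually so the application keeps serving traffic throughout the update.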

StatefulSet

StatefulSet brings statefulness to applications by giving each pod a unique identity, persistent storage and a stable network identity. It works well for databases and key-value stores, which require persistent storage and ordered scaling.

Example
# webapp-with-db-deployment-and-statefulset.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-webapp
  template:
    metadata:
      labels:
        app: my-webapp
    spec:
      # ... (same as the previous Deployment config)
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: my-webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  WEBAPP_ENV: "production"
  DATABASE_URL: "mongodb://database-service:27017/mydb"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database-statefulset
spec:
  serviceName: database
  replicas: 1
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - name: database
        image: mongo:latest
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        volumeMounts:
        - name: database-data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: database-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
  • We have created a StatefulSet resource: database-statefulset.
  • The replicas field is set to 1; since each additional replica gets its own persistent volume, any scaling needs deliberate manual intervention.
  • The StatefulSet ensures a unique identity and a stable network hostname for each pod. That makes it ideal for databases or any persistent storage.
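The serviceName: database field above refers to a headless Service that gives each StatefulSet pod its stable DNS name (e.g. database-statefulset-0.database). That Service is not shown in the example, so here is a sketch of what it could look like (port 27017 is MongoDB's default):

```yaml
# Headless Service (clusterIP: None) backing the StatefulSet's
# stable per-pod network identities.
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  clusterIP: None
  selector:
    app: database
  ports:
  - port: 27017
```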

Hope this makes sense about the storage components and running stateful/stateless applications.

I believe that I have covered most of the key components of Kubernetes. In the next part, we will go through Kubernetes configuration.

Happy Container’ising 🙂

Kubernetes Fundamental – Part 3

In Part 1, we have gone through what Kubernetes is and its architecture.

In Part 2, we have gone through some of the key components such as Nodes, Namespaces and Pods.

Now in this part, we will go through components such as Service, Job and Ingress.

Service

A Service provides an abstraction that defines a stable endpoint to access a group of pods. It allows us to expose an application to other pods within the cluster or to external clients. It provides load balancing across the pods behind it to ensure high availability.

Example

Let's have a look at a similar example to the one we used in the previous post.

# webapp-with-db-pod-and-service.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-db
  labels:
    app: my-webapp
spec:
  containers:
  - name: webapp
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: database
    image: mongo:latest
---
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: my-webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  • Here, we added a new YAML block to define the Service: webapp-service.
  • The selector field specifies which pods it targets. In this case, it points to the pods with the label app: my-webapp.
  • The Service exposes port 80, which matches the port exposed by the webapp container in the pod.
  • The targetPort specifies the port on the pod that the Service forwards traffic to. In this case, it matches the port specified in the containerPort of the webapp container.
  • Now we have a Service: webapp-service that can be referenced/accessed by other pods within the cluster using the name webapp-service.
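The Service above is of the default ClusterIP type, reachable only inside the cluster. To expose it on each node's IP as well, the type can be changed. A sketch (the nodePort value 30080 is an arbitrary pick from the default 30000-32767 range):

```yaml
spec:
  type: NodePort
  selector:
    app: my-webapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080   # optional; omit to let Kubernetes choose one
```

For HTTP traffic, though, an Ingress (covered below) is usually the preferred way to expose applications externally.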
Job

A Job is an object that creates a set of pods and waits for them to terminate. Once the required number of pods have terminated successfully, the Job is marked as complete. Failed pods will be retried until the specified number have exited successfully.

Jobs provide a mechanism to run ad-hoc tasks within the cluster. The most common use case is to create CronJobs that automatically run a job at a specified time or regular interval to support batch activities, backups and other application-related scheduled tasks.

Example

Let us create a simple CronJob that runs a task every minute.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
  • Creates a CronJob: hello.
  • It runs based on the schedule cron expression, whose five fields are:

# ┌───────────── minute (0-59)
# │ ┌───────────── hour (0-23)
# │ │ ┌───────────── day of the month (1-31)
# │ │ │ ┌───────────── month (1-12)
# │ │ │ │ ┌───────────── day of the week (0-6, Sunday to Saturday)
# │ │ │ │ │
# * * * * *

In this case, "* * * * *" means it will run every minute.

  • It creates a container: hello, which pulls the image busybox:1.28.
  • It will print Hello from the Kubernetes cluster inside the container every minute.
  • Based on the restartPolicy, the container will be restarted in case of failure.
Ingress

Ingress provides a mechanism to expose our services to external clients outside the cluster. In contrast, Services provide internal communication between the pods within the cluster.

It acts as an external entry point to our application and manages load balancing and incoming traffic routing rules (HTTP routes). It also supports HTTPS traffic secured by TLS certificates.

In order to make Ingress work, we need an Ingress controller installed in our Kubernetes cluster.

Example
# webapp-with-db-pod-service-and-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
spec:
  rules:
  - host: mywebapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80
  • We have created an Ingress object: webapp-ingress.
  • The host specifies the domain name mywebapp.example.com where the application will be accessed externally.
  • The path specifies the routing rules used to access the application. In this case, requests to / will be forwarded to the Service: webapp-service.
  • The backend specifies the target Service: webapp-service inside the cluster that the traffic is forwarded to.
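The HTTPS support mentioned earlier is configured with a tls section on the Ingress. A sketch, assuming a TLS certificate for mywebapp.example.com has been stored in a Secret named mywebapp-tls (a hypothetical name):

```yaml
spec:
  tls:
  - hosts:
    - mywebapp.example.com
    secretName: mywebapp-tls   # Secret of type kubernetes.io/tls holding cert and key
  rules:
  - host: mywebapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80
```

The Ingress controller terminates TLS using that certificate and forwards plain HTTP to the backend Service.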

I hope it makes sense, and in the next part we will go through the ConfigMap, Secret, Volume, Deployment and StatefulSet components.

Happy Containerising 🙂

Kubernetes Fundamental – Part 2

In Part 1, we have gone through what Kubernetes is and its architecture.

Now in this section, we will go through the key components of Kubernetes. I am going to break down the component explanations over a few parts of this Kubernetes series in order to go into detail.

In this part, we will go through Nodes, Namespaces and Pods.

Key components

Nodes

Nodes are the machines within the cluster, and this is where the containers will be deployed and run. The machines mentioned here could be either physical or virtual. Their responsibility is to provide enough resources to run the workload on them. Nodes can be scaled up/down on demand.

Example

Let's say we have a Kubernetes cluster with three nodes A, B and C that host and run containerised applications.

Node
- Node A
- Node B
- Node C
Namespaces

A Namespace provides logical grouping of resources within the Kubernetes cluster. It is quite useful for categorising related resources. For example, if we have a product that contains multiple applications (microservices), they can be grouped together within the same namespace. We can also create namespaces to divide resources between users and teams by applying role-based access control.

Pods

Pods are the smallest deployable units in Kubernetes. A pod represents one or more containers which share the same network namespace within the node. Each pod is a single instance of a process within the cluster. Its containers are always scheduled on the same node and can communicate with each other via localhost.

Example

Say we have a web application that is deployed with an application server and a database. It can be deployed as a single pod and scaled on demand.

Pods
- Pod 1 (Web App + Database)
- Pod 2 (Web App + Database)

Let us create a Kubernetes YAML configuration that deploys web application with application server (Nginx) and database (MongoDB) in to a pod.

# webapp-with-db-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-db
  labels:
    app: my-webapp
spec:
  containers:
  - name: webapp
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: database
    image: mongo:latest

In the above example,

  • We defined a pod: webapp-with-db.
  • We created two containers: one for the application and another for MongoDB.
  • First container: webapp. It takes the latest image of nginx and runs it.
  • Second container: database. It takes the latest image of mongo and runs it.
  • Both containers share the same network and can communicate with each other via localhost.
Why do we need a pod instead of a container?
  • Grouping containers: A pod provides the logical grouping of related containers. It simplifies the scheduling, scaling and management of related containers.
  • Shared resources: All the containers within a pod share the same network namespace and volumes. It makes it easier to communicate and share data.
  • Atomic unit: The pod is an atomic unit of deployment. So, scheduling, scaling etc. are done at the pod level.
  • Scheduling: Kubernetes does scheduling at the pod level instead of the container level. So, all the containers within a pod co-exist on the same node.

So in summary, a pod provides an additional layer of abstraction on top of related containers to facilitate and simplify scheduling and resource sharing.

Hope it makes sense about these components; in the next part we will go through other components such as Service, Job and Ingress.

Happy Containerising 🙂

Kubernetes Fundamental- Part 1

I have always had the thought of writing an article on Kubernetes, and finally it is going to happen now. I will try to break down the article into multiple parts, to avoid one long boring article and to segregate each specific topic in Kubernetes.

So, in this part, I am going to cover what Kubernetes is and its architecture.

What is Kubernetes?

Kubernetes is an open source container orchestration tool and it is used to automatically deploy, scale and manage containerised applications.

Why do we need Kubernetes?

Kubernetes provides a robust and scalable platform to deploy, scale and manage containerised applications. It abstracts the underlying infrastructure and provides a consistent API for interacting with the cluster. So, it allows developers to focus on their application without worrying much about managing the underlying infrastructure.

For instance, let's say we have a .NET (or any other framework) application. We can package it into a container and run it on a host that has a Docker engine or any other container engine. In this case, there is no complexity.

Basically, we pack our application into a Docker image using a Dockerfile and expose a port on a host for the external world to access it.

But there is a drawback to this approach: it incurs a single point of failure, as it is running on only one server.

To overcome this issue, we need an efficient mechanism that handles the single point of failure by auto-scaling the application on demand and withstanding single-node failures. Kubernetes does that job for us.

Kubernetes helps with scaling applications, self-healing and rolling updates, making it well suited for running containers.

Main use case

Let's say we have a massive application composed of microservices (API, UI, user management, credit card transactions etc.). All these microservice components communicate with each other using REST APIs or other protocols.

As the application has many components or microservices, we cannot deploy all the services in one container or server. The services have to be decoupled, and each service should be deployed and scaled on its own. This makes the application autonomous and easier and quicker to develop and deploy.

But in this case, the complexity lies in networking, shared file systems, load balancing and service discovery. This is where Kubernetes comes into play. It helps in orchestrating complex processes in a manageable way.

So, in a nutshell, Kubernetes takes care of the heavy lifting such as networking, load balancing, service discovery, disaster recovery, resource scheduling, scalability and high availability.

Kubernetes architecture

Let’s understand the architecture of Kubernetes based on the following illustration.

Master Node

The Master Node acts as the heart of the Kubernetes cluster. It is the control plane for the entire cluster. It is responsible for managing the overall state of the cluster, as well as scheduling new pods, monitoring the health of nodes and pods, and scaling pods on demand.

Let's look into the key components.

API Server

The API Server is the central access point of the cluster. It provides the REST API to perform create, update and delete resource operations. Clients such as kubectl or any UI interact with it. It is the only component that interacts directly with etcd; the other components interact via the API it provides.

Controller Manager

The Controller Manager is responsible for monitoring the state of the cluster through the API Server and taking the necessary actions to ensure the desired state is maintained. For instance, the ReplicaSet controller ensures the application is running with the expected number of pods.

etcd

etcd is a distributed key-value store that contains the configuration data of the cluster, i.e. it stores the persistent state of Kubernetes objects such as pods, replication controllers, secrets and services.

Scheduler

The Scheduler is responsible for assigning new pods to nodes based on resource requirements and availability. It also ensures the workload is distributed evenly across the worker nodes.

Worker Nodes

Worker Nodes act as the data plane of the cluster and execute the actual workload. Basically, they are the machines where containers (pods) are scheduled and executed.

Each worker node runs several key components.

Kubelet

The Kubelet is a worker agent that runs on each node and communicates with the master node. It ensures the containers in the specified pods are running and healthy.

Container Runtime

Kubernetes supports multiple container runtimes such as Docker or containerd. The runtime is responsible for retrieving the container image from the repository and running containers on the worker nodes.

Kube Proxy

Kube Proxy acts as a service proxy and runs on each worker node. It is responsible for network communication between pods/containers/nodes. It keeps the entire network configuration up to date by listening to the API Server for any changes to services/pods.

How do they interact?

The master node and worker nodes communicate with each other via the API Server. The user (client) and other components also communicate via the API Server.

For instance, if a new application is deployed to Kubernetes, its configuration is sent to the API Server and stored in etcd. The Controller Manager constantly monitors the cluster state via the API Server, and if there is any deviation from the desired state, it takes corrective action to reconcile it.

When a new pod is to be scheduled, the Scheduler comes into play by choosing an appropriate worker node based on resource availability and other constraints. The API Server then informs the chosen worker node. The container runtime, such as Docker or containerd, pulls the application image from the repository, and the Kubelet starts the container.

Then the worker nodes report the status of their pods to the master node on a regular basis, so that the master node keeps track of the current state of the pods on each worker node.

Hope this gives an idea of what Kubernetes is, why we need it and its architecture. In the next post, we will go through the main components of Kubernetes.

Happy Containerising 🙂