Introduction to Kubernetes

In a monolithic architecture, all the components are tightly coupled: the frontend, backend, database, and networking are bundled together and deployed as a single unit, and all communication happens internally within the system. In a microservices architecture, the components are separate: each component is deployed as an individual application, and communication happens through external interfaces.

In a monolith, if you have to change anything in the frontend, you have to redeploy the whole application again, even for a small change; the same applies if you want to scale any one component. In microservices, you can change or scale any one component easily and independently.

Let's say you have many containers running on a server. As the number of users grows and the server load increases, you need more and more instances of the application to distribute the load across servers. If a container is damaged, how do you restart it? How do you update the application with zero downtime? This is where orchestrators come in.

An orchestrator helps us deploy and manage applications dynamically as requirements change. An orchestrator provides these features:

  • Deploy containers

  • Zero-downtime updates

  • Scaling

  • Healing containers
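In Kubernetes, these capabilities map onto declarative objects. As a minimal sketch (the names `my-app` and the image tag are placeholders, not from any real project), a Deployment manifest asks Kubernetes to keep three replicas of an application running and to replace any that fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # hypothetical application name
spec:
  replicas: 3               # scaling: keep three instances running
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0   # placeholder image; healing: crashed containers are restarted
```

Applying this one file covers deployment, scaling, and healing; updating the image tag and re-applying it triggers a zero-downtime rolling update by default.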

The official definition of Kubernetes is: "Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications."

Still confused?

Simply put, Kubernetes is a container management tool that automates container deployment, scaling, and load balancing. Kubernetes manages all of your microservices, but it is much more than container orchestration.

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes was originally developed and designed by engineers at Google. Google was one of the early contributors to Linux container technology and has talked publicly about how everything at Google runs in containers. (This is the technology behind Google’s cloud services.)

Google generates more than 2 billion container deployments a week, all powered by its internal platform, Borg. Borg was the predecessor to Kubernetes, and the lessons learned from developing Borg over the years became the primary influence behind much of Kubernetes technology.

Kubernetes provides many features such as:-

  • Auto-scaling: If the load on the server increases, Kubernetes automatically increases the number of instances of the application.

  • Self-healing: If a container fails, Kubernetes automatically redeploys the affected container.

  • Fault tolerance: Kubernetes continues operating without interruption when one or more components fail.

  • Rollback: Kubernetes has its own rollback mechanism. In Kubernetes, rolling updates are the default strategy for updating the running version of your app.

  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.

  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
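As an illustration of the last point, here is a minimal sketch of a Secret and a Pod that consumes it; the names, the example image, and the base64-encoded value are all placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical secret name
type: Opaque
data:
  password: cGFzc3dvcmQ=      # base64 of "password" (placeholder value)
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: app
    image: nginx:1.25          # example image
    env:
    - name: DB_PASSWORD        # injected at runtime, not baked into the image
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: password
```

Because the secret is referenced rather than embedded, you can rotate the password without rebuilding or redeploying the container image.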

Kubernetes follows a client-server architecture, and a Kubernetes deployment takes the form of a cluster. The basic architecture of Kubernetes consists of the control plane (master) and worker nodes. A Kubernetes cluster is nothing but a group of Kubernetes components: one or more control planes plus worker nodes. The control plane consists of the Kubernetes API server, the Kubernetes scheduler, the Kubernetes controller manager, etc. Kubernetes node components include a container runtime engine, a kubelet service, and a Kubernetes proxy service.

The control plane (master node) is responsible for managing the whole cluster. If a worker node fails, the master moves the load to another healthy worker node. The Kubernetes master is responsible for scheduling, provisioning, controlling, and exposing the API to clients. It coordinates activities inside the cluster and communicates with worker nodes to keep Kubernetes and the applications running.

etcd is a lightweight, distributed key-value database: the central store that holds the current cluster state at any point in time. For security reasons, it is only accessible through the API server. It is the cluster's brain.

The API server is the central management entity for the entire cluster. CRUD operations for the cluster go through the API. The API server manages API objects such as pods, services, replication controllers, and deployments. Clients interact with the API through kubectl ("kube control"). The API server also manages the worker nodes, makes sure the cluster of worker nodes is running healthily, and acts as a gatekeeper for authentication.

The scheduler is responsible for physically scheduling pods across multiple nodes. The scheduler reads the hardware requirements from the configuration file and schedules the pod on the nodes accordingly. For example, you might specify that the application needs a CPU with 2 cores, 20GB of memory, etc. Once this artifact passes through the API server, the scheduler looks for nodes that meet these criteria and schedules the pod accordingly.
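A sketch of such a specification (all names are placeholders) might look like the following; the resource requests mirror the 2-core / 20GB example above, and the scheduler will only place this pod on a node with that much spare capacity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:1.25        # example image
    resources:
      requests:
        cpu: "2"             # 2 CPU cores, as in the example above
        memory: 20Gi         # 20GB of memory
```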

The controller manager makes sure the actual state of the cluster matches the desired state. It detects cluster state changes, such as crashing pods. When pods die, the controller manager tries to recover the cluster state: it makes a request to the scheduler to reschedule those dead pods. The same cycle then repeats: the scheduler decides, based on resource calculations, which worker nodes should restart those pods, and makes requests to the corresponding kubelets on those worker nodes to actually restart them. The four controllers behind the controller manager are:

  • Replication controller: makes sure the pods are always up and available.

  • Endpoints controller

  • Namespace controller

  • Service account controller

A worker node is basically any VM or physical server where the containers are deployed.

The kubelet is the node agent that runs on each worker node inside the cluster. It makes sure containers are running inside the pods. It always listens to the API server, for example for pod creation requests, and sends success and failure updates to the Kubernetes master. If the kubelet notices any issues with the pods running on its worker node, it tries to restart the pod on the same node. If the fault is with the worker node itself, the Kubernetes master detects the node failure and recreates the pod on another healthy node.

Kube-proxy is responsible for maintaining the network configuration. It is a core networking component of Kubernetes: it maintains the distributed network across all pods and all nodes, both inside and outside the cluster, and maintains the network rules that make each pod reachable at its IP address.

A pod is the basic scheduling unit in Kubernetes. A pod is a group of one or more containers deployed together on the same host. With the help of a pod, you can deploy multiple dependent containers together. We cannot start a container without a pod; we manage containers through pods, and Kubernetes users configure and interact with pods rather than with containers directly.

A pod is basically a wrapper around containers. Each worker node can host multiple pods, and each pod can contain multiple containers, but usually you have one pod per application. Each pod gets its own IP address from a virtual network inside the Kubernetes cluster. You always work with pods, which are an abstraction layer over containers; the pod is the Kubernetes component that manages the containers running inside it. For example, if a container stops or dies inside a pod, it is automatically restarted inside that pod.

However, pods are ephemeral components: they can die frequently, and when a pod dies, a new one is created with a new IP address. For example, if your application talks to a database pod through its IP address, this is inconvenient, because you need that address to stay stable. To solve this, another Kubernetes component comes into the picture: the Service (covered in detail in a later blog). A service has its own stable virtual IP address and sits in front of the pods, and pods talk to each other through services. Now, if a pod behind a service dies and gets recreated, the service stays in place, because a service's lifecycle is not tied to the pod's lifecycle.
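As a sketch of this pattern (all names, labels, and the image are placeholders), a Service selects pods by label, so clients get one stable address no matter how often the pods behind it are recreated:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db-service          # hypothetical service name; clients use this, not pod IPs
spec:
  selector:
    app: db                 # routes traffic to any pod carrying this label
  ports:
  - port: 5432              # example port
    targetPort: 5432
---
apiVersion: v1
kind: Pod
metadata:
  name: db-pod
  labels:
    app: db                 # matched by the service selector above
spec:
  containers:
  - name: db
    image: postgres:16      # example image
```

If `db-pod` dies and is recreated with a new IP, `db-service` keeps routing to whichever pod carries the `app: db` label, so clients never notice.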

Containers reside inside pods. Containers are the runtime environments for containerized applications, and they are designed to run microservices.

One of the best things about using Kubernetes is that the platform helps you drive better business productivity. Since it eliminates the need for most manual processing, you can enhance productivity and drive results. Kubernetes automates many processes, making your business much more efficient.

The other great thing about using Kubernetes is that you can finally ditch the conventions and benefit from the multi-cloud capability. You can keep your workloads in a single cloud or spread them across different cloud platforms. As a result, you can make your infrastructure more resilient. And, as a bonus, you can take advantage of the best services each cloud platform has to offer, maybe also lowering your overall costs.

Of all the advantages of using Kubernetes, the affordability of the platform is one of the most important perks. As an example, the Kubernetes cluster management fee is calculated to be $0.10 per hour for each Google Kubernetes Engine (GKE) cluster.

The stability of an application will make the difference between a performant and a non-performant application. Fortunately, this will be the least of concerns for people using Kubernetes as the platform offers unmatched stability. No matter how feature-rich or complex your application might be, you can always rely on Kubernetes's stability.

Kubernetes allows you to roll out updates easily and efficiently. This way, you can quickly give customers the new features, performance improvements, and bug fixes they require.

In conclusion, Kubernetes is a powerful container orchestration tool that can improve scalability and efficiency and help achieve faster deployments in modern software development workflows. In this post, we have covered the definition, purpose, key features, architecture, and benefits of getting started with Kubernetes. I hope this introduction has given you a solid understanding of the technology and set the stage for the remaining posts in this series. In the upcoming posts, we will dive deeper into Kubernetes concepts like deployments, stateful sets, pods, and services. Stay tuned and keep learning.