Monolithic Applications and the Rise of Microservices
A monolithic application is a software architecture in which all the components of an application are tightly coupled in a single codebase and deployed as a single unit. Say you have a website whose frontend, backend, database, networking, and messaging components are all bundled together and deployed as one entity: that is a monolithic application. This creates problems when operating the application. For example, if you want the frontend to run on App Server 1, the backend automatically runs on Server 1 as well, because all the components are part of one application and cannot run separately; you cannot deploy a single component on its own. Likewise, to change a single component you have to rebuild and redeploy the entire application. This was the main problem everyone faced. Who solves it? Enter microservices.
In microservices, each component runs as its own application inside a container. Although every component runs as a separate application, together they still form one logical application, so they need some way to communicate with each other. They communicate over the network, commonly through a service mesh.
Why is a container orchestration tool needed?
With the rise of microservices, containers started running at a very large scale, and it became difficult to manage them all manually. A container orchestration tool solves this problem, and one of the most popular is Kubernetes, also called K8s.
What is Kubernetes?
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
What problems does it actually solve?
Auto Scaling: Kubernetes enables automatic scaling of applications based on demand, allowing them to handle increased traffic and workload without manual intervention.
High Availability or Zero Downtime: Kubernetes ensures the high availability of applications by automatically restarting containers that fail, replacing and rescheduling containers on failed nodes, and distributing application components across multiple nodes in the cluster.
Load balancing: Kubernetes includes built-in load balancing that distributes incoming traffic across the containers providing a service, ensuring efficient utilization of resources.
Backup and Recovery: Kubernetes heals damaged containers automatically, and because the cluster's desired state is captured in declarative configuration files, that state can be backed up and restored when something goes wrong.
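As a concrete example of the auto-scaling point above, here is a sketch of a HorizontalPodAutoscaler manifest. The Deployment name `nginx-deployment` and the thresholds are assumptions for illustration:

```yaml
# Sketch: scale a Deployment between 2 and 10 replicas based on CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:             # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment    # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above ~70% average CPU
```

Kubernetes then adds or removes pods on its own as traffic rises and falls, which is exactly the "no manual intervention" promise described above.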
Main components of K8s
- Pod
A pod is the smallest deployable unit in K8s. It is an abstraction over the container: a logical group of one or more containers that are scheduled together on the same host and share the same network namespace. Containers within a pod share the same IP address and can communicate with each other via localhost.
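A minimal Pod manifest looks like the sketch below; the name `my-pod` and the nginx image are placeholders:

```yaml
# Sketch of a single-container Pod.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod          # placeholder name
spec:
  containers:
    - name: web
      image: nginx:latest
      ports:
        - containerPort: 80   # port the container listens on
```

In practice pods are rarely created directly; they are usually managed by a Deployment or StatefulSet, as described later.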
- Service and Ingress
Service is an abstraction that defines a logical set of pods and a policy for accessing them. It acts as a stable network endpoint for a group of pods that perform the same function. Services enable internal communication between pods and provide a way to expose applications within the cluster to other services or external clients.
An Ingress is an API object in Kubernetes that provides external access to services within the cluster, acting as a smart router or entry point for HTTP and HTTPS traffic.
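The two objects above can be sketched together as follows. The names (`nginx-service`, `nginx-ingress`) and the hostname `example.local` are placeholders:

```yaml
# Sketch: a Service exposing pods labeled app=nginx, plus an Ingress routing to it.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx          # routes traffic to pods carrying this label
  ports:
    - port: 80          # port the Service exposes
      targetPort: 80    # container port to forward to
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
    - host: example.local     # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
```

The Service gives the pods one stable internal endpoint; the Ingress then maps external HTTP(S) requests onto that Service.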
- ConfigMap and Secret
A ConfigMap is an API object that stores configuration data in key-value pairs. It allows you to decouple configuration details from the container images, making it easier to manage and modify configuration settings without rebuilding the container.
A Secret is an API object used to store and manage sensitive information, such as passwords, API keys, or TLS certificates.
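A minimal sketch of both objects; the names and keys are illustrative. Note that Secret values are base64-encoded, not encrypted, by default:

```yaml
# Sketch: plain configuration in a ConfigMap, sensitive data in a Secret.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_HOST: "db-service"        # plain-text key-value pair
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=    # base64-encoded value ("password")
```

Pods can consume both as environment variables or mounted files, so the image itself never has to change when the configuration does.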
- Volume
In Kubernetes (K8s), a volume is an abstraction that represents a piece of storage in the cluster that can be mounted into a container. It provides a way for containers to store and access data persistently.
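A simple sketch of a volume mount, using an `emptyDir` volume (which lives as long as the pod; truly persistent data would normally use a PersistentVolumeClaim instead). The names are placeholders:

```yaml
# Sketch: a Pod mounting an emptyDir volume at /cache.
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: app
      image: nginx:latest
      volumeMounts:
        - name: cache           # must match a volume name below
          mountPath: /cache     # where the volume appears in the container
  volumes:
    - name: cache
      emptyDir: {}              # scratch space shared by the pod's containers
```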
- Deployment and StatefulSets
In Kubernetes, a Deployment is a higher-level resource that provides a declarative way to manage and scale a set of identical pods. It is primarily used for stateless applications, where each instance of the application is independent and can be scaled up or down without affecting the application's functionality.
StatefulSets, on the other hand, are specifically designed for managing stateful applications in Kubernetes. Stateful applications are those that require stable and unique network identities, stable storage, and ordered deployment and scaling. Examples include databases, key-value stores, and messaging systems.
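A StatefulSet sketch illustrating the points above: pods get stable ordered names (`db-0`, `db-1`) and each replica gets its own persistent volume. The names, image, and storage size are assumptions:

```yaml
# Sketch: a two-replica stateful database workload.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: "db"           # headless Service providing stable network identities
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:15
          ports:
            - containerPort: 5432
  volumeClaimTemplates:       # each replica gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```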
Architecture of K8s
A Kubernetes cluster is the combination of worker nodes and a control plane.
Control plane
The control plane manages and controls the entire Kubernetes cluster.
- API Server
The API server serves as the primary interface for interacting with the cluster. Every operation on the cluster, whether from kubectl, other control-plane components, or the worker nodes, goes through the API server.
- ETCD
etcd is a distributed key-value store. It is a critical component of the control plane, used for storing and managing the state of all the cluster's resources.
- Scheduler
As the name suggests, the Scheduler is responsible for assigning new work to a suitable (typically less busy) worker node after receiving a request from the API server.
- Controller
In Kubernetes (K8s), controllers are key components of the control plane responsible for managing and maintaining the desired state of resources within the cluster.
Worker node
A worker node is like a server where all the containers are running.
- Kubelet
Kubelet is an essential component of the Kubernetes (K8s) cluster responsible for managing and maintaining individual nodes in the cluster. It runs on each node and ensures that the containers are running.
- Kubeproxy
The kube-proxy is a component that runs on each node in the cluster. Its primary responsibility is to handle network routing for services.
- Container Runtime
The container runtime (for example containerd or Docker) manages containers: it pulls images, runs containers from those images, and starts and stops them.
List of kubectl commands
- kubectl get nodes: Shows the nodes.
- kubectl get pods: Shows the pods.
- kubectl get services: Shows the services.
- kubectl create deployment nginx-depl --image=nginx: Creates an Nginx deployment from the nginx image.
- kubectl get deployments: Shows all the deployments.
- kubectl get replicasets: Shows all the replica sets.
- kubectl edit deployment nginx-depl: Opens the deployment's configuration for editing.
- kubectl logs pod_name: Provides the logs of the respective pod.
- kubectl describe pod pod_name: Provides detailed information about the pod.
- kubectl exec -it pod_name -- /bin/bash: Opens a shell inside the container.
- kubectl delete deployment deployment_name: Deletes the deployment.
Creating an nginx deployment configuration file
First of all, create the nginx_deployment.yaml file (for example with the touch command) and add the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
Then run the below command to create the nginx deployment
kubectl apply -f nginx_deployment.yaml
K8s configuration file
A Kubernetes configuration file, also known as a manifest file, is a YAML or JSON file that defines the desired state of Kubernetes resources. It provides a declarative way to specify how applications and infrastructure should be deployed and managed within a Kubernetes cluster.
Apart from the apiVersion and kind fields at the top, which identify the resource type, the configuration file is divided into three parts:
Metadata: The metadata section provides information about the resource being defined. It includes details such as the name of the resource, labels for identification and grouping, and annotations for additional information.
Specification: The specification section defines the desired state of the Kubernetes resource. It contains the configuration parameters and settings required to create or update the resource.
Status: This is auto-generated by K8s after deployment and holds the current status of the cluster.
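The three parts can be seen in a small annotated example. The Service below is purely illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:               # 1. Metadata: name, labels, annotations
  name: demo-service
  labels:
    app: demo
spec:                   # 2. Specification: the desired state
  selector:
    app: demo
  ports:
    - port: 80
# 3. Status: added and updated by Kubernetes itself after the object
#    is created; it is never written by hand in the manifest.
```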