Table of contents
- 1. What is Kubernetes (K8s) and why is it important?
- 2. What is the difference between Docker Swarm and K8s?
- 3. Explain the working of the master node in K8s?
- 4. What are the main components of K8s Architecture?
- 5. What is the role of Kube-API Server?
- 6. What is the job of the kube-scheduler?
- 7. What is Kubectl?
- 8. What is Kube-proxy?
- 9. What is etcd?
- 10. What is Kubelet?
- 11. What is the K8s controller manager?
- 12. What are Pods?
- 13. How does K8s handle network communication between containers?
- 14. How does K8s handle scaling of applications?
- 15. What is a K8s Deployment and how does it differ from a ReplicaSet?
- 16. Can you explain the concept of rolling updates in K8s?
- 17. How does K8s handle network security and access control?
- 18. Can you give an example of how K8s can be used to deploy a highly available application?
- 19. What is a namespace in K8s? Which namespace does a pod take if we don’t specify any namespace?
- 20. Name the initial namespaces from which Kubernetes starts?
- 21. What is the LoadBalancer in Kubernetes?
- 22. How does Ingress help in K8s?
- 23. What are Services in K8s?
- 24. Explain different types of services in K8s?
- 25. Explain the concept of Self-healing in K8s and how it works?
- 26. How does K8s handle storage management for containers?
- 27. How does the NodePort service work?
- 28. What is a Multi-node Cluster and a Single-node Cluster in K8s?
- 29. Difference between create and apply in K8s?
- 30. What are ConfigMaps and Secrets in K8s?
- 31. What are Daemon sets?
Below are some basic Kubernetes interview questions along with the answers.✍
With the widespread adoption of containers among organizations, Kubernetes, the container-centric management software, has become a standard for deploying and operating containerized applications and is one of the most important parts of DevOps.
Originally developed at Google and released as open source in 2014, Kubernetes builds on 15 years of Google's experience running containerized workloads and on valuable contributions from the open-source community.
1. What is Kubernetes (K8s) and why is it important?
Kubernetes (K8s) is an open-source container orchestration tool that helps you orchestrate and manage your container infrastructure on-premises or in the cloud.
It is important because it simplifies the management of containers, ensures high availability and scalability, and facilitates rolling updates, making it easier to maintain and scale applications in a dynamic, cloud-native environment.
2. What is the difference between Docker Swarm and K8s?
Docker Swarm is a native clustering and orchestration tool for Docker, while Kubernetes is a more comprehensive, open-source container orchestration platform that can manage containers created with Docker and other container runtimes.
Docker Swarm helps end users create and deploy a cluster of Docker nodes, whereas Kubernetes offers more advanced features such as automatic load balancing, rolling updates, and better scalability.
Here is a table that summarizes the key differences between Docker Swarm and Kubernetes:
| Feature | Docker Swarm | Kubernetes |
| --- | --- | --- |
| Ease of use | Easier to set up and use | More complex to set up and use |
| Features | Fewer features | More features, including support for stateful applications, self-healing, and auto-scaling |
| Community | Smaller community | Larger community |
| Popularity | Less popular | More popular |
3. Explain the working of the master node in K8s?
The master node is the node that controls and manages the set of worker nodes; together they form a Kubernetes cluster. The master node is responsible for cluster management and exposes the API used to configure and manage resources within the cluster. The master node components can themselves run inside Kubernetes as a set of dedicated pods.
4. What are the main components of K8s Architecture?
A Kubernetes cluster is the basic deployment architecture of Kubernetes. It consists of two parts:
- The control plane/master node: The control plane is the nerve center that houses the components which control the cluster.
- The nodes/worker nodes: Managed by the control plane, the worker nodes are the machines that run the containers.
Each of these parts has its own components:
- The master node has the API server, etcd, the controllers, and the scheduler.
- The worker node has the service proxy (kube-proxy) and the kubelet.
5. What is the role of Kube-API Server?
The API Server is the front end of the K8s control plane; it supports updates, scaling, and other kinds of lifecycle orchestration by providing APIs for various types of applications.
It provides a REST API that enables users to interact with the cluster and manage its resources. It acts as the primary interface for users and external systems to interact with the cluster and perform actions such as creating or updating deployments, scaling applications, or monitoring the state of the cluster.
6. What is the job of the kube-scheduler?
The K8s scheduler keeps track of resource usage on each compute node, determines whether new containers (pods) need to be placed, and, if so, decides which node they should run on based on available capacity and scheduling constraints.
7. What is Kubectl?
Kubectl is a CLI (command-line interface) used to run commands against Kubernetes clusters. It controls the Kubernetes cluster manager through various commands that create and manage Kubernetes components.
8. What is Kube-proxy?
Kube-proxy is an implementation of a load balancer and network proxy used to support service abstraction along with other networking operations. Kube-proxy is responsible for directing traffic to the right container based on the IP and port number of incoming requests.
9. What is etcd?
Kubernetes uses etcd as a distributed key-value store for all of its data, including metadata and configuration data, and it allows nodes in Kubernetes clusters to read and write data.
10. What is Kubelet?
The kubelet is a service agent that controls and maintains a set of pods by watching for pod specs through the Kubernetes API server. It preserves the pod lifecycle by ensuring that a given set of containers are all running as they should. The kubelet runs on each node and enables communication between the control plane and the worker nodes.
11. What is the K8s controller manager?
The controller manager is a daemon that embeds the core control loops, handles garbage collection, and manages namespace creation. Although the individual controllers are logically separate processes, they are compiled into a single binary and run as a single process on the master node.
12. What are Pods?
Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. Pods are ephemeral by nature; if a pod (or the node it runs on) fails, Kubernetes can automatically create a new replica of that pod to continue operations.
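As a minimal illustration (the name and image are arbitrary), a Pod can be declared like this:

```yaml
# pod.yaml - a hypothetical single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25      # any container image works here
      ports:
        - containerPort: 80  # port the container listens on
```

It can be created with `kubectl apply -f pod.yaml` and inspected with `kubectl get pods`.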
13. How does K8s handle network communication between containers?
Kubernetes handles network communication between containers in two main ways:
- Pod-to-pod communication: Containers in the same pod share the same network namespace, which means they can communicate with each other using localhost. Kubernetes uses a Container Network Interface (CNI) plugin to manage the networking between pods; the CNI plugin is responsible for allocating IP addresses to pods and enabling pods to communicate with each other.
- Pod-to-pod communication using services: Services are a way to expose ports on a pod to other pods in the cluster. To communicate with another pod using a service, a pod can simply send traffic to the service's IP address.
Services are the recommended way for pods to communicate with each other in Kubernetes (a minimal example is sketched below). Services provide a number of benefits, such as:
- Load balancing: Services can distribute traffic across multiple pods, which can improve performance and reliability.
- Service discovery: Services provide a DNS name for a group of pods, which makes it easier for pods to find each other.
- Abstraction: Services hide the implementation details of the underlying network topology, which makes it easier to deploy and manage applications.
Kubernetes also provides a number of other networking features, such as network policies and ingress controllers. These features can be used to control the flow of traffic in and out of your cluster.
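As a rough sketch (the `backend` name, label, and ports are made up for illustration), a ClusterIP Service that lets other pods reach a group of backend pods by a stable name could look like this:

```yaml
# A hypothetical ClusterIP Service; pods in the same namespace can reach it
# at http://backend (or backend.default.svc.cluster.local from elsewhere).
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # traffic goes to pods carrying this label
  ports:
    - port: 80          # port exposed by the service
      targetPort: 8080  # port the backend containers listen on
```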
14. How does K8s handle scaling of applications?
Kubernetes handles the scaling of applications in two ways:
- Horizontal scaling: increasing or decreasing the number of replicas of a pod. This can be done manually or automatically using a HorizontalPodAutoscaler (HPA); a sketch follows below.
- Vertical scaling: increasing or decreasing the resources allocated to a pod, such as CPU and memory. This can be done manually or automatically using a VerticalPodAutoscaler (VPA).
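A minimal HorizontalPodAutoscaler sketch (the target Deployment name `web` and the thresholds are assumptions for illustration):

```yaml
# Scales the hypothetical "web" Deployment between 2 and 10 replicas,
# aiming to keep average CPU utilization around 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that the HPA relies on the metrics server being installed in the cluster to read CPU usage.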
15. What is a K8s Deployment and how does it differ from a ReplicaSet?
A Kubernetes Deployment tells Kubernetes how to create or modify instances of the pods that hold a containerized application. Deployments are designed for stateless applications. They provide a number of features that make it easier to manage and deploy your applications, such as rolling updates, rollbacks, and self-healing.
ReplicaSets, on the other hand, are lower-level objects responsible for ensuring that a specified number of replicas of your application are running. ReplicaSets do not provide the same features as Deployments, such as rolling updates, rollbacks, and self-healing.
Here is a table that summarizes the key differences between Deployments and ReplicaSets:
| Feature | Deployment | ReplicaSet |
| --- | --- | --- |
| Level | Higher-level | Lower-level |
| Features | Rolling updates, rollbacks, self-healing | None |
| Use case | Managing and updating a set of Pods | Ensuring that a specified number of replicas of an application are running |
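A minimal Deployment sketch (the `web` name, label, and image are illustrative): the Deployment creates and manages a ReplicaSet, which in turn keeps three pod replicas running.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # the underlying ReplicaSet keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:                   # pod template used for every replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```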
16. Can you explain the concept of rolling updates in K8s?
Rolling updates in Kubernetes involve gradually replacing old Pods with new ones to ensure zero downtime during updates. It’s achieved by creating new Pods with the updated application version and then gradually scaling down the old Pods.
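The behaviour is controlled by the Deployment's update strategy. A fragment like the following (the values are illustrative) could be added to the Deployment sketched in the previous question:

```yaml
# Fragment of a Deployment spec: during an update, at most one extra pod is
# created (maxSurge) and at most one pod is unavailable (maxUnavailable).
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
```

A rollout can then be triggered by changing the container image (for example with `kubectl set image deployment/web web=nginx:1.26`) and followed with `kubectl rollout status deployment/web`.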
17. How does K8s handle network security and access control?
Kubernetes handles network security and access control in a number of ways, including:
- Network policies: Network policies allow you to control the flow of traffic between pods within a cluster. They can be used to restrict pods from talking to each other, or to restrict pods from talking to certain services or IP addresses (a sketch follows after this list).
- Pod security policies: Pod security policies allow you to control the capabilities of pods. They can be used to restrict pods from running certain commands or accessing certain files or directories.
- Role-based access control (RBAC): RBAC allows you to control who has access to Kubernetes resources. It can be used to restrict users from creating or deleting pods, or from accessing certain services.
- Network isolation: Kubernetes clusters are isolated from each other by default, so pods in one cluster cannot communicate with pods in another cluster.
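As a sketch of a network policy (the labels and port are assumptions for illustration), the following allows only pods labelled `app=frontend` to reach pods labelled `app=backend` on port 8080 and denies all other ingress to those pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:            # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:    # only pods with this label may connect
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that network policies only take effect if the cluster's CNI plugin supports them.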
18. Can you give an example of how K8s can be used to deploy a highly available application?
To deploy a highly available application using Kubernetes, you would typically follow these steps:
1. Create a Deployment object for your application. The Deployment specifies the number of replicas of your application that you want to run and the container image to use.
2. Create a Service object for your application. The Service exposes your application on a specific port.
3. Configure a load balancer to distribute traffic to your application. The load balancer can be a cloud-based load balancer or a hardware load balancer.
Once you have completed these steps, your application will be highly available. This is because Kubernetes will automatically restart failed Pods, and the load balancer will distribute traffic across the healthy Pods.
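Putting the steps together, a hedged sketch might look like the following (the `shop` name, image, and ports are invented for illustration); applying it with `kubectl apply -f` gives three replicas behind a cloud load balancer:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop
spec:
  replicas: 3                        # three pods for high availability
  selector:
    matchLabels:
      app: shop
  template:
    metadata:
      labels:
        app: shop
    spec:
      containers:
        - name: shop
          image: myregistry/shop:1.0 # hypothetical application image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: shop
spec:
  type: LoadBalancer                 # the cloud provider provisions the LB
  selector:
    app: shop
  ports:
    - port: 80
      targetPort: 8080
```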
19. What is a namespace in K8s? Which namespace does a pod take if we don’t specify any namespace?
In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces.
Namespaces also facilitate resource management, monitoring, and troubleshooting, making it easier to organize, secure, and scale Kubernetes deployments.
If we do not specify a namespace when creating a pod, the pod is created in the default namespace. The default namespace is a special namespace that is created automatically when a Kubernetes cluster is set up.
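For illustration (the `staging` namespace and pod names are arbitrary), a namespace and a pod placed in it could be declared like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: staging   # omit this field and the pod lands in "default"
spec:
  containers:
    - name: demo
      image: nginx:1.25
```

The same namespace could also be created imperatively with `kubectl create namespace staging`.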
20. Name the initial namespaces from which Kubernetes starts?
The initial namespaces Kubernetes starts with are:
- default
- kube-system
- kube-public
(Newer Kubernetes versions also create kube-node-lease.)
21. What is the LoadBalancer in Kubernetes?
The LoadBalancer service is used to expose services to the internet. A Network load balancer, for example, creates a single IP address that forwards all traffic to your service.
22. How does Ingress help in K8s?
Ingress in Kubernetes helps control incoming traffic to your applications. It allows you to expose your applications to the outside world without exposing them directly to the internet. It is typically used in conjunction with a load balancer, which distributes traffic to your application based on the rules specified in the Ingress object.
The Ingress object specifies the rules for routing traffic to your applications.
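A minimal Ingress sketch (the host, path, and the `web` backend Service are assumptions; an ingress controller such as NGINX Ingress must be installed for it to do anything):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.com          # requests for this host...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # ...are routed to the "web" Service
                port:
                  number: 80
```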
23. What are Services in K8s?
A Kubernetes service is a logical abstraction for a deployed group of pods in a cluster (which all perform the same function).
Since pods are ephemeral, a service enables a group of pods that provide specific functions (web services, image processing, etc.) to be assigned a name and a unique IP address (the clusterIP). As long as the service is running, that IP address does not change. Services also define policies for their access.
24. Explain different types of services in K8s?
There are four different types of services in Kubernetes:
- ClusterIP Service: Exposes a service that is only accessible from within the cluster. This service creates a virtual IP inside the cluster to enable communication between different services.
- NodePort Service: Exposes a service via a static port on each node's IP. This service listens on a port on the node and forwards requests on that port to a port on the pod running the application (see the sketch after this list).
  - Port on the node: used to access the application externally. It can only be in a valid range, which by default is 30000-32767.
  - Port on the pod: where the actual application is running, also known as the targetPort, because this is where the service forwards the request.
- LoadBalancer Service: Exposes the service via the cloud provider's load balancer. For clusters running on public cloud providers like AWS or Azure, creating a LoadBalancer service is equivalent to a ClusterIP service extended with an external load balancer specific to the cloud provider. Kubernetes automatically creates the load balancer, provisions firewall rules if needed, and populates the service with the external IP address assigned by the cloud provider.
- ExternalName Service: Maps a service to a predefined externalName field by returning a value for the CNAME record.
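A NodePort sketch showing the three ports involved (the names and numbers are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web            # pods this service routes to
  ports:
    - port: 80          # cluster-internal service port
      targetPort: 8080  # port the container listens on
      nodePort: 30080   # opened on every node; must be in 30000-32767
```

The application would then be reachable from outside the cluster at `http://<any-node-ip>:30080`.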
25. Explain the concept of Self-healing in K8s and how it works?
Self-healing in Kubernetes is the process by which Kubernetes automatically detects and recovers from pod failures. This is done by continuously monitoring the health of pods and restarting failed pods.
Kubernetes uses a number of mechanisms to achieve self-healing, including:
- Liveness probes: used to check whether a pod is still alive. If a pod fails a liveness probe, Kubernetes restarts it.
- Readiness probes: used to check whether a pod is ready to receive traffic. If a pod fails a readiness probe, Kubernetes does not direct traffic to it.
- Restart policies: define how Kubernetes should restart failed pods. For example, a restart policy of Always always restarts failed containers, while a restart policy of Never never restarts them.
Self-healing is a key feature of Kubernetes that helps ensure the reliability of your applications. By automatically restarting failed pods, K8s keeps your applications running even when there are problems (a probe example is sketched below).
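A hedged example of a pod with both probes (the image and probe paths are illustrative; a real application would need to serve those endpoints):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-probes
spec:
  restartPolicy: Always        # restart containers whenever they fail
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:           # failing this probe restarts the container
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 10
        periodSeconds: 5
      readinessProbe:          # failing this probe removes the pod from service endpoints
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5
```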
26. How does K8s handle storage management for containers?
Kubernetes provides a variety of ways to manage storage for containers. The most common way is to use PersistentVolumes.
PersistentVolumes (PVs) are Kubernetes resources that provide persistent storage to containers. PersistentVolumes can be backed by a variety of storage providers, such as local disk, NFS, or cloud storage providers.
Once you have created a PersistentVolume, you can create a PersistentVolumeClaim (PVC). A PVC is a request for storage made on behalf of a pod. When you create a PVC, Kubernetes automatically binds it to a matching PersistentVolume.
Once a PVC is bound to a PersistentVolume, you can mount the PersistentVolume to a container using a volume mount. A volume mount is a configuration option that tells Kubernetes to mount the PersistentVolume to a specific directory in the container.
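A sketch of a claim and a pod that mounts it (the names, size, and mount path are assumptions; the claim also assumes a default StorageClass exists to provision the volume):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce          # mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /data   # where the volume appears inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc  # binds the pod to the claim above
```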
27. How does the NodePort service work?
A NodePort service in Kubernetes is a type of service that exposes an application running inside the cluster on a specific port of each node in the cluster. This type of service makes the application accessible from outside the cluster by binding a port on each node's IP address to the service.
28. What is a Multi-node Cluster and a Single-node Cluster in K8s?
A Multi-node Cluster consists of multiple worker nodes, while a Single-node Cluster has only one node.
Multi-node clusters are typically used in production environments for high availability, while single-node clusters are useful for development and testing.
Here is a table that summarizes the key differences between Multi-node and Single-node Kubernetes Clusters:
| Feature | Multi-node Cluster | Single-node Cluster |
| --- | --- | --- |
| Number of nodes | Multiple nodes | Single node |
| Scalability | Scalable | Not scalable |
| Reliability | More reliable | Less reliable |
| Performance | Can offer better performance | Can offer good performance, but not as good as a multi-node cluster |
| Use cases | Production, development, and testing | Development and testing, and small production workloads |
29. Difference between create and apply in K8s?
kubectl create is used to create a new resource; if the resource already exists, it returns an error. kubectl apply is used to create or update a resource, updating it if it already exists or creating it if it doesn't, which is why it is often used for declarative configuration management.
In Kubernetes, both kubectl create and kubectl apply are commands used to create or update Kubernetes resources, such as pods, services, deployments, and ConfigMaps, based on the definitions provided in YAML or JSON files.
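For illustration (the ConfigMap and file name are invented), the difference shows up when the same manifest is submitted twice; the kubectl commands are shown as comments:

```yaml
# demo-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  LOG_LEVEL: info

# kubectl create -f demo-config.yaml   # first run: creates the ConfigMap
# kubectl create -f demo-config.yaml   # second run: error, it already exists
# kubectl apply -f demo-config.yaml    # creates it, or patches it if it exists
```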
30. What are ConfigMaps and Secrets in K8s?
ConfigMaps: ConfigMaps are used to store configuration data in key-value pairs. They are versatile and can store plain text, configuration files, or environment variables. ConfigMaps help in decoupling configuration from application code, making it easier to manage and update configurations without modifying the application itself.
Secrets: Secrets are dedicated to managing sensitive information such as passwords, API keys, and other confidential data. Kubernetes stores Secrets separately from pod specs and images (base64-encoded, with optional encryption at rest), so sensitive information is not embedded in plain text in application configuration. Secrets provide a way to share confidential data between containers and pods while maintaining security.
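A hedged example of each (the keys and values are invented; the Secret value is base64-encoded, `cGFzc3dvcmQ=` decodes to `password`):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: production
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=   # base64 of "password"
```

Either object can then be exposed to a container as environment variables (`envFrom`) or mounted as files.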
31. What are Daemon sets?
A DaemonSet ensures that a copy of a pod runs on each node (or a selected set of nodes) in the cluster. DaemonSets are used for host-level concerns such as networking agents or node monitoring, which you typically need to run exactly once per host.
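A minimal DaemonSet sketch (the log-collector name and image are assumptions for illustration); Kubernetes schedules exactly one of these pods on every node, including nodes added later:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: fluentd
          image: fluentd:v1.16   # hypothetical log-collection agent image
```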
You can go through the detailed blogs regarding Linux, GitHub, Docker, Jenkins, Kubernetes from here: Smriti's Blog.🎇
In this blog, I have put my heart to collect interview questions on Kubernetes. If you have any questions or would like to share your experiences, feel free to leave a comment below👇. Don’t forget to read my blogs, hope you find it helpful🤞 and connect with me on LinkedIn and let’s have a conversation.✨
👆The information presented above is based on my interpretation. Suggestions are always welcome.😊
~Smriti Sharma✌