Cracking Kubernetes: Mastering Key Concepts and Top Interview Questions

A Deep Dive into the Essential Concepts and Best Practices for Kubernetes Mastery

Ajit Fawade
16 min read · Oct 13, 2023

Kubernetes is one of the most popular and widely used tools for container orchestration and management. It is an open-source platform that allows you to automate the deployment, scaling, and management of your containerized applications across multiple hosts. Kubernetes is also known as K8s, an abbreviation in which the 8 stands for the eight letters between the "K" and the "s".

If you are preparing for a DevOps or Kubernetes interview, you need to be well-versed in the basic concepts and features of Kubernetes. You also need to be able to demonstrate your practical skills and knowledge by answering some common Kubernetes interview questions.

To help you ace your interview, I have compiled a comprehensive list of the top 16 Kubernetes interview questions and answers. These questions cover a wide range of topics, such as Kubernetes architecture, container concepts, deployment strategies, networking, scaling, security, troubleshooting, and integration with other tools.

So, without further ado, let’s dive into the blog post and learn the top Kubernetes interview questions and answers.

Top 16 Kubernetes Interview Questions and Answers:

1. What is Kubernetes and why is it important?

Kubernetes is an open-source platform that allows you to automate the deployment, scaling, and management of your containerized applications across multiple hosts. It was originally developed by Google based on their experience of running production workloads at scale. It is now maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes is important because it provides several benefits for your applications, such as:

  • Portability: You can run your applications on any platform that supports Kubernetes, such as on-premises, cloud, or hybrid environments. You can also migrate your applications from one platform to another without any hassle.
  • Scalability: You can scale your applications up or down based on the demand or performance. You can also use horizontal or vertical scaling methods to adjust the number or size of your pods (the smallest unit of deployment in Kubernetes).
  • Availability: You can ensure that your applications are always available and resilient to failures. You can use features such as replication, load balancing, health checks, self-healing, and rolling updates to achieve high availability.
  • Efficiency: You can optimize the utilization of your resources and reduce operational costs. You can use features such as resource limits, requests, quotas, and autoscaling to allocate and manage your resources effectively.
  • Security: You can protect your applications from unauthorized access or exposure. You can use features such as network policies, service accounts, secrets, encryption, authentication, and authorization to enforce network security and access control.
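To make the efficiency point concrete, resource requests and limits are declared per container in the pod spec. Here is a minimal sketch (the pod name, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical name
spec:
  containers:
  - name: demo-app
    image: nginx:1.25     # any container image works here
    resources:
      requests:           # what the scheduler reserves for the container
        cpu: "250m"       # 0.25 of a CPU core
        memory: "128Mi"
      limits:             # hard caps enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

The scheduler places the pod on a node with at least the requested resources free, while the limits cap what the container may actually consume.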

2. What is the difference between Docker Swarm and Kubernetes?

Docker Swarm and Kubernetes are both tools for container orchestration and management. They both allow you to create clusters of nodes (servers) that run containers and provide features such as service discovery, load balancing, scaling, scheduling, networking, and security.

However, there are some differences between Docker Swarm and Kubernetes in terms of their architecture, functionality, complexity, and maturity.

Some of the main differences are:

  • Architecture: Docker Swarm has a comparatively simple architecture: manager nodes maintain the cluster state via the Raft consensus algorithm, worker nodes run the containers, and nodes discover each other using a gossip protocol. Kubernetes follows a more elaborate architecture with a dedicated control plane, consisting of components such as the API server, controller manager, scheduler, and etcd, that manages the worker nodes running the containers.
  • Functionality: Docker Swarm covers the basics of container orchestration: replicated and global services, built-in round-robin load balancing, rolling updates, and health checks. Kubernetes goes considerably further, with richer workload controllers (Deployments, DaemonSets, StatefulSets), multiple service types (ClusterIP, NodePort, LoadBalancer), ingress controllers, self-healing, autoscaling, and a large ecosystem of extensions.
  • Complexity: Docker Swarm is easier to set up and use: its command-line interface and configuration format are simpler and more intuitive, and it requires less operational overhead and maintenance. Kubernetes has a steeper learning curve and more moving parts to operate.
  • Maturity: Kubernetes is the more mature and widely adopted platform, with more features, integrations, community support, and documentation. Docker Swarm's feature set, ecosystem, and development activity are considerably smaller.

3. How does Kubernetes handle network communication between containers?

Kubernetes handles network communication between containers using the following concepts:

  • Pod: A pod is a group of one or more containers that share the same network namespace and IP address. Containers within a pod can communicate with each other using localhost. Pods are the smallest unit of deployment in Kubernetes.
  • Service: A service is an abstraction that defines a logical set of pods and a policy to access them. A service provides a stable and consistent IP address and DNS name for the pods, regardless of where they are scheduled or how they are scaled. Services can be of different types, such as ClusterIP, NodePort, LoadBalancer, or ExternalName. Services allow pods to communicate with each other across nodes or clusters.
  • Ingress: An ingress is an API object that defines rules to expose services to external traffic. An ingress controller is responsible for fulfilling the ingress rules by routing the traffic to the appropriate services. An ingress can provide features such as load balancing, SSL termination, name-based virtual hosting, etc. Ingress allows pods to communicate with external clients or services.
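The pod-level networking described above can be seen in a minimal two-container pod sketch (names and images are illustrative). Because both containers share one network namespace, the sidecar can reach the web server at localhost:80:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx:1.25            # serves HTTP on port 80
  - name: sidecar
    image: curlimages/curl:8.5.0
    # From inside this container, "curl http://localhost:80"
    # reaches the nginx container in the same pod.
    command: ["sleep", "infinity"]
```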

4. How does Kubernetes handle the scaling of applications?

Kubernetes handles the scaling of applications using the following concepts:

  • ReplicaSet: A ReplicaSet is a controller that ensures that a specified number of replicas of a pod are running at any given time. A ReplicaSet can be used to scale up or down the number of pods manually or automatically based on CPU utilization or other metrics.
  • Deployment: A deployment is a higher-level abstraction that manages ReplicaSets and provides declarative updates to pods. A deployment can be used to scale up or down the number of pods by changing the replica field in the deployment specification. A deployment can also perform rolling updates or rollbacks to pods without any downtime.
  • Horizontal Pod Autoscaler (HPA): An HPA is a controller that automatically scales the number of pods in a ReplicaSet, Deployment, StatefulSet, or any other custom resource that implements the scale subresource based on the observed CPU utilization or other metrics. An HPA can be configured with a target CPU utilization percentage and a minimum and maximum number of pods.
  • Vertical Pod Autoscaler (VPA): A VPA is a controller that automatically adjusts the CPU and memory requests and limits of pods based on their resource consumption and availability. A VPA can be configured with different modes, such as off (no action), initial (set requests only on pod creation), auto (set requests and limits on pod creation and update), or recreate (set requests and limits on pod creation and recreate existing pods).
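As a concrete sketch of the HPA concept above, using the autoscaling/v2 API (the target Deployment name my-app and the 50% CPU target are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:            # the workload this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU exceeds 50%
```

The controller adds pods when average CPU utilization across the pods exceeds the target and removes them when it falls below, always staying within the min/max bounds.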

5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?

A Kubernetes Deployment is a higher-level abstraction that manages ReplicaSets and provides declarative updates to pods. A Deployment can be used to create new ReplicaSets or update existing ones. A Deployment can also perform rolling updates or rollbacks to pods without any downtime.

A ReplicaSet is a lower-level abstraction that ensures that a specified number of replicas of a pod are running at any given time. A ReplicaSet can be used to scale up or down the number of pods manually or automatically based on CPU utilization or other metrics.

The main difference between a Deployment and a ReplicaSet is that a Deployment can perform declarative updates to pods, while a ReplicaSet can only ensure that a fixed number of pods are running. A Deployment can also manage multiple ReplicaSets, while a ReplicaSet can only manage one set of pods.

6. Can you explain the concept of rolling updates in Kubernetes?

Rolling updates are a way of updating pods in a Deployment without any downtime. Rolling updates allow you to gradually replace old pods with new ones while ensuring that at least a certain number of pods are available at all times.

To perform rolling updates in Kubernetes, you need to specify the following parameters in your Deployment specification:

  • strategy.type: This defines the type of update strategy for your Deployment. The default value is RollingUpdate, which means that old pods are replaced with new ones gradually.
  • strategy.rollingUpdate.maxUnavailable: This defines the maximum number (or percentage) of pods that can be unavailable during the update process. The default value is 25%, which means that up to 25% of the desired number of pods can be unavailable at any time.
  • strategy.rollingUpdate.maxSurge: This defines the maximum number (or percentage) of pods that can be created above the desired number of pods during the update process. The default value is 25%, which means that up to 25% more pods than the desired number can be created at any time.

For example, if you have a Deployment with 10 replicas and you want to update it with a new image, you can use the following specification:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2 # up to 2 pods may be unavailable during the update
      maxSurge: 3       # up to 3 pods above the desired count may be created
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:v2 # the new image that you want to update to

The rolling update process works roughly as follows:

  • First, Kubernetes creates up to 3 new pods with the new image, since the maxSurge value is 3. This brings the total to 13 pods: 10 old pods available and 3 new pods starting up.
  • As new pods pass their readiness checks, Kubernetes terminates old pods, but it never lets the number of available pods drop below 8 (the 10 desired replicas minus the maxUnavailable value of 2).
  • This cycle repeats: new pods are created, keeping the total at or below 13, and old pods are terminated, keeping at least 8 pods available, until all 10 old pods have been replaced.
  • Once every pod is running the new image, the Deployment settles back to exactly 10 replicas, and the old (now empty) ReplicaSet is retained so that you can roll back if needed.

7. How does Kubernetes handle network security and access control?

Kubernetes handles network security and access control using the following concepts:

  • Network Policy: A network policy is a resource that defines how pods are allowed to communicate with each other and with other network endpoints. A network policy specifies a set of rules that determine which pods can send or receive traffic based on their labels, ports, protocols, or IP addresses. A network policy can be applied to a namespace or a pod selector. A network policy requires a network plugin that supports it, such as Calico, Cilium, or Weave Net.
  • Service Account: A service account is an identity that represents a pod or a group of pods for accessing the Kubernetes API server. A service account can be assigned to a pod using the serviceAccountName field in the pod specification. A service account can also be associated with a set of permissions or roles using role-based access control (RBAC) or attribute-based access control (ABAC).
  • Secret: A secret is a resource that stores sensitive data such as passwords, tokens, keys, or certificates. Note that secrets are base64-encoded, not encrypted, by default; encryption at rest must be enabled separately on the API server. A secret can be mounted as a volume or injected as an environment variable into a pod, and can also be referenced by a service account (for example, as an image pull secret).
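As a sketch of the network policy concept above, the following policy allows only pods labeled app: frontend to reach pods labeled app: backend on port 8080, and blocks all other ingress to the backend pods (labels and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:          # the pods this policy protects
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:      # only traffic from these pods is allowed
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Remember that this has no effect unless the cluster runs a network plugin that enforces NetworkPolicy, such as Calico, Cilium, or Weave Net.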

8. Can you give an example of how Kubernetes can be used to deploy a highly available application?

Kubernetes can be used to deploy a highly available application using the following steps:

  • Create a Deployment that defines the desired state of your application, such as the number of replicas, the image, the ports, etc.
  • Create a Service that exposes your application to other pods or external clients. The service acts as a load balancer that distributes traffic among your pods.
  • Create an Ingress that defines rules to expose your service to external traffic. The ingress controller routes the traffic to your service based on the host name or path.
  • Configure health checks for your pods using liveness and readiness probes. These probes allow Kubernetes to monitor the health and availability of your pods and restart them if they fail or become unresponsive.
  • Configure rolling updates for your deployment using updateStrategy parameters. These parameters allow you to perform zero-downtime updates to your pods without affecting the availability of your application.
  • Configure horizontal pod autoscaling for your deployment using HPA parameters. These parameters allow you to automatically scale your pods based on CPU utilization or other metrics.

For example, you can use the following specifications to deploy a highly available application:

# Deployment specification
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3 # the desired number of replicas for your application
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2 # up to 2 pods may be unavailable during an update
      maxSurge: 3       # up to 3 pods above the desired count may be created
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app-sa # the service account the pods use to access the Kubernetes API server
      containers:
      - name: my-app
        image: my-app:v2
        ports:
        - containerPort: 8080
        livenessProbe: # health check for the pod via an HTTP request
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10 # how long to wait before the first probe
          periodSeconds: 10       # how often to probe
          failureThreshold: 3     # how many failures to tolerate before restarting the pod
        readinessProbe: # readiness check for the pod via an HTTP request
          httpGet:
            path: /readyz
            port: 8080
          initialDelaySeconds: 10 # how long to wait before the first probe
          periodSeconds: 10       # how often to probe
          successThreshold: 2     # how many successes are required before the pod is marked ready

# Service specification
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app # matches the label of the pods that back the service
  ports:
  - protocol: TCP
    port: 80         # the port that the service exposes externally
    targetPort: 8080 # the port that the pods listen on internally
  type: LoadBalancer # expose the service externally via a cloud provider's load balancer

# Ingress specification
apiVersion: networking.k8s.io/v1 # the v1beta1 Ingress API was removed in Kubernetes 1.22
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - host: my-app.example.com # the host name used to reach the service from outside the cluster
    http:
      paths:
      - path: / # the path used to reach the service from outside the cluster
        pathType: Prefix
        backend:
          service:
            name: my-app-service # the service that handles the traffic
            port:
              number: 80 # the service port that handles the traffic

9. What is a namespace in Kubernetes? Which namespace does a pod use if we don’t specify one?

A namespace is a logical grouping of resources and objects in Kubernetes. A namespace allows you to isolate and manage different aspects of your cluster, such as projects, teams, environments, or applications. A namespace also provides a scope for names, so that you can have resources with the same name in different namespaces.

If you don’t specify a namespace for a pod, it is placed in the default namespace, which holds all resources that are not explicitly assigned to another namespace.
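The following sketch shows a namespace and a pod explicitly placed in it (names and image are illustrative); omitting metadata.namespace would put the pod in the default namespace instead:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: team-a # without this line, the pod would land in "default"
spec:
  containers:
  - name: demo
    image: nginx:1.25
```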

10. How does ingress help in Kubernetes?

Ingress is a resource that defines rules to expose services to external traffic. Ingress helps in Kubernetes by allowing you to:

  • Expose your services to the internet or other networks using a single IP address or domain name
  • Route the traffic to your services based on the host name or path
  • Provide features such as load balancing, SSL termination, name-based virtual hosting, etc.
  • Simplify the management and configuration of your services

11. Explain different types of services in Kubernetes

Services are abstractions that define a logical set of pods and a policy to access them. Services allow pods to communicate with each other across nodes or clusters. There are different types of services in Kubernetes, such as:

  • ClusterIP: This is the default type of service that assigns a stable and internal IP address to the service. This IP address is only reachable within the cluster and can be used by other pods to access the service.
  • NodePort: This type of service exposes the service on a static port on each node in the cluster. This port can be accessed from outside the cluster using the node IP address and the port number.
  • LoadBalancer: This type of service exposes the service externally using a cloud provider’s load balancer. This load balancer assigns an external IP address to the service and routes the traffic to the nodes and ports that are part of the service.
  • ExternalName: This type of service maps the service to an external DNS name. This DNS name can be resolved by pods or external clients using a CNAME record.

12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?

Self-healing is a concept that refers to the ability of Kubernetes to detect and correct failures or errors in its resources and objects. Self-healing helps in Kubernetes by ensuring that your applications are always available and resilient to failures.

Some examples of how self-healing works in Kubernetes are:

  • If a pod fails or becomes unresponsive, Kubernetes will restart or replace it with a new one based on the liveness probe and readiness probe settings.
  • If a node fails or becomes unreachable, Kubernetes will reschedule the pods that were running on that node to other nodes in the cluster based on the node affinity and anti-affinity settings.
  • If a deployment fails or becomes outdated, Kubernetes will roll back or update it with a new one based on the updateStrategy parameters.
  • If a ReplicaSet has fewer or more running pods than its desired replica count, the ReplicaSet controller creates or deletes pods to reconcile the actual state with the desired state (and an HPA, if configured, adjusts that desired count automatically).

13. How does Kubernetes handle storage management for containers?

Kubernetes handles storage management for containers using the following concepts:

  • Volume: A volume is a directory that is accessible by all containers in a pod. A volume can be used to store data that needs to persist across container restarts or share data between containers in a pod. A volume can be backed by different types of storage sources, such as emptyDir, hostPath, nfs, gcePersistentDisk, awsElasticBlockStore, etc.
  • Persistent Volume (PV): A PV is an abstraction that represents a piece of storage in the cluster. A PV can be provisioned manually by an administrator or dynamically by a storage class. A PV can have different attributes, such as capacity, access mode, reclaim policy, etc.
  • Persistent Volume Claim (PVC): A PVC is a request for storage by a user or an application. A PVC specifies the size and access mode of the storage that it needs. A PVC can be bound to a PV that matches its requirements.
  • Storage Class: A storage class is an abstraction that defines a type of storage that can be dynamically provisioned by a provisioner. A storage class can have different parameters, such as performance, availability, replication, etc.
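The PVC concept above can be sketched as follows, assuming a hypothetical StorageClass named standard whose provisioner dynamically creates a matching PV:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
  - ReadWriteOnce          # mountable read-write by a single node at a time
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi         # the amount of storage the claim requests
```

A pod then consumes the claim by listing it under spec.volumes as a persistentVolumeClaim entry and mounting it into a container with volumeMounts.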

14. How does the NodePort service work?

A NodePort service exposes the service on a static port on each node in the cluster. This port can be accessed from outside the cluster using the node IP address and the port number.

A NodePort service works as follows:

  • When you create a NodePort service, Kubernetes will allocate a port from a range (default 30000–32767) for your service. This port will be the same on all nodes in the cluster.
  • Kubernetes will also create a ClusterIP service for your service, which will be used internally by the pods in the cluster to access the service.
  • Kubernetes will configure the iptables rules on each node to forward the traffic from the NodePort to the ClusterIP service, which will then route the traffic to the pods that are part of the service.
  • To access the service from outside the cluster, you can use any node IP address and the NodePort as the URL, such as http://node-ip:node-port.
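The steps above correspond to a service spec like this minimal sketch (the selector label, ports, and the pinned nodePort value are illustrative; if nodePort is omitted, Kubernetes picks one from the range automatically):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app       # the pods that back the service
  ports:
  - protocol: TCP
    port: 80          # ClusterIP port used inside the cluster
    targetPort: 8080  # container port on the pods
    nodePort: 30080   # must fall within the NodePort range (default 30000-32767)
```

With this spec, the service is reachable from outside the cluster at http://<any-node-ip>:30080.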

15. What is a multinode cluster and single-node cluster in Kubernetes?

A multinode cluster is a cluster that consists of more than one node (server) that runs containers. A multinode cluster can have one or more master nodes that control the cluster state and one or more worker nodes that run the containers.

A single-node cluster is a cluster that consists of only one node (server) that runs both the master and worker components. A single-node cluster can be used for development or testing purposes, but not for production.

16. Difference between create and apply in Kubernetes?

create and apply are two kubectl commands that can be used to create or update resources in Kubernetes from files or the command line.

The main difference between create and apply is that create uses an imperative approach, while apply uses a declarative approach.

  • Create uses an imperative approach, which means that you specify what actions you want to perform on the resources, such as create, update, delete, etc. Create does not keep track of the previous state of the resources and does not allow you to modify or merge changes with existing resources. Create is suitable for creating new resources or performing one-time operations.
  • Apply uses a declarative approach, which means that you specify the desired state of the resources: what they should look like, what properties they should have, etc. Apply keeps track of the previous state of the resources and allows you to modify or merge changes with existing resources. Apply is suitable for managing existing resources or performing recurring operations.

Conclusion

In this blog post, I have covered some of the most common and important Kubernetes interview questions and answers for 2023. These questions will help you test your knowledge and understanding of Kubernetes concepts and features. They will also help you prepare for any interview or certification exam that you may need to take once you have completed your Kubernetes training.

I hope you enjoyed this blog post and learned something new from it. Please feel free to share your feedback or queries in the comments section below.

Happy learning! 😊
