The Certified Kubernetes Administrator (CKA) certification is an essential credential for those aiming to excel in working with containers and managing cloud-native applications on Kubernetes. In this blog, we’ll walk you through the top 20 CKA certification questions to give you a solid understanding of what the exam covers.
The Certified Kubernetes Administrator Exam: Things To Remember
The Certified Kubernetes Administrator exam tests your ability to create, deploy, and manage Kubernetes clusters following best practices. Here are a few things you need to remember about the Certified Kubernetes Administrator Exam:
- The exam costs $395 and includes one free retake if you fail on your first attempt or run into issues with the browser.
- It is a problem-based exam, and you will solve those problems using the command line interface.
- The exam is two hours long, and you need to solve 17 questions. The passing mark is 66%, and each question has a specific weight.
- Some questions have multiple parts. Any parts you complete correctly are added to your final score, so you benefit from partial (step) marking.
- The certification is valid for three years. It’s an open-book exam, so you will be given access to these materials:
- https://kubernetes.io/docs/
- https://kubernetes.io/blog/
Top 20 CKA certification questions
Here are a few sample questions to help you level up your preparations and better understand the Certified Kubernetes Administrator certification. These questions are mostly performance-based, and you must work with a live Kubernetes cluster to complete the task.
1. Creating a Service Account with Cluster-Wide Pod Listing Permissions
Create a new service account named `logviewer`. Grant this service account access to list all Pods in the cluster by creating an appropriate cluster role called `podviewer-role` and a `ClusterRoleBinding` called `podviewer-role-binding`.
Answer
Create the Service Account
First, create a YAML file named `logviewer-sa.yaml` for the service account:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: logviewer
  namespace: default
```
Apply this YAML file using `kubectl`:
```bash
kubectl apply -f logviewer-sa.yaml
```
Create the ClusterRole:
Next, create a YAML file named `podviewer-role.yaml` for the ClusterRole:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: podviewer-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]
```
Apply this YAML file using `kubectl`:
```bash
kubectl apply -f podviewer-role.yaml
```
Create the ClusterRoleBinding
Finally, create a YAML file named `podviewer-role-binding.yaml` for the ClusterRoleBinding:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: podviewer-role-binding
subjects:
- kind: ServiceAccount
  name: logviewer
  namespace: default
roleRef:
  kind: ClusterRole
  name: podviewer-role
  apiGroup: rbac.authorization.k8s.io
```
Apply this YAML file using `kubectl`:
```bash
kubectl apply -f podviewer-role-binding.yaml
```
Explanation:
- Service Account (logviewer): This account will be used to interact with the Kubernetes API.
- ClusterRole (podviewer-role): Defines the resources and actions the logviewer service account can perform. In this case, it can list Pods.
- ClusterRoleBinding (podviewer-role-binding): Binds the logviewer service account to the podviewer-role, granting it the specified permissions.
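To confirm the permissions took effect, you can impersonate the service account with `kubectl auth can-i`, a quick read-only check:
```bash
kubectl auth can-i list pods --as=system:serviceaccount:default:logviewer
# Expected output: yes
```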
2. Creating and Upgrading a Deployment with Version Annotations
Create a new deployment called webserver-deploy with the image httpd:2.4 and 2 replicas. Record the version in the annotations. Next, upgrade the deployment to version httpd:2.6 using a rolling update. Ensure that the version upgrade is recorded in the resource annotations.
Answer
Create the Initial Deployment:
Create a YAML file named webserver-deploy-v1.yaml for the deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deploy
  annotations:
    version: "2.4"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: httpd
        image: httpd:2.4
        ports:
        - containerPort: 80
```
Apply this YAML file using kubectl:
```bash
kubectl apply -f webserver-deploy-v1.yaml
```
Upgrade the Deployment to Version httpd:2.6:
Update the deployment YAML file to the new version. Create a new YAML file named webserver-deploy-v2.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver-deploy
  annotations:
    version: "2.6"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: httpd
        image: httpd:2.6
        ports:
        - containerPort: 80
```
Apply this updated YAML file using kubectl:
```bash
kubectl apply -f webserver-deploy-v2.yaml
```
Explanation:
Initial Deployment (webserver-deploy with httpd:2.4): This creates a deployment with 2 replicas running the httpd:2.4 image and includes an annotation to record the version.
Deployment Upgrade: The deployment is updated to use the httpd:2.6 image, and the version annotation is updated to reflect this change.
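Because only the image field changes, the Deployment performs a rolling update by default. You can watch the rollout complete and review its revision history with:
```bash
kubectl rollout status deployment/webserver-deploy
kubectl rollout history deployment/webserver-deploy
```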
3. Creating a Snapshot of a MySQL Database and Saving it to a File
Create a snapshot of the MySQL database running on localhost:3306. Save the snapshot into /var/backups/mysql-snapshot.sql.
Answer:
Ensure MySQL is Running:
Ensure the MySQL server is running and you have the necessary permissions to create a backup.
Create the Snapshot:
Use the mysqldump command to create a snapshot of the MySQL database. You need to specify the database name you want to back up. For example, if your database is named mydatabase, you can create a snapshot with the following command:
```bash
mysqldump -h localhost -P 3306 -u yourusername -p mydatabase > /var/backups/mysql-snapshot.sql
```
- -h localhost: Specifies the host where MySQL is running.
- -P 3306: Specifies the port where MySQL is listening.
- -u yourusername: Specifies the MySQL username.
- -p: Prompts for the MySQL password.
- mydatabase: The name of the database to back up.
- > /var/backups/mysql-snapshot.sql: Redirects the output to the specified file path.
Verify the Snapshot:
Ensure the snapshot file /var/backups/mysql-snapshot.sql has been created and contains the data.
Explanation:
Database Snapshot: This process creates a backup of the specified MySQL database and saves it as an SQL file, which can be used for restoration or migration purposes.
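To confirm the snapshot is actually restorable, you could load it into a scratch database with the mysql client (the database name `restored_test` here is only an example):
```bash
mysql -h localhost -P 3306 -u yourusername -p -e "CREATE DATABASE restored_test;"
mysql -h localhost -P 3306 -u yourusername -p restored_test < /var/backups/mysql-snapshot.sql
```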
4. Creating a Persistent Volume with Specific Specifications
Create a Persistent Volume with the following specifications:
- Name: data-volume
- Capacity: 10Gi
- Access Modes: ReadWriteOnce
- Storage Class: fast-storage
- Host Path: /mnt/data
Answer
Create the Persistent Volume YAML File:
Create a file named data-volume-pv.yaml with the following content:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-volume
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-storage
  hostPath:
    path: /mnt/data
```
Apply the YAML File:
Use kubectl to create the Persistent Volume:
```bash
kubectl apply -f data-volume-pv.yaml
```
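To confirm the volume was created with the requested capacity, access mode, and storage class, list it:
```bash
kubectl get pv data-volume
```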
5. Tainting a Node and Managing Pod Scheduling with Taints and Tolerations
Taint the worker node node02 to be Unschedulable. Once done, create a Pod named test-mysql with the image mysql:5.7 to ensure that workloads are not scheduled to this worker node. Finally, create a new Pod named prod-mysql with the image mysql:5.7 and a toleration to be scheduled on node02.
Answer
Taint the Worker Node:
Taint the node node02 to make it unschedulable by using the following command:
```bash
kubectl taint nodes node02 key=value:NoSchedule
```
Create the Pod test-mysql:
Create a YAML file named test-mysql.yaml for the Pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.7
```
Apply this YAML file using kubectl:
```bash
kubectl apply -f test-mysql.yaml
```
Create the Pod prod-mysql with Toleration:
Create a YAML file named prod-mysql.yaml for the Pod with a toleration:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-mysql
spec:
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "NoSchedule"
  containers:
  - name: mysql
    image: mysql:5.7
```
Apply this YAML file using kubectl:
```bash
kubectl apply -f prod-mysql.yaml
```
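To verify the taint and the resulting scheduling behavior, check the node's taints and see where each Pod landed (test-mysql should avoid node02, while prod-mysql may be scheduled there thanks to its toleration):
```bash
kubectl describe node node02 | grep -i taint
kubectl get pods test-mysql prod-mysql -o wide
```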
6. Draining a Node and Rescheduling Pods to Other Nodes
Mark the node named node03 as un-schedulable and safely evict all the pods running on it. Ensure that the pods are rescheduled on other available nodes.
Answer
Drain the Node:
Use the kubectl drain command to mark node03 as unschedulable and evict all the pods running on it. This command will safely evict the pods and attempt to reschedule them on other available nodes:
```bash
kubectl drain node03 --ignore-daemonsets --delete-local-data
```
- --ignore-daemonsets: Ensures that DaemonSet-managed pods are not evicted.
- --delete-local-data: Allows the deletion of pods that use local (emptyDir) storage. Newer kubectl versions call this flag --delete-emptydir-data.
Verify the Pods are Rescheduled:
Check the status of the pods to confirm they have been rescheduled:
```bash
kubectl get pods --all-namespaces -o wide
```
Explanation:
Draining Node (node03): This command marks the node as unschedulable and safely evicts the pods, allowing them to be rescheduled on other nodes. It helps in maintenance tasks or preparing the node for decommissioning.
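Once maintenance on the node is finished, you can make it schedulable again:
```bash
kubectl uncordon node03
```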
7. Creating a Pod
Create a Pod named nginx-pod with the image nginx:latest.
Answer:
Create the Pod YAML File:
Create a YAML file named nginx-pod.yaml for the Pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx:latest
```
Apply the YAML File:
Use kubectl to create the Pod:
```bash
kubectl apply -f nginx-pod.yaml
```
Explanation:
Pod (nginx-pod): This Pod will run the nginx:latest image, which provides the Nginx web server.
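During the exam, the same Pod can also be created imperatively, which is often faster than writing YAML by hand:
```bash
kubectl run nginx-pod --image=nginx:latest
```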
8. Creating a NetworkPolicy to Deny All Egress Traffic
Create a NetworkPolicy that denies all egress traffic from a namespace.
Answer:
Create the NetworkPolicy YAML File:
Create a YAML file named deny-all-egress.yaml for the NetworkPolicy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Egress
```
- podSelector: {} applies the policy to all Pods in the namespace.
- policyTypes: [Egress] specifies that this policy governs egress traffic.
- Because the policy lists no egress rules, all outbound traffic from the selected Pods is denied. (Adding an empty rule such as egress: [{}] would instead allow all egress, so it must be omitted.)
Apply the YAML File:
Use kubectl to create the NetworkPolicy:
```bash
kubectl apply -f deny-all-egress.yaml
```
Explanation:
NetworkPolicy (deny-all-egress): This policy denies all egress traffic from Pods in the specified namespace, preventing them from initiating any outbound connections.
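A quick way to check the policy (assuming your CNI plugin enforces NetworkPolicies) is to launch a throwaway Pod in the namespace and attempt an outbound request, which should now fail:
```bash
kubectl run egress-test --rm -it --image=busybox --restart=Never -n default \
  -- wget -qO- -T 5 http://kubernetes.io
# Expected: the request (and even DNS resolution) fails because all egress is denied.
```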
9. ClusterRole and ClusterRole Binding
Create a new ClusterRole named view-pods-role. This ClusterRole should allow any resource associated with it to get, list, and watch Pods.
Next, create a new namespace named test-namespace. Within that namespace, create a new ServiceAccount named view-pods-sa.
Bind the view-pods-role ClusterRole to the view-pods-sa ServiceAccount, ensuring that the binding is limited to the test-namespace.
Answer:
Create the ClusterRole:
Create a YAML file named view-pods-role.yaml for the ClusterRole:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-pods-role
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```
Apply this YAML file using kubectl:
```bash
kubectl apply -f view-pods-role.yaml
```
Create the Namespace:
Create a YAML file named test-namespace.yaml for the namespace:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
```
Apply this YAML file using kubectl:
```bash
kubectl apply -f test-namespace.yaml
```
Create the ServiceAccount:
Create a YAML file named view-pods-sa.yaml for the ServiceAccount:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: view-pods-sa
  namespace: test-namespace
```
Apply this YAML file using kubectl:
```bash
kubectl apply -f view-pods-sa.yaml
```
Create the RoleBinding:
Because the permissions must be limited to test-namespace, bind the ClusterRole with a namespaced RoleBinding rather than a cluster-wide ClusterRoleBinding. Create a YAML file named view-pods-role-binding.yaml:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-pods-role-binding
  namespace: test-namespace
subjects:
- kind: ServiceAccount
  name: view-pods-sa
  namespace: test-namespace
roleRef:
  kind: ClusterRole
  name: view-pods-role
  apiGroup: rbac.authorization.k8s.io
```
Apply this YAML file using kubectl:
```bash
kubectl apply -f view-pods-role-binding.yaml
```
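To confirm the binding is scoped correctly, impersonate the service account and check its permissions inside and outside test-namespace:
```bash
kubectl auth can-i list pods -n test-namespace \
  --as=system:serviceaccount:test-namespace:view-pods-sa   # expected: yes
kubectl auth can-i list pods -n default \
  --as=system:serviceaccount:test-namespace:view-pods-sa   # expected: no
```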
10. Node Maintenance and Pod Rescheduling
Before proceeding, ensure you have a Kubernetes cluster with at least two worker nodes.
Apply the following manifest file:
```bash
kubectl apply -f https://raw.githubusercontent.com/zealvora/myrepo/master/demo-files/maintenance.yaml
```
Requirements:
Identify the node where a pod named app-pod is currently running.
Mark that node as unschedulable and evict all the pods from it, ensuring they are rescheduled on the other available nodes.
Answer:
Identify the Node:
To find the node on which the app-pod is running, use the following command:
```bash
kubectl get pod app-pod -o wide
```
Drain the Node:
Suppose the node identified in the previous step is node01. To mark node01 as unschedulable and evict all pods from it, use:
```bash
kubectl drain node01 --ignore-daemonsets --delete-local-data
```
- --ignore-daemonsets: Ensures DaemonSet-managed pods are not evicted.
- --delete-local-data: Allows the deletion of pods with local storage.
Verify Pod Rescheduling:
Check that the app-pod and any other evicted pods have been rescheduled on other nodes:
```bash
kubectl get pods --all-namespaces -o wide
```
Explanation:
Node Identification: This step determines which node is hosting the specified pod.
Node Draining: This process involves marking the node as unschedulable and evicting pods to prepare the node for maintenance or decommissioning.
Pod Rescheduling: Ensures that pods are rescheduled on other nodes to maintain application availability.
11. KUBEADM Cluster Setup
Set up a new Kubernetes cluster using kubeadm. The cluster should be based on Kubernetes version 1.21.0. Ensure that all components, including kubelet, kubectl, and kubeadm, are running the same version. The cluster should be configured on CentOS OS.
Answer:
Prepare the Nodes:
Ensure that the CentOS OS is up to date and has the necessary prerequisites installed:
```bash
sudo yum update -y
sudo yum install -y yum-utils
sudo yum install -y epel-release
```
Install Docker:
Install Docker, which is required for Kubernetes:
```bash
sudo yum install -y docker
sudo systemctl start docker
sudo systemctl enable docker
```
Install Kubernetes Components:
Add the Kubernetes repository and install the required components (kubeadm, kubelet, and kubectl), ensuring they are at version 1.21.0:
```bash
sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

sudo yum install -y kubelet-1.21.0 kubeadm-1.21.0 kubectl-1.21.0
sudo systemctl enable kubelet
```
Initialize the Kubernetes Cluster:
Initialize the cluster using kubeadm:
```bash
sudo kubeadm init --kubernetes-version=v1.21.0
```
After the initialization, follow the instructions provided by kubeadm to set up the kubectl configuration for the root user:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Install a Pod Network Add-on:
Install a network add-on, such as Calico or Flannel. For example, to install Calico:
```bash
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```
Explanation:
- OS Preparation: Ensures CentOS is ready for Kubernetes installation.
- Docker Installation: Installs Docker, necessary for running Kubernetes containers.
- Kubernetes Components: Installs kubeadm, kubelet, and kubectl at the specified version.
- Cluster Initialization: Uses kubeadm to set up the Kubernetes cluster.
- Pod Network Add-on: Deploys a network add-on to allow communication between Pods.
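Once the network add-on pods come up, the control-plane node should report a Ready status:
```bash
kubectl get nodes
kubectl get pods -n kube-system
```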
12. KUBEADM Cluster Upgrade
Upgrade the kubeadm cluster created in the previous step from version 1.20.0 to version 1.21.0. Ensure that kubelet and kubectl are also upgraded to match the new version of kubeadm. Before starting the upgrade, drain the master node and make sure to uncordon it after the upgrade. No other worker nodes should be upgraded.
Answer:
Drain the Master Node:
Before upgrading, drain the master node to ensure that no workloads are running on it. Replace master-node with the actual name of your master node:
```bash
kubectl drain master-node --ignore-daemonsets --delete-local-data
```
Upgrade Kubernetes Components:
Update the Kubernetes components (kubeadm, kubelet, and kubectl) to version 1.21.0. The commands below are for Debian/Ubuntu-based nodes; on a CentOS node, install the matching package versions with yum instead:
```bash
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubeadm=1.21.0-00 kubelet=1.21.0-00 kubectl=1.21.0-00
```
Upgrade the Cluster:
Perform the upgrade of the cluster components:
```bash
sudo kubeadm upgrade apply v1.21.0
```
Update kubelet Configuration:
After upgrading kubeadm, restart kubelet to apply the new version:
```bash
sudo systemctl restart kubelet
```
Uncordon the Master Node:
Once the upgrade is complete, uncordon the master node to allow scheduling of new pods:
```bash
kubectl uncordon master-node
```
Verify the Upgrade:
Check the versions of the cluster components to ensure the upgrade was successful:
```bash
kubectl version --short
kubeadm version
kubelet --version
```
13. ETCD Backup & Restore with Data Verification
For this task, you should have a working ETCD cluster configured with TLS certificates. You can follow the guide below to set up an ETCD cluster:
ETCD Cluster Setup Guide
Requirements:
- Add the following data to the ETCD cluster:
info "This is a test"
- Take a backup of the ETCD cluster. Store the backup at /tmp/testbackup.db.
- Add one more piece of data to the ETCD cluster:
info "This is an update"
- Restore the backup from /tmp/testbackup.db to the ETCD cluster.
- Create a user named etcduser using the useradd command.
- Ensure that the directory where the ETCD backup data is restored has full permissions for the etcduser.
- Verify if the data “This is a test” is present in the ETCD cluster after the restoration.
Answer:
Add Initial Data to ETCD:
Use the etcdctl command to add data to the ETCD cluster:
```bash
etcdctl put info "This is a test"
```
Take a Backup of the ETCD Cluster:
Use the etcdctl snapshot save command to create a backup:
```bash
etcdctl snapshot save /tmp/testbackup.db --cert=/path/to/etcd-cert.pem --key=/path/to/etcd-key.pem --cacert=/path/to/etcd-ca.pem
```
Add More Data to ETCD:
Add the additional data to ETCD:
```bash
etcdctl put info "This is an update"
```
Restore the Backup:
Use the etcdctl snapshot restore command to restore the backup:
```bash
etcdctl snapshot restore /tmp/testbackup.db --data-dir=/var/lib/etcd --name=etcd-restore --cert=/path/to/etcd-cert.pem --key=/path/to/etcd-key.pem --cacert=/path/to/etcd-ca.pem
```
Create etcduser:
Create the user using:
```bash
sudo useradd etcduser
```
Set Directory Permissions:
Change the ownership of the directory to etcduser:
```bash
sudo chown etcduser:etcduser /var/lib/etcd
```
Ensure that etcduser has full permissions:
```bash
sudo chmod 700 /var/lib/etcd
```
Verify Data Presence:
Check if the data “This is a test” is present in the ETCD cluster:
```bash
etcdctl get info --cert=/path/to/etcd-cert.pem --key=/path/to/etcd-key.pem --cacert=/path/to/etcd-ca.pem
```
Explanation:
- Data Addition: Adds and updates data in ETCD to test the backup and restore process.
- Backup and Restore: Demonstrates how to create and restore a backup of ETCD data.
- User Management and Permissions: Ensures that the backup directory is accessible by the appropriate user.
- Data Verification: Confirms that data has been correctly restored and is present in the ETCD cluster.
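Note that the etcdctl commands above assume the v3 API and a local endpoint. Depending on your setup, you may need to set these explicitly (the certificate paths are the same placeholders used above):
```bash
export ETCDCTL_API=3
etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/path/to/etcd-ca.pem \
  --cert=/path/to/etcd-cert.pem \
  --key=/path/to/etcd-key.pem \
  get info
```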
14. Network Policies
Create a new namespace named test-namespace.
Create a new network policy named restricted-network-policy in the test-namespace.
Requirements:
- Network Policy should allow Pods within the test-namespace to connect to each other only on Port 443. No other ports should be allowed.
- No Pods from outside of the test-namespace should be able to connect to any Pods inside the test-namespace.
Answer:
Create the Namespace:
Create a YAML file named test-namespace.yaml for the namespace:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-namespace
```
Apply the YAML file:
```bash
kubectl apply -f test-namespace.yaml
```
Create the Network Policy:
Create a YAML file named restricted-network-policy.yaml for the Network Policy:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restricted-network-policy
  namespace: test-namespace
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 443
  egress:
  - to:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 443
```
- podSelector: {} applies the policy to all Pods in the namespace.
- policyTypes: [Ingress, Egress] specifies that the policy applies to both inbound and outbound traffic.
- ingress allows incoming connections only from Pods within the same namespace on Port 443.
- egress allows outgoing connections only to Pods within the same namespace on Port 443.
Apply the YAML file:
```bash
kubectl apply -f restricted-network-policy.yaml
```
Explanation:
- Namespace Creation: Creates a separate namespace test-namespace for the Network Policy.
- Network Policy: Configures the Network Policy to restrict traffic such that:
- Pods within the namespace can only communicate with each other on Port 443.
- No external Pods can communicate with Pods inside the namespace.
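You can review the rendered rules to make sure they match the requirements:
```bash
kubectl describe networkpolicy restricted-network-policy -n test-namespace
```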
15. Scale Deployments
Create a new deployment named web-deployment using the nginx:latest image. The deployment should initially have 2 replicas.
Scale the deployment to 8 replicas.
Answer:
Create the Deployment:
Create a YAML file named web-deployment.yaml for the deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```
Apply the YAML file:
```bash
kubectl apply -f web-deployment.yaml
```
Scale the Deployment:
Use the kubectl scale command to change the number of replicas to 8:
```bash
kubectl scale deployment web-deployment --replicas=8
```
Alternatively, you can update the replica count directly in the YAML file and reapply it:
```yaml
spec:
  replicas: 8
```
Reapply the updated YAML file:
```bash
kubectl apply -f web-deployment.yaml
```
Verify the Scaling:
Check the status of the deployment to ensure it has the desired number of replicas:
```bash
kubectl get deployments
```
Explanation:
Deployment Creation: Initializes a deployment with the nginx:latest image and sets the initial replica count to 2.
Scaling the Deployment: Adjusts the number of replicas to 8 to handle increased load or ensure high availability.
Verification: Confirms that the deployment has been scaled to the desired number of replicas.
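If you prefer the imperative route during the exam, the same deployment can be created in one line and then scaled:
```bash
kubectl create deployment web-deployment --image=nginx:latest --replicas=2
kubectl scale deployment web-deployment --replicas=8
```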
16. Multi-Container Pods
Create a pod named multi-app-pod that contains two containers: one running the nginx:latest image and the other running the redis:alpine image.
Answer:
Create the Pod YAML File:
Create a YAML file named multi-app-pod.yaml for the pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-app-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: redis-container
    image: redis:alpine
    ports:
    - containerPort: 6379
```
- Container 1 (nginx-container): Uses the nginx:latest image and exposes port 80.
- Container 2 (redis-container): Uses the redis:alpine image and exposes port 6379.
Apply the YAML File:
Use kubectl to create the pod:
```bash
kubectl apply -f multi-app-pod.yaml
```
Verify the Pod:
Check the status of the pod to ensure it is running:
```bash
kubectl get pods
```
Describe the Pod (Optional):
To get detailed information about the pod and its containers:
```bash
kubectl describe pod multi-app-pod
```
Explanation:
Pod Creation: Defines a pod with two containers, each running a different image (nginx and redis).
Verification: Ensures the pod is running as expected and both containers are operational.
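To interact with a specific container in the Pod, pass its name with -c. For example, you could ping the Redis container (redis-cli ships with the redis:alpine image) or read the nginx container's logs:
```bash
kubectl exec -it multi-app-pod -c redis-container -- redis-cli ping
kubectl logs multi-app-pod -c nginx-container
```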
17. Ingress Resource
Create a new ingress resource based on the following specifications:
Name: app-ingress
Namespace: app-namespace
Expose service frontend on the path of /frontend using the service port of 3000.
Answer:
Create the Namespace:
If the namespace app-namespace does not already exist, create it with the following YAML file:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: app-namespace
```
Apply the YAML file:
```bash
kubectl apply -f app-namespace.yaml
```
Create the Ingress Resource:
Create a YAML file named app-ingress.yaml for the ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  namespace: app-namespace
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /frontend
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 3000
```
- Name: app-ingress
- Namespace: app-namespace
- Service: frontend
- Path: /frontend
- Service Port: 3000
Apply the YAML File:
Use kubectl to create the ingress resource:
```bash
kubectl apply -f app-ingress.yaml
```
Verify the Ingress Resource:
Check the status of the ingress to ensure it was created successfully:
```bash
kubectl get ingress -n app-namespace
```
Explanation:
- Namespace Creation: Ensures the namespace app-namespace exists for the ingress resource.
- Ingress Resource: Configures the ingress to route traffic from /frontend to the frontend service on port 3000.
- Verification: Ensures that the ingress resource has been created and is correctly configured.
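Assuming an ingress controller is installed in the cluster, you can send a test request through it; <ingress-controller-ip> below is a placeholder for your controller's external or node IP:
```bash
curl -H "Host: example.com" http://<ingress-controller-ip>/frontend
```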
18. POD and Logging
Apply the following manifest to your Kubernetes cluster:
https://raw.githubusercontent.com/zealvora/myrepo/master/demo-files/sample_pod.yaml
Monitor the logs for all containers that are part of the test-pod pod. Extract log lines that contain the string ERROR and write them to /var/logs/error_logs.txt.
Answer:
Apply the Manifest:
First, apply the manifest to create the pod:
```bash
kubectl apply -f https://raw.githubusercontent.com/zealvora/myrepo/master/demo-files/sample_pod.yaml
```
Verify the Pod:
Ensure the pod test-pod is running:
```bash
kubectl get pods
```
Monitor and Extract Logs:
To monitor logs for all containers in the test-pod pod, use the following command. This will stream the logs for all containers in the specified pod:
```bash
kubectl logs -f test-pod --all-containers=true
```
To extract log lines containing the string ERROR and save them to /var/logs/error_logs.txt, use the following commands.
First, check logs for a specific container in the pod:
```bash
kubectl logs test-pod -c <container-name>
```
Replace <container-name> with the actual container name.
Then, extract lines containing ERROR and save to the file:
```bash
kubectl logs test-pod --all-containers=true | grep "ERROR" > /var/logs/error_logs.txt
```
Note that the directory /var/logs must exist before you redirect output to it; create it first if needed:
```bash
sudo mkdir -p /var/logs
```
Verify the Log File:
Check the contents of the file to ensure the logs were written correctly:
```bash
cat /var/logs/error_logs.txt
```
Explanation:
Manifest Application: Applies the provided manifest to create the pod.
Log Monitoring and Extraction: Monitors logs for all containers in the pod and extracts lines containing ERROR.
Log File Management: Ensures the log file directory exists and verifies the contents.
19. NodeSelector
Create a pod named database-pod using the mysql:5.7 image. The pod should only run on nodes that have a label storage=high-performance.
Answer:
Label a Node:
First, ensure you have a node with the label storage=high-performance. Label a node using the following command:
```bash
kubectl label nodes <node-name> storage=high-performance
```
Create the Pod YAML File:
Create a YAML file named database-pod.yaml for the pod:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: database-pod
spec:
  containers:
  - name: mysql-container
    image: mysql:5.7
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: rootpassword
  nodeSelector:
    storage: high-performance
```
- Pod Name: database-pod
- Image: mysql:5.7
- Node Selector: Ensures the pod runs on nodes with the label storage=high-performance.
Apply the YAML File:
Use kubectl to create the pod:
```bash
kubectl apply -f database-pod.yaml
```
Verify the Pod:
Check the status of the pod to ensure it is running on the correct node:
```bash
kubectl get pods -o wide
```
Explanation:
- Node Labeling: Ensures that the node has the required label for the pod to be scheduled.
- Pod Creation: Defines a pod with a node selector to ensure it runs on the correct node.
- Verification: Checks that the pod is correctly scheduled based on the node selector.
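You can also list the nodes carrying the required label to confirm a matching scheduling target exists:
```bash
kubectl get nodes -l storage=high-performance
```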
20. Resource Requests and Limits
Create a new deployment named resource-deployment using the busybox image. The deployment should initially have 2 replicas.
Set resource requests and limits for the containers in the deployment:
- Requests: CPU 100m, Memory 128Mi
- Limits: CPU 500m, Memory 256Mi
Scale the deployment to 4 replicas.
Verify that the resource requests and limits are correctly applied to the containers.
Answer:
Create the Deployment YAML File:
Create a YAML file named resource-deployment.yaml for the deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resource-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox-container
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            cpu: "500m"
            memory: "256Mi"
```
- Name: resource-deployment
- Image: busybox
- Requests: CPU 100m, Memory 128Mi
- Limits: CPU 500m, Memory 256Mi
Apply the YAML File:
Use kubectl to create the deployment:
```bash
kubectl apply -f resource-deployment.yaml
```
Scale the Deployment:
Use the kubectl scale command to change the number of replicas to 4:
```bash
kubectl scale deployment resource-deployment --replicas=4
```
Alternatively, update the replica count in the YAML file and reapply:
```yaml
spec:
  replicas: 4
```
Reapply the updated YAML file:
```bash
kubectl apply -f resource-deployment.yaml
```
Verify Resource Requests and Limits:
Check the deployment to ensure the resource requests and limits are applied correctly:
```bash
kubectl describe deployment resource-deployment
```
Look for the Resources section under the container specifications to verify the requests and limits.
Verify Pods and Replica Count:
Check that the deployment has the desired number of replicas:
```bash
kubectl get deployments
```
Check the status of the pods to confirm they are running:
```bash
kubectl get pods
```
Explanation:
- Deployment Creation: Defines a deployment with specified resource requests and limits for the containers.
- Scaling: Adjusts the number of replicas to manage load and availability.
- Verification: Ensures that the resource requests and limits are correctly applied and that the deployment is running the desired number of replicas.
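For a more targeted check, you can print just the resources block of each Pod with a jsonpath query (the app=busybox label matches the Deployment's pod template above):
```bash
kubectl get pods -l app=busybox \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[0].resources}{"\n"}{end}'
```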
Kubernetes Exam Preparation Tips
Worry not if these questions make you doubt your preparation strategy or make you nervous.
Here are a few Kubernetes preparation tips to keep up your sleeve:
- Get acquainted with the Kubernetes setup for the exam
When practicing the CKA's practical questions, use the same operating system and tools used in the exam. You don't want to lose time reading a lengthy manual or learning how to use a command-line tool while solving questions.
- Go through the Kubernetes documentation.
The CKA exam covers numerous concepts: setting up Kubernetes clusters, deploying stateless application examples, using the Kubernetes API, and more. The best way to dive deep into these topics is to review the Kubernetes documentation carefully.
- Learn Kubectl commands
kubectl is the most critical tool for interacting with Kubernetes clusters. Ensure you are well-versed in kubectl commands to create, update, and manage Kubernetes resources; these commands are also critical for Kubernetes security. Watch our curated video lectures by Kubernetes experts to learn these commands interactively.
- Get hands-on experience with Kubernetes hands-on labs
Learn how to create Kubernetes clusters using Minikube, Kind, or K3s and how you can implement Kubernetes best practices. One of the best ways to do this is to use hands-on labs. Whizlabs offers 20+ Practice Kubernetes Labs to enhance your skills and provide practical experience guided by experts. It includes:
- Kubernetes storage
- Monitoring and logging
- Kubernetes networking
- Cluster configuration
- Kubernetes clusters
- Practice with sample papers
A critical part of the CKA exam is finishing all questions within the time limit. If you are tight on time, solving practical problems while navigating a terminal window can be difficult. To improve your time management, solve as many practice papers as possible.
Conclusion
We hope this blog helps you understand the CKA exam pattern through these top 20 CKA certification questions, along with the Kubernetes exam preparation tips you need to begin your Kubernetes journey. A robust strategy is essential, but it's equally important to use up-to-date, authentic training materials that will help you pass this dynamic exam smoothly. Check out our hands-on labs and video lectures designed to cover all your Kubernetes exam needs and pass the exam with flying colors.
Want to learn more about the CKA Certification? Talk to our experts today!