CKA
Preparation for the Certified Kubernetes Administrator exam [CKA v1.29]
:pencil: Objectives:
<details> <summary>1- Cluster Architecture, Installation & Configuration 25%</summary> <p>

- Manage role based access control (RBAC)
- Use Kubeadm to install a basic cluster
- Manage a highly-available Kubernetes cluster
- Provision underlying infrastructure to deploy a Kubernetes cluster
- Perform a version upgrade on a Kubernetes cluster using Kubeadm
- Implement etcd backup and restore
- Understand deployments and how to perform rolling update and rollbacks
- Use ConfigMaps and Secrets to configure applications
- Know how to scale applications
- Understand the primitives used to create robust, self-healing, application deployments [Probes]
- Understand how resource limits can affect Pod scheduling [resources for pods and quota for namespaces]
- Awareness of manifest management and common templating tools
- Understand host networking configuration on the cluster nodes
- Understand connectivity between Pods
- Understand ClusterIP, NodePort, LoadBalancer service types and endpoints
- Know how to use Ingress controllers and Ingress resources
- Know how to configure and use CoreDNS
- Choose an appropriate container network interface plugin
- Understand storage classes, persistent volumes
- Understand volume mode, access modes and reclaim policies for volumes
- Understand persistent volume claims primitive
- Know how to configure applications with persistent storage
- Evaluate cluster and node logging
- Understand how to monitor applications
- Manage container stdout & stderr logs
- Troubleshoot application failure
- Troubleshoot cluster component failure
- Troubleshoot networking
</p> </details>
Cluster components:
A Kubernetes cluster consists of one or more master nodes plus one or more worker nodes; the components running on the master nodes are collectively called the control plane.
Control plane components:
- kube-api server
- etcd Key value store
- kube-scheduler that allocates nodes for pods
- kube-controller-manager, which runs different controllers such as the replication controller, endpoints controller, service account controller and token controller
Worker nodes components:
- kubelet Ensures pods are running on the nodes
- kube-proxy Maintains network rules on the nodes to keep Services (SVCs) working
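On a kubeadm-provisioned cluster these components can be inspected directly; a quick sanity check (assuming the kubeadm setup where the control plane runs as static pods in `kube-system`):

```shell
# Control-plane components (api server, etcd, scheduler, controller-manager) run as pods here
kubectl get pods -n kube-system -o wide

# The kubelet itself runs as a systemd service on every node, not as a pod
systemctl status kubelet
```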
Important kubectl commands:
--recursive with k explain
k explain pod.spec.containers --recursive | less
k <command> -v=<number> # For verbose output, useful for debugging
k cluster-info
k cluster-info dump
k config -h
k config view # View content of ~/.kube/config | /etc/kubernetes/admin.conf
k get events # Displays events for the current ns
k get events -n <ns>
# Filter out normal events so that warnings are better shown
k get events --field-selector type!=Normal
# Enable auto completion
k completion -h
vi ~/.bashrc
alias k=kubectl
source <(kubectl completion bash | sed 's/kubectl/k/g')
export do="--dry-run=client -o yaml"
source ~/.bashrc
vi ~/.vimrc
set tabstop=2 shiftwidth=2 expandtab ai
# if there's an issue with indentation https://stackoverflow.com/questions/426963/replace-tabs-with-spaces-in-vim
:retab
k explain deploy
# Check all fields in a resource
k explain <resource> --recursive # resource can be pod, deployment, ReplicaSet etc
k explain deploy.spec.strategy
k config -h
k proxy # runs on port 8001 by default
# use curl http://localhost:8001 to see a list of API groups
# NOT kubectl but useful
journalctl -u kubelet
journalctl -u kube-apiserver
# Dry run and validate
k apply -f fileName.yml --validate --dry-run=client
kubelet -h
:file_folder: Important Directories:
/etc/kubernetes/pki/ # Here all certs and keys are stored
# ETCD certs
/etc/kubernetes/pki/etcd
/etc/cni
/etc/kubernetes/manifests # Static pods definition files that are used for bootstrapping kubernetes are located here
/etcd.yaml
/kube-apiserver.yaml
/kube-controller-manager.yaml
/kube-scheduler.yaml
/etc/kubernetes/kubelet.conf # On Worker nodes
$HOME/.kube/config # --kubeconfig file
/var/lib/docker # ["aufs", "containers", "image", "volumes"]
/var/lib/kubelet/config.yaml # kubelet config file that contains the static pod path (usually /etc/kubernetes/manifests)
/var/log/pods # The output of kubectl log <pod> is coming from here with a different formatting
/var/log/containers # docker logs are stored here
/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
- The API server serves on port 6443 by default
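A quick way to confirm the endpoint and port your kubectl is talking to (the jsonpath expression assumes the standard kubeconfig v1 layout):

```shell
# Print the API server URL from the active kubeconfig; on a kubeadm cluster it ends in :6443
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Also prints the control plane endpoint
kubectl cluster-info
```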

- A single kubeconfig file can have information related to multiple kubernetes clusters (different servers). There are 3 core fields in the kubeconfig file:
  - clusters field: includes details related to the URL of the cluster server and associated info.
  - users field: contains info about the authentication mechanisms for the user, which can be either:
    - user/password
    - certificates
    - tokens
  - contexts field: groups information related to cluster, user and namespaces.
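Each of the three sections can be listed from the active config; the jsonpath expressions below assume the standard kubeconfig v1 layout:

```shell
kubectl config view -o jsonpath='{.clusters[*].name}'; echo   # cluster entries
kubectl config view -o jsonpath='{.users[*].name}'; echo      # user entries
kubectl config view -o jsonpath='{.contexts[*].name}'; echo   # context entries
kubectl config current-context                                # the context currently in use
```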
- To remove any field from the kubeconfig use the unset command
# Remove context from kubeconfig
k config unset contexts.<name>
# Remove user from kubeconfig
k config unset users.<name>
- While the kubeconfig can be loaded from the known locations such as .kube in the home dir or the $KUBECONFIG env var, sometimes you want to know exactly where it gets loaded from, and here -v is really helpful
k config view -v=10
Create a kubeconfig file:
# 1. Generate the base config file
k config --kubeconfig=<name> set-cluster <cluster-name> --server=https://<address>
# 2. Add user details
k config --kubeconfig=<name> set-credentials <username> --username=<username> --password=<pwd>
# 3. set context in the kubeconfig file
k config --kubeconfig=<name> set-context <name> --cluster=<cluster-name> --namespace=<ns> --user=<user>
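The steps above can be finished by pointing kubectl at the new file and selecting the context (file, context and resource names are placeholders):

```shell
# 4. Switch to the new context inside that file
k config --kubeconfig=<name> use-context <name>

# 5. Use the file explicitly per command, or export it for the whole session
k get po --kubeconfig=<name>
export KUBECONFIG=<name>
```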
Important Documentation page sections:
:bulb: Imperative usage of kubectl command:
# View api-resources
k api-resources -h
# View only the namespaced api-resources
k api-resources --namespaced=true
# Add annotation to deployment
k annotate deploy/<name> k=v
# Create namespace
k create ns <name>
# Create deployment and set an env variable for it
k create deploy <name> --image=<name>
k set env deploy/<name> K=v
# Create ConfigMap from env variables and use it in a deployment
k create cm <name> --from-literal=K=v --from-literal=K=v
k set env deploy/<name> --from=cm/<name>
# Those key value pairs can be stored in a .env file and also be used
k create cm <name> --from-env-file=<name>.env
k set env deploy/<name> --from=cm/<name>
# Limit the values that will be used from a configMap using --keys option
k set env deploy/<name> --from=cm/<name> --keys="key,key"
# Set resources for deployment
k set resources -h
k set resources deploy/<name> --requests=cpu=200m,memory=512Mi --limits=cpu=500m,memory=1Gi
# Create HorizontalPodAutoscaler resource [HPA] for a deployment
k autoscale deploy <name> --min=<number> --max=<number> --cpu-percent=<number>
k get hpa
k get all
# Create a secret
k create secret generic <name> --from-literal=K=v
# Delete resource
k delete <resource> --force --grace-period=0
# Get using labels, either --selector or -l followed by key value pairs
k get po --selector k=v # or -l k=v; can be done for multiple labels, just separate using a comma: k=v,k=v etc
# Get pods with the same label across all namespaces [-A is short for --all-namespaces]
k get po -l k=v -A
# Run a pod
k run <name> --image=<name> -o yaml --dry-run=client > <name>.yml
k apply -f <name>.yml
# Define an env variable for a pod using --env
k run <name> --image=<name> --env K=v --env K=v -o yaml --dry-run=client
## Labelling
# Label a node
k label nodes <node-name> k=v
# Delete a label from the node
k label nodes <node-name> k-
## Rollout commands
k rollout -h
k rollout [history/pause/restart/resume/status/undo] deploy/<name>
# View details of a specific revision
k rollout history deploy/<name> --revision=<number>
# Scale replicas of a deployment
k scale deploy/<name> --replicas=<number>
<details> <summary>Cluster Maintenance</summary> <p>
# Mark node as unusable
k drain <node> | k cordon <node>
# Remove the drain restriction
k uncordon <node>
Cordon vs. drain:
- Cordon doesn't terminate existing pods on the node but it prevents creation of any new pods on that node
- Drain terminates those pods and they get allocated to a different node
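A minimal round trip illustrating the difference (the node name is a placeholder; the drain flags are the common ones needed when DaemonSet pods or emptyDir volumes are present):

```shell
kubectl cordon node01        # existing pods keep running; node becomes SchedulingDisabled
kubectl get node node01      # STATUS shows Ready,SchedulingDisabled

kubectl drain node01 --ignore-daemonsets --delete-emptydir-data   # also evicts the running pods

kubectl uncordon node01      # node accepts new pods again
```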
Upgrading a cluster:
kubeadm upgrade plan
kubeadm upgrade apply
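A sketch of the usual kubeadm upgrade flow on a Debian/Ubuntu control-plane node (package versions are placeholders; always check the `kubeadm upgrade plan` output for the actual target version first):

```shell
# 1. Upgrade the kubeadm binary itself
apt-get update && apt-get install -y kubeadm=1.29.x-*

# 2. Plan, then apply the control-plane upgrade
kubeadm upgrade plan
kubeadm upgrade apply v1.29.x

# 3. Drain the node, upgrade kubelet/kubectl, restart the kubelet
kubectl drain <node> --ignore-daemonsets
apt-get install -y kubelet=1.29.x-* kubectl=1.29.x-*
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node>

# Worker nodes follow the same drain/upgrade/uncordon steps,
# but run `kubeadm upgrade node` instead of `kubeadm upgrade apply`
```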
Backup resource configuration:
1- Backup all resources
kubectl get all -Ao yaml > all_resources.yml
:bell: Implement etcd backup and restore :bell:
2- Use etcdctl to backup the etcd server
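A commonly used etcdctl sequence for this (the endpoint and cert paths assume the default kubeadm layout under /etc/kubernetes/pki/etcd; the snapshot and data-dir paths are placeholders):

```shell
# Backup: take a snapshot of etcd over its client port
ETCDCTL_API=3 etcdctl snapshot save /opt/snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Restore: write the snapshot into a fresh data dir, then point the
# etcd static pod's hostPath / --data-dir at the new directory
ETCDCTL_API=3 etcdctl snapshot restore /opt/snapshot.db \
  --data-dir=/var/lib/etcd-from-backup
```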
</p> </details>