CKA K8s

A study environment for preparing for the Certified Kubernetes Administrator exam [CKA v1.29]


:pencil: Objectives:

<details> <summary><b>1- Cluster Architecture, Installation & Configuration 25%</b></summary> <p>
  1. Manage role based access control (RBAC)
  2. Use Kubeadm to install a basic cluster
  3. Manage a highly-available Kubernetes cluster
  4. Provision underlying infrastructure to deploy a Kubernetes cluster
  5. Perform a version upgrade on a Kubernetes cluster using Kubeadm
  6. Implement etcd backup and restore
</p> </details> <details> <summary><b>2- Workloads & Scheduling 15%</b></summary> <p>
  1. Understand deployments and how to perform rolling update and rollbacks
  2. Use ConfigMaps and Secrets to configure applications
  3. Know how to scale applications
  4. Understand the primitives used to create robust, self-healing, application deployments [Probes]
  5. Understand how resource limits can affect Pod scheduling [resources for pods and quota for namespaces]
  6. Awareness of manifest management and common templating tools
</p> </details> <details> <summary><b>3- Services & Networking 20%</b></summary> <p>
  1. Understand host networking configuration on the cluster nodes
  2. Understand connectivity between Pods
  3. Understand ClusterIP, NodePort, LoadBalancer service types and endpoints
  4. Know how to use Ingress controllers and Ingress resources
  5. Know how to configure and use CoreDNS
  6. Choose an appropriate container network interface plugin
</p> </details> <details> <summary><b>4- Storage 10%</b></summary> <p>
  1. Understand storage classes, persistent volumes
  2. Understand volume mode, access modes and reclaim policies for volumes
  3. Understand persistent volume claims primitive
  4. Know how to configure applications with persistent storage
</p> </details> <details> <summary><b>5- Troubleshooting 30%</b></summary> <p>
  1. Evaluate cluster and node logging
  2. Understand how to monitor applications
  3. Manage container stdout & stderr logs
  4. Troubleshoot application failure
  5. Troubleshoot cluster component failure
  6. Troubleshoot networking
</p> </details>

Cluster components:

A Kubernetes cluster consists of one or more master nodes plus one or more worker nodes; the components running on the master nodes are collectively called the control plane.

Control plane components:

  • kube-apiserver
  • etcd: key-value store that holds the cluster state
  • kube-scheduler: allocates nodes for pods
  • kube-controller-manager: runs the different controllers, such as the replication controller, endpoints controller, service account controller and token controller

Worker nodes components:

  • kubelet: ensures the containers described in PodSpecs are running and healthy on the node
  • kube-proxy: maintains network rules on the nodes to keep SVCs working
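On a kubeadm-built cluster, the control plane components above run as static pods in the kube-system namespace, while kubelet runs as a systemd service on every node. A hedged sketch of how to see each of them (output will differ per cluster):

```shell
# Control plane components appear as pods in kube-system on kubeadm clusters
kubectl get pods -n kube-system -o wide   # kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy

# Worker node health
kubectl get nodes                         # nodes should report Ready

# kubelet itself is not a pod; check it on the node as a systemd unit
systemctl status kubelet
```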

Important kubectl commands:

--recursive with k explain

k explain pod.spec.containers --recursive | less
k <command> -v=<number> # For verbose output, useful for debugging
k cluster-info 
k cluster-info dump
k config -h
k config view # View content of ~/.kube/config | /etc/kubernetes/admin.conf

k get events # Displays events for the current ns 
k get events -n <ns>
# Filter out normal events so that warnings are better shown
k get events --field-selector type!=Normal 

# Auto completion enable
k completion -h 
vi ~/.bashrc
alias k=kubectl
source <(kubectl completion bash | sed 's/kubectl/k/g')
export do="--dry-run=client -o yaml"
source ~/.bashrc

vi ~/.vimrc
set tabstop=2 shiftwidth=2 expandtab ai

# if there's an issue with indentation https://stackoverflow.com/questions/426963/replace-tabs-with-spaces-in-vim
:retab

k explain deploy

# Check all fields in a resource
k explain <resource> --recursive # resource can be pod, deployment, ReplicaSet etc

k explain deploy.spec.strategy

k config -h

k proxy # runs on port 8001 by default 
# use curl http://localhost:8001 -k to see a list of API groups

# NOT kubectl but useful
journalctl -u kubelet
journalctl -u kube-apiserver

# Dry run and validate
k apply -f fileName.yml --validate --dry-run=client

kubelet -h

:file_folder: Important Directories:

/etc/kubernetes/pki/ # Here all certs and keys are stored

# ETCD certs 
/etc/kubernetes/pki/etcd

/etc/cni

/etc/kubernetes/manifests # Static pod definition files used for bootstrapping Kubernetes are located here
  /etcd.yaml
  /kube-apiserver.yaml
  /kube-controller-manager.yaml
  /kube-scheduler.yaml

/etc/kubernetes/kubelet.conf # On Worker nodes 

$HOME/.kube/config # --kubeconfig file

/var/lib/docker # ["aufs", "containers", "image", "volumes"]

/var/lib/kubelet/config.yaml # kubelet config file that contains the static pod path (usually /etc/kubernetes/manifests)

/var/log/pods # The output of kubectl logs <pod> comes from here with different formatting

/var/log/containers # Container logs are stored here (symlinks into /var/log/pods)

/usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
  • The API server serves on port 6443 by default
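A quick way to confirm the port, assuming defaults on a control plane node (on kubeadm clusters the /version endpoint is usually readable even anonymously):

```shell
# Unauthenticated check that the API server answers on 6443
curl -k https://localhost:6443/version

# The API server URL (including the port) is also printed here
kubectl cluster-info
```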

kubeconfig

  • A single kubeconfig file can have information related to multiple Kubernetes clusters (different servers). There are 3 core fields in the kubeconfig file:
  1. clusters field: includes details related to the URL of the cluster server and associated info.

  2. users field: contains info about the authentication mechanism for the user, which can be either:
     • user/password
     • certificates
     • tokens

  3. contexts field: groups information related to cluster, user and namespace.
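Since contexts tie the three fields together, the day-to-day workflow is inspecting and switching contexts. These are standard `kubectl config` subcommands:

```shell
# List all contexts defined in the kubeconfig; the active one is starred
kubectl config get-contexts

# Print just the active context
kubectl config current-context

# Switch cluster/user/namespace in one step
kubectl config use-context <name>

# Change only the default namespace of the current context
kubectl config set-context --current --namespace=<ns>
```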

  • To remove any field from the kubeconfig use the unset command
# Remove context from kubeconfig 
k config unset contexts.<name>

# Remove user from kubeconfig 
k config unset users.<name>
  • While kubeconfig can be loaded from the known locations, such as ~/.kube/config in the home dir or the $KUBECONFIG env var, sometimes you want to know exactly where it gets loaded from, and here the -v flag is really helpful
k config view -v=10

Create a kubeconfig file:

# 1. Generate the base config file
k config --kubeconfig=<name> set-cluster <cluster-name> --server=https://<address>

# 2. Add user details 
k config --kubeconfig=<name> set-credentials <username> --username=<username> --password=<pwd>

# 3. set context in the kubeconfig file 
k config --kubeconfig=<name> set-context <name> --cluster=<cluster-name> --namespace=<ns> --user=<user>
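Putting the three steps together, a worked sketch with hypothetical names (dev-cluster, dev-user, dev.kubeconfig, and the server address are all illustrative):

```shell
# 1. Base config with the cluster entry
kubectl config --kubeconfig=dev.kubeconfig set-cluster dev-cluster \
  --server=https://10.0.0.10:6443

# 2. User entry (basic auth shown; certs/tokens work the same way)
kubectl config --kubeconfig=dev.kubeconfig set-credentials dev-user \
  --username=dev-user --password=S3cret

# 3. Context tying cluster + user + namespace together, then activate it
kubectl config --kubeconfig=dev.kubeconfig set-context dev \
  --cluster=dev-cluster --namespace=dev --user=dev-user
kubectl config --kubeconfig=dev.kubeconfig use-context dev

# Inspect the result
kubectl config --kubeconfig=dev.kubeconfig view
```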


Important Documentation page sections:


:bulb: Imperative usage of kubectl command:

# View api-resources 
k api-resources -h

# View only the namespaced api-resources
k api-resources --namespaced=true

# Add annotation to deployment
k annotate deploy/<name> k=v

# Create namespace 
k create ns <name>

# Create deployment and set an env variable for it
k create deploy <name> --image=<name> 
k set env deploy/<name> K=v

# Create ConfigMap from env variables and use it in a deployment
k create cm <name> --from-literal=K=v --from-literal=K=v 
k set env deploy/<name> --from=cm/<name>

# Those key value pairs can be stored in a .env file and also be used 
k create cm <name> --from-env-file=<name>.env
k set env deploy/<name> --from=cm/<name>

# Limit the values that will be used from a configMap using --keys option
k set env deploy/<name> --from=cm/<name> --keys="key,key" 

# Set resources for deployment 
k set resources -h 
k set resources deploy/<name> --requests=cpu=200m,memory=512Mi --limits=cpu=500m,memory=1Gi

# Create HorizontalPodAutoscaler resource [HPA] for a deployment
k autoscale deploy <name> --min=<number> --max=<number> --cpu-percent=<number>
k get hpa

k get all 

# Create a secret 
k create secret generic <name> --from-literal=K=v

# Delete resource 
k delete <resource> --force --grace-period=0 

# Get using labels, either --selector or -l followed by key value pairs
k get po --selector k=v # or: k get po -l k=v; for multiple labels separate with a comma: k=v,k=v

# Get pods with the same label across all namespaces [-A is short for --all-namespaces]
k get po -l k=v -A

# Run a pod 
k run <name> --image=<name> -o yaml --dry-run=client > <name>.yml
k apply -f <name>.yml

# Define an env variable for a pod using --env
k run <name> --image=<name> --env K=v --env K=v -o yaml --dry-run=client 


## Labelling
# Label a node
k label nodes <node-name> k=v

# Delete a label from the node
k label nodes <node-name> k-

## Rollout commands
k rollout -h 
k rollout [history/pause/restart/resume/status/undo] deploy/<name>

# View details of a specific revision
k rollout history deploy/<name> --revision=<number>

# Scale replicas of a deployment 
k scale deploy/<name> --replicas=<number>


<details> <summary>Cluster Maintenance</summary> <p>
# Mark node as unschedulable
k drain <node> # or: k cordon <node>

# Remove the drain restriction
k uncordon <node>

Cordon Vs drain:

  • Cordon doesn't terminate existing pods on the node but it prevents creation of any new pods on that node
  • Drain terminates those pods and they get allocated to a different node
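In practice, drain usually refuses to run on nodes with DaemonSet pods or emptyDir volumes unless told otherwise; these are standard kubectl flags:

```shell
# Evict all pods; skip DaemonSet-managed pods, allow deleting emptyDir data
k drain <node> --ignore-daemonsets --delete-emptydir-data

# The node now shows SchedulingDisabled
k get nodes

# Allow scheduling again once maintenance is done
k uncordon <node>
```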

Upgrading a cluster:

kubeadm upgrade plan
kubeadm upgrade apply <version> # e.g. v1.29.x
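A hedged sketch of the full flow on a Debian/Ubuntu control plane node; the package versions are placeholders and the package manager steps vary per distro:

```shell
# Upgrade the kubeadm binary first, then plan and apply
apt-get update && apt-get install -y kubeadm=1.29.x-*
kubeadm upgrade plan                 # shows available target versions
kubeadm upgrade apply v1.29.x       # upgrades the control plane components

# Then, per node: drain, upgrade kubelet/kubectl, restart kubelet, uncordon
k drain <node> --ignore-daemonsets
apt-get install -y kubelet=1.29.x-* kubectl=1.29.x-*
systemctl daemon-reload && systemctl restart kubelet
k uncordon <node>
```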

Backup resource configuration:

1- Backup all resources

kubectl get all -Ao yaml > all_resources.yml

:bell: Implement etcd backup and restore :bell:

2- Use etcdctl to backup the etcd server
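A sketch of the snapshot save/restore flow with etcdctl; the backup path and restore data dir are illustrative, while the cert paths match the kubeadm defaults under /etc/kubernetes/pki/etcd noted above:

```shell
# Take a snapshot of the running etcd server
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot
ETCDCTL_API=3 etcdctl snapshot status /opt/etcd-backup.db -w table

# Restore into a new data dir, then point the hostPath in
# /etc/kubernetes/manifests/etcd.yaml at it
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --data-dir=/var/lib/etcd-restored
```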

