The Certified Kubernetes Application Developer (CKAD) certification is developed by the Cloud Native Computing Foundation (CNCF).
The exam consists of a set of problems (~19) to be solved on the command line and takes approximately 2 hours to complete.
Below are some notes to help you pass the exam.
By default, the editor used to modify k8s resources is Vim. If you are not familiar with this editor, you can use nano instead:
export KUBE_EDITOR="nano"
Writing a full kubectl command is really time-consuming, so it is useful to define aliases:
export ns=default
alias k='kubectl -n $ns' # helps when the namespace in question doesn't have a friendly name
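For example, assuming the alias above, retargeting every subsequent command is just a variable change (the single quotes defer expansion of $ns to invocation time):
ns=kube-system
k get pods # runs: kubectl -n kube-system get pods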
The default output format for all kubectl commands is the human-readable plain-text format. The -o flag allows us to output the details in several different formats. Here are some of the commonly used formats:
- -o json: Output a JSON formatted API object.
- -o name: Print only the resource name and nothing else.
- -o wide: Output in the plain-text format with any additional information.
- -o yaml: Output a YAML formatted API object.
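As a quick illustration, assuming a pod named my-pod exists in the current namespace:
k get pod my-pod -o name   # pod/my-pod
k get pods -o wide         # adds IP and node columns
k get pod my-pod -o yaml   # full manifest, handy as an editing starting point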
Whatever the kind of k8s resource, it is possible to simply generate a YAML template into a file:
k run my-pod --image=nginx --restart=Never --dry-run=client -o yaml > my-pod.yaml
Then, this file can be completed and used to create the resource:
k create -f my-pod.yaml
Each k8s object kind has a shortcut/alias. Using them saves time:
- po for Pods
- rs for ReplicaSets
- deploy for Deployments
- svc for Services
- ns for Namespaces
- netpol for NetworkPolicies
- pv for PersistentVolumes
- pvc for PersistentVolumeClaims
- sa for ServiceAccounts
For example, it is possible to retrieve the ConfigMaps created in the current namespace using the command:
k get cm
In order to debug behavior from inside the cluster, it can be useful to run a busybox image and exec a shell in it.
This can be done with a simple command:
k run -i --tty busybox --restart=Never --image=busybox -- sh
It is possible to run a throwaway busybox pod to fetch a web resource:
k run -it --rm busybox --image=busybox --restart=Never --labels='access=not-granted' -- wget -O- nginx
It is also possible to open a shell in any running pod:
k exec -it <pod_name> -- sh
It is possible to run a specific command by tuning the pod specification:
apiVersion: v1
kind: Pod
metadata:
  name: time-check
  namespace: default
spec:
  containers:
  - image: busybox
    name: time-check
    command: ["/bin/sh", "-c", "while true; do date; sleep $TIME_FREQ; done > /opt/time/time-check.log"]
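Note that $TIME_FREQ must be provided to the container, or the sleep will fail. A minimal sketch adding it under the container as a hardcoded environment variable (the value is illustrative; it could equally come from a ConfigMap):
    env:
    - name: TIME_FREQ
      value: "10" # illustrative value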
Generate yaml
k create cronjob my-cj --image=busybox --schedule="* * * * *" -o yaml --dry-run=client
Generate yaml
# On old kubectl versions only (`k run` now generates a Pod, not a Deployment):
k run my-deploy --image=nginx --dry-run=client -o yaml
k create deploy my-deploy --image=nginx -o yaml --dry-run=client
Update the replicas
k scale deploy my-deploy --replicas=3
Autoscale a deployment
k autoscale deploy nginx --min=5 --max=10 --cpu-percent=80
Generate yaml
k expose pod nginx --port=8080 --name=nginx-service --dry-run=client -o yaml
k expose deployment my-app --name=my-service --type=NodePort --target-port=80 --port=80 --dry-run=client -o yaml
k create service clusterip nginx --tcp=8080:8080 --dry-run=client -o yaml
Generate yaml
k create namespace my-namespace --dry-run=client -o yaml
Specify a namespace
k get pods -n my-namespace
k get pods --namespace my-namespace
k get pods --all-namespaces
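To avoid repeating the flag, the namespace can also be set as the default for the current context:
k config set-context --current --namespace=my-namespace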
Generate yaml
k create cm my-cm --from-literal=MY_ENV=my_value -o yaml --dry-run=client
echo "MY_ENV=my_value" > envs.txt
# --from-file stores the whole file under the key "envs.txt"; use --from-env-file to parse KEY=value lines
k create cm my-cm --from-file=envs.txt -o yaml --dry-run=client
Reference a ConfigMap in a pod
# All env from cm
envFrom:
- configMapRef:
    name: my-cm
# Only some keys
env:
- name: MY_ENV
  valueFrom:
    configMapKeyRef:
      name: my-cm
      key: MY_ENV
# From volume
volumes:
- name: my-cm-volume
  configMap:
    name: my-cm
Generate yaml
k create secret generic my-secret --from-literal=MY_ENV=my_value -o yaml --dry-run=client
echo "MY_ENV=my_value" > envs.txt
k create secret generic my-secret --from-file=envs.txt -o yaml --dry-run=client
Reference a secret in a pod
# All env from secret
envFrom:
- secretRef:
    name: my-secret
# Only some keys
env:
- name: MY_ENV
  valueFrom:
    secretKeyRef:
      name: my-secret
      key: MY_ENV
# From volume
volumes:
- name: my-secret-volume
  secret:
    secretName: my-secret
Encode & decode secrets
# encode
echo -n 'my_value' | base64
# decode
echo -n 'bXlfdmFsdWU=' | base64 --decode
k get secret my-secret -o yaml | yq r - data.MY_ENV | base64 --decode # yq v3 syntax
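If yq is not available, the same value can be read with kubectl's built-in jsonpath output:
k get secret my-secret -o jsonpath='{.data.MY_ENV}' | base64 --decode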
A security context defines the operating system security settings (uid, gid, capabilities, SELinux role, etc.) applied to a container.
Update security context
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  # At pod level
  securityContext:
    runAsUser: 1000
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    # Or at container level
    securityContext:
      runAsUser: 2000
      capabilities:
        add: ["MAC_ADMIN"]
A service account provides an identity for processes that run in a Pod.
Generate yaml
k create sa my-sa --dry-run=client -o yaml
Create pod with service account
k run nginx --image=nginx --restart=Never --serviceaccount=my-sa -o yaml --dry-run=client
Reference a service account
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  # Change default service account
  serviceAccountName: my-sa
  # Do not mount the service account token automatically
  automountServiceAccountToken: false
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
When you specify a Pod, you can optionally specify how much of each resource a Container needs. The most common resources to specify are CPU and memory (RAM); there are others.
Specify resource requirements at creation
k run nginx --image=nginx --restart=Never --requests='cpu=100m,memory=256Mi' --limits='cpu=200m,memory=512Mi' -o yaml --dry-run=client
Specify resource requirements
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    resources:
      requests:
        memory: "1Gi"
        cpu: 1
      limits:
        memory: "2Gi"
        cpu: 2
Node affinity is a property of Pods that attracts them to a set of nodes (either as a preference or a hard requirement). Taints are the opposite: they allow a node to repel a set of pods.
Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.
Taint a node
k taint nodes my-node key=value:taint-effect
taint-effect can be:
- NoSchedule: pods without a matching toleration won't be scheduled.
- PreferNoSchedule: the scheduler tries to avoid scheduling pods without a matching toleration, but there is no guarantee.
- NoExecute: pods without a matching toleration won't be scheduled, and existing pods without one will be evicted.
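A taint can be removed by repeating the command with a trailing dash after the effect:
k taint nodes my-node key=value:NoSchedule-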
Apply a toleration to a pod
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  tolerations:
  - key: "key"
    operator: "Equal"
    value: "value"
    effect: "taint-effect"
nodeSelector provides a very simple way to constrain pods to nodes with particular labels.
Label a node
k label nodes my-node key=value
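A label can be removed the same way, with a trailing dash after the key:
k label nodes my-node key-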
Specify a node selector to a pod
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  # Select node by node name
  nodeName: master
  # Select node by label
  nodeSelector:
    key: value
Node affinity is conceptually similar to nodeSelector: it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.
Specify an affinity to a pod
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  affinity:
    nodeAffinity:
      # preferredDuringSchedulingIgnoredDuringExecution
      # requiredDuringSchedulingRequiredDuringExecution
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: key
            operator: In|NotIn|Exists
            values:
            - value
Specify readiness
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    # Is the container ready?
    readinessProbe:
      # tcpSocket:
      # exec:
      #   command: ["ls", "/var/www/html/file_check"]
      httpGet:
        path: /api/ready
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 8
Specify liveness
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    # Is the container still alive?
    livenessProbe:
      # tcpSocket:
      # exec:
      #   command:
      httpGet:
        path: /api/alive
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 8
View pod logs
k logs -f my-pod
View pod logs for a specific container
k logs -f my-pod my-container
View pod logs for the previous instance
k logs -p my-pod
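A few other log flags that save time:
k logs my-pod --tail=20   # last 20 lines only
k logs my-pod --since=1h  # logs from the last hour
k logs -l app=my-app      # logs from all pods matching a label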
Pod definition
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: my-busybox
  name: my-busybox
  namespace: dev2406
spec:
  containers:
  - image: busybox
    name: secret
    resources: {}
    command: ["/bin/sh", "-c", "sleep 3600"]
  restartPolicy: Never
Labels definition
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app-label
    function: my-function-label
spec:
  containers:
  - name: my-app
    image: my-app
Show labels
k get pods --show-labels
Show specific label
k get pods -L label_key
Get filtered by label
k get pods --selector key=value
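Selectors can combine several labels, and set-based expressions are also supported:
k get pods --selector key1=value1,key2=value2
k get pods -l 'env in (prod,dev)'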
Selector definition
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-rs
  labels:
    app: my-app-label
    function: my-function-label
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app-label
      function: my-function-label
  template:
    [...]
apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: my-app-label
    function: my-function-label
spec:
  # Service selectors are plain key/value pairs (no matchLabels)
  selector:
    app: my-app-label
    function: my-function-label
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Annotate
k annotate po nginx1 nginx2 nginx3 description='my description'
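An annotation can be removed with a trailing dash after its key:
k annotate po nginx1 description-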
Annotations definition
apiVersion: v1
kind: Service
metadata:
name: my-service
annotations:
buildVersions: 1.34
spec:
selector:
matchLabels:
app: my-app-label
function: my-function-label
ports:
- protocol: TCP
port: 80
targetPort: 9376
Deployment strategy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  # Strategy type: RollingUpdate by default
  strategy:
    type: RollingUpdate|Recreate
    rollingUpdate:
      maxSurge: 2 # how many pods we can add at a time
      maxUnavailable: 0 # how many pods can be unavailable during the rolling update
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Create deployment
k create -f deploy-def.yaml
Get deployment
k get deploy
Update a deployment
k apply -f deploy-def.yaml
k set image deploy/my-deploy container=newImage
k edit deploy my-deploy [--record]
Get deployment status
k rollout status deploy/my-deploy
k rollout history deploy/my-deploy [--revision=version]
Rollback a deployment
k rollout undo deploy/my-deploy
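To roll back to a specific revision (as listed by rollout history):
k rollout undo deploy/my-deploy --to-revision=2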
Pause/resume the rollout
k rollout pause deploy nginx
k rollout resume deploy nginx
Job definition
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  # Number of pods to create
  completions: 3
  # Number of pods created in parallel
  parallelism: 3
  # Force job termination after a specific deadline
  activeDeadlineSeconds: 20
  # Number of retries before the job is considered failed
  backoffLimit: 25
  template:
    [Pod definition]
CronJob definition
# batch/v1beta1 on clusters older than 1.21
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-cronjob
spec:
  # Cron definition
  schedule: "*/1 * * * *"
  # Skip to the next scheduled run if the job has not started within this deadline
  startingDeadlineSeconds: 60
  # Concurrency policy
  concurrencyPolicy: Allow|Forbid|Replace
  jobTemplate:
    [Job definition]
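A CronJob can also be triggered immediately by creating a Job from it (the job name is arbitrary):
k create job my-manual-run --from=cronjob/my-cronjob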
Service types
- NodePort: forwards requests from a port on the node to a pod port.
- ClusterIP: creates a virtual IP inside the cluster and enables communication between services.
- LoadBalancer: provisions a load balancer distributing the load between pods.
NodePort
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  ports:
  - targetPort: 80
    # Required
    port: 80
    # Range: 30000 - 32767
    nodePort: 30008
  # Required
  selector:
    app: my-app
    type: my-app-type
ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # Default type
  type: ClusterIP
  ports:
  - targetPort: 80
    port: 80
  # Required
  selector:
    app: my-app
    type: my-app-type
List endpoints
k get ep
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Rewrite the target
    nginx.ingress.kubernetes.io/rewrite-target: /
    # Or rewrite the target with a regex capture group
    # nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  # Handle all traffic not matched by a rule
  defaultBackend:
    service:
      name: my-service
      port:
        name: my-service-port
  # Specific rules
  rules:
  - host: watch.ecom-store.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: video-service
            port:
              number: 8080
  - host: apparels.ecom-store.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apparels-service
            port:
              number: 8080
By default, pods are non-isolated; they accept traffic from any source.
Pods become isolated by having a NetworkPolicy that selects them. Once there is any NetworkPolicy in a namespace selecting a particular pod, that pod will reject any connections that are not allowed by any NetworkPolicy.
ingress: Each NetworkPolicy may include a list of allowed ingress rules. Each rule allows traffic which matches both the from and ports sections. The example policy contains a single rule, which matches traffic on a single port, from one of three sources, the first specified via an ipBlock, the second via a namespaceSelector and the third via a podSelector.
egress: Each NetworkPolicy may include a list of allowed egress rules. Each rule allows traffic which matches both the to and ports sections. The example policy contains a single rule, which matches traffic on a single port to any destination in 10.0.0.0/24.
Ingress network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-np
spec:
  # Pods to apply the policy to
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          name: api-pod
    ports:
    - protocol: TCP
      port: 3306
Egress network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-np
spec:
  # Pods to apply the policy to
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978
Types of volumes
All types of volumes can be found here: https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes
Define volume
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app-label
    function: my-function-label
spec:
  containers:
  - name: my-app
    image: my-app
    volumeMounts:
    - mountPath: /opt
      name: my-volume
      # readOnly: true
  volumes:
  - name: my-volume
    hostPath:
      path: /path
      type: Directory
    # emptyDir: {}
    # persistentVolumeClaim:
    #   claimName: my-claim
    # awsElasticBlockStore:
    #   volumeID: <volume-id>
    #   fsType: ext4
    # configMap:
    #   name: my-cm
    # secret:
    #   secretName: my-secret
Create Persistent Volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  accessModes:
  - ReadWriteOnce|ReadOnlyMany|ReadWriteMany
  # What happens to the volume when the claim is deleted
  persistentVolumeReclaimPolicy: Retain|Delete|Recycle
  # Specify storage class name
  storageClassName: storage-class-name
  capacity:
    storage: 1Gi
  # Host path
  hostPath:
    path: /tmp
  # Elastic block store (alternative volume source)
  # awsElasticBlockStore:
  #   volumeID: <volume-id>
  #   fsType: ext4
  # NFS (alternative volume source)
  # volumeMode: Filesystem
  # nfs:
  #   path: /html
  #   server: nfs01
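The claim side is symmetric. A minimal PersistentVolumeClaim sketch that could bind to the volume above (the name and size are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: storage-class-name
  resources:
    requests:
      storage: 1Gi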