My Revision for Certified Kubernetes Application Developer CKAD

CHOO Jek Bao
40 min read · Dec 30, 2021

This is my revision for the CKAD exam.

I used three sources for my revision.

  1. Sanders: Optional for CKAD revision
  2. Mumshad: Essential for CKAD revision
  3. Killer.SH

Lesson 1: Understanding and Using Containers

1.1 What is a Container

  • Containers are Linux!
  • Linux Kernel Namespaces provide strict isolation between system components at different levels.
  • Isolation at: network, file, users, processes, and Inter Process Communication.
  • Several container runtime engines: docker, lxc, runc, cri-o, rkt, containerd.

1.2 Starting Containers

$ yum install docker

Install docker in Linux.

$ systemctl enable --now docker

Start docker service.

$ setenforce 0

Change the SELinux mode from enforcing to permissive. This is for Kubernetes.

$ vim /etc/sysconfig/selinux

Edit the SELinux configuration file so the change persists across reboots, i.e. set SELINUX from enforcing to permissive (or disabled).

$ mkdir -p /var/www/html

Make a directory to hold content for the container. Containers use ephemeral storage, so persistent content lives on the host.

$ echo hello from docker >> /var/www/html/index.html

Redirect “hello from docker” output to index.html file.

$ docker run -d -p 8080:80 --name="myapache" -v /var/www/html:/var/www/html httpd

Run a container with name “myapache” using the httpd image.

  • The option -d runs the container detached, as a daemon.
  • The option -p 8080:80 forwards host port 8080 to container port 80.
  • The option -v bind mounts storage, making sure that /var/www/html on the host is available in the container at /var/www/html.
$ docker ps

Show the docker processes running now.

$ ss -tunap | grep 8080

Dump socket statistics for port 8080.

See that port 8080 is actively listening.

$ curl http://localhost:8080

See that html is served.

$ docker run -it busybox

Run the docker image busybox as a container. After which we can type Control+p, Control+q to disconnect from the interactive terminal while keeping the container running.

<inside busybox container terminal>
$ ps
$ top
<disconnect from the terminal using Control+p, Control+q>

This will leave the container running.

$ ps aux

Running ps aux on the host shows all processes, including those of the containers. Some people don't like this because the isolation is not deep enough; for deeper isolation, they prefer hypervisor-based virtualisation.

1.3 Managing Containers

$ docker inspect myapache

Inspect the myapache container.

$ docker inspect --format='{{.NetworkSettings.IPAddress}}' myapache

Inspect the myapache container with formatting of showing only the IPAddress.

$ docker inspect --format='{{.State.Pid}}' myapache

Inspect the myapache container with formatting of showing only the operating system process identifier.

1.4 Managing Container Images

There are two ways to create images.

  • First — using a running container: start a container, make modifications inside it, and use docker commands (such as docker commit) to write the modifications to a new image.
  • Second — using a dockerfile: a script for building images. Each instruction adds a new layer to the image. In fact, instead of distributing images, we can distribute the Dockerfile.
$ yum install git

Install git.

$ git clone https://github.com/sandervanvugt/containers

Clone an example from the author.

<in a directory where there is a Dockerfile>
$ docker build -t nmap .

Build the Dockerfile in the local directory — naming the image nmap.

$ docker run nmap

Run the image that we named nmap as a container.

1.5 Understanding Container Logging

By default, container logs are written inside the container, which means they do not end up on the host operating system. We can, however, capture the logs from a container and make them available on the host.

$  docker run --rm -v /dev/log:/dev/log fedora:latest logger "message from FEDORA"

Run a docker image Fedora as container with the following options

The options:

  • --rm removes the container after it exits.
  • -v bind mounts storage, making sure that /dev/log in the container is the same as /dev/log on the host.
  • logger writes the message “message from FEDORA” to the log in the container.
$ journalctl | grep FEDO

Read the log using journalctl with filtering for the word “FEDO”.

1.Lab Using Containers

Lesson 2: Understanding Kubernetes

2.1 Understanding Kubernetes Core Functions

Kubernetes is Greek for “pilot of the ship” (helmsman).

Labels are important in Kubernetes, because K8s identifies resources by their labels, e.g. frontend-app, backend-app, etc.

2.2 Understanding Kubernetes Origins

K8s is based on Google Borg, the internal system Google has been using for many years to scale services.

2.3 Understanding Kubernetes Management Interfaces

  • API — the K8s API is REST-based.
  • Etcd — the K8s key-value database for configuration.

Three ways to push the configuration into the API.

  • First, using the kubectl command (the Kube Control command). This is the main method.
  • Second, using the Kubernetes dashboard. The web interface allows us to push configuration into etcd through the API.
  • Third, using the curl command. The API is REST, so curl or any other REST client can push configuration into etcd through the API.

2.4 Understanding Kubernetes Architecture

The above is a Kubernetes cluster. Source: https://kubernetes.io/docs/concepts/overview/components/

Lesson 3: Creating a Lab Environment

3.1 Understanding Kubernetes Deployment Options

3.2 Minikube

  • Install VMWare Fusion for personal use. It’s free.
  • Setup Fedora Workstation version 33 as a Virtual Machine with at least 2 CPUs, 8 GB of RAM, and 40 GB of storage.
  • Use student for the Fedora username.
  • Use kubernetes for the Fedora password.
  • Enable hypervisor applications in this virtual machine.
$ grep vmx /proc/cpuinfo

Check that virtualisation is possible.

$ yum install -y git

Install git.

$ git clone https://github.com/sandervanvugt/ckad

Clone a repo prepared by the author.

$ vim ./kube-setup.sh

Read the scripts that will be executed to setup K8s.

$ ./kube-setup.sh

Run the bash shell script to install the kubernetes setup the author prepared.

$ reboot

Reboot after installation of Kubernetes setup is completed.

If the above Approach #1 (Setting up Minikube on a Linux virtual machine) is not working, then consider Approach #2 (Setting up AiO-Kubernetes) or Approach #3 (Using Hosted Kubernetes in GCE).

All three approaches are available here. The purpose is to setup a Kubernetes cluster.

$ kubectl get all

Check that the Kubernetes cluster works.

$ kubectl get nodes

View the nodes in the cluster.

$ minikube start

Start the minikube.

$ minikube ssh

Enter minikube through secure shell.

<inside minikube shell>$ docker ps

View all the docker processes.

We can see that a lot of Docker containers are running in the minikube environment, including etcd, the scheduler, the controller manager, and many more.

3.3 Running Your First Application

$ kubectl get all

View everything that is running in the K8s cluster.

$ kubectl run nginx-started-from-run-command --image=nginx

Run a pod with a container that is using nginx image.

Note: the old Deployment-generating behaviour of kubectl run is deprecated; current kubectl run creates a single Pod.

$ kubectl get all

View all that is running in the K8s cluster.

From no pod running to one pod running.

3.Lab …

Lesson 4: Understanding API and Management Options

4.1 Understanding the Main Kubernetes Objects

4.2 Understanding the Kubernetes API

$ kubectl api-resources

Get a list of API resources in the Kubernetes cluster.

$ kubectl api-versions

Get a list of API versions to see what version to use for creating a specific resource.

4.3 Using Kubernetes Dashboard to Manage API Objects

K8s dashboard is not the recommended way of creating Kubernetes objects.

4.4 Using kubectl to Manage API Objects

kubectl under the hood uses curl to send API requests to the Kubernetes API.

$ kubectl run --help | less

View the help for the kubectl run command, piped through less.

$ kubectl config view

View the different parts of the current configuration, which tells kubectl which cluster to connect to. The configuration file is stored at ~/.kube/config.

$ cat ~/.kube/config

View the configuration file. The same as using kubectl config view.

4.5 Using curl to Work with API Objects

$ kubectl proxy --port=8001 &

Start kubectl proxy on port 8001 (any port not already in use is fine). The proxy acts as an intermediary between the API server and the API client.

<in another terminal tab>
$ curl http://localhost:8001

Get all API endpoints.

<use one of the API endpoints>
$ curl http://localhost:8001/metrics | less

Get metrics from metric API endpoint with less piping.

$ curl http://localhost:8001/api/v1/namespaces/default/pods | less

Get pods information from pods API endpoint with less piping.

Every item in the items array is a pod

4.6 Understanding Authentication and Authorization

Authorization uses RBAC to decide what each user is allowed to do.

$ kubectl auth can-i get pods

Check if I am authorised to get pods.

$ kubectl auth can-i get pods --as jek@gmail.com

Check if I am authorised to get pods as jek@gmail.com

4.Lab

Lesson 5: Managing Pod Basic Features

5.1 Understanding Pods

  • A Pod is an abstraction of a server.
  • A Pod typically runs one container.
  • Linux namespaces provide an isolated environment; multiple containers can run in a single namespace and are exposed through a single IP address.
  • Managing K8s is about managing pods, not so much about managing containers. The pod is the smallest entity that K8s manages: we don't manage containers, we manage pods.
  • Pods are started through a Deployment.
$ kubectl run my-ghost-app --image=ghost:0.9

Start a pod (on older kubectl versions, a deployment) based on the ghost:0.9 image.

$ git clone https://github.com/sandervanvugt/ckad
$ cat ~/ckad/busybox.yaml
$ kubectl create -f ~/ckad/busybox.yaml

View and deploy the busybox.yaml file to a pod.

$ kubectl get pods

Get all the pods that are running.

$ kubectl get pods -o yaml

Get all the pods that are running, in YAML output.

$ kubectl describe pods busybox2

Describe a pod by its name.

$ kubectl edit pods busybox2

Edit a pod by its name.

5.2 Creating a YAML Manifest to Configure Pods

Use indentation — use spaces instead of tabs.

$ vim ~/.vimrc
<add the line below to ~/.vimrc>
autocmd FileType yaml setlocal ai ts=2 sw=2 et

Take care of yaml indentation when using with vim.

Four main ingredients in YAML manifest are:

  • apiVersion: specifies which version of the API to use for this object.
  • kind: indicates the type of object (Deployment, Pod, etc…)
  • metadata: contains administrative information about the object.
  • spec: contains the specifics for the object.

In the spec, we have container ingredients namely:

  • name: the name of the container
  • image: the image that should be used
  • command: the command the container should run
  • args: the arguments that are used by the command
  • env: the environment variables that should be used by the container
$ kubectl explain pods
<to go further into the document, use a dot>
$ kubectl explain pods.spec

Use kubectl explain on pods to identify the ingredients needed in YAML instead of memorising the above ingredients.
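For reference, a minimal Pod manifest that combines the ingredients above might look like the sketch below (the name, command, and environment variable are illustrative values, not from the course files):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-example
spec:
  containers:
  - name: busybox
    # image to run
    image: busybox
    # command overrides the image ENTRYPOINT; args are its arguments
    command: ["sleep"]
    args: ["3600"]
    # environment variables available inside the container
    env:
    - name: GREETING
      value: hello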

<add additional containers to the busybox.yaml file in the cloned ckad folder>
$ kubectl create -f ~/ckad/busybox.yaml

5.3 Working in a Declarative versus Imperative way

  • The declarative way refers to managing objects in a K8s cluster through YAML manifests (e.g. kubectl apply -f manifest.yaml).
  • The imperative way refers to managing objects directly with kubectl commands or the dashboard (GUI).
$ kubectl get deployments my-nginx-23-aug -o yaml

Get current state of an object (deployment not pod) in YAML format

$ kubectl get deployments my-nginx-23-aug -o yaml > another-nginx-for-deployment.yaml

Output the object to another YAML file.

$ kubectl get pods -o wide

Get more detailed pod information, such as the IP address.

$ kubectl create -f ~/ckad/sleepy.yaml
<kubectl logs the_pod_name>
$ kubectl logs sleepy

Get the logs of a pod by pod name e.g. sleepy.

5.4 Understanding Multi-Container Pods

One container per pod is recommended, unless we need:

  • Sidecar container: a container that enhances the primary application, for instance for logging, monitoring, and syncing.
  • Adapter container: a container used to adapt the traffic or data pattern to match the traffic or data patterns of other applications in the cluster.
  • Ambassador container: a container that represents the primary container to the outside world, such as a proxy.

These helper containers share data through the pod shared volume storage.

$ cat ~/ckad/sidecar.yaml
$ kubectl create -f ~/ckad/sidecar.yaml

View the sidecar.yaml to understand that there is one app container plus a sidecar container. The app container writes its log to /var/log in the shared storage. The sidecar container reads from /var/log in the shared storage and serves the log information over HTTP using the httpd image.
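The author's sidecar.yaml is not reproduced here; a comparable sketch of an app container and a sidecar sharing an emptyDir volume could look like this (names, images, and paths are my assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-pod
spec:
  volumes:
  # shared ephemeral storage for both containers
  - name: logs
    emptyDir: {}
  containers:
  - name: app
    image: busybox
    # the app writes a log file into the shared volume
    command: ["sh", "-c", "while true; do date >> /var/log/date.txt; sleep 10; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: sidecar
    image: httpd
    # the sidecar serves the shared volume as the httpd docroot
    volumeMounts:
    - name: logs
      mountPath: /usr/local/apache2/htdocs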

$ kubectl exec -it sidecar-pod -c sidecar -- /bin/bash
<remember: exiting with exit terminates the session. Instead we can use Ctrl+p, Ctrl+q to disconnect.>

Enter the interactive bash shell of the container named sidecar by going through sidecar-pod. It is similar to accessing a docker container through an interactive shell.

<inside the sidecar container interactive shell>
# yum install -y curl
# curl http://localhost/date.txt

Install curl in the sidecar container then view the log files in the sidecar container.

5.5 Using Namespace

Namespaces are used to separate customer resources. Think of a namespace like a data center: each namespace is a separate data center.

$ kubectl get all --all-namespaces

View all resources in all namespaces.

$ kubectl get namespaces

Get a distinct list of namespaces

$ kubectl create ns secret

Create a namespace by the name “secret”.

$ cat ~/ckad/busybox-ns.yaml
$ kubectl create -f ~/ckad/busybox-ns.yaml

Create a pod using busybox-ns.yaml file.

$ vim ~/ckad/busybox-ns.yaml

Use vim editor to add a namespace to the busybox-ns.yaml file as below.

Line 5 of image is newly added
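The screenshot is not reproduced here; the newly added line is a namespace field under metadata, roughly like this (container details assumed):

apiVersion: v1
kind: Pod
metadata:
  name: busybox3
  # the added line: place the pod in the secret namespace
  namespace: secret
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]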
$ kubectl create -f ~/ckad/busybox-ns.yaml
$ kubectl get pods --all-namespaces

Create a pod using the updated busybox-ns.yaml and get all pods namespaces to see that secret namespace has a pod running.

Note: busybox3 appears in both the default and the secret namespaces because we previously created the pod without a namespace, so that earlier pod landed in default.
$ kubectl get pods -n secret

Get all the pods with the namespace “secret”.

5.Lab: Managing Pods

Lesson 6: Managing Pod Advanced Features

6.1 Inspecting Pods

$ kubectl describe pods sidecar-pod

Describe sidecar-pod by reading from the etcd key-value store.

$ kubectl logs sidecar-pod

View the logs of sidecar-pod.

$ kubectl exec -it sidecar-pod -- sh
<Ctrl+p, Ctrl+q to disconnect instead of exit>

Connect to sidecar-pod with interactive terminal.

6.2 Monitoring Pods

$ kubectl get pods -o wide

View all pods with extended information.

6.3 Using Port Forwarding to Access Pods

$ kubectl apply -f nginx.yaml
$ kubectl get all
$ kubectl port-forward pod/nginx-started-from-run-command 8080:80 &
$ curl http://localhost:8080

Apply changes from nginx.yaml.

Set up port forwarding. Port 8080 is the port exposed on localhost, while port 80 is the port within the pod.

However, more advanced ways to access Pod applications are by using services and ingress.

6.4 Understanding SecurityContext

SecurityContext defines privilege and access control settings for a pod or container.

$ cat ~/ckad/securitycontextdemo.yaml

View the file which the author prepared.

runAsNonRoot: true means that this container cannot run as root.
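A hedged sketch of what such a manifest might contain (values are illustrative, not the author's exact file):

apiVersion: v1
kind: Pod
metadata:
  name: nginxsecure
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      # refuse to start the container if it would run as root
      runAsNonRoot: true

Since the stock nginx image starts as root, such a pod fails to start, which is the error the describe command below reveals.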
$ kubectl get pods

View the pods information.

$ kubectl describe pods nginxsecure

View the event information of the error of the pod.

6.5 Managing Jobs

  • Normal Pods are created to run forever.
  • Job Pods are created to run for a limited duration e.g. backup, calculation, and batch processing.
  • Job Pods must have the restartPolicy set to either OnFailure or Never.
  • OnFailure will re-run the container on the same Pod.
  • Never will re-run the container in a new Pod.
$ cat ~/ckad/simplejob.yaml

View the simplejob.yaml file.
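The author's file is not reproduced here, but a minimal Job manifest generally looks like this sketch (name, image, and command assumed):

apiVersion: batch/v1
kind: Job
metadata:
  name: simplejob
spec:
  template:
    spec:
      containers:
      - name: sleepy
        image: busybox
        command: ["sleep", "5"]
      # Jobs must use Never or OnFailure
      restartPolicy: Never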

$ kubectl create -f ~/ckad/simplejob.yaml
$ kubectl get jobs
$ kubectl get pods

Create the job pod and get the status.

$ kubectl get jobs -o yaml

Get the jobs in yaml format.

Completions and Parallelism are both 1. Completion 1 means that it is going to run one pod only.
$ vim ~/ckad/simplejob.yaml

Change completions from the default of 1 to 3.

$ kubectl create -f ~/ckad/simplejob.yaml

Create job pod with 3 completions.

6.6 Managing Cron Jobs

CronJob Pods are used for running tasks on a regular basis. A CronJob → starts a Job → starts a Pod.

$ cat ~/ckad/cron-example.yaml

View the cron example that the author prepared.
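For reference, a CronJob wraps a Job template and adds a schedule; a minimal sketch (schedule, name, and command are assumptions) could be:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron-example
spec:
  # standard cron syntax: here, every 5 minutes
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["date"]
          restartPolicy: Never

On clusters older than 1.21 the apiVersion would be batch/v1beta1 instead.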

$ kubectl create -f ~/ckad/cron-example.yaml
$ kubectl get cronjobs
$ kubectl get cronjobs -o yaml

Create the cron job and output all cronjobs to yaml format.

$ kubectl get jobs
$ kubectl get pods

View all jobs and pods to see that a cron job creates a job then a pod.

$ kubectl get cronjobs
$ kubectl delete cronjobs <name_of_the_cronjobs>

View and delete the cron job.

6.7 Managing Resource Limitations

CPU limits are expressed in millicore or millicpu, 1/1000 of a CPU core. Hence 500 millicore is half a CPU core.

$ cat ~/ckad/frontend-resources.yaml

View a YAML manifest prepared by the author to illustrate cpu and memory configuration.

250m is 250 millicore — it’s a quarter CPU core.
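The relevant part of such a manifest is the resources block on the container; a sketch using the 250m request mentioned above (the memory values and limits are my own illustrative numbers):

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: nginx
    resources:
      # what the scheduler reserves for the container
      requests:
        cpu: 250m
        memory: 64Mi
      # the hard ceiling the container may use
      limits:
        cpu: 500m
        memory: 128Mi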
$ kubectl create -f ~/ckad/frontend-resources.yaml

Create the pod from the YAML manifest.

$ kubectl get pods
$ kubectl describe pods <name_of_the_pod>

Get the pods, then describe a pod by name.

6.8 Managing Init Containers

An init container starts before the main container and must run to completion. Hence, as long as the init container has not completed, the main container is not started.

$ cat ~/ckad/initpod.yaml

View a YAML manifest prepared by the author showing init containers vs containers.
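A hedged sketch of the idea: the init container must complete before the main container starts (names and commands are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: initpod
spec:
  initContainers:
  # runs first; the main container waits until this one completes
  - name: wait-a-bit
    image: busybox
    command: ["sh", "-c", "sleep 20"]
  containers:
  - name: main
    image: nginx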

6.Lab Managing Pod Advanced Features

Lesson 7: Managing Deployments

7.1 Understanding Deployment Features

From Lesson 1 to 6 we managed pods directly, i.e. native (naked) pods. In reality and in production, we run pods through Deployments, because we want to scale multiple pods up and down for high availability instead of running just one pod.

7.2 Managing Deployment Scalability

$ cat ~/ckad/redis-deploy.yaml
$ kubectl create -f ~/ckad/redis-deploy.yaml

View and create a deployment from YAML manifest.

$ kubectl explain deployment

View the documentation for Deployment, including which apiVersion to use.

Make sure version is correct in the redis-deploy.yaml file.

Below shows the difference between Pod and Deployment.

7.3 Understanding Labels, Selectors and Annotations

Labels

  • Label is useful for locating resources at a later stage.
  • Labels are used by Kubernetes for Pod selection by Deployments and Services. For example, a Deployment monitors that a sufficient number of Pods is running by matching on their label.

Selector

  • Selector is useful for filtering by label.

Annotations

  • Annotations are used to provide metadata to an object.
$ kubectl get deployments --show-labels

Get all deployments with label information.

$ kubectl get pods --show-labels

Get all pods with label information.

$ kubectl label deployment redis key_jek=value_jek

Add label to a deployment object.

$ kubectl get deployment --selector key_jek --show-labels
$ kubectl get deployment --selector key_jek=value_jek --show-labels

Get deployment by label.

7.4 Managing Deployment History

Deployment history traces changes that have been applied.

$ kubectl get deployment
$ kubectl rollout history deployment <deployment_name>

Get history of a deployment.

7.5 Managing Rolling Updates and Rollback

Update Strategies:

  • Recreate: all Pods are killed and new Pods are created. This leads to temporary unavailability. It is useful if you cannot simultaneously run different versions of an application.
  • RollingUpdate: update Pods one at a time. This is the preferred approach.

Under RollingUpdate options:

  • maxUnavailable: the maximum number of Pods that may be unavailable during the update.
  • maxSurge: the number of Pods that can run beyond the desired number of Pods.
$ cat ~/ckad/rolling.yaml

View a YAML manifest file prepared by the author.

4 pods; max is 6 during rolling update; min is 3 during rolling update; nginx:1.8 is an old image so we will do an update to demonstrate the rollingUpdate.
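A sketch of a Deployment that produces that behaviour: 4 replicas, maxSurge 2 (at most 6 Pods during the update) and maxUnavailable 1 (at least 3 Pods stay available). The name and labels are assumptions, not the author's exact file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rolling-nginx
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 1
  selector:
    matchLabels:
      app: rolling-nginx
  template:
    metadata:
      labels:
        app: rolling-nginx
    spec:
      containers:
      - name: nginx
        # deliberately old image so we can roll out an update later
        image: nginx:1.8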
$ kubectl create -f ~/ckad/rolling.yaml

Create the deployment using the YAML manifest.

$ kubectl get deployments
$ kubectl rollout history deployment <the_deployment_name>
$ kubectl edit deployment <the_deployment_name>

Get all deployments.

View deployment rollout history.

Edit deployment to use the latest image.

$ kubectl rollout history deployment <the_deployment_name>
$ kubectl describe deployment <the_deployment_name>
$ kubectl rollout history deployment <the_deployment_name> --revision=2

View the rollout history of a deployment.

Describe the deployment.

View the rollout history of a deployment by version number.

$ kubectl get rs

Get ReplicaSet information.

View ReplicaSet of version 1 and 2.
$ kubectl rollout undo deployment <the_name_of_deployment> --to-revision=1
$ kubectl get rs

Rollback from version 2 to version 1.

After the rollback, we can see the ReplicaSet go from 3 ready to 4 ready.
$ kubectl describe deployment <the_deployment_name>

Verify that we are back to the original image version after the rollback.

7.Lab Managing Deployments

Lesson 8: Managing Networking

8.1 Understanding Pod Access Options

Port forwarding to the operator workstation is convenient for internal access. For external access, always use Ingress.

8.2 Understanding Services

  • Service is an abstraction which defines a logical set of Pods and a policy by which to access them. Put another way, the set of Pods that is targeted by a Service is determined by a Selector (which is a Label).
  • Kube-proxy agent on the nodes watches the Kubernetes API for new services and endpoints.

Service Types:

  • ClusterIP: provides internal access only; the default type.
  • NodePort: allocates a specific node port which needs to be opened on the firewall.
  • LoadBalancer: only works in a public cloud (e.g. AWS, Azure, GCP), because it is implemented by the cloud provider.
  • ExternalName: works on DNS names.
  • Service without selector: used for direct connections based on IP/port without an endpoint selector; useful for connections to databases or between namespaces.

8.3 Creating Services using kubectl expose

$ kubectl get deployment
$ kubectl expose deployment <the_deployment_name> --port=80 --type=NodePort

Expose an existing deployment by name. This command allocates a random node port on all nodes — optionally use --target-port to define the port on the pods that should receive the traffic.

$ kubectl get svc
$ kubectl get all

View all the services.

Notice the TYPE: ClusterIP and NodePort. The node port is 31784 — the randomly allocated, exposed port.
$ kubectl get svc <the_name_of_service> -o yaml

Get the service in YAML output.

8.4 Managing Services Manifest Files

$ cat ~/ckad/service.yaml
$ kubectl create -f ~/ckad/service.yaml
$ kubectl get svc

Create a service using YAML manifest file.
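The author's service.yaml is not reproduced here; a typical NodePort Service manifest looks like this sketch (name, labels, and ports assumed):

apiVersion: v1
kind: Service
metadata:
  name: my-webserver
spec:
  type: NodePort
  # forward to all Pods carrying this label
  selector:
    app: nginx
  ports:
  - port: 80          # the Service port inside the cluster
    targetPort: 80    # the port on the Pods
    nodePort: 30080   # the port opened on every node (30000-32767)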

8.5 Understanding Kubernetes Networking

I’m weak in K8s Networking.

8.6 Understanding Services and DNS

I’m weak in K8s Services and DNS.

Services created with kubectl expose automatically register with the internal K8s DNS.

$ kubectl get svc

Get all the services.

$ kubectl exec -it <the_pod_name> -- nslookup <the_service_name>
$ kubectl exec -it busybox-and-nginx-23-aug -- nslookup mywebserver

Use a pod to look up the service DNS name. If the name cannot be resolved, we need to troubleshoot.

8.7 Understanding Network Policies

  • By default, all pods can reach one another.
  • Network isolation can be configured to block traffic to Pods by running Pods in dedicated namespaces.
  • NetworkPolicy can be used to block egress as well as ingress traffic — works like a firewall.
$ kubectl explain NetworkPolicy
$ kubectl explain NetworkPolicy.spec

Read the NetworkPolicy docs. Furthermore, get the apiVersion.

$ cat ~/ckad/pods-with-nw-policy.yaml

View the YAML manifest prepared by the author.

This says that web app can access database app via the NetworkPolicy.
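A comparable sketch of a policy that only lets pods labelled as the web app reach the database pods (the labels and port are assumptions):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
spec:
  # the policy applies to the database pods
  podSelector:
    matchLabels:
      app: database
  policyTypes:
  - Ingress
  ingress:
  - from:
    # only traffic from pods labelled app=web is allowed in
    - podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 3306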
$ kubectl create -f ~/ckad/pods-with-nw-policy.yaml
$ kubectl get networkpolicy

Create the network policy using the YAML manifest file.

View the networkpolicy.

$ kubectl get networkpolicy <the_networkpolicy_name> -o yaml

Get a networkpolicy by name in YAML format.

8.Lab: Managing Services

Lesson 9: Managing Ingress

9.1 Understanding Ingress

  • Ingress exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster.
  • Traffic routing is controlled by rules defined on the Ingress resource.
  • Ingress can be configured to do the following: (1) give Services externally reachable URLs, (2) load balance traffic, (3) terminate SSL/TLS, (4) offer name-based virtual hosting, etc.
  • Ingress controller is required for Ingress to work.
  • IMPORTANT: Creating Ingress resources without Ingress controller has no effect.
  • Many Ingress controllers are available: (1) nginx, (2) haproxy, (3) traefik, (4) contour, (5) kong, etc.

9.2 Configuring Ingress

I’m weak in K8s Ingress.

Source: https://kubernetes.io/docs/concepts/services-networking/ingress/

Copy minimal-ingress.yaml from Kubernetes docs.

$ vim ~/ckad/minimal-ingress.yaml
<paste the copied content>
$ cat ~/ckad/minimal-ingress.yaml

Open in vim and paste the content.

View the YAML manifest file.
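For reference, a minimal Ingress along the lines of the Kubernetes docs example, pointed at the nginx Service created in the next step, might look like this (the path and pathType are my assumptions, and an ingress controller must be installed for it to have any effect):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80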

$ kubectl create deployment nginx --image=nginx
$ kubectl get deployment
$ kubectl scale deployment nginx --replicas=3
$ kubectl get deployment
$ kubectl expose deployment nginx --type=NodePort --port=80
$ kubectl get svc
$ kubectl create -f ~/ckad/minimal-ingress.yaml
$ kubectl get ingress

Create Deployment → Scale Deployment → Expose Deployment as Service → Create Ingress.

9.3 Configuring Ingress Rules

I’m weak in K8s Ingress Rules.

9.Lab: Using Ingress

Lesson 10: Managing Storage

10.1 Understanding Kubernetes Storage Options

10.2 Configuring Volume Storage

Decide between a Pod-local Volume and a Persistent Volume.

This illustrates how two containers share one volume in a pod.

Volume Types:

  • emptyDir: creates a temporary directory on the host
  • hostPath: use for persistent storage
  • azureDisk: Azure cloud storage
  • awsElasticBlockStore: AWS cloud storage
  • gcePersistentDisk: Google cloud storage
  • gitrepo: Git repo
  • and many more. See more using kubectl explain pod.spec.volumes
$ cat ~/ckad/morevolumes.yaml
$ kubectl create -f ~/ckad/morevolumes.yaml
$ kubectl get pods
$ kubectl describe pods <name_of_the_pod>

Create the pod using YAML manifest file.

View all pods.

Describe the pod.

The name of the pod is morevol2
$ kubectl exec -it <name_of_the_pod> -c <name_of_the_container_inside_the_pod> -- ls -l
$ kubectl exec -it morevol2 -c centos1 -- ls -l
$ kubectl exec -it morevol2 -c centos2 -- ls -l

View the folders of a container in the pod.

$ kubectl exec -it morevol2 -c centos1 -- touch /centos1/testfilein1
$ kubectl exec -it morevol2 -c centos2 -- ls -l /centos2

Create a file named testfilein1 in the centos1 container.

View the file testfilein1 from the centos2 container.

10.3 Configuring PV Storage

$ cat ~/ckad/pv.yaml
$ kubectl create -f ~/ckad/pv.yaml
$ kubectl get pv
$ kubectl get pv <name_of_the_persistent_volume> -o yaml

View the YAML manifest file prepared by the author.

Create the persistent volume using YAML manifest file.

Get all persistent volume.

Get a persistent volume by name in YAML format.

10.4 Configuring PVCs

$ cat ~/ckad/pvc.yaml
$ kubectl create -f ~/ckad/pvc.yaml
$ kubectl get pvc
$ kubectl get pvc <name_of_the_pvc> -o yaml
$ kubectl get pv

View the YAML manifest file prepared by the author for PersistentVolumeClaim.

Create the PVC using YAML manifest file.

Get all the PVCs.

Get a PVC by name in YAML format.

Get all the PVs; the output also shows which claim each PV is bound to.

$ cat ~/ckad/pv-pod.yaml
$ kubectl create -f ~/ckad/pv-pod.yaml

View the YAML manifest file prepared by the author for linking Pod local volume to PVC to PV.

Create the pod using the YAML manifest file.
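A sketch of how the chain Pod volume → PVC → PV is typically wired up in the pod manifest (the names and mount path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pv-pod
spec:
  volumes:
  # the pod volume refers to a claim, not to the PV directly
  - name: pv-storage
    persistentVolumeClaim:
      claimName: pv-claim
  containers:
  - name: pv-container
    image: nginx
    volumeMounts:
    - name: pv-storage
      mountPath: /usr/share/nginx/html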

10.5 Configuring Pod Storage with PV and PVC

Lesson 11: Managing ConfigMaps and Secrets

11.1 Understanding ConfigMaps

ConfigMaps and Secrets are special types of volumes.

11.2 Creating ConfigMaps for Variables

ConfigMaps are declared or set in the K8s cluster, and are used within Pods.

envFrom:
- configMapRef:
    name: ConfigMapName

Show how to use configMap in YAML manifest file.

$ kubectl create cm jekspecial --from-literal=VAR1=pink --from-literal=VAR2=blue
$ kubectl get cm

Create ConfigMap from literal.

Get all ConfigMap.

$ cat ~/ckad/variables
$ kubectl create cm jekvariables --from-file=./ckad/variables
$ kubectl get cm
$ kubectl describe cm jekvariables

View variables prepared by the author.

Create ConfigMap from file.

Get all ConfigMap.

Describe the ConfigMap variables.

$ kubectl create cm jekvariables --from-file=./ckad/variables -o yaml --dry-run=client

View the YAML output if we were to use YAML manifest for creating the ConfigMap.

$ cat ~/ckad/cm-test-pod.yaml
$ kubectl create -f ~/ckad/cm-test-pod.yaml

View the YAML manifest prepared by the author.

Create the pod using YAML manifest.

This is how we refer to ConfigMap in a pod YAML manifest.

11.3 Creating ConfigMaps for ConfigFiles

$ cat ~/ckad/nginx-custom-config.conf
$ kubectl create cm <name_of_the_configmap> --from-file=./ckad/nginx-custom-config.conf
$ kubectl get cm
$ kubectl get cm <name_of_the_configmap> -o yaml

View the .conf file prepared by the author.

Create a ConfigMap from the .conf file.

Get all ConfigMap.

Get the specific ConfigMap in YAML output.

$ cat ~/ckad/nginx-cm.yml

View the pod YAML manifest prepared by the author.

$ kubectl create -f ~/ckad/nginx-cm.yml

Create the pod from the YAML manifest.

$ kubectl get pods
$ kubectl exec -it nginx-cm -- /bin/bash
<inside the /bin/bash shell>
# cat /etc/nginx/conf.d/default.conf

Get all pods.

Enter interactive terminal of a pod named nginx-cm.

View the config file from within the pod named nginx-cm.

We can see that the ConfigMap is available within the Pod named nginx-cm.

Container → Volume → ConfigMap; hence it is mountPath + path for access as seen in the command above.

11.4 Understanding Secrets

  • Secrets are used by Pods the same way ConfigMaps are used.
  • Secrets are NOT encrypted. Secrets are only base64 encoded.

There are three types of Secrets:

  • docker-registry: used for connecting to Docker registry.
  • TLS: creates a TLS secret
  • generic: creates a Secret from a local file, directory, or literal value.
$ kubectl create -f busybox-ready.yaml
$ kubectl get pods
$ kubectl describe pods <name_of_the_pod>

Describe a pod to show the Secret (from the default ServiceAccount) that Kubernetes mounts for accessing the API.

11.5 Creating Secrets

$ ssh-keygen

Generate SSH keypair.

<generic is the type; there are three types>
<it needs an absolute path from root instead of a relative path>
$ kubectl create secret generic jek-secret --from-file=jek-ssh-privatekey=/home/jek_bao_choo/.ssh/id_rsa --from-literal=passphrase=password

Create a generic secret from file and from literal.

$ kubectl get secret
$ kubectl get secret <name_of_the_secret> -o yaml

View all secrets.

Get a secret by name in YAML format.

$ echo -n 'hello-world' | base64
$ echo -n 'hello-world' | base64 > hello-world-secret.yaml

Encode hello-world in base64.

Output the encoded hello-world to a file.

$ echo -n 'jek' | base64 >> hello-world-secret.yaml

Append to a file using double arrow.

$ echo aGVsbG8td29ybGQ= | base64 -d

Decode the encoded hello-world in base64.

11.6 Configuring Pods to Use Secrets

Secrets are used by Pods in two ways:

  • As environment variables
  • Mounted as volumes
$ cat ~/ckad/pod-secret.yaml
$ kubectl create secret generic secretstuff --from-literal=user=linda
$ kubectl get secret
$ kubectl create -f ~/ckad/pod-secret.yaml
$ kubectl get pods
$ kubectl describe pods secretbox2
$ kubectl exec -it secretbox2 -- /bin/sh
<inside the interactive terminal of the pod>
# cat /secretstuff/user

View the YAML manifest file prepared by the author.

Create secret as generic type.

Get all secrets.

Create a pod using the YAML manifest file prepared by the author.

Describe the pod to see that the secret is mounted.

Enter the pod using interactive terminal to view the secret.

$ kubectl create secret generic mysql --from-literal=password=root
$ cat ~/ckad/pod-secret-as-var.yaml
$ kubectl create -f ~/ckad/pod-secret-as-var.yaml
$ kubectl get pod
$ kubectl exec -it mymysql -- /bin/sh
<inside the interactive terminal>
# env

Create a generic secret named mysql where the key is password and the value is root.

View a YAML manifest file prepared by the author for pod to read from environment variables.

Create a pod using the YAML manifest file prepared by the author.

Get all pods.

Enter the pod using interactive terminal to view the environment variables.
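A sketch of the environment-variable approach used here: the value is read from the mysql secret created above (key password); the pod name comes from the walkthrough, while the variable name is an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: mymysql
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        # pull the value out of the Secret at pod start
        secretKeyRef:
          name: mysql
          key: password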

Lesson 12: Troubleshooting Kubernetes

12.1 Troubleshooting Steps

  • Use kubectl logs <name_of_the_pod> to see what a container in a Pod is doing. However, logging across all Nodes is not part of Kubernetes — it is provided by tools such as Fluentd and Prometheus.
  • Use kubectl describe pod <name_of_the_pod> to see the Pod state.

12.2 Using Probes

  • Probes are a part of container spec and can be used to test access to Pods.
  • Probe is a test to verify that the application is reacting.
  • readinessProbe is used to check that a pod is ready for access before publishing it.
  • livenessProbe continually checks the availability of a pod.

Types of Probes:

  • exec: connectivity via command execution.
  • httpGet: connectivity via HTTP request.
  • tcpSocket: connectivity to a TCP socket.
$ cat ~/ckad/busybox-ready.yaml

View a YAML manifest prepared by the author to illustrate readinessProbe.

Check every 10 seconds for readiness.
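The relevant probe section of such a manifest, checking every 10 seconds, looks roughly like this (the probed file path and command are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-ready
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      # the container is marked Ready only when this command succeeds
      exec:
        command: ["cat", "/tmp/ready"]
      initialDelaySeconds: 5
      periodSeconds: 10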
$ kubectl create -f ~/ckad/busybox-ready.yaml ; watch kubectl get pods

Create pod and watch all pods. The semi-colon allows us to execute one command immediately after another.

$ kubectl describe pod <name_of_the_pod>

Describe the pod to see if any issue causing it to fail.

$ cat ~/ckad/nginx-probes.yaml
$ kubectl describe pod <name_of_the_pod>

View another YAML manifest prepared by the author to illustrate readinessProbe and livenessProbe.

Describe the pod to view the readinessProbe and livenessProbe.

12.3 Monitoring Applications in Kubernetes

  • The goal of monitoring is to collect metrics about infrastructure.
  • Kubernetes monitoring is done by other tools.
  • Prometheus is available as a Kubernetes plugin which provides cross cluster usage metrics.
  • Heapster is part of Kubernetes and can be used for monitoring multiple pods.
  • Fluentd helps to aggregate logs.
$ kubectl config current-context

View the current cluster.

$ kubectl config view

View the configuration file.

$ kubectl get all -o wide

View all in the K8s cluster in wide output.

12.4 Troubleshooting the Minikube Host

$ free -m

View the free memory.

$ kubectl delete all --all

Delete all.

Lesson 13: Using Service Accounts

13.1 Understanding Service Accounts

Every Pod uses the Default ServiceAccount to contact the API server. Default ServiceAccount allows a resource to get information from the API server, but not much more.

$ kubectl get sa
$ kubectl get sa -A
$ kubectl get sa <name_of_the_service_account> -o yaml

Get all ServiceAccounts.

Get ServiceAccounts in all namespaces, including kube-system.

Get a ServiceAccount in YAML output.

$ kubectl get secret default-token-l9g9w -o yaml

Get a Secret that belongs to a ServiceAccount, in YAML output. Note: ServiceAccounts leverage Secrets. Hence, ServiceAccount → Secrets.

13.2 Managing Service Accounts

$ kubectl create serviceaccount jek-service-account
$ kubectl get sa
$ kubectl get sa <name_of_service_account> -o yaml

Create service account.

Get all service account.

Get a service account in YAML output.

13.3 Configuring Pods to Use Service Accounts

$ cat ~/ckad/mypod.yaml
$ kubectl create -f ~/ckad/mypod.yaml
$ kubectl get pods mypod -o yaml
$ kubectl exec -it mypod -- sh
<inside mypod interactive terminal>
# apk add --update curl
# curl https://kubernetes/api/v1 --insecure

Create a basic pod. It will come with default serviceaccount.

Enter the pod via interactive terminal

Install curl tool in the pod.

Access the K8s cluster API from the pod. It will be forbidden.

Without the token, the Kubernetes API can NOT be accessed from a pod using the default ServiceAccount.
<inside mypod interactive terminal>
# HELLOTOKEN=$(cat /run/secrets/kubernetes.io/serviceaccount/token)
# echo $HELLOTOKEN
# curl -H "Authorization: Bearer $HELLOTOKEN" https://kubernetes/api/v1 --insecure

Access the k8s cluster API from the pod using the default service account token. It will allow access to the K8s cluster API.

<inside mypod interactive terminal>
# curl -H "Authorization: Bearer $HELLOTOKEN" https://kubernetes/api/v1/namespaces/default/pods --insecure

Access another K8s API endpoint from the pod. This one will be forbidden, because the default service account token only allows certain API access.

The default service account token does NOT allow access to this API endpoint.
$ cat ~/ckad/mysa.yaml
$ kubectl create -f ~/ckad/mysa.yaml
$ kubectl get sa
$ cat ~/ckad/list-pods.yaml
$ kubectl create -f ~/ckad/list-pods.yaml
$ kubectl get roles
$ cat ~/ckad/list-pods-mysa-binding.yaml
$ kubectl create -f ~/ckad/list-pods-mysa-binding.yaml
$ kubectl get rolebindings

Create a service account and a role, and bind them together. After the role binding is done, add the service account to the pod; the pod should then have the specific access it needs. Put another way, ServiceAccount ← RoleBinding → Role.

Section 1: Introduction

Hello Kubernetes :)

Section 2: Core Concepts

Kubernetes Architecture

  • A node is a physical or a virtual machine.
  • A cluster is a set of nodes. This is important for High Availability architecture because a single node could fail.
Source: https://kubernetes.io/docs/concepts/overview/components/

PODs

  • One pod encapsulates one application container (OKAY).
  • One pod encapsulates multiple application containers (NOT OKAY).
  • One pod encapsulates one application container with related helper containers (OKAY). Because they can share network and storage.
  • A node contains one or many pods.

ReplicaSets

<#1 Edit the YAML file from n replicas to t replicas>
$ kubectl replace -f <YAML_FILE>
<#2 or without changing the YAML file>
$ kubectl scale --replicas=6 -f <YAML_FILE>
<#3 or scale the replicaset after it is created using the replicaset type>
$ kubectl get replicaset
$ kubectl scale --replicas=6 replicaset <NAME_OF_REPLICA>

Illustrates three options to scale a ReplicaSet.

Deployments

A Deployment provides a strategy to upgrade instances when new build versions of an application are available.

Deployment → ReplicaSet → Pod

Namespaces

Namespaces can be used to separate a DEV environment from a PROD environment.

$ kubectl create namespace dev

Create a namespace called dev.

$ kubectl create -f <YAML_file> --namespace=dev

Create a resource from the YAML file in the dev namespace. Alternatively, put the namespace attribute in the YAML file itself.

$ kubectl get pods --all-namespaces

Get pods in all namespaces.

apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-medium
  namespace: dev
spec:
  hard:
    cpu: "10"
    memory: 20Gi
    pods: "10"

Set resource quota for a specific namespace.

$ kubectl run httpd --image=httpd:alpine --port=80 --expose

Create a pod called httpd using the image httpd:alpine in the default namespace. Next, create a service of type ClusterIP by the same name (httpd). The target port for the service should be 80.

Section 3: Configuration

Commands and Arguments in Kubernetes

Docker and Kubernetes name the command and argument fields differently.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Docker’s Entrypoint == Kubernetes’ command

Docker’s CMD == Kubernetes’s args

Editing PODs and Deployments

CANNOT edit specifications of an existing POD other than the below:

  • spec.containers[*].image
  • spec.initContainers[*].image
  • spec.activeDeadlineSeconds
  • spec.tolerations

Hence, there are two ways to edit:

  1. Edit the pod → Use the /tmp/…yaml → Delete the pod → Create using the /tmp/…yaml
  2. Output the pod in .yaml → Delete the pod → Create using the .yaml

Problem: CANNOT edit running pods directly other than the above four specified fields.

Solution: Use Deployments that will automatically rollout new Pods for all the changes we make e.g. to Service Account and etc.

Environment Variables

There are three ways to define environment variables:

  1. Plain key value (enter directly into POD yaml file)
  2. ConfigMap (create ConfigMap and inject into POD yaml file)
  3. Secret (create Secret and inject into POD yaml file)

ConfigMaps

There are three ways to inject ConfigMap into Pod.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Secrets

There are three ways to inject Secrets into Pod.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

The below illustrates how a secret is used in a K8s cluster architecture.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Security Contexts

  • Pod’s Security Context → Container’s Security Context
  • Container’s Security Context → Pod’s Security Context
  • If both Pod’s and Container’s Security Contexts exist, then Container’s Security Context → Pod’s Security Context.
Source: https://www.udemy.com/course/certified-kubernetes-application-developer

The above image shows Pod’s Security Context.

The below image shows Container’s Security Context.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Service Accounts

Creating a Service Account, also creates a Secret. This is the concept of Object Oriented Programming. Hence, Service Account → Secret.

Resource Requirements

For instance, we have three nodes. Each node has CPU, Memory (a.k.a. RAM), and Disk (a.k.a. storage space). The Kubernetes Scheduler decides which pod goes to which node; one of the factors is resource availability.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

To set a default limit of, say, 0.5 CPU and 256Mi memory per Pod, we need to create a LimitRange in that namespace.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer
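A sketch of such a LimitRange with the 0.5 CPU / 256Mi defaults mentioned above (the name and namespace are illustrative):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: dev
spec:
  limits:
  - type: Container
    # applied when a container specifies no limits
    default:
      cpu: "0.5"
      memory: 256Mi
    # applied when a container specifies no requests
    defaultRequest:
      cpu: "0.5"
      memory: 256Mi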

If a Pod tries to consume more CPU than the CPU limit, it will throttle the Pod. On the other hand, if a Pod tries to consume more Memory than the Memory limit, it will terminate the Pod.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Taints and Tolerations

Taints and Tolerations are used to decide what Pod can or cannot be placed on a Node. Taints are set on Node while Tolerations are set on Pod.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer
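A sketch of the two sides: a taint set on a node with kubectl, and the matching toleration in a Pod spec (the key/value pair and node name are illustrative):

$ kubectl taint nodes node1 app=blue:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: nginx-tolerant
spec:
  containers:
  - name: nginx
    image: nginx
  # without this toleration the pod cannot be scheduled on node1
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"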
$ kubectl get nodes
$ kubectl describe node <name_of_the_master_node_eg_controlplane> | grep Taint

Describe the master node, filtering for Taint. We'll see that the master node has a NoSchedule taint.

Node Selector and Node Affinity

Node Selector and Node Affinity are two approaches to placing a Pod in a Node.

Node Selector has limitations, e.g. it cannot express a rule like "place the pod on a node of size XXL or size L" (no OR/NOT logic).

Taints & Tolerations vs Node Affinity

Think of these as a combination that works well together.

Section 4: Multi-Container PODs

There are mainly three types of helper containers.

  1. Sidecar Container: enables e.g. logging of application container
  2. Adapter Container: transforms e.g. log output to a standard format
  3. Ambassador Container: routes e.g. log output to Dev, Staging, or Production database

In a Pod, we could have an Application Container, a Sidecar Container, an Adapter Container, and an Ambassador Container.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

In the above image, App Pod (App Container + Sidecar Container) → ElasticSearch Pod → Kibana Pod ← User

Section 5: Observability

Readiness and Liveness Probes

readinessProbe and livenessProbe test:

  • API readiness/liveness: using HTTP test
  • Database readiness/liveness: using a TCP test
  • Exec Command readiness/liveness: using custom script to test.

K8s Service will forward traffic to a Pod only if the readinessProbe says it is ready. This is the purpose of a readinessProbe. Because we do NOT want our users to access an API that is not yet ready for use.

Why is readinessProbe useful? Because we want to serve traffic to a Pod only if it is ready to receive.

  • readinessProbe is used to let Service knows when to serve traffic to a Pod.
  • livenessProbe is used to restart unhealthy containers by helping application recover from a deadlock situation.
  • startupProbe is similar to readinessProbe but is used for slow-starting containers or applications with unpredictable initialization processes.

Monitor and Debug Applications

  • Heapster for monitoring is deprecated. A slimmed-down version of it, named Metrics Server, took its place.
  • Metrics Server is an in-memory monitoring solution. It does not store metrics in a disk.
  • Kubelet is an agent in each node which is responsible for receiving instructions.
  • cAdvisor (a.k.a. Container Advisor) inside the Kubelet is responsible for retrieving performance from the pods and make it available through the Kubelet APIs.

Kubernetes Metrics Server is a standalone; hence separate installation is required.

$ kubectl top node
$ kubectl top pod

View the CPU and memory consumption of each node.

View the CPU and memory consumption of each pod.

Note: The above 2 commands will ONLY work if we have Kubernetes Metrics Server installed.

Section 6: POD Design

Labels, Selectors, and Annotations

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Labels and Selectors work hand-in-hand. Labels are from the perspective of Pods while Selectors are from the perspective of all other Kinds.

Annotations are for indicating build versions, etc.

Rolling Updates & Rollbacks in Deployments

There are several strategies for Deployments:

  1. Recreate strategy: Delete all older version Pods then Create all newer version Pods. All at once.
  2. Rolling Update strategy: Delete one older version Pod then Create one newer version Pod. One at a time → this is the default strategy.

Rolling Update strategy will create another ReplicaSet. The old ReplicaSet will see deletion of Pods, one by one. The new ReplicaSet will see creation of Pods, one by one.

For this section, please refer to Kubernetes cheatsheet for all the kubectl commands.

Updating a Deployment

$ kubectl create ... --record=true

Record the command executed in the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection.

Jobs

  • Job Pods are created one after another if we set completions: 3, until 3 Job Pods have completed successfully.
  • Job Pods can also be created in parallel. If we set completions: 3 and parallelism: 3, we create 3 Job Pods all at once.

Section 7: Services & Networking

Services

NodePort refers to a Node listening on a specific port and forwarding the traffic to a Service.

The above image shows that:

  • Service’s NodePort is 30008.
  • Node’s IP address is 192.168.1.2
  • Computer’s IP address is 192.168.1.10
  • Pod’s cluster IP address is 10.244.0.2
  • K8s cluster IP address is 10.244.0.0

Hence, for the computer to access the Pod through the Service, we go through:

$ curl http://192.168.1.2:30008

Access the Pod externally with curl.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Besides NodePort, the above image shows several other Services Types.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

When creating a Service YAML manifest, we see targetPort, port, and nodePort from the perspective of the Service. Hence, from Service’s perspective, a targetPort refers to the port on the Pod.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

NodePort is NOT load balancing. Although a Service of type NodePort distributes traffic to several Pods in a Node (as seen above), it is NOT a load-balancing service type. Furthermore, it's uncommon to have multiple Application Pods on a single Node.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

The above image shows that if we create a Service with the same label as three Pods, the Service will expose the nodePort on all three Nodes allowing access.

Ingress Networking

The below image shows that Ingress is used for routing requests to Services, based on host and path rules.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

The ingress layer (above image) requires two elements:

  • Ingress Controller (has to be installed manually, or is installed for us if we use a cloud service provider)
  • Ingress Resource

There are several Ingress Controllers, such as the GCP HTTP(S) Load Balancer and the NGINX Ingress Controller, to name a few.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

In the Ingress Resource there are rules and paths.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

The above example shows how we set rules for subdomains and paths for path routing.

Note: The ingress resource needs to be in the same namespace as the Services’ namespaces.


Network Policies

The below image shows (podSelector AND namespaceSelector) OR ipBlock. For instance, access is allowed for an api-pod in the prod environment, OR from IP address 192.168.5.10.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

The below image, in contrast, shows podSelector OR namespaceSelector OR ipBlock (three separate rules). In this case, all of them can access — a case of misconfiguration.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Note: NetworkPolicy is applied to Pod.

Section 8: State Persistence

Volumes

The below image shows that volumes are on the Node, while volumeMounts are in the Container.

volumeMounts (Container’s ephemeral storage) → volumes (Node’s storage)

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

The volumes (Node’s storage) can be AWS EBS, GCP, and other public cloud providers.

Persistent Volumes

The below image shows that we could have a large PersistentVolumes (PV) that is centrally managed. Each Pod uses one part of the PersistentVolume (PV) through PersistentVolumeClaim (PVC).

The advantage over a plain Volume is that instead of each Node managing its own volume, we have a centrally managed volume, the PersistentVolume, which is persistent compared to a plain Volume.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Persistent Volume Claim

Each single PVC is bound to a single PV only.

Section 9: Updates for Sep 2021 Changes

Authentication Mechanisms

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

There are a few ways to register users with an authentication mechanism.

  1. A Static Password File is not secure.
  2. A Static Token File is not secure.
  3. Certificates are secure.
  4. Identity Services (such as LDAP and Kerberos) are secure.

Authentication Mechanisms: Certificates

$ ls /etc/kubernetes/manifests
$ cat /etc/kubernetes/manifests/kube-apiserver.yaml

View the manifests files of K8s in the host machine.

Kubeconfig

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Clusters + Users = Contexts

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Define a KubeConfig file to set clusters + users = contexts. The KubeConfig file gives kubectl access to K8s clusters as a particular user.

API Groups

$ kubectl proxy

Start a proxy server. This proxy server allows the user to access the Kube ApiServer using kubectl's credentials.

The kubectl proxy server is at localhost:8001 while the Kube ApiServer is at localhost:6443. Source: https://www.udemy.com/course/certified-kubernetes-application-developer
$ curl https://localhost:6443 -k

Returns forbidden because we are trying to access the Kube ApiServer directly without credentials.

$ curl http://localhost:8001 -k

Display a list of available API endpoints, read from the Kube ApiServer through kubectl proxy.

Some of the common API endpoints. Source: https://www.udemy.com/course/certified-kubernetes-application-developer
The /apis endpoint is categorized into these groups. Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Authorization Mode

  1. Always Allow
  2. Always Deny
  3. Node
  4. ABAC
  5. RBAC
  6. Webhook

RBAC of Namespace Role & Role Bindings

$ kubectl get roles

View all the roles.

$ kubectl get rolebindings

View all role bindings.

$ kubectl describe role <name of the role>

Describe the role.

User ← role binding → Role

The relationship between a user and a role is established through a role binding. Do take note that a role and a role binding are created in a namespace and work for that particular namespace.

Check Access

$ kubectl auth can-i create deployments
$ kubectl auth can-i delete nodes

Check access with can-i command.

RBAC of Cluster Role and Cluster Role Bindings

Namespace Role is different from Cluster Role.

Cluster Roles are for cluster-scoped resources, such as access control to nodes, while namespaced Roles are for pods, deployments, etc. Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Admission Controllers

When we run a kubectl command to create a pod, the request goes to the Kube ApiServer, which then creates the pod.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Under the hood, the Kube ApiServer runs Authentication, Authorization, and the Admission Controllers.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

So if we want to create a pod in a namespace that does not yet exist, it will throw an error, because the Admission Controllers include NamespaceExists.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

If we add NamespaceAutoProvision to the Admission Controllers, we can create a pod in a namespace that does not yet exist.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

To view the admission controllers settings we run the below command.

$ kubectl exec pod/kube-apiserver-controlplane -n kube-system -- kube-apiserver -h | grep enable-admission-plugins

View the admission controller plugins.

Alternatively, we can find the information in kube-apiserver.yaml residing in

$ cat /etc/kubernetes/manifests/kube-apiserver.yaml

View the kube-apiserver yaml file.

$ ps -ef | grep kube-apiserver | grep admission-plugins

Since the kube-apiserver is running as a pod, you can check the process to see the enabled and disabled plugins.

Admission Controller (Mutating vs. Validating)

Helm

If we only want to download a Helm Chart without installing it

$ helm pull --untar bitnami/wordpress 

Download the wordpress helm chart from bitnami without installing it. The --untar option extracts the download, which otherwise arrives as a tar archive. We don't install right away because sometimes we want to make changes to the values.yaml file first.

$ helm install <release_name> ./wordpress

Install the helm chart from the local directory with the locally changed values.yaml.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

More about helm chart usage at https://jek-bao-choo.medium.com/my-basic-understanding-and-usage-of-helm-chart-11b37a3afde4

Kubectl Convert — from deprecated API to active API

Use kubectl convert. The plugin needs to be downloaded separately. Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Instruction to install at https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/#install-kubectl-convert-plugin

Custom Resource Definition

We have standard resource and standard controller. For instance deployment, pod, service, and many more are standard.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

We could also create custom resources and custom controller through CustomResourceDefinition.

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

This is an example of CustomResourceDefinition https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#create-a-customresourcedefinition.

Custom Controller

To build a custom controller we need to use Go. Write a custom controller starting from the sample code at https://github.com/kubernetes/sample-controller.git

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Operator Framework

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. https://kubernetes.io/docs/concepts/extend-kubernetes/operator/

There are a number of open source operators available at operatorhub.io/

Deployment

Deployment has two built-in strategy options: Recreate and RollingUpdate.

  • Recreate brings down all old-version Pods and brings up all new-version Pods at once.
  • RollingUpdate brings down one old-version Pod and brings up one new-version Pod at a time.
Source: https://www.udemy.com/course/certified-kubernetes-application-developer

We could also implement our own deployment strategy instead of using Recreate or RollingUpdate.

Deployment Strategy — Blue/Green

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

For instance, keep version 1 of deployment as blue. Deploy version 2 as green. Switch traffic using service from version 1 to version 2.

A solution like Istio can help with the Blue/Green deployment strategy.

Deployment Strategy — Canary

Source: https://www.udemy.com/course/certified-kubernetes-application-developer

For instance, the Canary deployment strategy routes traffic to both versions, with only a small percentage of traffic going to Version 2. If version 2 looks good, a Deployment Rolling Update brings the remaining Pods down and up one by one.

A solution like Istio can help with the Canary deployment strategy.

Killer.Sh

When practising Killer.SH I learned various techniques.

Option 1

- How to use Pod to access another Pod via curl <ip>:<containerPort>?

For example, using nginx:alpine and curl to check if one Pod is accessible on port 80 or 8080 etc.

$ kubectl run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 -v <pod ip address>:<pod port number>
$ kubectl run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 -v 10.1.0.39:8080
  • kubectl run tmp: run a pod named tmp.
  • --restart=Never: do not restart it if it exits.
  • --rm: delete the pod after it exits. Only valid when attaching to the container, e.g. with --attach or with -i/--stdin.
  • -i: keep stdin open on the container in the pod, even if nothing is attached.
  • --image: the image to use.
  • custom arguments: -- <arg1> <arg2> ... <argN>
  • curl -m: curl with a max timeout of <seconds>.
  • curl -v: curl with verbose output.
The outcome will look like this.

- How to get a Pod’s ip?

$ kubectl get pod -o wide

Get pods ip address under IP column.

- How to get a Pod’s port?

$ kubectl get pod/<pod name> -o yaml | grep containerPort

Get a pod detail in yaml output. After which, focus on containerPort. It is available under spec > containers > ports > containerPort.

It is important to understand the differences between:

  • nodePort: The port on the node where external traffic will come in on
  • port: The port of this service
  • targetPort: the target port on the pod(s) to forward traffic to

Option 2

- Expose a service

With a service, we can reach a pod via the internal cluster IP or an external IP.

$ kubectl run tmp --image=nginx:alpine --rm -i --restart=Never -- curl -m 5 -v <the service cluster ip or the service name>:<the port - neither NodePort nor targetPort, it is just port>
$ kubectl run tmp --image=nginx:alpine --rm -i --restart=Never -- curl -m 5 -v 10.108.227.197:80
$ kubectl run tmp --image=nginx:alpine --rm -i --restart=Never -- curl -m 5 -v jek-nginx-v3:80

This is accessing the service internally.

$ curl -m 5 -v <external ip>:<the port>
$ curl -m 5 -v localhost:80

This is accessing the service externally.

$ kubectl get nodes -o wide
$ curl -m 5 -v <node internal ip>:<the node port>
$ curl -m 5 -v 192.168.65.4:32186

This is accessing the service from the node's internal IP, because a NodePort Service kind of lies on top of a ClusterIP one, making the ClusterIP Service reachable on the Node IPs (internal and external). This is at least the common/default behaviour but can depend on cluster configuration.

Use service name for invocation

Source courtesy: https://killer.sh/attendee/402cd0ac-7f93-45c7-9eb9-5e71420f6870/content

For example, for the service/manager-api-svc, we could call manager-api-svc:4444 with the port number directly.

Source courtesy: https://killer.sh/attendee/402cd0ac-7f93-45c7-9eb9-5e71420f6870/content

We could also call a service with <svc name>.<namespace>:<port>.

Additional: Daemonset

My goal in learning CKAD is my current job at Splunk, specialising in Observability. It is worth understanding how the Splunk OpenTelemetry Collector (Agent) works as a Kubernetes DaemonSet.

THE END
