
CKA Exam Task: DaemonSet

Kubernetes administrators often use DaemonSets, so there’s a good chance you’ll have to create one in your CKA (Certified Kubernetes Administrator) exam. This post shows how to create a DaemonSet, based on an example CKA exam task.

Understanding DaemonSet

In Kubernetes, a DaemonSet is a workload resource that ensures all (or selected) nodes of a Kubernetes cluster run a replica of a specific Pod. A DaemonSet automatically creates a Pod on each node as it joins the cluster. When a node is removed from the cluster, Kubernetes garbage collection removes its Pod.

You’ll most likely see DaemonSets used for:

  • Log collection and aggregation
  • Metrics collection and aggregation
  • Resource monitoring on a node
  • Security monitoring on a node

All these use cases have one thing in common: you want to run exactly one Pod per selected node.

Now that we know what DaemonSets are, let’s delve into an example task for the CKA exam.
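Most clusters already run DaemonSets out of the box; kube-proxy, for example, is typically deployed as one. You can list them to get a feel for the resource (the exact output depends on your cluster setup):

```shell
# List all DaemonSets across all namespaces; on a typical cluster you'll
# at least see kube-proxy and, depending on the setup, a CNI plugin
kubectl get daemonsets --all-namespaces
```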

Exam Task: Create a DaemonSet

Create a DaemonSet called my-daemon in the namespace moon. Use the image busybox running the shell command while true; do echo Hohoho; sleep 1; done;. Ensure the DaemonSet runs Pods on every node, including the control plane nodes. Use the label app=my-daemon to match Pods to the DaemonSet.

Prerequisites

  • A multi-node Kubernetes cluster (I’m using minikube with minikube start -n 3)
  • kubectl installed and configured to access the Kubernetes cluster with administrator rights
  • Run kubectl create ns moon to prepare this DaemonSet exercise task
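Put together, the preparation steps from the list above look like this (assuming minikube is installed):

```shell
# Start a 3-node minikube cluster (one control plane node, two workers)
minikube start -n 3

# Create the namespace the exam task expects
kubectl create ns moon

# Confirm all three nodes are Ready before continuing
kubectl get nodes
```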

Create a DaemonSet

The kubectl CLI doesn’t provide a command to generate a boilerplate manifest for a DaemonSet (there is no kubectl create daemonset). The fastest way to get started is to copy an example DaemonSet manifest from the Kubernetes documentation.

We’ll start with the following YAML manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

First, we remove all fields we don’t need from this boilerplate. It should look like this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
🤓: We didn't remove the tolerations because the task description says the DaemonSet should also run Pods on the control plane. The tolerations are what we need in this case.
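If you want to check which taints those tolerations are up against, you can print them per node. Note that minikube does not taint its control plane node by default, so the taints column may be empty there; on a kubeadm cluster you would typically see node-role.kubernetes.io/control-plane:NoSchedule:

```shell
# Print each node's name and its taints (empty for untainted nodes)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
```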

Then, we modify the fields of the DaemonSet and its Pod template to match the task description, which gives us this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemon
  namespace: moon
  labels:
    app: my-daemon
spec:
  selector:
    matchLabels:
      app: my-daemon
  template:
    metadata:
      labels:
        app: my-daemon
    spec:
      tolerations:
      # these tolerations are to have the daemonset runnable on control plane nodes
      # remove them if your control plane nodes should not run pods
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: busybox
        image: busybox
        command:
          - sh
          - -c
          - "while true; do echo Hohoho; sleep 1; done;"
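The command field overrides the image’s default entrypoint, so each Pod runs the shell loop from the task. To sanity-check the loop itself, you can run a bounded variant locally (the counter is added here only so the loop terminates; the version in the manifest loops forever):

```shell
# Bounded local variant of the container command: prints Hohoho three
# times and exits (the counter is not part of the manifest)
sh -c 'i=0; while [ "$i" -lt 3 ]; do echo Hohoho; i=$((i+1)); done'
```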

Save the manifest as my-daemon.yaml and apply it by running kubectl apply -f my-daemon.yaml.

Verify DaemonSet

To verify that the DaemonSet is running a Pod on each node, first run:

kubectl get pods -n moon

NAME              READY   STATUS    RESTARTS   AGE
my-daemon-ggbvf   1/1     Running   0          27s
my-daemon-vtn5q   1/1     Running   0          27s
my-daemon-w6ff7   1/1     Running   0          27s

There should be 3 Pods running, as we started minikube with 3 nodes.
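The DaemonSet resource itself also reports how many Pods it wants versus how many are ready, which is a quicker check than counting Pods:

```shell
# DESIRED, CURRENT, and READY should all equal the number of nodes (3 here)
kubectl get daemonset my-daemon -n moon
```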

Next, output the nodes the pods are running on:

kubectl get pods -n moon -o jsonpath='{.items[*].spec.nodeName}'
minikube-m03 minikube-m02 minikube

If you see the names of all three minikube nodes, the DaemonSet works as expected.
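As a final check, you can confirm that the Pods actually execute the shell loop by reading their logs via the label from the task (app=my-daemon):

```shell
# Tail the last few log lines of every Pod matched by the label selector;
# each Pod should print a stream of "Hohoho" lines
kubectl logs -n moon -l app=my-daemon --tail=3
```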

Conclusion

In this post, you learned how to solve an example CKA exam task for Kubernetes DaemonSets. You learned how to:

  • Create a DaemonSet
  • Tolerate node taints so your DaemonSet can run Pods on control plane nodes
  • Verify that the DaemonSet is working as expected

Don’t want to miss the next post in the Certified Kubernetes Administrator (CKA) series? Follow me on LinkedIn!

To support my efforts use my affiliate link to buy your courses and exams from the Linux Foundation.

Previous post in the CKA series: CKA Exam Tasks: Kubernetes Deployments