Kubernetes DaemonSet: What It Is and How to Use It
Learn what a Kubernetes DaemonSet is, how it works, and when to use it. This guide covers real-world examples for managing DaemonSets in your cluster.

Kubernetes is a powerful tool for managing containerized applications, and one of its key features is the ability to run specific workloads across your cluster. One such workload is the DaemonSet, a Kubernetes API object designed to ensure that a copy of a Pod runs on every Node in your cluster.
In this article, we’ll explore what DaemonSets are, how they work, and when to use them.
What is a Kubernetes DaemonSet?
A DaemonSet is a Kubernetes object that ensures a specific Pod runs on every Node in your cluster. When new Nodes are added, the DaemonSet automatically schedules the Pod on them. Similarly, when Nodes are removed, the Pods are cleaned up. This makes DaemonSets ideal for running background services that need to be present on every Node, such as monitoring agents, log collectors, or backup tools.
Key Features of DaemonSets:
- Automatic Pod Scheduling: DaemonSets ensure that a Pod runs on every Node, even as Nodes are added or removed.
- Tolerations: DaemonSet Pods can tolerate Node taints (for example, the default taint on control-plane Nodes) that would normally prevent scheduling.
- Node-Specific Customization: You can configure DaemonSets to run Pods only on specific Nodes using labels and selectors.
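As an illustration of tolerations, the fragment below (a hypothetical addition to a DaemonSet's Pod template) lets the Pod schedule onto control-plane Nodes, which carry a NoSchedule taint by default:

```yaml
# Pod template fragment (illustrative): tolerate the control-plane
# taint so the DaemonSet also runs on control-plane Nodes.
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```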
When Should You Use a DaemonSet?
DaemonSets are particularly useful for workloads that need to run on every Node in your cluster. Here are some common use cases:
- Node Monitoring Agents: Tools like Prometheus Node Exporter or Datadog agents need to run on every Node to collect metrics.
- Log Collection: Services like Fluentd or Logstash can be deployed as DaemonSets to collect logs from each Node.
- Backup Tools: Backup agents that need to interact with Node-level data can be deployed as DaemonSets to ensure all Nodes are covered.
- Network Plugins: Tools like Calico or Weave Net that provide networking functionality often run as DaemonSets to ensure they’re present on every Node.
Unlike ReplicaSets or Deployments, which run a configured number of replicas wherever resources allow, DaemonSets tie the number of Pods to the number of matching Nodes in your cluster.
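Many clusters already run DaemonSets out of the box. For instance, kube-proxy is typically deployed as one, which you can confirm with:

```shell
# List DaemonSets shipped with the cluster itself
kubectl get daemonsets -n kube-system
```

The DESIRED count in the output should match the number of eligible Nodes.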
Example: Deploying a DaemonSet
Let’s walk through a simple example of deploying a DaemonSet in your Kubernetes cluster. For this tutorial, we’ll use Filebeat, a lightweight log shipper that collects logs and forwards them to Elasticsearch or Logstash.
You can use Minikube to create a local cluster with three Nodes:
minikube start --nodes=3
Step 1: Create a DaemonSet Manifest
Here’s a basic DaemonSet manifest for Filebeat:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:8.10.0
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Step 2: Apply the Manifest
Save the manifest to a file named filebeat.yaml and apply it to your cluster:
kubectl apply -f filebeat.yaml
Step 3: Verify the DaemonSet
Check the status of the DaemonSet and the Pods it created:
kubectl get daemonsets
Output:
NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
filebeat   3         3         3       3            3           <none>          10s
For detailed information, run:
kubectl get pods -o wide
Output:
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE
filebeat-abc12   1/1     Running   0          30s   10.244.1.2   minikube-m02
filebeat-def34   1/1     Running   0          30s   10.244.2.2   minikube-m03
filebeat-ghi56   1/1     Running   0          30s   10.244.0.3   minikube
Scoping DaemonSets to Specific Nodes
Sometimes, you may want to run DaemonSet Pods only on specific Nodes. You can achieve this using nodeSelectors or affinity rules. For example, to run Filebeat only on Nodes labeled with log-collection-enabled=true, update the DaemonSet manifest:
spec:
  template:
    spec:
      nodeSelector:
        log-collection-enabled: "true"
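Alternatively, the same scoping can be expressed with node affinity, which supports richer matching operators than nodeSelector. A sketch of the equivalent rule:

```yaml
# Equivalent scoping via node affinity (sketch)
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: log-collection-enabled
                operator: In
                values: ["true"]
```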
Then, label the desired Node:
kubectl label node <node-name> log-collection-enabled=true
Apply the updated manifest, and the DaemonSet will schedule Pods only on labeled Nodes:
kubectl apply -f filebeat.yaml
Check the DaemonSet status:
kubectl get daemonsets
Output:
NAME       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
filebeat   1         1         1       1            1           log-collection-enabled=true   1m
View the Pod list to confirm the Pod is running on the labeled Node:
kubectl get pods -o wide
Output:
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE
filebeat-abc12   1/1     Running   0          2m    10.244.1.2   minikube-m02
Scaling a DaemonSet
DaemonSets are automatically scaled based on the number of Nodes in your cluster. To scale a DaemonSet:
- Add Nodes: New Nodes will automatically run the DaemonSet Pods.
- Remove Nodes: Pods on removed Nodes will be cleaned up.
If you need to temporarily scale a DaemonSet to 0 (e.g., for maintenance), you can patch its Pod template with a dummy nodeSelector that matches no Node:
kubectl patch daemonset <daemonset-name> -p '{"spec": {"template": {"spec": {"nodeSelector": {"dummy": "true"}}}}}'
To scale it back up, remove the dummy selector. With a strategic merge patch, setting the key to null deletes it:
kubectl patch daemonset filebeat -p '{"spec": {"template": {"spec": {"nodeSelector": {"dummy": null}}}}}'
DaemonSet Best Practices
- Use DaemonSets for Node-Specific Workloads: Only use DaemonSets when your Pods need to run on every Node or a subset of Nodes.
- Set Restart Policies Correctly: DaemonSet Pods must use a restartPolicy of Always (the default), so they restart along with the Node.
- Avoid Manual Pod Management: Don’t manually edit or delete DaemonSet Pods, as this can lead to orphaned Pods.
- Leverage Rollbacks: Use Kubernetes’ rollback feature to revert DaemonSet changes quickly if something goes wrong.
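Rollbacks use the same kubectl rollout commands as Deployments. For example, assuming the Filebeat DaemonSet from earlier:

```shell
# View the revision history of the DaemonSet
kubectl rollout history daemonset/filebeat

# Revert to the previous revision
kubectl rollout undo daemonset/filebeat

# Watch the rollback progress until it completes
kubectl rollout status daemonset/filebeat
```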
Conclusion
Whether you’re collecting logs with Filebeat, monitoring Nodes with Prometheus, or managing backups, DaemonSets provide a reliable and scalable solution. By understanding how to create, configure, and manage DaemonSets, you can ensure that your Node-level workloads are always running where they’re needed most.
LHB Community is made of readers like you who share their expertise by writing helpful tutorials. Contact us if you would like to contribute.