Sometimes you may find yourself in a situation where you need to restart your Pod, for example because it is stuck in an error state.
Depending on the restart policy, Kubernetes itself tries to restart it and fix the problem.
But if that doesn't work out and you can't find the source of the error, restarting the Kubernetes Pod manually is the fastest way to get your app working again.
How to restart Pods in Kubernetes
Unfortunately, there is no kubectl restart pod command for this purpose. Here are a couple of ways you can restart your Pods:
- Rollout Pod restarts
- Scaling the number of replicas
Let me show you both methods in detail.
Method 1: Rollout Pod restarts
Starting from Kubernetes version 1.15, you can perform a rolling restart of your deployments.
The controller terminates old Pods gradually and relies on the ReplicaSet to bring up new ones, following the Deployment's rolling update strategy, until every Pod is newer than the restart time. In my opinion, this is the best way to restart your Pods, as your application will not go down.
Note: Individual pod IPs will be changed.
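Under the hood, kubectl rollout restart works by patching the Deployment's Pod template with a kubectl.kubernetes.io/restartedAt annotation; the changed template is what triggers the rolling update. Here is a sketch of the equivalent patch, built by hand (the kubectl patch line at the end assumes a live cluster and the my-dep deployment from the example below):

```shell
# Build the JSON patch that a rollout restart effectively applies:
# it stamps the Pod template with a restartedAt annotation, which
# changes the template and so triggers a rolling update.
TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
PATCH="{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$TS\"}}}}}"
echo "$PATCH"
# To apply it manually (needs a cluster):
#   kubectl patch deployment my-dep -p "$PATCH"
```

Because the annotation value is a fresh timestamp, every invocation changes the template again, so you can restart as often as you like.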
Let's take an example. You have a deployment named my-dep which consists of two pods (as replica is set to two).
root@kmaster-rj:~# kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
my-dep   2/2     2            2           13s
Let's get the pod details:
root@kmaster-rj:~# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
my-dep-6d9f78d6c4-8j5fq   1/1     Running   0          47s   172.16.213.255   kworker-rj2   <none>           <none>
my-dep-6d9f78d6c4-rkhrz   1/1     Running   0          47s   172.16.213.35    kworker-rj1   <none>           <none>
Now let's rollout the restart for the my-dep deployment with a command like this:
kubectl rollout restart deployment name_of_deployment
Do you remember the name of deployment from the previous commands? Use it here:
root@kmaster-rj:~# kubectl rollout restart deployment my-dep
deployment.apps/my-dep restarted
You can watch old Pods getting terminated and new ones getting created with the kubectl get pod -w command:
root@kmaster-rj:~# kubectl get pod -w
NAME                      READY   STATUS              RESTARTS   AGE
my-dep-557548758d-kz6r7   1/1     Running             0          5s
my-dep-557548758d-svg7w   0/1     ContainerCreating   0          1s
my-dep-6d9f78d6c4-8j5fq   1/1     Running             0          69s
my-dep-6d9f78d6c4-rkhrz   1/1     Terminating         0          69s
my-dep-6d9f78d6c4-rkhrz   0/1     Terminating         0          69s
my-dep-557548758d-svg7w   0/1     ContainerCreating   0          1s
my-dep-557548758d-svg7w   1/1     Running             0          3s
my-dep-6d9f78d6c4-8j5fq   1/1     Terminating         0          71s
my-dep-6d9f78d6c4-8j5fq   0/1     Terminating         0          72s
my-dep-6d9f78d6c4-rkhrz   0/1     Terminating         0          74s
my-dep-6d9f78d6c4-rkhrz   0/1     Terminating         0          74s
my-dep-6d9f78d6c4-8j5fq   0/1     Terminating         0          76s
my-dep-6d9f78d6c4-8j5fq   0/1     Terminating         0          76s
If you check the Pods now, you can see the details have changed here:
root@kmaster-rj:~# kubectl get pod -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
my-dep-557548758d-kz6r7   1/1     Running   0          42s   172.16.213.43    kworker-rj1   <none>           <none>
my-dep-557548758d-svg7w   1/1     Running   0          38s   172.16.213.251   kworker-rj2   <none>           <none>
Method 2: Scaling the number of replicas
In a CI/CD environment, the process of redeploying your Pods after an error can take a long time, since it has to go through the entire build process again.
A faster way is to use the kubectl scale command to change the replica count to zero; once you set it back to a number higher than zero, Kubernetes creates new replicas. Note that, unlike a rollout restart, this briefly takes your application down.
Let's try it. Check your Pods first:
root@kmaster-rj:~# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-557548758d-kz6r7   1/1     Running   0          11m
my-dep-557548758d-svg7w   1/1     Running   0          11m
Get the deployment information:
root@kmaster-rj:~# kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
my-dep   2/2     2            2           12m
Now, set the replica number to zero:
root@kmaster-rj:~# kubectl scale deployment --replicas=0 my-dep
deployment.apps/my-dep scaled
And then set it back to two:
root@kmaster-rj:~# kubectl scale deployment --replicas=2 my-dep
deployment.apps/my-dep scaled
Check the pods now:
root@kmaster-rj:~# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
my-dep-557548758d-d2pmd   1/1     Running   0          10s
my-dep-557548758d-gprnr   1/1     Running   0          10s
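The two scale commands can be wrapped in a small script that also waits for the new Pods to come up. This is a sketch, not a definitive tool: the deployment name my-dep and replica count of two come from the example above, and the script does nothing where kubectl isn't available.

```shell
#!/bin/sh
# Restart a Deployment by scaling it to zero and back.
# Causes brief downtime, unlike a rollout restart.
DEPLOY=my-dep     # deployment name from the example above
REPLICAS=2        # replica count to restore
if command -v kubectl >/dev/null 2>&1; then
    kubectl scale deployment --replicas=0 "$DEPLOY"
    kubectl scale deployment --replicas="$REPLICAS" "$DEPLOY"
    # Block until the new Pods are ready:
    kubectl rollout status deployment "$DEPLOY"
    RESTARTED=yes
else
    echo "kubectl not found; nothing to do"
    RESTARTED=no
fi
```

The kubectl rollout status call at the end is what makes this safe in automation: it exits only once the Deployment has converged again.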
You have successfully restarted Kubernetes Pods.
Use either of the above methods to quickly and safely get your app working again without impacting end users.
After this exercise, please make sure to find and fix the root cause, as restarting your Pod will not fix the underlying issue.
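To track down that root cause, the usual starting points are the Pod's events and the logs of the previously crashed container. A sketch of the typical commands, using one of the Pod names from the example above (guarded so it does nothing where kubectl isn't installed):

```shell
#!/bin/sh
# Inspect a misbehaving Pod to find the underlying problem.
POD=my-dep-557548758d-kz6r7   # a Pod name from the example above
if command -v kubectl >/dev/null 2>&1; then
    # Events section shows image pull errors, OOMKilled, failed probes, etc.
    kubectl describe pod "$POD"
    # Logs of the last terminated container, useful after a crash loop:
    kubectl logs "$POD" --previous
    # Cluster-wide events, oldest first:
    kubectl get events --sort-by=.metadata.creationTimestamp
    CHECKED=yes
else
    echo "kubectl not found; nothing to do"
    CHECKED=no
fi
```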
Hope you like this Kubernetes tip. Don't forget to subscribe for more.