Friday, July 19, 2019

When a pod gets into trouble, it can often be recovered with a liveness probe. The pod's logs indicate whether there were errors, but the decision to restart cannot always be based on detecting errors in the logs. This is where a liveness probe helps: when the probe fails, the kubelet restarts the container automatically. There are three types of probe handlers: 1) a command executed inside the container, 2) an HTTP GET request against a path served by the container, or 3) a generic TCP connection check. All three target what is running inside the container.
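As a rough sketch, the three handlers look like this inside a container spec; only one handler is set per probe, and the command, path, and port below are placeholders chosen for illustration rather than values from this post:

# 1) Command probe: the kubelet runs the command in the container;
#    a non-zero exit code counts as a failure.
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]

# 2) HTTP probe: the kubelet sends a GET request;
#    any status outside the 200-399 range counts as a failure.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080

# 3) TCP probe: the kubelet tries to open a TCP connection;
#    a refused connection counts as a failure.
livenessProbe:
  tcpSocket:
    port: 8080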
Traffic flow to a pod is controlled with a readiness probe. Even when pods are up and running, traffic should only be sent to them once they are ready to serve requests. The readiness probe supports the same three handler types as the liveness probe, but the two probes serve different purposes and should be defined and maintained separately.
Both probes are defined in the containers section of the pod specification, denoted by livenessProbe and readinessProbe in the deployment YAML, as in the sketch below.
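A minimal Deployment fragment showing the two probes side by side might look as follows; the image name, paths, and port are assumptions made for illustration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0        # placeholder image
        ports:
        - containerPort: 8080
        livenessProbe:                # a failure here restarts the container
          httpGet:
            path: /healthz
            port: 8080
        readinessProbe:               # a failure here removes the pod from Service endpoints
          httpGet:
            path: /ready
            port: 8080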
The kubelet on each worker node uses the livenessProbe to recover from ramp-up issues or deadlocks: when the probe fails, the kubelet kills the container and restarts it according to the pod's restartPolicy. A Service that load-balances a set of pods uses the readinessProbe to decide whether a pod should receive traffic: when the probe fails, the pod is removed from the Service's endpoints, but the container is not restarted.
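To handle the ramp-up case, the kubelet also honours a few timing fields on each probe; the values below are illustrative only:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15   # give the container time to start before the first probe
  periodSeconds: 10         # probe every 10 seconds
  timeoutSeconds: 1         # each probe must respond within 1 second
  failureThreshold: 3       # restart only after 3 consecutive failures
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
  successThreshold: 1       # one success marks the pod ready again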

