Saturday, September 21, 2019

Aspects of the monitoring feature in a container orchestration framework.
This article describes the salient features of tools used for monitoring containers hosted on an orchestration framework. Some products are dedicated to this purpose and strive to make monitoring easier for administrators. They usually fall into two categories: one exposes a limited set of built-in metrics that help with, say, the master managing the pods, and the other gives access to custom metrics that help with, say, horizontal scaling of resources.
Most metrics products for the Kubernetes orchestration framework, such as Prometheus, fall into the second category. The first category is rather lightweight and is served over HTTP via the resource metrics APIs.
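As a sketch of the first category, the snippet below parses a payload shaped like a response from the Kubernetes resource metrics API (metrics.k8s.io/v1beta1, as served by the Metrics Server). The node name, timestamp and usage figures are illustrative, not taken from a real cluster.

```python
import json

# Sample payload mimicking the resource metrics API; values are illustrative.
sample = '''
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "items": [
    {
      "metadata": {"name": "node-1"},
      "timestamp": "2019-09-21T10:00:00Z",
      "window": "10s",
      "usage": {"cpu": "250m", "memory": "2048Ki"}
    }
  ]
}
'''

def cpu_cores(quantity):
    # Kubernetes expresses CPU either in whole cores ("1") or in
    # millicores ("250m"); convert both forms to a float number of cores.
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000.0
    return float(quantity)

doc = json.loads(sample)
for item in doc["items"]:
    name = item["metadata"]["name"]
    cores = cpu_cores(item["usage"]["cpu"])
    print(f"{name}: {cores} cores over window {item['window']}")
```

Note how little the API carries: just current usage per node or pod over a short window, which is enough for the master to schedule and manage pods but not for custom analysis.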
The use of metrics is a common theme. Metrics are defined in a JSON format with key-value pairs and timestamps, and they may also carry additional descriptive information that aids their handling. Metrics are evaluated by expressions that usually look at a time-based window, which gives slices of data points. These data points allow the use of calculator functions such as sum, min, average and so on.
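The window-and-calculator idea above can be sketched in a few lines. The data points and window bounds here are made up for illustration.

```python
# Each data point is a (timestamp, value) pair; calculator functions such
# as sum, min and average are applied to the points inside a time window.
points = [
    (0, 10.0), (15, 12.0), (30, 11.0), (45, 20.0), (60, 18.0),
]

def window(points, start, end):
    """Return the values of points whose timestamp lies in [start, end)."""
    return [v for t, v in points if start <= t < end]

recent = window(points, 30, 90)          # a 60-second slice of data points
print("sum:", sum(recent))               # 49.0
print("min:", min(recent))               # 11.0
print("avg:", sum(recent) / len(recent))
```

A real metrics store evaluates the same kind of expression continuously as new points arrive, rather than over a static list.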
Metrics often follow their own route to a separate sink. In some deployments of the Kubernetes orchestration framework, the sink refers to an external entity that knows how to store and query metrics. The collection of metrics at the source and their forwarding to the destination follow conventional mechanisms similar to those for logs and audit events, both of which have their own sinks.
As with all agents and services on a container, a secret or an account is required to control access to the resources needed for their activities. Role-based access control, together with namespace and global naming conventions, is a prerequisite for any agent.
The agent has to run continuously, forwarding data with little or no disruption. Some orchestrators facilitate this with concepts such as a DaemonSet, which keeps a pod running on every node. The deployment is verified to be working correctly when a standard command produces output matching a pre-defined output, so verification of monitoring capabilities becomes part of the installation.
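The verification step can be sketched as comparing a command's output with the pre-defined expectation. The command and expected string below are placeholders; an administrator would substitute, for example, a query against the agent's DaemonSet.

```python
import subprocess

def verify(command, expected):
    """Run a standard command and compare its output with a pre-defined
    expected output; True means the deployment looks healthy."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.stdout.strip() == expected

# Trivially predictable placeholder command, standing in for a real check.
assert verify(["echo", "agent-ok"], "agent-ok")
print("monitoring deployment verified")
```

Wiring such a check into the installer means a failed rollout of the monitoring agent is caught immediately rather than noticed later as missing data.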
Metrics become helpful when evaluated against thresholds that trigger alerts. This mechanism completes the monitoring framework: rules are written as expressions involving thresholds, which then raise suitable alerts. Alerts may be delivered via messages, email or any other form of notification service. Dashboards and mitigation tools may also be provided by a product offering a full monitoring solution.
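A minimal sketch of such threshold rules follows; the rule names, metric names and threshold values are illustrative.

```python
# Each rule pairs a metric with a comparison against a threshold; a rule
# whose condition holds for the current sample raises an alert by name.
rules = [
    {"name": "HighCPU", "metric": "cpu_percent", "op": ">", "threshold": 80.0},
    {"name": "LowDisk", "metric": "disk_free_gb", "op": "<", "threshold": 5.0},
]

def evaluate(rules, sample):
    """Return the names of rules whose condition holds for this sample."""
    fired = []
    for rule in rules:
        value = sample.get(rule["metric"])
        if value is None:
            continue  # metric absent from this sample; skip the rule
        if rule["op"] == ">" and value > rule["threshold"]:
            fired.append(rule["name"])
        elif rule["op"] == "<" and value < rule["threshold"]:
            fired.append(rule["name"])
    return fired

alerts = evaluate(rules, {"cpu_percent": 92.5, "disk_free_gb": 40.0})
print(alerts)  # ['HighCPU']
```

A production system would add durations (a condition must hold for N minutes), severities, and routing of fired alerts to notification channels.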
Almost all activities of any resource in the orchestration framework can be measured. These include the core system resources, which may write their data to logs or to the audit event stream. The option to combine metrics, audit events and logs effectively rests with the administrator rather than with a product designed around one or more of these.
Specific queries help with the charts for monitoring dashboards. These are well-prepared and part of a standard portfolio that helps with analyzing the health of the containers.
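As a sketch of how a dashboard chart consumes a query result, the payload below mimics the shape of a Prometheus instant-query response (a "status" field and a "data.result" vector); the pod names and values are illustrative.

```python
import json

# Canned payload shaped like a Prometheus instant-query response.
response = json.loads('''
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {"metric": {"pod": "web-1"}, "value": [1569060000, "0.42"]},
      {"metric": {"pod": "web-2"}, "value": [1569060000, "0.17"]}
    ]
  }
}
''')

def chart_series(response):
    """Flatten a query result into (label, value) pairs for a chart."""
    return [(r["metric"]["pod"], float(r["value"][1]))
            for r in response["data"]["result"]]

print(chart_series(response))  # [('web-1', 0.42), ('web-2', 0.17)]
```

Keeping such queries in a standard portfolio means every dashboard panel is backed by a reviewed, repeatable expression rather than an ad hoc one.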
