Monday, August 26, 2019

Logging in Kubernetes:
Support for logging in Kubernetes starts with a Kubernetes DaemonSet that monitors container logs and forwards them to a log indexer. Any log forwarder can be used as the container image in the DaemonSet YAML specification. Some environment variables may need to be set on the container and some shell commands may need to be run in it. There is also typically a volume mounted at /var/log, and the mounted volumes must be verified to be available under the corresponding hostPaths on the nodes. 
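A minimal sketch of such a DaemonSet, with a placeholder forwarder image and a hypothetical LOG_INDEXER_HOST variable standing in for whatever the chosen forwarder actually requires, might look like this:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-forwarder
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-forwarder
  template:
    metadata:
      labels:
        name: log-forwarder
    spec:
      containers:
      - name: log-forwarder
        image: example/log-forwarder:latest       # placeholder; any log forwarder image can be used
        env:
        - name: LOG_INDEXER_HOST                  # hypothetical variable; the real one depends on the forwarder
          value: "logs.example.com"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                          # must exist on every node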
A Fluentd collector agent is one way to collect logs from across the pods. The logging sidecar model is also beneficial to applications because it takes the logging concern away from them while providing a consistent model across applications. At the store level, it is better for logs to have their own storage and index, whether that is file storage or an index store. This allows uniform aging, archiving and rolling of logs based on timeline and works across all origins. 
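A rough sketch of the sidecar model, assuming the application writes to a file on a shared emptyDir volume and the sidecar simply streams it (a real sidecar would run a forwarder such as Fluentd), could look like this:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
  - name: app
    image: busybox
    # stand-in for an application that writes its own log file
    command: ["sh", "-c", "while true; do date >> /var/log/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: applog
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    # stand-in for a log forwarder; here it only tails the shared file
    command: ["sh", "-c", "touch /var/log/app/app.log && tail -f /var/log/app/app.log"]
    volumeMounts:
    - name: applog
      mountPath: /var/log/app
  volumes:
  - name: applog
    emptyDir: {}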
Logging can also be considered a service provisioned external to the cluster. This is easy to do with a variety of log products that provide service-like functionality. As long as there is no data loss, most users of log systems are tolerant of latency. This makes it possible to implement logging with merely a Kubernetes service broker, taking the logging concern out of the cluster altogether. 
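As a minimal illustration rather than a full service-broker integration, an ExternalName service can map an in-cluster name to an external log endpoint; logs.example.com below is a hypothetical endpoint:

apiVersion: v1
kind: Service
metadata:
  name: logging
spec:
  type: ExternalName
  externalName: logs.example.com   # hypothetical external log service endpoint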
Logging with Elasticsearch, Kibana and Fluentd enables end-to-end use of logs from applications hosted on Kubernetes.
This section briefly discusses the setup specific to Elasticsearch, Kibana and Fluentd.
The Elasticsearch resource is specified as a single-node Deployment in YAML with the container image docker.elastic.co/elasticsearch/elasticsearch:6.5.4. The corresponding Elasticsearch Service is defined with container port 9200. 
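A sketch of that Deployment and Service might look like the following; the resource names are illustrative, and discovery.type is set so that the single node forms its own cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  selector:
    app: elasticsearch
  ports:
  - port: 9200
    targetPort: 9200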
Kibana is specified as a Deployment with the image docker.elastic.co/kibana/kibana:6.5.4 and a Service on port 5601.
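A corresponding sketch for Kibana, assuming Elasticsearch is reachable inside the cluster as http://elasticsearch:9200 through the service above, might be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:6.5.4
        env:
        - name: ELASTICSEARCH_URL               # points Kibana at the elasticsearch service above
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  selector:
    app: kibana
  ports:
  - port: 5601
    targetPort: 5601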

Fluentd is the logging agent. It is specified as a DaemonSet, but first we need a fluentd-rbac.yaml that defines a ServiceAccount, a ClusterRole and a ClusterRoleBinding for Fluentd so that it can access pods and namespaces with the get, list and watch verbs. The Fluentd DaemonSet itself carries the name fluentd-logging, requires the container image fluent/fluentd-kubernetes-daemonset:v1.3-debian-elasticsearch and mounts /var/log. Its environment variables define FLUENT_ELASTICSEARCH_HOST, FLUENT_ELASTICSEARCH_PORT (9200), FLUENT_ELASTICSEARCH_SCHEME and FLUENT_ELASTICSEARCH_UID set to 0. Once applied, the pods should show fluentd as running. 
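Putting the pieces together, fluentd-rbac.yaml and the Fluentd DaemonSet could be sketched roughly as below. The Elasticsearch host value assumes the service defined earlier runs in the default namespace, and the UID variable name can vary across image versions (FLUENT_UID is also commonly seen):

# fluentd-rbac.yaml: lets fluentd read pod and namespace metadata
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
- apiGroups: [""]
  resources: ["pods", "namespaces"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluentd
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
---
# fluentd daemonset
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
    spec:
      serviceAccountName: fluentd
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1.3-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.default.svc.cluster.local"   # assumes the elasticsearch service lives in the default namespace
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        - name: FLUENT_ELASTICSEARCH_SCHEME
          value: "http"
        - name: FLUENT_ELASTICSEARCH_UID                     # some image versions expect FLUENT_UID instead
          value: "0"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

After applying these with kubectl apply -f, kubectl get pods -n kube-system should show a fluentd pod in the Running state on every node.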
