Wednesday, July 31, 2019

Logging as a sidecar model:
There are various levels and areas of emphasis in the stack to support rich analytics over logging. At the Kubernetes system level, among the popular options, we could prioritize a fluentd logging sidecar as a way to collect logs from across the pods. This benefits the applications because it takes the logging concern off them while providing a consistent model across applications. At the store level, we recommend that the logs have their own storage, whether it is file storage or an index store. This allows uniform aging, archiving and rolling of logs based on timeline, and it works across all origins.
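As a minimal sketch of this model (the names my-app, the image tags and the mount paths are placeholders, not from the original), a pod that pairs an application container with a fluentd sidecar over a shared volume might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: app
    image: my-app:latest            # hypothetical application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app       # the app writes its log files here
  - name: fluentd-sidecar
    image: fluent/fluentd:v1.4      # fluentd tails and forwards the same files
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true
  volumes:
  - name: app-logs
    emptyDir: {}                    # scratch volume shared within the pod
```

The application writes to the shared emptyDir volume and the sidecar tails those files, so the application code never needs to know where the logs end up.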
At the analytics level, we can outsource the reporting stack, charts, graphs and dashboards to the application best suited to analyzing machine data. Although StorageApplication, SmartAnalytics, metrics, charts and graphs can be used to store, search and render results, it will take a longer round-trip time between the user query and the results.
There is usually only one sidecar per service in a sidecar proxy deployment. A sidecar can be in the same pod as the one hosting the service. The number of sidecar instances can scale as appropriate. A small resource profile is necessary to allow the number of sidecar instances to grow. If the target of a sidecar is available as a DaemonSet, there will be only one sidecar per host rather than one sidecar per pod.
#codingexercise
Task getNext(List<Task> tasks) {
    for (Task task : tasks) {
        if (!task.completed) {
            return task;
        }
    }
    return null;
}

Tuesday, July 30, 2019

Logging as a service broker implementation:
Kubernetes service brokers allow the provisioning of services outside the cluster. This enables services to run anywhere and work independently from the cluster. They can even be deployed in the public and private cloud where they can scale across Kubernetes clusters. This is a helpful addition to any Kubernetes cluster.
Almost all provisioning of resources translates into the well-known operations of create, update, delete, get and list on the resources. Therefore, the resource operations are aligned with the usage of resources along most workflows. This makes Kubernetes clusters handy for many applications and services.
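The mapping from these resource operations to broker calls can be sketched against the Open Service Broker API endpoint paths; the helper below is only an illustration, not part of any library:

```python
# Map the well-known resource operations to Open Service Broker API calls.
# The instance paths follow the OSB API specification; {id} is the instance id.
osb_endpoints = {
    "create": ("PUT",    "/v2/service_instances/{id}"),
    "update": ("PATCH",  "/v2/service_instances/{id}"),
    "delete": ("DELETE", "/v2/service_instances/{id}"),
    "get":    ("GET",    "/v2/service_instances/{id}"),
    "list":   ("GET",    "/v2/catalog"),  # the catalog enumerates offered services
}

def endpoint_for(operation: str, instance_id: str) -> str:
    """Resolve the HTTP verb and path for a resource operation."""
    verb, path = osb_endpoints[operation]
    return f"{verb} {path.format(id=instance_id)}"
```

For example, provisioning an instance named abc resolves to `PUT /v2/service_instances/abc`.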
The trend in cloud computing has shifted from service-oriented architecture towards microservices. This has helped the independent provisioning of resources, the deployment and scaling of services, and the overhaul or reworking of services. The service broker makes no claim about how the services need to evolve - whether to form a structural composite pattern or a behavior-maintaining pattern. This minimal enforcement has been mutually beneficial for both the services and the Kubernetes cluster.
Logging can also be considered a service to provision external to the cluster. This is easy to do with a variety of log products that provide service-like functionality. As long as there is no data loss, most log system users are tolerant of latency. This makes it easier for logging to be implemented with merely a Kubernetes service broker, relieving the cluster of all logging concerns.

Monday, July 29, 2019

Today we continue discussing logging in Kubernetes. We start with a Kubernetes DaemonSet that will monitor container logs and forward them to a log indexer. Any log forwarder can be used as the image for the container in the DaemonSet YAML specification. Some environment variables and shell commands might be necessary to set and run on the container. There will also be a persistent volume, typically mounted at /var/log. The mounted volumes must be verified to be available under the corresponding hostPaths.
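A minimal DaemonSet specification along these lines might look like the following sketch, where the forwarder image, the environment variable and the indexer endpoint are placeholders for whichever log product is in use:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-forwarder
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-forwarder
  template:
    metadata:
      labels:
        app: log-forwarder
    spec:
      containers:
      - name: forwarder
        image: example/log-forwarder:latest   # placeholder; use the log product's image
        env:
        - name: LOG_INDEXER_URL               # hypothetical indexer endpoint
          value: "http://log-indexer:9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                      # must exist on every host
```

Because it is a DaemonSet, one forwarder pod lands on every node, and the hostPath mount gives it the node's container logs under /var/log.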
Journald is used to collect logs from those components that do not run inside a container. For example, kubelet and the container runtime, which is usually Docker, will write to journald when the host has systemd enabled. Otherwise, they write to .log files in the /var/log directory. Klog is the logging library used by such system components.
One log indexer is sufficient for a three-node cluster with thirty containers generating 1000 messages/second each, even when the message sizes are a mix of small (say 256 bytes) and large (1 KB).
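A quick back-of-envelope check of that sizing claim, using the numbers from the paragraph above (the bandwidth figure is a worst case that assumes every message is large):

```python
containers = 30                    # containers across the three-node cluster
rate_per_container = 1000          # messages/second from each container
small_msg, large_msg = 256, 1024   # message sizes in bytes

total_rate = containers * rate_per_container    # aggregate messages/second at the indexer
worst_case_bandwidth = total_rate * large_msg   # bytes/second if every message is large

print(total_rate)            # 30000 messages/second
print(worst_case_bandwidth)  # 30720000 bytes/second, roughly 30 MB/s
```

A sustained 30,000 messages/second at about 30 MB/s is well within what a single indexer node typically handles, which is consistent with the claim.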
Timber is an example of a log product. The use of this product typically entails Logstash, Elasticsearch and Kibana, where Elasticsearch is for API access and Kibana is for the web user interface. Any busybox container image can be used to produce logs, which we can use as test data for our logging configuration.
The logrotate tool rotates the log once the size exceeds a given threshold.
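For example, a logrotate stanza along the following lines rotates container logs once they grow past a size threshold; the path and the thresholds here are illustrative:

```
/var/log/containers/*.log {
    size 100M        # rotate once a file exceeds 100 MB
    rotate 5         # keep at most five rotated copies
    compress         # gzip the rotated files
    missingok        # do not error if a file is absent
    copytruncate     # truncate in place so the writer keeps its file handle
}
```

copytruncate matters here because container processes keep their log files open and would otherwise continue writing to the rotated file.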
The typical strategies for pushing logs include the following:
1) Use a node-level logging agent that runs on every node. For example, Stackdriver Logging on Google Cloud Platform and Elasticsearch on conventional Kubernetes clusters.
2) Include a dedicated sidecar container for logging in an application pod.
3) Push logs directly to the backend from within an application.
Between options 1 and 2, the latter is preferable for performance reasons. It is not intrusive, and it collects the logs with fluentd, which provides a rich language to annotate or transform log sources. Also, option 2 can scale independently without impacting the rest of the cluster.
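As an illustration of that annotation language, a fluentd configuration that tails container logs and enriches them with Kubernetes metadata might look like this sketch; the pos_file path is illustrative, and the metadata filter assumes the fluent-plugin-kubernetes_metadata_filter plugin is installed:

```
<source>
  @type tail
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

<filter kubernetes.**>
  @type kubernetes_metadata    # annotates each record with pod, namespace and labels
</filter>
```

The tag and filter pattern let downstream match blocks route or transform records per source without touching the applications.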

Sunday, July 28, 2019

Today we are going to discuss log indexer deployment to a container framework. We start with a Kubernetes DaemonSet that will monitor container logs and forward them to a log indexer. Any log forwarder can be used as the image for the container in the DaemonSet YAML specification. Some environment variables and shell commands might be necessary to set and run on the container. There will also be a persistent volume, typically mounted at /var/log. The mounted volumes must be verified to be available under the corresponding hostPaths.
In order for logging to be helpful, it is better to have the log sources differentiated. For example, the tools to deploy and monitor the app must be different from the tools to deploy and maintain the container cluster. Although forwarders and indexers can differentiate log streams, it is better to do that at the cluster level. There are also plugins available from different log indexer product companies which support Docker logging.
The forwarder is also specific to the log product company. We need it only to forward logs. It can be run as a DaemonSet or directly on the Kubernetes nodes. The forwarder is not only a proprietary tool; it is also a convenience for deployers to move lots of data reliably and securely, following the log product maker's guidelines. The JSON driver and journald can be used for integration with Kubernetes. Journald is used to collect logs from those components that do not run inside a container. For example, kubelet and the container runtime, which is usually Docker, will write to journald when the host has systemd enabled. Otherwise, they write to .log files in the /var/log directory. Klog is the logging library used by such system components.
At this point, it is important to mention that the collector does not only collect logs.
The log product event collector needs all of the following:
1) logging
2) Metadata/objects
3) Metrics
One indexer is sufficient for a three-node cluster with thirty containers generating 1000 messages/second each, even when the message sizes are a mix of small (say 256 bytes) and large (1 KB).
Timber is another example of a log product. The use of this product typically entails Logstash, Elasticsearch and Kibana, where Elasticsearch is for API access and Kibana is for the web user interface. Any busybox container image can be used to produce logs, which we can use as test data for our logging configuration.
The logrotate tool rotates the log once the size exceeds a given threshold.
The typical strategies for pushing logs include the following:
1) Use a node-level logging agent that runs on every node. For example, Stackdriver Logging on Google Cloud Platform and Elasticsearch on conventional Kubernetes clusters.
2) Include a dedicated sidecar container for logging in an application pod.
3) Push logs directly to the backend from within an application.

Saturday, July 27, 2019

A warm welcome for Kubernetes Service brokers.

Kubernetes service brokers allow the provisioning of services outside the cluster. This enables services to run anywhere and work independently from the cluster. They can even be deployed in the public and private cloud where they can scale across Kubernetes clusters. This is a helpful addition to any Kubernetes cluster.

The service broker architecture also enforces consistency across provisioning of resources. This is done with the help of a standard set of APIs for all resources and their corresponding services that implement the service broker.  Resources are the way in which Kubernetes recognizes each and every entity in the cluster. The framework of Kubernetes then takes this inventory of resources and reconciles the state of the cluster to match the definition of the resources. Anything hosted on the Kubernetes clusters can be described with the help of a custom resource.

Service brokers facilitate the resources provisioned by external services to be understood by Kubernetes as custom resources within the cluster. This bridging of external resource provisioning with the in-cluster representation of resources has provided significant benefits to users of these resources.

Almost all provisioning of resources translates into the well-known operations of create, update, delete, get and list on the resources. Therefore, the resource operations are aligned with the usage of resources along most workflows. This makes Kubernetes clusters handy for many applications and services.

The trend in cloud computing has shifted from service-oriented architecture towards microservices. This has helped the independent provisioning of resources, the deployment and scaling of services, and the overhaul or reworking of services. The service broker makes no claim about how the services need to evolve - whether to form a structural composite pattern or a behavior-maintaining pattern. This minimal enforcement has been mutually beneficial for both the services and the Kubernetes cluster.



Friday, July 26, 2019

Yesterday we were referring to the design of Kubernetes Service brokers. Today we look at the implementation for one of them.
A typical service broker will implement the ServiceInstanceService and ServiceInstanceBindingService methods that correspond to the ServiceInstance and ServiceBinding definitions in the Kubernetes framework.
The ServiceInstanceService implementation has the following characteristics:
1) It determines whether to accept a given service instance request for create, update, delete, get or list operations.
2) The classes are separate for each resource they represent.
3) They are composed under a composite service that represents a facade to the composite OSBA service. The OSBA service therefore allows a service-oriented architecture for the implementations of the service broker.
4) The service is looked up by the service broker with a match to the service definition in the request. The method accepting the request in the composite service implementation can determine if it should handle the request.
5) Validations for accepting requests are performed by the service broker.
6) When a service instance is created, the service tries to populate the resource representation and persist the resource. Sometimes persisting the resource to be created or updated requires interacting with other services. These interactions may need to be done in a transaction scope and support good exception handling, diagnosability and transparency.
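A minimal in-memory sketch of such a service instance service follows; every class and method name here is hypothetical, chosen only to mirror the accept-validate-persist flow described above:

```python
class ServiceInstance:
    """Hypothetical resource representation for a provisioned instance."""
    def __init__(self, instance_id, service_class, plan):
        self.instance_id = instance_id
        self.service_class = service_class
        self.plan = plan

class InMemoryServiceInstanceService:
    """Sketch of a broker-side service: accept, validate, then persist."""
    def __init__(self, supported_classes):
        self.supported_classes = set(supported_classes)
        self.store = {}  # stands in for durable storage

    def accepts(self, service_class):
        # The composite service matches the request against its definition.
        return service_class in self.supported_classes

    def create(self, instance_id, service_class, plan):
        if not self.accepts(service_class):
            raise ValueError(f"unsupported service class: {service_class}")
        if instance_id in self.store:
            raise ValueError(f"instance already exists: {instance_id}")
        instance = ServiceInstance(instance_id, service_class, plan)
        self.store[instance_id] = instance  # persist the resource
        return instance

    def get(self, instance_id):
        return self.store.get(instance_id)

    def delete(self, instance_id):
        return self.store.pop(instance_id, None)
```

A real broker would back the store with durable storage and run create and update inside a transaction scope, as noted above.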

A service instance binding is looked up for a service instance with the help of the following supported lookup techniques:
ClusterServiceClassExternalName and ClusterServicePlanExternalName
ClusterServiceClassExternalID and ClusterServicePlanExternalID
ClusterServiceClassName and ClusterServicePlanName
ServiceClassExternalName and ServicePlanExternalName
ServiceClassExternalID and ServicePlanExternalID
ServiceClassName and ServicePlanName


Thursday, July 25, 2019

We continue with our discussion of Keycloak service broker on Kubernetes.

Service brokers are independent and they are not connected except by passing parameters.
https://github.com/kubernetes-sigs/service-catalog/blob/master/docs/parameters.md
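For instance, parameters ride on the ServiceInstance resource itself and are passed through to the broker; in the following sketch the class name, plan name and parameter keys are all placeholders:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: logging-instance
  namespace: default
spec:
  clusterServiceClassExternalName: logging-service   # placeholder class name
  clusterServicePlanExternalName: standard           # placeholder plan name
  parameters:                                        # opaque to the catalog, passed to the broker
    retentionDays: 30
    indexName: app-logs
```

The service catalog does not interpret the parameters block; only the broker that provisions the instance gives it meaning, which is what keeps the brokers decoupled.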

Advantages of service catalog

Enable services to be hosted outside cluster
Adhere to OSBA API
Allow services to be independent and scalable and define their own resources.
The service catalog allows services to own and describe their policies

Disadvantages of service catalog
They are not for relations or dependencies between services
They cannot handle synchronization between services
The infrastructure cannot look into the resources
There is no querying or mapping of resources to services other than what is declared
Annotations and tags are not supported in a way that service broker mapping for resources can be looked up programmatically.

The upshot is that the service broker seems to bring on the complexities of service-oriented architecture, where composition is a pillar of organization.