Saturday, June 1, 2019

The Kubernetes framework does not need to bundle up all the value additions from routines performed across applications. Instead, it can pass the data through to hosts such as the public cloud and leverage the technologies of the host and the cloud. This technique allows offloading health and performance monitoring to external layers that may already have significant acceptance and consistency.
No new tools, plugins, add-ons, or packages are needed by the application when Kubernetes supports these routines. At the same time, applications can choose when to evaluate the conditions necessary for distributing their modules into parts. This frees up the applications and their packages; the packages are increasingly written to be hosted on their own pods.

Separating the pods also improves modularity and reuse across application clients, and it brings the advantages of isolation, easier troubleshooting, and easier maintenance.

Applications can make use of the declarative format of their deployment specifications. There are several advantages to specifying options and values in a configuration file, but one of the clearest is that the application can reuse the Kubernetes deployment logic and keep all of its specifications as mere configuration. With no code required for the deployment, the application is simpler to deploy.
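As a sketch, a declarative deployment specification for a small application might look like the following (the name and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: example/app:1.0
        ports:
        - containerPort: 8080
```

A file like this is applied with kubectl apply -f, can be validated offline with kubectl apply --dry-run -f, and can be kept under version control alongside the application.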

Another advantage is that the configurations can be versioned, compared, and verified offline without any deployment being attempted. This makes it easy to correct configurations by rolling forward or backward between versions, to find out whether configurations have matching versions, and to verify that the entries are syntactically and semantically correct.

Configuration files have been used for a long time, but the format has more recently evolved into a terse, simply indented form such as YAML. This saves space and reduces errors in authoring the configuration files. The number of files written no longer needs to depend on the size of the deployment.

Friday, May 31, 2019

We continue our discussion of the Kubernetes framework.

Kubernetes can scale up or down the number of pods an instance supports. Functionality such as load balancing, API gatekeeping, the nginx ingress controller, and others is important to applications. These routines are therefore provided out of the box by the Kubernetes framework. The only observation here is that this is a constant feedback cycle: the feedback from the applications improves the offerings from the host.
An example of this cycle can be seen with the Operator SDK. Originally, operators were meant to make applications easy to deploy. While there are several tools to facilitate this, Kubernetes proposed deployment via operators. While applications started out with one operator, today they tend to write more than one. In recognition of this fact, there are now features to support operators dedicated to metrics. These metrics operators are new even for the operator-sdk, which as a tool enables boilerplate code to be generated for most applications.

Thursday, May 30, 2019

Kubernetes is a container orchestration framework, and we have been discussing emerging trends in application development.


Storage is a layer best left outside the application logic. Applications continue to streamline their business logic while the storage layer provides best practices. When the storage layer is managed, its appeal grows. For applications that use the container framework, storage is generally network-accessible. There, tasks such as storage virtualization, replication, and maintenance can be converged into a layer external to the application and reachable over the network.
Storage is not the only such layer. The container framework benefits from its own platform as a service. Routines such as monitoring, healing, and auditing can all be delegated to plugins that ship out of the box with the container framework. These bundles can then come in different sizes.
Kubernetes itself is very helpful for applications of all sizes. These plugins are merely add-ons.

Wednesday, May 29, 2019

This design, and the relaxation of performance requirements for applications hosted on Kubernetes, facilitates different connectors, not just volume mounts. Just as log appenders publish logs to a variety of destinations, connectors help persist data written by the application to a variety of storage providers, using consolidators, queues, caches, and mechanisms that know how and when to write the data.
Unfortunately, the native Kubernetes API does not support any form of storage connector other than the VolumeMount, but it does allow services to be written as Kubernetes applications that accept data published over http(s), just as a time-series database server accepts all kinds of events over the network. The configuration of the endpoint, the binding of the service, and the contract associated with the service vary from app to app. This may call for a well-known consolidator app that can provide different storage classes supporting different application profiles. Appenders and connectors are popular design patterns that are re-used often and justify their business value.
The shared data volume can be made read-only and accessible only to the pods, which facilitates access restrictions. While authentication, authorization, and auditing can be enabled for storage connectors, they will still require RBAC access; therefore, service accounts become necessary with storage connectors. A side benefit of this security is that accesses can now be monitored and alerted on.

Tuesday, May 28, 2019

Kubernetes provides a familiar notion of a shared storage system with the help of VolumeMounts accessible from each container. The idea is that a shared file system may be considered local to the container and reused regardless of the container. File-system protocols have always facilitated local and remote file storage with their support for distributed file systems. This allows databases, configurations, and secrets to be available on disk across containers and provides a single point of maintenance. Most storage, regardless of the access protocol (file system, http(s), block, or stream), is essentially moving data to storage, so there is a transfer and latency involved.
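A minimal sketch of a pod sharing a VolumeMount between two containers, with the second container mounting it read-only (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
      readOnly: true
```

An emptyDir volume is used here for brevity; a network-backed persistent volume would be mounted the same way through a persistentVolumeClaim.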
The only question has been what latency and I/O throughput are acceptable to the application, and this has guided decisions about storage systems, appliances, and their integrations. When the storage is tightly coupled with the compute, such as between a database server and a database file, all the reads and writes incurred under performance benchmarks require careful arrangement of bytes: their packing, organization, indexes, checksums, and error codes. But most applications hosted on Kubernetes do not have the same requirements as a database server.

Monday, May 27, 2019

The following is a continuation of the summary of some of the core concepts of Kubernetes.

Namespaces seclude the names of resources. They cannot be nested within one another, and each resource belongs to exactly one namespace. They provide a means to divide cluster resources between multiple users.

Most Kubernetes resources, such as pods, services, and replication controllers, live in a namespace. However, low-level resources such as nodes and persistent volumes are not in any namespace.
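A namespace itself is declared with a small manifest (the name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
```

Namespaced resources can then be created within it by passing -n team-a to kubectl.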

Kubernetes control-plane communication is bidirectional between the cluster and its master. The master hosts an apiserver that is configured to listen for remote connections. The apiserver reaches out to the kubelets to fetch logs, attach to running pods, and provide the port-forwarding functionality. The apiserver manages nodes, pods, and services.

Kubernetes has cluster-level logging. This collects all of the container logs and sends them to a central log store, which is then easy to search or browse via an interface. Common kubectl commands are also included. The per-container log file goes through rotations, and the "kubectl logs" command reads from the latest log file.

System components do not always run in containers. So, in cases where systemd is available, the logs are written to journald. The node-level logging agent runs on each node. A sidecar container streams logs to stdout, picking them up from the application container using a logging agent.
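A minimal sketch of the sidecar pattern, where a sidecar tails the application's log file to stdout so that the node-level agent can pick it up (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  volumes:
  - name: varlog
    emptyDir: {}
  containers:
  - name: app
    image: busybox
    args: [/bin/sh, -c, 'while true; do date >> /var/log/app.log; sleep 1; done']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  - name: log-streamer
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
```

Running kubectl logs app-with-logging-sidecar -c log-streamer would then show the application's log lines on stdout.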

Logs can also be directly written from the application to a backend log store.




Sunday, May 26, 2019

Today I discuss a coding exercise:
Let us traverse an m x n matrix spirally to find the kth element. A typical method for this would look like:
int GetKth(int[,] A, int m, int n, int k)
{
    // A has n rows and m columns; k is 1-based along the spiral.
    if (n < 1 || m < 1) return -1;
    if (k <= m)                          // top row, left to right
        return A[0, k - 1];
    if (k <= m + n - 1)                  // right column, top to bottom
        return A[k - m, m - 1];
    if (k <= m + n - 1 + m - 1)          // bottom row, right to left
        return A[n - 1, m - 1 - (k - (m + n - 1))];
    if (k <= m + n - 1 + m - 1 + n - 2)  // left column, bottom to top
        return A[n - 1 - (k - (m + n - 1 + m - 1)), 0];
    // Peel the perimeter (2m + 2n - 4 elements) and recurse on the inner matrix.
    // SubArray is an assumed helper that extracts the inner (m-2) x (n-2) submatrix.
    return GetKth(A.SubArray(1, 1, m - 2, n - 2), m - 2, n - 2, k - (2 * m + 2 * n - 4));
}
Notice that this makes incremental, albeit slow, progress toward the goal, peeling one small but meaningful perimeter at a time toward the finish.
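Since SubArray above is an assumed helper, the same perimeter-by-perimeter walk can be sanity-checked with a brute-force sketch in Python:

```python
def kth_spiral(matrix, k):
    """Return the k-th (1-based) element of a spiral walk, or None if out of range."""
    if not matrix or not matrix[0]:
        return None
    top, bottom = 0, len(matrix) - 1
    left, right = 0, len(matrix[0]) - 1
    order = []
    while top <= bottom and left <= right:
        for c in range(left, right + 1):          # top row, left to right
            order.append(matrix[top][c])
        for r in range(top + 1, bottom + 1):      # right column, top to bottom
            order.append(matrix[r][right])
        if top < bottom:                          # bottom row, right to left
            for c in range(right - 1, left - 1, -1):
                order.append(matrix[bottom][c])
        if left < right:                          # left column, bottom to top
            for r in range(bottom - 1, top, -1):
                order.append(matrix[r][left])
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return order[k - 1] if 1 <= k <= len(order) else None
```

For the 3 x 3 matrix [[1,2,3],[4,5,6],[7,8,9]], the spiral order is 1, 2, 3, 6, 9, 8, 7, 4, 5, so the 5th element is 9 and the 9th is 5.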
Instead, we could also skip ahead. This unpeels the spiral by skipping several adjacent rows and columns at a time. The value of k has to exceed half the number of elements in the matrix before this is used. When k is in this range, it can be reduced by skipping eight, four, or two adjacent perimeters at a time until it fits within half of the remaining matrix, at which point the method above can walk the spiral. If we skip adjacent perimeters from the outermost in the m x n matrix, the numbers of elements skipped are 2m+2n-4, 2m+2n-12, 2m+2n-20, and so on. In such cases we can quickly reduce k until we can walk the perimeter spiral of the inner matrix starting from its top left.

This follows the pattern 2m + 2(n-2), 2(m-2) + 2(n-4), 2(m-4) + 2(n-6), …

For example, skipping two perimeters at a time (each pair removes 4m + 4n - 16 elements and shrinks each dimension by four):

while (k - (4*m + 4*n - 16) > m*n/2) {
    k -= 4*m + 4*n - 16;
    m -= 4;
    n -= 4;
}

This pattern can be rewritten as 2(m + (n-2)), 2((m-2) + (n-4)), 2((m-4) + (n-6)), …

which can be written as 2(m + n - 2(0+1)), 2(m + n - 2(1+2)), 2(m + n - 2(2+3)), …

which can be written as Sum (i = 0, 1, 2, …) of 2(m + n - 2(i + i + 1))

which can be written as Sum (i = 0, 1, 2, …) of (2m + 2n - 8i - 4)
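The closed form can be checked numerically: the i-th perimeter (0-indexed from the outside) belongs to an inner matrix of dimensions (m - 2i) x (n - 2i), so its size should match the term 2m + 2n - 8i - 4, and all perimeters together should account for every element (a Python sketch):

```python
def ring_size(m, n, i):
    # Elements on the i-th perimeter: the inner matrix has
    # dimensions (m - 2i) x (n - 2i), whose perimeter holds
    # 2*(m - 2i) + 2*(n - 2i) - 4 elements.
    return 2 * (m - 2 * i) + 2 * (n - 2 * i) - 4

def closed_form(m, n, i):
    # The simplified i-th term of the sum above.
    return 2 * m + 2 * n - 8 * i - 4

# A 10 x 8 matrix has 4 perimeters; their sizes sum to 10 * 8 = 80.
sizes = [ring_size(10, 8, i) for i in range(4)]
```

Here sizes works out to [32, 24, 16, 8], matching the closed form term by term and summing to the 80 elements of the matrix.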