Thursday, May 30, 2019

Kubernetes is a container orchestration framework, and in this post we discuss emerging trends in application development around it.


Storage is a layer best left outside the application logic. Applications continue to streamline their business logic while the storage layer provides best practices. When the storage layer is managed, its appeal grows. For applications that use the container framework, storage is generally network accessible. There, tasks such as storage virtualization, replication and maintenance can be converged into a layer external to the application and reachable over the network.
Storage is not the only such layer. The container framework benefits from its own platform as a service. Routines such as monitoring, healing and auditing can all be delegated to plugins shipped out of the box with the container framework, and these bundles can come in different sizes.
Kubernetes itself is very helpful for applications of all sizes. These plugins are merely add-ons.
Kubernetes can scale up or down the number of pods an instance supports. Functionality such as load balancing, API gatekeeping and ingress controllers like nginx is important to applications, so these routines are provided out of the box by the Kubernetes framework. The only observation here is that this is a constant feedback cycle: feedback from the applications improves the offerings from the host.
An example of the above cycle can be seen with the operator-sdk. Originally, operators were meant to make applications easy to deploy. While there are several tools that facilitate deployment, Kubernetes proposed deployment via operators. Applications started out with one operator, but today they tend to write more than one. It is in recognition of this fact that Kubernetes now has new features to support operators dedicated to metrics. These metrics operators are new even for the operator-sdk, which as a tool generates boilerplate code for most applications.
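The controller half of an operator is essentially a reconcile loop: observe actual state, compare it with the desired state declared in a custom resource, and act to converge the two. The following is a minimal, self-contained Go sketch of that idea; the type and function names are illustrative and only mimic the controller-runtime convention, with no operator-sdk dependency:

```go
package main

import "fmt"

// DesiredState is a stand-in for the spec of a custom resource.
type DesiredState struct {
	Replicas int
}

// ActualState is a stand-in for what the cluster currently runs.
type ActualState struct {
	Replicas int
}

// Reconcile nudges actual state toward desired state one step at a time,
// returning true when another pass is still needed (a "requeue").
func Reconcile(desired DesiredState, actual *ActualState) (requeue bool) {
	switch {
	case actual.Replicas < desired.Replicas:
		actual.Replicas++ // scale up by one pod
	case actual.Replicas > desired.Replicas:
		actual.Replicas-- // scale down by one pod
	}
	return actual.Replicas != desired.Replicas
}

func main() {
	desired := DesiredState{Replicas: 3}
	actual := &ActualState{Replicas: 0}
	for Reconcile(desired, actual) {
	}
	fmt.Println(actual.Replicas) // converges to the desired count
}
```

A real controller receives these passes from a watch on the apiserver rather than a loop, but the converge-and-requeue shape is the same.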

Wednesday, May 29, 2019

This design, and the relaxation of performance requirements for applications hosted on Kubernetes, facilitates different connectors, not just volume mounts. Just as log appenders publish logs to a variety of destinations, connectors help persist data written by the application to a variety of storage providers, using consolidators, queues, caches and mechanisms that know how and when to write the data.
Unfortunately, the native Kubernetes API does not support any form of storage connector other than the VolumeMount, but it does allow services to be written as Kubernetes applications that accept data published over http(s), just as a time-series database server accepts all kinds of events over the network. The configuration of the endpoint, the binding of the service and the contract associated with the service vary from app to app. This may call for a well-known consolidator app that provides different storage classes supporting different application profiles. Appenders and connectors are popular design patterns that get re-used often and justify their business value.
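The appender/connector pattern can be sketched as a small interface with interchangeable backends. The names here (Sink, BufferSink, StdoutSink, Persist) are illustrative, not from any Kubernetes or library API:

```go
package main

import (
	"fmt"
	"strings"
)

// Sink is the connector contract: anything that can persist a record.
type Sink interface {
	Write(record string) error
}

// BufferSink accumulates records in memory, like a consolidator or queue
// that decides later how and when to flush.
type BufferSink struct {
	records []string
}

func (b *BufferSink) Write(record string) error {
	b.records = append(b.records, record)
	return nil
}

// StdoutSink publishes each record immediately, like a log appender.
type StdoutSink struct{}

func (StdoutSink) Write(record string) error {
	_, err := fmt.Println(record)
	return err
}

// Persist fans a record out to every configured connector.
func Persist(record string, sinks ...Sink) error {
	for _, s := range sinks {
		if err := s.Write(record); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	buf := &BufferSink{}
	_ = Persist("order-created", buf, StdoutSink{})
	fmt.Println(strings.Join(buf.records, ","))
}
```

Swapping a backend then means adding a new Sink implementation, while the application keeps calling Persist unchanged.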
The shared data volume can be made read-only and accessible only to the pods, which facilitates access restrictions. While authentication, authorization and auditing can be enabled for storage connectors, they will still require RBAC access. Therefore, service accounts become necessary with storage connectors. A side benefit of this security is that accesses can now be monitored and alerted on.

Tuesday, May 28, 2019

Kubernetes provides a familiar notion of a shared storage system with the help of VolumeMounts accessible from each container. The idea is that a shared file system may be considered local to the container and reused regardless of the container. File system protocols have always facilitated local and remote file storage with their support for distributed file systems. This allows databases, configurations and secrets to be available on disk across containers and provides a single point of maintenance. Most storage, regardless of access protocol (file system, http(s), block or stream), is essentially moving data to storage, so there is a transfer and latency involved.
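The VolumeMount idea can be illustrated with a minimal pod spec; the volume name, image, NFS server address and paths below are placeholders, not from any particular deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-storage-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-config          # the mount appears local to the container
      mountPath: /etc/app/config
      readOnly: true
  volumes:
  - name: shared-config
    nfs:                           # a distributed file system backs the mount
      server: nfs.example.local    # placeholder address
      path: /exports/config
```

Any pod mounting the same volume sees the same files, which is what gives a single point of maintenance for configurations and secrets.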
The only question has been what latency and I/O throughput are acceptable to the application, and this has guided the choice of storage systems, appliances and their integrations. When storage is tightly coupled with compute, such as between a database server and a database file, all the reads and writes exercised by performance benchmarks require careful arrangement of bytes: their packing, organization, indexes, checksums and error codes. But most applications hosted on Kubernetes don't have the same requirements as a database server.

Monday, May 27, 2019

The following is a continuation of the summary of some of the core concepts of Kubernetes.

Namespaces seclude the names of resources. They cannot be nested within one another. They provide a means to divide resources between multiple users.

Most Kubernetes resources such as pods, services, replication controllers and others live in some namespace. However, low-level resources such as nodes and persistent volumes are not in any namespace.

Kubernetes control-plane communication is bidirectional, from the cluster to its master and vice versa. The master hosts an apiserver that is configured to listen for remote connections. The apiserver reaches out to the kubelets to fetch logs, attach to running pods and provide port-forwarding functionality. The apiserver manages nodes, pods and services.

Kubernetes has cluster-level logging. This collects all of the container logs and sends them to a central log store. The centralized store is then easy to search or browse via an interface, and common kubectl commands are also included. The log file is named log-file.log and goes through rotations; the “kubectl logs” command uses this log file.

System components do not always run in a container. So, in cases where systemd is available, their logs are written to journald. The node-level logging agent runs on each node. A sidecar container streams logs from the application container to its own stdout, where they are picked up by the logging agent.

Logs can also be directly written from the application to a backend log store.




Sunday, May 26, 2019

Today I discuss a coding exercise:
Let us traverse an m x n matrix spirally to find the kth element. A typical method for this would look like:
int GetKth(int[,] A, int m, int n, int k)
{
    // A has n rows and m columns; k is 1-based along the spiral.
    if (n < 1 || m < 1) return -1;
    if (k <= m)
        return A[0, k-1];                     // top row, left to right
    if (k <= m+n-1)
        return A[k-m, m-1];                   // right column, top to bottom
    if (k <= m+n-1+m-1)
        return A[n-1, m-1-(k-(m+n-1))];       // bottom row, right to left
    if (k <= m+n-1+m-1+n-2)
        return A[n-1-(k-(m+n-1+m-1)), 0];     // left column, bottom to top
    // SubArray is an assumed helper that copies the inner (n-2) x (m-2) matrix.
    return GetKth(A.SubArray(1, 1, m-2, n-2), m-2, n-2, k-(2*m+2*n-4));
}
Notice that this makes incremental, albeit slow, progress towards the goal: each recursive call peels one perimeter off the matrix.
Instead, we could also skip ahead. This unpeels the spiral by skipping several adjacent perimeters of rows and columns at a time. The value of k has to be in the upper half of the number of elements in the matrix before this is worthwhile.
When k is in this range, it can be reduced by 8x, 4x, 2x adjacent perimeter elements before it fits within half of the given matrix, and the above method to walk the spiral can be used. If we skip adjacent perimeters from the outermost in the m x n matrix, the numbers of elements we skip are 2m+2n-4, 2m+2n-12, 2m+2n-20, and so on. In such cases we can quickly reduce k until we can walk the perimeter spiral of the inner matrix starting from the top left.

This follows the pattern 2m + 2(n-2), 2(m-2) + 2(n-4), 2(m-4) + 2(n-6), …

while (k - (4*m + 4*n - 16) > m*n/2) {

    k -= 4*m + 4*n - 16;   // two adjacent perimeters: (2m+2n-4) + (2m+2n-12)

    m -= 4;                // two perimeters removed from each side

    n -= 4;

}

This pattern can be rewritten as 2(m+(n-2)), 2((m-2)+(n-4)), 2((m-4)+(n-6)), …
which can be written as 2(m+n-2(0+1)), 2(m+n-2(1+2)), 2(m+n-2(2+3)), …
which can be written as Sum (i = 0, 1, 2, …) (2m + 2n - 4(2i+1))
which can be written as Sum (i = 0, 1, 2, …) (2m + 2n - 8i - 4)


Saturday, May 25, 2019

A piece of the puzzle:
This essay talks about connecting the public cloud with a third-party multi-factor authentication (MFA) provider, as an insight into identity-related technologies in modern computing. Many organizations use multi-factor authentication for their applications. At the same time, they expect the machines deployed in their private cloud to be joined to their corporate network. If this private cloud were hosted on a public cloud as a virtual private cloud, it would require some form of Active Directory connector. This AD connector is a proxy that connects to the on-premise Active Directory serving as the membership registry for the entire organization. By configuring the connector to work with a third-party MFA provider like Okta, we centralize all access requests and streamline the process.
Each MFA provider makes an agent available for download, and it typically speaks the LDAP protocol with the membership registry instance. The agent is installed on a server with access to the domain controller.
We can eliminate login and password hassles by connecting public cloud resources to the organization's membership provider so that existing corporate credentials can be used to log in.
Furthermore, for new credentials, this lets us automatically provision, update or de-provision public cloud accounts when we update the organization's membership provider on any Windows Server with access to the domain controller.
Thus a single corporate account can bridge public and private clouds for a unified sign-in experience.

Friday, May 24, 2019

The following is a summary of some of the core concepts of Kubernetes as required to write an operator. Before we begin, Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. It is often a strategic decision for any company because it decouples the application from the hosts so that the same application can work elsewhere with minimal disruption to its use.
An operator is a way of automating the deployment of an application on a Kubernetes cluster. It is written with the help of template source code generated by a tool called the operator-sdk. The tool builds three components: custom resources, APIs and controllers. The custom resources are usually declarative definitions of the Kubernetes resources required by the application, grouped as suited for its deployment. The API is for the custom service required to deploy the application, and the controller watches for this service.
Kubernetes does not limit the types of applications that are supported. It provides building blocks to the application. Containers only help isolate modules of the application into well-defined boundaries that can run with operating-system-level virtualization.
Kubernetes exposes a set of APIs that are used internally by the command-line tool called kubectl. They are also used externally by other applications. This API follows the regular REST convention and is versioned with path qualifiers such as v1, v1alpha1 or v1beta1; the alpha and beta qualifiers are used with extensions to the APIs.
Kubernetes supports imperative commands, imperative object configuration, and declarative object configuration. These are different approaches to managing objects. The first approach operates on live objects and is recommended for a development environment. The latter two are configurations that operate on individual files or a set of files, and these are better suited for production environments.