Tuesday, July 16, 2019

#codingexercise
Find an element in a sorted matrix

Propagating a frontier downward and to the right from the top-left corner lets us print the elements of a row- and column-sorted matrix in sorted order; the same ordering also allows a linear-time search for a single element.
// uses java.util.{PriorityQueue, Comparator, Set, HashSet}
void printSerialized(int[][] A, int rows, int cols)
{
    if (A == null || rows == 0 || cols == 0) return;

    // min-heap of (row, col) pairs ordered by the matrix value at that cell
    PriorityQueue<int[]> items = new PriorityQueue<>(
            Comparator.comparingInt(p -> A[p[0]][p[1]]));
    Set<Integer> seen = new HashSet<>();
    items.add(new int[]{0, 0});
    seen.add(0);

    while (!items.isEmpty())
    {
        // print the current minimum of the frontier
        int[] item = items.poll();
        System.out.println(A[item[0]][item[1]]);

        // add the next candidates: the cell below and the cell to the right
        int[] down = {item[0] + 1, item[1]};
        int[] right = {item[0], item[1] + 1};
        if (down[0] < rows && seen.add(down[0] * cols + down[1]))
            items.add(down);
        if (right[1] < cols && seen.add(right[0] * cols + right[1]))
            items.add(right);
    }
}
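The exercise title asks to find an element; a standard linear-time approach on a row- and column-sorted matrix is the staircase search, sketched below (the method name is illustrative):

```java
// Staircase search: start at the top-right corner of a matrix whose rows
// and columns are both sorted in ascending order. Each comparison discards
// either the current column (value too large) or the current row (value
// too small), so the search takes O(rows + cols) time.
static boolean findInSortedMatrix(int[][] A, int target) {
    if (A == null || A.length == 0 || A[0].length == 0) return false;
    int row = 0;
    int col = A[0].length - 1;
    while (row < A.length && col >= 0) {
        if (A[row][col] == target) return true;
        if (A[row][col] > target)
            col--;      // everything below in this column is even larger
        else
            row++;      // everything to the left in this row is even smaller
    }
    return false;
}
```

Each step eliminates a full row or column, so at most rows + cols comparisons are made.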

Sunday, July 14, 2019

Today we continue with our discussion on Kubernetes. We were discussing refresh tokens, and now we will discuss identity providers. The identity provider serves two purposes. First, it honors the OpenID Connect way of providing identities; as part of that, it must support discovery, which is required to make calls. Second, it must support the generation of tokens and their injection into the kube configuration. A variety of identity providers can support both of these functions.
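As an illustration of injecting tokens into the kube configuration, kubectl's OIDC auth provider can be configured as follows (the issuer URL, user name, client credentials, and tokens are placeholders to be substituted with your provider's values):

```shell
kubectl config set-credentials alice \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://idp.example.com \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=client-secret=CLIENT_SECRET \
  --auth-provider-arg=id-token=ID_TOKEN \
  --auth-provider-arg=refresh-token=REFRESH_TOKEN
```

With the refresh token in place, kubectl can renew the ID token on the user's behalf when it expires.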

Saturday, July 13, 2019

Today we continue with our discussion on Kubernetes user accounts and refresh tokens from our earlier post. The refresh token is retrieved from the identity provider's authorization URL. Kubectl refreshes the ID token with the help of the refresh token; Kubernetes itself never uses the refresh token, which is meant to be a secret for the user. A refresh token is generated only once and is only ever passed between the user and the identity provider. This makes it more secure than long-lived bearer tokens. It is also opaque, so no personally identifiable information is divulged. The Kubernetes dashboard uses the ID token and refresh token. It does not have a login system, so it requires an existing token. The dashboard has therefore required the use of a reverse proxy that injects the id_token on each request. The same reverse proxy then refreshes the token as needed. This alleviates user authentication from the Kubernetes dashboard, so much so that the dashboard can now be directly included with the user interface of the applications hosted on the Kubernetes system. Most of the panels in the dashboard are read-only, so this is very helpful to all users.

Friday, July 12, 2019

We continue with our discussion on Kubernetes service accounts and their rotation from our earlier post. A service account token is a secret that needs to be guarded; if it is leaked, it can be misused. The token is available in the volume mount at /var/run/secrets/kubernetes.io/serviceaccount. It should not be injected as a secret or configuration in other pods, because doing so makes the token harder to rotate.
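As an illustration, from inside a pod the mounted token can be read and used as a bearer token against the API server (the address below is the standard in-cluster DNS name; a real cluster may restrict what the account is authorized to see):

```shell
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
TOKEN=$(cat "$SA_DIR/token")
# Call the API server over TLS, trusting the cluster CA mounted beside the token
curl --cacert "$SA_DIR/ca.crt" \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/api
```

Because the token is projected into the pod by the platform, rotating it does not require touching any other pod's secrets or configuration.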
The mechanism to validate service accounts is different from the mechanisms to validate user accounts. Kubernetes never keeps the user's credentials, so they cannot be leaked. A token representing the user is short-lived, so it is useless even when intercepted. The identity asserted in a request and presented via OpenID Connect holds both user and group information. Only a proxy is needed between the identity layer and Kubernetes. The ID token is a bearer token and gives access to the bearer without any validation; its only defense is its short expiry time. The refresh token, on the other hand, can be used to renew the ID token after its expiry.

Thursday, July 11, 2019

Rotating service accounts on Kubernetes: an external key management solution
Unlike traditional application frameworks, Kubernetes has a special meaning for service accounts. While service accounts continue to be different from user accounts, where the former represent applications and the latter represent users, Kubernetes persists service accounts while delegating user accounts to identity providers. The notion behind not persisting user accounts is that Kubernetes does not authenticate users; it validates assertions. Each request must assert an identity to Kubernetes. There is no web interface, login, or session associated with this identity. Kubernetes honors requests, and every action is specified with the help of an API request. Each request is unique, distinct, and self-contained for authentication and authorization purposes. Identity is merely an assertion in these requests.
Service accounts are meant for interactions between applications and Kubernetes resources. A service account is injected by the admission controller at the time of resource creation if one is not provided already; this is the default service account and bears that name. A service account should never be conflated with a user account, which brings many drawbacks. It should only be authorized with role-based access control; any other scheme leaves numerous items to audit. And it should never be leaked, because a leaked service account can be abused indefinitely.
This last item requires rotation of service accounts, so that the misuse of a leaked service account is limited to the window between its discovery and the issue of a new account. A service account is represented by a token mounted at the location /var/run/secrets/kubernetes.io/serviceaccount in a Kubernetes pod. A rotated token can be injected and made to work with the same resources as before by wrapping one secret inside another.
The persistence of service accounts and their usage as secrets makes them a good fit for an autonomous secret-management system that can keep track of secrets and rotate them as necessary. The external key manager that manages keys and certificates was built for a similar purpose, where the secret was a key or certificate. That system can work well for Kubernetes, since the framework poses no limitation to injection or rotation.
The automation of periodic and on-demand rotation improves security and convenience for all Kubernetes usage.


We continue with the essay on the comparison of Kubernetes with platform-as-a-service frameworks. We compare one of the core functionalities of such frameworks to see the differences in what they offer. We study logging as applicable to container frameworks and PaaS.
Logging follows a side-car model of deployment in Kubernetes, so that apps can focus on their logic while having the benefits of managed logging.
In this model, a fluentd configuration is used to pool the log entries from the different log files produced by the application. The logging format and print specifiers can be added directly to the fluentd configuration alongside each source. These configurations are essentially collections of individual log sources. The logging thus collected is then merged onto a persistent volume. This volume is accessible for read-only operations without affecting the individual sources. Separating the read-only logs from the log sources that publish them helps in forming a reporting stack downstream, which can create beautiful charts and graphs.
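A minimal fluentd source/match pair along these lines might look as follows (the file paths and tag are hypothetical, and the format specifics would go in the parse section per source):

```
<source>
  @type tail
  path /var/log/app/app.log            # hypothetical application log file
  pos_file /var/log/fluentd/app.log.pos
  tag app.logs
  <parse>
    @type none                         # pass each line through unparsed
  </parse>
</source>
<match app.**>
  @type file                           # merge collected entries onto a mounted volume
  path /mnt/logs/app
</match>
```

Each additional log file gets its own source block, while the match block funnels everything to the shared, read-only destination.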
These log files can also be indexed into an external log analysis product to help with querying based on shell like commands.
The separation of indexing and reporting concerns into downstream products alleviates the maintenance logic for containers and applications while allowing their stacks to vary as per customer convenience.
Logging as described above is therefore much more gradual and methodical than adding a log analysis product and hardwiring the data sources in the PaaS framework. There, logging was specified by a syslog drain, and the timestamp in individual entries was that of the host. This was limiting to applications, since they had to specify additional logging information per entry. The logging improved significantly with this separation of concerns and worked across load balancing and auto-scaling. Liveness and readiness probes could also be added to the deployments, and the use of Elasticsearch-Fluentd-Kibana significantly improved monitoring.

Tuesday, July 9, 2019

We continue with the essay on the comparison of Kubernetes with platform-as-a-service frameworks. PaaS may be called out as being restrictive to applications: dictating the choice of application frameworks, restricting the supported language runtimes, and distinguishing apps from services. Kubernetes, on the other hand, aims to support an extremely diverse variety of workloads. As long as the application has been compiled to run in a container, it will work with Kubernetes. Kubernetes evolved as an industry effort on top of the operating system's native Linux container support. It can be considered a step towards a truly container-centric development environment. Containers decouple applications from infrastructure, which separates dev from ops.
Containers made PaaS possible. Containers help compile the code for isolation. PaaS enables applications and containers to run independently. PaaS containers were not open source; they were proprietary to PaaS. This changed the model towards development-centric container frameworks where applications could now be written with their own custom containers. Docker and Kubernetes fueled this move. Many people consider a PaaS platform to be fixed and a container framework to be flexible; this is actually a shifting boundary between dev-ops and developers.

Platform-as-a-service was formed as an abstraction over cloud service providers. Container as a service was formed for the convenience of cloud workloads. Applications written in container frameworks can be moved around from cloud to cloud. For example, a Kubernetes application can be run in different clouds.

Containers made it possible for apps to run anywhere without any virtualization layer or middleware. Container frameworks only increased the coverage to span different systems. Container images also became open source. This made it easier for developers to move from PaaS to container frameworks. It is expected that nearly seventy percent of cloud workloads will run on containers.
In that case, what is the future of PaaS? Containers still need to be managed, deployed, and monitored, and this is where PaaS is helpful.
One of the improvements we could see going forward is containers dedicated to these purposes in a side-car model of deployment, so that apps can focus on their logic while a set of sidecars automatically brings the benefits of PaaS to the container framework. Cert-managers and logging sidecars are examples of this.
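Such a sidecar arrangement can be sketched as a pod spec where the application and a fluentd log-shipper share a log volume (the names and images below are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging
spec:
  volumes:
  - name: app-logs
    emptyDir: {}                      # shared scratch volume for log files
  containers:
  - name: app
    image: example.com/app:latest     # placeholder application image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app         # the app writes its logs here
  - name: log-shipper
    image: fluent/fluentd:v1.16-1     # sidecar that tails and ships the logs
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
      readOnly: true                  # the sidecar only reads, never writes
```

The application container needs no logging code beyond writing files; shipping, indexing, and reporting are all delegated to the sidecar and the downstream stack.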