Wednesday, June 5, 2019

The main advantages of the sidecar proxy are: 
1) It is independent of the primary application in terms of environment and programming language. 
2) A sidecar can access the same resources as the primary application. It can even be used for monitoring. 
3) Communication between the sidecar proxy and the main application stays on the pod's loopback interface, so latency is negligible. 
4) When the application does not provide an extensibility mechanism, the sidecar can extend its functionality, often in its own container. 
NGINX has long served as a reverse proxy. The authentication a reverse proxy provides is not available from Kubernetes Services. Therefore, a reverse proxy is sometimes run in the same pod as the application, in a sidecar container. Only the sidecar proxy is exposed to the world outside the pod, while the connection between the application and the proxy remains private. The Envoy proxy demonstrates this, and Istio's sidecar injector places this proxy inside the pod. All HTTP port 80 traffic then enters through the proxy, which makes a corresponding request to the application. Unless the traffic entering the sidecar proxy carries valid basic authentication, no request is made to the application. 
# Kubernetes deployment to deploy sidecar proxy + application in a single pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: example-application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-application
  template:
    metadata:
      labels:
        app: example-application
    spec:
      containers:
      - name: sidecar-proxy
        image: example-application-sidecar-proxy
        imagePullPolicy: Always
        ports:
        - containerPort: 80
      - name: application
        image: example-application
        imagePullPolicy: Always

The basic auth used with the proxy can be specific to clients rather than to users. Any Identity and Access Management (IAM) module can authenticate and authorize the user, which then translates to a token on client communications with the application. This is called the client credentials workflow: the client treats a guest user and an identified user the same, and the IAM module is responsible for ensuring that only valid users can access the client. Thus the user's request travels from the outside world to the IAM, then to the client, then to the proxy, and finally to the application. 

Tuesday, June 4, 2019

We started discussing securing the nginx controller with a sidecar proxy.
The availability of the proxy removes these concerns from the application, so the application keeps its default configuration without any tweaks while the proxy authenticates clients based on subrequests. If a subrequest returns a 401 or 403 error, the call is denied. Otherwise a success code such as 200 allows the call to propagate to the application. 
The ngx_http_auth_request_module and the Envoy proxy demonstrate this very well. These access modules can be turned on or off. They authorize an incoming request based on the result of a subrequest, usually by rewriting the request's address to wherever the subrequest will be sent. 
This results in a configuration like below:  
location /private/ {
    auth_request /auth;
    ...
}

location = /auth {
    proxy_pass ...
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
These access modules work best with containers or microservices since they usually have endpoints which we want to secure. 
These access modules work tightly with the primary application, but are placed inside their own process or container, which provides a homogeneous interface for platform services across languages and implementations of the application.

Monday, June 3, 2019


The support for mutual authentication in GoLang has its limitations:

Mutual authentication can be best described by the presence of two files – keystore and truststore.
A keystore imports a key and a certificate to identify the server to its clients.
A truststore imports only the certificates that clients present to validate themselves to the server.

Together the keystore enables the server to be validated to the clients and the truststore enables the clients to be validated to the servers.
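In Go these two roles map onto the standard library's tls.Config rather than onto keystore and truststore files. The sketch below, using throwaway self-signed certificates, shows the analogy: the Certificates field plays the keystore's part and ClientCAs the truststore's. It only assembles the configuration; a real mutual-TLS handshake would additionally need proper key-usage extensions on the certificates.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// selfSigned generates a throwaway certificate and key pair, playing
// the role that a keystore entry would play for a Java-style server.
func selfSigned(cn string) (tls.Certificate, *x509.Certificate) {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: cn},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	leaf, _ := x509.ParseCertificate(der)
	return tls.Certificate{Certificate: [][]byte{der}, PrivateKey: key}, leaf
}

func main() {
	serverCert, _ := selfSigned("server")
	_, clientLeaf := selfSigned("client")

	// Truststore analog: the pool of certificates the server will
	// accept from connecting clients.
	trust := x509.NewCertPool()
	trust.AddCert(clientLeaf)

	cfg := &tls.Config{
		Certificates: []tls.Certificate{serverCert}, // "keystore": server identity
		ClientCAs:    trust,                         // "truststore": trusted clients
		ClientAuth:   tls.RequireAndVerifyClientCert,
	}
	fmt.Println("keystore analog holds", len(cfg.Certificates), "server certificate(s)")
	fmt.Println("truststore analog trusts client:", clientLeaf.Subject.CommonName)
}
```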

The support for these in GoLang is rather limited:
golang.org/x/crypto/pkcs12 provides the ability to make SafeBags and ShroudedBags. A keystore or a truststore is essentially a collection of SafeBags or ShroudedBags; the former enclose certificates and the latter enclose private keys.
However, pkcs12 does not support making truststores, which is left to the caller of the library to implement. Support for keystores is made possible with an Encode method that takes a private key and a certificate.

Without the private key, the Encode method could be tweaked to make only a truststore; however, it then becomes the caller's task to add certificates to the truststore as they become available. The ability to pass a certificate to the caller depends entirely on the clients as they come up. If the clients are known beforehand, their certificates are also known beforehand. However, this is not always the case, as clients come up dynamically and need to register their certificates.

Most applications are unaware of the clients except for their own internal clients used with say the command line interface. Moreover, these applications delegate the transport layer security to the keystore and truststore files assuming that automations involving tools like keychain will automatically add the certificate to the concerned file. 

Yet this is not really the case, and clients need to add their certificates to the pre-existing truststore so that Kubernetes operators can install and provision the application with transport layer security. Currently this is left as a do-it-yourself approach in both the standard golang pkcs12 library and the upcoming go-pkcs12 library.

An alternative to using a keystore and truststore is to use the nginx ingress controller with a sidecar proxy.

Sunday, June 2, 2019

Today we look at a coding exercise: spiral traversal of an m x n matrix.
A typical method for this would look like:
int GetKth(int[,] A, int m, int n, int k)
{
    if (n < 1 || m < 1) return -1;
    if (k <= m)                          // top row, left to right
        return A[0, k-1];
    if (k <= m+n-1)                      // right column, top to bottom
        return A[k-m, m-1];
    if (k <= m+n-1+m-1)                  // bottom row, right to left
        return A[n-1, m-1-(k-(m+n-1))];
    if (k <= m+n-1+m-1+n-2)              // left column, bottom to top
        return A[n-1-(k-(m+n-1+m-1)), 0];
    // peel the outer ring and recurse on the inner (m-2) x (n-2) matrix
    return GetKth(A.SubArray(1, 1, m-2, n-2), m-2, n-2, k-(2*m+2*n-4));
}
The elements in the perimeter of the spiral follow a pattern as 
 2m +2 (n-2) , 2 (m-2) + 2 (n-4) , 2 (m-4) +2 (n-6), …
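To sanity-check the traversal, here is an iterative Go version of the same ring-peeling idea (the function and variable names are my own): it walks the four sides of the current ring and shrinks inward when k lies further in.

```go
package main

import "fmt"

// getKth returns the k-th element (1-based, 1 <= k <= n*m) of the
// clockwise spiral order of an n-row by m-column matrix, peeling one
// perimeter ring per loop iteration instead of recursing.
func getKth(a [][]int, n, m, k int) int {
	top, left := 0, 0
	for {
		if k <= m { // top row, left to right
			return a[top][left+k-1]
		}
		k -= m
		if k <= n-1 { // right column, top to bottom
			return a[top+k][left+m-1]
		}
		k -= n - 1
		if k <= m-1 { // bottom row, right to left
			return a[top+n-1][left+m-1-k]
		}
		k -= m - 1
		if k <= n-2 { // left column, bottom to top
			return a[top+n-1-k][left]
		}
		k -= n - 2
		// shrink to the inner (n-2) x (m-2) ring
		top++
		left++
		n -= 2
		m -= 2
	}
}

func main() {
	a := [][]int{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}
	for k := 1; k <= 9; k++ {
		fmt.Print(getKth(a, 3, 3, k), " ")
	}
	fmt.Println() // prints the spiral order 1 2 3 6 9 8 7 4 5
}
```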

The skipping does not have to be in regular intervals of fixed number of adjacent perimeter cells of the matrix. It can progress downwards from 8x, 4x, to 2x. This is similar to exponential backoff used in LAN networks.
int getNumberOfElementsToSkip(int m, int n, int x) {
    int sum = 0;
    // ring i of the spiral holds 2m + 2n - 8i - 4 elements
    for (int i = 0; i < x; i++) {
        sum += 2*m + 2*n - 8*i - 4;
    }
    return sum;
}
Now we can try in multiples of 2:
int getNumberOfPerimeterSpiralsToSkip(int m, int n, int k) {
    // start with 8
    int count = getNumberOfElementsToSkip(m, n, 8);
    if (count < k && count < m*n/2) {
        return 8;
    }
    // then with 4
    count = getNumberOfElementsToSkip(m, n, 4);
    if (count < k && count < m*n/2) {
        return 4;
    }
    // then with 2
    count = getNumberOfElementsToSkip(m, n, 2);
    if (count < k && count < m*n/2) {
        return 2;
    }
    return 1;
}
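The same backoff, written as runnable Go with the per-ring element count taken from the pattern above (ring i holds 2m + 2n - 8i - 4 cells):

```go
package main

import "fmt"

// elementsInRings returns how many cells lie in the outermost x
// perimeter rings of an n-row by m-column matrix; ring i contributes
// 2m + 2n - 8i - 4 cells (valid while the ring actually exists).
func elementsInRings(m, n, x int) int {
	sum := 0
	for i := 0; i < x; i++ {
		sum += 2*m + 2*n - 8*i - 4
	}
	return sum
}

// ringsToSkip backs off from 8 rings to 4 to 2, returning the largest
// count whose cells all precede the k-th element and cover less than
// half the matrix, mirroring the exponential-backoff idea above.
func ringsToSkip(m, n, k int) int {
	for _, x := range []int{8, 4, 2} {
		if c := elementsInRings(m, n, x); c < k && c < m*n/2 {
			return x
		}
	}
	return 1
}

func main() {
	fmt.Println(elementsInRings(10, 10, 1)) // outer ring of a 10x10: 36 cells
	fmt.Println(elementsInRings(10, 10, 2)) // two outer rings: 36 + 28 = 64
	fmt.Println(ringsToSkip(40, 40, 700))   // 4 rings (576 cells) can be skipped
}
```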

Saturday, June 1, 2019

The Kubernetes framework does not need to bundle up all the value additions from routines performed across applications. Instead it can pass the data through to hosts such as the public cloud and leverage the technologies of the host and the cloud. This technique allows offloading health and performance monitoring to external layers that may already have significant acceptance and consistency. 
There are no new tools, plugins, add-ons or packages needed by the application when Kubernetes supports these routines. At the same time, applications can choose when to evaluate the necessary conditions for distributing modules into parts. This frees up the applications and their packages. The packages are increasingly written to be hosted on their own pods.  

Separation of the pods also improves modularity and reuse across application clients. This provides the advantage of isolation, troubleshooting and maintenance.  

Applications can make use of the declarative format of their deployment specifications. There are several advantages of specifying options and values in a configuration file but one of the clear advantages is that it can reuse the Kubernetes logic and keep all of the specifications as merely configurations. Without any code for the deployment, the application will find it simpler to deploy. 

Another advantage is that the configurations can be versioned, compared and verified offline without requiring a deployment to be attempted. This makes it easy to correct configurations by rolling forward or backward between versions, to find out whether the configurations have matching versions, and to verify that the entries are syntactically and semantically correct. 

Configuration files have been used for a long time, but the format has more recently evolved into a terse, simply indented form. This saves space and reduces errors in authoring the configuration files. The number of files written no longer needs to depend on the size of the deployment.