Saturday, June 8, 2019

Docker is a container platform that allows applications to run in isolated, lightweight environments on a shared host. Spinning up containers for the duration of a set of tasks allows proper cleanup of resources for those tasks while isolating workloads from each other and from the host.
Containers are fast to start because they are not built from the ground up each time; they are instantiated from pre-fabricated images. An image is a binary artifact: like an executable, it is machine data intended for immediate instantiation of a software product or platform rather than for human reading. An image is composed of layers, and varying the options at each layer produces several flavors of the same image. Once created, images are generally not modified.
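The layered structure described above can be sketched with a minimal, hypothetical Dockerfile; the base image, package, and file names are illustrative. Each instruction produces one immutable layer, and changing the options at any step yields a new flavor of the image while earlier layers can be shared and cached:

```dockerfile
# Each instruction produces one layer; earlier layers are cached and reused.
FROM alpine:3.9                # base layer
RUN apk add --no-cache curl    # layer adding a package
COPY app.conf /etc/app.conf    # layer adding configuration (illustrative file)
CMD ["curl", "--version"]      # metadata layer: default command
```

Swapping `alpine:3.9` for another base, or the package set in the `RUN` step, is what creates the different flavors of an otherwise identical image.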
Images are then saved in a repository and re-used, which has become a popular way of packaging binaries.
Earlier, archives and tarballs of executables built in different languages were made available for users to run an application. However, that approach involved more user interaction than images do. Images are also convenient to share on registries across organizations.
However, this new packaging format poses new challenges for verification and validation. Executables often shipped with both binaries and symbols, and the symbols helped in interpreting the executables; code-analysis tools also worked well with introspection of executables. Rigorous annotations in the code, together with static analyzers for those annotations, tremendously improved confidence in the executables that were the end product of application development. With images, it is harder to tell whether anything has been injected into, or has otherwise contaminated, the image.
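One basic mitigation for the contamination problem is to pin a content hash of the image archive and re-verify it before use. The sketch below is a minimal illustration in Python using SHA-256; the byte strings are toy placeholders, not a real image, and this is not presented as Docker's own verification mechanism:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw image bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_image(data: bytes, expected_digest: str) -> bool:
    """Accept the image only if its content hash matches the pinned digest."""
    return sha256_digest(data) == expected_digest

# Pin the digest of a (toy) image archive, then re-check it later.
archive = b"fake-image-tarball-bytes"
pinned = sha256_digest(archive)
assert verify_image(archive, pinned)
# Any injected content changes the hash and fails verification.
assert not verify_image(archive + b"injected", pinned)
```

Because the digest is computed over the whole artifact, even a single injected byte is detected, though this says nothing about whether the original content was trustworthy to begin with.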
Therefore, the main option left to organizations is to control the process by which images proliferate. Strict process control, together with access control over the end product, makes it easier to vouch for the validity of images. However, peer-to-peer exchange of images is just as important for general usage as images published by corporations, so registries differentiate classes of images: public and private, as well as internal and external-facing.
When the verification and validation of images are described as standard routines regardless of the images' origin, they can be encapsulated in a collection where each routine corresponds to a certain security policy. These include standard policy definitions as well as ones custom to specific entities, departments, or time periods. Evaluating a routine for a policy is, however, an automatable task and can be made available as a software product or a development library, depending on where it falls on the spectrum between full-service and do-it-yourself.
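One way such a collection of policy routines could be encapsulated is sketched below; the `ImageInfo` fields and policy names are illustrative assumptions, not an actual product API. Each routine maps image metadata to a pass/fail result, and custom policies extend the standard set:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# An image is represented here only by the metadata a policy needs.
@dataclass
class ImageInfo:
    registry: str
    base: str
    signed: bool

# A policy routine takes image metadata and returns pass/fail.
Policy = Callable[[ImageInfo], bool]

# Standard policy definitions shared across the organization.
STANDARD_POLICIES: Dict[str, Policy] = {
    "signed-only": lambda img: img.signed,
    "approved-base": lambda img: img.base in {"alpine", "debian"},
}

def evaluate(img: ImageInfo, policies: Dict[str, Policy]) -> Dict[str, bool]:
    """Run every routine in the collection and report per-policy results."""
    return {name: policy(img) for name, policy in policies.items()}

# Custom policies (per entity, department, or time period) extend the set.
custom = dict(STANDARD_POLICIES)
custom["internal-registry"] = lambda img: img.registry == "registry.internal"

report = evaluate(ImageInfo("registry.internal", "alpine", True), custom)
assert all(report.values())
```

Packaging `evaluate` with the standard policies gives the full-service end of the spectrum, while exposing the `Policy` type for callers to supply their own routines covers the do-it-yourself end.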
Much of the validation and verification is therefore external to the binaries and images themselves, and the technology to perform it can itself be saved as custom images.
