Tuesday, January 16, 2024

IaC innovations continued...

 

 

Many might point to native support from existing tracking systems, including issue and code repositories, since files rather than databases serve better for difficult-to-automate clouds such as sovereign clouds and regions with fewer available resource types. It is true that annotating every commit in a repository with rich links to origin, growth, and timelines can provide independent sources of information that custom queries can span as the need arises. But this remains an extra mile that most teams are left to travel themselves, leading to boutique solutions. Incident tracking software, on the other hand, has already demonstrated the effectiveness of a knowledge base that supports ITSM, ITBM, ITOM, and CMDB capabilities.
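The commit-annotation practice above can be sketched as a trailer convention in commit messages that custom queries parse later. This is a minimal sketch: the trailer names (Origin, Incident, Timeline) are hypothetical conventions for illustration, not a standard.

```python
# Sketch: machine-readable "Key: value" trailers appended to commit
# messages, so queries can span repositories without a database.
# Trailer names below are hypothetical, not an established standard.

def parse_trailers(commit_message: str) -> dict:
    """Extract 'Key: value' trailer lines from a commit message."""
    trailers = {}
    for line in commit_message.strip().splitlines():
        key, sep, value = line.partition(":")
        # Treat single-word keys followed by a colon as trailers.
        if sep and key.strip() and " " not in key.strip():
            trailers[key.strip()] = value.strip()
    return trailers

message = """Add sovereign-cloud storage module

Origin: INFRA-1042
Incident: none
Timeline: 2024-Q1
"""
print(parse_trailers(message))
```

A query across many repositories then reduces to running `git log`, feeding each message through `parse_trailers`, and filtering on the keys of interest.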

 

In addition, as the size and scale of infrastructure grow, a realization dawns: the veritable tenets of IaC, such as reproducibility, self-documentation, visibility, error prevention, lower TCO, drift prevention, the joy of automation, and self-service, diminish as the time and effort required to overcome its brittleness increase exponentially. Packages go out of date, deprecated features stop working, backward compatibility is hard to maintain, and all existing resource definitions have a shelf life. Similarly, assumptions are challenged when the cloud provider and the IaC provider describe attributes differently. The information contained in IaC is hard to summarize in an encompassing review without going block by block, and without a knowledge base this costly exercise is often repeated. It is also easy to shoot oneself in the foot with a typo or an errant command, especially when the state of the infrastructure disagrees with that of the portal.
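The disagreement between the declared state and the portal can be surfaced mechanically. A minimal sketch of such a drift check follows; the resource dictionaries are hypothetical stand-ins for what the IaC state file and the provider's API would actually return.

```python
# Sketch: compare resources declared in IaC state against what the
# cloud actually reports. The dictionaries below are hypothetical
# stand-ins for state-file contents and provider API responses.

def find_drift(declared: dict, actual: dict) -> dict:
    """Return resources that are missing, unexpected, or changed."""
    return {
        "missing": sorted(set(declared) - set(actual)),       # in IaC, not in cloud
        "unexpected": sorted(set(actual) - set(declared)),    # in cloud, not in IaC
        "changed": sorted(
            name for name in set(declared) & set(actual)
            if declared[name] != actual[name]                 # attributes disagree
        ),
    }

declared = {"vm-1": {"size": "D2s"}, "disk-1": {"tier": "premium"}}
actual = {"vm-1": {"size": "D4s"}, "net-1": {"cidr": "10.0.0.0/16"}}
print(find_drift(declared, actual))
```

Running such a check as a scheduled job, rather than during a risky manual investigation, is one way to catch drift before it surprises a deployment.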

 

The data model would articulate Infrastructure-as-Code and its blueprints, resources, policies, and accesses as a single entity that becomes the unit for provisioning an environment. It would include issue and code tracking references, key performance indicators, x-ray and service map references, and alerts and notifications, and it would be continuously updated with each deployment.
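A minimal sketch of such an entity follows; the field names are illustrative assumptions drawn from the description above, not a finished schema.

```python
# Sketch of the proposed data model: one provisionable environment
# bundling IaC, policies, accesses, and tracking references.
# Field names are illustrative assumptions, not a finished schema.

from dataclasses import dataclass, field

@dataclass
class EnvironmentUnit:
    """A unit of provisioning, continuously updated with each deployment."""
    name: str
    blueprints: list = field(default_factory=list)        # IaC templates
    resources: list = field(default_factory=list)         # resource identifiers
    policies: list = field(default_factory=list)
    accesses: list = field(default_factory=list)
    issue_refs: list = field(default_factory=list)        # issue/code tracking links
    kpis: dict = field(default_factory=dict)              # key performance indicators
    service_map_refs: list = field(default_factory=list)  # x-rays and service maps
    alerts: list = field(default_factory=list)            # alerts and notifications
    last_deployment: str = ""                             # refreshed on each deploy

env = EnvironmentUnit(name="prod-east", blueprints=["network.tf", "compute.tf"])
env.last_deployment = "2024-01-16T10:00:00Z"
print(env.name, env.last_deployment)
```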

 

The TCO of IaC for a complex deployment does not include the man-hours required to keep it in working condition and to assist with redeployments and syncing. When deployments are large and complex, one-off investigations are too many to count, and tracking the sheer number of resources by name and identifier can be exhausting. A sophisticated CI/CD pipeline for managing accounts and deployments is good automation, but it is also likely to be run by several contributors. When edits are allowed and common automation accounts are used, it can be difficult to know who made a change and why. All of these shortcomings can be overcome with a cloud IaC data model that is continuously updated by each pipeline-based deployment, encompasses the siloed views of the numerous pipelines and repositories that exist, and provides a base for canning repeated queries.
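One way a pipeline run could keep the data model current and attributable, even behind shared automation accounts, is to append a deployment record on every run. This is a sketch under assumptions: the environment-variable names are hypothetical CI/CD conventions, and the append-only JSON Lines file stands in for the data model's store.

```python
# Sketch: each pipeline run appends an attributed deployment record, so
# "who changed what and why" survives shared automation accounts.
# The environment-variable names are hypothetical CI/CD conventions.

import datetime
import json
import os
import tempfile

def record_deployment(log_path: str) -> dict:
    """Append one deployment record to an append-only JSONL log."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": os.environ.get("PIPELINE_TRIGGERED_BY", "unknown"),
        "pipeline": os.environ.get("PIPELINE_NAME", "unknown"),
        "commit": os.environ.get("GIT_COMMIT", "unknown"),
        "reason": os.environ.get("DEPLOY_REASON", "unknown"),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

path = os.path.join(tempfile.gettempdir(), "deployments.jsonl")
rec = record_deployment(path)
print(rec["pipeline"])
```

Canned queries over such a log, grouped by actor, commit, or reason, replace many of the one-off investigations described above.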

 

Some flexibility is required to make judicious use of automation and manual intervention in keeping deployments robust. Continuous updates to the IaC and its knowledge base, especially by the younger members of the team, are not only a comfort but a necessity. The more mindshare the IaC data model gets, the more likely it is to reduce the costs of maintaining the IaC and dispel some of the limitations mentioned earlier.

 

As with all solutions, scope and boundaries apply. It is best not to let the IaC or its data model sprawl so much that high-priority and high-severity deployments are affected. The data model can also be treated like any other asset, with its own index, model, documentation, and copilot.
