Wednesday, January 10, 2024


Some of the design choices for infrastructure consolidation fall into well-known patterns, including the following:

1. Dedicated instance: This is a pattern in which a dedicated instance serves or maintains other instances of the same resource type. A case in point is the deployment of Overwatch monitoring across Databricks workspaces: a single workspace is chosen to host the dashboards that Overwatch creates, while logs, events, and metrics flow in from the associated workspaces. This pattern draws minimal resources from the other participating instances and centralizes cost management and utilization reporting in the dedicated instance.
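The routing at the heart of this pattern can be sketched in a few lines. The workspace names below are hypothetical; the only assumption is that every non-dedicated workspace sends its telemetry to the one dedicated instance.

```python
# Minimal sketch of the dedicated-instance pattern: one workspace is
# designated to host the monitoring dashboards, and every other workspace
# only emits telemetry toward it. All names are hypothetical.

WORKSPACES = ["analytics-ws", "etl-ws", "ml-ws", "monitoring-ws"]
DEDICATED_INSTANCE = "monitoring-ws"  # hosts the Overwatch-style dashboards

def route_telemetry(source_workspace: str) -> dict:
    """Logs, events, and metrics from any workspace flow to the dedicated one."""
    return {"source": source_workspace, "sink": DEDICATED_INSTANCE}

# Every participating workspace gets a route; the dedicated one is the sink.
routes = [route_telemetry(ws) for ws in WORKSPACES if ws != DEDICATED_INSTANCE]
```

Because all routes converge on one sink, cost and utilization reporting can be read off a single instance instead of being scattered across workspaces.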

2. Separation of resources by purpose: Compute and storage often go hand in hand in many deployments. Depending on the purpose of the deployment, or of a group within it, one may be used more heavily than the other, calling for deployment forms that lean either way. One such example is Kubernetes versus app services. Applications that are not necessarily stateful and are more compute-oriented can become more native to the cloud in the form of Function and Web applications. A Kubernetes deployment, on the other hand, makes it easier to move entire applications, with both their compute and storage, from on-premises to the cloud. This pattern also makes scaling and load-balancing independent for compute and storage.
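The placement decision described above can be captured as a small routing rule. The threshold and platform names below are hypothetical; the sketch only illustrates the lean-either-way choice between a stateful, storage-heavy target and a stateless, compute-oriented one.

```python
# Hypothetical sketch: route a workload to Kubernetes when it is stateful
# or storage-heavy, and to a serverless Function/Web app when it is
# stateless and compute-oriented. The 100 GB cutoff is an assumption.

def choose_platform(stateful: bool, storage_gb: int) -> str:
    if stateful or storage_gb > 100:
        return "kubernetes"     # compute and storage move together
    return "function-app"       # cloud-native; compute scales independently
```

Keeping the rule explicit like this makes it easy to audit why a given workload landed on one platform rather than the other.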

3. Colocation of data – It is easy to use data in an external storage account or S3 store with little or no thought about whether writes cross regions, or whether each write is replicated across regions, especially when the reference to the store can easily be shared among multiple applications. The same goes for messaging infrastructure. Colocating messages with their subscribers, and storage accounts with their data consumers, is not only beneficial in terms of performance and cost but also a necessity in terms of maintenance, assignment, and scoping. Colocation also makes object iteration easier, especially when objects share the same prefix in their addressability. Grouping objects by account helps manage account-level throttling, and the improved isolation further helps with troubleshooting.
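Two of the checks implied by this pattern, a region-colocation test and prefix-based object iteration, can be sketched as follows. The region names and object keys are hypothetical.

```python
# Sketch of colocation checks. is_colocated verifies that a store and its
# consumer live in the same region; objects_with_prefix iterates only the
# objects that share a common address prefix. All values are hypothetical.

def is_colocated(store_region: str, consumer_region: str) -> bool:
    """A store is colocated when reads and writes never cross regions."""
    return store_region == consumer_region

def objects_with_prefix(objects: list, prefix: str) -> list:
    """Iterate a group of objects that share the same prefix."""
    return [o for o in objects if o.startswith(prefix)]
```

In practice the prefix also becomes the unit of grouping for throttling and scoping, which is why colocated naming pays off beyond raw performance.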

4. Vertical integration – Since compute and storage go together, compute logic can be stored as modules alongside the data, with the same catalog covering both. This helps the compute locate the data locally and return results to the caller faster. It is not only a performance improvement but also a convenience, since the modules can be shared in the same way as the data. Databases have used this to their advantage, and the design has stood the test of data migration to the cloud.
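The catalog-of-both idea can be illustrated with a toy registry in which each entry carries a data location and the compute module that operates on it. The entry names and paths are hypothetical.

```python
# Sketch of vertical integration: a single catalog entry holds both the
# data location and the compute module that operates on it, so a lookup
# resolves both together. Names and paths are hypothetical.

catalog = {}

def register(name: str, data_path: str, module) -> None:
    """Store compute logic as a module alongside its data in one catalog."""
    catalog[name] = {"data": data_path, "compute": module}

def run(name: str):
    """Resolve data and compute from the same entry and execute locally."""
    entry = catalog[name]
    return entry["compute"](entry["data"])

# The compute module travels with the data it needs:
register("sales", "/mnt/sales", lambda path: f"aggregated {path}")
```

This mirrors how databases ship stored procedures next to tables: sharing the catalog entry shares both the data and the logic at once.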

5. Infrastructure layering – This pattern is evident in container infrastructure layering, which allows even more scale because it virtualizes the operating system. While traditional virtual machines enable hardware virtualization and Hyper-V offers isolation plus performance, containers are cheap and barely anything more than the applications themselves. Azure Container Service serves both Linux and Windows containers, with standard Docker tooling and API support and streamlined provisioning of DC/OS and Docker Swarm. Job-based computations use larger sets of resources, such as compute pools, which involve automatic scaling, regional coverage, automatic recovery of failed tasks, and input/output handling. Azure demonstrates many fine-grained, loosely coupled microservices, such as an HTTP listener, page content, authentication, usage analytics, order management, reporting, product inventory, and customer databases.
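The cost advantage of layering can be made concrete with a toy model: container images share base layers, so each additional application on a node costs only its own top layer. The layer names and megabyte sizes below are assumptions for illustration only.

```python
# Toy model of infrastructure layering: images share base layers (OS and
# runtime), so a node pays for those shared layers once and each app adds
# only its own top layer. Layer names and sizes are hypothetical.

BASE_LAYERS = {"os": 80, "runtime": 120}  # MB, shared across images

def image_size(app_layer_mb: int) -> int:
    """Size of a single image: shared base layers plus the app layer."""
    return sum(BASE_LAYERS.values()) + app_layer_mb

def node_footprint(app_layers: list) -> int:
    """On one node the base layers are stored once, however many apps run."""
    return sum(BASE_LAYERS.values()) + sum(app_layers)
```

This is why containers scale further than VMs in this model: the per-application increment is the thin top layer, not a full operating system.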

Previous articles: IaCResolutionsPart62.docx
