Friday, January 12, 2024

 This continues the listing of design choices for infrastructure consolidation that fall into well-known patterns, including the following:

  1. Regional redundancy: This pattern adds a resource, or a deployment of resources, in a region other than the primary one. The paired region can be deployed on demand from backups, kept on standby in an active-passive configuration, or kept operational in an active-active configuration. The pattern is used even for full deployments of critical services to ensure business continuity and disaster recovery.
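The active-passive arrangement above can be sketched with a small failover check. This is a minimal illustration, not a real cloud API: the region names and the health dictionary are assumptions standing in for whatever health probes a deployment actually uses.

```python
# Hypothetical active-passive failover sketch: route to the primary
# region while it is healthy, fall back to the paired region otherwise.
# Region names and the health-probe dict are illustrative assumptions.

def pick_endpoint(health: dict, primary: str = "primary",
                  secondary: str = "secondary") -> str:
    """Return the region that should serve traffic right now."""
    if health.get(primary, False):
        return primary
    if health.get(secondary, False):
        return secondary
    raise RuntimeError("no healthy region available")
```

For example, `pick_endpoint({"primary": False, "secondary": True})` selects the paired region, which is the moment an active-passive deployment pays for itself.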

  2. Managed resources/services: When a cloud resource is assigned to a single team, that team has every cloud feature available to interact with it: the command-line interface, software development kit libraries, REST APIs, and the cloud management user interface. When the same resource must be shared between teams, however, certain operations normally allowed by the cloud must be hidden or locked, and alternatives may need to be provided to isolate the different uses. When a resource is deployed in this manner, the service becomes managed by one team while it is used by many others.
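One common way to hide or lock operations, as described above, is a facade that the owning team places in front of the shared resource. The classes and method names below are made up for illustration; they are not a real cloud SDK.

```python
# Sketch of the "managed resource" idea: the owning team wraps a shared
# resource and exposes only the operations tenant teams may use.
# All names here are hypothetical, not an actual Azure SDK API.

class SharedDatabase:
    def query(self, sql: str) -> str:   # safe for all tenants
        return f"rows for {sql!r}"

    def drop(self) -> str:              # destructive; owner-only
        return "dropped"

class ManagedDatabase:
    """Tenant-facing wrapper that hides destructive operations."""
    def __init__(self, inner: SharedDatabase):
        self._inner = inner

    def query(self, sql: str) -> str:
        return self._inner.query(sql)
    # Deliberately no drop(): that operation is locked for tenants.
```

Tenants receive only the `ManagedDatabase` handle, so the locked operation is simply absent from their surface area rather than merely discouraged.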

  3. Automation of access control: Almost any resource or deployment is incomplete without access control, and organizing and assigning identities is team specific. When dedicated resources are handed off, the assignee can be granted contributor access, but on shared instances, appropriate group creation and role-based access control assignments become necessary even if they far outnumber the actual instances. Custom roles that grant only the minimum set of permissions also become necessary. A custom role carries allowed and denied sets of permissions for both the control plane and the data plane, and excluding a permission from a principal's effective permission set guarantees that the resource is locked against that form of consumption.
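The allowed-minus-denied computation above can be sketched directly. This is modeled loosely on the Actions/NotActions semantics of Azure RBAC role definitions, but the permission strings below are invented for illustration.

```python
# Sketch of an effective permission set for a custom role: a permission
# is granted only if it matches an allowed pattern and no denied pattern.
# Permission names are hypothetical; the wildcard matching mimics the
# allowed/denied (Actions/NotActions) style of RBAC role definitions.
import fnmatch

def effective_permissions(requested: set, actions: list,
                          not_actions: list) -> set:
    """Return the subset of requested permissions that take effect."""
    def matches(perm, patterns):
        return any(fnmatch.fnmatch(perm, p) for p in patterns)
    return {p for p in requested
            if matches(p, actions) and not matches(p, not_actions)}
```

Here the denied set wins: a role with `actions=["Storage/*"]` and `not_actions=["Storage/delete"]` yields an effective set containing reads but never deletes, which is the locking guarantee the pattern relies on.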

  4. Alerting: The same monitoring best practices that apply to dedicated resources hold true for shared resources, except that the audience is more diverse and spans multiple teams and their members. It also becomes necessary to isolate notifications for specific trouble and their intended audience, especially when many teams share the same fate on that resource.
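Isolating notifications by trouble and audience, as described above, amounts to a routing table keyed by what went wrong. The component, severity, and on-call group names below are assumptions for the sake of the sketch.

```python
# Sketch of audience-specific alert routing for a shared resource: each
# team is paged only for the component/severity pairs it subscribed to.
# Components, severities, and group names are hypothetical.

ROUTES = {
    ("storage", "critical"): ["platform-oncall", "tenant-a-oncall"],
    ("storage", "warning"):  ["platform-oncall"],
}

def audiences(component: str, severity: str) -> list:
    """Return the notification groups for an alert, so noise from a
    shared resource reaches only the teams it actually affects."""
    return ROUTES.get((component, severity), [])
```

A warning on the shared storage account pages only the managing team, while a critical alert fans out to every team whose fate is tied to that resource.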

  5. Infrastructure layering: This pattern is evident in container infrastructure layering, which allows even more scale because it virtualizes the operating system. While traditional virtual machines provide hardware virtualization and Hyper-V offers isolation plus performance, containers are cheap and barely anything more than applications. Azure Container Service serves both Linux and Windows containers, with standard Docker tooling and API support and streamlined provisioning of DC/OS and Docker Swarm. Job-based computations use larger sets of resources, such as compute pools with automatic scaling, regional coverage, automatic recovery of failed tasks, and input/output handling. Azure demonstrates many fine-grained, loosely coupled microservices: an HTTP listener, page content, authentication, usage analytics, order management, reporting, product inventory, and customer databases.
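The "automatic recovery of failed tasks" behavior mentioned for job-based compute pools can be sketched as a retry wrapper. This is an illustration of the idea only; `run_with_retry` and its parameters are not a real batch-service API, and a real pool would distinguish transient from permanent failures.

```python
# Sketch of automatic task recovery in a job-based compute pool: a
# failed task is re-run up to a retry limit before the failure is
# surfaced. The function and its signature are hypothetical.

def run_with_retry(task, max_retries: int = 3):
    """Run task(attempt), retrying on failure up to max_retries times."""
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return task(attempt)
        except Exception as exc:  # a real pool would filter transient errors
            last_error = exc
    raise last_error
```

A task that fails on its first attempts but succeeds on a later one completes transparently, which is what lets large compute pools tolerate sporadic node or task failures.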
