Manifesting Dependencies:
One of the more disconcerting challenges for engineers who deploy infrastructure is how to understand, capture, and use dependencies. Imagine a clone army in which every entity looks alike and one or two specific ones must be replaced. Without a name or identifier at hand, it is difficult to locate those entities, and it becomes even harder when we do not know which of the others are actually using them, so that we can be mindful of the consequences of the replacement.

Grounding this example with cloud resources in the Azure public cloud, consider a set of resources, each with a private endpoint that gives it a unique private IP address, and suppose we want to replace the virtual network these resources are integrated with. When we switch the virtual network, the old and the new networks do not interact with one another, and traffic that was flowing to a resource on the old network is disrupted when that resource moves to the new one. Unless we know all the dependencies, that is, who is using the resource that is about to move, we cannot resolve the failures those callers might encounter.

What adds to the challenge is that a virtual network is like a carpet on which the resources stand, and this resource type is always local to an availability zone or region, so there is no built-in redundancy or replica to ease the migration. One cannot simply move the resource as if it were moving from one resource group to another; it must be untethered and tied to the other virtual network by deleting the old private endpoint and adding a new one.

Taking the example a little further, IaC does not capture dependencies between usages of resources; it only captures dependencies at creation or modification time. For example, a workspace that users access to spin up compute and run their notebooks
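The "who would break if this resource moved?" question above amounts to a reverse-dependency lookup. A minimal sketch in Python, using made-up resource names (workspaces, a registry, and a virtual network are hypothetical, not drawn from any real deployment):

```python
from collections import defaultdict

# Forward edges record "consumer -> resources it uses". The names here
# are illustrative placeholders, not real Azure resource identifiers.
uses = {
    "workspace-1": ["registry-1", "vnet-old"],
    "workspace-2": ["registry-1", "vnet-old"],
    "registry-1": ["vnet-old"],
}

def dependents_of(resource: str) -> set:
    """Invert the forward edges and return every consumer that
    directly uses the given resource."""
    reverse = defaultdict(set)
    for consumer, resources in uses.items():
        for r in resources:
            reverse[r].add(consumer)
    return reverse[resource]

# Before replacing vnet-old, enumerate everything tied to it:
print(sorted(dependents_of("vnet-old")))
# prints ['registry-1', 'workspace-1', 'workspace-2']
```

The point of the sketch is that the forward edges are exactly what IaC records at creation time; the inverted index is the view a migration actually needs, and it is missing unless we build it ourselves.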
might be using a container registry over the virtual network, but that dependency is never manifested because the registry does not maintain a list of addresses or networks to allow. The only way to reverse-engineer the list of dependencies is to check the DNS zone records associated with the private endpoint and the entries added by the callers that resolve the container registry over the virtual network. Those entries carry the callers' private IP addresses, and because each address belongs to an address space assigned to a sub-network, it is possible to tell whether it came from a connection device associated with a compute instance belonging to the workspace. By painstaking enumeration of each of these links, it is possible to draw up a list of all workspaces using the container registry. The records that helped us draw up the list may contain many stale entries, because callers disappear without cleaning up their records. So some pruning may be involved, and the list may change over time, but it will still be handy.
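The attribution step described above, matching each DNS record's private IP against the address spaces of known subnets, can be sketched with Python's standard ipaddress module. The subnet names and addresses below are made up for illustration; in practice they would come from the virtual network's configuration and the private DNS zone's A records:

```python
import ipaddress

# Hypothetical address spaces of the subnets behind each workspace.
subnets = {
    "workspace-1-subnet": ipaddress.ip_network("10.0.1.0/24"),
    "workspace-2-subnet": ipaddress.ip_network("10.0.2.0/24"),
}

# Hypothetical A-record IPs from the registry's private DNS zone.
# The last one matches no known subnet, i.e. a stale or foreign entry.
dns_a_records = ["10.0.1.5", "10.0.2.9", "10.9.9.9"]

def attribute_records(records, subnets):
    """Map each record IP to the subnet whose address space contains it;
    None marks entries that should be pruned as stale."""
    attribution = {}
    for ip in records:
        addr = ipaddress.ip_address(ip)
        attribution[ip] = next(
            (name for name, net in subnets.items() if addr in net), None
        )
    return attribution

print(attribute_records(dns_a_records, subnets))
# prints {'10.0.1.5': 'workspace-1-subnet', '10.0.2.9': 'workspace-2-subnet', '10.9.9.9': None}
```

Running this over the zone's records yields the list of workspaces that actually use the registry, with the None entries flagging the stale records that need pruning.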