Monday, September 6, 2021

This article proposes a change to the de facto public connectivity methods for Azure public cloud resources such as databases, caches, and service buses. Today, the default connectivity method is a public endpoint with a hardcoded public IP address and an assigned port. Although this approach has proven to work for many usages, it has a few limitations. First, the address, binding, and contract for the resource are tied directly to the resource and are static, which raises security concerns since anyone can reach the endpoint as many times as they want. Second, the resource does not differentiate between the availability zones that provide its redundancy and availability, and consequently gives the user no way to refer to the endpoints in those zones. Third, there is no load balancing between the connectivity methods of the same resource. This article instead proposes commissioning an application gateway or a load balancer that is automatically provisioned with the resource. The gateway provides the much-needed functionality of an HTTP proxy as well as a load balancer, which users have so far had to install themselves. The services provisioning these resources are then free to expand their abilities beyond the limitation of exposing everything behind the same public endpoint.
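To make the third limitation concrete, the following is a minimal sketch of the zone-aware load balancing a built-in gateway could perform. The zone endpoints and addresses are illustrative assumptions, not real Azure values; today a client would see only the one hardcoded public IP and port.

```python
import itertools

# Hypothetical per-zone endpoints for a single resource. In the current
# public-endpoint model, these are hidden behind one static address.
ZONE_ENDPOINTS = [
    ("10.0.1.4", 5432),  # availability zone 1
    ("10.0.2.4", 5432),  # availability zone 2
    ("10.0.3.4", 5432),  # availability zone 3
]

def round_robin(endpoints):
    """Rotate through endpoints, the way a built-in load balancer
    could spread incoming connections across availability zones."""
    return itertools.cycle(endpoints)

lb = round_robin(ZONE_ENDPOINTS)
picks = [next(lb) for _ in range(6)]
# The first six connections land evenly: two per zone.
```

A gateway provisioned with the resource could apply exactly this kind of policy without the user installing anything.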

The idea of using a proxy and a load balancer is widely accepted in resource orchestration technologies such as Kubernetes, with its definition of a Kubernetes Service. Kubernetes is a container orchestration framework that enables migration of applications across hosts and provides the abstractions they need, decoupling workloads from hosts in the form of pods. Container infrastructure layering allows even more scale because it virtualizes the operating system. A Kubernetes Service is no longer a single entity behind an IP address and a port; instead, it supports auto-scaling and dynamic port assignment.
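The essence of that abstraction can be sketched in a few lines: clients address a stable service name while the set of pod endpoints behind it changes freely. The class, service name, and addresses below are illustrative assumptions, not the Kubernetes API itself.

```python
# A minimal sketch of the Kubernetes Service idea: a stable front for
# a dynamic set of backend endpoints.
class Service:
    def __init__(self, name):
        self.name = name
        self.endpoints = []          # pod (ip, port) pairs; churns freely

    def add_pod(self, ip, port):
        self.endpoints.append((ip, port))

    def remove_pod(self, ip, port):
        self.endpoints.remove((ip, port))

    def resolve(self):
        # Clients never track pod churn; they only resolve the service.
        return list(self.endpoints)

svc = Service("orders")
svc.add_pod("10.244.0.5", 8080)
svc.add_pod("10.244.1.7", 9090)      # dynamic port assignment per pod
svc.remove_pod("10.244.0.5", 8080)   # pod rescheduled to another host
svc.add_pod("10.244.2.3", 8080)
```

The proposal argues for the same decoupling on the Azure side: the resource's identity stays fixed while its endpoints scale and move behind the gateway.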

Azure resources are provisioned with one of the following three connectivity methods: 1) public endpoint, 2) virtual network, and 3) private endpoint. These methods are by themselves sufficient to reach the resource from all on-premises and cloud clients, but they do not provide the benefits of an application gateway. The gateway-sold-separately technique does not automatically address the oft-repeated tasks clients face when they have many resources to place behind a load balancer or a proxy. It also restricts the service's ability to provide all its features behind a single endpoint.

A gateway-like connectivity method for an Azure resource enables staged migrations as the portfolio of features is expanded and older dependencies are retired. Each new feature can be a drop-in replacement for existing functionality, and incremental additions of features pave the way for testing.
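One way a gateway enables such a staged migration is weighted routing: a small fraction of traffic goes to the new backend while the rest stays on the old one. The backend names and the 10% weight below are illustrative assumptions, not an Azure API.

```python
import random

def make_router(new_backend, old_backend, new_weight):
    """Return a routing function that sends roughly `new_weight` of
    requests to the new backend and the remainder to the old one."""
    def route(rng=random.random):
        # rng is injectable so the split can be tested deterministically.
        return new_backend if rng() < new_weight else old_backend
    return route

# Start the migration at 10% of traffic on the replacement feature.
route = make_router("feature-v2", "feature-v1", new_weight=0.1)
assert route(lambda: 0.05) == "feature-v2"   # falls inside the 10% slice
assert route(lambda: 0.50) == "feature-v1"   # stays on the legacy path
```

Ramping `new_weight` from 0.1 to 1.0 over time is exactly the swap-replacement path the paragraph describes, with rollback being a single weight change.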

Logging also improves significantly with this separation of concerns, alongside enhanced load balancing and auto-scaling. Liveness and readiness probes could be added to the deployments as well, improving overall visibility into the health and readiness of the resource across availability zones and replicas.
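The distinction between the two probes can be sketched as follows. The replica states, zone names, and the rule that only fully ready replicas receive traffic are illustrative assumptions about what a built-in gateway might enforce.

```python
class Replica:
    def __init__(self, zone):
        self.zone = zone
        self.started = False     # process is up
        self.warmed_up = False   # caches warm, dependencies reachable

    def liveness(self):
        # Liveness: the replica is running at all; failing this would
        # trigger a restart.
        return self.started

    def readiness(self):
        # Readiness: running *and* able to serve; failing this only
        # removes the replica from rotation.
        return self.started and self.warmed_up

def routable(replicas):
    """Zones whose replica passes both probes and may receive traffic."""
    return [r.zone for r in replicas if r.liveness() and r.readiness()]

a, b = Replica("zone-1"), Replica("zone-2")
a.started = a.warmed_up = True   # healthy and ready
b.started = True                 # alive, but still warming up
assert routable([a, b]) == ["zone-1"]
```

With probes like these wired into the gateway, per-zone and per-replica health becomes directly observable instead of being hidden behind one opaque endpoint.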

If the user must deploy an application gateway anyway in addition to provisioning the resource, offering it as a built-in connectivity option for the resource simply adds much-needed convenience, hence this proposal.

