Sunday, September 12, 2021


Azure Service Operator and Kubernetes service object

Introduction: In the previous article, we discussed the Kubernetes Open Service Broker API. We followed that up with an introduction to Azure OSBA, which complies with the same open standard and brings Azure resources to the Kubernetes control plane. We then discussed the Azure Service Operator, which provisions those resources via the Kubernetes control plane, and finally Kustomization. Today we evaluate the public connectivity methods for the respective services.

Description: Azure services that provision resources for the user often let the user choose a connectivity method from among public endpoints, private endpoints, and virtual networks. The most common connectivity method is a public endpoint with a well-known public IP address and an assigned port; it is simple and popular. Private endpoints and virtual networks can be used together with an Azure gateway and Azure Private Link. When the resources are provisioned via the Kubernetes control plane, as discussed with Azure OSBA and the Azure Service Operator, they retain these connectivity methods as the primary means of interaction with the resource.
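As a sketch, assuming the Azure Service Operator v2 CRDs are installed in the cluster, an Azure resource such as a resource group can be declared to the Kubernetes control plane like any other object. The exact apiVersion depends on the operator release, and the names here are illustrative:

```yaml
# Sketch only: declares an Azure resource group through the
# Azure Service Operator CRD. The apiVersion shown is one of the
# versioned APIs ASO publishes and may differ in your installation.
apiVersion: resources.azure.com/v1api20200601
kind: ResourceGroup
metadata:
  name: aso-sample-rg        # illustrative name
  namespace: default
spec:
  location: westcentralus    # Azure region for the resource group
```

Applying this manifest with `kubectl apply` causes the operator to reconcile the desired state by provisioning the resource in Azure.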

The Kubernetes Service, on the other hand, takes a more flexible approach with its ExternalName, LoadBalancer, NodePort, and ClusterIP types. If only connectivity internal to the cluster is required, a ClusterIP can be used. If the service needs to be exposed on a static port of each node, NodePort can be used. When the LoadBalancer type is used, the NodePort and ClusterIP routes are created automatically. Finally, with the ExternalName type, the service resolves to an external DNS name via a CNAME record. In addition to all of these, a Kubernetes service can be exposed via an Ingress object. Ingress is not a service type; it acts as the entry point for the cluster and consolidates the routing rules into a single resource. This allows multiple services to be hosted behind one ingress resource reachable at a single IP address.
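Two of the Service types above can be sketched as follows; the names, selectors, and ports are illustrative, not taken from any particular deployment:

```yaml
# Sketch: a LoadBalancer Service. Kubernetes also allocates the
# underlying ClusterIP and NodePort routes automatically.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app          # pods labeled app=my-app receive the traffic
  ports:
    - port: 80           # port exposed by the load balancer
      targetPort: 8080   # container port behind it
---
# Sketch: an ExternalName Service, which resolves to an external
# DNS name via a CNAME record instead of proxying traffic.
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: mydb.example.com
```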

An Ingress resource might be defined, for example, for NGINX, with the HTTP and HTTPS ports specified. The Ingress resource is merely a declaration of the traffic policy. An ingress controller can enforce strict HTTPS by redirecting HTTP traffic to HTTPS. For the Ingress resource to work, the cluster must be deployed with an ingress controller. Notable ingress controllers include the AKS Application Gateway Ingress Controller, which configures the Azure Application Gateway, and the Ambassador API gateway, an Envoy-based ingress controller.
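A minimal Ingress sketch for the NGINX ingress controller, assuming a hypothetical backend service `my-app`, host `example.com`, and a TLS secret `example-tls`, with the annotation that redirects HTTP traffic to HTTPS:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    # ingress-nginx annotation: redirect plain HTTP to HTTPS
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - example.com
      secretName: example-tls   # TLS certificate stored as a Secret
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app    # hypothetical Service behind the ingress
                port:
                  number: 80
```

Additional services can be hosted behind the same ingress by adding more rules or paths, all reachable through the controller's single entry point.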

The gateway also acts as an HTTP proxy. Any implementation of a gateway must maintain a registry of destination addresses. The advantages of an HTTP proxy include the aggregation of usage: there can be a detailed count of calls broken down by success and failure. The proxy can offer all the features of a conventional HTTP service, such as client-based caller information, destination-based statistics, per-object statistics, categorization by cause, and many others, along with a RESTful API over the data it gathers. Because gateways solve problems without requiring data to move, they are appealing for many use cases across companies that use cloud providers, and several vendors are racing to fill this niche.
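A minimal sketch of the bookkeeping such a gateway might keep, namely a registry of destination addresses plus per-destination success and failure counts (all names here are illustrative, not from any particular product):

```python
from collections import defaultdict

class GatewayRegistry:
    """Toy model of a gateway's destination registry and call statistics."""

    def __init__(self):
        # service name -> backend address
        self.destinations = {}
        # service name -> {"success": n, "failure": n}
        self.stats = defaultdict(lambda: {"success": 0, "failure": 0})

    def register(self, service, address):
        """Record the destination address for a service."""
        self.destinations[service] = address

    def record_call(self, service, ok):
        """Count one proxied call as a success or a failure."""
        key = "success" if ok else "failure"
        self.stats[service][key] += 1

    def report(self, service):
        """Return the per-destination statistics a RESTful API might expose."""
        return dict(self.stats[service])

registry = GatewayRegistry()
registry.register("orders", "10.0.0.5:8080")
registry.record_call("orders", True)
registry.record_call("orders", False)
print(registry.report("orders"))  # {'success': 1, 'failure': 1}
```

A real gateway would populate these counters from the request path and serve the report over HTTP; the sketch only shows the shape of the data.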

Conclusion: Load balancers, HTTP proxies, and Ingress resources are additional connectivity methods that can be added out of the box for some resources, making it easier to interoperate between container orchestration systems and cloud service providers.
