Monday, September 13, 2021

 

Host Integration Server on Azure

Introduction: This article is a continuation of the series of articles starting with the description of the SignalR service, which was followed by a discussion of the Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure backbone network. Then we started reviewing a more public internet-facing service such as the Bing API and the benefits it provides when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipelines, and followed it up with a brief overview of the Azure services support for the Kubernetes control plane via OSBA and the Azure operator.

Description: Kubernetes is not the only external environment supported by Azure. While the possibilities are endless for customers to integrate their own solutions, some planning might be required for legacy enterprise computing. In this section, we introduce the Host Integration Server and describe what it takes to host a similar service on Azure. A brief introduction follows:

A Host Integration Server empowers enterprise developers to write applications faster and with less custom code than writing directly on IBM host systems. There is no requirement to know the IBM host system, its development tools, or its infrastructure. It also eliminates the need to convert data to and from data sources, as the application can now connect directly to business intelligence tools.

There are five technology areas which include:

1) Network Integration that connects application infrastructure to existing IBM mainframes and midrange system network architectures. This service connects desktops, devices, and servers to existing host systems while reducing costs. For example, the print service provides server-based printer emulation.

2) The data integration component offers direct access to data stored in IBM DB2 database management systems. It includes multiple data clients and one data service, with support for a variety of data providers such as ADO.NET, OLE DB, and ODBC.

3) Application Integration is provided by the Transaction Integrator, which allows enterprise developers to call business rules on the host mainframe. It comprises a designer plug-in, an administration tool, and runtime components.

4) Message Integration is provided by the WCF channel for IBM WebSphere MQ, which allows enterprise developers to send or receive MQ messages between WCF applications and heterogeneous or native IBM programs.

5) Enterprise Single Sign-On provides Active Directory integration to secure IBM host systems by mapping Windows credentials to host credentials and storing those mappings in SQL Server. The mappings can be retrieved at runtime through both the ESSO SDK and HIS features.

Conclusion: Writing any service using Azure services as a backend is made easy with the programmability support that comes with Azure via its REST API, SDKs, ARM manifests, CLI, and PowerShell support. But virtualizing external environments on Azure requires a little more planning than just the integration of network, data, application, message, and security.

Sunday, September 12, 2021

 

Azure Service Operator and Kubernetes service object

Introduction: In the previous article, we discussed the Kubernetes Open Service Broker API. We followed that discussion with an introduction to Azure OSBA, which also complies with the open standard and introduces Azure resources to the Kubernetes control plane. Then we discussed the Azure Service Operator that provisions those resources via the Kubernetes control plane. Then we discussed Kustomization. Today we evaluate the public connectivity methods for the respective services.

Description: Azure services that provision resources for the user often provide the option to choose the connectivity method from public endpoints, private endpoints, and virtual networks. The preferred connectivity method is a public endpoint with a hardcoded public IP address and an assigned port. It is simple and popular. The private endpoints and virtual networks can be used together with Azure Gateway and Azure Private Link. When the resources are provisioned via the Kubernetes control plane, as discussed with Azure OSBA and the Azure Service Operator, they retain these connectivity methods as the primary means of interaction with the resource.

Kubernetes services, on the other hand, appear to take a more robust approach with their use of ExternalName, LoadBalancer, NodePort, and ClusterIP. If IP connectivity internal to the cluster is required, a ClusterIP can be used. If the service needs to be exposed at a static port, a NodePort can be used. When a LoadBalancer is used, routes to the NodePort and ClusterIP are created automatically. Finally, by using a CNAME record, the service can be universally reached via DNS. In addition to all these, a Kubernetes service can be exposed via an Ingress object. Ingress is not a service type, but it acts as the entry point for the cluster and consolidates the routing rules into a single resource. This allows multiple services to be hosted behind the ingress resource, which can be reached with an IP address.
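For illustration, a minimal Service manifest of type LoadBalancer might look like the following sketch; the names and ports are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: potpourri-svc          # illustrative name
spec:
  type: LoadBalancer           # a NodePort and ClusterIP route are created automatically behind it
  selector:
    app: potpourri-app
  ports:
  - port: 80                   # port exposed by the load balancer
    targetPort: 8080           # port the pods listen on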

An ingress resource is defined, for example, for Nginx, where the HTTP and HTTPS ports are specified. The ingress resource is merely a declaration of the traffic policy. An ingress controller can enforce HTTPS by redirecting HTTP traffic to HTTPS. For the ingress resource to work, clusters are deployed with an ingress controller. Notable ingress controllers include the AKS Application Gateway Ingress Controller, which configures the Azure Application Gateway, and the Ambassador API Gateway, an Envoy-based ingress controller.
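A minimal sketch of such an ingress resource, assuming the Nginx ingress controller; the host, secret, and service names are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: potpourri-ingress                               # illustrative name
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"    # redirect HTTP traffic to HTTPS
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.contoso.com
    secretName: tls-secret                              # certificate stored as a Kubernetes secret
  rules:
  - host: example.contoso.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: potpourri-svc                         # routes to the Service shown earlier
            port:
              number: 80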

The gateway also acts as an HTTP proxy. Any implementation of a gateway must maintain a registry of destination addresses. The advantages of an HTTP proxy include aggregation of usage; in terms of success and failure, there can be a detailed count of calls. The proxy could include all the features of a conventional HTTP service, such as client-based caller information, destination-based statistics, per-object statistics, categorization by cause, and many other features, along with a RESTful API service for the data gathered. When gateways solve problems without requiring data to move, they are very appealing to many usages across companies that use cloud providers. Several vendors are racing to find this niche.

Conclusion: Load balancers, HTTP proxies, and Ingress resources are additional connectivity methods that can be added out of the box for some resources, making interoperability between container orchestration systems and cloud service providers easier to work with.

Saturday, September 11, 2021

 

Azure Service Operator and Kubernetes Kustomization

Introduction: In the previous article, we discussed the Kubernetes Open Service Broker API. We followed that discussion with an introduction to Azure OSBA, which also complies with the open standard and introduces Azure resources to the Kubernetes control plane. Then we discussed the Azure Service Operator that provisions those resources via the Kubernetes control plane. Finally, we discuss Kustomization.

Description: Kustomize is a standalone tool for the Kubernetes platform that supports the management of objects using a kustomization file.
The “kubectl kustomize <kustomization_directory>” command allows us to view the resources that can be kustomized. Using the apply verb with the -k option instead of the kustomize verb applies those resources to the cluster.
It can help with generating resources, setting cross-cutting fields such as labels and annotations or metadata and composing or customizing groups of resources.
The resources can be generated and infused with specific configuration and secrets using a configMap generator and a secret generator respectively. For example, it can take an existing application.properties file and generate a ConfigMap that can be applied to new resources, as in the sketch below.
Kustomization allows us to override the registry for all images used in the containers for an application.
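A minimal sketch of such generators in a kustomization.yaml, assuming an existing application.properties file; the names and the literal value are illustrative:

configMapGenerator:
- name: app-config             # illustrative name
  files:
  - application.properties     # existing file becomes a ConfigMap entry
secretGenerator:
- name: app-credentials        # illustrative name
  literals:
  - password=changeme          # placeholder value for illustration only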

There are two advantages to using it. First, it allows us to configure the individual components of the application without requiring changes in them. Second, it allows us to combine components from different sources and overlay them or even override certain configurations. The kustomize tool provides this feature. Kustomize can add ConfigMaps and secrets to the deployments using their respective generators.
Kustomize is a static declaration. We can add labels across components. We can choose groups of Kubernetes resources dynamically using selectors, but they have to be declared as YAML. This kustomization YAML is usually stored as manifests and applied on existing components, so they refer to other YAMLs. The manifests folder is a way of specifying the location of the kustomization files; it is passed as a command-line parameter to kubectl commands with the -k option.
For example, we can say:
commonLabels:
  app: potpourri-app
resources:
- deployment.yaml
- service.yaml
We can even add new resources such as a Kubernetes secret.
This comes in useful for injecting usernames and passwords for, say, a database application at install and uninstall time with the help of a resource called secret.yaml. It just won't detect a problem and force an uninstall of the product; those actions remain with the user.
Kustomize also helps us to do overlays and overrides. Overlay means we change parameters for one or more existing components. Override means we take an existing YAML and change portions of it, such as changing the service to be of type LoadBalancer instead of NodePort, or vice versa for developer builds. In this case, we provide just enough information to look up the declaration we want to modify and specify the modification. For example:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
If the above service-type modification were persisted side by side for prod and dev environments, it would be called an overlay.
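A minimal sketch of such an overlay layout, assuming a shared base directory; the paths and names are illustrative:

# overlays/prod/kustomization.yaml
resources:
- ../../base                   # shared manifests, including the Service above
patchesStrategicMerge:
- service-patch.yaml           # overrides only the fields that differ

# overlays/prod/service-patch.yaml
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer           # prod switches to LoadBalancer; the dev overlay keeps NodePort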
Finally, the persistence of kustomization files is not strictly required, and we can run:
kustomize build manifests_folder | kubectl apply -f -
or
kubectl apply -k manifests_folder
One of the interesting applications of Kustomization is its use with internal Docker registries.
We use the secretGenerator to create the secret for the registry, which typically has the docker-server, docker-username, docker-password, and docker-email values, with the secret type set to docker-registry.
This secret can take environment variables, and the kustomization file can even be stored in source control.
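A minimal sketch of such a kustomization, assuming the registry credentials are collected into a Docker config file; the generated Secret uses the kubernetes.io/dockerconfigjson type, and the names are illustrative:

secretGenerator:
- name: registry-credentials              # illustrative name
  type: kubernetes.io/dockerconfigjson
  files:
  - .dockerconfigjson=dockerconfig.json   # file holding the docker-server, docker-username, docker-password, and docker-email values
generatorOptions:
  disableNameSuffixHash: true             # keep a stable name so pod specs can reference the secret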

Azure has native customization for its manifests using parameters, variables, and built-in functions. These differ from the Kubernetes side, but by exposing Azure resources to the Kubernetes control plane, we can leverage all the functionality native to Kubernetes.

Friday, September 10, 2021

Azure Service Operator  

Introduction: In the previous article, we discussed the Kubernetes Open Service Broker API. We followed that discussion with an introduction to Azure OSBA, which also complies with the open standard. This article is about the Azure Service Operator.

Description: Azure Service Operator is an open-source project that exposes Azure Services as Kubernetes operators. Exposing Azure Resources on the Kubernetes control plane is desirable for several reasons and if the exponential growth in popularity of the Kubernetes infrastructure is any indication, then those reasons must hold. Azure resources are also managed in the cloud via the Azure resource manager so the concepts of resource manifest and state reconciliation are similar. Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. It is often a strategic decision for any company because it decouples the application from the hosts so that the same application can work elsewhere with minimal disruption to its use.   

An operator is a way of automating the deployment of an application on the Kubernetes cluster. It is written with the help of template source code generated by a tool called the Operator SDK. The tool builds three components: custom resources, APIs, and controllers. The custom resources are usually the declarative definitions of the Kubernetes resources required by the application, grouped as suited for the deployment of the application. The API is for the custom service required to deploy the application, and the controller watches this service.
Kubernetes does not limit the type of applications that are supported. It provides building blocks to the application. Containers only help isolate modules of the application into well-defined boundaries that can run with operating-system-level virtualization.
Kubernetes exposes a set of APIs that are used internally by the command line tool called kubectl. It is also used externally by other applications. This API follows the regular REST convention and is also versioned with path qualifiers such as v1, v1alpha1, or v1beta1 – the latter is used with extensions to the APIs.  
Kubernetes supports Imperative commands, imperative object configuration, and declarative object configuration. These are different approaches to manage objects. The first approach operates on live objects and is recommended for a development environment. The latter two are configurations that operate on individual files or a set of files and these are better suited for production environments.  
Namespaces seclude the names of resources; they cannot be nested within one another. They provide a means to divide resources between multiple users.
Most Kubernetes resources, such as pods, services, replication controllers, and others, are in some namespace. However, low-level resources such as nodes and persistent volumes are not in any namespace.
Kubernetes control plane communication is bidirectional between the cluster and its master. The master hosts an API server that is configured to listen for remote connections. The API server reaches out to the kubelets to fetch logs, attach to running pods, and provide the port-forwarding functionality. The API server manages nodes, pods, and services.

When Azure resources are exposed on the Kubernetes control plane, we have all the advantages of continuous state reconciliation, networking best practices, and application portability across hosts, environments, and clouds. Azure provides some of the best backend services for an application but instead of managing them directly, an application can delegate its management to the Kubernetes control plane.  
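For illustration, an Azure resource provisioned through the operator is declared like any other Kubernetes object. The API group and version below are assumptions that vary by operator release, and the names are illustrative:

apiVersion: azure.microsoft.com/v1alpha1   # assumed group/version; check the installed operator release
kind: ResourceGroup
metadata:
  name: demo-rg                            # illustrative name
spec:
  location: westus2                        # Azure region where the resource group is provisioned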

  
Conclusion: The Azure OSBA expands the service catalog and enables Azure resources to be recognized as Kubernetes custom resources. The Azure Service Operator takes it a step further and enables the resources to be managed via the Kubernetes control plane.

Thursday, September 9, 2021

Introduction: In the previous post, we discussed the Kubernetes Open Service Broker API. We follow up on that discussion with an introduction to Azure OSBA, which also complies with the open standard.

Description: The standard meets the demand for application connectivity to the wide variety of services in the Azure marketplace. OSBA is simple and flexible. It allows applications to provision commodity products such as a MySQL database instance or Azure's own multi-model database. OSB for Azure is a connector that can work with Kubernetes, Cloud Foundry, or OpenShift in Azure. At the heart of OSBA is a service catalog that lists the services corresponding to the resource types. Azure invested in the Kubernetes service catalog so that cloud-native services become visible in that container orchestration framework. One of these investments is a command-line interface for the Kubernetes service catalog, aka `svcat`, which enables Azure services to be browsed and matched to resource types. OSBA provisions and binds Azure services in Kubernetes as well as Cloud Foundry and OpenShift; its support for Azure Service Fabric is coming soon. All these cloud-native environments need OSBA to provision Azure resources such as the SQL database, the MySQL database, the PostgreSQL database, or Azure Cosmos DB. OSBA enables mission-critical applications to connect to enterprise-grade backend services that are hosted outside the cluster and are not governed by the resource constraints of the cluster. By bringing the container orchestration framework to work with the scale and scope of a backend cloud service, applications can now leverage the best of both worlds.

The level of integration does not stop with just the handful of Azure services currently made available on the container orchestration frameworks. The plan is to expand to as many Azure services as possible, as well as to communicate with the Kubernetes community and align the capabilities of the service catalog with the behavior that customers expect. Finally, Microsoft plans to have a more generic way of describing services by their criteria rather than by specific resource types, so that any and every matching service in the OSBA registry can then be used to qualify and meet the requirements of the customer's application.

As with any API layer, OSBA is expected to be resilient and scalable, with support for multiple concurrent requests and fully asynchronous processing that can seamlessly resume even if one replica goes down. OSBA is, therefore, suitable to work in a cloud-native environment like Kubernetes, Cloud Foundry, OpenShift, or Service Fabric. The ease of use that comes with the service catalog is not sacrificed with the large variety of resources that can be provisioned with OSBA.

Support for OSBA in Kubernetes varies somewhat from OpenShift: although the notion of provisioning Azure resources via a command-line interface remains the same, the project template in OpenShift is considerably different from the resource types registered with the service catalog in Kubernetes. The service catalog stores a set of entities called cluster service classes that describe the services that handle those types of resources. It stores another set of entities called service plans that are variations of those services. Using a service class and a plan, a resource can be provisioned outside the cluster; in this case, the resource happens to be provisioned in the cloud. There are no transactional guarantees expected from the create, update, delete, and list operations on resources from the resource provider, since these span the cluster and the cloud; for this purpose, state is maintained and the calls are reentrant and idempotent. When the user chooses a service class and a plan, the service instance is bound to a set of resources, which enables applications to use them as if they were local to the cluster. Deleting a service instance involves removing the bindings and de-provisioning the resources. It is possible for references to be left in the service catalog when a binding is broken and cleanup does not occur, but this can be done afterward. Since subsequent service instances create their own bindings, this has little or no impact on applications' reliance on one or more service instances. The service binding is a link between the service instance and the application and holds value so long as the service instance is used. The service class and the service plan can take parameters that enable them to function in different modes and environments.
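A sketch of this flow using the Kubernetes service catalog types; the class and plan names are assumptions that depend on what OSBA registers:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-mysql                            # illustrative name
spec:
  clusterServiceClassExternalName: azure-mysql   # service class registered by OSBA (assumed name)
  clusterServicePlanExternalName: basic          # plan variation of that class (assumed name)
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-mysql-binding
spec:
  instanceRef:
    name: example-mysql                          # bind to the instance above
  secretName: example-mysql-secret               # credentials surfaced to the application as a secret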

Conclusion: Together, OSBA and the service catalog enable tremendous choice for an application to delegate its dependencies to backend services.

Wednesday, September 8, 2021

Service Broker for Azure


Problem Statement: Custom resources in the public cloud can be defined with the help of an Azure Resource Manager manifest and the registration of an Azure Resource provider, usually a service hosted on Azure, to provide resources based on the manifest. This article investigates the role of an external resource provider for Azure where the service is not directly registered with the resource manager for a lookup but resolved indirectly. 

Description:  The use of resource and an orchestration framework to reconcile the desired state for the resource as described by its manifest to the actual state of the resource as visible to a resource manager is a control loop paradigm widely popular across infrastructure frameworks both in the enterprise and in the cloud. For example, the well-known Kubernetes is a container orchestration framework that designates a ‘Kube-controller-manager' that embeds the core control loops shipped with it. This non-terminating loop watches the shared state of the cluster through an ‘apiserver’ and makes changes to the resource to move it from its current state to the desired state declared in its manifest. Some examples of these controllers are replication controller, endpoint controller, namespace controller, and service accounts controller. The same applies to the Azure public cloud with the help of the Azure Resource Manager and the corresponding ARM templates for the resources provisioned in the cloud.  

The difference is in the use of a service broker that allows implementations of the resource provisioning to exist outside the cluster or the cloud for Kubernetes and Azure respectively. Kubernetes service brokers allow the provisioning of services outside the cluster. This enables services to run anywhere and work independently from the cluster. They can even be deployed in the public and private cloud where they can scale across Kubernetes clusters. This is a helpful addition to any Kubernetes cluster. 

 

The service broker architecture also enforces consistency across the provisioning of resources. This is done with the help of a standard set of APIs for all resources and their corresponding services that implement the service broker.  Resources are the way in which Kubernetes recognizes every entity in the cluster. The framework of Kubernetes then takes this inventory of resources and reconciles the state of the cluster to match the definition of the resources. Anything hosted on the Kubernetes clusters can be described with the help of a custom resource. 

 

Service brokers facilitate the resources provisioned by external services being understood by Kubernetes as custom resources within the cluster. This bridging of external resource provisioning with the in-cluster representation of resources has provided significant benefits to users of these resources.

 

Almost all provisioning of resources translates into the well-known create, update, delete, get, and list operations on the resources. Therefore, the resource operations align with the usage of resources in most workflows. This makes Kubernetes clusters handy for many applications and services.

 

The trend in cloud computing has shifted from service-oriented architecture towards microservices. This has helped the independent provisioning of resources, deployment and scaling of services, and the overhaul or reworking of services. The service broker makes no claim about how the services need to evolve, whether to form a structural composite pattern or a behavior-maintaining pattern. This minimal enforcement has been mutually beneficial for both the services and the Kubernetes cluster.

One example of the use of a resource with a service broker is logging. Logging, although available out of the box with Kubernetes, can also be considered a service to be provisioned external to the cluster. This is easy to do with a variety of log products that provide service-like functionality. As long as there is no data loss, most log system users are tolerant of latency. This makes it easier for logging to be implemented with merely a Kubernetes service broker, relieving the cluster of all logging concerns.

The same could apply to Azure if it could entertain cloud extenders for the customization of resources. Another convenience that comes with this approach is the use of a hierarchical namespace rather than the type definitions we have with Azure Resource Manager templates today.  

Azure supports extensions, but the extensions are not a replacement for an Open Service Broker API implementation. There is also a difference between the existing OSBA API and the resolution of namespaces by an Azure OSBA; the latter would be a service that behaves like a DNS service.

Conclusion: This article discusses improvement possibilities that are not yet available in the public cloud. 

 



Tuesday, September 7, 2021

 Subscription Provisioning Automation Task:


Problem statement: The subscription provisioning automation task is encountered when isolation is required between billing profiles in the Azure public cloud cost management system. This example describes how to achieve it.

Description: The automation of this task relies on the notion that a subscription within Azure is a resource just like any other Azure Resource Management definition. Usually, it is created by the account owner who signed up with a Microsoft Customer Agreement or registered with Azure via the Azure Portal. A subscription is a way of organizing Azure resources and supports a billing method. Organizations create additional subscriptions when they expand their management group hierarchy to support business priorities. 

The task of creating a subscription can be automated programmatically. It requires the following parameters:

{
  "name": "SubscriptionProvisioning",
  "displayName": "__GroupSubscription_NAME__",
  "initialOwnerPrincipalId": "__BILLING_initialOwnerPrincipalId__",
  "workload": "DevTest",
  "billing": {
    "scope": "/billingAccounts/__BILLING_costManagement__:__BILLING_billingScope___2019-05-31/billingProfiles/__BILLING_Profile__/invoiceSections/__BILLING_Invoice__",
    "pcCode": "P7007777",
    "costCategory": "FX",
    "airsRegisteredUserPrincipalId": "__BILLING_airsRegisteredOwnerPrincipalId__"
  }
}

where the parameters are explained as follows:

“displayName”: the name with which the account will be displayed for finding it in the list of subscriptions pertaining to the account.

“initialOwnerPrincipalId”: the initial owner for the subscription who can add additional owners as necessary. Usually this is the same principal that is associated with the account in the first place.

“workload”: describes whether the environment is for production or development purposes.

“billing.scope”: refers to the invoice scope and is resolved by the cost management hierarchy comprising the cost management account, billing scope, billing profile, and invoice. All of these are object identifiers in the form of GUIDs. The billing invoice profile GUID corresponds to the service identifier in the service tree registrations maintained by Azure.

“airsRegisteredUserPrincipalId”: refers to the service principal who requested the cost management plus billing profile to be setup at https://azuremsregistration.microsoft.com/Request.aspx

 


With these parameters, it is a simple pseudo-resource registration step to provision a subscription automatically.

Conclusion: Changing business priorities can now be handled with isolation of assets via additional subscriptions provisioned with the help of the automation described here.