Wednesday, September 15, 2021

 Whitepaper continued... 

Introduction: This is a continuation of the whitepaper on the Host Integration Server introduced here. We now elaborate on the four components of the overall design – namely, APIs, Events, Messaging, and Orchestration.

Description:  

1) APIs 

APIs are a prerequisite for interactions between services. They facilitate functional programmatic access as well as automation. For example, a workflow orchestration might implement a complete business process by invoking different APIs in different applications, each of which carries out some part of that process. Most of these steps are backend processing for the Host Integration Server, and exposing APIs from this server enables downstream automation. 

Yet making APIs available for other software to call is harder than one might think, because questions such as the following need to be answered: 

How will calls from an application be rate-limited, authenticated, authorized, and audited? 

If the APIs are going to be published, which is the typical case, how will they be secured and hardened? 

If the API response time is critical, how can top performance be squeezed out of the API layer? 

How can API calls be monitored and analyzed so that patterns in API usage reveal trends that significantly impact the business? 

How can the APIs be made developer-friendly so that they are attractive for integration? Will documentation, code samples, and similar assets suffice? 

The benefit of hosting the service on Azure is that Azure API Management addresses all of these concerns. 
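As a rough sketch of how those concerns are offloaded, the commands below stand up an API Management instance and import an existing backend API behind it; the resource names, publisher details, and OpenAPI URL are placeholders, and the exact az apim options may vary by CLI version:

az apim create --name contoso-apim --resource-group contoso-rg \
  --publisher-name "Contoso" --publisher-email admin@contoso.com \
  --sku-name Developer

az apim api import --resource-group contoso-rg --service-name contoso-apim \
  --api-id host-integration --path his \
  --specification-format OpenApiJson \
  --specification-url https://example.com/his/openapi.json

Once the API is fronted this way, rate limiting, authentication, auditing, and usage analytics are applied as API Management policies and built-in features rather than as custom code in the service itself.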

2) Orchestration: 

Integrating applications commonly requires implementing all or part of a business process. It can involve connecting to a software-as-a-service implementation such as Salesforce CRM, updating on-premises data stored in SQL Server and Oracle databases, and invoking operations in an external application. These translate into specific business purposes and custom logic for the Host Integration Server. 

This logic does not need to be monolithic. One option is to build it in the traditional way using C#, JavaScript, Java, or some other programming language, which brings certain limitations around handling delays and retries. Another approach is to define the steps as a workflow so that they can be completed independently by any agent and with context. 

Azure Logic Apps facilitates this approach. Each logic app is a workflow that implements some process. This might be a system-to-system process, such as connecting two or more applications. Alternatively, it might be a user-to-system process, one that connects people with software and potentially has long delays. Logic Apps is designed to support either of these scenarios. 

Finally, a logic app can access all kinds of other applications. 


Tuesday, September 14, 2021

 

A whitepaper on writing a Host Integration Service for Azure Cloud:

Introduction: Among the services that move to the cloud from on-premises, those that serve to integrate external hardware and software with Azure Cloud services appeal to an organization’s bottom line. With the specific example of the Host Integration Server, this whitepaper describes the right way to implement it.

Description: A Host Integration Server empowers enterprise developers to write applications faster and with less custom code than writing directly against IBM host systems. There is no requirement to know the IBM host system, development tools, or infrastructure. It also eliminates the need to convert data to and from data sources because the application can now connect directly to business intelligence tools. It supports five integration areas: network, data, application, security, and messaging. 

There are five technology areas of integration for this service, and they are: 

1) Network Integration connects the application infrastructure to existing IBM mainframe and midrange system network architectures. This service connects desktops, devices, and servers to existing host systems while reducing costs. For example, the print service provides server-based printer emulation. 

2) Data Integration offers direct access to data stored in IBM DB2 database management systems. It includes multiple data clients and one data service, with support for a variety of data providers such as ADO.NET, OLE DB, and ODBC. 

3) Application Integration is provided by the Transaction Integrator, which allows enterprise developers to call business rules in the host mainframe. It comprises a plugin designer, an administration tool, and runtime components.  

4) Message Integration is provided by the WCF channel for IBM WebSphere MQ, which allows enterprise developers to send and receive MQ messages between WCF and heterogeneous or native IBM programs. 

5) Security Integration is provided by Enterprise Single Sign-On (ESSO) with Active Directory integration to secure IBM host systems. It maps enterprise credentials to host credentials, which are stored in SQL Server. These mappings can be retrieved at runtime from both the ESSO SDK and HIS features.

When these integration areas are reimplemented on the Azure control plane, we can leverage the Azure iPaaS solution, a set of cloud services that are essential for mission-critical enterprise integration. These services provide four core technologies that are required for cloud-based integration: a way to publish and manage application programming interfaces, a straightforward way to create and run integration/workflow logic with the help of orchestration, messaging that facilitates loose coupling between applications, and a technology that supports communication via events.

There are always other services that can be combined from other cloud technologies, but the above four iPaaS offerings, namely API Management, Logic Apps, Service Bus, and Event Grid, are sufficient to perform integration for services such as HIS. Both on-premises applications and cloud applications can be combined, which makes these offerings useful for integrations, especially with off-site devices and legacy enterprise investments.

Monday, September 13, 2021

 

Host Integration Server on Azure

Introduction: This article is a continuation of the series of articles starting with the description of the SignalR service, which was followed by a discussion on the Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure backbone network. Then we started reviewing a more public, internet-facing service such as the Bing API and the benefits it provides when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipelines, and followed up with a brief overview of Azure's support for the Kubernetes control plane via OSBA and the Azure Service Operator.

Description: Kubernetes is not the only external environment supported by Azure. While the possibilities are endless for customers to integrate their own solutions, some planning might be required for legacy enterprise computing. In this section, we introduce the Host Integration Server and describe what it takes to host a similar service on Azure. A brief introduction follows:

A Host Integration Server empowers enterprise developers to write applications faster and with less custom code than writing directly against IBM host systems. There is no requirement to know the IBM host system, development tools, or infrastructure. It also eliminates the need to convert data to and from data sources because the application can now connect directly to business intelligence tools.

There are five technology areas of integration:

1) Network Integration connects the application infrastructure to existing IBM mainframe and midrange system network architectures. This service connects desktops, devices, and servers to existing host systems while reducing costs. For example, the print service provides server-based printer emulation.

2) The data integration component offers direct access to data stored in IBM DB2 database management systems. It includes multiple data clients and one data service with support for a variety of data providers such as ADO.NET, OLE DB, and ODBC.

3) Application Integration is provided by the Transaction Integrator, which allows enterprise developers to call business rules in the host mainframe. It comprises a plugin designer, an administration tool, and runtime components.

4) Message Integration is provided by the WCF channel for IBM WebSphere MQ, which allows enterprise developers to send and receive MQ messages between WCF and heterogeneous or native IBM programs.

5) Enterprise Single Sign-On provides Active Directory integration to secure IBM host systems and maps enterprise credentials to host credentials, storing them in SQL Server. These mappings can be retrieved at runtime from both the ESSO SDK and HIS features.

Conclusion: Writing any service using Azure services as the backend is made easy by the programmability that comes with Azure via its REST APIs, SDKs, ARM manifests, CLI, and PowerShell support. But virtualizing external environments on Azure requires a little more planning than just the integration of network, data, application, message, and security.

Sunday, September 12, 2021

 

Azure Service Operator and Kubernetes service object

Introduction: In the previous article, we discussed the Kubernetes Open Service Broker API. We followed up that discussion with an introduction to Azure OSBA, which also complies with the open standard and introduces Azure resources to the Kubernetes control plane. Then we discussed the Azure Service Operator that provisions those resources via the Kubernetes control plane. Then we discussed Kustomization. Today we evaluate the public connectivity methods for the respective services. 

Description: Azure services that provide resources for the user often let the user choose the connectivity method from among public endpoints, private endpoints, and virtual networks. The preferred connectivity method is a public endpoint with a hardcoded public IP address and an assigned port; it is simple and popular. Private endpoints and virtual networks can be used together with the Azure Gateway and Azure Private Link. When the resources are provisioned via the Kubernetes control plane, as discussed with Azure OSBA and the Azure Service Operator, they retain these connectivity methods as the primary means of interaction with the resource.

A Kubernetes Service, on the other hand, takes a more robust approach with its ExternalName, LoadBalancer, NodePort, and ClusterIP types. If IP connectivity internal to the cluster is all that is required, a ClusterIP can be used. If the service needs to be exposed at a static port on each node, a NodePort can be used. When a LoadBalancer is used, routes to the NodePort and ClusterIP are created automatically. Finally, by using a CNAME record, an ExternalName service can be reached via DNS. In addition to all these, a Kubernetes Service can be exposed via an Ingress object. Ingress is not a Service type, but it acts as the entry point for the cluster and consolidates the routing rules into a single resource. This allows multiple services to be hosted behind the ingress resource and reached with a single IP address.
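To make this concrete, here is a minimal Service manifest of type LoadBalancer; the name, selector, and ports are placeholders, and changing spec.type to NodePort or ClusterIP switches between the connectivity methods just described:

apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: LoadBalancer
  selector:
    app: potpourri-app
  ports:
  - port: 80
    targetPort: 8080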

An ingress resource is defined, for example, for NGINX, where the HTTP and HTTPS ports are specified. The ingress resource is merely a declaration of the traffic policy. An ingress controller can enforce strict HTTPS by redirecting HTTP traffic to HTTPS. For the Ingress resource to work, the cluster must be deployed with an ingress controller. Notable ingress controllers include the AKS Application Gateway Ingress Controller, which configures the Azure Application Gateway, and the Ambassador API Gateway, an Envoy-based ingress controller.
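A minimal sketch of such an ingress resource for the NGINX ingress controller follows; the host name, TLS secret, and backend service are placeholders, and the annotation that forces the HTTP-to-HTTPS redirect is specific to the NGINX controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myservice-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - myservice.example.com
    secretName: myservice-tls
  rules:
  - host: myservice.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myservice
            port:
              number: 80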

The gateway also acts as an HTTP proxy. Any implementation of a gateway must maintain a registry of destination addresses. The advantages of an HTTP proxy include aggregation of usage. In terms of success and failure, there can be a detailed count of calls. The proxy could include all the features of a conventional HTTP service, such as client-based caller information, destination-based statistics, per-object statistics, categorization by cause, and many other features, along with a RESTful API for the data gathered. Because gateways solve problems without requiring the data to move, they are very appealing for many usages across companies that use cloud providers. Several vendors are racing to find this niche.

Conclusion: Load balancer, HTTP proxy, and Ingress resources are additional connectivity methods that can be added out of the box for some resources, making it easier to achieve interoperability between container orchestration systems and cloud service providers.

 

 

 

 

Saturday, September 11, 2021

 

Azure Service Operator and Kubernetes Kustomization

Introduction: In the previous article, we discussed the Kubernetes Open Service Broker API. We followed up that discussion with an introduction to Azure OSBA, which also complies with the open standard and introduces Azure resources to the Kubernetes control plane. Then we discussed the Azure Service Operator that provisions those resources via the Kubernetes control plane. Finally, we discuss Kustomization.

Description: Kustomize is a standalone tool for the Kubernetes platform that supports the management of objects using a kustomization file.
“kubectl kustomize <kustomization_directory>” allows us to preview the resources that would be kustomized. The apply verb can be used instead of the kustomize verb to apply them to the cluster.
It can help with generating resources, setting cross-cutting fields such as labels, annotations, and other metadata, and composing or customizing groups of resources.
Resources can be generated and infused with specific configuration and secrets using a configMap generator and a secret generator, respectively. For example, Kustomize can take an existing application.properties file and generate a configMap that can be applied to new resources.
Kustomization also allows us to override the registry for all images used in the containers of an application, as the sketch below shows.
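A minimal kustomization.yaml illustrating both capabilities might look like the following; the file names, literal values, and image coordinates are placeholders:

configMapGenerator:
- name: app-config
  files:
  - application.properties
secretGenerator:
- name: app-secret
  literals:
  - DB_PASSWORD=changeme
images:
- name: nginx
  newName: registry.example.com/library/nginx
  newTag: "1.21"
resources:
- deployment.yaml
- service.yaml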

There are two advantages to using it. First, it allows us to configure the individual components of the application without requiring changes in them. Second, it allows us to combine components from different sources and overlay them or even override certain configurations. The kustomize tool provides this feature. Kustomize can add configMaps and secrets to the deployments using their specific generators, respectively.
Kustomize is a static declaration. We can add labels across components. We can choose groups of Kubernetes resources dynamically using selectors, but they have to be declared as YAML. This kustomization YAML is usually stored alongside the manifests and applied to existing components, so it refers to other YAML files. The manifests folder specifies the location of the kustomization files and is passed as a command-line parameter to kubectl commands with the -k option.
For example, we can say:
commonLabels:
  app: potpourri-app
resources:
- deployment.yaml
- service.yaml
We can even add new resources such as a Kubernetes Secret.
This comes in handy for injecting usernames and passwords for, say, a database application at install and uninstall time with the help of a resource declared in secret.yaml. It just won't detect a virus to force an uninstall of the product; those actions remain with the user.
Kustomize also helps us to do overlays and overrides. Overlay means we change parameters for one or more existing components. Override means we take an existing YAML and change portions of it, such as changing the service to be of type LoadBalancer instead of NodePort, or vice versa for developer builds. In this case, we provide just enough information to look up the declaration we want to modify and specify the modification. For example:
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  type: NodePort
If the above service type modification were persisted side by side as prod and dev environments, it would be called an overlay, as sketched below.
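A minimal overlay that carries the patch above for a dev environment might be organized as follows, assuming a conventional base/overlays directory layout with placeholder paths; the patched Service fragment lives in service-patch.yaml:

# overlays/dev/kustomization.yaml
resources:
- ../../base
patchesStrategicMerge:
- service-patch.yaml

Running kubectl apply -k overlays/dev then deploys the base manifests with the dev-specific service type, while a sibling overlays/prod directory can keep the original.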
Finally, persisting the kustomization files is not strictly required, and we can run:
kustomize build manifests_folder | kubectl apply -f -
or
kubectl apply -k manifests_folder
One of the interesting applications of Kustomization is the use of internal Docker registries.
We use the secretGenerator to create the secret for the registry, which typically has the
docker-server, docker-username, docker-password, and docker-email values and a secret type of docker-registry.
This secret can take environment variables, and the kustomization file can even be stored in source control.
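As a hedged sketch, the same registry secret can be created imperatively with kubectl; the registry address and credentials below are placeholders, and the environment variables are assumed to be set beforehand:

kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=$REGISTRY_USER \
  --docker-password=$REGISTRY_PASSWORD \
  --docker-email=dev@example.com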

Azure has native Kustomization for its manifests using parameters, variables, and built-in functions. These vary from the Kubernetes side, but by exposing Azure resources to the Kubernetes control plane, we can leverage all the functionality native to Kubernetes.

 

 

Friday, September 10, 2021

Azure Service Operator  

Introduction: In the previous article, we discussed the Kubernetes Open Service Broker API. We followed up that discussion with an introduction to Azure OSBA, which also complies with the open standard. This article is about the Azure Service Operator.  

Description: Azure Service Operator is an open-source project that exposes Azure services as Kubernetes operators. Exposing Azure resources on the Kubernetes control plane is desirable for several reasons, and if the exponential growth in popularity of the Kubernetes infrastructure is any indication, those reasons must hold. Azure resources are also managed in the cloud via Azure Resource Manager, so the concepts of resource manifest and state reconciliation are similar. Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. It is often a strategic decision for any company because it decouples the application from the hosts, so that the same application can work elsewhere with minimal disruption to its use.   

An operator is a way of automating the deployment of an application on the Kubernetes cluster. It is written with the help of template source code generated by a tool called the Operator SDK. The tool builds three components: custom resources, APIs, and controllers. The custom resources are usually the declarative definitions of the Kubernetes resources required by the application and their grouping as suited for the deployment of the application. The API is for the custom service required to deploy the application, and the controller watches this service.  
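For example, with the Azure Service Operator installed, an Azure resource group can be requested declaratively with a manifest along these lines; this is a minimal sketch, and the API group and version shown follow the operator's v1alpha1 resources, so they may differ in later releases:

apiVersion: azure.microsoft.com/v1alpha1
kind: ResourceGroup
metadata:
  name: aso-sample-rg
spec:
  location: westus2

Applying this manifest with kubectl causes the operator's controller to reconcile the desired state by creating the resource group through Azure Resource Manager.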
Kubernetes does not limit the type of applications that are supported. It provides building blocks to the application. Containers only help isolate modules of the application into well-defined boundaries that can run with operating-system-level virtualization.  
Kubernetes exposes a set of APIs that are used internally by the command line tool called kubectl. It is also used externally by other applications. This API follows the regular REST convention and is also versioned with path qualifiers such as v1, v1alpha1, or v1beta1 – the latter is used with extensions to the APIs.  
Kubernetes supports Imperative commands, imperative object configuration, and declarative object configuration. These are different approaches to manage objects. The first approach operates on live objects and is recommended for a development environment. The latter two are configurations that operate on individual files or a set of files and these are better suited for production environments.  
Namespaces scope the names of resources; they cannot be nested within one another. They provide a means to divide resources between multiple users.  
Most Kubernetes resources, such as pods, services, replication controllers, and others, are in some namespace. However, low-level resources such as nodes and persistent volumes are not in any namespace.  
Kubernetes control plane communication is bidirectional between the cluster and its master. The master hosts an API server that is configured to listen for remote connections. The API server reaches out to the kubelets to fetch logs, attach to running pods, and provide the port-forwarding functionality. The API server manages nodes, pods, and services.   

When Azure resources are exposed on the Kubernetes control plane, we have all the advantages of continuous state reconciliation, networking best practices, and application portability across hosts, environments, and clouds. Azure provides some of the best backend services for an application but instead of managing them directly, an application can delegate its management to the Kubernetes control plane.  

  
Conclusion: Azure OSBA expands the service catalog and enables Azure resources to be recognized as Kubernetes custom resources. The Azure Service Operator takes it a step further and enables those resources to be managed via the Kubernetes control plane.  

  

 

 

Thursday, September 9, 2021

Introduction: In the previous post, we discussed the Kubernetes Open Service Broker API. We follow up that discussion with an introduction to Azure OSBA, which also complies with the open standard.  

Description: The standard meets the demand for application connectivity to the wide variety of services in the Azure marketplace. OSBA is simple and flexible. It allows applications to provision commodity products such as a MySQL database instance or Azure’s own multi-model database. OSB for Azure is a connector that can work with Kubernetes, Cloud Foundry, or OpenShift in Azure. At the heart of OSBA is a service catalog that lists the services corresponding to the resource types. Azure invested in the Kubernetes service catalog so that cloud-native services become visible in that container orchestration framework. One of these investments is a command-line interface for the Kubernetes service catalog, aka `svcat`, which enables Azure services to be browsed and matched to resource types. OSBA provisions and binds Azure services in Kubernetes as well as Cloud Foundry and OpenShift. Its support for Azure Service Fabric is coming soon. All these cloud-native environments rely on OSBA to provision Azure resources such as SQL Database, MySQL, PostgreSQL, or Azure Cosmos DB. OSBA enables mission-critical applications to be connected to enterprise-grade backend services that are hosted outside the cluster and are not governed by the resource constraints of the cluster. By bringing the container orchestration framework to work with the scale and scope of a backend cloud service, applications can now leverage the best of both worlds.  
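As a sketch of the developer experience with svcat, the commands below browse the catalog, provision an instance, and bind it; the class name, plan, and parameters are illustrative and depend on what OSBA registers in the catalog:

svcat get classes
svcat provision my-mysql --class azure-mysql --plan basic --param location=eastus
svcat bind my-mysql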

The level of integration does not stop here; only a handful of Azure services have been made available on the container orchestration frameworks so far. The plan is to expand to as many Azure services as possible, as well as to communicate with the Kubernetes community and align the capabilities of the service catalog with the behavior that customers expect. Finally, Microsoft plans to have a generic way of describing services by their criteria rather than by specific resource types, so that any matching service in the OSBA registry can be used to qualify and meet the requirements of the customer's application.  

As with any API layer, OSBA is expected to be resilient and scalable, with support for multiple concurrent requests and fully asynchronous processing that can seamlessly resume even if one replica goes down. OSBA is, therefore, suitable for a cloud-native environment like Kubernetes, Cloud Foundry, OpenShift, or Service Fabric. The ease of use that comes with the service catalog is not sacrificed for the large variety of resources that can be provisioned with OSBA.  

Support for OSBA in Kubernetes varies somewhat from OpenShift: although the notion of provisioning Azure resources via a command-line interface remains the same, the project template in OpenShift is considerably different from the resource types registered with the service catalog in Kubernetes. The service catalog stores a set of entities called cluster service classes that describe the services that handle those types of resources. It stores another set of entities called service plans that are variations of those services. Using a service class and a plan, a resource can be provisioned outside the cluster; in this case, the resource happens to be provisioned in the cloud. There are no transactional guarantees expected from the create, update, delete, and list operations on resources from the resource provider, since they span the cluster and the cloud; for this purpose, state is maintained and the calls are reentrant and idempotent. When the user chooses a service class and a plan, the service instance is bound to a set of resources, which enables applications to use them as if they were local to the cluster. The process of deleting service instances requires that the bindings be removed and the resources de-provisioned. It is possible to have references left in the service catalog when the binding is broken and the cleanup does not occur, but this can be handled afterward. Since subsequent service instances create their own bindings, this has little or no impact on an application's reliance on one or more service instances. The service binding is a link between the service instance and the application; it holds value so long as the service instance is used. The service class and the service plan can have parameters passed in to enable them to function in different modes and environments. 
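The same flow can be expressed declaratively with the service catalog resources. A minimal sketch follows, where the class name, plan, and parameters are illustrative and depend on the OSBA catalog:

apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: example-mysql
spec:
  clusterServiceClassExternalName: azure-mysql
  clusterServicePlanExternalName: basic
  parameters:
    location: eastus
    resourceGroup: contoso-rg
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: example-mysql-binding
spec:
  instanceRef:
    name: example-mysql

Deleting the ServiceBinding removes the injected credentials, and deleting the ServiceInstance de-provisions the backing Azure resource, mirroring the lifecycle described above.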

Conclusion: Together, OSBA and the service catalog enable tremendous choices for an application to delegate its dependencies to backend services.