Thursday, October 7, 2021

 

Azure Functions:

Introduction: This article is a continuation of the series of articles starting with the description of SignalR service which was followed by a discussion on Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure Backbone network. Then we started reviewing a more public internet-facing service such as the Bing API and the benefits it provided when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipeline and followed it up with a brief overview of the Azure services support for the Kubernetes Control Plane via the OSBA and Azure operator. Then we followed it with an example of an Azure integration service for Host Integration Server (HIS). We started discussing the Azure FHIR service next. We were reviewing its search capabilities, followed by its regulatory compliance and security policies. Most recently, we discussed Azure Synapse Link for Cosmos DB. This article talks about connecting all Azure functionalities via extensions that do not affect existing deployments with the help of Azure Functions. Specifically, we discuss sending push notifications from an Azure Function using a Notification Hub binding.

Description: The Notification Hub output binding for Azure Functions enables us to send push notifications by using Azure Notification Hubs. Azure Functions supports output bindings for Notification Hubs. There are many languages to choose from for writing the Azure Function, but in all these choices, the Notification Hub must be configured for a Platform Notification Service (PNS). The client application receives the push notifications from the Notification Hub.

Templates can be used with notifications, which enables a client application to specify the exact format of the notifications it wants to receive. This helps with a platform-agnostic backend, personalized notifications, client-version independence, and easy localization. A notification sent using a template registration contains a message placeholder in the template.

For example:

using System;
using System.Threading.Tasks;
using System.Collections.Generic;

public static void Run(string myQueueItem, out IDictionary<string, string> notification, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
    // Populate the template placeholder; the output binding sends the notification.
    notification = GetTemplateProperties(myQueueItem);
}

private static IDictionary<string, string> GetTemplateProperties(string message)
{
    Dictionary<string, string> templateProperties = new Dictionary<string, string>();
    templateProperties["message"] = message;
    return templateProperties;
}
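For a C# script function like the sample above (which targets the v1 runtime, as the TraceWriter type suggests), the bindings are declared in a function.json file next to the code. The following is only a sketch: the queue name, hub name, and connection-setting names are hypothetical, and omitting the platform property lets the binding send template notifications.

```json
{
  "bindings": [
    {
      "type": "queueTrigger",
      "direction": "in",
      "name": "myQueueItem",
      "queueName": "myqueue-items",
      "connection": "AzureWebJobsStorage"
    },
    {
      "type": "notificationHub",
      "direction": "out",
      "name": "notification",
      "tagExpression": "",
      "hubName": "my-notification-hub",
      "connection": "MyHubConnectionString"
    }
  ]
}
```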

Wednesday, October 6, 2021

 Azure Functions:

Introduction: This article is a continuation of the series of articles starting with the description of SignalR service which was followed by a discussion on Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure Backbone network. Then we started reviewing a more public internet-facing service such as the Bing API and the benefits it provided when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipeline and followed it up with a brief overview of the Azure services support for the Kubernetes Control Plane via the OSBA and Azure operator. Then we followed it with an example of an Azure integration service for Host Integration Server (HIS). We started discussing the Azure FHIR service next. We were reviewing its search capabilities, followed by its regulatory compliance and security policies. Most recently, we discussed Azure Synapse Link for Cosmos DB. This article is about connecting all Azure functionalities via extensions that do not affect existing deployments. Specifically, we discuss Azure Functions.

Description: Azure Functions serves compute-on-demand. Running many small blocks of code instead of a large monolithic code base improves application modularity and maintenance. These code blocks are called functions. They react to critical events. When there are many requests, the functions can scale out instances as necessary, and once the traffic has died down, they can scale back in. All the compute resources come from Azure Functions, so as a developer of Azure Functions there is no need to be concerned about infrastructure and operations. When the function is written, a hosting plan must be chosen. There are three basic hosting plans available for Azure Functions – the Consumption plan, the Premium plan, and the Dedicated (App Service) plan. The hosting plan decides how the function is scaled, what resources are available to each function app instance, and the support for connectivity methods such as Azure virtual network connectivity. Azure Functions can even be Kubernetes-based. Such a deployment is made up of two key components – a runtime and a scale controller. The function runtime runs and executes the code. The scale controller monitors the rate of events that are targeting the functions. When the functions are hosted on Kubernetes, Kubernetes-based Event Driven Autoscaling (KEDA) enables metrics to be published so that the Kubernetes autoscaler can scale from 0 to n instances. The function runtime is hosted in a Docker container. KEDA supports Azure Functions triggers in the form of Azure Storage queues, Azure Service Bus, Azure Event Hubs, Apache Kafka, and RabbitMQ queues. HTTP triggers are not directly managed by KEDA.
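As a hedged illustration of the KEDA path (the deployment name, queue name, and replica counts are assumptions for this sketch), a ScaledObject tells KEDA to scale the deployment hosting the function runtime container based on an Azure Storage queue:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-function-scaler
spec:
  scaleTargetRef:
    name: my-function-deployment   # the Deployment hosting the function runtime container
  minReplicaCount: 0               # scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: azure-queue
      metadata:
        queueName: myqueue-items
        connectionFromEnv: AzureWebJobsStorage
```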
Visual Studio provides a template for writing Azure Functions. It introduces a host.json file that allows specifying the settings for the host. This includes logging and Application Insights settings such as samplingSettings. Many of the host settings determine infrastructure requirements from the logic in the Azure Function. Judicious use of these settings promotes the health and maintenance of the Azure Function. The corresponding Azure Functions project file declares the setting for AzureFunctionsVersion. An HTTP-triggered Azure Function is implemented using the method signature:

public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
    ILogger log)
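The host.json settings mentioned above can be sketched as follows. This is an illustrative fragment rather than a complete configuration, and the sampling values are assumptions:

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20
      }
    }
  }
}
```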

Sample code: https://1drv.ms/u/s!Ashlm-Nw-wnWhKYdp9QwpiEacjprGQ?e=1LD6jY

 


Tuesday, October 5, 2021

  Verification of declarative resource templates: 

Problem Statement:  

Azure public cloud resource manifests follow a convention for all resource types. They are used by the Resource Manager to reconcile a resource instance to what’s described in the manifest. The language used to author the manifests is one that makes use of predefined resource templates, built-ins, system variables, scopes and their bindings, and configurations. The verification of the correctness of the authored manifests is done by the Azure Resource Manager. Similarly, the onboarding of a resource provider to the Azure public cloud is done by deployment templates, also called deployment artifacts. These are also validated, but those validations are limited to general-purpose rules, while the resource provider, as well as the platform that onboards a set of services to the cloud, may require its own set of rules. This article explores the design of a validation tool for that purpose. 

Solution:  
The difference between validating deployment manifests and resource manifests is that, for resources, the final state is known beforehand and described in no uncertain terms by the manifest. The validation of the resource templates is therefore the simpler of the two. The deployment templates, on the other hand, describe workflows that are expressed as a set of orchestrated steps. The validation of a workflow goes beyond the syntax and semantics of the steps. Azure deployments are required to be idempotent and declarative so that they can aspire to be zero-touch. The more validation performed on these templates, the more predictable the workflow for their targets. 

These additional validations, or custom validations as they are referred to, can be performed at various stages of the processing, beginning before any actions are taken and ending after the processing completes. This article discusses the offline validation of a deployment as described by its templates. The runtime validation can be assumed to be taken care of by validation steps interspersed with the actual steps. Consequently, we discuss only the initial and the final evaluation – when the templates are received and when they have been interpreted. 

 

The static evaluation involves pattern matching. Each rule in the rule set for static evaluation can be described by patterns of text whose occurrence indicates a condition to be flagged. These patterns can be easily described by regular expressions. The validation, therefore, runs through the list of regular expressions for all the text in the templates, which usually comprises a set of files. Care must be taken to exclude those configuration files that have hardcoded values and literals, which are used towards the substitution of the variables and template intrinsics; they might be evaluated towards the final pass, after the templates have been interpreted. 

The final evaluation requires this interpretation to proceed when the key values defined in the configurations and settings have been substituted for the variables as part of the binding for a given scope. The scopes can be evaluated one by one and the substitution performed repeatedly from different scope bindings. They might also involve multi-pass evaluations if the values of a given pass contain other variables that need to be resolved. Since the scopes are usually isolated from each other, the set of variables, their values, and the number of passes are finite, resulting in a cyclical scope binding and resolution that repeats until there are no more variables. This final set of resolved templates can be written out and evaluated against the rules at this stage in the same way that the initial evaluation was done. This concludes the design of the tool as a two-stage static evaluation of a distinct set of rules. 
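The two-stage design above can be sketched in a few lines of Python. This is a minimal illustration, not the actual tool: the rule patterns and the ${name} variable syntax are assumptions made for the sketch (real ARM templates use parameters() and variables() expressions), but the shape – an initial regex pass on the raw text, a fixpoint of scope-binding substitutions, and a final pass on the resolved text – follows the description.

```python
import re

# Hypothetical rule set: each rule is a (name, regex) pair whose match flags a violation.
RULES = [
    ("no-plaintext-secrets", re.compile(r"password\s*=\s*['\"]\w+['\"]", re.IGNORECASE)),
    ("no-hardcoded-subscription", re.compile(r"/subscriptions/[0-9a-f-]{36}", re.IGNORECASE)),
]

VAR_PATTERN = re.compile(r"\$\{(\w+)\}")  # assumed ${name} variable syntax

def static_evaluate(text):
    """Stage 1: flag any rule whose pattern occurs in the template text."""
    return [name for name, pattern in RULES if pattern.search(text)]

def resolve(text, bindings, max_passes=10):
    """Stage 2: substitute scope bindings repeatedly until no variables remain
    (multi-pass, since a substituted value may itself contain variables)."""
    for _ in range(max_passes):
        resolved = VAR_PATTERN.sub(lambda m: bindings.get(m.group(1), m.group(0)), text)
        if resolved == text:  # fixpoint reached: nothing left to substitute
            break
        text = resolved
    return text

def validate(template, bindings):
    findings = static_evaluate(template)   # initial pass on the raw template text
    final = resolve(template, bindings)    # bind and resolve variables per scope
    findings += static_evaluate(final)     # final pass on the resolved text
    return sorted(set(findings)), final
```

Note that a violation hidden behind two levels of indirection is caught only by the final pass, which is exactly why the resolved templates must be evaluated again.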

Sample two-phase implementation: https://1drv.ms/u/s!Ashlm-Nw-wnWhKVmHyTDt3GspSXKqQ?e=PYLr9Q 

 

Monday, October 4, 2021

Writing an Azure Resource Provider:

 


Introduction: Azure offers a control plane for all resources that can be deployed to the cloud, and services take advantage of it both for themselves and their customers. While Azure Functions allow extensions via new resources, the Azure Resource Provider and ARM APIs provide extensions via existing resources. This eliminates the need to introduce new processes around new resources and is a significant win for reusability and user convenience. New and existing resources are not the only way to write extensions; there are other options, such as writing them in the Azure Store or via other control planes such as container orchestration frameworks and third-party platforms. This article focuses on the ARM API.

Description: The {resource-provider}/{resource-type} addition to the inventory in Azure is required for tasks such as deployment because it retrieves information that can assist with the orchestration of steps for deployment. As the resources mature and the SKUs evolve, the resource APIs are revised and the client must keep up.

When the deployment actions need to be expanded, revising the API is not sufficient. New capabilities must be added to the resource. One way to do this is to write an extension resource that modifies another resource, such as to assign a role to a resource. In this case, the role assignment is an extension resource type.

Just like with any actions surrounding Azure resources via the Azure Resource Manager templates, a resource must be added to the extension resource template at a proper scope such as resource group, subscription, management group, or tenant. For example, the resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' can be declared to add a lock at the resource-group level. A lock prevents actions that can usually be taken on a resource, with the provision for overrides. This is sometimes necessary when authoring policies surrounding resources.

A ‘scope’ property allows an extension resource to target another resource. It specifies the resource to which this extension applies. It is a root property of the extension resource. An extension resource is for custom actions around the resource that are not generally available from the resource. It is different from a child resource.
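As an illustrative Bicep sketch (the storage account and lock names here are hypothetical), an extension resource targets another resource through this scope property:

```bicep
// Reference an existing resource in the current resource group.
resource storage 'Microsoft.Storage/storageAccounts@2021-04-01' existing = {
  name: 'examplestorage'
}

// Extension resource: a lock whose scope targets the storage account.
resource storageLock 'Microsoft.Authorization/locks@2016-09-01' = {
  name: 'storageLock'
  scope: storage
  properties: {
    level: 'CanNotDelete'
    notes: 'Protect the storage account from accidental deletion.'
  }
}
```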

A child resource exists only within the context of another resource. Each parent resource can accept only certain resource types as child resource types. For example, the resource type 'Microsoft.Compute/virtualMachineScaleSets/extensions@2021-04-01' can be referenced only within the context of a virtual machine scale set and not outside a VMSS. The hierarchy of parent-child resource types is registered before they can be used. An extension resource, by contrast, can extend the capabilities of another resource.

Resources and their extensions can be written in Bicep and ARM templates. Bicep is a domain-specific language that was recently developed for authoring ARM templates with an easier syntax; it provides more concise syntax and improved type safety, but Bicep files compile to ARM templates, which are the de facto standard to declare and use Azure resources and are supported by the unified Azure Resource Manager. Either format can be used for ARM templates and resource deployments. While JSON is ubiquitous, Bicep can only be used with Resource Manager templates. Bicep has tooling that converts Bicep templates into standard JSON templates for ARM resources by a process called transpilation. This conversion happens automatically, but it can also be manually invoked. Bicep is succinct, so it provides a further incentive, and there is a playground to try it: Bicep Playground 0.4.1-ge2387595c9 (windows.net).

Sunday, October 3, 2021

Azure Functions as natural extensions for existing applications and integrations


Introduction: This article is a continuation of the series of articles starting with the description of SignalR service which was followed by a discussion on Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure Backbone network. Then we started reviewing a more public internet-facing service such as the Bing API and the benefits it provided when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipeline and followed it up with a brief overview of the Azure services support for the Kubernetes Control Plane via the OSBA and Azure operator. Then we followed it with an example of an Azure integration service for Host Integration Server (HIS). We started discussing the Azure FHIR service next. We were reviewing its search capabilities, followed by its regulatory compliance and security policies. Most recently, we discussed Azure Synapse Link for Cosmos DB. This article is about connecting all Azure functionalities via extensions that do not affect existing deployments. Specifically, we discuss Azure Functions.

Description: 

Azure does not limit resources to being commissioned only on the public cloud control plane. It is quite flexible and extensible in nature, providing vast integration features that allow on-premises resources to be connected to the cloud, resources to be made available to an instance of a Kubernetes control plane, and resources that can span the Apache stack and a variety of ecosystems. Within the public cloud, the services proliferate to well over a hundred different types and even more SKUs. When existing deployments and assets become immovable or untouchable, it might be mind-boggling to find the right home for extra functionality or a module. Azure Functions fills this gap very well without any impact or hindrance to ongoing operations.

For a sample usage of this resource and its implementation, please refer to this catalog and sample code: https://1drv.ms/u/s!Ashlm-Nw-wnWhKYdp9QwpiEacjprGQ?e=1LD6jY. It takes the use case of segregating Azure resources under resource groups, which are within subscriptions and, in turn, accounts on Azure. This hierarchy is very useful for billing and cost management, and since the allocations are isolated by subscription, it provides a technique to set a property on those subscriptions so that they can be managed independently from others.

The Azure Function takes very few resources but encapsulates logic that is specific and required for integration across other existing resources. This provides architects with a low-cost, production-grade option for deployments that are lightweight and ready for mission-critical purposes.

It is also important to recognize where Azure Functions do not apply. The Service Bus message-broker, message-processing architecture with continuous events is a paradigm that is well suited for IoT traffic. Background processing in heavy online transaction systems can also have multiple Azure Functions as delegates instead of actual services, but it is harder to study the flow across discrete functions as compared to a service, where all the components log to the same destination and the service represents a system-driven top-down architecture rather than a bottom-up convenience. If there is a need to run a user-defined action that is outside the scope of any RP or ARM template, then there is an option now to add Functions as part of an orchestrated rollout execution. If the resource provider can provide that functionality and manifest via an ARM template, the need to have the logic in its own function is eliminated. Similarly, hundreds of extensions are desirable in certain cases, while they can quickly get out of hand in others.

 


Saturday, October 2, 2021

 

Azure Synapse Link for Cosmos DB ...

Introduction: This article is a continuation of the series of articles starting with the description of SignalR service which was followed by a discussion on Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure Backbone network. Then we started reviewing a more public internet-facing service such as the Bing API and the benefits it provided when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipeline and followed it up with a brief overview of the Azure services support for the Kubernetes Control Plane via the OSBA and Azure operator. Then we followed it with an example of an Azure integration service for Host Integration Server (HIS). We started discussing the Azure FHIR service next. We were reviewing its search capabilities, followed by its regulatory compliance and security policies. In this article, we discuss Azure Synapse Link for Cosmos DB.

Description: 

The analytics use case for any storage is usually understated but heavily exercised, because the data, no matter its size, is useful only if there is usage. The Azure Synapse Link for Azure Cosmos DB creates a tight, seamless integration between Azure Cosmos DB and Azure Synapse Analytics, which is the de facto standard enterprise analytics service. Azure Synapse accelerates insights across data warehouses and big data systems. It brings together the best of the SQL technologies used in data warehousing, the Spark technologies used for big data, pipelines for data integration and ETL/ELT transformations, and deep integration with Power BI, Azure ML, and Cosmos DB. What makes Synapse popular is its support for SQL queries, which is offered both in serverless and dedicated resource models with predictable performance and costs. It has built-in streaming capabilities to write cloud data to tables, and it also helps to query data in stores like Azure Data Lake Storage and Azure Cosmos DB without having to run import tasks. Its integration with Apache Spark eliminates the need to manage clusters and has fast startup and aggressive autoscaling.

Azure Synapse Link for Azure Cosmos DB is a cloud-native hybrid transactional and analytical processing (HTAP) capability that supports near real-time analytics over operational data in Azure Cosmos DB. It achieves this without impacting the performance of online transactional processing in Cosmos DB. This is different from the link between Azure Cosmos DB’s internal transactional and analytical stores, which is set up out-of-the-box for auto-sync purposes. The cloud-native HTAP connects the whole of Cosmos DB with the Spark/SQL-capable Azure Synapse Analytics. The benefits are huge when it comes to eliminating the import/export or ETL operations necessary to take data out of multiple operational data sources and eliminating the use of traditional warehouses completely for a faster, streamlined, and highly scalable analytics experience. It is, however, not recommended for cases where traditional data warehouse requirements such as high concurrency, workload management, and persistence of aggregates are needed across multiple data sources.

 

Friday, October 1, 2021

 

<#

# This is a PowerShell script to demonstrate a failover group

#>

 

# Initialize

$primaryServer = "< The name of the primary server >"

$backupServer = "< DesiredBackupServerName >"

$backupLocation = "< a different location than the (existing) primary Server >"

$failoverGroup = "< desired failover group Name >" # Must be globally unique

$database = " < the name of the database > "

$admin = " < the admin username > "

$password = " < the password for the admin > "

$resourceGroup = " < the name of the resource group from the primary server > "

# Create a backup server in the failover region

New-AzSqlServer -ResourceGroupName $resourceGroup -ServerName $backupServer -Location $backupLocation -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $admin, $(ConvertTo-SecureString -String $password -AsPlainText -Force))

 

# Create a failover group containing the primary and backup servers

New-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $primaryServer -PartnerServerName $backupServer -FailoverGroupName $failoverGroup -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1

 

# Add the database to the failover group

Get-AzSqlDatabase -ResourceGroupName $resourceGroup -ServerName $primaryServer -DatabaseName $database | Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName $resourceGroup -ServerName $primaryServer -FailoverGroupName $failoverGroup

 

# Verify the failover group properties

Get-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $primaryServer

# Either the primary or the backup server is fine here

 

# Verify the current role of the backup server (should be "secondary")

(Get-AzSqlDatabaseFailoverGroup -FailoverGroupName $failoverGroup -ResourceGroupName $resourceGroup -ServerName $backupServer).ReplicationRole

 

# Initiate a manual failover

Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $backupServer -FailoverGroupName $failoverGroup

 

# Verify the current role again (should be "primary")

 

# Initiate a manual failback

Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $primaryServer -FailoverGroupName $failoverGroup

 

# Verify the current role again (should be back to "secondary")

 

# Failover group created and tested!