Tuesday, October 5, 2021

  Verification of declarative resource templates: 

Problem Statement:  

Azure public cloud resource manifests follow a common convention across all resource types. The resource manager uses them to reconcile a resource instance to what is described in its manifest. The language used to author the manifests makes use of predefined resource templates, built-ins, system variables, scopes and their bindings, and configurations. The Azure Resource Manager verifies the correctness of the authored manifests. Similarly, the onboarding of a resource provider to the Azure public cloud is done through deployment templates, also called deployment artifacts. These are validated as well, but the rules are general-purpose, whereas the resource provider, and the platform that onboards a set of services to the cloud, may each require their own set of rules. This article explores the design of a validation tool for that purpose.

Solution:  
The difference between validating resource manifests and deployment manifests is that a resource manifest describes the final state of the resources beforehand and in no uncertain terms, so the validation of resource templates is the simpler of the two. The deployment templates, on the other hand, describe workflows expressed as a set of orchestrated steps, and their validation goes beyond the syntax and semantics of the individual steps. Azure deployments are required to be idempotent and declarative so that they can aspire to be zero-touch. The more validation performed on these templates, the more predictable the workflow for their targets.

These additional validations, or custom validations as they are referred to, can be performed at various stages of processing, beginning before any actions are taken and ending after the processing completes. This article discusses the offline validation of deployments as described by their templates. Runtime validation can be assumed to be taken care of by validation steps interspersed with the actual steps. Consequently, we discuss only the initial and the final evaluation: when the templates are received and when they have been interpreted.

 

The static evaluation involves pattern matching. Each rule in the rule set for static evaluation can be described by a pattern of text whose occurrence indicates a condition to be flagged. These patterns can easily be described by regular expressions. The validation, therefore, runs through the list of regular expressions against all the text in the templates, which usually comprises a set of files. Care must be taken to exclude those configuration files that hold hardcoded values and literals used for substituting the variables and template intrinsics at this time. They can be evaluated in the final pass after the templates have been interpreted.
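
A minimal sketch of this pass in PowerShell might look like the following; the rule set, file layout, and exclusion filter are assumptions for illustration:

# Hypothetical rule set: each rule pairs a regular expression with a message to flag.
$rules = @(
    @{ Pattern = 'password\s*='; Message = 'Possible hardcoded credential' },
    @{ Pattern = 'http://'     ; Message = 'Insecure (non-TLS) endpoint' },
    @{ Pattern = '\bTODO\b'    ; Message = 'Unresolved placeholder' }
)

# Scan every template file, excluding configuration files that hold hardcoded
# values and literals meant for later substitution.
Get-ChildItem -Path .\templates -Recurse -Include *.json |
    Where-Object { $_.Name -notlike '*config*' } |
    ForEach-Object {
        foreach ($rule in $rules) {
            Select-String -Path $_.FullName -Pattern $rule.Pattern |
                ForEach-Object { "{0}:{1} {2}" -f $_.Path, $_.LineNumber, $rule.Message }
        }
    }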

The final evaluation requires this interpretation to proceed once the key values defined in the configurations and settings have been substituted for the variables as part of the binding for a given scope. The scopes can be evaluated one by one and the substitution performed repeatedly from different scope bindings. This might involve multi-pass evaluation if the values resolved in a given pass contain other variables that still need to be resolved. Since the scopes are usually isolated from each other, the set of variables, their values, and the number of passes are finite, resulting in a cycle of scope binding and resolution that repeats until there are no more variables. This final set of resolved templates can be written out and evaluated against the rules for this stage in the same way the initial evaluation was done. This concludes the design of the tool as a two-stage static evaluation of distinct sets of rules.
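
The cyclical binding and resolution can be sketched as repeated substitution until a pass makes no further progress. The {{name}} variable syntax and the flat key-value scope below are assumptions for illustration:

# Hypothetical scope binding: variable name -> value; values may themselves
# reference other variables, which forces additional passes.
$scope = @{
    'region'   = 'eastus'
    'name'     = 'svc-{{region}}'
    'endpoint' = 'https://{{name}}.example.com'
}

function Resolve-Template {
    param([string]$Text, [hashtable]$Bindings)
    do {
        $before = $Text
        foreach ($key in $Bindings.Keys) {
            $Text = $Text -replace [regex]::Escape("{{$key}}"), $Bindings[$key]
        }
    } while ($Text -ne $before)   # repeat until a full pass substitutes nothing more
    return $Text
}

Resolve-Template -Text 'deploying to {{endpoint}}' -Bindings $scope
# -> deploying to https://svc-eastus.example.com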

Sample two-phase implementation: https://1drv.ms/u/s!Ashlm-Nw-wnWhKVmHyTDt3GspSXKqQ?e=PYLr9Q 

 

Monday, October 4, 2021

Writing an Azure Resource Provider:

 


Introduction: Azure offers a control plane for all resources that can be deployed to the cloud, and services take advantage of it both for themselves and for their customers. While Azure Functions allow extensions via new resources, the Azure Resource Provider and ARM APIs provide extensions via existing resources. This eliminates the need to introduce new processes around new resources and is a significant win for reusability and user convenience. New and existing resources are not the only ways to write extensions; there are other options, such as publishing to the Azure Store or using other control planes such as container orchestration frameworks and third-party platforms. This article focuses on the ARM API.

Description:  The {resource-provider}/{resource-type} addition to the inventory in Azure is required for tasks such as deployment because it provides information that can assist with the orchestration of deployment steps. As the resources mature and the SKUs evolve, the resource APIs are revisioned and the client must keep up.

When the deployment actions need to be expanded, revisioning the API is not sufficient; new capabilities must be added to the resource. One way to do this is to write an extension resource that modifies another resource, such as assigning a role to a resource. In this case, the role assignment is an extension resource type.

Just like any actions taken on Azure resources via Azure Resource Manager templates, a resource must be added to the extension resource template at the proper scope, such as resource group, subscription, management group, or tenant. For example, resource createRgLock 'Microsoft.Authorization/locks@2016-09-01' can be declared to add a lock at the resource group level. A lock prevents actions that can usually be taken on a resource, with provision for overrides. This is sometimes necessary when authoring policies surrounding resources.
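
As a sketch, the same resource-group-level lock can also be applied imperatively with the Az module; the names below are placeholders:

# Apply a delete lock at the resource group scope ('demo-rg' is a placeholder).
New-AzResourceLock -LockName 'createRgLock' `
                   -LockLevel CanNotDelete `
                   -ResourceGroupName 'demo-rg' `
                   -LockNotes 'Prevents accidental deletion of the group and its resources'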

A ‘scope’ property allows an extension resource to target another resource; it specifies the resource to which the extension applies and is a root property of the extension resource. An extension resource provides custom actions around a resource that are not generally available from the resource itself. It is different from a child resource.
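
For instance, a role assignment, itself an extension resource type, can target a single resource through its scope; all identifiers in this sketch are placeholders:

# Assign the Reader role scoped to one storage account rather than the whole
# resource group; the subscription, group, account, and object ids are placeholders.
$scope = '/subscriptions/00000000-0000-0000-0000-000000000000' +
         '/resourceGroups/demo-rg' +
         '/providers/Microsoft.Storage/storageAccounts/demostorage'

New-AzRoleAssignment -ObjectId '11111111-1111-1111-1111-111111111111' `
                     -RoleDefinitionName 'Reader' `
                     -Scope $scope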

A child resource exists only within the context of another resource, and each parent resource accepts only certain resource types as child resource types. For example, the resource type 'Microsoft.Compute/virtualMachineScaleSets/extensions@2021-04-01' can be referred to only within the context of a virtual machine scale set, never outside a VMSS. The hierarchy of parent-child resource types is registered before it can be used. An extension resource, by contrast, extends the capabilities of another resource.

Resources and their extensions can be written only in Bicep and ARM templates. Bicep is a new domain-specific language developed for authoring ARM templates with an easier syntax; it is more concise and provides improved type safety, but it compiles to ARM templates, which remain the de facto standard for declaring and using Azure resources and are supported by the unified Azure Resource Manager. Either JSON or Bicep can be used to author ARM templates and resource deployments, and while JSON is ubiquitous, Bicep can be used only with Resource Manager templates. The Bicep tooling converts Bicep templates into standard JSON templates for ARM resources by a process called transpilation. This conversion happens automatically, but it can also be invoked manually. Bicep's succinctness provides a further incentive, and there is a playground to try it: Bicep Playground 0.4.1-ge2387595c9 (windows.net).
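
With the Bicep tooling installed, the transpilation can be invoked manually; 'main.bicep' is a placeholder file name:

# Transpile a Bicep template into a standard ARM JSON template (produces main.json).
az bicep build --file main.bicep

# Or, with the standalone Bicep CLI:
bicep build main.bicep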

Sunday, October 3, 2021

Azure Functions as natural extensions for existing applications and integrations


Introduction: This article is a continuation of the series of articles starting with the description of the SignalR service, which was followed by a discussion on the Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure Backbone network. Then we started reviewing a more public, internet-facing service such as the Bing API and the benefits it provides when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipelines, and followed up with a brief overview of the Azure services' support for the Kubernetes control plane via the OSBA and the Azure operator. Then we followed with an example of an Azure integration service for Host Integration Server (HIS). We started discussing the Azure FHIR service next, reviewing its search capabilities, followed by its regulatory compliance and security policies. Most recently, we discussed Azure Synapse Link for Cosmos DB. This article is about connecting all Azure functionalities via extensions that do not affect existing deployments. Specifically, we discuss Azure Functions.

Description: 

Azure does not limit resources to being commissioned only on the public cloud control plane. It is quite flexible and extensible in nature, providing vast integration features that allow on-premises resources to be connected to the cloud, resources to be made available to an instance of a Kubernetes control plane, and resources that can span the Apache stack and a variety of ecosystems. Within the public cloud, the services proliferate to well over a hundred different types and even more SKUs. When existing deployments and assets become immovable or untouchable, it can be mind-boggling to find the right home for extra functionality or a module. Azure Functions fills this gap very well without any impact or hindrance to ongoing operations.

For a sample usage of this resource and its implementation, please refer to this catalog and sample code: https://1drv.ms/u/s!Ashlm-Nw-wnWhKYdp9QwpiEacjprGQ?e=1LD6jY. It takes the use case of segregating Azure resources under resource groups, which are within subscriptions, and in turn accounts, on Azure. This hierarchy is very useful for billing and cost management, and since allocations are isolated by subscription, this provides a technique to set a property on those subscriptions so that they can be managed independently from others.
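
A minimal sketch of such a function in PowerShell might look like the following; the HTTP trigger, query parameters, and tag name are assumptions for illustration rather than the linked sample itself:

# run.ps1 for an HTTP-triggered PowerShell Azure Function that stamps a tag on a
# subscription so it can be managed independently of others.
using namespace System.Net

param($Request, $TriggerMetadata)

# Hypothetical inputs: the subscription id and the tag value to apply.
$subscriptionId = $Request.Query.SubscriptionId
$owner          = $Request.Query.Owner

# Merge the tag onto the subscription resource itself.
Update-AzTag -ResourceId "/subscriptions/$subscriptionId" `
             -Tag @{ 'owner' = $owner } `
             -Operation Merge

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = "Tagged subscription $subscriptionId"
})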

An Azure function takes very few resources but encapsulates logic that is specific and required for integration across other existing resources. This provides architects with a low-cost, production-grade option for deployments that are lightweight and ready for mission-critical purposes.

It is also important to recognize where Azure Functions do not apply. The service-bus, message-broker, message-processing architecture with continuous events is a paradigm well suited to IoT traffic. Background processing in heavy online transaction systems can also have multiple Azure Functions as delegates instead of actual services, but it is harder to study the flow across discrete functions than across a service where all the components log to the same destination, and a service represents a system driven by top-down architecture rather than bottom-up convenience. If there is a need to run a user-defined action that is outside the scope of any RP or ARM template, there is now an option to add Functions as part of orchestrated rollout execution. If the resource provider can provide that functionality and manifest via an ARM template, the need to have the logic in its own function is eliminated. Similarly, hundreds of extensions are desirable in certain cases, while they can quickly get out of hand in others.

 


Saturday, October 2, 2021

 

Azure Synapse Link for Cosmos DB ...

Introduction: This article is a continuation of the series of articles starting with the description of the SignalR service, which was followed by a discussion on the Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure Backbone network. Then we started reviewing a more public, internet-facing service such as the Bing API and the benefits it provides when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipelines, and followed up with a brief overview of the Azure services' support for the Kubernetes control plane via the OSBA and the Azure operator. Then we followed with an example of an Azure integration service for Host Integration Server (HIS). We started discussing the Azure FHIR service next, reviewing its search capabilities, followed by its regulatory compliance and security policies. In this article, we discuss Azure Synapse Link for Cosmos DB.

Description: 

The analytics use case for any store is usually understated, yet heavily exercised, because data, no matter its size, is useful only if it is used. Azure Synapse Link for Azure Cosmos DB creates a tight, seamless integration between Azure Cosmos DB and Azure Synapse Analytics, the de facto standard enterprise analytics service. Azure Synapse accelerates insights across data warehouses and big data systems. It brings together the best of the SQL technologies used in data warehousing, the Spark technologies used for big data, pipelines for data integration and ETL/ELT transformations, and deep integration with Power BI, Azure ML, and Cosmos DB. What makes Synapse popular is its support for SQL queries, offered in both serverless and dedicated resource models with predictable performance and costs. It has built-in streaming capabilities to write cloud data to tables, and it also helps query data in stores like Azure Data Lake Storage and Azure Cosmos DB without having to run import tasks. Its integration with Apache Spark eliminates the need to manage clusters and offers fast startup and aggressive autoscaling.

Azure Synapse Link for Azure Cosmos DB is a cloud-native hybrid transactional and analytical processing (HTAP) capability that supports near-real-time analytics over operational data in Azure Cosmos DB. It achieves this without impacting the performance of online transactional processing in Cosmos DB. This is different from the link between Azure Cosmos DB's internal transactional and analytical stores, which is set up out of the box for auto-sync purposes. The cloud-native HTAP connects the whole of Cosmos DB with the Spark/SQL-capable Azure Synapse Analytics. The benefits are huge when it comes to eliminating the import/export or ETL operations necessary to take data out of multiple operational data sources, and to eliminating the use of traditional warehouses entirely for a faster, streamlined, and highly scalable analytics experience. It is, however, not recommended for cases where traditional data warehouse requirements such as high concurrency, workload management, and persistence of aggregates across multiple data sources must be met.
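
As a sketch, the analytical store that Synapse Link reads from can be enabled from PowerShell; the names are placeholders and the Az.CosmosDB parameters shown are assumptions worth verifying against the module version in use:

# Enable the analytical store on an existing Cosmos DB account
# ('demo-rg' and 'demo-cosmos' are placeholders).
Update-AzCosmosDBAccount -ResourceGroupName 'demo-rg' `
                         -Name 'demo-cosmos' `
                         -EnableAnalyticalStorage:$true

# Create a container whose data is mirrored into the analytical store;
# an analytical TTL of -1 retains the analytical copy indefinitely.
New-AzCosmosDBSqlContainer -ResourceGroupName 'demo-rg' `
                           -AccountName 'demo-cosmos' `
                           -DatabaseName 'demo-db' `
                           -Name 'orders' `
                           -PartitionKeyKind Hash `
                           -PartitionKeyPath '/customerId' `
                           -AnalyticalStorageTtl -1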

 

Friday, October 1, 2021

 

<#

# This is a PowerShell script to demonstrate a failover group

#>

 

# Initialize

$primaryServer = "< The name of the primary server >"

$backupServer = "< DesiredBackupServerName >"

$backupLocation = "< a different location than the (existing) primary Server >"

$failoverGroup = "< desired failover group Name >" # Must be globally unique

$database = "< the name of the database >"

$admin = "< the admin username >"

$password = "< the password for the admin >"

$resourceGroup = "< the name of the resource group from the primary server >"

# Create a backup server in the failover region

$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $admin, $(ConvertTo-SecureString -String $password -AsPlainText -Force)

New-AzSqlServer -ResourceGroupName $resourceGroup -ServerName $backupServer -Location $backupLocation -SqlAdministratorCredentials $credential

 

# Create a failover group containing the primary and backup servers

New-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $primaryServer -PartnerServerName $backupServer -FailoverGroupName $failoverGroup -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1

 

# Add the database to the failover group

Get-AzSqlDatabase -ResourceGroupName $resourceGroup -ServerName $primaryServer -DatabaseName $database | Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName $resourceGroup -ServerName $primaryServer -FailoverGroupName $failoverGroup

 

# Verify the failover group properties

Get-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $primaryServer

# Either the primary or the backup server is fine here

 

# Verify the current role of the backup server (should be "secondary")

(Get-AzSqlDatabaseFailoverGroup -FailoverGroupName $failoverGroup -ResourceGroupName $resourceGroup -ServerName $backupServer).ReplicationRole

 

# Initiate a manual failover

Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $backupServer -FailoverGroupName $failoverGroup

 

# Verify the current role again (should be "primary")

 

# Initiate a manual failback

Switch-AzSqlDatabaseFailoverGroup -ResourceGroupName $resourceGroup -ServerName $primaryServer -FailoverGroupName $failoverGroup

 

# Verify the current role again (should be back to "secondary")

(Get-AzSqlDatabaseFailoverGroup -FailoverGroupName $failoverGroup -ResourceGroupName $resourceGroup -ServerName $backupServer).ReplicationRole

 

# Failover group created and tested!

Thursday, September 30, 2021

Some trivia for Azure Public cloud computing

 


The Azure public cloud offers several resources for use in cloud-based solutions for customers. Even experts resort to the documentation on these resources when there are ambiguities or the details escape them. The following is a list of trivia that is generally looked up but goes unnoticed in the beginning.

1.       [ NOTE: This continues from the previous article but the numbering has not been restarted. ]

2.      A policy is a default-allow and explicit-deny system focused on resource properties, both during deployment and for already existing resources. It supports cloud governance and compliance.

3.      Azure Sentinel is useful for responding to security incidents, but internal analysis via logs and metrics is best done by Azure Security Center.

4.      Network traffic can be studied with Azure Network Watcher, but Azure Monitor helps with application outages and SLA investigations.

5.      Filter network traffic with a network security group, which is associated with a subnet. An application security group enables us to group together servers with similar functions.

6.      Azure Migrate helps with migrating compute and databases. It requires permissions on the source instance.

7.      Unlike the Azure SQL resource, the Azure SQL VM IaaS deployment gives full control over the SQL Server instance and best serves migration when server and database permissions are granted on the source instance.

8.      If a database has become slow on a target instance of the SQL Server, leverage the auto-tuning feature to improve performance.

9.      A single Azure SQL instance can host many databases and many write regions; there is no need to provision a dedicated instance for every region or application. This strategy is different from that of Key Vault.

10.  When many virtual machines are deployed to a subscription, they may need to have prerequisites installed. This is easy to specify with a policy.

11.  Only one storage account can be bound to a Log Analytics workspace. Logs from many places can flow into the account, but there must be only one account.

12.  Having many subscriptions within the account helps with cost management. Resource groups help with the segregation of resources just like tags, but billing cannot be based on tags; it must be at the subscription level. So, tags can be used for querying but cannot be relied upon for costing.

13.  Accidental deletion of resources can be prevented by applying locks on higher-level containers such as resource groups and subscriptions. If any one of them needs to be exempted, this can be based on policy.

14.  Role-based access control (RBAC) facilitates the principle of least privilege. A higher-privilege role such as the domain administrator need not be used to work with AD Connect or for deployment purposes; a deployment operator is sufficient in this regard.

15.  Role-based access control also enforces the most restrictive permission set, so a general ability to read can be taken away for specific cases.

16.  Role-based access control allows dynamic assignment of users to roles. So, when a person leaves one organization for another, a scheduled background job can revoke the privileges and perform cleanup (see the sketch after this list).

17.  When it comes to securing Key Vault, access policies are the technique most commonly resorted to, but role-based access control is less maintenance. Besides, there is no need to keep adding and removing policies from the Key Vault, which end up being transient even if they are persisted. Role-based access control for Key Vault requires zero touch.

These are some of the details that affect cloud capabilities planning.
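
As a sketch of the scheduled cleanup mentioned in item 16, a background job might revoke a departed user's role assignments; the sign-in name is a placeholder:

# Revoke every role assignment held by a departed user.
Get-AzRoleAssignment -SignInName 'departed.user@contoso.com' |
    ForEach-Object {
        Remove-AzRoleAssignment -ObjectId $_.ObjectId `
                                -RoleDefinitionName $_.RoleDefinitionName `
                                -Scope $_.Scope
    }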