Tuesday, October 12, 2021

 Some trivia for Azure public cloud (continued from previous article)

Operational requirements for hosting solutions on Azure public cloud:

·        Applications can be installed in virtual machine scale sets with an Azure Resource Manager template. A Custom Script Extension can be added to the template, and the location of the script can be passed in as a parameter. Alternatively, the script can be fetched through HTTP calls. (A sketch of the extension fragment appears after this list.)

·        A site-to-site VPN gateway can be configured between Azure and on-premises networks and provides better continuity for workloads in a hybrid cloud setup with Azure. A Basic load balancer front-end IP address cannot be reached over virtual network peering across regions; support for the Basic load balancer exists only within the same region, but gateway transit can be allowed in globally peered networks.

·        An Azure AD Connect server can be set up with either pass-through authentication or password hash authentication. The latter continues to work even when the on-premises environment goes down. Connections must be allowed to the *.msappproxy.net URLs and the Azure datacenter IP ranges.

·        Azure Sentinel alerts can be sent to email automatically. This requires a playbook created with the designer user interface and does not require code to be written. Completing this step is necessary because Sentinel has no built-in email notification feature.

·        Inbound and outbound network traffic for a virtual network subnet can be filtered with a network security group (NSG) using the Azure portal. Security rules are applied to the resources deployed in the subnet. Both firewall rules and NSGs support restricting and allowing traffic.

·        Servers running on Hyper-V can be discovered and assessed via Azure Migrate. An Azure account, the Hyper-V hosts, and the Azure Migrate appliance are required for this purpose.

·        Diagnostic settings can be authored to send platform logs and metrics to different destinations. Logs include the Azure activity log and resource logs. Platform metrics are collected by default and stored in the Azure Monitor metrics database. Each Azure resource requires its own diagnostic settings, and a single setting can define no more than one of each of the destinations. The available categories vary for different resource types. The destinations for logs can include a Log Analytics workspace, Event Hubs, and Azure Storage. Metrics are sent automatically to Azure Monitor Metrics; optionally, a diagnostic setting can also send metrics to Azure Monitor Logs for analysis with other monitoring data using log queries. Multi-dimensional metrics (MDM) are not supported for export and must be flattened. (A sketch of creating a diagnostic setting through the REST API appears after this list.)

·        Data collection rules (DCRs) specify what data should be collected by Azure Monitor, how it should be transformed, and where it should be sent. Input sources include Azure Monitor agents running on virtual machines, virtual machine scale sets, and Azure Arc-enabled servers. A rule includes data sources, streams (each a unique handle that describes the schema of one type of data), destinations where the data should be sent, and data flows that define which streams go to which destinations.

·        Automatic tuning in Azure SQL Database and Azure SQL Managed Instance helps tune for peak performance and stable workloads. There is support for continuous performance tuning based on AI and machine learning.

·        Limited access to Azure Storage resources can be granted using shared access signatures (SAS). With a SAS there is granular control over how a client can access data: which resources it can access, what permissions it has on those resources, and how long the SAS is valid. There are three types of SAS - user delegation SAS, service SAS, and account SAS. The SAS token is generated on the client side using one of the Azure Storage libraries; if it is leaked, it can be used by anyone, but only until it expires. (A sketch of generating a service SAS appears after this list.)
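
The bullet on installing applications in virtual machine scale sets can be illustrated with the fragment below. This is a minimal sketch, assuming a Linux scale set, of the Custom Script Extension entry that might be placed under the scale set's virtualMachineProfile/extensionProfile in a template; the script URI and command are placeholders that would normally be supplied as template parameters.

using System;
using System.Text.Json;

// Shape of a Custom Script Extension entry for a Linux scale set (placeholders throughout).
var extension = new
{
    name = "installApp",
    properties = new
    {
        publisher = "Microsoft.Azure.Extensions",
        type = "CustomScript",
        typeHandlerVersion = "2.1",
        autoUpgradeMinorVersion = true,
        settings = new
        {
            fileUris = new[] { "https://<storage-account>.blob.core.windows.net/scripts/install.sh" },
            commandToExecute = "bash install.sh"
        }
    }
};
// Print the JSON fragment that would be merged into the scale set template.
Console.WriteLine(JsonSerializer.Serialize(extension, new JsonSerializerOptions { WriteIndented = true }));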
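
The diagnostic-settings bullet above can be sketched as a call to the Azure Monitor REST API. This is a minimal sketch, assuming the Azure.Identity package for authentication; the resource ID, workspace ID, setting name, log and metric categories, and api-version are placeholders that depend on the target resource type.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Identity;

class DiagnosticSettingSketch
{
    public static async Task CreateAsync()
    {
        // Acquire a token for Azure Resource Manager.
        var credential = new DefaultAzureCredential();
        AccessToken token = await credential.GetTokenAsync(
            new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

        // Placeholder resource: the blob service of a storage account.
        string resourceId = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default";

        // At most one destination of each kind; here only a Log Analytics workspace is used.
        string body = @"{ ""properties"": {
            ""workspaceId"": ""/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace>"",
            ""logs"": [ { ""category"": ""StorageRead"", ""enabled"": true } ],
            ""metrics"": [ { ""category"": ""Transaction"", ""enabled"": true } ] } }";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);
        var response = await client.PutAsync(
            $"https://management.azure.com{resourceId}/providers/Microsoft.Insights/diagnosticSettings/send-to-workspace?api-version=2021-05-01-preview",
            new StringContent(body, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}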
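
For the shared access signature bullet, the following is a minimal sketch of generating a service SAS for a single blob on the client side, assuming the Azure.Storage.Blobs package; the account name, key, container, and blob names are placeholders.

using System;
using Azure.Storage;
using Azure.Storage.Sas;

class SasSketch
{
    public static string CreateBlobSas()
    {
        // The account key stays with the issuer; only the SAS token is handed out to clients.
        var credential = new StorageSharedKeyCredential("<account-name>", "<account-key>");
        var sasBuilder = new BlobSasBuilder
        {
            BlobContainerName = "mycontainer",
            BlobName = "myblob.txt",
            Resource = "b",                                  // "b" targets a single blob
            ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)    // bounds the damage if the token leaks
        };
        sasBuilder.SetPermissions(BlobSasPermissions.Read);  // read-only access to this one blob
        // The returned query string is appended to the blob URL and presented by the client.
        return sasBuilder.ToSasQueryParameters(credential).ToString();
    }
}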


Monday, October 11, 2021

 Some trivia for Azure public cloud (continued from previous article)

Operational requirements for hosting solutions on Azure public cloud:

·        Disaster recovery – Azure Site Recovery contributes to application-level protection and recovery. It provides near-synchronous replication for any workload, for single or multi-tier applications, and works with Active Directory and SQL Server replication. It protects SharePoint, Dynamics AX, and Remote Desktop Services. It has flexible recovery plans with a rich automation library. One of its biggest use cases is replicating VMs to Azure, and it provides end-to-end recovery plans.

·        Data redundancy – Azure Storage services come with built-in redundancy, which also improves the durability of the existing blob services. Geo-redundant storage (GRS) copies data synchronously three times within a single physical location in the primary region using LRS and then replicates it asynchronously to the secondary region. Geo-zone-redundant storage (GZRS) copies data synchronously across three availability zones in the primary region using ZRS before the asynchronous copy to the secondary region. Read-access GRS (RA-GRS) additionally allows the replicated data in the secondary region to be read.

·        A blob in the archive tier can be rehydrated to either the hot or the cool tier. There are two options for rehydrating a blob stored in the archive tier: one can copy the archived blob to an online tier by referencing the blob or its URL, or one can change the blob's access tier to an online tier, rehydrating the archived blob to hot or cool. Rehydration might take several hours, but several blobs can be rehydrated concurrently, and a rehydration priority can also be set. (A sketch of the tier-change option appears after this list.)

·        Storage costs can be optimized by managing the data lifecycle. Azure Storage lifecycle management offers a rule-based policy that can be used to transition blob data to the appropriate access tiers or to set an expiration. The lifecycle policy rule definition has attributes for filters and for actions applied to base blobs, snapshots, and versions. (A sketch of the rule shape appears after this list.)

·        Azure Monitor is a full-stack monitoring service. Many Azure services use it to collect and analyze monitoring data. Blob storage collects the same kinds of monitoring data as other Azure resources. Platform metrics and the activity log are collected automatically.

·        Virtual network peering allows us to connect virtual networks in the same region or across regions, as in the case of global VNet peering, through the Azure backbone network. When the peering is set up, the settings govern traffic to the remote virtual network, traffic forwarded from the remote virtual network, and use of the remote virtual network gateway or Route Server; traffic to the peered virtual network is allowed by default.

·        Transaction processing in Azure is not on by default. A transaction locks and logs records so that others cannot use them; it can be bound to partitions or enabled as a distributed transaction with a two-phase commit protocol. Distributed transaction processing requires communication steps between the transaction coordinator and every resource manager, which are costly inside an Azure datacenter. It does not scale as the number of resources grows: with roughly four network calls per participating resource, 2 resources take 8 network calls, 4 resources take 16, and 100 resources take 400. Besides, the datacenter contains thousands of machines, failures are expected, and the system must deal with network partitions. Waiting for responses from all resource managers adds costly communication overhead.
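
The blob rehydration bullet above maps to a tier change on the blob client. This is a minimal sketch, assuming the Azure.Storage.Blobs package; the connection string, container, and blob names are placeholders.

using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class RehydrateSketch
{
    public static void Rehydrate()
    {
        var blobClient = new BlobClient("<connection-string>", "mycontainer", "archived-blob.txt");
        // Move the blob from the archive tier back to the hot tier. Standard priority can take
        // several hours; high priority completes faster for smaller blobs.
        blobClient.SetAccessTier(AccessTier.Hot, rehydratePriority: RehydratePriority.Standard);
    }
}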
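
The lifecycle management bullet above describes a rule made of filters and actions on base blobs. This is a minimal sketch of that rule shape, with the prefix and day counts as placeholders; the JSON printed here is the kind of definition the lifecycle policy accepts.

using System;
using System.Text.Json;

// A rule that tiers blobs under a prefix to cool and archive, then expires them.
var rule = new
{
    enabled = true,
    name = "move-and-expire-logs",
    type = "Lifecycle",
    definition = new
    {
        filters = new { blobTypes = new[] { "blockBlob" }, prefixMatch = new[] { "logs/" } },
        actions = new
        {
            baseBlob = new
            {
                tierToCool = new { daysAfterModificationGreaterThan = 30 },
                tierToArchive = new { daysAfterModificationGreaterThan = 90 },
                delete = new { daysAfterModificationGreaterThan = 365 }
            }
        }
    }
};
Console.WriteLine(JsonSerializer.Serialize(rule, new JsonSerializerOptions { WriteIndented = true }));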

Saturday, October 9, 2021

This is a continuation of some trivia from the Azure Public cloud.

 


 

Introduction:

Many learning programs and tutorials on the topic of solution implementations using the Azure public cloud include pertinent questions based on case studies. While they are dressed up in different criteria, most of them probe some of the fundamentals of design and architecture along with the typical use and nuance of the cloud services. These services all conform to the Azure Resource Manager, provide high availability and performance guarantees, and come with significant cost savings and management features. With well over a hundred services in its portfolio, the Azure public cloud poses several choices for solution architects, and they must know their tools to sharpen their blades. There does not seem to be a question bank that collects all the knowledge-base questions about their use that these experts can go through in one session. On the other hand, the tenets underlying those questionnaires are easy to relate to and remember. This article continues to serve that purpose along with some of the earlier articles that were written and included in the reference.

While there is no particular place to start, the cloud would not be significant without compute, and the most important aspect of compute is virtual machines. There is tremendous documentation on virtual machines and scale sets. Their disks are maintained in the VHD and VHDX formats; the difference is that the latter is supported only on newer versions of Windows and allows up to 64 TB of storage capacity. It also supports live resizing, and its 4 KB logical sector size gives better data alignment. Hyper-V supports both formats, and administrator mode is required for both the Hyper-V Manager and the PowerShell scripts. One format can be converted to the other, disks can be merged, and they can be mounted and dismounted independently.

Azure Site Recovery can be used to replicate SQL Server to Azure. Although there are many cases and choices for hosting a relational data store, the most convenient way with full parity is to host the server on a VM. A managed database instance deployed to the cloud does not give the same amount of control as one hosted on a VM, but a managed database serves better in the long run and does away with the maintenance and performance tuning that accrue otherwise. It is also unimaginable to host a Master Data Management catalog on a single virtual machine at this time. If a database has become slow on a target instance of SQL Server, leverage the automatic tuning feature to improve performance. A single Azure SQL instance can host many databases; there is no need to provision a dedicated instance for every region or application. This strategy is different from that of Key Vault, which can be provisioned as one per application that uses it.

Data, data stores, and analytics have a huge surface area to cover, but one of the most useful sources of operational data for any deployment is the logs pertaining to the services. These do not need to be set up per instance, but the solutions that are built over the services must have an account for their storage. Only one storage account can be bound to the Log Analytics workspace; logs from many places can flow to the account, but there must be only one account. Use AzCopy with cron jobs for the high-rate data transfers that are typical for logs; this cuts costs compared to Azure Data Factory and Azure Data Lake resources. Create diagnostic settings to send platform logs and metrics to different destinations. A single diagnostic setting can define no more than one of each destination type, and a resource can have up to five diagnostic settings. If metrics must flow into logs, read them with the Azure Monitor Metrics REST API and import them into Azure Monitor Logs using the Azure Monitor Data Collector API, as sketched below.
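
Here is a minimal sketch of that import step, posting a batch of JSON records to Azure Monitor Logs through the HTTP Data Collector API; the workspace ID, shared key, and the MyMetrics log type are placeholders (the records land in a MyMetrics_CL custom table).

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Cryptography;
using System.Text;
using System.Threading.Tasks;

class DataCollectorSketch
{
    public static async Task PostAsync(string workspaceId, string sharedKey, string json)
    {
        // Sign the request with the workspace shared key (HMAC-SHA256 over the canonical string).
        string date = DateTime.UtcNow.ToString("r");   // RFC 1123 date for the x-ms-date header
        string stringToSign = $"POST\n{Encoding.UTF8.GetByteCount(json)}\napplication/json\nx-ms-date:{date}\n/api/logs";
        using var hmac = new HMACSHA256(Convert.FromBase64String(sharedKey));
        string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Post,
            $"https://{workspaceId}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01");
        request.Headers.TryAddWithoutValidation("Authorization", $"SharedKey {workspaceId}:{signature}");
        request.Headers.Add("Log-Type", "MyMetrics");  // becomes the custom log table name
        request.Headers.Add("x-ms-date", date);
        var content = new StringContent(json, Encoding.UTF8);
        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
        request.Content = content;
        (await client.SendAsync(request)).EnsureSuccessStatusCode();
    }
}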

There is a difference between fad and fact even for services that are dedicated to a specific purpose. Serverless computing, for example, is great when the logic is small, isolated, and needs to scale, but the choices are not that clear when functions can proliferate in an uncontrolled manner. A service is better suited to incubating features because the additions are not only incremental, they usually conform to top-down planning rather than pander to bottom-up convenience. Then there is the cost of ownership for serverless computing, which does not factor into the cost advisor because it is usually borne on the user side. The cost of maintaining logic hosted on serverless computing is significantly more than that of well-designed services with modular components that also have the convenience of being hosted in the cloud.

 

Reference: https://1drv.ms/w/s!Ashlm-Nw-wnWhKYBNRZosAThWjmojg?e=d52IAU

 

 

Friday, October 8, 2021

 

The cost of relays:

Introduction: Azure Functions provide the convenience of extending existing applications with event-based programming in a serverless environment. They improve application modularity and maintenance. They react to critical events. When there are many requests, the functions can scale instances as necessary and once the traffic has died down, they can scale down. All the compute resources come from Azure Functions and as a developer of Azure functions, there is no need to be concerned about infrastructure and operations.  When the function is written, a hosting plan must be chosen. The Notification Hub Output Binding for Azure functions enables us to send push notifications by using Azure Notification Hubs. Azure functions support output bindings for Notification Hubs. There are many languages to choose from for writing the Azure function, but in all these choices, the Notification Hub must be configured for the Platform Notification Service. We can get the push notifications in the client application from Notification Hub.

With the help of requestIds in the request and response headers, it is easy to correlate callers and callees and their logs. When the API of a backend service cannot be consumed by clients directly, an Azure Function can serve as a proxy, translating the requests back and forth in a way suitable to both parties. It is also possible to chain numerous functions such that the output of one forms the input of another. This sequence is especially useful for data transformations that must be independent of the source and destination but do not require any storage. A sketch of such a proxy function that propagates a request id follows.
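
This is a minimal sketch of a proxy-style function that forwards a request and propagates a correlation id, assuming an HTTP-triggered .NET function; the backend URL and the x-ms-request-id header name are illustrative placeholders.

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class ProxyFunction
{
    private static readonly HttpClient client = new HttpClient();

    [FunctionName("ProxyFunction")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // Reuse the caller's request id if present, otherwise mint a new one,
        // so that caller, function, and backend logs can be correlated.
        string requestId = req.Headers["x-ms-request-id"].ToString();
        if (string.IsNullOrEmpty(requestId)) requestId = Guid.NewGuid().ToString();
        log.LogInformation("Forwarding request {requestId} to backend", requestId);

        var backendRequest = new HttpRequestMessage(HttpMethod.Get, "https://backend.example.com/api/data");
        backendRequest.Headers.Add("x-ms-request-id", requestId);
        HttpResponseMessage backendResponse = await client.SendAsync(backendRequest);
        string body = await backendResponse.Content.ReadAsStringAsync();

        // Relay the backend response, translated as needed, back to the caller.
        return new ContentResult { Content = body, ContentType = "application/json", StatusCode = (int)backendResponse.StatusCode };
    }
}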

One of the toughest problems that arise from proxying or chaining is that end-to-end fidelity and visibility into the flow of data are lost unless the logs go to the same destination and are correlated with an identifier. It is harder to enforce consistency across the functions, especially given that they may be implemented differently, at different times, and by different authors. Yet they prove critical for DevOps and short-release-cycle functionality, such as infrastructure needs and Information Technology perspectives.

The total cost of ownership for Azure Functions is not a constant, even though it can be calculated with the help of the extensive features in the Azure billing and cost management dashboard. While controls can be put in place so that a budget is not exceeded, it is harder to estimate the business value of the requests rejected when the threshold is exceeded. Besides, these continue to be operational costs and not the cost of software development. The serverless paradigm lowers the cost of maintaining a single function, but it does not factor in the overall expense associated with the proliferation of different functions, which tend to multiply based on variations in data and processing. When existing functions are left behind and new ones are added, the old ones continue to incur costs unless they are sunset. A regression model can be attempted that depends on multiple variables, not all of which are linear.

While cost can be reduced for different variables, the overall cost is cumulative and increases over time.

Thursday, October 7, 2021

 

Azure Functions:

Introduction: This article is a continuation of the series of articles starting with the description of the SignalR service, which was followed by a discussion of the Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure backbone network. Then we started reviewing a more public, internet-facing service such as the Bing API and the benefits it provides when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipelines, and followed up with a brief overview of the Azure services' support for the Kubernetes control plane via the OSBA and the Azure operator. Then we followed it with an example of an Azure integration service for Host Integration Server (HIS). We started discussing the Azure FHIR service next, reviewing its search capabilities followed by its regulatory compliance and security policies. Most recently, we discussed Azure Synapse Link for Cosmos DB. This article talks about connecting all Azure functionalities via extensions that do not affect existing deployments, with the help of Azure Functions. Specifically, we discuss sending push notifications from an Azure Function using a Notification Hub output binding.

Description: The Notification Hub Output Binding for Azure functions enables us to send push notifications by using Azure Notification Hubs. Azure functions support output bindings for Notification Hubs. There are many languages to choose from for writing the Azure function, but in all these choices, the Notification Hub must be configured for the Platform Notification Service. We can get the push notifications in the client application from Notification Hub.

Templates can be used with notifications, enabling a client application to specify the exact format of the notifications it wants to receive. This helps with a platform-agnostic backend, personalized notifications, client-version independence, and easy localization. A notification sent against a template registration contains a message placeholder in the template.

For example:

using System;
using System.Threading.Tasks;
using System.Collections.Generic;

public static void Run(string myQueueItem, out IDictionary<string, string> notification, TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: {myQueueItem}");
    notification = GetTemplateProperties(myQueueItem);
}

private static IDictionary<string, string> GetTemplateProperties(string message)
{
    Dictionary<string, string> templateProperties = new Dictionary<string, string>();
    templateProperties["message"] = message;
    return templateProperties;
}

Wednesday, October 6, 2021

 Azure Functions:

Introduction: This article is a continuation of the series of articles starting with the description of the SignalR service, which was followed by a discussion of the Azure Gateway service, Azure Private Link, and Azure Private Endpoint and the benefit of diverting traffic to the Azure backbone network. Then we started reviewing a more public, internet-facing service such as the Bing API and the benefits it provides when used together with Azure Cognitive Services. We then discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipelines, and followed up with a brief overview of the Azure services' support for the Kubernetes control plane via the OSBA and the Azure operator. Then we followed it with an example of an Azure integration service for Host Integration Server (HIS). We started discussing the Azure FHIR service next, reviewing its search capabilities followed by its regulatory compliance and security policies. Most recently, we discussed Azure Synapse Link for Cosmos DB. This article is about connecting all Azure functionalities via extensions that do not affect existing deployments. Specifically, we discuss Azure Functions.

Description: Azure Functions serve compute on demand. When there are many small blocks of code to run rather than a large monolithic code base, this improves application modularity and maintenance. These code blocks are called functions. They react to critical events. When there are many requests, the functions can scale out instances as necessary, and once the traffic has died down, they scale back in. All the compute resources come from Azure Functions, and as a developer of Azure Functions, there is no need to be concerned about infrastructure and operations. When the function is written, a hosting plan must be chosen. There are three basic hosting plans available for Azure Functions – the Consumption plan, the Premium plan, and the Dedicated (App Service) plan. The hosting plan decides how the function is scaled, what resources are available to each function app instance, and the support for connectivity methods such as Azure virtual network connectivity. Azure Functions can even be Kubernetes-based. Such a deployment is made up of two key components – a function runtime and a scale controller. The function runtime runs and executes the code. The scale controller monitors the rate of events that are targeting the functions. When the functions are hosted on Kubernetes, Kubernetes-based Event Driven Autoscaling (KEDA) enables metrics to be published so that the Kubernetes autoscaler can scale from 0 to n instances. The function runtime is hosted in a Docker container. KEDA supports Azure Functions triggers in the form of Azure Storage queues, Azure Service Bus, Azure Event Hubs, Apache Kafka, and RabbitMQ queues. HTTP triggers are not directly managed by KEDA.
Visual Studio provides a template for writing Azure Functions. It introduces a host.json file that allows specifying the settings for the host, including logging and Application Insights settings such as samplingSettings. Many of the host settings determine infrastructure requirements from the logic in the Azure Function, and judicious use of these settings promotes the health and maintenance of the function. The corresponding Azure Functions project file declares the setting for AzureFunctionsVersion. An HTTP-triggered Azure Function is implemented with a method signature such as the following (a minimal body is shown to complete the example):

public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");
    return new OkObjectResult(await new StreamReader(req.Body).ReadToEndAsync());
}

Sample code: https://1drv.ms/u/s!Ashlm-Nw-wnWhKYdp9QwpiEacjprGQ?e=1LD6jY