Saturday, October 9, 2021

This is a continuation of some trivia about the Azure public cloud.

Introduction:

Many learning programs and tutorials on solution implementation with the Azure public cloud include pertinent questions framed as case studies. While they are dressed up in different criteria, most of them probe the fundamentals of design and architecture along with the typical use and nuances of the cloud services. These services all conform to the Azure Resource Manager, provide high availability and performance guarantees, and come with significant cost savings and management features. With well over a hundred services in its portfolio, the Azure public cloud presents many choices to solution architects, and they must know their tools to sharpen their blades. There does not seem to be a question bank that collects all the knowledge-base questions these experts could go through in one session. On the other hand, the tenets underlying those questionnaires are easy to relate to and remember. This article continues to serve that purpose, along with some of the earlier articles that are included in the reference.

While there is no particular place to start, the cloud would not be significant without compute, and the most important aspect of compute is the virtual machine. There is tremendous documentation on virtual machines and scale sets. Their disks are maintained in the VHD and VHDX formats; the difference is that the latter is supported only on newer versions of Windows and allows up to 64 TB of storage capacity. It also supports live resizing and a 4 KB logical sector size for better data alignment. Hyper-V supports both formats, and administrator privileges are required both for the Hyper-V Manager and for the PowerShell cmdlets. A disk can be converted from one format to the other, differencing disks can be merged, and disks can be mounted and dismounted independently of any virtual machine.
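These operations map to the Hyper-V PowerShell cmdlets. A minimal sketch follows, assuming an elevated PowerShell session and hypothetical disk paths:

```powershell
# Convert a legacy VHD to the newer VHDX format (the source file is kept).
Convert-VHD -Path 'D:\Disks\data.vhd' -DestinationPath 'D:\Disks\data.vhdx'

# Resize the VHDX; the format allows capacities of up to 64 TB.
Resize-VHD -Path 'D:\Disks\data.vhdx' -SizeBytes 1TB

# Merge a differencing disk into its parent.
Merge-VHD -Path 'D:\Disks\child.avhdx' -DestinationPath 'D:\Disks\parent.vhdx'

# Mount and dismount a virtual disk independently of any virtual machine.
Mount-VHD -Path 'D:\Disks\data.vhdx'
Dismount-VHD -Path 'D:\Disks\data.vhdx'
```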

Azure Site Recovery can be used to replicate SQL Server to Azure. Although there are many cases and choices for hosting a relational data store, the most convenient way to get full feature parity is to host the server on a VM. A managed database instance deployed to the cloud does not give the same degree of control as one hosted on a VM, but managed databases serve better in the long run and do away with the maintenance and performance tuning that accrues otherwise. It is also unimaginable to host a Master Data Management catalog on a single virtual machine at this time. If a database has become slow on a target instance of SQL Server, leverage the auto-tuning feature to improve performance. A single Azure SQL instance can host many databases and many write regions, so there is no need to provision a dedicated instance for every region or application. This strategy differs from that of Key Vault, which can be provisioned as liberally as one per consumer.
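As a minimal sketch of enabling auto-tuning, the Az.Sql PowerShell module can toggle an advisor's auto-execute status per database; the resource names below are hypothetical placeholders, and the same change can also be made in the portal or with T-SQL:

```powershell
# Sign in, then enable the FORCE_LAST_GOOD_PLAN auto-tuning advisor on one
# database; all resource names here are placeholders.
Connect-AzAccount

Set-AzSqlDatabaseAdvisorAutoExecuteStatus `
    -ResourceGroupName 'contoso-rg' `
    -ServerName 'contoso-sqlserver' `
    -DatabaseName 'contoso-db' `
    -AdvisorName 'ForceLastGoodPlan' `
    -AutoExecuteStatus Enabled
```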

Data, data stores, and analytics have a huge surface area to cover, but one of the most useful sources of operational data for any deployment is the logs pertaining to the services. These do not need to be set up per instance, but the solutions built over the services must have an account for their storage. Only one storage account can be bound to a Log Analytics workspace: logs from many places can flow to the account, but there must be only one account. Use AzCopy with cron jobs for the high-rate data transfers that are typical of logs; this cuts costs compared to Azure Data Factory and Azure Data Lake resources. Create diagnostic settings to send platform logs and metrics to different destinations. A single diagnostic setting can define no more than one of each destination type, and a resource can have up to five diagnostic settings. If metrics must flow into logs, leverage the Azure Monitor Metrics REST API and import them into Azure Monitor Logs using the Azure Monitor Data Collector API.
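As an illustration of the AzCopy route, here is a minimal sketch of a recurring log upload; the account name, container, and SAS token are hypothetical placeholders, and the script would be invoked from cron on Linux or Task Scheduler on Windows:

```powershell
# Source directory of application logs and a destination container URL with a
# SAS token; both values below are placeholders.
$source      = '/var/log/myapp/*'
$destination = 'https://mylogsaccount.blob.core.windows.net/logs?<SAS-token>'

# Recursively copy the log files; --overwrite=false skips blobs uploaded by a
# previous run, which keeps repeated invocations cheap.
azcopy copy $source $destination --recursive --overwrite=false
```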

There is a difference between fad and fact even for services that are dedicated to a specific purpose. Serverless computing, for example, is great when the logic is small, isolated, and needs to scale, but the choice is not so clear when functions can proliferate in an uncontrolled manner. A service is ideal for incubating features because the additions are not only incremental, they usually conform to top-down planning rather than pander to bottom-up convenience. Then there is the cost of ownership for serverless computing, which does not factor into the cost advisor because it usually falls on the user side. The cost of maintaining logic hosted on serverless computing is significantly more than that of well-designed services with modular components, which also have the convenience of being hosted in the cloud.

Reference: https://1drv.ms/w/s!Ashlm-Nw-wnWhKYBNRZosAThWjmojg?e=d52IAU