Sunday, January 9, 2022

 

This is a continuation of a series of articles on the operational engineering aspects of the Azure public cloud. The most recent discussion covered Azure Data Lake, a fully fledged, generally available service that provides Service Level Agreements comparable to others in its category.

Monitoring is a critical aspect of any cloud service, whether internal or customer facing. Metrics and alerts are part of the monitoring dashboard.

Each resource provides metrics to monitor specific aspects of its operations. These metrics can be viewed with the Azure Monitor service or explored and plotted with the Azure Monitor Metrics Explorer. For an Azure DNS zone, the metrics include QueryVolume, RecordSetCount and RecordSetCapacityUtilization. The last is a percentage while the first two are counts. QueryVolume is the sum of all queries received over a period; it can be viewed by browsing the Metrics Explorer, scoping down to the resource and selecting the metric with Sum as the aggregation. RecordSetCount shows the number of record sets in Azure DNS for the zone; all the record sets are counted and the aggregation is the maximum. RecordSetCapacityUtilization shows the percentage used of the record set capacity of the zone; each zone has a record set limit that defines the maximum number of record sets allowed, and the aggregation type is Maximum.
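As a minimal sketch, the same metrics can also be pulled programmatically, for example with the azure-monitor-query Python package. The subscription id, resource group and zone name below are placeholders, not values from this article.

# Sketch: read Azure DNS zone metrics with the azure-monitor-query package.
# The subscription id, resource group and zone name are placeholders.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

zone_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/example-rg/providers/Microsoft.Network/dnszones/example.com"
)

client = MetricsQueryClient(DefaultAzureCredential())

response = client.query_resource(
    zone_id,
    metric_names=["QueryVolume", "RecordSetCount", "RecordSetCapacityUtilization"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    # Request both aggregations; a given metric may only populate one of them.
    aggregations=[MetricAggregationType.TOTAL, MetricAggregationType.MAXIMUM],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total, point.maximum)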

Resource metrics can be used to raise alerts. Alerts can be configured from the Monitor page in the Azure portal and must be scoped to a resource, which is the DNS zone in this case. The signal logic is configured by selecting a signal and setting the threshold and the frequency of evaluation for the metric.
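The same alert rule could be created programmatically; the following is a sketch assuming the azure-mgmt-monitor Python package, with placeholder resource names and an assumed threshold of 80 percent.

# Sketch: a metric alert on RecordSetCapacityUtilization using azure-mgmt-monitor.
# Subscription, resource group, zone and threshold are placeholder assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "00000000-0000-0000-0000-000000000000"
zone_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/example-rg"
    "/providers/Microsoft.Network/dnszones/example.com"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

alert = MetricAlertResource(
    location="global",
    description="RecordSet capacity above 80 percent",
    severity=2,
    enabled=True,
    scopes=[zone_id],                # the alert is scoped to the DNS zone
    evaluation_frequency="PT5M",     # how often the signal is evaluated
    window_size="PT15M",             # lookback window for the aggregation
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="capacity",
                metric_name="RecordSetCapacityUtilization",
                operator="GreaterThan",
                threshold=80,
                time_aggregation="Maximum",
            )
        ]
    ),
    actions=[],  # action groups (email, webhook, etc.) would be listed here
)

client.metric_alerts.create_or_update("example-rg", "dns-capacity-alert", alert)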

Continuous monitoring of an API is also possible via synthetic monitoring, which provides proactive visibility into API issues before customers find them. It is automated probing that validates the build-out of deployments, monitors a service or a mission-critical scenario independently of the service deployment cycle, and tests the availability of dependencies. It ensures end-to-end coverage of specific scenarios and can even validate the response body, not just the status code and headers. By exercising all the properties of a web request and its response, as well as sequences of requests, the monitoring logic begins to articulate the business scenarios that must remain available. Synthetic monitoring is therefore not just active monitoring of a service; it produces a set of business assets that shoulder part of the burden of assuring business continuity.
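A minimal sketch of such a probe, written here in Python with the requests package, could look like the following; the endpoint and the expected payload fields are assumptions for illustration, not part of any particular service.

# Sketch: a synthetic probe that runs a sequence of requests and validates
# the response body, not just the status code. BASE_URL and the expected
# fields are placeholder assumptions.
import requests

BASE_URL = "https://api.example.com"

def probe_order_scenario() -> None:
    session = requests.Session()

    # Step 1: the health endpoint should answer quickly and report "healthy".
    health = session.get(f"{BASE_URL}/health", timeout=5)
    health.raise_for_status()
    assert health.json().get("status") == "healthy", "unexpected health payload"

    # Step 2: a representative business call; check the body, not just the 200.
    orders = session.get(f"{BASE_URL}/orders", params={"top": 1}, timeout=10)
    orders.raise_for_status()
    body = orders.json()
    assert "value" in body and isinstance(body["value"], list), "missing order list"

if __name__ == "__main__":
    probe_order_scenario()
    print("synthetic probe passed")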

The steps to set up synthetic monitoring include onboarding, provisioning and deployment. Onboarding is required to isolate all the data structures and definitions specific to the customer, which are referenced by an account id. Provisioning is the setup of all the Azure resources necessary to execute the logic. Deployment of the logic involves both the code and the configuration; the code is a .NET assembly and the configuration is a predefined JSON. The configuration can specify more than one region to deploy to, and the regions can change from one deployment of the same logic to the next.
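The exact schema of that predefined JSON is not reproduced here, but a hypothetical configuration, sketched as a Python dictionary with assumed field names, could look like this:

# Sketch: a hypothetical deployment configuration for the monitoring logic.
# The field names are assumptions for illustration, not the actual schema.
import json

deployment_config = {
    "accountId": "contoso-monitoring",          # isolates customer-specific definitions
    "assembly": "Contoso.SyntheticProbes.dll",  # the .NET assembly carrying the probe logic
    "regions": ["eastus", "westeurope"],        # more than one region can be targeted
    "schedule": "PT5M",                         # how often the probes run
}

print(json.dumps(deployment_config, indent=2))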

Together, active and passive monitoring complete the set of probes needed to ensure the smooth running of the services.

 
