Sunday, March 13, 2022

 This is a continuation of a series of articles on Azure services from an operational engineering perspective, the most recent being an introduction to Azure Functions Core Tools. This article discusses Azure Functions best practices.

When we want code to be triggered by events, Azure Functions become very useful because they provide a compute-on-demand experience. They extend the existing Azure App Service platform with capabilities to run code triggered by events occurring in Azure, in third-party services, and in on-premises systems. Functions can be built to be reactive, but they are also useful for processing data from various data sources. Functions are hosted in environments that, like VMs, are susceptible to faults such as restarts, moves, or upgrades. Functions are also only as reliable as the APIs they invoke. But functions can scale out so that they never become a bottleneck. The previous articles discussed best practices around security and concurrency. This article continues with some of the other best practices, such as those for availability and monitoring.

Functions are popular because they can scale out as load increases. Considerations for enabling this scale-out are important, and they require us to determine how the functions respond to load and handle incoming events. One way to handle parallelism is for the function itself to do parallel processing using workers. The FUNCTIONS_WORKER_PROCESS_COUNT setting determines the maximum number of such worker processes per instance. After this threshold is exceeded, the function app is scaled out by creating new instances. When planning for throughput and scaling, the trigger configuration allows us to control batching behaviors and manage concurrency. Adjusting the values in these options can help each instance scale appropriately. These configuration options apply to all triggers in a function app and are maintained in the host.json file for the application. When we plan for connections, the number of connections allowed on a per-instance basis must be set. This limit affects the Consumption plan and all outbound connections from the function code.
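As a sketch of the kind of per-trigger batching and concurrency settings that host.json maintains, the fragment below tunes a Service Bus trigger; the specific values are illustrative assumptions, not recommendations, and should be adjusted based on load testing:

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 100,
      "messageHandlerOptions": {
        "maxConcurrentCalls": 16
      }
    }
  }
}
```

Raising maxConcurrentCalls increases parallelism within a single instance, while the host creates additional instances once an instance is saturated.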

Availability of Azure Functions is impacted by cold start. This is the delay before a new instance of the function is available to process an incoming request. It occurs when the app has been scaled down to zero instances and a new instance must be started to handle the next event. The function becomes ready only when its dependencies are also available. One way to mitigate cold start is to turn on the "Always On" setting in premium plans of the function app, but it is equally important to understand when scaling occurs. Scaling can vary based on a number of factors, but those that stand out are maximum instances, new instance rate, and scale efficiency. A function app can scale out to a maximum of 200 instances, and configuring a lower value will limit the scaling. New instances come up at a rate of 1 per second for HTTP triggers and 1 every 30 seconds for non-HTTP triggers such as Service Bus triggers. Efficiency can be improved by granting Manage rights on resources such as Service Bus; this can be set in the access policies. The billing for different plans can be found on the Azure Functions pricing page, but usage is aggregated at the app level and counted only for the duration the app was executing, not while it was idle. The units for billing are resource consumption in gigabyte-seconds (GB-s) and executions. Another way to overcome cold start is to implement a warm-up trigger in the function app. For non-HTTP triggers, a virtual network trigger can be used. When the autoscaling feature is supported as part of the plan, it can be implemented.
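To make the gigabyte-seconds billing unit concrete, the sketch below estimates resource consumption for a hypothetical workload. The memory, duration, and execution figures are illustrative assumptions, not actual Azure rates; consult the Azure Functions pricing page for current numbers.

```python
# Estimate Consumption-plan resource usage in gigabyte-seconds (GB-s).
# All workload figures below are hypothetical, for illustration only.

def resource_consumption_gb_s(memory_mb: float, duration_ms: float, executions: int) -> float:
    """Memory is billed in GB and duration in seconds, aggregated across executions."""
    gigabytes = memory_mb / 1024.0
    seconds = duration_ms / 1000.0
    return gigabytes * seconds * executions

# Example: 512 MB average memory, 800 ms average duration, 1,000,000 executions
total = resource_consumption_gb_s(512, 800, 1_000_000)
print(total)  # 400000.0 GB-s
```

Because billing counts only executing time, idle periods between invocations contribute nothing to this total.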

Azure Functions offers built-in integration with Azure Application Insights, which monitors function executions and traces written from the code. The AzureWebJobsDashboard application setting must be removed for improved performance, and the Application Insights logs must be reviewed. Sampling can be configured to control the volume of telemetry retained so that representative entries are found in the logs without overwhelming them.
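Sampling is configured in host.json under the logging section. The fragment below is a minimal sketch; the values shown are illustrative starting points, not recommendations:

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request;Exception"
      }
    }
  }
}
```

Excluding the Request and Exception types from sampling keeps every request and error visible while trace volume is reduced.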

