Azure Functions:
This article continues a series on Azure services from an operational engineering perspective; the most recent installment introduced Azure Functions. Here the discussion turns to Azure Functions best practices, with a focus on organization and scalability.
Performance and scalability concerns are more visible with serverless function apps. Large, long-running functions can cause unexpected timeouts. Bloat also becomes noticeable as libraries accumulate: a Node.js function app, for example, can pull in dozens of dependencies, and a dependency tree can be of arbitrary breadth and depth. Loading those dependencies increases startup time, which in turn can cause unexpected timeouts. Whenever possible, split a function app into smaller functions that work together and return responses quickly. One way to do this is to separate an HTTP-triggered function from a queue-triggered function: the HTTP trigger places the payload on a queue and returns an acknowledgment immediately, while the queue trigger processes the payload afterward.
Function apps in production should carry as little overhead as possible. Test-related functions should not be deployed. Code shared between functions should live in its own folder; otherwise multiple copies of it will accumulate and drift apart. Memory usage is averaged across the functions in an app, so the fewer functions per app, the better. Verbose logging in production code also has a negative impact on performance.
Async calls can be used to avoid blocking. Asynchronous programming is a recommended best practice, especially when blocking I/O operations are involved. By default, a host instance for functions uses a single worker process. The FUNCTIONS_WORKER_PROCESS_COUNT app setting can increase the number of worker processes per host to up to 10, and function invocations are distributed across these workers.
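Since FUNCTIONS_WORKER_PROCESS_COUNT is an ordinary app setting, it can be changed with the Azure CLI like any other setting. The app and resource-group names below are placeholders:

```shell
# Raise the per-host worker process count for a function app
# (placeholder app and resource-group names).
az functionapp config appsettings set \
  --name my-function-app \
  --resource-group my-resource-group \
  --settings FUNCTIONS_WORKER_PROCESS_COUNT=4
```

Each new host instance created when the app scales out starts the configured number of workers, so this setting multiplies with scale-out rather than replacing it.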
Messages can be batched, which leads to better performance. Batching can be configured in the host.json file associated with the function app. C# functions can opt in by changing the binding type to a strongly typed array; other languages might require setting the cardinality in function.json.
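As a sketch of the non-C# case, a function.json for an Event Hubs trigger can set `"cardinality": "many"` so the function receives an array of events per invocation. The hub name and connection setting below are placeholders:

```json
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "events",
      "eventHubName": "my-event-hub",
      "connection": "EventHubConnection",
      "cardinality": "many"
    }
  ]
}
```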
Host behaviors can be configured to better handle concurrency. Host runtime and trigger behaviors are configured in the host.json file. Concurrency can be managed for a number of trigger types, and these settings strongly influence how function apps scale.
These settings apply across all functions in the app, per instance of the app. In a function app with two HTTP-triggered functions and maxConcurrentRequests set to 25, a request to either trigger counts toward the shared concurrency limit; when the app scales out to, say, ten instances, the effective maximum becomes 250 concurrent requests across those instances.
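The scenario above can be expressed in host.json roughly as follows; the values are illustrative, not recommendations:

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 25,
      "maxOutstandingRequests": 200
    }
  }
}
```

`maxOutstandingRequests` caps how many requests may be queued or in flight at once per instance, while `maxConcurrentRequests` caps how many are actually executing; requests beyond the outstanding limit are rejected rather than queued indefinitely.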