Monday, March 14, 2022

 

This is a continuation of a series of articles on Azure services from an operational engineering perspective, with the most recent being an introduction to Azure Functions with the link here. This article continues the discussion of Azure Functions best practices.

When we want code to be triggered by events, Azure Functions become very useful because they provide a compute-on-demand experience. They extend the existing Azure App Service platform with capabilities to implement code triggered by events occurring in Azure, in third-party services, and in on-premises systems. Functions can be built to be reactive, but they are also useful for processing data from various data sources. Functions are hosted in environments that, like VMs, are susceptible to faults such as restarts, moves, or upgrades. A function is also only as reliable as the APIs it invokes. But functions can scale out so that they do not themselves become a bottleneck. The previous articles discussed the best practices around security and concurrency. This article continues with some of the other best practices, such as availability and monitoring.

The steps to managing function applications include the following:

1.       A function app provides the execution context for its individual functions; therefore, any connection strings, environment variables, and other application settings must be tightly controlled for smooth running and scaling. Any data that is shared between function applications must be stored in an external persisted store.

2.       The portal is the right place to set the application settings and the platform features. Any number of settings can be added, and there are also predefined settings. These settings are stored encrypted.

3.       The hosting plan in which the application runs must be properly chosen to suit the scaling and pricing for one or more function applications.

4.       The eventual migration of a function application between plans must be taken into account as loads increase and the application ages over time. Only certain migrations are allowed, and each migration involves creating a destination plan and deleting the source plan. Forward and backward migrations are permitted in certain cases.

5.       Function access keys are required to support REST calls. The URL will be in the format https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>?code=<FUNCTION_ACCESS_KEY> for authorized calls. These keys must be shared with the caller after they are generated from the portal.

6.       The platform features such as App Service editor, Console, Kudu advanced tools, deployment options, CORS, and authentication can be leveraged for operational needs. For example, the App Service editor allows JSON configuration to be spot edited, and it enables integration with a git repository, which can trigger CI/CD pipeline steps. The console is an ideal developer tool for interacting with the function application via a command-line interface. Kudu is an administrator-friendly App Service administration tool that helps with viewing system information and with settings, variables, site extensions, and HTTP headers. It can be browsed at https://<APP_NAME>.scm.azurewebsites.net. The deployment center can help automate deployment from source control. Cross-origin resource sharing (CORS) lets an "Access-Control-Allow-Origin" header declare which origins are allowed to call endpoints on the function application. When functions use HTTP triggers, those requests must be authenticated. App Service provides Azure Active Directory authentication and allows configuring specific authentication providers.
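The authorized-call URL format from step 5 can be assembled in code before invoking the function; a minimal sketch in Python, where the app name, function name, and key are hypothetical placeholders:

```python
from urllib.parse import urlencode
from urllib.request import Request

def function_url(app_name: str, function_name: str, access_key: str) -> str:
    """Build the authorized REST endpoint for an HTTP-triggered function."""
    base = f"https://{app_name}.azurewebsites.net/api/{function_name}"
    return f"{base}?{urlencode({'code': access_key})}"

# Hypothetical values for illustration only.
url = function_url("contoso-app", "ProcessOrder", "abc123")
request = Request(url)  # urllib.request.urlopen(request) would perform the call
```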

Sunday, March 13, 2022

This is a continuation of a series of articles on Azure services from an operational engineering perspective, with the most recent being an introduction to Azure Functions Core Tools with the link here. This article discusses Azure Functions best practices.

When we want code to be triggered by events, Azure Functions become very useful because they provide a compute-on-demand experience. They extend the existing Azure App Service platform with capabilities to implement code triggered by events occurring in Azure, in third-party services, and in on-premises systems. Functions can be built to be reactive, but they are also useful for processing data from various data sources. Functions are hosted in environments that, like VMs, are susceptible to faults such as restarts, moves, or upgrades. A function is also only as reliable as the APIs it invokes. But functions can scale out so that they do not themselves become a bottleneck. The previous articles discussed the best practices around security and concurrency. This article continues with some of the other best practices, such as availability and monitoring.

Functions are popular because they can scale out as load increases. Considerations for enabling this scale-out are important, and they demand that the way the functions respond to load and handle incoming events be determined. One way to handle parallelism is for the function itself to do parallel processing using workers. The FUNCTIONS_WORKER_PROCESS_COUNT setting determines the maximum number of such workers. After this threshold is exceeded, the function app is scaled out by creating new instances. When planning for throughput and scaling, the trigger configuration allows us to control batching behaviors and manage concurrency. Adjusting the values in these options can help each instance scale appropriately. These configuration options are applied to all triggers in a function application and are maintained in the host.json for the application. When planning for connections, the number of connections on a per-instance basis must be set. This limit affects the Consumption plan and all outbound connections from the function code.
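As a sketch of what such host-wide trigger configuration looks like, a host.json fragment along these lines caps concurrent Service Bus message processing per instance (the value 16 is illustrative, not a recommendation, and the exact property layout varies with the Service Bus extension version; this shape matches the 4.x extension):

```json
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "maxConcurrentCalls": 16
      }
    }
  }
}
```

Because host.json applies host-wide, every Service Bus trigger in the function app inherits this setting.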

Availability of Azure Functions is impacted by cold start. This is the delay before a new instance of the function is available to process an incoming request. It occurs when the app has scaled in to zero instances and a new request arrives with no active instance to serve it. The function becomes ready only when its dependencies are also available. One way to mitigate cold start is to run in a plan that keeps instances warm, such as the "Always On" setting in a Dedicated plan or the pre-warmed instances of the Premium plan, but it is equally important to understand when scaling occurs. Scaling can vary on a number of factors, but those that stand out are maximum instances, new-instance rate, and scale efficiency. A function app can scale out to a maximum of 200 instances; a lower configured value can limit the scaling. New instances come up at the rate of 1 per second for HTTP triggers and 1 every 30 seconds for non-HTTP triggers such as Service Bus triggers. Scale efficiency can be improved by granting Manage rights on the resources, such as for Service Bus; this can be set in the access policies. The billing for different plans can be found on the Azure Functions pricing page, but usage is aggregated at the app level and only counted for the duration the app was executing, not when it was idle. The units for billing are resource consumption in gigabyte-seconds (GB-s) and executions. Another way to overcome cold start is to implement a warm-up trigger in the function app. For non-HTTP triggers, a virtual network trigger can be used. When the autoscaling feature is supported as part of the plan, it can be implemented.
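The gigabyte-seconds billing unit mentioned above can be illustrated with a small calculation (memory in GB × execution time in seconds × executions). This ignores the platform's rounding rules, and the numbers below are made up:

```python
def gb_seconds(memory_mb: float, duration_ms: float, executions: int) -> float:
    """Resource consumption in gigabyte-seconds for a batch of executions."""
    return (memory_mb / 1024) * (duration_ms / 1000) * executions

# One million executions at 512 MB for 1 second each:
usage = gb_seconds(512, 1000, 1_000_000)  # 500000.0 GB-s
```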

Azure Functions offers built-in integration with Azure Application Insights, which monitors function executions and collects traces written from the code. The AzureWebJobsDashboard application setting must be removed for improved performance, and the Application Insights logs must be reviewed instead. Sampling can be configured to keep telemetry volume manageable while still ensuring entries can be found in the logs.
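Sampling is configured in host.json; a fragment like the following enables it and caps the telemetry rate (the limit shown is illustrative):

```json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20
      }
    }
  }
}
```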


Saturday, March 12, 2022

 

Azure Functions: 

This is a continuation of a series of articles on Azure services from an operational engineering perspective, with the most recent being an introduction to Azure Functions Core Tools with the link here. This article discusses Azure Functions best practices.

When we want code to be triggered by events, Azure Functions become very useful because they provide a compute-on-demand experience. They extend the existing Azure App Service platform with capabilities to implement code triggered by events occurring in Azure, in third-party services, and in on-premises systems. Functions can be built to be reactive, but they are also useful for processing data from various data sources. Functions are hosted in environments that, like VMs, are susceptible to faults such as restarts, moves, or upgrades. A function is also only as reliable as the APIs it invokes. But functions can scale out so that they do not themselves become a bottleneck. This article details some of the best practices for designing efficient function apps.

Functions are popular because they can scale out as load increases. Considerations for enabling this scale-out are important, and they demand that the way the functions respond to load and handle incoming events be determined. One way to handle parallelism is for the function itself to do parallel processing using workers. The FUNCTIONS_WORKER_PROCESS_COUNT setting determines the maximum number of such workers. After this threshold is exceeded, the function app is scaled out by creating new instances. When planning for throughput and scaling, the trigger configuration allows us to control batching behaviors and manage concurrency. Adjusting the values in these options can help each instance scale appropriately. These configuration options are applied to all triggers in a function application and are maintained in the host.json for the application. When planning for connections, the number of connections on a per-instance basis must be set. This limit affects the Consumption plan and all outbound connections from the function code.
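For local development, the worker-count setting above is an ordinary application setting; in local.settings.json it would look like the following (the value 4 is illustrative; the platform caps this setting at 10):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_PROCESS_COUNT": "4"
  }
}
```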

Friday, March 11, 2022

 

Azure Functions: 

This is a continuation of a series of articles on Azure services from an operational engineering perspective, with the most recent being an introduction to Azure Functions Core Tools with the link here. This article discusses Azure Functions best practices.

When we want code to be triggered by events, Azure Functions become very useful because they provide a compute-on-demand experience. They extend the existing Azure App Service platform with capabilities to implement code triggered by events occurring in Azure, in third-party services, and in on-premises systems. Functions can be built to be reactive, but they are also useful for processing data from various data sources. Functions are hosted in environments that, like VMs, are susceptible to faults such as restarts, moves, or upgrades. A function is also only as reliable as the APIs it invokes. But functions can scale out so that they do not themselves become a bottleneck. This article details some of the best practices for designing efficient function apps.

Security is best considered during the planning phase and not after the functions are ready to go. Planning for the security of an Azure function is much like that of a cloud service or an application. Azure App Service provides the hosting infrastructure for the function applications. Defender for Cloud integrates with the function application via the portal and provides a free, quick assessment of potential vulnerabilities. Activity monitoring and logging analytics also boost security. Functions integrates with Application Insights to collect log, performance, and error data for the function application. Application Insights automatically detects performance anomalies and includes analytical tools. Functions also integrates with Azure Monitor Logs to enable us to consolidate function app logs with system events for easier analysis. Enterprise-level threat detection and response automation involve streaming log events to a Log Analytics workspace.

Clients can connect to function endpoints by using either HTTP or HTTPS, but HTTP should be redirected to HTTPS. The use of the SSL/TLS protocol provides a secure connection. Functions lets us use API access keys to make it harder to access the endpoints during development. Authorization scopes for these keys are available at two levels: function level and host level. Function-level keys allow access only to that function, while host-level keys allow access to all functions within the function app. There is also an admin-level master key called _master, which provides host-level access to all the functions in the app. When a function's access level is set to admin, requests must use the master key. Specific extensions may require a system-managed key to access webhook endpoints. System keys are designed for extension-specific function endpoints that are called by internal components. Keys are stored in a blob storage container in the account provided by the AzureWebJobsStorage setting. The AzureWebJobsSecretStorageType setting can be used to override this behavior and store keys in a different location. Last but not least, authentication and authorization decisions can be based on identity, and Azure API Management can be used to authenticate requests.
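Besides the ?code= query string, the same access keys can be supplied in the x-functions-key request header, which keeps them out of logged URLs; a minimal sketch with Python's urllib, where the endpoint and key are hypothetical:

```python
from urllib.request import Request

# Hypothetical endpoint and key for illustration only.
url = "https://contoso-app.azurewebsites.net/api/ProcessOrder"
req = Request(url, headers={"x-functions-key": "abc123"})
# urllib.request.urlopen(req) would then perform the authorized call.
```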

Function applications must run with the lowest permissions necessary. Functions support built-in Azure RBAC. Function apps can also be split so that each app holds, in its application settings, only the credentials and connection strings its functions actually need.

 

 

Thursday, March 10, 2022

Azure Functions:

 

This is a continuation of a series of articles on Azure services from an operational engineering perspective, with the most recent being an introduction to Azure Functions Core Tools with the link here. This article discusses Azure Functions best practices.

When we want code to be triggered by events, Azure Functions become very useful because they provide a compute-on-demand experience. They extend the existing Azure App Service platform with capabilities to implement code triggered by events occurring in Azure, in third-party services, and in on-premises systems. Functions can be built to be reactive, but they are also useful for processing data from various data sources. Functions are hosted in environments that, like VMs, are susceptible to faults such as restarts, moves, or upgrades. A function is also only as reliable as the APIs it invokes. But functions can scale out so that they do not themselves become a bottleneck. This article details some of the best practices for designing efficient function apps.

A function app must have a proper hosting plan. There are three basic plans available: the Consumption plan, the Premium plan, and the Dedicated (App Service) plan. Function apps work on both Linux and Windows, and the hosting plans are generally available (GA) on both. The hosting plan chosen determines the following behaviors:

-          How the function app is scaled based on demand and how instance allocation is managed

-          The resources available to each function app instance

-          The support for advanced functionality, such as Azure Virtual Network connectivity

Functions provide a limited range of options to switch hosting plans, so a proper choice at the time of function creation matters.

Functions require a storage account to be associated with a function app. The storage account connection helps the functions manage triggers and log entries. It is also used when dynamically scaling out function apps. Proper configuration helps with the reliability and performance of the function apps.

There are settings that determine whether the function application is run from the Azure Files endpoint or from the file servers. ARM templates with proper settings are available for Consumption plan, Dedicated plan, Premium plan with VNET integration and Consumption plan with a deployment slot.

Large data sets require special handling. For this purpose, extra storage can be mounted as a file share.

A solution can comprise multiple functions. Their deployment can be authored via a DevOps release. All functions in a function app are deployed at the same time. When the functions are run from a deployment package, the package can be deployed directly to production, which reduces cold start times.
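Running from a deployment package, as mentioned above, is controlled by an application setting; in the portal's Advanced Edit view of app settings it would appear along these lines (a value of 1 means the app runs from a package file deployed to it):

```json
[
  {
    "name": "WEBSITE_RUN_FROM_PACKAGE",
    "value": "1",
    "slotSetting": false
  }
]
```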

Continuous deployment is appropriate for solutions kept in source control. A warm-up trigger can reduce latencies when new instances are added. Deployment downtimes can be minimized, and rollbacks enabled, by using deployment slots. These are some strategies for successful deployments.

Several design principles can be helpful in writing reliable, robust functions. These include avoiding long-running functions, planning cross-function communication, writing functions to be stateless, and writing defensive functions.
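The "defensive functions" principle can be sketched in Python: because the platform may retry an invocation after a fault, handlers should be idempotent, for example by tracking processed event ids in a durable store. The store is stubbed here as an in-memory set and the function names are hypothetical:

```python
processed_ids = set()  # stand-in for a durable store such as Table storage

def handle_event(event_id: str, payload: dict, process) -> str:
    """Process an event at most once per id; a retried delivery becomes a no-op."""
    if event_id in processed_ids:
        return "skipped"           # duplicate delivery after a retry
    result = process(payload)      # the actual (short-running) work
    processed_ids.add(event_id)    # record success only after the work is done
    return result

first = handle_event("evt-1", {"qty": 2}, lambda p: "done")
retry = handle_event("evt-1", {"qty": 2}, lambda p: "done")  # "skipped"
```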

Wednesday, March 9, 2022

 

Azure Functions Core Tools:

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction to Azure App Configuration with the link here. This article discusses Azure Functions Core Tools.

 

Azure Functions Core Tools help develop and test Azure Functions on the local computer from the command prompt or terminal. The local function can connect to live Azure services, and this helps debug functions on the local computer using the full Functions runtime. The tools can even help deploy a function application to the Azure subscription. The basic steps are as follows:

1.       Install the Core Tools and dependencies – the prerequisites for these are Azure CLI or Azure PowerShell. The Core Tools version corresponds to the version of the Azure Functions runtime. The version used depends on the local development environment, the choice of language, and the level of support required.

A version of the same runtime that powers Azure Functions can also be run on the local computer.

2.       Create a local functions project: The project directory contains the following files and folders, regardless of language: host.json, local.settings.json, .gitignore, and .vscode\extensions.json. The command “func init [projectFileName]” can create the necessary structure and boilerplate scaffolding.

3.       Register extensions: Function triggers and bindings are implemented as .NET extension (NuGet) packages. These packages are referenced for the specific triggers and bindings in use. HTTP bindings and timer triggers don't require extensions.

4.       Use extension bundles. Binding extensions can be installed by enabling extension bundles. When we enable bundles, a predefined set of extension packages is automatically installed. The host.json file carries an additional extensionBundle entry, comprising an id and a version, to enable it.

5.       Install extensions one by one – extension bundles cannot always be used, for example when we need to target a specific version of an extension that is not in the bundle. In these cases, Core Tools can be used to install the extensions one by one locally.

6.       Local settings: Settings required by functions are stored securely in app settings when the function runs in Azure. During local development, these settings are added instead to the Values object in the local.settings.json file, which also stores settings used by the local development tools.

7.       Get the storage connection strings: Even with the Microsoft Azure Storage Emulator available for development, it is possible to run the function locally against an actual storage connection string.

8.       Create the function: The creation of a function in an existing project is done with the help of the “func new” command.

9.       Run the functions locally with the help of the func start command.
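The local.settings.json file mentioned in steps 6 and 7 typically looks like the following for a local project; UseDevelopmentStorage=true points the runtime at the local storage emulator, and the worker runtime shown is just one possible choice:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "python"
  }
}
```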

 

With these steps, we can get started with a function application in the local development environment, and test data can be passed to it.

Tuesday, March 8, 2022

 Azure App Configuration (continued)

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction to Azure App Configuration with the link here. This article elaborates on the best practices with Azure App Configuration. 

There are two options for organizing keys: key prefixes and labels. Key prefixes are the beginning parts of keys. A set of keys can be grouped by using the same prefix in their names. Prefixes can use folder-path-like separators for qualification. Keys are what the application code references to retrieve the values of the corresponding settings. Labels are an attribute of keys, and they are used to create variants of a key. A variant might represent an iteration, an environment, or some other contextual information.

Labels can be used to organize per-environment configuration values, which is a common ask from many applications. If a configuration value defines the connection string to a backend database, the databases are likely to be different between the production and development environments. Since the connection string changes with the database, labels can be used to define different values for the same key. Configuration values can be loaded with a specified label as in the following example:

config.AddAzureAppConfiguration(options =>
    options
        .Connect(settings.GetConnectionString("AppConfig"))
        .Select(KeyFilter.Any, LabelFilter.Null)
        .Select(KeyFilter.Any, hostingContext.HostingEnvironment.EnvironmentName));

The Select method is called twice deliberately: the first call loads configuration values that have no label, and the second loads values with the label corresponding to the current environment, overriding the unlabeled values where the keys match.

Azure App Configuration supports import and export of data.  This can be done in bulk as well as for specific key-values.

Requests to App Configuration can be kept efficient by increasing the refresh timeout, watching a single sentinel key rather than many individual keys, using Azure Event Grid to receive notifications when configuration changes, and spreading requests across multiple App Configuration stores. For import and export, there is an option to bulk import the configuration settings from the current configuration files using either the portal or the CLI. The same option can be used to export key-values from App Configuration.

 It is also possible to set up an ongoing sync with a GitHub repository with the help of a GitHub Action.

JSON is well suited for key-values, as evidenced by its widespread use across document stores, data warehouses, and object storage. By setting the default media type of the configuration store to JSON, we get benefits like 1) simpler data management, since it is convenient to use with the Azure portal, 2) enhanced data export, where JSON objects are preserved during export, and 3) native support in the App Configuration provider libraries in our applications.

App Configuration stores must be backed up, and this can be done automatically from a primary Azure App Configuration store to a secondary one. The backup uses an integration of Event Grid with App Configuration, where the latter acts as the publisher of changes to key-values and the former passes them on to interested subscribers, which can be a service or a queue. A storage queue can receive the events, and a timer trigger in Azure Functions can process the events from the queue in batches. When the function is triggered, it fetches the latest values of the keys that have changed from the primary App Configuration store and updates the secondary store. This combines multiple changes that occur in a short period into one backup operation and thus avoids excessive requests to the App Configuration store.
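The batching logic described above can be sketched independently of the SDKs; the store objects below are plain dicts standing in for the primary and secondary App Configuration clients, so a real implementation would substitute Azure SDK calls:

```python
def sync_changed_keys(events: list, primary: dict, secondary: dict) -> set:
    """Collapse a batch of change events into one sync pass from primary to secondary."""
    changed = {e["key"] for e in events}   # de-duplicate: many events, one fetch per key
    for key in changed:
        secondary[key] = primary[key]      # copy the *latest* value, not the event payload
    return changed

primary = {"color": "blue", "size": "12"}
secondary = {"color": "red", "size": "10"}
events = [{"key": "color"}, {"key": "color"}, {"key": "size"}]  # burst of changes
synced = sync_changed_keys(events, primary, secondary)
```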

The backup of configuration store can be performed across regions which improves the overall geo-resiliency of the application. The primary and the secondary stores should be in different Azure regions for this purpose.

It is recommended that managed identities be used to perform operations on the configuration store. Managed identities simplify secret management for a cloud application. With a managed identity, the application code can use the service principal created for the Azure service it runs on, instead of a separate credential stored in Azure Key Vault or a local connection string, and the identity can be leveraged across function applications and auxiliary code. A managed identity can also be federated so that the same identity works across the identity isolation between public and sovereign clouds.