Friday, March 11, 2022

 

Azure Functions: 

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction to Azure Function core tools with the link here. This article discusses Azure Functions best practices. 

When we want code to be triggered by events, Azure Functions becomes very useful because it is a compute-on-demand experience. It extends the existing Azure App Service platform with capabilities to implement code triggered by events occurring in Azure, in third-party services, and in on-premises systems. Functions can be built to be reactive, but they are also useful for processing data from various data sources. Functions are hosted in environments that, like VMs, are susceptible to faults such as restarts, moves, or upgrades. A function is also only as reliable as the APIs it invokes. But functions can scale out so that they never become a bottleneck. This article details some of the best practices for designing efficient function apps. 

Security is best considered during the planning phase, not after the functions are ready to go. Planning for the security of an Azure function is much like that for a cloud service or an application. Azure App Service provides the hosting infrastructure for function applications. Microsoft Defender for Cloud integrates with the function application via the portal and provides a free, quick assessment of potential vulnerabilities. Activity monitoring and log analytics also boost security. Functions integrates with Application Insights to collect log, performance, and error data for the function application. Application Insights automatically detects performance anomalies and includes analytical tools. Functions also integrates with Azure Monitor Logs to let us consolidate function app logs with system events for easier analysis. Enterprise-level threat detection and response automation involves streaming log events to a Log Analytics workspace.

Clients can connect to function endpoints using either HTTP or HTTPS, but HTTP should be redirected to HTTPS. The use of the SSL/TLS protocol provides a secure connection. Functions lets us use API access keys to make it harder for others to access the endpoints during development. Access keys are available at two scopes: function level and host level. A function-level key allows access only to that function, while host-level keys allow access to all functions within the function app. There is also an admin-level master key called _master which provides host-level access to all the functions in the app. When the access level is set to admin, requests must use the master key. Specific extensions may require a system-managed key to access webhook endpoints; system keys are designed for extension-specific function endpoints that are called by internal components. Keys are stored in a blob storage container in the account specified by the AzureWebJobsStorage setting. The AzureWebJobsSecretStorageType setting can be used to override this behavior and store keys in a different location. Last but not least, authentication and authorization decisions can be based on identity, and Azure API Management can be used to authenticate requests.
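As an illustration, access keys can be inspected with the Azure CLI and supplied to an HTTP-triggered function either as a query parameter or a header. This is a sketch only; the resource group, app name, function name, and key value below are all placeholders:

```shell
# List the host-level and master keys for a function app (names are placeholders).
az functionapp keys list \
  --resource-group MyResourceGroup \
  --name my-function-app

# Call an HTTP-triggered function, passing a function key as a query parameter...
curl "https://my-function-app.azurewebsites.net/api/MyHttpFunction?code=<function-key>"

# ...or in the x-functions-key header. Always use HTTPS so the key is not sent in the clear.
curl -H "x-functions-key: <function-key>" \
  "https://my-function-app.azurewebsites.net/api/MyHttpFunction"
```

Keys are a development convenience; for production clients, identity-based authorization (for example fronting the app with Azure API Management) is the stronger option, as noted above.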

Function applications must run with the lowest required permissions. Functions support built-in Azure RBAC. A solution can also be split into multiple function apps so that each app holds only the credentials and connection strings it needs in its application settings.

 

 

Thursday, March 10, 2022

Azure Functions:

 

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction to Azure Function core tools with the link here. This article discusses Azure Functions best practices.


A function app must have a proper hosting plan.  There are three basic plans available: the Consumption plan, the Premium plan, and the Dedicated (App Service) plan. Function apps work on both Linux and Windows, so the hosting plans are generally available (GA) on both. The hosting plan chosen determines the following behaviors:

-          How the function app is scaled based on demand and how instance allocation is managed

-          The resources available to each function app instance

-          The support for advanced functionality, such as Azure Virtual Network connectivity

Functions provide a limited range of options to switch hosting plans so proper choice at the time of function creation matters.
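As an example, a function app on the serverless Consumption plan might be created with the Azure CLI as follows. This is a sketch; the resource names, region, runtime, and version are placeholder choices:

```shell
# Create a function app on the Consumption plan (names and region are placeholders).
# The storage account must already exist in the same resource group.
az functionapp create \
  --resource-group MyResourceGroup \
  --name my-function-app \
  --storage-account mystorageacct \
  --consumption-plan-location westus2 \
  --runtime dotnet \
  --functions-version 4
```

The Premium and Dedicated plans are selected instead by passing an existing plan via --plan, which is why the plan decision is best made up front.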

Functions require a storage account to be associated with a function app.  The storage account connection helps the runtime manage triggers and log entries. It is also used when dynamically scaling out the function apps. Proper configuration helps with the reliability and the performance of the function apps.

There are settings that determine whether the function application is run from the Azure Files endpoint or from the file servers. ARM templates with proper settings are available for Consumption plan, Dedicated plan, Premium plan with VNET integration and Consumption plan with a deployment slot.

Large data sets require special handling. For this purpose, extra storage can be mounted as a file share.

A solution can comprise multiple functions. Their deployment can be authored via a DevOps release. All functions in a function app are deployed at the same time. When functions are run from a deployment package, the package can be deployed directly to production, which reduces cold-start times.

Continuous deployment is recommended for source-controlled solutions. A warmup trigger can reduce latencies when new instances are added. Deployment downtime can be minimized, and rollbacks enabled, by using deployment slots. These are some strategies for successful deployments.
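A slot-based deployment could be sketched with the Azure CLI and Core Tools as follows, assuming an existing function app on a plan that supports slots; the app, group, and slot names are placeholders:

```shell
# Create a staging slot alongside the production slot (names are placeholders).
az functionapp deployment slot create \
  --resource-group MyResourceGroup \
  --name my-function-app \
  --slot staging

# Publish the project to the staging slot for validation.
func azure functionapp publish my-function-app --slot staging

# Swap staging into production; swapping back performs the rollback.
az functionapp deployment slot swap \
  --resource-group MyResourceGroup \
  --name my-function-app \
  --slot staging \
  --target-slot production
```

Because the swap is an exchange of slots rather than a redeployment, rolling back is just another swap.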

Several design principles are helpful for writing reliable, robust functions. These include avoiding long-running functions, planning cross-function communication, writing functions to be stateless, and writing defensive functions.

Wednesday, March 9, 2022

 

Azure Functions Core Tools:

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction to Azure App Configuration with the link here. This article discusses Azure Functions Core Tools.

 

Azure Functions Core Tools help to develop and test Azure Functions on the local computer from the command prompt or terminal.  The local function can connect to live Azure Services, and this helps to debug functions on the local computer using the full Functions runtime. It can even help deploy a function application to the Azure subscription. The basic steps are as follows:

1.       Install the core tools and dependencies – The prerequisites for these are Azure CLI or Azure PowerShell.  Each Core Tools version corresponds to a version of the Azure Functions runtime; the version used depends on the local development environment, choice of language, and the level of support required.

A version of the same runtime that powers Azure Function can also be run on the local computer.

2.       Create a local functions project: The project directory contains the following files and folders, regardless of language: host.json, local.settings.json, .gitignore, and .vscode\extensions.json. The command “func init [projectFileName]” can create the necessary structure and boilerplate scaffolding.
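As a brief example, a project could be scaffolded like this (the project name and worker runtime are placeholder choices):

```shell
# Scaffold a new local functions project with the chosen worker runtime.
func init MyFunctionProject --worker-runtime dotnet
cd MyFunctionProject
# host.json, local.settings.json, .gitignore and .vscode/extensions.json are generated here.
```

Omitting --worker-runtime makes func init prompt interactively for the language stack.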

3.       Register extensions: Function triggers and bindings are implemented as .NET extension (NuGet) packages. These can be referenced for the specific triggers and bindings needed. HTTP bindings and timer triggers don’t require extensions.

4.       Use extension bundles.  Binding extensions can be installed by enabling extension bundles. When we enable bundles, a predefined set of extension packages is automatically installed. The host.json file carries an additional extensionBundle entry, comprising an id and a version, to enable it.
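A typical host.json enabling a bundle looks roughly like the following; the version range shown is one common choice and should be adjusted to the runtime in use:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[2.*, 3.0.0)"
  }
}
```

The interval notation pins the bundle to a major version while still picking up minor updates.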

5.       Install extensions one by one – If a project cannot use extension bundles, or needs to target a specific version of an extension that is not in a bundle, Core Tools can be used to install the extensions individually on the local computer.

6.       Local settings: Settings required by functions are stored securely in app settings when the function runs in Azure. During local development, these settings are added instead to the Values object in the local.settings.json file, which also stores settings used by the local development tools.
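A minimal local.settings.json for local development might look like this; the values shown assume the local storage emulator and a .NET worker:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
```

This file is for local use only and is excluded from deployment by default, so secrets placed here never leave the development machine.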

7.       Get the storage connection strings: With the help of the Microsoft Azure Storage Emulator for development, it is possible to run the function locally without an actual storage connection string, by using the emulator’s development connection setting instead.

8.       Create the function: The creation of a function in an existing project is done with the help of the “func new” command.

9.       Run the functions locally with the help of the func start command.
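Steps 8 and 9 might look like this for an HTTP-triggered function; the template and function names are placeholder choices:

```shell
# Create a function from a built-in template inside the project directory.
func new --template "HTTP trigger" --name MyHttpFunction

# Start the local Functions host; HTTP functions are served on localhost.
func start
```

While func start is running, the local endpoint it prints can be exercised with curl or a browser to pass test data to the function.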

 

With these steps, we can get started with an application function in the local development environment and test data can be passed to it.

Tuesday, March 8, 2022

 Azure App Configuration (continued)

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction to Azure App Configuration with the link here. This article elaborates on the best practices with Azure App Configuration. 

There are two options for organizing keys – key prefixes and labels. Key prefixes are the beginning parts of keys, and a set of keys can be grouped by using the same prefix in their names. Prefixes can use folder-path-like separators for qualification. Keys are what the application code references to retrieve the values of the corresponding settings. Labels are an attribute of keys, used to create variants of a key. A variant might represent an iteration, an environment, or some other contextual information.

Labels can be used to organize per-environment configuration values, which is a common requirement for many applications. If a configuration value defines the connection string to a backend database, the databases are likely to be different between the production and development environments. Since the connection string changes with the database, labels can be used to define different values for the same key. Configuration values can be loaded with a specified label as in the following example:

Config.AddAzureAppConfiguration(options =>

    options

        .Connect(settings.GetConnectionString("AppConfig"))

        .Select(KeyFilter.Any, LabelFilter.Null)

        .Select(KeyFilter.Any, hostingContext.HostingEnvironment.EnvironmentName));

The Select method is deliberately called twice: the first call loads configuration values with no label, and the second loads values with the label corresponding to the current environment, which override the unlabeled values for the same keys.

Azure App Configuration supports import and export of data.  This can be done in bulk as well as for specific key-values.

Import and export can be performed in bulk or scoped to specific key-values. There is an option to bulk-import configuration settings from existing configuration files using either the portal or the CLI, and the same option can be used to export key-values from App Configuration.
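As a sketch, a bulk import followed by an export with the Azure CLI could look like the following; the store name and file paths are placeholders:

```shell
# Import key-values from an existing JSON configuration file into the store.
az appconfig kv import \
  --name MyAppConfigStore \
  --source file --path appsettings.json --format json --yes

# Export the store's key-values back out to a file.
az appconfig kv export \
  --name MyAppConfigStore \
  --destination file --path exported.json --format json --yes
```

Both commands also accept filters such as --label, which narrows the operation to specific key-values rather than the whole store.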

 It is also possible to setup an ongoing sync with a GitHub repository with the help of a GitHub Action.

JSON is well suited for key-values, as evidenced by its widespread use across document stores, data warehouses, and object storage.  By setting the default media type of the configuration store to JSON, we get benefits like 1) simpler data management, since it is convenient to use with the Azure portal, 2) enhanced data export, where JSON objects are preserved during export, and 3) native support in the application configuration provider libraries in our applications.
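For example, a key-value can be stored with a JSON media type using the Azure CLI; the store name, key, and value below are placeholders:

```shell
# Set a key-value whose content type marks the value as a JSON object.
az appconfig kv set \
  --name MyAppConfigStore \
  --key "Settings:BackgroundColor" \
  --value '{"r": 255, "g": 250, "b": 250}' \
  --content-type "application/json" \
  --yes
```

With the application/json content type set, exports preserve the value as an object instead of a flat string, and provider libraries can bind it natively.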

Application configurations stores must be backed up and this can be done automatically from a primary Azure App configuration store to its secondary. The backup uses an integration of Event Grid with App Configuration where the latter acts as the publisher of changes to key-values and the former passes it on to interested subscribers which can be a service or a queue.  A storage queue can receive events and use a timer trigger of Azure Functions to process the events from the queue in batches. When the function is triggered, it will fetch the latest values of the keys that have changed from the primary App configuration store and update the secondary store. This helps combine multiple changes that occur in a short period in one backup operation and thus avoids excessive requests to the application configuration store.

The backup of configuration store can be performed across regions which improves the overall geo-resiliency of the application. The primary and the secondary stores should be in different Azure regions for this purpose.

It is recommended that managed identities be used to perform the operations on the configuration store. Managed identities simplify secret management for a cloud application. With managed identity, the application code can use the service principal created for the Azure Service it runs on. Instead of a separate credential stored in Azure Key Vault or a local connection string, the managed identity can be leveraged across function applications and code that is auxiliary to the application.  A managed identity can also be federated so that the same identity works across identity isolation between public and sovereign clouds.

 

 


Monday, March 7, 2022

 

Summary of a book “The Manager’s Dilemma” by Irial O’Farrell

This book talks about the Manager’s qualities that are applicable to a wide variety of cases. It does this by presenting concrete examples that are eye-opening.

The dichotomy between a manager and an engineer lies in their approach when unexpected problems arise. People wearing both hats often jump into problem-solving mode when they would be better served by managing. Managers must decide whether to be fix-it champs or to help their team members develop their own problem-solving skills. Irial O’Farrell, the author of this book and an executive coach, advises the second choice. Managers must teach their reports how to handle problems as they surface so that they can focus on their immediate and ongoing managerial tasks. She advises managers on how to coach their team members and delegate problem solving to them.

The key takeaways from this book include:

1.       Most employees can identify problems, but few can solve them

2.       Employees may want the manager to solve the problem, but they must instead be taught how.

3.       Employees have three common reasons to escalate problems to their managers.

4.       Employees must do the necessary groundwork before bringing a problem to their managers.

5.       The manager’s mindset must be to favor managing rather than fixing things.

6.       Managers who become popular for problem solving while leading their teams are at risk of burnout.

7.       Both managers and employees must learn to solve a problem in eight steps.

8.       Both must use the SMART system to achieve their goals

9.       Managers not only empower their employees but also empower themselves to become effective coaches.

When managers are presented with problems, there is a high chance that the employees identified the problem but could not solve it. The manager has a few tradeoffs at this point. They can focus on problem solving to get promoted, or they can invest in an unproven approach that empowers the team’s problem solving. The difference lies in observing the hidden cost of an employee’s inability to solve a problem. Team members may waste their manager’s time or expect the boss to set aside her priorities and ride to the rescue. This creates an impediment to the team’s ability to attain its goal. If the manager and the team discussed and developed problem-solving skills, they can tackle the current and the next issues themselves. So, a manager must change the employees’ expectation that she will solve their problems and instead teach them how to do it.

There are other inefficiencies that occur when the proposed approach is not taken. Staff members who rely on their manager to solve the problem never take ownership of the issues they encounter. It may even get worse as others, including the manager, must invest time to pick up where they left off, which hurts the manager’s and the team’s efficiency and productiveness. Coaching fulfils the manager’s responsibilities and contributes to the professional development of their reports.

When employees escalate problems to their supervisor, they might do so because they are lazy, or they have tried unsuccessfully to fix it, or they want to make sure their managers know about it. The first reason is invalid but the remaining two are valid reasons.

Employees must always do the necessary groundwork before bringing a problem to their manager. Before they can coach their employees to solve their own problems, a manager must persuade them to share that goal. The motivation to escalate must be included with the problem presentation. It must also show what approaches for problem solving were attempted and how far they were carried out.

The employees must demonstrate they are not passing the buck as part of “signposting” where he or she indicates that a problem exists but is beyond his or her ability to solve. At this point, the employee wants the manager to ask what needs to be done and how she can help.

Signposting is a clear indicator that the proposed approach is warranted. Coaching simultaneously relieves the manager, who might otherwise be at risk of burnout. The manager must change her mindset to manage, not fix things. She must focus on her own work and responsibilities rather than put out fires. This enlightened mindset might be counter-intuitive to the very habits that put her in the manager’s position in the first place, but it pays off over time. Companies do promote managers who solve problems, but managers who grow their reports to do the same provide greater value. This is the “manager’s dilemma”.

A team’s new and improved problem solving resulting from the manager’s coaching may also meet with some resistance initially, but it will provide an opportunity to reduce the escalations in the future. The manager is left with more time and concentration to lead. This is indeed a long-term solution that is better than the tactical problem-solving habit.

There are eight steps to problem solving: 1. Determine the problem, 2. Assess the problem’s reach, 3. Research the problem’s causes, 4. Determine the various options, 5. Evaluate all options, 6. Figure out the ideal solution, 7. Implement this solution, and 8. Review your results.

The SMART system is geared to achieving managerial goals including the coaching of team members to solve their problems. It is a framework for goal-attainment and a performance management tool. A SMART system focuses on Specific goals to achieve, Measurable progress, Attainable goals, Relevant alignments with organization’s objectives and the Timed response to reach the goal.

Coaching is not restricted to team members. Managers must also empower themselves to become effective coaches. Self-awareness is a cornerstone of effective management and leadership.

 

 

Sunday, March 6, 2022

 

Azure App Configuration (continued)

 

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction to Azure App Configuration with the link here. This article elaborates on the best practices with Azure App Configuration.  

Azure App Configuration maintains a record of changes made to key-values. This record provides a timeline of key-value changes and allows us to reconstruct the history of any key-value and fall back to an earlier value in that history. The az appconfig revision list command retrieves all recorded changes to the key-values.
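For example, the recent revisions of one key can be listed like this; the store and key names are placeholders:

```shell
# Show the five most recent revisions of a specific key-value.
az appconfig revision list \
  --name MyAppConfigStore \
  --key "Settings:BackgroundColor" \
  --top 5
```

Each revision carries its last-modified timestamp and value, which is what makes falling back to an earlier point in the history possible.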

App configuration is particularly helpful for feature management. This is a software development practice that decouples feature release from code deployment and enables quick changes to feature availability on demand. A feature flag can be used to toggle the feature on or off.

Feature flags serve to wrap new functionality under development so that a feature can be shipped even if it is unfinished. The team no longer needs to maintain code against multiple development lifecycles. A feature can be tested in production by limiting access to beta customers, rolled out incrementally in production, or disabled without rebuilding and redeploying the application. Feature flags can also be used to segment users and deliver a specific set of features to each group. All feature flags used in an application can be externalized. One way to avoid proliferation of key-values is to remove obsolete flags and promote the feature into the current source.
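Feature flags can be managed directly from the Azure CLI; as a sketch, with a placeholder store and flag name:

```shell
# Create a feature flag in the store, then toggle it on and off on demand.
az appconfig feature set --name MyAppConfigStore --feature Beta --yes
az appconfig feature enable --name MyAppConfigStore --feature Beta --yes
az appconfig feature disable --name MyAppConfigStore --feature Beta --yes
```

Because the flag lives in the store rather than in code, toggling it changes feature availability without a code deployment.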

Azure App Configuration events enable applications to react to changes in key-values. This eliminates polling code or expensive and inefficient polling services. Events are pushed through Azure Event Grid to subscribers. Common App Configuration event scenarios include refreshing application configuration, triggering a deployment, or any configuration-oriented workflow.

Polling mechanisms and event subscriptions each have their advantages, and the difference can be seen in the size and frequency of the changes made to the configuration store. When changes are infrequent but the scenario requires immediate responsiveness, an event-based architecture can be especially efficient.  The publisher and subscribers exchange just the data that has changed, as soon as the change happens, which enables downstream systems to be reactive and multiple subscribers to receive the notifications at once. Subscribers need to implement nothing more than a message handler, and because the scope of the change is passed down to them, they get precise information on just what configuration has changed.

If changes become frequent, however, the number of notifications grows large, leading to a performance bottleneck with variations in queue size and delays in message delivery.  When changes span many keys, it is better to get those changes in bulk. A polling mechanism can fetch changes in batches over time and process them all together. It can even find only the updates made since the previous poll, which enables incremental updates at the destination. Since a polling mechanism is a loop that perpetually finds changes, if any, and applies them to the destination, it can work in the background even as a single worker. Polling is a read-only operation, so it does not need to fetch data from the store where the configuration is being actively updated; it can even fetch the data from a mirror of the configuration store. Separating the read-write store from a read-only store helps improve throughput for the clients that update the configuration store. Read-only access is for querying purposes only, and a store dedicated to that purpose can be hosted on a technology well suited to queries. It is recommended that both the source and the destination of configuration store changes be suited to their purpose.
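A minimal polling-based sync between a primary and a secondary store could be sketched in shell as follows. The store names and interval are placeholders, and a real implementation would compare individual key-values or use the revision history rather than diffing whole exports:

```shell
# Sketch of a single-worker polling loop: export the primary store, and when the
# snapshot differs from the last one seen, push it in bulk to the secondary store.
touch last.json
while true; do
  az appconfig kv export --name PrimaryStore \
    --destination file --path current.json --format json --yes
  if ! cmp -s current.json last.json; then
    az appconfig kv import --name SecondaryStore \
      --source file --path current.json --format json --yes
    cp current.json last.json
  fi
  sleep 300   # poll every five minutes; tune to the expected change rate
done
```

Because the loop only reads from the primary store, it could equally be pointed at a read-only mirror, keeping load off the store that clients actively update.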

 

 

 

Saturday, March 5, 2022

 Azure App Configuration  (continued)


This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction to Azure App Configuration with the link here. This article elaborates on the best practices with Azure App Configuration.  


App Configuration treats all keys stored with it as independent entities; it doesn’t attempt to infer any relationship between keys. Aggregation is made possible with the help of labels, and application code can perform configuration stacking.

Excessive requests made to App Configuration can result in throttling or overage charges. Requests can be reduced by increasing the refresh timeout, watching a single sentinel key, using Azure Event Grid to receive notifications when configuration changes and spreading requests across multiple app configuration stores. There is an option to bulk import the configuration settings from the current configuration files using either the portal or CLI. The same option can be used to export key-values from app configuration. 

Client applications present two common risks: first, they may use a connection string that is exposed to the public, and second, the scale of requests from client applications can be excessive. It is recommended that a proxy be placed between the applications and the App Configuration store.
