Azure Functions:
Introduction: This article is a continuation of the series that began with a description of the SignalR service, which was followed by a discussion of the Azure Gateway service, Azure Private Link, and Azure Private Endpoint, and the benefit of diverting traffic to the Azure backbone network. We then reviewed more public, internet-facing services such as the Bing API and the benefits it provides when used together with Azure Cognitive Services. Next we discussed infrastructure APIs such as the Provider API, ARM resources, and Azure Pipelines, and followed that with a brief overview of Azure services support for the Kubernetes control plane via OSBA and the Azure operator. We then walked through an example of Azure integration services with Host Integration Server (HIS). After that we began discussing the Azure FHIR service, reviewing its search capabilities followed by its regulatory compliance and security policies. Most recently, we discussed Azure Synapse Link for Cosmos DB.
This article is about connecting all Azure functionalities via extensions that
do not affect existing deployments. Specifically, we discuss Azure Functions.
Description: Azure Functions provides compute on demand. When an application is written as many small blocks of code rather than one large monolithic code base, its modularity and maintainability improve. These code blocks are called functions, and they react to critical events. When requests surge, the functions scale out with as many instances as necessary, and once the traffic has died down, they scale back in. All the compute resources come from Azure Functions, so as a developer of Azure Functions there is no need to be concerned with infrastructure and operations. Once the function is written, a hosting plan must be chosen. There are three basic hosting plans available for Azure Functions: the Consumption plan, the Premium plan, and the Dedicated (App Service) plan. The hosting plan determines how the function app is scaled, what resources are available to each function app instance, and the support for connectivity methods such as Azure virtual network connectivity. Azure Functions can even be Kubernetes-based.
Azure Functions is made up of two key components: a runtime and a scale controller. The Functions runtime runs and executes the code. The scale controller monitors the rate of events that are targeting the functions. When the functions are hosted on Kubernetes, Kubernetes-based Event Driven Autoscaling (KEDA) publishes the metrics so that the Kubernetes autoscaler can scale from 0 to n instances. The Functions runtime is hosted in a Docker container. KEDA supports Azure Functions triggers in the form of Azure Storage queues, Azure Service Bus, Azure Event Hubs, Apache Kafka, and RabbitMQ queues. HTTP triggers are not directly managed by KEDA.
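For illustration, here is a minimal sketch of a queue-triggered function of the kind KEDA can scale; the queue name myqueue-items and the function name QueueProcessor are assumptions for this example, and AzureWebJobsStorage is the usual storage connection setting.

    // Sketch of a queue-triggered Azure Function (in-process C# model).
    // The queue name and function name below are illustrative assumptions.
    using Microsoft.Azure.WebJobs;
    using Microsoft.Extensions.Logging;

    public static class QueueProcessor
    {
        [FunctionName("QueueProcessor")]
        public static void Run(
            [QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string queueItem,
            ILogger log)
        {
            // Each dequeued message triggers one execution of this function.
            log.LogInformation($"Processing queue message: {queueItem}");
        }
    }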
Visual Studio provides a template for writing Azure Functions. It introduces a host.json file for specifying settings for the host, including logging and Application Insights settings such as samplingSettings. Many of the host settings govern infrastructure behavior independently of the logic in the Azure Function, and judicious use of these settings promotes the health and maintainability of the Azure Function.
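As a sketch, a minimal host.json that tunes Application Insights sampling might look like the following; the threshold of 20 telemetry items per second is an illustrative value, not a recommendation.

    {
      "version": "2.0",
      "logging": {
        "applicationInsights": {
          "samplingSettings": {
            "isEnabled": true,
            "maxTelemetryItemsPerSecond": 20
          }
        }
      }
    }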
The corresponding Azure Functions project file declares the setting for AzureFunctionsVersion. An HTTP-triggered Azure Function is implemented with a method signature such as:

    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
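A complete function built on this signature, following the shape of the default Visual Studio HTTP trigger template, might look like the sketch below; the function name HttpExample and the greeting text are assumptions.

    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Extensions.Logging;
    using Newtonsoft.Json;

    public static class HttpExample
    {
        [FunctionName("HttpExample")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            // Read "name" from the query string, falling back to the JSON request body.
            string name = req.Query["name"];
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            return name != null
                ? (IActionResult)new OkObjectResult($"Hello, {name}")
                : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
        }
    }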
Sample code: https://1drv.ms/u/s!Ashlm-Nw-wnWhKYdp9QwpiEacjprGQ?e=1LD6jY