Tuesday, June 7, 2022

 

This is a continuation of a series of articles on Microsoft Azure from an operational point of view that surveys the different services from the service portfolio of the Azure public cloud. The most recent article discussed the Dataverse and solution layers. This document talks about managing the application lifecycle using Power Apps, Power Automate, and Microsoft Dataverse in the organization.

Microsoft Dataverse is a data storage and management system for the various Power Applications so that they are easy to use with Power Query. The data is organized in tables, some of which are built-in and standard across applications, while others can be added on a case-by-case basis for individual applications. These tables let applications focus on their business needs while providing a world-class, secure, cloud-based storage option for data that is easy to manage, easy to secure, accessible via Dynamics 365, enriched with metadata, logic, and validation, and supported by productivity tools. Dynamics 365 applications are well-known for enabling businesses to quickly meet their business goals and customer scenarios, and Dataverse makes it easy to use the same data across different applications. It supports incremental and bulk loads of data on both a scheduled and an on-demand basis.

Solutions are used to transport applications and components from one environment to another or to add customizations to an existing application. A solution can comprise applications, site maps, tables, processes, resources, choices, and flows. Solutions are the vehicle for application lifecycle management in Power Apps and Power Automate. There are two types of solutions (managed and unmanaged), and the lifecycle of a solution involves creating, updating, upgrading, and patching.

Managed and unmanaged solutions can co-exist at different levels within a Microsoft Dataverse environment, where they form two distinct layer levels. What the user sees as runtime behavior comes from the active customizations of an unmanaged layer, which in turn might be supported by a stack of one or more user-defined managed solutions and system solutions in the managed layer. Managed solutions can also be merged. The solution layers feature enables one to see all the solution layers for a component.

The foremost scenario for an Application Lifecycle Management strategy is one that involves creating a new project. The tasks involved include 1) determining the environments that are needed and establishing an appropriate governance model, 2) creating a solution and a publisher for that solution, 3) setting up a DevOps project with one or more pipelines to export and deploy the solution, 4) creating a build pipeline that exports the unmanaged solution as a managed solution, 5) configuring and building applications within the solution, 6) adding any additional customizations, 7) creating a deployment pipeline, and 8) granting access to the application. With these steps, it becomes easy to get started with Dataverse solutions and applications.
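As a rough illustration of step 4, the sketch below drives a solution export with the Power Platform CLI from Python. The solution name ContosoSales is hypothetical, and the exact pac flags are an assumption to be verified against the installed CLI version.

import subprocess

def export_solution(name: str, out_path: str, managed: bool) -> None:
    # Assumes pac is installed and already authenticated against the
    # development environment (for example, via "pac auth create").
    args = ["pac", "solution", "export", "--name", name, "--path", out_path]
    if managed:
        # Flag spelling is an assumption; confirm with "pac solution export --help".
        args += ["--managed", "true"]
    subprocess.run(args, check=True)

# "ContosoSales" is a hypothetical solution name.
export_solution("ContosoSales", "ContosoSales_managed.zip", managed=True)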

The next scenario targets legacy app makers and flow makers in Power Apps and Power Automate, respectively, who work in a Microsoft Dataverse environment without a Dataverse database. The end goal, in this case, is a successful migration to a managed ALM model by creating apps and flows in a Dataverse solution. Initial app migration can target the default Dataverse environment, but shipping the entities and data model requires a robust DevOps setup with multiple environments, each dedicated to the development, testing, and release of applications. This scenario requires the same steps as the creation of a new project, but the business process and environment strategy must be worked out first.

 

Monday, June 6, 2022


Continuous root cause analysis via time-series event analysis:

 

Problem statement: Given a method to collect many data points for errors in logs, can the resolution time of the next root cause be predicted?

 

Solution: There are two stages to solving this problem:

1. Stage 1 – discover the root cause and create a summary that captures it.

2. Stage 2 – use a time-series algorithm to predict the relief time.

 

Stage 1:

When the exception stack traces are collected from a batch of log entries, they can be transformed into a vector representation that uses the notable stack traces as features. We can then start with the hidden weight matrix that the neural network layer generates and use that hidden layer to determine the salience of each feature from its gradients.
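A minimal sketch of this salience computation, assuming each stack trace has already been reduced to a 0/1 feature vector over notable frames; the random weights stand in for a trained hidden layer, and all shapes here are illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_features, n_hidden = 8, 4
W1 = rng.normal(size=(n_hidden, n_features))   # hidden weight matrix (stand-in for a trained layer)
w2 = rng.normal(size=n_hidden)                 # output weights

x = np.array([1, 0, 1, 0, 0, 1, 0, 0], dtype=float)  # one stack-trace feature vector

h = np.tanh(W1 @ x)             # hidden activations
score = w2 @ h                  # scalar relevance score

# Gradient of the score with respect to x gives the salience of each feature.
salience = (w2 * (1 - h**2)) @ W1
print(np.argsort(-np.abs(salience)))  # stack-trace features ranked by salience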

 

All values fall within the [0, 1] co-occurrence probability range.

 

The embeddings are found by minimizing the quadratic form f(x) = (1/2) x^T A x - b^T x, whose minimum is attained where Ax = b, using the conjugate gradient method.

We are given an input matrix A, a vector b, a starting value x, a maximum number of iterations i-max, and an error tolerance epsilon < 1.

 

The method proceeds as follows:

set i to 0
set residual to b - Ax
set search-direction to residual
set delta-new to the dot-product of residual-transposed and residual
initialize delta-0 to delta-new
while i < i-max and delta-new > epsilon^2 * delta-0 do:
    q = dot-product(A, search-direction)
    alpha = delta-new / dot-product(search-direction-transposed, q)
    x = x + alpha * search-direction
    if i is divisible by 50:
        residual = b - Ax
    else:
        residual = residual - alpha * q
    delta-old = delta-new
    delta-new = dot-product(residual-transposed, residual)
    beta = delta-new / delta-old
    search-direction = residual + beta * search-direction
    i = i + 1

The periodic exact recomputation of the residual (every 50 iterations) cancels the floating-point error that accrues in the incremental update.
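For reference, here is a minimal runnable sketch of the same iteration in Python with NumPy, assuming A is symmetric positive-definite (the condition under which conjugate gradient converges); the 2x2 system at the end is illustrative.

import numpy as np

def conjugate_gradient(A, b, x, i_max=1000, eps=1e-8):
    i = 0
    r = b - A @ x                      # residual
    d = r.copy()                       # search direction
    delta_new = r @ r
    delta_0 = delta_new
    while i < i_max and delta_new > eps**2 * delta_0:
        q = A @ d
        alpha = delta_new / (d @ q)
        x = x + alpha * d
        if i % 50 == 0:                # periodically recompute the residual exactly
            r = b - A @ x
        else:
            r = r - alpha * q
        delta_old = delta_new
        delta_new = r @ r
        d = r + (delta_new / delta_old) * d
        i += 1
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b, x=np.zeros(2)))  # approx [0.0909, 0.6364]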

 

Root cause capture – Exception stack traces that are captured from various sources and appear in the logs can be stack hashed. The root cause can be described by a specific stack trace, its associated point in time, the duration over which it appears, and the time the fix was introduced, if known.
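A minimal sketch of stack hashing: each frame is normalized (line numbers and addresses dropped) so that recurring traces map to a stable identifier. The normalization rules and the sample frames are assumptions; real logs may need more rules.

import hashlib
import re

def stack_hash(frames: list[str]) -> str:
    normalized = []
    for f in frames:
        f = re.sub(r":\d+", "", f)            # strip line numbers
        f = re.sub(r"0x[0-9a-fA-F]+", "", f)  # strip addresses
        normalized.append(f.strip())
    digest = hashlib.sha256("\n".join(normalized).encode()).hexdigest()
    return digest[:16]

# Hypothetical frames; the hash is stable even if line numbers shift between builds.
trace = ["at OrderService.Submit(Order.cs:42)", "at Api.Post(0x7f3a)"]
print(stack_hash(trace))  # stable identifier for this root cause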

 

Stage 2: A time-series algorithm does not need any attributes other than the historical collection of relief times to be able to predict the next relief time. It looks only at the scalar values, regardless of the factors playing into the relief time of an individual incident or its root-cause attributes. The historical data is used to estimate the relief time of the incoming event, as if the relief times formed a scatter plot along the timeline. Unlike other data mining algorithms that involve additional attributes of the event, this approach uses a single auto-regressive method on the continuous data to make a short-term prediction. The regression is automatically retrained as the data accrues.
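A minimal sketch of such an auto-regressive prediction, assuming relief times are recorded in hours; the lag order of 3, the sample history, and the ordinary-least-squares fit are illustrative stand-ins for whatever trainer is actually used.

import numpy as np

def ar_predict(history, lags=3):
    y = np.asarray(history, dtype=float)
    # Each row holds an intercept plus the previous `lags` relief times.
    X = np.array([[1.0] + list(y[t - lags:t][::-1]) for t in range(lags, len(y))])
    targets = y[lags:]
    coef, *_ = np.linalg.lstsq(X, targets, rcond=None)
    # Predict the next value from the most recent lags.
    last = np.array([1.0] + list(y[-lags:][::-1]))
    return float(coef @ last)

relief_hours = [4.0, 6.5, 5.0, 7.0, 6.0, 8.5, 7.5, 9.0]  # hypothetical history
print(ar_predict(relief_hours))  # estimated relief time of the next incident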

Sunday, June 5, 2022

 Multitenancy Part 4:

This is a continuation of a series of articles on Microsoft Azure from an operational point of view that surveys the different services from the service portfolio of the Azure public cloud. The most recent article discussed architecting multitenant applications on Azure. This picks up the discussion on tenant model and lifecycle from the differentiation between logical and physical tenants. 

 

One of the key differences between logical and physical tenants is how the isolation is enforced. When multiple logical tenants share a single set of infrastructure, the solution relies on application code and a tenant identifier in the database to keep each tenant's data separate. Physical tenants have their own infrastructure, so the code running on them hardly needs to be aware that the environment is multi-tenant. Physical tenants are also referred to as deployments, super tenants, or stamps.
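A minimal sketch of logical-tenant isolation enforced in application code, where every query is scoped by a tenant identifier column; the table, column, and tenant names are illustrative.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (tenant_id TEXT, item TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("contoso", "widget"), ("fabrikam", "gadget")])

def orders_for(tenant_id: str):
    # The WHERE clause is the isolation boundary; omitting it would leak data
    # across tenants, which is why physical tenants need no such check.
    return conn.execute(
        "SELECT item FROM orders WHERE tenant_id = ?", (tenant_id,)).fetchall()

print(orders_for("contoso"))  # [('widget',)]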


Tenant isolation can run deep. Options range from a single set of shared infrastructure with separate application instances and separate databases for each tenant, to sharing some common resources while keeping other resources separate per tenant, to keeping data on entirely separate physical infrastructure. Maintaining separate resources for each tenant is a practice in the public cloud as well and often translates to separate physical infrastructure using dedicated hosts.


The level of isolation impacts aspects of the architecture such as security, cost, performance, reliability, and responsiveness to individual tenants' needs. When infrastructure is deployed that is dedicated to one tenant, the configuration of the resources can be tuned to meet that tenant's specific needs. The architecture of the deployed software also determines the level of isolation. For example, consider a three-tier solution architecture where the user interface is a shared multi-tenant web application and all the tenants access a single hostname. In this case, the application tier is a shared layer with shared message queues, while the data tier can be isolated databases, tables, or containers. Different levels of isolation can also be mixed and matched at each tier. In the public cloud, the level of isolation might depend on cost, complexity, customer requirements, and the number of resources that can be deployed before reaching quotas and limits.
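A minimal sketch of mixing isolation levels per tier, as in the three-tier example above; the tier names, resources, and tenants are all illustrative, and only the data tier resolves to a per-tenant resource.

ISOLATION = {
    "web":       {"shared": True,  "resource": "https://app.contoso.example"},
    "messaging": {"shared": True,  "resource": "queue-shared"},
    "data":      {"shared": False, "resource": "db-{tenant}"},
}

def resource_for(tier: str, tenant: str) -> str:
    entry = ISOLATION[tier]
    if entry["shared"]:
        return entry["resource"]          # one resource serves every tenant
    return entry["resource"].format(tenant=tenant)  # dedicated per-tenant resource

print(resource_for("data", "fabrikam"))  # db-fabrikam
print(resource_for("web", "fabrikam"))   # shared hostname for all tenants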

Single-tenant deployments are easy to deploy using automation that repeats the deployment of a dedicated set of infrastructure for each tenant. Solutions built using this model make use of infrastructure as code and the Resource Manager APIs.
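A minimal sketch of repeating a single-tenant deployment per tenant with the Azure CLI and an ARM template; the template file, parameter name, and tenant list are assumptions, and any infrastructure-as-code tool follows the same pattern.

import subprocess

TENANTS = ["contoso", "fabrikam"]  # hypothetical tenant list

for tenant in TENANTS:
    # Deploy one dedicated stamp per tenant from the same template.
    subprocess.run([
        "az", "deployment", "group", "create",
        "--resource-group", f"rg-{tenant}",
        "--template-file", "tenant-stamp.json",
        "--parameters", f"tenantName={tenant}",
    ], check=True)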

On the other hand, a multi-tenant deployment can be one in which all the components are shared: there is only one set of infrastructure, and every component shares it. One of the biggest advantages of this approach is that data does not have to be migrated between deployments.

 

Reference to multitenancy: https://1drv.ms/w/s!Ashlm-Nw-wnWhLMfc6pdJbQZ6XiPWA?e=aWj2Z0   


Saturday, June 4, 2022


Multitenancy Part 3

 

This is a continuation of a series of articles on Microsoft Azure from an operational point of view that surveys the different services from the service portfolio of the Azure public cloud. The most recent article discussed architecting multitenant applications on Azure. This continues to discuss tenant model and lifecycle.

 

The choice of tenancy models is very important to designing the multi-tenant architecture. There are primarily two models - the business-to-business model and the business-to-consumer model. The former requires tenant isolation for organizations, divisions, teams, and departments while the latter is about individual consumers. The business-to-consumer model must respect privacy and security for the data and the business-to-business model must respect regulatory compliance.

 

Tenants can be distinguished as logical or physical. When the scale of tenants increases, one relieving measure is to replicate the solution or some of its components to meet the increased demand. The load from a single instance may then spill over to another, or the traffic can be mapped to infrastructure based on certain criteria. In a B2C model, each user can be a separate logical tenant, and users can be mapped to different physical tenants using different deployed instances. This results in a one-to-many mapping between logical and physical tenants. The definition of the logical tenant is clearer in the B2B model: the resources for a firm are isolated from the start, so the logical and physical tenants are the same.
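A minimal sketch of the one-to-many mapping from logical to physical tenants in a B2C model, where many users (logical tenants) hash onto a small set of deployed stamps (physical tenants); the stamp names and users are illustrative.

import hashlib

STAMPS = ["stamp-westus-1", "stamp-westus-2", "stamp-eastus-1"]

def stamp_for(user_id: str) -> str:
    # Hash the logical tenant onto one of the physical deployments.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return STAMPS[h % len(STAMPS)]

for user in ["alice@example.com", "bob@example.com"]:
    print(user, "->", stamp_for(user))

A lookup table is a common alternative to plain hashing, since it allows individual tenants to be moved between stamps without remapping everyone else.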

 

One of the key differences between logical and physical tenants is how the isolation is enforced. When multiple logical tenants share a single set of infrastructure, the solution relies on application code and a tenant identifier in the database to keep each tenant's data separate. Physical tenants have their own infrastructure, so the code running on them hardly needs to be aware that the environment is multi-tenant. Physical tenants are also referred to as deployments, supertenants, or stamps.

 

Tenant isolation can run deep. Options range from a single set of shared infrastructure with separate application instances and separate databases for each tenant, to sharing some common resources while keeping other resources separate per tenant, to keeping data on entirely separate physical infrastructure. Maintaining separate resources for each tenant is a practice in the public cloud as well and often translates to separate physical infrastructure using dedicated hosts.

 

The tenant lifecycle depends on the tenant. Software-as-a-service solutions may want to honor customer requests for trials with a trial tenant. Questions about the rigor applied to trial data, the infrastructure for trial tenants, purchase options after trials, and the limits imposed on trial tenants must be answered. Onboarding is the first step of a regular tenant's lifecycle; it involves routines for allocation and initialization (which can be automated), setting up data protection, meeting compliance standards, preparing for disaster recovery, and setting up pricing options and billing models. If the customers require a pre-production environment, onboarding might be different, since expectations around availability might be relaxed.

 

Reference to multitenancy: https://1drv.ms/w/s!Ashlm-Nw-wnWhLMfc6pdJbQZ6XiPWA?e=aWj2Z0  

 

 

Friday, June 3, 2022

This is a continuation of a series of articles on Microsoft Azure from an operational point of view that surveys the different services from the service portfolio of the Azure public cloud. The most recent article discussed architecting multitenant applications on Azure.

The architectural considerations for a multi-tenant solution architecture are about the resources for the tenants: some are shared, and others are dedicated to a tenant. A multi-tenant architecture brings cost and operational efficiency, but it introduces complexities, including whether a tenant maps to a user or a group, how much isolation is required between the tenants, what pricing models the solution will offer and how they affect the requirements, what level of service to provide the tenants, how to meet the scaling demand from the tenants, any unusual or special requirements pertaining to a tenant, and how to monitor, manage, automate, scale, and govern the Azure environment in the presence of multitenancy.

 

The choice of tenancy models is very important to designing the multi-tenant architecture. There are primarily two models - the business-to-business model and the business-to-consumer model. The former requires tenant isolation for organizations, divisions, teams, and departments while the latter is about individual consumers. The business-to-consumer model must respect privacy and security for the data and the business-to-business model must respect regulatory compliance.

 

The tenant lifecycle depends on the tenant. Software-as-a-service solutions may want to honor customer requests for trials with a trial tenant. Questions about the rigor applied to trial data, the infrastructure for trial tenants, purchase options after trials, and the limits imposed on trial tenants must be answered. Onboarding is the first step of a regular tenant's lifecycle; it involves routines for allocation and initialization (which can be automated), setting up data protection, meeting compliance standards, preparing for disaster recovery, and setting up pricing options and billing models. If the customers require a pre-production environment, onboarding might be different, since expectations around availability might be relaxed.

 

Applying updates to a tenant's infrastructure and scaling might need to consider traffic patterns such as seasonal variations and changes in the level of consumption. Noisy-neighbor issues might need to be worked out when a subset of tenants scales unexpectedly and impacts the performance of other tenants. Mitigations include scaling individual tenants' infrastructure, moving tenants between deployments, and provisioning capacity in excess of demand.

 

Reference to multitenancy: https://1drv.ms/w/s!Ashlm-Nw-wnWhLMfc6pdJbQZ6XiPWA?e=aWj2Z0 

 

 

Thursday, June 2, 2022

 

This is a continuation of a series of articles on Microsoft Azure from an operational point of view that surveys the different services from the service portfolio of the Azure public cloud. The most recent article discussed Service Fabric, and this one discusses architecting multitenant applications on Azure.

Tenancy is about customers, not users. Multiple users from a single organization can form a single tenant. Examples of multi-tenant applications include business-to-business solutions, business-to-consumer solutions, and enterprise-wide platform solutions. Some guidance for building your own multi-tenant solution in Azure is discussed below.

The architectural considerations for a multi-tenant solution architecture are about the resources for the tenants: some are shared, and others are dedicated to a tenant. A multi-tenant architecture brings cost and operational efficiency, but it introduces complexities, including whether a tenant maps to a user or a group, how much isolation is required between the tenants, what pricing models the solution will offer and how they affect the requirements, what level of service to provide the tenants, how to meet the scaling demand from the tenants, any unusual or special requirements pertaining to a tenant, and how to monitor, manage, automate, scale, and govern the Azure environment in the presence of multitenancy.

The requirements from the tenants drive the architecture, so a clear understanding of those requirements helps meet them. Tenant expectations around how things should behave must be documented properly. As an example, consider building a multitenant solution sold to businesses in the financial services industry. The customers have very strict security requirements, and they need a comprehensive list of every domain name that the solution uses so they can add it to their firewall's allow list. This requirement affects the Azure services that the multi-tenant service uses and the level of isolation that must be provided between the tenants. The customers also require that the solution have a minimum level of resiliency. There may be other expectations that must be considered across the whole solution.

The choice of tenancy models is very important to designing the multi-tenant architecture. The business-to-business model differs significantly from the business-to-consumer model. The former requires tenant isolation for organizations, divisions, teams, and departments, some of which may be spread across geographical regions. A single customer might need to map to multiple tenants; for example, a customer might want to maintain two separate instances of the services for production and development environments. In the second model, each consumer can be a tenant, and grouping depends more dynamically on associations between users. For example, a music streaming service might support both individuals and their families.

 

Reference to multitenancy: https://1drv.ms/w/s!Ashlm-Nw-wnWhLMfc6pdJbQZ6XiPWA?e=aWj2Z0

Wednesday, June 1, 2022

This is a continuation of a series of articles on Microsoft Azure from an operational point of view that surveys the different services from the service portfolio of the Azure public cloud. The most recent article discussed the Dataverse and solution layers. This document talks about merging the layers.

Microsoft Dataverse is a data storage and management system for the various Power Applications so that they are easy to use with Power Query. The data is organized in tables, some of which are built-in and standard across applications, while others can be added on a case-by-case basis for individual applications. These tables let applications focus on their business needs while providing a world-class, secure, cloud-based storage option for data that is easy to manage, easy to secure, accessible via Dynamics 365, enriched with metadata, logic, and validation, and supported by productivity tools. Dynamics 365 applications are well-known for enabling businesses to quickly meet their business goals and customer scenarios, and Dataverse makes it easy to use the same data across different applications. It supports incremental and bulk loads of data on both a scheduled and an on-demand basis.

Solutions are used to transport applications and components from one environment to another or to add customizations to an existing application. A solution can comprise applications, site maps, tables, processes, resources, choices, and flows. Solutions are the vehicle for application lifecycle management in Power Apps and Power Automate. There are two types of solutions (managed and unmanaged), and the lifecycle of a solution involves creating, updating, upgrading, and patching.

Managed and unmanaged solutions can co-exist at different levels within a Microsoft Dataverse environment, where they form two distinct layer levels. What the user sees as runtime behavior comes from the active customizations of an unmanaged layer, which in turn might be supported by a stack of one or more user-defined managed solutions and system solutions in the managed layer. Managed solutions can also be merged. The solution layers feature enables one to see all the solution layers for a component.

Merge behavior is important: a solution maker should understand it when a solution is updated or when multiple solutions that affect the same component are installed. Merging applies only to model-driven application, form, and site map component types. All other components use the “top wins” behavior, in which the layer that resides at the top determines how the component works at app runtime. A top layer can be introduced by a staged pending upgrade. This behavior can be demonstrated with an example: suppose the current top layer sets a property, say a maximum length of 100, and a solution upgrade adds a layer with a maximum length of 150. In this case, the affected component will permit a maximum length of 150.
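A minimal sketch of “top wins” resolution: the effective value of a component property comes from the highest layer that defines it. The layer names mirror the max-length example above and are illustrative.

# Layers ordered bottom to top; the upgrade layer was staged on top.
layers = [
    {"name": "system base",     "maxlength": 100},
    {"name": "managed upgrade", "maxlength": 150},
]

def effective(prop: str):
    for layer in reversed(layers):   # walk from the top layer down
        if prop in layer:
            return layer[prop]
    return None

print(effective("maxlength"))  # 150 - the staged upgrade layer wins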

Update and upgrade merge behaviors are applied after the layers are stacked on top of the base solution. The layers can be merged by selecting “Apply upgrade” from the Solutions area in Power Apps, which flattens them and creates a new base solution. When the solution is prepared for distribution, the recipients may already have multiple solutions installed, so using segmented solutions to keep updates isolated is recommended.

Solution segmentation involves exporting solution updates with selected entity assets, such as fields, forms, and views, rather than entire entities with all their assets. A segmented solution is created by selecting one of several options when adding an existing entity to the solution. One option adds no components or metadata, so that only the minimal entity information is added to the solution. Another option selects components associated with the entity, such as fields, so that only those that have been added or changed are included in the solution update. Similarly, there is an option to include only the metadata associated with the entity but not the components. Lastly, there is an option to include all components and metadata associated with an entity.