Saturday, October 15, 2022

 

This section reviews some of the documentation for the AZ-305 certification.

 

1.       Multiple tenants – enable access for developers of one tenant in another 

A.      A trust relationship must be set up between the DC receiving the request and the DC in the domain of the requesting account. Forest trusts help to manage segmented AD DS infrastructures and support access to resources and other objects; in this scenario, only one-way transitive trusts are supported. Federation is a collection of domains that have established trust.

2.       How to set up single tenancy, and what operations are restricted for single-tenant auth? 

A.      This is required when the traditional approach of restricting access to domain names or IP addresses does not work for SaaS apps or for shared domain names. With tenant restrictions from Azure AD and SSO for the applications used, access can be controlled.
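
Tenant restrictions work by routing sign-in traffic through an outbound proxy that inserts two HTTP headers on requests to the Azure AD login endpoints. A minimal sketch of the shape of those headers follows; the tenant list and directory ID are hypothetical placeholders, and in practice the proxy, not the client, injects them:

# Headers a TLS-inspecting proxy injects for tenant restrictions (placeholder values).
$headers = @{
    "Restrict-Access-To-Tenants" = "contoso.onmicrosoft.com"
    "Restrict-Access-Context"    = "00000000-0000-0000-0000-000000000000"   # directory ID of the restricting tenant
}
# The proxy adds these to requests bound for login.microsoftonline.com,
# login.microsoft.com, and login.windows.net; this call only illustrates the shape.
Invoke-WebRequest -Uri "https://login.microsoftonline.com/common/oauth2/authorize" -Headers $headers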

3.       Identity protection versus monitoring, specifically services and purposes 

A.      Both Security Center and Azure Sentinel can be used for security, but the former helps to collect, prevent, and detect via analytics, while the latter helps to detect via hunting, investigate via incidents, and respond via automation. 

4.       What identity protection will protect from bot attacks? 

A.      Azure AD Identity Protection protects from bot attacks. There are three key reports that administrators use for investigations in Identity Protection:

a.       Risky users

b.       Risky sign-ins

c.       Risk detections
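
These reports can also be queried programmatically through Microsoft Graph. A minimal sketch, assuming an access token with the Identity Protection read permissions has already been acquired (acquisition not shown):

# Query the Identity Protection report feeds via Microsoft Graph.
# $token is assumed to carry IdentityRiskyUser.Read.All / IdentityRiskEvent.Read.All.
$headers = @{ Authorization = "Bearer $token" }
$riskyUsers     = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers" -Headers $headers
$riskDetections = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/identityProtection/riskDetections" -Headers $headers
$riskyUsers.value | Format-Table userDisplayName, riskLevel, riskState
# Risky sign-ins surface through the sign-in logs (auditLogs/signIns) with risk properties.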

5.       On-premises integration with Azure AD so that on-premises experience is not broken 

There are two ways to do this:

1.       Use Azure AD to create an Active Directory domain in the cloud and connect it to the on-premises Active Directory domain. Azure AD Connect integrates the on-premises directories with Azure AD.

2.       Extend the existing on-premises Active Directory infrastructure to Azure, by deploying a VM in Azure that runs AD DS as a Domain Controller. This architecture is more common when the on-premises network and the Azure virtual network (VNet) are connected by a VPN or ExpressRoute connection. Several variations are possible:

a.       a domain is created in Azure, and it is joined to the on-premises AD forest.

b.       a separate forest is created in Azure that is trusted by domains in the on-premises forest.

c.       an Active Directory Federation Services (AD FS) deployment is replicated to Azure.

 

6.       Order of setting up service resources and tasks for AD integration of on-premises.

A.      This includes Active Directory, Active Directory Domain Services, AD Federation Services.

7.       Conditional access policies versus azure policies – when to use what? 

A.      Azure AD Conditional Access can help author conditions, such as turning off password authentication for legacy applications, based on date/time or other such criteria. 

B.      Azure Policy is a default-allow and explicit-deny system focused on resource properties, both during deployment and for already existing resources. It supports cloud governance with compliance.  
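
In contrast to Conditional Access, which is configured in Azure AD, an Azure Policy assignment targets a management scope. A minimal sketch assigning the built-in "Allowed locations" policy to a subscription; the subscription ID and locations are placeholders, and the property path for the display name may differ across Az.Resources versions:

# Assign the built-in "Allowed locations" policy at subscription scope.
$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq "Allowed locations" }
New-AzPolicyAssignment -Name "allowed-locations" `
    -Scope "/subscriptions/<subscription-id>" `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfAllowedLocations = @("eastus2", "centralus") }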

8.       Can a blueprint be used to force hierarchy of resources specific to region? 

A.      Azure Blueprints can be used to assign policies governing how resource templates are deployed, which can affect multiple resources; it helps adhere to an organization’s standards, patterns, and best practices. A blueprint can consist of one or more artifacts, including policy assignments, role assignments, Resource Manager templates, and resource groups.  

9.        Limits of resources and subscriptions? Can a tenant have more than one subscription? 

A.      A tenant can have more than one subscription, but each subscription trusts a single Azure AD tenant. When we run a single instance of a resource, the service limits, subscription limits, and quotas apply. When these limits are encountered, the shared resources must be scaled out. 

 

10.   Do we need availability zone redundancy or geo-redundancy? 

A.      There are tradeoffs based on cost (availability zones are free, additional regions are not) and overhead (deploying to additional regions implies additional instances that may need to be monitored). Read-only separation is possible only in the case of geo-redundancy.

11.   Azure SQL managed instances – appropriateness over elastic pools and higher compute 

A.      Each elastic pool is contained within a single logical server, and database names must be unique within a server, so multiple geo-secondaries of the same database cannot share the same pool.

12.   How many databases per tenant?  

A.      Each tenant has a tenant database dedicated to storing the company’s business data. The knowledge about the shared application is then stored in a dedicated application database.

13.   How to perform migration of applications from on-premises to Azure – choose appropriate database instance, service and SKU 

A.      The four phases of migration include phase 1 – discover and scope, phase 2 – classify and plan, phase 3 – plan migration and testing, and phase 4 – manage and gain insight.

B.      The first phase is the process of creating an inventory of all applications in the ecosystem. They fall into three categories: those that can be migrated, those that cannot, and those marked for deprecation.

C.       The second phase involves detailing the apps within the categories with criticality, usage, and lifespan.  It prioritizes the application for migration and plans a pilot. 

D.      The third phase involves planning migration and testing by communicating changes and migrating applications and transition users.

E.       The fourth phase involves managing and gaining insight by managing end-user and admin experiences and gaining insight into application and user behavior.

F.       These four phases transition the application experience from old to new smoothly. Migrating from an earlier version of Windows to a later one, or switching from one SKU to another, is possible.

14.   Will the elastic pool scale or is it better to go with higher compute for certain workloads? 

A.      An elastic pool must have sufficient resources to accommodate a database. Elastic pools share compute resources between several databases on the same server, which helps achieve performance elasticity for each database. The sharing of provisioned resources across databases reduces their unit costs. There are built-in protections against noisy-neighbor problems. The architectural approach must meet the levels of scale expected from the system.

B.      Higher compute boosts the performance of a single database.
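
A minimal sketch of creating a pool and placing a database in it with Azure PowerShell; all resource names and sizes below are hypothetical:

# Create a Standard elastic pool and move an existing database into it.
New-AzSqlElasticPool -ResourceGroupName "rg" -ServerName "sqlserver01" -ElasticPoolName "tenantpool" `
    -Edition "Standard" -Dtu 400 -DatabaseDtuMin 0 -DatabaseDtuMax 100
# Setting -ElasticPoolName on the database puts it on the pool's shared compute.
Set-AzSqlDatabase -ResourceGroupName "rg" -ServerName "sqlserver01" -DatabaseName "tenantdb" `
    -ElasticPoolName "tenantpool"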

15.   How do we setup geo-recovery, geo-replication, and geo-failover for restricted MTTR and RTO? 

A.      There is usually a delay between when a backup is taken and when it is geo-restored, and the restored database can be up to one hour behind the original database. Geo-restore relies on automatically created geo-replicated backups with a recovery point objective (RPO) of up to 1 hour and an estimated recovery time objective (RTO) of up to 12 hours. It does not guarantee that the target region will have the capacity to restore the database after a regional outage, because a sharp increase in demand is likely, so it is mostly used for small databases. Business continuity for larger databases is ensured via auto-failover groups, which have a much lower RPO and RTO, and their capacity is guaranteed.
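
A minimal sketch of creating an auto-failover group and adding a database to it; the server, group, and database names are hypothetical:

# Create the failover group between the primary and the partner server.
New-AzSqlDatabaseFailoverGroup -ResourceGroupName "rg" -ServerName "sql-primary" `
    -PartnerServerName "sql-secondary" -FailoverGroupName "fg-app" `
    -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
# Add the database; it starts geo-replicating to the partner server.
$db = Get-AzSqlDatabase -ResourceGroupName "rg" -ServerName "sql-primary" -DatabaseName "appdb"
Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "rg" -ServerName "sql-primary" `
    -FailoverGroupName "fg-app" -Database $db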

16.   How to proceed with database migration from on-premises to cloud? 

A.      Geo-replication can also be used for database migration with minimum downtime, and for application upgrades, by creating an extra secondary as a fail-back copy. An end-to-end recovery requires recovery of all components and dependent services; all components must be resilient to the same failures and become available within the recovery time objective of the application. Designing cloud solutions for disaster recovery includes scenarios that use two Azure regions for business continuity with minimal downtime, that use regions with maximum data preservation, or that replicate an application to different geographies to follow demand.
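
A minimal sketch of seeding a readable geo-secondary with active geo-replication, which can later be promoted for a migration or kept as a fail-back copy; all names are hypothetical:

# Create a readable geo-secondary of appdb on a server in another region.
New-AzSqlDatabaseSecondary -ResourceGroupName "rg" -ServerName "sql-primary" -DatabaseName "appdb" `
    -PartnerResourceGroupName "rg-dr" -PartnerServerName "sql-dr" -AllowConnections "All"
# To complete a migration, promote the secondary:
Set-AzSqlDatabaseSecondary -ResourceGroupName "rg-dr" -ServerName "sql-dr" -DatabaseName "appdb" `
    -PartnerResourceGroupName "rg" -Failover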

17.   How can virtual networks help with securing tenants and connecting to on-premises?

A.      Virtual networks allow name resolution to be set up. The name resolution to an IP address depends on whether there is a single instance or many instances of the multitenant application. For example, a CNAME for the custom domain of a tenant might point to a multi-part subdomain of the multitenant solution provider. Since this provider might want to set up proper routing to multiple instances, it might have a CNAME record for the subdomain of each individual instance to route to that instance, and an A record for that specific instance pointing to its IP address. This chain of records resolves requests for the custom domain to the IP address of the instance among the multiple instances deployed by the provider. Virtual networks also extend to on-premises.
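
A minimal sketch of that record chain using the Azure DNS cmdlets; the zone, record names, and IP address are hypothetical:

# tenant1.provider.com -> instance1.provider.com (CNAME); instance1.provider.com -> IP (A record).
New-AzDnsRecordSet -ZoneName "provider.com" -ResourceGroupName "rg" -Name "tenant1" `
    -RecordType CNAME -Ttl 3600 -DnsRecords (New-AzDnsRecordConfig -Cname "instance1.provider.com")
New-AzDnsRecordSet -ZoneName "provider.com" -ResourceGroupName "rg" -Name "instance1" `
    -RecordType A -Ttl 3600 -DnsRecords (New-AzDnsRecordConfig -Ipv4Address "203.0.113.10")
# The tenant's own zone would then hold: app.tenantcustomdomain.com CNAME tenant1.provider.com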

18.   What is the order of connecting a service instance privately to the enterprise application? 

A.      Network features such as private endpoints and disabled public network access can greatly reduce the attack surface of an organization’s data platform. The simplest solution is to host a jumpbox on the virtual network of the data management landing zone to connect to the data services through private endpoints. Azure Bastion could be a more secure alternative; it connects to a target VM subnet through an NSG.
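
A minimal sketch of wiring a private endpoint for a SQL server into a subnet; the resource names and IDs are hypothetical:

# Build the connection to the target resource, then the endpoint in the subnet.
$vnet   = Get-AzVirtualNetwork -ResourceGroupName "rg" -Name "vnet-data"
$subnet = $vnet.Subnets | Where-Object Name -eq "snet-privateendpoints"
$conn   = New-AzPrivateLinkServiceConnection -Name "sql-plsc" `
    -PrivateLinkServiceId "/subscriptions/<sub-id>/resourceGroups/rg/providers/Microsoft.Sql/servers/sql-primary" `
    -GroupId "sqlServer"
New-AzPrivateEndpoint -ResourceGroupName "rg" -Name "pe-sql" -Location "eastus2" `
    -Subnet $subnet -PrivateLinkServiceConnection $conn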

19.   How to expose nested virtual network access to the internet? Is there a gateway involved? 

A.      Network Watcher can be used to view the topology of an Azure virtual network and to monitor Azure VPN gateways. The Get-AzVirtualNetworkGatewayConnection PowerShell cmdlet (Get-AzureRmVirtualNetworkGatewayConnection in the older AzureRM module) can be used to retrieve the connection details. If two virtual networks are linked, one of them must have a gateway to the internet.

20.   How to use a load balancer with the virtual network or for access to an application? 

A.      An example deployment requires a virtual network interface for each VM, an internet-facing load balancer, two load-balancing rules, an availability set, and, say, two VMs.
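
A minimal sketch of the load balancer pieces of such a deployment; the names, ports, and region are hypothetical:

$pip   = New-AzPublicIpAddress -ResourceGroupName "rg" -Name "lb-pip" -Location "eastus2" `
    -Sku "Standard" -AllocationMethod "Static"
$fe    = New-AzLoadBalancerFrontendIpConfig -Name "frontend" -PublicIpAddress $pip
$pool  = New-AzLoadBalancerBackendAddressPoolConfig -Name "backendpool"
$probe = New-AzLoadBalancerProbeConfig -Name "http-probe" -Protocol Tcp -Port 80 `
    -IntervalInSeconds 15 -ProbeCount 2
$rule  = New-AzLoadBalancerRuleConfig -Name "http-rule" -Protocol Tcp -FrontendPort 80 -BackendPort 80 `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe
New-AzLoadBalancer -ResourceGroupName "rg" -Name "app-lb" -Location "eastus2" -Sku "Standard" `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe -LoadBalancingRule $rule
# Each VM's NIC is then added to $pool, and the VMs are placed in one availability set.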

21.   When to use VMSS for certain migration scenarios? Do we run into specific scaling limits for peak load? 

A.      Scale sets support up to 1,000 VM instances for standard marketplace images and custom images through the Azure Compute Gallery. If a scale set is created using a managed image, the limit is 600 VM instances. VMSS makes it easy to create and manage VM instances, provides high availability and application resiliency, and allows applications to be scaled automatically as resource demand changes.
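
A minimal sketch of creating a scale set with the simplified parameter set, which builds the VNet, public IP, and load balancer implicitly; the names and instance count are hypothetical:

New-AzVmss -ResourceGroupName "rg" -VMScaleSetName "web-vmss" -Location "eastus2" `
    -VirtualNetworkName "vnet-web" -SubnetName "default" -PublicIpAddressName "vmss-pip" `
    -LoadBalancerName "vmss-lb" -UpgradePolicyMode "Automatic" -InstanceCount 5 `
    -Credential (Get-Credential)   # prompts for the admin username/password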

22.   When to use VMs instead of VMSS?  Will it affect availability across regions? Can the VMSS be spread across regions? 

A.      VMs and VMSS are bound to a region; a scale set cannot be spread across regions, although it can span availability zones within its region. A regional scale set uses placement groups, which act as an implicit availability set with five fault domains and five update domains. Scale sets of more than 100 VMs span multiple placement groups.

23.   Will the VMSS require private endpoints when enterprise services are hosted? 

A.      Private endpoints can be created for a service on a virtual network; VMSS only deploys the compute.

24.   What is the minimum number of instances (2 or 4) when paired regions are involved in a deployment scenario? 

A.      Resources double for paired regions. The minimum number for one region can be taken as one of each resource.

25.   How many logging and monitoring namespaces for multitenant applications? 

A.      Only one, shared across all the tenants of the multitenant application.

 

26.   What cloud services will be used for collecting and analyzing IoT traffic from edges? 

A.      Azure IoT Hub connects, monitors, and controls billions of IoT assets. Azure Time Series Insights can help to explore and gain insights from time-series IoT data in real time.

B.      Cosmos DB and Function Apps can be used for custom processing. Azure Event Hubs can receive and process millions of events per second for stream processing.
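
A minimal sketch of standing up the ingestion front door; the hub name, SKU, and region are hypothetical:

# IoT Hub is the ingestion front door for device telemetry.
New-AzIotHub -ResourceGroupName "rg" -Name "edge-iothub" -SkuName "S1" -Units 1 -Location "eastus2"
# Downstream analytics (Time Series Insights, Event Hubs, Functions) attach to the hub's
# built-in Event Hub-compatible endpoint.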

 

27.   How will we scale resources for edge traffic? What databases are best suited for certain data? 

A.      Time-series data can be analyzed with Azure Time Series Insights.

B.      Streaming data can be processed with Azure Event Hubs and Function Apps.

 

28.   Will a time-series database or a Cosmos document store be preferred for a certain application and its workload? 

A.      IoT traffic is best collected by Azure Event Hubs and analyzed via Time Series Insights. A document store provides many capabilities for documents, including SQL queries. It is also general purpose and scales quite well. It can be deployed with separation of read-only and read-write instances.

 

29.   What will be the order of services and namespace creations for creating a reporting dashboard for a specific purpose? 

A.      A data ingestion service, a data collection store, and a reporting stack in that order. Variations depend on the type of data and analysis.

 

30.   When is a container registry prepared and does it need access to the internet and public registries? 

A.      If a registry is accessed over the internet, confirm that it allows public network access from the client. By default, a registry instance allows access to its public endpoints from all networks, but access can be limited to selected networks or IP addresses.

 

31.   Will the container instances be preferred to azure functions? when is the latter better suited? 

A.      The function is the unit of work, whereas in a container instance, the entire container is the unit of work. So Azure Functions start and end based on event triggers, whereas the microservices in containers run all the time.
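
For example, PowerShell is a supported Functions runtime, and an HTTP-triggered function is just a run.ps1 whose lifetime is a single invocation. A minimal sketch, with the greeting logic purely illustrative:

# run.ps1 for an HTTP-triggered PowerShell Azure Function: it starts on the trigger,
# handles one request, and ends.
using namespace System.Net
param($Request, $TriggerMetadata)

$name = $Request.Query.Name
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = "Hello, $name"
})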

 

32.   What are the scaling limits for either of them or which is better suited for hosting APIs?

A.      By virtue of the triggering functionality, functions suffer from cold start for HTTP invocations, although they scale very well with the volume of IoT traffic. A container app is better suited to hosting APIs.

Friday, October 14, 2022

 

This article talks about the organization of data, particularly keys and values.

 

Configuration data is stored as key-values. They are a simple and flexible representation of application settings. Keys serve as identifiers to retrieve corresponding values.

Application configuration in a multitenant solution, particularly in B2B systems, applies to multiple accounts, usually several of them, running on the same system. These settings are not secrets and are not sensitive. They do not cover data pertaining to individual users, such as user profiles, which can number in the hundreds or thousands. These configurations are also edited by multiple teams: not just the owning team or its development team but also technical support and other staff members.

A classic example of using a configuration key is to set a default language for an enterprise account. All the users of that account will see this language when they log in.

When this configuration data is maintained in a table, there are rows corresponding to the default language for each enterprise account that can be queried with SQL. The configuration store becomes a database in this case. The drawbacks to this approach are that 1) an audit solution needs to be slapped onto the database, otherwise direct edits quickly become unmanageable, 2) rollback is more difficult than if it were in files, 3) it is not easy for everyone to see who made the last change, 4) it doesn’t support hierarchy for an account, such as departments, and 5) the table proliferates for as many environments as there are.

The most common approach to storing configuration keys and values is one that facilitates hierarchy. This is best done in a folder/file layout that permits visibility into who changed what, along with authenticated access and sharing across all environments, which points to a source control system such as git. When the configuration is checked into source control, some best practices continue to apply specifically to configuration key-values.

A best practice involves organizing keys in hierarchical namespaces by using a character delimiter. A convention is not mandated for multiple tenants, but it helps. Keys, regardless of their format, must be treated as whole units; when parsing is avoided, it is easier not to break any usages. Data used within application frameworks might dictate specific naming schemes for key-values. A combined size limit of 10 KB usually applies to a key-value. 
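
For example, hierarchical key names might look like the following, with ':' as the delimiter; all of the names are hypothetical, and the third key shows a tenant-scoped override layered above the shared default:

AppName:Service1:ApiEndpoint
AppName:Service1:RetryCount
Tenant1:AppName:Service1:ApiEndpoint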

Key namespaces must be easier to read with proper use of delimiters to denote hierarchical information. They must also be easier to manage. A key-name hierarchy must represent logical groups of configuration data. They should be easy to query using pattern matching. 

When there is the luxury of using a dedicated configuration service, key-values could optionally have a label attribute. Labels are used to differentiate key-values with the same key. No labels are associated initially, and key-values can be referenced without a label.  A common use of labels is to denote environments for the same key. Labels can also be used to create versions. It is possible to jump forward or fall back between keys using versions. Values depend on content-type with Unicode as the most popular form.  MIME types are also applicable for feature flags, Key Vault references, and JSON key-values. 

Key-values can be soft-deleted, which means that they can be recovered. Soft delete acts as a safeguard for scenarios including the case when a deleted App Configuration store must be recovered within the retention period and the case when an App Configuration store is permanently deleted. A soft-deleted store is retained for a short time known as the retention period; when it elapses, the store is deleted permanently. Key-values can also be purged before the retention period expires. Permission to read and purge deleted stores is granted to the owner and contributor roles by default. 

A JSON content type is preferable over other formats for key-values because it provides simpler data management, enhanced data export, and native support in the App Configuration provider. If content is directly edited in file systems, YAML might be terser. When configuration data changes, Event Grid can be used to receive data-change notifications. These events can trigger webhooks, Azure Functions, Azure Storage queues, or any other event handler. Typically, a resource group and a message endpoint are created to subscribe to the topic.
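
A minimal sketch of subscribing a webhook endpoint to an App Configuration store's events; the subscription ID, store name, and endpoint URL are hypothetical placeholders:

# Route App Configuration data-change events to a webhook handler.
$storeId = "/subscriptions/<sub-id>/resourceGroups/rg/providers/Microsoft.AppConfiguration/configurationStores/appcfg01"
New-AzEventGridSubscription -ResourceId $storeId -EventSubscriptionName "config-changes" `
    -Endpoint "https://example.com/api/config-updates"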

Thursday, October 13, 2022

The previous article talked about workflows and multitenant systems. This article talks about innovations in cloud-based multitenant systems, or solutions for short, which include those for industry-specific solutions, authoritative marketplace offerings, fleet deployment of agents, and solution-based automation.

In this section, we focus on transitions specifically. When control passes from the tenant workload to the multitenant infrastructure and back, there is an opportunity to add routines that not only safeguard the state of the caller but also improve the statistics and bookkeeping within the solution. It is even possible to introduce a tag or inject color into the data stream so that its propagation throughout the cloud can be made visible. This improves forensics as well as the detection of underutilized resources, which supports advice toward application optimization.  

Similarly adding headers before and after data segments from specific callers enables the study of those data manipulations by all parties during its lifetime. This study could include stack captures that are authoritative and comprehensive. 

A lot of information can be obtained when specific bookkeeping is added to sequences or patterns of usages or by specific callers. Since this additional bookkeeping might introduce regressions in performance optimizations for the application, it becomes important to turn it on for as granular a session as possible and for the duration specified by the administrator. In this regard, feature flags, variables, and dynamic behavior from the code will be helpful in the isolation of the control and data path under investigation.  

Finally, system performance and behavior capture has traditionally been curated for manual inspection. With the advent of AI and the popularity of data mining techniques, this machine data could automate analysis that draws insights and makes recommendations to the application authors. This strategy could involve cross-application comparisons and associations, historical trends, and collaborative filtering. Some common scenarios are described here.

Conventional software engineering practice involves the use of profiling as a means of studying avenues for application optimizations. A multitenant solution hosted in the public cloud is uniquely positioned toward this goal. Not only does the public cloud have complete visibility into the utilization of cloud resources and profiling, but it can also draw insights into application performance with the help of models that weren’t possible earlier on-premises. The cloud-based solution can bring a significant number of resources to bear for short periods of time on bursts of analysis activity, so it remains unparalleled in elasticity. By incorporating the control and feedback loop directly within the cloud-based solution on behalf of the tenants and their applications, the solution can have a better impact in shaping the tenant resources, and the workloads they host, for the future. Finally, the public cloud provides an excellent paradigm for multitenant solutions to emulate as infrastructure providers for their tenants.

Wednesday, October 12, 2022

 Workflows:

This article focuses on some of the best practices for working with workflows that deploy services.  The tenets are:

1.       Reusability – many of the activities from the library of activities for one workflow can and will be reused for another. Very few workflows might have differences in doing tasks that were not covered by the global collection of activities. There should not be any difference between an activity that appears in bootstrapping and its invocation during redeployment/rehosting in the new environment; only the parameter values will change.

2.       Dependencies – many of the dependencies will be implicit as they originate from system components and services information. A workflow might additionally specify dependencies via the standard way in which workflows indicate dependencies. These will be on a case-by-case basis for tenants, since they add overhead to other services, many of which are standalone. Implicit dependencies can be articulated in the format specified by the involved components.

3.       Splitting – Workflows are written for on-demand invocation from the web interface or by the system, so there might be more than one for a specific deployment scenario. It is best to include both the bootstrapping and the redeploy in the main workflow for the specific scenario, but they will be mutually exclusive during their respective phases and remain idempotent.

4.       Idempotency – All workflow steps and activities should be idempotent (see the sketch after this list). If there are conditionals involved, they must be part of activities. The signaling and receiving of notifications of dependent workflows, if any, must be specifically called out.

5.       Bootstrapping – This phase is common to many services and usually requires at least a cluster or set of servers to be made ready, but there might be activities that require the service stamp to be deployed even if it is not configured, along with necessary activities to do one-time preparation such as getting secrets. Until the VIPs are ready, the redeployment cannot be kicked off.  Bootstrapping might involve preparations for both primary and secondary where applicable.

6.       Redeployment or rehosting – This phase involves configuration, since the bootstrapping is usually for a stamp and this stage converts it into a deployment for a service. Since it involves reconfiguration, it can be for both primary and secondary and is typically done inside the new cloud. It is best to parameterize as much as possible.

7.       Naming convention – Though workflows can have any names inside the package that the owning teams upload, it is best to follow a convention for the specific scenario of one workflow calling another. Standalone single workflows do not have this problem. Even in the case when there are many workflows, a prefix/suffix might be helpful. This applies to both workflows and activities.

8.       System workflow – Requiring separate workflows for bootstrap and redeployment via a system-defined workflow, to allow the system to inject system-defined activities between bootstrap and redeploy, is a nice-to-have, but the less intrusion into service deployment the better. This calls on the services to do their own tracking by passing parameter values between workflows and activities. A standard need not be specified for this, and it can be left to the discretion of the services.

The above list is not intended to be complete but focuses on the practices that have worked well.
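
As referenced in the idempotency tenet above, a minimal sketch of an idempotent provisioning step in PowerShell; the resource group name and region are hypothetical:

# Idempotent step: create the resource group only if it does not already exist,
# so re-running the workflow after a partial failure is safe.
$rgName = "svc-stamp-rg"
if (-not (Get-AzResourceGroup -Name $rgName -ErrorAction SilentlyContinue)) {
    New-AzResourceGroup -Name $rgName -Location "eastus2"
}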

Tuesday, October 11, 2022

 

Considerations during migration of Active Directory Domain Services objects across forests via a tool or cloud service for Active Directory migration 

Upgrade and migration are two distinct but popular and mainstream operations for working with Active Directory objects and Organizational Units. Migration is usually to a new forest from say domain A to domain B. There are many considerations to migration. 

1.       Migration is about users not machines. It must be focused on users and groups. Machines come along with the users. 

2.       It is not easy to move all the things pertaining to a user all at once. This must be done one after another.  

3.       When organizations want to migrate, they envision something, but what turns out at the end is usually different because, when the migration is long, the landscape of objects changes due to such things as acquisitions and mergers.  

4.       IT is wide and complex, spread out across time zones and geographies, so migration must scale to many objects. 

5.       Migration patterns always involve scaling up and scaling down as needed. The drawing board for migration planning might project a best-case scenario, one where the pilot takes a constant time to build and the business has no ups or downs, but this is never uniform. There may be early adopters; if the pilot goes as planned, there might be blackouts, followed by intense sprints, and then dwindling in terms of stragglers. These varying migration patterns can all be accommodated by self-service, which is critical to enabling many customers. 

6.       It is best to require users to self-serve themselves because their situation might vary from one to another.  

7.       There must not be any damage caused by migration. Before and after the migration, users must remain working, and especially after migration, there must not be any more fixes to make. 

8.       Prerequisite checking is important towards this purpose. Flights and dry runs are part of the migration process. The migration process is successful only when it has been carefully planned out. 

Expectations must be tempered for migrated Active Directory objects. The security identifier for a user, for example, is not going to be the same: it is generated by a domain controller, and a new domain has its own controller. The process of issuing a security identifier involves a Relative Identifier (RID) master, which holds a Flexible Single Master Operations role. This is a domain-scoped role. Each RID master can allocate active and standby relative identifiers. The RID master can move multiple objects across domains within the same forest. Since it is a single master, when it is down, no new objects can be created. This might seem severe, but in established organizations the creation of new objects is low, so the brief downtime can be tolerated. If the role is seized and the original domain controller is brought back online, duplicate RIDs may be introduced. 

The following PowerShell commands can help: 

Get-ADForest <forest> | Format-Table SchemaMaster, DomainNamingMaster 

Get-ADDomain <domain> | Format-Table PDCEmulator, RIDMaster, InfrastructureMaster 

(The schema master and domain naming master are forest-scoped and come from Get-ADForest; the PDC emulator, RID master, and infrastructure master are domain-scoped and come from Get-ADDomain.)

If the DC that the move targets is not the RID master of the source domain, the error is:  
move-adobject : The requested operation could not be performed because the directory service is not the master for that type of operation
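
A minimal sketch of a cross-domain move directed at the right server, which avoids that error; the distinguished names and server name are hypothetical:

# Cross-domain moves should be executed against the RID master of the source domain.
Move-ADObject -Identity "CN=Jane Doe,OU=Users,DC=a,DC=contoso,DC=com" `
    -TargetPath "OU=Users,DC=b,DC=contoso,DC=com" `
    -TargetServer "dc-b.b.contoso.com"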

Monday, October 10, 2022

 

Support and sustained engineering:  

The previous articles talked about Licensing and purchasing models. This article talks about support and sustained engineering.

Sustaining engineering for multitenant applications is about maintenance after release, hence ‘sustaining’, in the way tenants and customers use these applications and provide feedback with their usage. The difference between sustaining engineering for a software product and for a multitenant application is one of on-premises deployment versus software-as-a-service. Nevertheless, the company that makes the software product is focused on innovations and improved value additions that come in waves of software releases to tenants. While many are seamlessly upgraded to the newer versions since they have no ownership of the infrastructure, solution providers cannot always guarantee backward compatibility, and the resources they provision might themselves not be supported in the same way as before. These and other external dependencies are often addressed with releases, which are versioned to indicate which is older and which is newer. The older versions are upgraded to the newer on existing systems, or the newer versions are installed on new systems. As the software maker focuses on the next release, sustaining engineers focus on maintaining the existing released versions for specific tenant usages.   

The typical rule of thumb for how many versions to maintain is usually determined based on the usages by customers. Some companies maintain all their application versions as far back as the earliest if there are customers who have purchased them and want to actively use them. Others choose to discontinue the maintenance on select older versions so long as both the customers and the company have an agreed exit strategy. Usually, the customers may be eased into newer versions. There are several factors that play out into the extent of maintenance engaged by a software maker such as revenue, customer base, market segment, costs, resources, media, etc. and it is not uncommon to find two or three versions being maintained.  Sustaining is all about this art and science of maintaining released software versions and often engages with customers throughout their usages. It is interesting to note that customers can run into issues of their own accord with any of these versions and not just when the software maker has put out a release the customer wants to use. That is why software sales and sustaining are both ongoing commitments for a software maker while being fundamentally different. 

Nowhere in the industry has there been a better service-level agreement articulation than the warranty and support that come with the application, both for the multitenant application and for those applications that are sold via the application store. This is not just legal language. It is one where the software maker offers a tiered approach to what the customer has paid for and is required to pay for.  

On one end of the spectrum, early multitenant applications have long held on to a difficult bargain for the customer where they were required to pay for the updates and upgrades to their purchases so that their operations could continue without outages. Large commercial multitenant applications even had a wake-up call from the industry to say that this simply cannot go on and there must be resources pitched in to improve the efficiency and experience around the engagement. 

On the other end of the spectrum, cloud services and outsourced business processes have largely muted the discussions on software maintenance, with most error-data gathering and correction happening independently of the business’s concerns. Even the billing has changed to being all-inclusive in the pay-as-you-go approach, with the clouds taking over the total cost of ownership and leaving merely the application optimizations to the businesses. 

Somewhere in between, the industry is required to balance and invest in such agreements across the applications that it uses. The maintenance plan and support are drawn out by the multitenant solution providers to best suit their customers and internal schedules. 

 

 

Sunday, October 9, 2022

 Licensing:

The previous articles talked about Licensing with a multitenant application. This article continues to discuss a few more aspects.

The lifecycle of group-based licenses can be managed in Azure Active Directory. This is called entitlement management. Using groups to manage licenses for applications helps to configure periodic access reviews and allows other employees to request membership in the group.

For example, an access package can be created to allow employees to gain access to Office licenses such that group members can be reviewed annually, and new employees can request licenses with their manager's approval.

Azure AD entitlement management itself requires an Azure AD Premium P2 license or an Enterprise Mobility + Security (EMS) E5 license.

Creating the access package involves the following steps: 1) the basics for the access package, such as name, description, and catalog type, must be specified; 2) the resources for the access package must be specified as groups and teams with roles as members; 3) the requests for the access package must be configured to include approvals and their manner; 4) the requestor information must be collected; 5) the lifecycle for the access package must be configured; and 6) finally, the access package must be created and reviewed.

Users with individual licensing can be migrated to use groups. There is a caveat here: a situation where users temporarily lose their currently assigned licenses during migration must be avoided, and any process that may result in the removal of licenses should similarly be avoided. The recommended migration process involves 1) using existing automation to manage license assignment and removal for users, 2) creating a new licensing group and making sure all the required users are added as members, 3) assigning the required licenses to those groups, 4) applying the licenses to all users in those groups, and 5) checking that no license assignments failed. License assignment errors can be found by finding users in an error state in a group.
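
Group license assignment itself can be scripted through Microsoft Graph's assignLicense action. A minimal sketch, assuming a token with Group.ReadWrite.All has already been acquired and with the SKU and group GUIDs as placeholders:

# Assign a license SKU to a group; members inherit it via group-based licensing.
$headers = @{ Authorization = "Bearer $token" }
$body = @{
    addLicenses    = @(@{ skuId = "<sku-guid>" })
    removeLicenses = @()
} | ConvertTo-Json -Depth 4
Invoke-RestMethod -Method Post -Headers $headers -ContentType "application/json" -Body $body `
    -Uri "https://graph.microsoft.com/v1.0/groups/<group-id>/assignLicense"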

Common errors encountered with Licensing involve the following:

1) a situation where there are not enough licenses – this can be mitigated by purchasing more licenses for the product or freeing up unused licenses from other users or groups. Available licenses can be viewed.

2) a situation where there are conflicting service plans. Some service plans are configured in a way that they can’t be assigned to the same user as another related service plan. This can be resolved by disabling one of the plans. 

3) a situation where other products depend on this license. A product might have a service plan that requires another service plan in another product to function. This can be mitigated by making sure that the required plan is still assigned to users through some other method or that dependent services are disabled for those users.

4) a situation where the usage location is not allowed. Before a license can be assigned to a user, the usage location property must be specified for the user. When this is violated, an error occurs. This can be resolved by removing users from unsupported locations from the license group.

5) a situation where the proxy addresses are duplicated. When users in the organization have the same proxy address specified twice and group-based licensing tries to assign a license to such a user, it fails. This error must be solved on the user side, and the license processing must be forced on the group after the remediation.

6) a situation where the Azure AD mail and Proxy Addresses attribute changes. Some proxy address calculations can trigger attribute changes. These must be investigated on a case-by-case basis.

7) a situation where a concurrency exception occurs in the audit logs. This comes from a concurrent license assignment of the same license to a user. Retrying the process will resolve this issue and there will not be any action required from the customer to fix this issue.

8) a situation where more than one product license must be assigned to a group. We can see users who failed to get assigned and check which products are affected by this symptom.

9) a situation where a licensed group is deleted. All licenses assigned to the group must be deleted before the group can be deleted.

10) a situation where licenses for products with prerequisites must be managed – some products are add-ons, and they require a prerequisite service plan to be enabled for a user or group before they can be assigned a license. The add-on license can be assigned to a group, where the group also contains the prerequisite service plan.

11) a situation where group licensing processing can be forced to resolve errors especially for freeing up some licenses

12) a situation where the user licensing processing can be forced to resolve errors such as the duplicate proxy error described above.

When the number of servers or the number of users is large, volume licensing options might be available. This is the practice of selling a license authorizing one piece of software to be used on a large number of computers or by a large number of users. Software training for volume licensing customers might be made available by way of training and certification solutions, such as a customized software purchase program that grants discounted access to them.