Sunday, October 23, 2022

 

The following section continues the description of tenant management for enterprise tenants, using Microsoft 365 as the example.

Optimal networking involves optimizing the path between on-premises users and the closest entry point to the Microsoft Global Network, optimizing access for remote users over VPN, using network insights to design the network perimeter for office locations, optimizing access to specific assets hosted on SharePoint sites with the Office 365 CDN, and configuring proxy and network edge devices to bypass processing for trusted Microsoft 365 traffic using an allowed list of endpoints.

Network design tries to minimize latency by reducing the round-trip time between clients and the network. Some networks, such as the Azure backbone network, offer much lower latencies than the public internet. When the front doors are placed on the internet and the tenant is placed in the Microsoft Global Network, both the path and the access are optimized. Routing over the network must also be followed up with proper identification of Microsoft 365 network traffic, allowing local egress of that traffic to the internet from each location, bypassing proxies and packet-inspection devices for that traffic, and avoiding network hairpins.

As with all networks, some ongoing maintenance is required for optimal networking. These tasks might include updating edge devices and deployed PAC files for changes in endpoints (or verifying that the automated process works correctly), managing assets in the CDN, and updating the split-tunnel configuration in the VPN clients for changes in the endpoints.

Optimal networking is only the first step in tenant management. Identity management is the next.

Configuring the identity infrastructure correctly is vital to managing Microsoft 365 user access and permissions for an organization.

There are two identity models: the cloud-only model and the hybrid model. In the cloud-only model, user accounts exist only in the Azure AD tenant for the Microsoft 365 tenant. In the hybrid model, user accounts exist both in on-premises Active Directory Domain Services and in the Azure AD tenant.

The hybrid identity model with directory synchronization is the most common choice for enterprise customers adopting Microsoft 365. There are two types of authentication when using the hybrid identity model: managed authentication and federated authentication.

In the managed case, Azure AD handles the authentication process either by using a locally stored hash of the password or by sending the credentials to the on-premises Active Directory Domain Services. In the federated case, Azure AD redirects the client computer requesting authentication to another identity provider.
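The two hybrid authentication paths above can be sketched as a simple dispatch. This is only an illustration of the decision, not an Azure AD API; the enum names and return strings are assumptions:

```python
from enum import Enum

class AuthMode(Enum):
    PASSWORD_HASH_SYNC = 1   # managed: validate against a locally stored hash
    PASS_THROUGH = 2         # managed: forward credentials to on-prem AD DS
    FEDERATED = 3            # redirect to an external identity provider

def route_authentication(mode: AuthMode) -> str:
    """Return the action the cloud directory takes for each hybrid auth mode."""
    if mode is AuthMode.PASSWORD_HASH_SYNC:
        return "validate against synchronized password hash"
    if mode is AuthMode.PASS_THROUGH:
        return "send credentials to on-premises AD DS agent"
    return "redirect client to federated identity provider"
```

The point of the sketch is that the caller never sees the difference: the directory picks the path from tenant configuration.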

Saturday, October 22, 2022

 

A multitenant solution provider facilitates service deployments in a new cloud for tenants. This provider creates tenant certificates and provides templates for services to create their service identities. These service identities include both managed service identities and service accounts. The difference between the two lies in usage: the former is system-defined and automatically maintained, while the latter is an exclusive credential for the service. Also, a managed service identity is specific to Azure Active Directory, while a service account can exist in any Active Directory domain, both on-premises and in Azure.

When we refer to a tenant, we refer to it by the tenant ID, but it is also possible to refer to tenants by their host names in the deployment. A tenant-specific sub-domain is set up in this case, and the tenant host name, mytenant.myservice.com, must be specified as an alternative in the tenant configuration. The URL can then specify either the tenant ID or the tenant host name once host names are registered as alternative IDs for tenants.
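Resolving a request by either tenant ID or alternative host name amounts to a second index over the tenant registry. A minimal sketch, where the registry structure and the tenant ID are made-up placeholders:

```python
# Tenant registry keyed by tenant ID; each entry lists alternative host names.
tenants = {
    "tenant-001": {"name": "mytenant", "hosts": ["mytenant.myservice.com"]},
}

# Secondary index: host name -> tenant ID
host_index = {
    host: tenant_id
    for tenant_id, cfg in tenants.items()
    for host in cfg["hosts"]
}

def resolve_tenant(identifier: str) -> str:
    """Accept a tenant ID or a registered host name; return the tenant ID."""
    if identifier in tenants:
        return identifier
    if identifier in host_index:
        return host_index[identifier]
    raise KeyError(f"unknown tenant: {identifier}")
```

Keeping the host index derived from the registry, rather than stored separately, avoids the two drifting apart.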

Migrating certificates is easy, but migrating tenant identities is not. Even though the certificates change when their subject names include different domains, it is easy to create those identities in either the source or the destination cloud, because an external certificate authority is requested to issue them. Once issued for a specific domain, they can be added to the concerned domain wherever it resides.

New clouds pose a new challenge in that the migration is not between tenants in the same solution; instead, tenant identities are migrated from one cloud instance to another. There is therefore a source and a destination instance, and every artifact of a tenant that existed in the source must have a corresponding artifact in the destination.

As with any migration, there are four phases:

A.      The four phases are: phase 1 – discover and scope, phase 2 – classify and plan, phase 3 – plan migration and testing, and phase 4 – manage and gain insight.

B.      The first phase is the process of creating an inventory of all artifacts in the ecosystem. They fall into three categories: those that can be migrated, those that cannot, and those marked for deprecation.

C.       The second phase involves detailing the artifacts within the categories with criticality, usage, and lifespan.  It prioritizes the artifacts for migration and plans a pilot.

D.      The third phase involves planning migration and testing by communicating changes and migrating artifacts and transitioning tenants.

E.       The fourth phase involves managing and gaining insight by managing end-user and admin experiences and gaining insight into artifacts and their usages.

These four phases smoothly transition artifact usage from old to new.
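The inventory-and-classify work of the first two phases can be sketched as follows. The artifact fields, status labels, and criticality scale are illustrative assumptions, not part of any migration product:

```python
# Phase 1: inventory with a category per artifact.
# Phase 2: prioritize migratable artifacts for the pilot.
artifacts = [
    {"name": "mailbox-app", "status": "migrate", "criticality": 2},
    {"name": "team-site", "status": "migrate", "criticality": 5},
    {"name": "legacy-report", "status": "deprecate", "criticality": 9},
    {"name": "on-prem-printer", "status": "keep", "criticality": 5},
]

def classify(items):
    """Bucket artifacts into the three phase-1 categories."""
    buckets = {"migrate": [], "keep": [], "deprecate": []}
    for item in items:
        buckets[item["status"]].append(item["name"])
    return buckets

def pilot_order(items):
    """Phase-2 priority: most critical migratable artifacts first."""
    return [a["name"]
            for a in sorted(items, key=lambda a: a["criticality"])
            if a["status"] == "migrate"]
```

Phases 3 and 4 then consume this plan: communicate, migrate in priority order, and monitor usage of the migrated artifacts.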

Friday, October 21, 2022

This is a continuation in a series of articles on Multitenant Applications. The previous articles talked about tenant-to-tenant migration and this article talks about tenant management in that context.

One of the first tenant decisions is how many to have. Each tenant is distinct, unique, and separate from all other tenants. A single tenant has a single Azure AD tenant, with a single set of accounts, groups, and policies. Permissions and sharing of resources are facilitated by this central identity provider. Multiple tenants are created when there is a need for administrative isolation, decentralized IT, historical decisions, mergers, acquisitions, or divestitures, clear separation of branding for a parent company, or pre-production, test, or sandbox tenants. Some restrictions apply in providing services to users, and inter-tenant collaboration aids, such as a central location for files, conversations, calendars, and so on, must be set up for users to collaborate effectively.

Prior to cross-tenant migration, such as for mailboxes, it was necessary to completely offboard a user mailbox from the current tenant to on-premises and then onboard it to the new tenant. Cross-tenant migration allows administrators to move artifacts such as mailboxes with minimal dependencies on their on-premises systems.

A tenant allows a central location and one or more satellite locations to facilitate data residency in specific datacenters while the tenant information is mastered centrally and synchronized into each geo-location. When a new datacenter is added to a tenant in a new geo-location, it’s possible to migrate the organization’s core customer data at rest to the new location. Opening a new datacenter does not impact existing usages of the organization’s data.

The set of products and the number of licenses for each requires planning: there must be enough licenses for the user accounts that need advanced features, and licensing must keep pace with staffing without leaving too many licenses unassigned.
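A rough license-planning calculation along these lines might look as follows. The onboarding buffer percentage is an assumed parameter for illustration, not a Microsoft recommendation:

```python
import math

def plan_licenses(staff: int, purchased: int, buffer_pct: float = 0.05):
    """Return (licenses_to_buy, surplus) for one product.

    staff: accounts that need this license; purchased: licenses already owned.
    A small buffer covers onboarding; a large surplus signals over-purchase.
    """
    target = math.ceil(staff * (1 + buffer_pct))
    to_buy = max(0, target - purchased)
    surplus = max(0, purchased - target)
    return to_buy, surplus
```

Running this per product in the tenant gives the purchase list and flags where unassigned licenses exceed the staffing-driven target.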

A summary of the steps for tenant management includes: how many tenants exist or are needed, what products and licenses must be purchased for each tenant, whether a tenant needs to be multi-geo to comply with data-residency requirements, whether inter-tenant collaboration must be set up, whether one tenant must be migrated to another, and whether core data must be moved from one datacenter to a new one.

Thursday, October 20, 2022

 Tenant-To-Tenant Migration architecture model (continued):

This is a continuation of the previous article on migration architecture models. Specifically, it focuses on Microsoft 365 migration, tools, and migration of on-premises tenants.

The Tenant-to-Tenant migration architecture model is one of the popular ones among several architectural approaches to tenant management. A tenant-to-tenant migration can be a single-event migration, a phased migration, or a tenant move or split. Phased migration is also referred to as batched migration, and single-event migration as cutover migration.

In a single-event migration, everything is migrated as a single event. There is a higher risk and a shorter timeline. When there is no strict organization and users wear multiple hats or belong to multiple project teams, it becomes hard to segregate them into migration groups. A single event can then migrate everyone and eliminate the need for coexistence considerations. The net benefit to the users is that they keep their original email address. The drawback is that automation might rely on APIs, and there might be rate limits on their invocations.

Phased migration is the gradual migration of users, services, and data. With separation into multiple migration groups, there is continuity of access for users to their emails and meetings. There is lower risk and a longer timeline. The main drawback is the coexistence limitation. Users will have an email address in a new domain, and a tool might be required to do the migration. A variety of third-party tools are available, for example from consulting services, to migrate Exchange mailboxes, public folders, SharePoint sites, OneDrive folders, and Office 365 groups, while users can help themselves with native capabilities from Intune and Windows 10 such as subscription activation. Native tools also have their place, but they make the Total Cost of Ownership calculation a bit tedious.

Since the migration tasks are complex, there may be many activities occurring simultaneously such as migration of servers and other infrastructures. Coordination to minimize effort and risk and to avoid overlap is essential. Migration might not even be scheduled in certain periods. The overall picture matters more for phased migrations. This warrants the use of an overall plan where individual projects such as Exchange, SharePoint and Teams workload migrations can be managed.

Talking to users is critical towards this purpose. Certain business functions might not even be on the plan until that occurs. Even though these might be limited in number, they matter to the overall migration.

Advanced reporting tools for large and complex projects or periodic migrations can reduce stress because they anticipate common needs and have lower maintenance than custom or ad hoc reporting. Reports also feed back into the planning process.

As with all projects of this nature, some discretion is required in balancing migration against other factors. Sometimes a single tool helps better than having teams onboard multiple native tools.

Migration tools like BitTitan MigrationWiz for migrating Microsoft 365, Exchange, and G Suite, as well as Fast Cloud File migrations for SharePoint data, are also well known. Mover.IO is a free migration manager from Microsoft for work or school accounts. These tools work very well for hybrid environments and tenants. When doing the migration directly, it might be better to 1) recreate the teams' structure by adding users and permissions at the target tenant, 2) migrate content from the associated SharePoint sites and upload it to the target tenant, and 3) export data from the Exchange mailboxes and import it into the destination. A back-of-the-envelope calculation of the time and cost would be helpful in this case.
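The back-of-the-envelope time estimate mentioned above might be sketched like this. The per-item overhead constant and the decimal GB-to-megabit conversion are assumptions; real throughput also depends on service throttling:

```python
def estimate_hours(total_gb: float, mbps: float, items: int,
                   seconds_per_item: float = 2.0) -> float:
    """Rough migration duration: bulk transfer time plus per-item overhead.

    total_gb: data volume; mbps: effective bandwidth in megabits/second;
    items: mailboxes/sites/folders, each with fixed API overhead (assumed).
    """
    transfer_s = total_gb * 8_000 / mbps   # GB -> megabits, then seconds
    overhead_s = items * seconds_per_item  # throttling / per-item API cost
    return round((transfer_s + overhead_s) / 3600, 1)
```

For example, 7 TB of content and 15,000 mailboxes over a 1 Gbps effective link comes out to roughly a day of transfer time under these assumptions, before retries and verification.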

Wednesday, October 19, 2022

 

Tenant-To-Tenant Migration architecture model:

Migration comes into play in scenarios such as mergers, acquisitions, divestitures, and others where an existing Microsoft 365 tenant must be moved to a new tenant. There are dedicated personnel and consulting services to help with this migration, so it warrants a brief introduction in a book on multitenant applications.

A specific architecture model called the Tenant-to-Tenant migration architecture model is one of the popular ones among several architectural approaches towards this purpose. This model provides guidance and a starting point for planning with focus on mapping of business scenarios to architecture along with design considerations.

Let us take the traditional names of Contoso for a source tenant and Fabrikam for a target tenant.

A tenant-to-tenant migration can be single-event migration, phased migration, or tenant move or split.

In a single-event migration, everything is migrated as a single event. There is a higher risk and a shorter timeline. Single-event migrations larger than 15,000 users or 7 TB of site content can run into limits: data volumes, network bandwidth, and helpdesk capacity can all constrain scale. The next approach might be preferable when the single-event approach is limiting. Identities migrate to the target tenant and keep the existing domain as part of the migration. The net benefit to the users is that they keep their email address, say user@contoso.com.

Phased migration is the gradual migration of users, services, and data. Source domains are not transferred; users assume a new target domain. There is lower risk and a longer timeline. The main drawback is the coexistence limitation. Identities will migrate to a new target tenant and will change their brand identity as part of the migration. Users will have an email address such as user@fabrikam.com.

A tenant move or split is similar to a single-event migration, but it does not include migrating accounts to a new on-premises AD DS forest. This approach should not be used when long-term coexistence is expected for tenant splits. There is additional work required to re-establish existing identities in the new tenant. The identities remain in the source tenant, but all the users in the affected domain and all the workloads are moved to the new tenant.

The activities during migration events may vary but include the following:

Before the migration, communication is sent to each user and mailboxes and content are made read-only.

During the migration, reverse mail forwarding is stopped and new email is allowed to be delivered to the target tenant. Target accounts are enabled, if required, and the final data migration is completed.

After the migration, users must recreate their mobile profiles and the client software needs to be reconfigured which includes Outlook and Microsoft 365 clients.

The choice between the approaches depends on factors such as whether the domains in the target environment must be retained, whether the environment being migrated to is brand new, whether continued collaboration between the environments is expected in the end state, whether there are on-premises AD domains that need to be synchronized to Azure, which workloads are used in the source tenants, how many accounts are involved, whether mail forwarding is required after migration, and whether a unified Global Address List is required.
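These factors can be condensed into a rough rule of thumb. The thresholds echo the single-event limits mentioned earlier; the function is only a sketch of the decision, not a planning tool:

```python
def choose_approach(keep_source_domain: bool, users: int, site_tb: float,
                    tenant_split: bool) -> str:
    """Pick a tenant-to-tenant migration approach from a few key factors."""
    if tenant_split:
        # Identities stay in the source tenant; workloads move.
        return "tenant move or split"
    if keep_source_domain and users <= 15000 and site_tb <= 7:
        # Small enough to cut over in one event and keep user@contoso.com.
        return "single-event migration"
    # Too large, or rebranding to the target domain: migrate in batches.
    return "phased migration"
```

A real decision would also weigh coexistence needs, GAL unification, and mail forwarding, which this sketch omits.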

Reference: https://1drv.ms/w/s!Ashlm-Nw-wnWhLMfc6pdJbQZ6XiPWA?e=fBoKcN      

Tuesday, October 18, 2022

 

This is a continuation in a series of articles on Multitenant Applications. The previous articles talked about configuration management and this article talks about rotation of certificates. 

Multitenant solutions behave as infrastructure providers to tenants. This includes managing and maintaining secrets such as service accounts and certificates. The solution does not need to perform these duties itself and can delegate to external key management solutions for storing and rotating these secrets.  

While service accounts continue to differ from user accounts, in that the former represent applications and the latter represent users, the solution can register, persist, and rotate service accounts while delegating the same for user accounts to identity providers. User accounts are managed directly in Active Directory, and the solution does not interpret users; it validates assertions. Each request must assert an identity to the solution. The solution honors requests, and every action is specified with the help of an API request at some level. Each request is unique, distinct, and self-contained for authentication and authorization purposes. Identity might merely be an assertion in these requests.

Service accounts are meant for interactions between tenants and their resources. A service account should never be mixed with a user account; otherwise it suffers many drawbacks. It should only be authorized with role-based access control; any other scheme will encounter numerous items to audit. It should never be leaked; otherwise it can be abused indefinitely. This last item requires rotation of service accounts, so that misuse of a leaked service account is limited to the window between its discovery and the issue of a new account. The persistence of service accounts and their usage as secrets makes them well suited to an autonomous secret-management system that can keep track of secrets and rotate them as necessary. The external key manager that manages keys and certificates was built for a similar purpose, where the secret was a key or certificate. That system can work well for a multitenant solution, since it poses no restrictions on what these secrets are for.
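A minimal in-memory rotation scheme, standing in for the external key manager described above, might look like this. The class, its TTL policy, and the account names are illustrative assumptions:

```python
import secrets
import time

class SecretStore:
    """Issue and rotate service-account secrets with a bounded lifetime."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # account -> (secret, issued_at)

    def issue(self, account: str) -> str:
        """Mint a fresh credential, replacing any existing one."""
        secret = secrets.token_urlsafe(32)
        self._store[account] = (secret, time.monotonic())
        return secret

    def needs_rotation(self, account: str) -> bool:
        _, issued = self._store[account]
        return time.monotonic() - issued >= self.ttl

    def rotate_expired(self):
        """Re-issue every expired secret; return the rotated account names."""
        rotated = [a for a in self._store if self.needs_rotation(a)]
        for account in rotated:
            self.issue(account)
        return rotated
```

Because rotation re-issues rather than revokes in place, a leaked credential stops working as soon as the next rotation cycle runs, which is the bounded-misuse property the paragraph above calls for.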

The automation of periodic and on-demand rotation improves security and convenience for all solution provider usages. Service accounts can be backed by certificates, but the latter can be used for a wide variety of other purposes.

Certificates can come from different issuers. An ACME issuer supports certificates from its server. A CA issuer supports issuing certificates using a signing key pair. A Vault issuer supports issuing certificates from a common vault. Self-signed certificates are issued privately. A Venafi issuer supports issuing certificates from a cloud or a platform instance.

Although the solution manages the secrets, a consolidator can help with specific secret types. The libraries for this, such as for certificate management, are quite popular and well documented. The use of libraries also reduces the code the application needs to manage these specific types of secrets. The external dependencies for generating secrets are like any other dependency in the application code, so they can be registered and maintained in one registry or container.

A self-signed certificate is one that is signed with its own private key. Generating a private-public key pair is trivial with tools such as openssl, or ssh-keygen with the "-t rsa" command-line option. For now, let's look at the steps for self-signing. First, we generate a public-private key pair. Then we create the X509 certificate. Then we sign the certificate with its own private key, providing the same certificate both as the one to sign and as the one to sign with.

Algorithms used for creating the keys are called digital signature algorithms. There are two common kinds: RSA and ECDSA. In both cases, a message signed with the private key can only be verified with the help of the corresponding public key. RSA has historically been more popular, with ECDSA gaining support only recently. They are usually compared in terms of bits to denote their security level: a security level of n bits means an attacker needs on the order of 2^n steps to compromise the scheme. A 2048-bit RSA public key has a security level of 112 bits. ECDSA needs only 224-bit public keys to provide the same security level, which provides efficiency in storage. Signing and verification of the signature constitute the two most costly steps performed, and the input size plays into this cost on embedded devices.
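The comparable key sizes can be tabulated; these figures follow the NIST SP 800-57 comparable-strength tables, and the lookup function is just a convenience wrapper:

```python
# Security level (bits) -> comparable key sizes per NIST SP 800-57.
SECURITY_LEVELS = {
    112: {"rsa_bits": 2048, "ecdsa_bits": 224},
    128: {"rsa_bits": 3072, "ecdsa_bits": 256},
    192: {"rsa_bits": 7680, "ecdsa_bits": 384},
}

def key_size(level: int, algorithm: str) -> int:
    """Return the key size (in bits) giving the requested security level."""
    return SECURITY_LEVELS[level][f"{algorithm}_bits"]
```

The table makes the storage argument concrete: at every level, the ECDSA key is roughly an order of magnitude smaller than the comparable RSA key.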

The keystore and truststore can be one and the same if the connections are internal; in this case, the client and the server share the same key and certificate. Mutual authentication, on the other hand, is where both the server and the client present their own certificates. In the TLS message exchange for mutual authentication, the client initiates with its hello message. The server then responds with its hello, its certificate, a request for the client's certificate, and the server-side hello-done message. The client responds first with its certificate. Then it sends the session key material with the client key-exchange message. Then it sends the certificate-verify message and changes the cipher spec. Lastly, it sends the client-side finished message. The server closes the mutual authentication with the change-cipher-spec message and the server-side finished message.
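The exchange above can be written down as an ordered message list. This is the simplified TLS 1.2 full handshake with client authentication, ignoring resumption, retransmits, and optional messages:

```python
# (sender, message) pairs for a TLS 1.2 mutual-authentication handshake.
HANDSHAKE = [
    ("client", "ClientHello"),
    ("server", "ServerHello"),
    ("server", "Certificate"),
    ("server", "CertificateRequest"),
    ("server", "ServerHelloDone"),
    ("client", "Certificate"),
    ("client", "ClientKeyExchange"),
    ("client", "CertificateVerify"),
    ("client", "ChangeCipherSpec"),
    ("client", "Finished"),
    ("server", "ChangeCipherSpec"),
    ("server", "Finished"),
]

def messages_from(party: str):
    """Extract one side's flight of messages, in order."""
    return [msg for sender, msg in HANDSHAKE if sender == party]
```

Listing the flights this way makes the symmetry visible: each side ends its contribution with ChangeCipherSpec followed by Finished, and the server's Finished closes the handshake.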

Monday, October 17, 2022

Workflows (continued)

 
Comparisons to Deis Workflow:
Deis Workflow is a platform-as-a-service that adds a developer-friendly layer to any Kubernetes cluster so that applications can be deployed and managed easily. Kubernetes evolved as an industry effort from the operating system's native Linux container support. It can be considered a step towards a truly container-centric development environment. Containers decouple applications from infrastructure, which separates dev from ops.
Containers made PaaS possible. Containers help compile the code for isolation. PaaS enables applications and containers to run independently. PaaS containers were not open source. They were just proprietary to PaaS. This changed the model towards development centric container frameworks where applications could now be written with their own  
Let us look at the components of the Deis workflow:
The workflow manager checks your cluster for the latest stable components and flags any that are missing. It is essentially a Workflow Doctor providing first aid to a Kubernetes cluster that requires servicing.
The monitoring subsystem consists of three components: Telegraf, InfluxDB, and Grafana. The first is a metrics-collection agent that runs using the daemon set API. The second is a database that stores the metrics collected by the first. The third is a graphing application, which natively supports the second as a data source and provides a robust engine for creating dashboards on top of time-series data.
The logging subsystem consists of two components: one that handles log shipping and another that maintains a ring buffer of application logs.
The router component, which is based on Nginx, routes inbound HTTPS traffic to applications. A cloud-based load balancer is included automatically.
The registry component which holds the application images generated from the builder component. 
The object storage component where the data that needs to be stored is persisted. This is generally an off-cluster object storage.
Slugrunner is the component responsible for executing buildpack-based applications. A slug is sent from the controller, which the Slugrunner downloads before launching the application.
The builder component is the workhorse that builds your code after it is pushed from source control.
The database component which holds most of the platform state. It is typically a relational database. The backup files are pushed to object storage. Data is not lost between backup and database restarts.
The controller which serves as the http endpoint for the overall services so that CLI and SDK plugins can be utilized.
Deis Workflow is more than just an application-deployment workflow, unlike Cloud Foundry. It performs application rollbacks, supports zero-downtime app migrations at the router level, and provides scheduler tag support that determines which nodes the workloads are scheduled on. Moreover, it runs on Kubernetes, so other workloads can run on Kubernetes alongside these workflows. Workflow components have a "deis-" namespace that tells them apart from other Kubernetes workloads, and they provide building, logging, release-and-rollback, authentication, and routing functionalities, all exposed via a REST API. This is a layer distinct from Kubernetes. While Deis provides workflows, Kubernetes provides orchestration and scheduling.


Comparison to Azure DevOps:
Some tenets for organization from ADO have parallels in Workflow management systems:
·        Projects can be added to support different business units 
·        Within a project, teams can be added 
·        Repositories and branches can be added for a team 
·        Agents, agent pools, and deployment pools to support continuous integration and deployment 
·        Many users can be managed using the Azure Active Directory. 
 
Conclusion:
The separation of workflows from resources and built-to-scale design is a pattern that makes both workflows and resources equally affordable to customers.