Friday, February 18, 2022

 

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction of Microsoft Graph with the link here. The previous articles discussed the Microsoft Graph, its connectors and Data Connect. This article introduces Intune. The Microsoft Graph API for Intune enables programmatic access to Intune information for our tenant. The API performs the same Intune operations as those available via the portal. It behaves like just another service that provides data into the Graph API.

Microsoft Intune is a cloud-based service that manages devices and their applications. These devices can include mobile phones, tablets, and notebooks. It can help configure specific policies to control applications. It allows people in the organization to use their devices for school or work. The data stays protected, and the organizational data can be isolated from the personal data on the same device. It is part of Microsoft's Enterprise Mobility + Security (EMS) suite. It integrates with Azure Active Directory to control who has access and what they can access, and it integrates with Azure Information Protection for data protection.

Since it is a cloud service, it can work directly with clients over the internet, or devices can be co-managed with Configuration Manager and Intune. Rules and configuration settings can be set on personal and organization-owned devices to access data and networks. Authenticated applications can be deployed on devices. Company information can be protected by controlling the way users access and share information. Devices and applications can be made compliant with the security requirements. Users must opt into management with Intune on their devices, and they can opt in for partial or full control by organization administrators. These administrators can add and assign mobile apps to user groups and devices, configure apps to start or run with specific settings enabled, update existing apps already on the device, see reports on which apps are used and track their usage, and do a selective wipe by removing only organization data from apps. App protection policies include using Azure AD identity to isolate organization data from personal data, helping secure access on personal devices, and enrolling devices.

Intune makes use of app protection policies and device compliance policies to protect data. It uses device profiles and configuration policies to configure devices. It uses applications and application configuration policies to manage applications. It saves the device compliance results to Azure Active Directory for conditional access. It uses groups from Azure Active Directory for regulating all the activities it performs for users. The authentication and authorization helper libraries that work with Azure Active Directory are used by SaaS applications and Office 365 to integrate with application stores and device experiences. In a way, Intune works like a collection of microservices instead of a monolithic control and state reconciliation plane. The end-user devices make use of network access control partners, Mobile Threat Defense connectors, and telecom expense management routines to connect with the microservices that protect data and configure devices.
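
As a concrete illustration, the sketch below builds a Microsoft Graph request for the Intune managed-devices endpoint (GET /deviceManagement/managedDevices). It is a minimal Python sketch, not a definitive client: the access token is a placeholder, and a real application would acquire one via MSAL or a similar library before sending the request.

```python
# Minimal sketch: list Intune managed devices through Microsoft Graph.
# The token value below is a placeholder; only request construction is
# shown, and the commented line would perform the actual network call.
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_managed_devices_request(access_token: str, top: int = 10) -> urllib.request.Request:
    """Builds (but does not send) a GET request for managed devices."""
    url = f"{GRAPH_BASE}/deviceManagement/managedDevices?$top={top}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {access_token}",
                 "Accept": "application/json"},
    )

req = build_managed_devices_request("PLACEHOLDER_TOKEN")
# In a real run: devices = json.load(urllib.request.urlopen(req))["value"]
```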

The technology behind software updates and push notifications is not new. The benefits of synchronization over an always-online solution are quite clear – reduced data transfer over the network, reduced load on the enterprise server, faster data access, and increased control over data availability. But it is less understood that there are different types of synchronization depending on the type of data. For example, the synchronization may be initiated for personal information management (PIM) data such as email, calendar entries, etc., as opposed to application files. The latter can be considered artifacts that artifact-independent synchronization services can refresh. Several such products are available, and they do not require user involvement for a refresh. This means one or more files and applications can be set up for synchronization on remote devices, although these are usually one-way transfers.

Data synchronization, on the other hand, performs a bidirectional exchange, and sometimes a transformation, between two data stores. This is our focus area in this article. The server data store is usually larger because it holds data for more than one user, while the local data store is usually limited by the size of the mobile device. The data transfer occurs over a synchronization middleware or layer. The middleware is set up on the server while the layer is hosted on the client. This is the most common way for smart applications to access corporate data.

Synchronization might be treated as a web service with the usual three tiers comprising the client, the middle tier, and enterprise data. When the data is synchronized between an enterprise server and a persistent data store on the client, a modular layer on the client can provide a simple, easy-to-use client API to control the process with little or no interaction from the client application. This layer may just need to be written or rewritten native to the host, depending on whether the client is a mobile phone, laptop, or some other such device. With a simple invocation of the synchronization layer, a client application can expect the data in the local store to be refreshed.

The synchronization middleware resides on the server, and this is where the bulk of the synchronization logic is written. There can be more than one data store behind the middleware on the server side, and there can be more than one client on the client side. Some of the typical features of this server-side implementation include data scoping, conflict detection and resolution, data transformation, data compression, and security. These features are maintained alongside server performance and scalability. Two common forms of synchronization middleware are a standalone server application and a servlet running in a servlet engine. The standalone server is more tightly coupled to the operating system and provides better performance for large data. The J2EE application servers rely on an outside servlet engine and are better suited for high-volume, low-payload data changes.
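
To make the conflict detection and resolution feature concrete, here is a small Python sketch of one common policy, last-writer-wins, applied during a bidirectional merge. The record shape (a payload plus a modification timestamp) is an illustrative assumption, not a specific product's schema.

```python
# Sketch of last-writer-wins conflict resolution, one common policy in
# synchronization middleware. Records are dicts keyed by id; each value
# carries a payload and a modification timestamp.

def sync(server: dict, client: dict) -> dict:
    """Bidirectional merge: for ids present on both sides, the newer
    modification wins; one-sided ids are simply copied over."""
    merged = {}
    for key in server.keys() | client.keys():
        s, c = server.get(key), client.get(key)
        if s is None:
            merged[key] = c
        elif c is None:
            merged[key] = s
        else:  # conflict: both sides hold the same record
            merged[key] = s if s["modified"] >= c["modified"] else c
    return merged

server = {"a": {"value": 1, "modified": 100}, "b": {"value": 2, "modified": 50}}
client = {"b": {"value": 9, "modified": 75}, "c": {"value": 3, "modified": 60}}
result = sync(server, client)  # "b" resolves to the client's newer copy
```

Real middleware layers on top of this the data scoping, transformation, and compression steps mentioned above, but the resolution decision itself often reduces to a comparison like this one.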

The last part of this synchronization solution is the data backend. While it is typically internal to the synchronization server, it is called out because it might involve more than one data store, technology, and access mechanism, such as object-relational mapping.

 

Thursday, February 17, 2022

Microsoft Graph 

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction of this topic with the link here. The previous article discussed the Microsoft Graph Data Connect used with Microsoft Graph. This article discusses the best practices for using Microsoft Teams activity feed notifications. Microsoft Graph enables integration with the best of Microsoft 365, Windows 10, and Enterprise Mobility + Security services in Microsoft 365, using REST APIs and client libraries.

Microsoft Graph provides a unified programmability model by consolidating multiple APIs into one. As Microsoft's cloud services have evolved, the APIs to reference them have also changed. Originally, when cloud services like Exchange Online, SharePoint, OneDrive and others evolved, an API to access each of those services was launched too. The list of SDKs and REST APIs for these services kept growing for developers to access content. Each endpoint also required access tokens and returned status codes that were unique to each individual service. Microsoft Graph brought a consistent, simplified way to interact with these services.

This article covers the best practices for using Microsoft Teams activity feed notifications in Microsoft Graph which apply to:

-          Creating call-to-action notifications

-          Requesting responses to notifications

-          Creating notifications about external events

Microsoft Teams displays notifications in both activity feed and toast formats. Users can receive notifications from multiple sources across chats, channels, meetings, or other applications. It is recommended that the content of a notification feed or toast be localized, and the application must also be localized for this purpose. Appropriate titles and descriptions must be provided for the notified activity types; short titles such as @mention or Announcements are preferable. Notifications should be filtered to show only what is relevant to the user, and promotional notifications must be avoided. Notifications from messages and those coming from activity feed notifications can be redundant; those duplicates must be removed. The text preview section in notifications can be used so that the user can take the necessary action. A period at the end of the notification title is not required, which keeps titles consistent with those that Teams generates. The relationship between the notification and the content must be clear to the user, and the feed experience should be self-contained. The application should not send more than ten notifications per minute, per user. The load time of the application should not negatively affect the experience for the users. The user must be informed about the notification's storage period.
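
The "no more than ten notifications per minute, per user" guideline can be enforced on the application side with a sliding-window limiter. The sketch below is a hedged Python illustration (the class and parameter names are made up for this example); time is injected as an argument so the logic is testable without waiting.

```python
# Sliding-window rate limiter for per-user notification sends.
from collections import defaultdict, deque

class NotificationLimiter:
    def __init__(self, limit: int = 10, window_seconds: float = 60.0):
        self.limit = limit
        self.window = window_seconds
        self.sent = defaultdict(deque)  # user id -> timestamps of recent sends

    def allow(self, user_id: str, now: float) -> bool:
        q = self.sent[user_id]
        while q and now - q[0] >= self.window:
            q.popleft()  # drop sends that fell out of the window
        if len(q) < self.limit:
            q.append(now)
            return True
        return False  # over the per-minute budget: suppress the send

limiter = NotificationLimiter()
results = [limiter.allow("user1", t) for t in range(12)]  # 12 sends in 12 s
# The first ten are allowed; the last two are suppressed.
```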

Either activity feed notifications or bot framework messages can be used, but they should not be used together. Activity feed notifications appear in the Teams activity feed, where it is convenient for the user to take action. A notification can include links to other locations, but the user must be able to decipher the notification and follow the link to the source. The corresponding API allows notifications to be sent for each notification type, through either delegated or application-only calls. Delegated notifications create a better notification experience: the sender of the notification appears as the user who initiated it in delegated calls, but appears as the application in application-only calls.
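
A sketch of the request body for the activity feed notification API (POST /teams/{team-id}/sendActivityNotification) is shown below. The ids are placeholders, and the activityType ("taskCreated" here) is a hypothetical value that would have to match one declared in the app manifest.

```python
# Sketch: build the sendActivityNotification request body.
import json

def build_notification(team_id: str, user_id: str, activity_type: str, preview: str) -> dict:
    return {
        "topic": {
            "source": "entityUrl",
            "value": f"https://graph.microsoft.com/v1.0/teams/{team_id}",
        },
        "activityType": activity_type,          # must be declared in the app manifest
        "previewText": {"content": preview},    # shown in the feed's text preview
        "recipient": {
            "@odata.type": "microsoft.graph.aadUserNotificationRecipient",
            "userId": user_id,
        },
    }

body = build_notification("TEAM_ID", "USER_ID", "taskCreated", "New task assigned to you")
payload = json.dumps(body)  # would be POSTed with a bearer token
```

Sent with a delegated token, the notification appears to come from the signed-in user; with an application-only token, it appears to come from the application, as described above.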

Bot framework messages are delivered as chat or channel messages and are triggered by an @mention of the user's name. In-lining an alert as a chat or channel message in this way is how a notification is broadcast to all channel members. These are some of the best practices to use with such notifications.

 

Wednesday, February 16, 2022

 Microsoft Graph 

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction of this topic with the link here. The previous article discussed the Microsoft Graph Data Connect used with Microsoft Graph. This article discusses known limitations and workarounds. Microsoft Graph enables integration with the best of Microsoft 365, Windows 10, and Enterprise Mobility + Security services in Microsoft 365, using REST APIs and client libraries.

Microsoft Graph provides a unified programmability model by consolidating multiple APIs into one. As Microsoft's cloud services have evolved, the APIs to reference them have also changed. Originally, when cloud services like Exchange Online, SharePoint, OneDrive and others evolved, an API to access each of those services was launched too. The list of SDKs and REST APIs for these services kept growing for developers to access content. Each endpoint also required access tokens and returned status codes that were unique to each individual service. Microsoft Graph brought a consistent, simplified way to interact with these services.

Some limitations apply to the application and servicePrincipal resources. Some application properties will not be available. Only multi-tenant applications can be registered. Azure Active Directory users can register applications and add additional owners. Support for the OpenID Connect and OAuth protocols has limitations. Policy assignments to an application fail. Operations on ownedObjects that require appId fail. The best resolution for these limitations is to wait for the changes being made to the application and servicePrincipal resources.

Cloud solution providers must acquire tokens from Azure AD v1 endpoints because Azure AD v2 is not supported for their applications. This includes usage of those applications for their partner-managed customers.

The pre-consent for CSP applications does not work in some customer tenants. This manifests as an error issuing tokens when an application uses delegated permissions, or as an access-denied error when using Microsoft Graph after an application acquires a token with application permissions. The suggested workaround involves opening an Azure AD PowerShell session connected to the customer tenant, downloading and installing Azure AD PowerShell v2, and then creating the Microsoft Graph service principal.

Other forms of identity-related limitations include conditional access policies requiring consent to permissions. The ClaimsMappingPolicy API might require consent to both the Policy.Read.All and Policy.ReadWrite.ConditionalAccess permissions for the List operation on /policies/claimsMappingPolicies and /policies/claimsMappingPolicies/{id} objects. If there are no such objects available to retrieve in a List operation, either permission is sufficient to call the methods. If there are claimsMappingPolicy objects, the app must consent to both permissions.

 

Tuesday, February 15, 2022

 

Azure Well-Architected Framework

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction of this topic with the link here. The previous article discussed the Microsoft Graph Data Connect used with Microsoft Graph. This article discusses cloud data governance and the Azure Well-Architected Framework for data workloads.

The Cloud Adoption Framework helps to create an overall cloud adoption plan that guides programs and teams in their digital transformation. The Plan methodology provides templates to create backlogs and plans to build the necessary skills across the teams. It helps rationalize the data estate, prioritize the technical efforts, and identify the data workloads. It's important to adhere to a set of architectural principles which help guide development and optimization of the workloads. The Azure Well-Architected Framework lays down five pillars of architectural excellence, which include:

-          Reliability

-          Security

-          Cost Optimization

-          Operational Excellence

-          Performance efficiency

The elements that support these pillars are the Azure Well-Architected Review, Azure Advisor, documentation, patterns, support, and service offers, reference architectures, and design principles.

This guidance provides a summary of how these principles apply to the management of the data workloads.

Cost optimization is one of the primary benefits of using the right tool for the right solution. It helps to analyze the spend over time as well as the effects of scale out and scale up. The Azure Advisor can help improve reusability, on-demand scaling, reduced data duplication, among many others.

Performance is usually based on external factors and is very close to customer satisfaction. Continuous telemetry and reactiveness are essential to tuning performance. The shared environment controls for management and monitoring create alerts, dashboards, and notifications specific to the performance of the workload. Performance considerations include storage and compute abstractions, dynamic scaling, partitioning, storage pruning, enhanced drivers, and multilayer caching.

Operational excellence comes with security and reliability. Security and data management must be built right into the system at every layer, for every application and workload. The data management and analytics scenario focuses on establishing a foundation for security. Although workload-specific solutions might be required, the foundation for security is built with the Azure landing zones and managed independently from the workload. Confidentiality and integrity of data, including privilege management, data privacy, and appropriate controls, must be ensured. Network isolation and end-to-end encryption must be implemented. SSO, MFA, conditional access, and managed service identities are used to secure authentication. Separation of concerns between the Azure control plane and data plane, as well as RBAC, must be maintained.

The key considerations for reliability are how to detect change and how quickly the operations can be resumed. The existing environment should also include auditing, monitoring, alerting and a notification framework.

In addition to all the above, some consideration may be given to improving individual service level agreements, redundancy of workload specific architecture, and processes for monitoring and notification beyond what is provided by the cloud operations teams.

Monday, February 14, 2022

Continuous Encoder

BERT is an algorithm for natural language processing that interprets search queries much as humans do, because it tries to understand the context of the words that constitute the query, so results match better than without it. It was proposed by Google and stands for Bidirectional Encoder Representations from Transformers. To understand BERT, we must first understand the meaning of the terms Encoder and Bidirectional. These terms come from machine learning neural network techniques, where encoding and decoding refer to states between words in a sequence. As a short introduction, a neural network comprises layers of units (neurons) that calculate probabilities over the inputs, in this case words, with weighted probabilities across a chosen set of other inputs, also called features. Each feature gets a set of weights as probabilities of how likely it is to appear together with other words chosen as features. A bag of words from the text is run through the neural network and gets transformed into a set of outputs that resemble some form of word associations with other words; in this process, it computes the weighted matrix of words with their features, which are called embeddings. These embeddings are immensely useful because they represent words and their context in terms of the features that frequently co-occur with those words, bringing out the latent meanings of the words. With this additional information on the words from their embeddings, it is possible to find how similar two words are, or what topics the keywords represent, especially when a word may have multiple meanings.
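
The "how similar two words are" comparison above is usually computed as the cosine similarity between embedding vectors. A minimal Python sketch follows; the tiny 3-dimensional vectors are made up for illustration (real embeddings have hundreds of dimensions and are learned, not hand-written).

```python
# Sketch: cosine similarity over word embeddings.
import math

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

embeddings = {
    "bank":  [0.8, 0.1, 0.3],   # hypothetical learned vectors
    "river": [0.7, 0.2, 0.4],
    "loan":  [0.1, 0.9, 0.2],
}
# In a context where "bank" is used in its riverbank sense, its vector
# sits closer to "river" than to "loan".
sim_river = cosine_similarity(embeddings["bank"], embeddings["river"])
sim_loan = cosine_similarity(embeddings["bank"], embeddings["loan"])
```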

In the above example, the transformation was forward-only, with associations from the left to the right context for a layer, but the calculations performed in one layer can jointly utilize the learnings from both sides. This is called bidirectional transformation, and since a neural network can have multiple layers with the output of one layer serving as input to another, this algorithm can perform the bidirectional transformations for all layers. When the input is not just words but a set of words such as from a sentence, it is called a sequence. Search terms form a sequence. BERT can unambiguously represent a sentence or a pair of sentences in the question/answer form. The state between the constituents of a sequence is encoded in some form that helps to interpret the sequence or to generate a response sequence with the help of decodings. This relationship captured between an input and output sequence, in the form of encodings and decodings, helps to enhance the language modeling and improve the search results.

Natural language processing relies on encoding-decoding to capture and replay state from text. This state is discrete and changes from one set of tokenized input texts to another. As the text is transformed into vectors of predefined feature length, it becomes available to undergo regression and classification. The state representation remains immutable and is decoded to generate new text. Instead, if the encoded state could be accumulated with the subsequent text, it is likely that it will bring out the topic of the text if the state accumulation is progressive. A progress indicator could be the mutual information value of the resulting state. If there is information gain, the state can continue to aggregate, and this can be stored in memory. Otherwise, the pairing state can be discarded. This results in a final state aggregation that continues to be more inclusive of the topic in the text.
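
The progressive aggregation idea can be sketched as follows. This is a simplified stand-in, not the actual method: the "state" here is a token-count distribution and the gate is a Shannon-entropy increase, whereas the text above envisions encoder states and a mutual-information measure.

```python
# Sketch: fold each new chunk of text into an accumulated state only
# when doing so increases the entropy of the token distribution; a
# chunk that adds no information is discarded.
import math
from collections import Counter

def entropy(counts: Counter) -> float:
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def aggregate(chunks: list) -> Counter:
    state = Counter()
    for chunk in chunks:
        candidate = state + Counter(chunk.lower().split())
        if not state or entropy(candidate) > entropy(state):
            state = candidate  # information gained: keep the pairing
        # otherwise the pairing state is discarded as redundant
    return state

state = aggregate(["cloud data governance",
                   "data governance policy",
                   "data data data"])  # last chunk adds nothing and is dropped
```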

State aggregation is independent of BERT but not removed from it. It is optional and useful for topic detection. It can also improve the precision and relevance of the generated response text by ensuring that its F-score remains high when compared against the aggregated state. Without the aggregated state, the scores for the response were harder to evaluate.

Sunday, February 13, 2022

 Standard enterprise governance guide and multi-cloud adoption

Cloud governance is a journey, not a destination. Cloud governance creates guardrails that keep the company on a safe path throughout the journey of adopting the cloud; along the way, there are clear milestones and tangible business benefits. Processes must be put in place to ensure adherence to the stated policies. There are five disciplines of cloud governance which support these corporate policies. Each discipline protects the company from potential pitfalls. These are the cost management discipline, security baseline discipline, resource consistency discipline, identity baseline discipline, and deployment acceleration discipline.

The actionable governance guide is an incremental approach of the cloud adoption framework governance model. It can be established with an agile approach to cloud governance that will grow to meet the needs of any scenario.

This governance guide serves as a foundation for an organization to quickly and consistently apply governance guardrails across their subscriptions. Initially, an organization hierarchy may be created to empower the cloud adoption teams. It will consist of one management group for each type of environment; two subscriptions, one for production workloads and another for non-production workloads; consistent nomenclature applied at each level of this grouping hierarchy; resource groups deployed in a manner that considers their contents' lifecycle; and region selection such that networking, monitoring, and auditing can be in place. These patterns provide room for growth without complicating the hierarchy.

 A set of global policies and RBAC roles will provide a baseline level of governance enforcement. Identifying the policy definitions, creating a blueprint definition, and applying policies and configurations globally are required to meet the policy requirements.

Controls can be added for multi-cloud adoption when customers adopt multiple clouds for specific purposes. All of the IT operations can be run on a different cloud provider.   

In a multi-cloud environment, identity could be specific to one cloud, or it could be hybrid, facilitated through replication to, say, Azure Active Directory from an on-premises instance of Active Directory. Each cloud may also have its own identity provider and membership directory, as well as its own authentication and authorization models. Its operations can be managed by monitoring and related automated processes. Disaster recovery and business continuity can be controlled by recovery services and their vaults. Monitoring security violations and attacks, as well as enforcing governance of the cloud, can be done with the same service. All of the above are used to automate compliance with policy.

The changes required to monitor new corporate policy statements include the following: connecting the networks, consolidating identity providers, adding assets to the recovery services, adding assets for cost management and billing, adding assets to the monitoring services and adopting governance enforcement tools.

Saturday, February 12, 2022

 

Microsoft Graph 

This is a continuation of a series of articles on Azure services from an operational engineering perspective with the most recent introduction of this topic with the link here. The previous article discussed the Microsoft Graph Data Connect used with Microsoft Graph. This article discusses the API. Microsoft Graph enables integration with the best of Microsoft 365, Windows 10, and Enterprise Mobility + Security services in Microsoft 365, using REST APIs and client libraries.

Microsoft Graph provides a unified programmability model by consolidating multiple APIs into one. As Microsoft's cloud services have evolved, the APIs to reference them have also changed. Originally, when cloud services like Exchange Online, SharePoint, OneDrive and others evolved, an API to access each of those services was launched too. The list of SDKs and REST APIs for these services kept growing for developers to access content. Each endpoint also required access tokens and returned status codes that were unique to each individual service. Microsoft Graph brought a consistent, simplified way to interact with these services.

The data virtualization platform that Microsoft Graph presents also supports querying relationships between:

·        Azure Active Directory

·        Exchange Online – including mail, calendar and contacts.

·        SharePoint Online – including file storage

·        OneDrive

·        OneDrive for Business

·        OneNote and

·        Planner

As a collaborative app development platform, Microsoft Graph is not alone. Microsoft Teams, Slack, and Google Workspace are applications with collaboration as their essence, designed for the flexibility of hybrid work. For example, the Teams Toolkit for Visual Studio Code lets us use existing web development frameworks to build cross-platform Teams applications against any backend. Microsoft Graph provides both the seamlessness and the data for real-time collaboration.

Connectors and Microsoft Graph Data Connect round out the data transfer mechanisms. Connectors offer a simple and intuitive way to bring content from external services to Microsoft Graph, which enables external data to power Microsoft 365 experiences. They do this with the help of REST APIs that are used to: 1. create and manage external data connections, 2. define and register the schema of the external data type(s), 3. ingest external data items into Microsoft Graph, and 4. sync external groups. Microsoft Graph Data Connect augments Microsoft Graph's transactional model with an intelligent way to access rich data at scale. It is ideal for connecting big data and for machine learning. It uses Azure Data Factory to copy Microsoft 365 data to the application's storage at configurable intervals. It provides a set of tools to streamline the delivery of this data into Microsoft Azure. It allows us to manage the data and see who is accessing it, and to request specific properties of an entity. This enhances the Microsoft Graph model, which grants or denies applications access to entire entities.
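
The first and third connector steps above can be sketched as the two core request bodies: one for creating an external connection (POST /external/connections) and one for ingesting an item into it (PUT /external/connections/{connection-id}/items/{item-id}). The ids, property names, and the everyone-grant ACL below are illustrative placeholders, not a canonical schema.

```python
# Sketch of the two core Graph connector payloads.
def connection_payload(connection_id: str, name: str, description: str) -> dict:
    # Body for POST /external/connections
    return {"id": connection_id, "name": name, "description": description}

def item_payload(title: str, url: str) -> dict:
    # Body for PUT /external/connections/{connection-id}/items/{item-id};
    # the ACL grants access broadly here purely for illustration.
    return {
        "acl": [{"type": "everyone", "value": "everyone", "accessType": "grant"}],
        "properties": {"title": title, "url": url},
        "content": {"type": "text", "value": title},
    }

conn = connection_payload("sampleconn", "Sample Connection", "Demo external data")
item = item_payload("Design doc", "https://example.com/doc")
```

Registering the schema (step 2) happens between these two calls, so that the `properties` keys on ingested items match declared, searchable fields.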

 

Sample code for enriching user information:

        public static void AddUserGraphInfo(this ClaimsPrincipal claimsPrincipal, User user)
        {
            var identity = claimsPrincipal.Identity as ClaimsIdentity;
            identity.AddClaim(
                new Claim(GraphClaimTypes.DisplayName, user.DisplayName));
            identity.AddClaim(
                new Claim(GraphClaimTypes.Email,
                    claimsPrincipal.IsPersonalAccount() ? user.UserPrincipalName : user.Mail));
            identity.AddClaim(
                new Claim(GraphClaimTypes.TimeZone,
                    user.MailboxSettings.TimeZone ?? "UTC"));
            identity.AddClaim(
                new Claim(GraphClaimTypes.TimeFormat, user.MailboxSettings.TimeFormat ?? "h:mm tt"));
            identity.AddClaim(
                new Claim(GraphClaimTypes.DateFormat, user.MailboxSettings.DateFormat ?? "M/d/yyyy"));
        }

 

   Sample delta query for mail folders

   public async Task<IMailFolderDeltaCollectionPage> GetIncrementalChangeInMailFolders()
   {
       IMailFolderDeltaCollectionPage deltaCollection = await _graphClient.Me.MailFolders
           .Delta()
           .Request()
           .GetAsync();
       return deltaCollection;
   }
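
The delta pattern generalizes to any Graph collection: keep requesting pages, follow @odata.nextLink while it is present, and persist the final @odata.deltaLink for the next incremental round. A language-neutral sketch (Python, with a stubbed fetch function standing in for an authenticated Graph client):

```python
# Sketch of the delta-query loop: follow @odata.nextLink until the
# service returns @odata.deltaLink, which is saved for the next sync.

def drain_delta(fetch, start_url: str):
    """Returns (all changed items, deltaLink to persist)."""
    items, url = [], start_url
    while True:
        page = fetch(url)
        items.extend(page.get("value", []))
        if "@odata.nextLink" in page:
            url = page["@odata.nextLink"]           # more pages in this round
        else:
            return items, page["@odata.deltaLink"]  # round complete

# Stubbed two-page response sequence for illustration.
pages = {
    "delta0": {"value": [{"id": "1"}], "@odata.nextLink": "next1"},
    "next1":  {"value": [{"id": "2"}], "@odata.deltaLink": "delta1"},
}
items, delta_link = drain_delta(pages.get, "delta0")
```

Passing the saved deltaLink back as the next round's start URL yields only the changes made since the previous sync.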