Tuesday, February 1, 2022

 Sovereign clouds continued… 

This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent discussion on sovereign clouds. This article talks about Azure AD authentication in National clouds.  

National clouds are physically isolated instances of Azure. The differences between the Commercial, GCC, and GCC High Microsoft 365 environments were covered in the previous article; this one focuses on how Azure AD authentication works in these isolated instances.

Each cloud instance is separate from the others and has its own environment and endpoints. Cloud-specific endpoints include OAuth 2.0 endpoints, OpenID Connect token request endpoints, and URLs for app management and deployment, which means an entire identity framework is local to the cloud instance. There's even a separate Azure portal for each national cloud instance.

Applications can continue to use modern authentication in the Azure Government cloud but not in GCC High. The identity authority can be either Azure AD Public or Azure AD Government.

Applications can integrate with the Microsoft identity platform in a national cloud, but they must register the application separately in the Azure portal specific to each environment.
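
As a minimal sketch of what that looks like in code (assuming MSAL.NET, with placeholder client and tenant IDs from a registration made in the Azure Government portal):

// Hedged sketch: a public client built against the Azure Government
// authority rather than the global one. The scope targets the US
// Government Microsoft Graph endpoint.
using Microsoft.Identity.Client;

var app = PublicClientApplicationBuilder
    .Create("your-client-id")   // from the Azure Government portal registration
    .WithAuthority(AzureCloudInstance.AzureUsGovernment, "your-tenant-id")
    .WithDefaultRedirectUri()
    .Build();

var result = await app.AcquireTokenInteractive(
    new[] { "https://graph.microsoft.us/User.Read" }).ExecuteAsync();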

The workflow for authentication is claims-based. A claims challenge is a response sent from an API indicating that the access token sent by a client application has insufficient claims. This could be for one of many reasons, such as conditional access policies for the API not being met, or the access token having been revoked. A claims request is then made by the client application, which sends the user back to the identity provider to retrieve a new token with claims that satisfy the requirements that were not met. Applications must declare their client capabilities in calls to the service; they can then use enhanced security features, and they must be able to handle claims challenges, which are usually presented via a WWW-Authenticate header returned by the service API.
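
A hedged sketch of the client-side handling, assuming an MSAL.NET public client (here app), an HttpClient, a protected API URL, and requested scopes already in hand; the header parsing is deliberately simplified:

// Inside an async method. On a 401 carrying a claims challenge,
// extract the claims blob and go back to the identity provider
// for a token that satisfies the additional requirements.
using System;
using System.Net.Http;
using System.Text;
using Microsoft.Identity.Client;

HttpResponseMessage response = await httpClient.GetAsync(apiUrl);
if ((int)response.StatusCode == 401)
{
    string header = response.Headers.WwwAuthenticate.ToString();

    // Simplified extraction of the claims="..." parameter; the blob
    // is typically base64-encoded. Parse more defensively in production.
    int start = header.IndexOf("claims=\"", StringComparison.Ordinal);
    if (start >= 0)
    {
        start += "claims=\"".Length;
        string encoded = header.Substring(start, header.IndexOf('"', start) - start);
        string claims = Encoding.UTF8.GetString(Convert.FromBase64String(encoded));

        AuthenticationResult result = await app.AcquireTokenInteractive(scopes)
            .WithClaims(claims)   // request a token satisfying the challenge
            .ExecuteAsync();
    }
}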

The MSAL library provides the following sample to communicate the client capabilities:

// Build the public client and declare the "cp1" capability so that
// Azure AD knows this application can handle claims challenges.
_clientApp = PublicClientApplicationBuilder.Create(App.ClientId)
    .WithDefaultRedirectUri()
    .WithAuthority(authority)
    .WithClientCapabilities(new[] { "cp1" })
    .Build();

An API implementer can learn whether client applications can handle claims challenges by requesting the xms_cc optional claim in the application manifest.
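
The shape of the API side can be sketched as follows, assuming ASP.NET Core with JWT bearer authentication; StepUpRequired and the header contents here are illustrative placeholders, not a prescribed format:

// Hedged sketch: check whether the caller declared the "cp1"
// capability via the xms_cc claim before issuing a claims challenge.
using System.Linq;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/resource")]
public class ResourceController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // xms_cc surfaces as a claim on the validated token.
        bool canHandleChallenges =
            User.FindAll("xms_cc").Any(c => c.Value.Contains("cp1"));

        if (StepUpRequired() && canHandleChallenges)
        {
            // Answer with a claims challenge instead of a plain 401.
            Response.Headers["WWW-Authenticate"] =
                "Bearer error=\"insufficient_claims\", claims=\"<base64 claims blob>\"";
            return Unauthorized();
        }

        return Ok("protected data");
    }

    // Hypothetical policy check, e.g. conditional access not satisfied
    // or the access token revoked.
    private static bool StepUpRequired() => false;
}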


Monday, January 31, 2022

Sovereign clouds continued…

This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent discussion on sovereign clouds. This article talks about Government Community Cloud.

The difference between the Commercial, GCC, and GCC High Microsoft 365 environments is important to correctly align the compliance needs of the businesses. Commercial Microsoft 365 is the standard Microsoft 365 cloud used by Enterprise, Academia, and even home Office 365 tenants. It has the most features and tools, global availability, and the lowest prices. Since it's the default choice between the clouds, everyone qualifies and there are no validations. Some security and compliance requirements can be met here using tools like Enterprise Mobility and Security, Intune, Compliance Center, Cloud App Security, Azure Information Protection, and the Advanced Threat Protection tools. Some compliance frameworks can also reside in the commercial cloud, including HIPAA, NIST 800-53, PCI-DSS, GDPR, and CCPA, but not government or defense compliance, because the cloud shares a global infrastructure and workforce. Even some FedRAMP government compliance can be met in the commercial cloud, but it will be heavily augmented with existing tools and will require finding and patching gaps.

The Government Community Cloud is a government-focused copy of the commercial environment. It has many of the same features as the commercial cloud but keeps its datacenters within the continental United States. Compliance frameworks that can be met in the GCC include DFARS 252.204-7012, DoD SRG Level 2, FBI CJIS, and FedRAMP High. It is still insufficient for ITAR, EAR, Controlled Unclassified Information, and Controlled Defense Information handling, because the identity component and network that GCC resides on are part of Azure Commercial and are not restricted to US citizens. That said, GCC does require additional employee background checks, such as verification of US citizenship, verification of seven-year employment history, verification of highest degree attained, a seven-year criminal record and fingerprint background check, and validation against the Department of the Treasury list of groups, the Department of Commerce list of individuals, and the Department of State list.

The DoD cloud kicks it up a notch and is usable only for Department of Defense purposes and by federal contractors who meet its stringent cybersecurity and compliance requirements. GCC High is a copy of the DoD cloud, but it exists in its own sovereign environment. GCC High does not have feature parity with the commercial cloud, though it does support calling and audio conferencing. Features are added to the GCC High cloud only when they have passed the federal approval process, when a dedicated staff that has passed DoD IT-2 adjudication is available, and when the feature's inherent design does not defeat the purpose of this cloud.

Applications can continue to use modern authentication in the Azure Government cloud but not in GCC High. The identity authority can be either Azure AD Public or Azure AD Government.


Sunday, January 30, 2022

Sovereign clouds


This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent discussion on cloud protection. This article talks about sovereign clouds.  

Public clouds are general-purpose compute for all industries and commerce. Most of the service portfolio from the public cloud providers is made available in the public cloud for general acceptance, and some services are also supported in the sovereign clouds. This article discusses the role and purpose of sovereign clouds. Let's begin with a few examples: 1) the US Government cloud (GCC), 2) the China cloud, and 3) the Office 365 GCC High (US DoD) cloud. Clearly, organizations must evaluate which cloud is right for them. The differences between them mostly align with compliance. The Commercial, GCC, and GCC High Microsoft 365 environments must protect their controlled and unclassified data. These clouds offer enclosures within which the data resides and never leaves that boundary, meeting sovereignty and compliance requirements with geographical boundaries for the physical resources such as datacenters.

The individual national clouds and global Azure are separate cloud instances. Each instance is separate from the others and has its own environment and endpoints. Cloud-specific endpoints can leverage the same OAuth 2.0 and OpenID Connect protocols to work with the Azure portal, but even the identities must remain contained within that cloud. There is a separate Azure portal for each of these clouds: for example, the portal for Azure Government is https://portal.azure.us and the portal for the China national cloud is https://portal.azure.cn.

Azure Active Directory and its tenants are self-contained within these clouds. The corresponding Azure AD authentication endpoints are https://login.microsoftonline.us and https://login.partner.microsoftonline.cn, respectively.

The regions within these clouds in which to provision Azure resources also come with unique names that are not shared with regions in any of the other clouds. Since these environments are unique and different, registering applications, acquiring tokens, and calling services such as the Graph API are also different.
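
For example, a Graph call against the US Government cloud simply targets that cloud's base URL with a token from the matching authority. A hedged sketch, where result is assumed to come from an earlier MSAL token acquisition against login.microsoftonline.us:

// Same call shape as global Azure, different cloud endpoint.
using System;
using System.Net.Http;
using System.Net.Http.Headers;

var graph = new HttpClient { BaseAddress = new Uri("https://graph.microsoft.us/") };
graph.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", result.AccessToken);

string me = await graph.GetStringAsync("v1.0/me");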

Identity models change with the application and the location of the identity. There are three types: on-premises identity, cloud identity, and hybrid identity.

An on-premises identity belongs to the Active Directory hosted on-premises that most customers already use today.

Cloud identities originate, exist and are managed only in the Azure AD within each cloud.

Hybrid identities originate as on-premises identities but become hybrid through directory synchronization to Azure AD. After synchronization, they exist both on-premises and in the cloud, hence the name hybrid identity model.

Azure Government applications can use Azure Government identities, but they can also use Azure AD Public identities to authenticate to an application hosted in Azure Government. This is facilitated by the choice of the Azure AD Public or Azure AD Government identity authority.
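
A small sketch of that choice, with clientId, tenantId, and the selection flag as placeholders:

// The flag decides which directory the app authenticates against;
// either authority can back the same Azure Government-hosted app.
using Microsoft.Identity.Client;

string authority = useGovernmentIdentities
    ? $"https://login.microsoftonline.us/{tenantId}"    // Azure AD Government
    : $"https://login.microsoftonline.com/{tenantId}";  // Azure AD Public

var app = PublicClientApplicationBuilder.Create(clientId)
    .WithAuthority(authority)
    .Build();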


Saturday, January 29, 2022

 

This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent discussion on controlled folder access. This article talks about cloud protection.

Cloud protection is part of the next-generation portfolio of technologies in Microsoft Defender Antivirus that provides near-instant automated protection against new and emerging threats and vulnerabilities. The definitions are kept up to date in the cloud, but their role does not stop there. The Microsoft Intelligent Security Graph includes large sets of interconnected data as well as powerful artificial intelligence systems driven by advanced machine learning models. It works together with Microsoft Defender Antivirus to deliver accurate, real-time intelligent protection.

Cloud protection consists of the following features:

- Checking against metadata in the cloud
- Cloud protection and sample submission
- Tamper protection enforcement
- Block at first sight
- Emergency signature updates
- Endpoint detection and response in block mode
- Attack surface reduction rules
- Indicators of compromise (IoCs)

These features are enabled by default. If, for any reason, they get turned off, the organization can enforce turning them back on using Windows Management Instrumentation (WMI), Group Policy, PowerShell, or MDM configuration service providers.
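
For illustration, the same status that PowerShell's Get-MpComputerStatus reports can be read from C# through the Defender WMI provider; class and property names below are assumed from that provider and worth verifying on a given Windows build:

// Hedged sketch: query Microsoft Defender status via WMI.
// Requires the System.Management package; Windows only.
using System;
using System.Management;

var searcher = new ManagementObjectSearcher(
    @"root\Microsoft\Windows\Defender",
    "SELECT * FROM MSFT_MpComputerStatus");

foreach (ManagementObject status in searcher.Get())
{
    Console.WriteLine($"Real-time protection: {status["RealTimeProtectionEnabled"]}");
    Console.WriteLine($"Antivirus enabled:    {status["AntivirusEnabled"]}");
}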

With cloud protection, fixes for threats and vulnerabilities are delivered to Microsoft Defender Antivirus in real time, rather than waiting for the next scheduled update.

5 billion threats to devices are caught every month, and Microsoft Defender Antivirus does it under the hood. It uses multiple engines to detect and stop a wide range of threats and attacker techniques at multiple points, providing industry-leading detection and blocking capabilities. Many of these engines are local to the client. If a threat is unknown, the metadata or the file itself is sent to the cloud service.

The cloud service is built to be accurate, real-time, and intelligent. While trained models can be hosted anywhere, they run efficiently in the cloud, with input and predictions transferred between the client and the cloud. Threats are both common and sophisticated, and some are even designed to slip through protection. The earliest possible detection of a threat is necessary to ensure that not even a single endpoint is affected. With the models hosted in the cloud, protection is further enriched and made more efficient, and the latest strains of malware and attack methods are continuously incorporated into the engines.

These cloud-based engines include:

- Metadata-based ML engine – stacked sets of classifiers evaluate file types, features, sender-specific signatures, and even the files themselves, combining the results from these models into a real-time verdict that allows or blocks files before execution.
- Behavior-based ML engine – suspicious behavior sequences and advanced attack techniques are monitored to trigger analysis. The techniques span the attack chain, from exploits, elevation, and persistence all the way through to lateral movement and data exfiltration.
- AMSI-paired ML engine – pairs of client-side and cloud-side models perform advanced analysis of scripting behavior before and after execution to catch advanced threats like fileless and in-memory attacks.
- File-classification ML engine – deep neural networks examine full file contents. Suspicious files are held from running and submitted to the cloud protection service for classification; the predictions determine whether the file is allowed or blocked from execution.
- Detonation-based ML engine – suspicious files are detonated in a sandbox so that classifiers can analyze the observed behaviors and block attacks.
- Reputation ML engine – utilizes domain-expert reputation sources and models from across Microsoft to block threats linked to malicious URLs, domains, emails, and files.
- Smart rules engine – features expert-written smart rules that identify threats based on researcher expertise and collective knowledge of threats.

 

These technologies are industry-recognized and have a proven track record of customer satisfaction.

 

 

Friday, January 28, 2022

 

This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent discussion on controlled folder access. This article talks about customizing that control.

Controlled folder access helps protect valuable data from malicious apps and threats, such as ransomware. There are four ways to customize this control:

1) Protecting additional folders

2) Adding applications that should be allowed to access protected folders.

3) Allowing signed executable files to access protected folders.

4) Customizing the notification

Controlled folder access applies to system folders and default locations, and those defaults cannot be redirected to alternate locations; additional folders, however, can be added. Adding other folders can be helpful in cases where the default location has changed, and the additions can include mapped network drives. Environment variables and wildcards are also supported. These folders can be specified from the Windows Security app, with Group Policy, or with PowerShell; MDM configuration service providers can also be used to protect additional folders.
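
As a hedged illustration paralleling the earlier status query, the configured folder and application lists can be read through the MSFT_MpPreference WMI class; property names are assumed from the provider that backs PowerShell's Get-MpPreference:

// Hedged sketch: list the extra protected folders and allowed
// applications configured for controlled folder access.
using System;
using System.Management;

var searcher = new ManagementObjectSearcher(
    @"root\Microsoft\Windows\Defender",
    "SELECT * FROM MSFT_MpPreference");

foreach (ManagementObject pref in searcher.Get())
{
    var folders = pref["ControlledFolderAccessProtectedFolders"] as string[];
    var apps = pref["ControlledFolderAccessAllowedApplications"] as string[];
    Console.WriteLine("Protected folders: " + string.Join(", ", folders ?? Array.Empty<string>()));
    Console.WriteLine("Allowed apps:      " + string.Join(", ", apps ?? Array.Empty<string>()));
}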

Specific applications can also be allowed to make changes to controlled folders; otherwise, write access to files in protected folders is blocked for untrusted apps. Allowing an application is useful when a specific application legitimately needs to override controlled folder access. An application is specified by its location; if the location changes, the application is no longer considered trustworthy and is not allowed to override controlled folder access. Application exceptions can also be specified via the Windows Security app, Group Policy, PowerShell, or MDM configuration service providers.

When a rule is triggered and an application or file is blocked, the alert notifications can be customized in Microsoft Defender for Endpoint. Notifications can be in the form of emails to a group of individuals. If role-based access control is being used, recipients will only receive notifications based on the device groups that were configured in the notification rule.

Signed executable files can be allowed to access protected folders. Indicators based on certificates are used for scenarios where rules are written for attack surface reduction and controlled folder access but signed applications need to be permitted, by adding their certificates to the allow list. Indicators can also be used to block signed applications from running.

Rules can also be suppressed to avoid alerts and notifications that are noisy. A suppression rule displays its status, scope, action, the number of matching alerts, who created it, and the date when it was created.

 

 

Thursday, January 27, 2022

This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent discussion on controlled folder access. In this article, we review the access control lists for Azure Data Lake Storage Gen2.

The access control model in Azure Data Lake Storage Gen2 supports both Azure role-based access control (Azure RBAC) and POSIX-like access control lists (ACLs). Shared Key and SAS authorization grant access to a user without requiring them to have an identity in Azure Active Directory (Azure AD); when these are used, Azure RBAC and ACLs have no effect. Only when there is an identity in Azure AD can Azure RBAC and ACLs be used.

Azure RBAC and ACLs both require the user (or application) to have an identity in Azure AD. Azure RBAC gives broad and sweeping access to storage account data, such as read or write access to all the data in a storage account, while ACLs grant privilege at a finer level, such as write access to a specific directory or file. The evaluation of RBAC and ACLs for authorization decisions is not static: the access control lists and policy resolution artifacts are static, but the evaluation of an identity and its permissions in a given context is dynamic. It can even involve composition and inheritance, and it allows dynamic assignment of users to roles. So, when a person leaves one organization for another, a scheduled background job can revoke the privileges and perform cleanup.

Users are mapped via policies to roles, and they are granted different levels of access to different resources. The permissions can vary across owner_read, owner_write, owner_delete, group_read, group_write, group_delete, other_read, other_write, and other_delete, each of which can be granted or revoked. The point of specifying privilege this way is that access only needs to be granted on a need basis; role-based access control (RBAC) facilitates the principle of least privilege. A higher-privilege role such as domain administrator need not be used to work with Azure AD Connect or for deployment purposes; a deployment operator is sufficient in this regard.
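
A minimal sketch of granting such finer-grained, POSIX-style access with the Azure.Storage.Files.DataLake SDK, using placeholder account, filesystem, and directory names:

// Hedged sketch: set owner/group/other permissions on one directory,
// finer-grained than an RBAC role over the whole storage account.
using System;
using System.Collections.Generic;
using Azure.Identity;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;

var serviceClient = new DataLakeServiceClient(
    new Uri("https://youraccount.dfs.core.windows.net"),
    new DefaultAzureCredential());

DataLakeDirectoryClient directory = serviceClient
    .GetFileSystemClient("yourfilesystem")
    .GetDirectoryClient("reports");

// owner: read/write/execute, group: read/execute, other: no access
IList<PathAccessControlItem> acl =
    PathAccessControlExtensions.ParseAccessControlList("user::rwx,group::r-x,other::---");
directory.SetAccessControlList(acl);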

Role-based access control also enforces the most restrictive permission set, so a general ability to read can be taken away in specific cases.

When it comes to securing Key Vaults and storage accounts, access control policies are the commonly used technique, but role-based access control is less maintenance. There is no need to keep adding and removing access policies from the Key Vault, because they end up being transient even if they are persisted. Instead, role-based access control for Key Vault requires zero touch and automatically flows to all items in the vault.

An access control list consists of several entries called access control entries (ACEs); it can have zero or more of them. Each ACE controls or monitors access to an object by a specified trustee. There are six types of ACEs, three of which are general-purpose and applicable to all objects, while the other three are object-specific ACEs. Every ACE has a security identifier that identifies the trustee, an access mask that specifies the access rights, a flag that indicates the ACE type, and a set of bit flags that determine whether child containers or objects can inherit the ACE from the primary object to which the ACL is attached. The general-purpose ACEs include an access-denied ACE, which is used in a discretionary access control list (DACL); an access-allowed ACE, which is used to grant access rights to a trustee; and a system-audit ACE, which is used in a system access control list (SACL). The object-specific ACEs carry an object type GUID that identifies one of the following: a type of child object, a property set or property, an extended right, or a validated write.
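
A conceptual C# model of that ACE layout may make the fields concrete; the type and field names here are illustrative, not the actual Win32 structures:

// Illustrative model only: one entry in an ACL, mirroring the parts
// described above (trustee SID, access mask, type, inheritance flags,
// and the object-type GUID carried by object-specific ACEs).
using System;

enum AceType { AccessAllowed, AccessDenied, SystemAudit,
               ObjectAccessAllowed, ObjectAccessDenied, ObjectSystemAudit }

[Flags]
enum AceInheritance { None = 0, ObjectInherit = 1, ContainerInherit = 2 }

record AccessControlEntry(
    string TrusteeSid,          // security identifier of the trustee
    uint AccessMask,            // the access rights allowed, denied, or audited
    AceType Type,               // general-purpose or object-specific ACE type
    AceInheritance Inheritance, // whether children inherit this ACE
    Guid? ObjectType);          // object-specific only: child-object type,
                                // property (set), extended right, or validated write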

Active Directory contains two policy objects called a centralized authorization policy (CAP) and a centralized authorization policy rule (CAPR). These policies are based on expressions of claims and resource attributes. A CAPR targets specific resources and articulates the access control needed to satisfy a condition. CAPs apply to an organization, where the set of resources to which they apply can be called out; a CAP is a collection of CAPRs that can be applied together. A user folder will have a specific CAP comprising several CAPRs, and there will be a similar new CAP for assignment to, say, a finance folder.

 


 

Tuesday, January 25, 2022

 

This is a continuation of a series of articles on operational engineering aspects of Azure public cloud computing that included the most recent networking discussions on private connectivity. This article focuses on controlled folder access.

Controlled folder access helps protect valuable data from malicious apps and threats, such as ransomware. It protects data by checking applications against a list of known, trusted applications. Controlled folder access can be turned on using the Windows Security app, Microsoft Endpoint Configuration Manager, or Intune. Microsoft Defender for Endpoint provides detailed reporting into controlled folder access events and blocks, which forms part of the usual alert investigation scenarios. The feature works by allowing only trusted applications to access protected folders, which are specified when the access is configured. Apps that are not in the trusted list are prevented from making any changes to files inside protected folders. Applications can be added manually to the trusted list using Configuration Manager or Intune, and additional actions can be performed from the Microsoft 365 Defender portal.

Controlled folder access is important for preventing tampering with files. Ransomware encrypts files so that they cannot be used. When this access is enabled, unauthorized usage pops up as a notification, and the notification can be customized with the company details and contact information. Rules can be enabled individually to customize what criteria the feature monitors. The protected folders include common system folders (including boot sectors) and default user folders. Applications can be given access to protected folders, and audit mode can be used to evaluate how controlled folder access would impact the organization.

Evaluating attack surface reduction in an environment hinges on audit mode. In audit mode, we can enable attack surface reduction rules, exploit protection, network protection, and controlled folder access, and see a record of what would happen if each feature had been enabled. Audit mode is useful when testing how the features will work: since it does not interfere with business operations, it facilitates the study of suspicious file modifications over a certain period. The features won't block or prevent applications, scripts, or files from being modified, but all those events are recorded in the Windows Event Log. With audit mode, we can review the event log to see what effect a feature would have had if it had been enabled. Defender can help get details for each event, which is especially helpful for investigating attack surface reduction rules, and it lets us investigate issues as part of the alert timeline and investigation scenarios. Audit mode can be configured with Group Policy, PowerShell, and configuration service providers.

When the audit applies to all events, controlled folder access can be enabled in audit mode and the corresponding events can be viewed. When the audit applies to individual rules, the attack surface reduction rules can be tested and the results viewed on the attack surface reduction rules reporting page. When the audit applies to individual mitigations, exploit protection can be enabled and the corresponding events viewed. Custom views can be exported and imported, and the events described in these scenarios can also be saved as XML.
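
As a hedged sketch of that event log review: controlled folder access events (IDs 1123 for blocked and 1124 for audited, per the Defender documentation; verify on your build) can be pulled from the Defender operational log in C#:

// Read controlled folder access events from the Defender log using
// System.Diagnostics.Eventing.Reader; Windows only.
using System;
using System.Diagnostics.Eventing.Reader;

var query = new EventLogQuery(
    "Microsoft-Windows-Windows Defender/Operational",
    PathType.LogName,
    "*[System[(EventID=1123 or EventID=1124)]]");

using var reader = new EventLogReader(query);
for (EventRecord record = reader.ReadEvent(); record != null; record = reader.ReadEvent())
{
    Console.WriteLine($"{record.TimeCreated}: {record.FormatDescription()}");
}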