Thursday, July 22, 2021

This article continues the previous one on claim provisioning. A claim is a combination of a claim type, a right, and a value. A claim set is a set of claims issued by an issuing authority. A claim can be of DNS, email, hash, name, RSA, SID, SPN, system, thumbprint, URI, or X500DistinguishedName type. An evaluation context is the context in which an authorization policy is evaluated; it contains properties and claim sets, and once all the authorization policies have been evaluated, it yields an authorization context. An authorization policy is a set of rules for mapping a set of input claims to a set of output claims; when it is evaluated, the resulting authorization context has a set of claim sets and zero or more properties. An identity claim in an authorization context makes a statement about the identity of the entity. A group of authorization policies can be compared to a machine that makes keys. When the policies are evaluated, the set of claims generated is like the shape of the key. This key is stored in the form of an authorization context and can be used to open some locks; the granting of access is the opening of a lock. The identity model does not mandate how the claims should be authored, but it requires that the set of required claims be satisfied by those in the authorization context.


The identity model is based on claims organized into a collection of claim sets within an authorization context created by chained authorization policies. When a web service implements this identity model, as in Windows Communication Foundation (WCF), the service specifies its authorization policies through the ServiceAuthorizationBehavior class; they are evaluated by a ServiceAuthorizationManager, and the evaluation results in an AuthorizationContext.
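As an illustration, a custom authorization policy in WCF implements the IAuthorizationPolicy interface and maps input claims to output claims in the evaluation context. The following is a minimal sketch, assuming the System.IdentityModel assembly; the age claim type and value are hypothetical:

using System;
using System.IdentityModel.Claims;
using System.IdentityModel.Policy;

public class AgeAuthorizationPolicy : IAuthorizationPolicy
{
    public string Id { get; } = Guid.NewGuid().ToString();

    // The issuer of the claims this policy adds.
    public ClaimSet Issuer => ClaimSet.System;

    // Maps input claims in the evaluation context to output claims.
    public bool Evaluate(EvaluationContext evaluationContext, ref object state)
    {
        // Add a hypothetical "age" claim; a real policy would derive it from the input claims.
        evaluationContext.AddClaimSet(this,
            new DefaultClaimSet(Issuer,
                new Claim("http://example.org/claims/age", 21, Rights.PossessProperty)));
        return true; // true indicates this policy is done evaluating
    }
}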


Systems can not only grant access but also deny it based on the presence of claims. Such systems must review the authorization context for claims that result in denial before reviewing the claims that result in access. For example, given an identity carrying both a name and an age, where access is allowed based on the name but denied based on the age, the age claim must be visited before the name claim; only when no denial applies can access be granted solely on the name.
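A minimal sketch of this deny-before-grant ordering over an AuthorizationContext follows; the age claim type, the age limit, and the allowed name are hypothetical:

using System.IdentityModel.Claims;
using System.IdentityModel.Policy;

public static class AccessChecker
{
    public static bool IsAccessGranted(AuthorizationContext authContext)
    {
        // First pass: visit claims that can deny access (age) before any grants.
        foreach (ClaimSet claimSet in authContext.ClaimSets)
        {
            foreach (Claim claim in claimSet.FindClaims("http://example.org/claims/age", Rights.PossessProperty))
            {
                if (claim.Resource is int age && age < 18)
                {
                    return false; // a deny claim short-circuits any grant
                }
            }
        }

        // Second pass: visit claims that can grant access (name).
        foreach (ClaimSet claimSet in authContext.ClaimSets)
        {
            foreach (Claim claim in claimSet.FindClaims(ClaimTypes.Name, Rights.PossessProperty))
            {
                if ((string)claim.Resource == "alice") // hypothetical allowed name
                {
                    return true;
                }
            }
        }

        return false;
    }
}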


Services should use a service identity rather than the user principal name, and they can impersonate the user. A logged-in user on Windows holds a Kerberos ticket granted during authentication. A WCF service acting on behalf of the user can impersonate in one of two ways. The first involves a Windows token obtained from the Security Support Provider Interface (SSPI) or Kerberos authentication, which can be cached in the service for future use. The second involves a Windows token obtained from the Kerberos extensions collectively called Service-for-User (S4U).
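As a sketch of the second method, on .NET Framework a WindowsIdentity constructed from a user principal name performs an S4U logon; the UPN below is a hypothetical placeholder:

using System.Security.Principal;

// Constructing a WindowsIdentity from a UPN performs a Service-for-User (S4U) logon,
// yielding a Windows token without requiring the user's credentials.
WindowsIdentity identity = new WindowsIdentity("alice@contoso.com");

using (WindowsImpersonationContext context = identity.Impersonate())
{
    // Code here executes under the impersonated identity.
}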


Sometimes the entire service call does not need to execute within an impersonation context. In that case, the Windows identity of the caller is obtained inside the service method and the impersonation is performed imperatively. For example:


WindowsIdentity caller = ServiceSecurityContext.Current.WindowsIdentity;

using (caller.Impersonate())
{
    // Do something as the caller.
}

Wednesday, July 21, 2021

Claim provisioning:

 


Introduction: This is an interesting topic in the identity model used by applications and services. A user presents an identity to an application as a set of claims. Examples of a claim include a username, an email address, or even a fingerprint. The application itself does not resolve the identity; instead, it delegates that to an external identity system, which can be specified to the application via configuration. The delegation both provides the user information to the application and ensures that the application receives it from a trusted source. This article explains this concept.

Description: Using this technique, an application is no longer responsible for the following: 1) authenticating users, 2) storing user accounts and passwords, 3) calling membership providers such as enterprise directories to look up user information, 4) integrating with identity systems from other organizations, and 5) implementing several protocols to comply with industry standards and business practice. All the identity-related decisions are based on the claims supplied for the user. An identity is a set of attributes that describe a principal, and a claim is a piece of identity information; the more slices of information an application receives, the more complete the pie representing the individual. Instead of looking up the identity, the application merely receives the claims serialized by the external system. A security token is a serialized set of claims that is digitally signed by the issuing authority, which gives the assurance that the user did not make up the claims. An application might receive the claims via the security header of the SOAP envelope of a service. For a browser-based web application, the claims arrive through an HTTP POST from the user's browser and may later be cached in a cookie if a session is enabled. The manner of delivery varies with the clients and the medium, but in general the claims travel in a token. Open standards, including some well-known frameworks, are great at creating and reading security tokens. A security token service is the plumbing that builds, signs, and issues security tokens; it might implement several protocols for creating and reading them, but that is hidden from the application.
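For instance, once authentication is delegated, the application reasons only over the claims it receives. A minimal sketch using the System.Security.Claims types, with hypothetical claim values:

using System.Security.Claims;

// The claims would normally come from a validated security token;
// they are constructed inline here purely for illustration.
var identity = new ClaimsIdentity(new[]
{
    new Claim(ClaimTypes.Name, "alice"),
    new Claim(ClaimTypes.Email, "alice@example.org"),
}, authenticationType: "Federation");

var principal = new ClaimsPrincipal(identity);

// The application makes identity decisions from the claims alone.
string email = principal.FindFirst(ClaimTypes.Email)?.Value;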

The issuing authority for a security token does not have to be of the same type as the consumer. Domain controllers issue Kerberos tickets and X.509 certificate authorities issue chained certificates. A token that contains claims is issued by a web application or web service that is dedicated to this purpose. This plays a significant role in the identity solution.

The relying parties are the claims-aware applications and the claims-based applications. These can also be web applications and services, but they are usually distinct from the issuing authorities. When they receive a token, relying parties extract the claims from it to perform specific identity-related tasks.

Interoperability between issuing authorities and relying parties is maintained by a set of industry standards. A policy for the interchange is retrieved with the help of a metadata exchange, and the policy itself is structured. Sample standards include the Security Assertion Markup Language (SAML), an industry-recognized XML vocabulary for representing claims.

A claims-to-token conversion service is common in an identity foundation. It extracts the user principal name as a claim from heterogeneous devices, applications, and services, and generates an impersonation token granting user-level access to those entities.

Conclusion: The claims-based model enables a flexible approach to identity.

Reference: https://1drv.ms/w/s!Ashlm-Nw-wnWsUjNXSm1Aoi94OKA

 

 

Tuesday, July 20, 2021

 

Proper authorization of services onboarded for deployment.

Context:

The installer platform for services in a cloud builds them out one after another and adds the permissions those services require so that they are ready. Among the most common permissions granted are those for deployment and storage. As the permissions are granted and the service is enabled, an approval request is generated to allow the Service Admin for the service to register the new deployment of their service instance. This addition is then blessed with the help of a security configuration dashboard so that the new instance can be recognized, discovered, and used. This article investigates the work items needed to automate these steps so that the deployment can scale to all the services.

Requirement:

The tasks begin with a script to add permissions via role assignments where pre-existing roles are defined to secure new instances so that they may proceed with deployment and storage tasks. In addition to the role-based access controls, whitelisting of appropriate folders might be required from the operational side to permit authorized services selectively.

The whitelists and the role-based assignments might both be necessary since they complement each other in the current state of the deployment logic, and only the operator can take steps to permit a service on a case-by-case basis. New regions usually onboard many services at once, but the operator might decide to allow one to work while another is not yet ready, so whitelisting provides that granularity.

The action items are listed as follows:

1.      Add the storage permission to the instance via role-based assignment.

2.      Add the deployment permission to the instance via role-based assignment.

3.      Whitelist the new instance in the root folders.

4.      Generate an approval request for the service administration.

Even with the approval service integration, completing the approval might require a manual operation, but the request for the approval can now be automated. If a link is generated that can automatically allow the deployment to proceed with a click, it must carry sufficient and irrefutable proof that it was indeed issued by the sender. A JWT token might be helpful in this regard.
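A minimal sketch of issuing and validating such a signed link token, assuming the System.IdentityModel.Tokens.Jwt package, follows; the issuer, audience, and claim names are hypothetical placeholders:

using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using Microsoft.IdentityModel.Tokens;

public static class ApprovalLinkToken
{
    public static string Issue(string instanceId, byte[] signingKey)
    {
        // The signing key is a shared secret; HS256 requires a sufficiently long key.
        var handler = new JwtSecurityTokenHandler();
        var descriptor = new SecurityTokenDescriptor
        {
            Issuer = "buildout-platform",   // hypothetical issuer
            Audience = "approval-service",  // hypothetical audience
            Subject = new ClaimsIdentity(new[] { new Claim("instanceId", instanceId) }),
            Expires = DateTime.UtcNow.AddHours(1),
            SigningCredentials = new SigningCredentials(
                new SymmetricSecurityKey(signingKey), SecurityAlgorithms.HmacSha256)
        };
        return handler.WriteToken(handler.CreateToken(descriptor));
    }

    public static bool TryValidate(string token, byte[] signingKey, out string instanceId)
    {
        instanceId = null;
        var handler = new JwtSecurityTokenHandler();
        try
        {
            var principal = handler.ValidateToken(token, new TokenValidationParameters
            {
                ValidIssuer = "buildout-platform",
                ValidAudience = "approval-service",
                IssuerSigningKey = new SymmetricSecurityKey(signingKey)
            }, out _);
            instanceId = principal.FindFirst("instanceId")?.Value;
            return instanceId != null;
        }
        catch (SecurityTokenException)
        {
            return false; // signature, issuer, audience, or expiry check failed
        }
    }
}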

Observations:

The following are not in scope for this automation proposal:

1.      Any telemetry associated with those that use the current automation

2.      Any telemetry associated with those that will use the proposed automation.

3.      A mechanism to pass the identifier for the new instance to the platform that drives the buildout of services.

4.      Changes to individual buildouts for deployment of services.

Tradeoffs:

This change is a suitable candidate for taking on the platform side because it is not specific to any one buildout. The other option is to take this whitelist+role-assignment+approval-request logic on the service buildout side, but that would only be on a participation basis. Without telemetry or enforcement mechanisms, it is hard to say whether this would be beneficial and cost-effective. Also, taking it on the platform side makes one-point maintenance possible.

Conclusion:

This change can be made by taking the mentioned work items on the platform side, with the addition of a workflow or sub-workflows for the whitelist+role-assignment+approval-request steps.

Monday, July 19, 2021

 

Introduction:

Azure KeyVault stores secrets consumed by users, applications, services, and devices without the need for clients to manage them themselves. The documentation on this service offering from the Azure public cloud helps us review some of the features that can be leveraged. This article captures one aspect of its usage that is popular with DevOps but does not get much attention otherwise:

Whitelisting

Secrets are used to safeguard access to resources, and access to those resources must be whitelisted. Depending on the resources, there can be many whitelists, and subscriptions or domains can be whitelisted under root folders. We begin with a root folder that can be environment-specific and includes deployment subscriptions and storage subscriptions. Adding a subscription to this root folder under one of the categories is equivalent to whitelisting that subscription for access to resources. Similarly, there can be many paths granting access, and the subscription may need to be added to all of those paths. Even new regions can be part of a path, and adding a subscription to the new region grants access based on this whitelist. A whitelist can be followed up with an approval service to complete the addition.

Role Based Access Control

An Azure login context can be set to a given subscription, which can then be used to find the service principal and the role that needs to be allowed access to the resource. With the help of this principal, an application can be added to its operation service role. The addition of the principal to the role is done with an internal security context, not that of the logged-in principal. This security context, or the privilege to add members to roles, can be facilitated by a secret in the key-vault. Similarly, other security-group-based role assignments, which can empower different workflows in different contexts, can also be created. This completes the access control for the resources by leveraging the least-privilege policy.

Proxy services:

KeyVault can be integrated with existing applications via an HTTP pipeline. An HttpPipelinePolicy is one that can mutate the request and the received response, enabling it to work much like an Nginx handler. The term pipeline refers to the chain of policies that every request and response passes through, so a response does not have to immediately follow its request in sequence. The resources that KeyVault supports are under the "/secrets/", "/keys/", or "/certificates/" path qualifiers, which allows policies specific to those resource types.

 

Sunday, July 18, 2021

One of the most interesting aspects of using key-vault services is that the client application can treat the key-vault merely as a resource, with little or no maintenance for the scope and lifetime of that resource. This enables it to integrate with existing applications as a pipeline policy:

For example:

using System;
using System.Threading.Tasks;
using Azure.Core;
using Azure.Core.Pipeline;

public class KeyVaultProxy : HttpPipelinePolicy, IDisposable
{
    // Cache is a helper type (not shown) that stores responses keyed by URI.
    private readonly Cache _cache;

    public KeyVaultProxy()
    {
        _cache = new Cache();
    }

    public void Clear() => _cache.Clear();

    // Only GET requests under the Key Vault path qualifiers are cached.
    private static bool IsSupported(string uri) =>
        uri.IndexOf("/secrets/", StringComparison.OrdinalIgnoreCase) >= 0 ||
        uri.IndexOf("/keys/", StringComparison.OrdinalIgnoreCase) >= 0 ||
        uri.IndexOf("/certificates/", StringComparison.OrdinalIgnoreCase) >= 0;

    public override void Process(HttpMessage message, ReadOnlyMemory<HttpPipelinePolicy> pipeline) =>
        ProcessAsync(false, message, pipeline).GetAwaiter().GetResult();

    public override async ValueTask ProcessAsync(HttpMessage message, ReadOnlyMemory<HttpPipelinePolicy> pipeline) =>
        await ProcessAsync(true, message, pipeline).ConfigureAwait(false);

    private async ValueTask ProcessAsync(bool isAsync, HttpMessage message, ReadOnlyMemory<HttpPipelinePolicy> pipeline)
    {
        Request request = message.Request;
        if (request.Method == RequestMethod.Get)
        {
            string uri = request.Uri.ToUri().GetLeftPart(UriPartial.Path);
            if (IsSupported(uri))
            {
                // Serve from the cache, or forward down the pipeline and cache the response.
                message.Response = await _cache.GetOrAddAsync(isAsync, uri, null, async () =>
                {
                    await ProcessNextAsync(isAsync, message, pipeline).ConfigureAwait(false);
                    return message.Response;
                }).ConfigureAwait(false);

                return;
            }
        }

        await ProcessNextAsync(isAsync, message, pipeline).ConfigureAwait(false);
    }

    private static async ValueTask ProcessNextAsync(bool isAsync, HttpMessage message, ReadOnlyMemory<HttpPipelinePolicy> pipeline)
    {
        if (isAsync)
        {
            await ProcessNextAsync(message, pipeline).ConfigureAwait(false);
        }
        else
        {
            ProcessNext(message, pipeline);
        }
    }

    /// <inheritdoc/>
    void IDisposable.Dispose()
    {
        _cache.Dispose();
        GC.SuppressFinalize(this);
    }
}

An HttpPipelinePolicy can mutate the request and the received response, enabling it to work much like an Nginx handler. The term pipeline refers to the chain of policies that every request and response passes through, so a response does not have to immediately follow its request in sequence. The resources that KeyVault supports are under the "/secrets/", "/keys/", or "/certificates/" path qualifiers, which allows policies specific to those resource types, as the proxy above demonstrates.

Saturday, July 17, 2021

 

 

Since the secrets can vary, their scope and lifetime can also vary. A new secret can be used for a granular purpose if a naming convention for the secrets is maintained, so that it is easy to locate a secret or use the name to identify the secret and its intended use.

Another way to use a key-vault secret is in conjunction with monitoring and alerting. The key-vault provides a secure way to store keys, secrets, and certificates in the cloud, so access to it is equally worth monitoring, both to verify that the key-vault is functioning properly for its clients and to know whether the clients are accessing it correctly. If the SLA for key-vault secrets is not met, the business suffers a disruption because there are numerous usages of each secret.

Monitoring is a very helpful service in many scenarios and deserves its own elaboration, but in this section the emphasis is on key-vault monitoring. The set of events processed by the key-vault monitors includes NewVersionCreated, NearExpiry, and Expired. These events are consumed via Event Grid by logic applications, Azure functions, and Azure Service Bus. Although key-vault monitoring provides comprehensive coverage of its functionality, it does not integrate with events raised from the hardware layer when the key-vault is backed by hardware security modules. In the software plane, the key-vault can integrate with almost any cloud service by virtue of REST calls, SDKs, and the command-line interface.

The Azure key-vault portal provides the options to set up an event grid with the help of logic applications: configure the event grid trigger with the subscription parameter as the one where the key-vault exists, the resource type as Microsoft.KeyVault.vaults, and the resource name as the key-vault to be monitored. This can be displayed from the resource group view as an "Event grid system topic".
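A minimal sketch of consuming these events, assuming the Azure.Messaging.EventGrid package, follows; the handler wiring and the console output are hypothetical:

using System;
using Azure.Messaging.EventGrid;

public static class KeyVaultEventHandler
{
    // The payload is the raw Event Grid delivery, e.g. from an Azure Function trigger.
    public static void Handle(BinaryData payload)
    {
        foreach (EventGridEvent egEvent in EventGridEvent.ParseMany(payload))
        {
            switch (egEvent.EventType)
            {
                case "Microsoft.KeyVault.SecretNewVersionCreated":
                    Console.WriteLine($"New version created: {egEvent.Subject}");
                    break;
                case "Microsoft.KeyVault.SecretNearExpiry":
                    Console.WriteLine($"Secret nearing expiry: {egEvent.Subject}");
                    break;
                case "Microsoft.KeyVault.SecretExpired":
                    Console.WriteLine($"Secret expired: {egEvent.Subject}");
                    break;
            }
        }
    }
}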

There are two recovery features that can be enabled with Azure Key-Vault based on expiration time event handling. These are soft-delete and purge protection. The former is like a recycle bin that can be used to reclaim accidentally deleted keys, secrets, and certificates; if they need to be removed completely, they can be purged. The latter, purge protection, enforces the retention period so that a permanent delete, or purge, cannot occur until the retention period expires.

 

Friday, July 16, 2021

 

Using Key-Vault services from Azure Public cloud:

Introduction: The previous article introduced one aspect of using secrets from the Azure public cloud. It showed the use of a proxy for secret management to add whitelists to folders specified by path. With folders specified for different categories and subscriptionIds added to each folder, the whitelisting provided a way to complement the role-based access control. This article introduces another aspect of the key-vault: the use of its SecretClient to access the resource directly.

Description: While the DSMSProxy usage shown earlier provided categories for organizing whitelists based on SubscriptionId, ServiceId, and ServiceTreeId, the SecretClient is used primarily for getting and setting secrets in the vault. These secrets can be credentials, passwords, keys, certificates, and other forms of identity that can be persisted safely. A sample of using this client involves the following:

using System;
using System.Threading;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

// vaultUri, options, storageAccountName, sasDefinitionName, and
// s_cancellationTokenSource are assumed to be defined by the caller.

// Exclude the credential sources that do not apply in this environment.
DefaultAzureCredential credential = new DefaultAzureCredential(
    new DefaultAzureCredentialOptions
    {
        ExcludeEnvironmentCredential = true,
        ExcludeManagedIdentityCredential = true,
    });

SecretClient secretClient = new SecretClient(vaultUri, credential, options);

// Retrieve a SAS token stored as a secret named "{storageAccountName}-{sasDefinitionName}".
KeyVaultSecret sasToken = await secretClient.GetSecretAsync(
    $"{storageAccountName}-{sasDefinitionName}",
    cancellationToken: s_cancellationTokenSource.Token);

 

Since the secrets can vary, their scope and lifetime can also vary. A new secret can be used for a granular purpose as long as a naming convention for the secrets is maintained, so that it is easy to locate a secret or use the name to identify the secret and its intended use.