Thursday, September 30, 2021

Some trivia for Azure Public cloud computing

 


The Azure public cloud offers several resources for use in cloud-based solutions for customers. Even experts turn to the documentation when details are ambiguous or escape them. The following is a list of trivia that is frequently looked up but goes unnoticed in the beginning.

1.       [ NOTE: This continues from the previous article but the numbering has not been restarted. ]

2.      A policy is a default-allow, explicit-deny system focused on resource properties, both during deployment and for already existing resources. It supports cloud governance and compliance.

3.      Azure Sentinel is useful for responding to security incidents, but internal analysis via logs and metrics is best done with Azure Security Center.

4.      Network traffic can be studied with Azure Network Watcher, but Azure Monitor helps with application outages and SLA investigations.

5.      Filter network traffic with a Network security group. It is associated with a subnet. An application security group enables us to group together servers with similar functions.

6.      Azure Migrate helps with migrating compute and databases. It requires permissions on the source instance.

7.      Unlike the Azure SQL resource, the Azure SQL VM IaaS deployment gives full control over the SQL server instance and best serves migration when server and database permissions are granted on the source instance.

8.      If a database has become slow on a target instance of the SQL Server, leverage the auto-tuning feature to improve performance.

9.      A single Azure SQL instance can host many databases and many write regions. There is no need to provision a dedicated instance for every region or application. This strategy is different from that of Key Vault.

10.  When many virtual machines are deployed to a subscription, they may need to have prerequisites installed. This is easy to specify with a policy.

11.  Only one storage account can be bound to a Log Analytics workspace. Logs from many places can flow to that account, but there must be only one account.

12.  Having many subscriptions within the account helps with cost management. Resource groups help with the segregation of resources just as tags do, but billing cannot be based on tags; it must be at the subscription level. So tags can be used for querying but cannot be relied upon for costing.

13.  Accidental deletion of resources can be prevented by applying locks on higher-level containers such as resource groups and subscriptions. If any resource needs to be exempted, this can be based on policy.

14.  Role-based access control (RBAC) facilitates the principle of least privilege. A higher-privilege role such as the domain administrator need not be used to work with AD Connect or for deployment purposes; a deployment operator is sufficient in this regard.

15.  Role-based access control also enforces the most restrictive permission set, so a general ability to read can be taken away for specific cases.

16.  Role-based access control can allow dynamic assignment of users to roles. So, when a person leaves one organization for another, a scheduled background job can revoke the privileges and perform cleanup.

17.  When it comes to securing Key Vault, access control policies are the commonly used technique, but role-based access control requires less maintenance. There is no need to keep adding and removing policies from the Key Vault, because they end up being transient even if they are persisted; role-based access control for Key Vault is essentially zero-touch.
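The default-allow, explicit-deny behavior described in item 2 can be sketched in a few lines. The rule shape below is a hypothetical simplification for illustration only, not the actual Azure Policy definition language.

```python
# Minimal sketch of a default-allow, explicit-deny policy evaluator.
# The rule shape here is a made-up simplification: each rule names a
# resource property and a disallowed value.
def evaluate(resource: dict, deny_rules: list) -> bool:
    """Return True if the resource is allowed (no deny rule matches)."""
    for rule in deny_rules:
        if resource.get(rule["field"]) == rule["equals"]:
            return False  # explicit deny
    return True  # default allow

rules = [{"field": "location", "equals": "westus"}]
print(evaluate({"location": "eastus"}, rules))  # True (allowed)
print(evaluate({"location": "westus"}, rules))  # False (denied)
```

The same default-allow stance is why policies complement, rather than replace, role-based access control: they constrain resource properties, not identities.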

These are some of the details that affect cloud capabilities planning.

 


Tuesday, September 28, 2021

Some trivia for Azure Public cloud computing

 


The Azure public cloud offers several resources for use in cloud-based solutions for customers. Even experts turn to the documentation when details are ambiguous or escape them. The following is a list of trivia that is frequently looked up but goes unnoticed in the beginning.

1.       While certain resources need to be provisioned only once per application, Key Vaults can be numerous, with instances isolated per application per region. This helps overcome a high volume of requests and the Key Vault service limits.

2.       Ensure throttling is in place and that clients honor exponential back-off policies, retrying on 429 responses.

3.       Cache the secrets retrieved from KeyVault in memory and reuse them from memory whenever possible.

4.       Use AzCopy with cron jobs for high-rate data transfer, such as for logs. This cuts costs compared to Azure Data Factory and Azure Data Lake resources.

5.       Create diagnostic settings to send platform logs and metrics to different destinations. A single diagnostic setting can define no more than one of each destination. A resource can have up to 5 destination settings.

6.       If metrics must flow into logs, leverage the Azure monitor metrics REST API and import them into the Azure Monitor logs using the Azure monitor Data Collector API.

7.       Access tiers for Azure Blob storage can be hot, cool, or archive. Infrequently accessed items are best served by the archive tier. Data in the archive tier can take several hours to retrieve, depending on the specified rehydration priority.

8.       Account-level tiering – Blobs in all three tiers can co-exist within the same account.

9.       AD Connect is required to enable SSO between on-premises and cloud-based Azure Active Directory instances. A password hash mechanism will allow the cloud instance to complete authentication requests even when the on-premises instance goes down.

10.   Azure AD Conditional access can help author conditions when the password authentication must be turned off for legacy applications based on DateTime or other such criteria.

11.   Azure account lockout policy can be specified to thwart unwanted repeated authentication requests from clients

12.   Resources can be locked to prevent unexpected changes. For example, the lock level can be set to CanNotDelete or ReadOnly which is the equivalent of restricting all authorized users to the permissions granted by the Reader role.

13.   When a lock is applied at a parent scope, all resources within that scope inherit the same lock. Resources added later will inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence.

14.   Azure Blueprints can be used to assign policies governing how resource templates are deployed, which can affect multiple resources; it helps adhere to an organization’s standards, patterns, and best practices. A blueprint can consist of one or more artifacts, including policy assignments.

15.   A policy is a default-allow, explicit-deny system focused on resource properties, both during deployment and for already existing resources. It supports cloud governance and compliance.
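The exponential back-off recommendation in item 2 can be sketched as follows. The function name, retry limit, and delay values are illustrative assumptions, not an Azure SDK API (the real SDKs ship their own retry policies).

```python
import random
import time

# Hedged sketch of exponential back-off with jitter for HTTP 429
# (throttled) responses. `call` is any function returning a status code.
def with_backoff(call, max_retries=5, base_delay=1.0):
    for attempt in range(max_retries):
        status = call()
        if status != 429:
            return status
        # Sleep base * 2^attempt plus a little jitter before retrying.
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    raise RuntimeError("throttled: retries exhausted")

responses = iter([429, 429, 200])
print(with_backoff(lambda: next(responses), base_delay=0.01))  # 200
```

The jitter term matters in practice: without it, many throttled clients retry in lockstep and simply get throttled again.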
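Item 3's in-memory secret caching can be sketched like this. `fetch_secret` stands in for a real Key Vault client call, and the TTL is an assumed value, not a service requirement.

```python
import time

# Illustrative in-memory cache for secrets retrieved from Key Vault.
class SecretCache:
    def __init__(self, fetch_secret, ttl_seconds=300):
        self._fetch = fetch_secret
        self._ttl = ttl_seconds
        self._store = {}  # name -> (value, expiry time)

    def get(self, name):
        entry = self._store.get(name)
        if entry and entry[1] > time.monotonic():
            return entry[0]        # served from memory
        value = self._fetch(name)  # only hit the vault on miss or expiry
        self._store[name] = (value, time.monotonic() + self._ttl)
        return value

calls = []
cache = SecretCache(lambda name: calls.append(name) or f"secret-for-{name}")
cache.get("db-password")
cache.get("db-password")
print(len(calls))  # 1 -- the vault was called only once
```

Combined with the back-off sketch above for the misses that do reach the vault, this keeps request volume well inside the service limits.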

These are some of the details that affect cloud capabilities planning.

Monday, September 27, 2021

Effective cost management best practices in Azure public cloud computing:

 

Azure cost management best practice involves a virtuous cycle of visibility, accountability, and optimization in saving costs. This cycle can be better understood by reviewing the features through the actions that can be taken on the billing account. When the billing account is created at the time of signing up for Azure, it begins to accumulate invoices, payments, and cost-tracking measurements. There can be more than one billing account. Some accounts begin with the pay-as-you-go billing model, which can account for resource usage and give users the option to terminate resources when a threshold is exceeded. Other accounts fall under enterprise and customer agreements; these are typically signed business-to-business or, in the latter case, when the organization signs up for a customer agreement with Microsoft. Billing differs from cost management altogether: billing is the process of invoicing customers for goods or services and managing the commercial relationship, while cost management is an operational practice. It identifies costs and usage patterns, surfaces them with advanced analytics and reports based on negotiated prices, factors in reservations, and can even reflect discounts. Reports on internal and external costs, based on usage and marketplace charges, can be collectively presented via cost management features. These reports help drill down into spending as well as break it out under different categories. Some predictive analytics are also available, which help identify the resources that cost more than others.
One such feature is a reservation. An Azure resource reservation helps save money by committing to a one-year or three-year plan for multiple products. This commitment earns a discount on the resources regardless of their usage; it can significantly reduce resource costs, in some cases by up to 72% of pay-as-you-go prices. Applying the discount does not alter the runtime state of the resources, so it is merely a billing policy. The total cost of upfront and monthly reservations is the same, and there is no extra fee for choosing to pay monthly. There are advantages to buying reservations, such as an immediate reservation discount, not being charged for resources on a continuous basis, and tolerating usage fluctuations. Certain attributes of a reservation determine the resource being purchased: choices between SKUs and regions, wherever applicable, and scope can change the instance being reserved. Determining what to purchase is one of the key decisions in cost management, and any such decision can be applied on an hourly basis. While it is easy to buy reservations online via the Azure portal, the same can be done via APIs, PowerShell, SDKs, and command-line interfaces. The billing for a reservation proceeds from a subscription, but the reservation can be applied to different subscriptions, and a reservation can also be split into two reservations. If an Azure Reserved Virtual Machine Instance is purchased, then a reservation discount can be applied to that resource. At the time of purchase, two objects are created: a reservation order and a reservation. Actions such as split, merge, partial refund, or exchange create new reservations, which are included under the reservation order; they can be viewed by selecting the reservation and navigating to the order ID.
The reservation scope determines the set of resources to which the reservation applies; the billing context depends on the subscription used to buy the reservation. If the reservation scope is changed from shared to single, then only the owner can select some of their subscriptions for the reservation scope, but Enterprise Agreement and Microsoft Customer Agreement billing contributors can manage all reservations for their organizations. There are two ways to share this privilege: first, access management can be delegated for an individual reservation order by assigning the Owner role to the individual at the resource scope of the reservation order; the other way is to add a user as a billing administrator to an agreement-backed reservation. All users go to the Azure portal to manage their costs from the Cost Management and Billing section. There are some extended features available for self-service exchanges and refunds for Azure reservations, but the reservations must be similar for users to take advantage of these features. Exchanges can work both ways, from downsizing to upscaling, and self-service features are available from the portal.
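The reservation economics above can be made concrete with a small worked example. The hourly rate below is invented purely for illustration, and 72% is the best-case discount mentioned in the text, not a guaranteed rate.

```python
# Worked example of reservation savings with assumed, illustrative prices
# (not actual Azure rates).
payg_hourly = 0.50          # assumed pay-as-you-go rate, $/hour
discount = 0.72             # best-case reservation discount from the text
hours_per_year = 24 * 365   # 8760

payg_annual = payg_hourly * hours_per_year
reserved_annual = payg_annual * (1 - discount)
print(f"pay-as-you-go: ${payg_annual:.2f}/yr")   # $4380.00/yr
print(f"reserved:      ${reserved_annual:.2f}/yr")  # $1226.40/yr

# Upfront and monthly payment plans total the same amount:
monthly = reserved_annual / 12
assert abs(monthly * 12 - reserved_annual) < 1e-9
```

Note the discount applies whether or not the resource actually runs, which is why sizing the commitment to steady-state usage is the key decision.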

 

Sunday, September 26, 2021

 

Effective cost management best practices in Azure public cloud computing:

Introduction: This article continues the series of articles on Azure services, some of which can be referenced as follows: 1) Azure Gateway service, 2) SignalR service, 3) Bing API, 4) Azure Pipeline, 5) Host Integration Server (HIS), 6) Azure Healthcare API and Azure Custom Resource Provider. While the service portfolio for Azure grows in size, SKUs, and number, cloud architects are limited in utilizing the full potential of the Azure public cloud due to perceptions around cost and a poor appreciation and use of the cost management features. This article is not prescriptive in any manner, but it highlights the gap between business perceptions of the cloud and the potential for use of cloud features by architects.

Description: Any literature on public cloud computing and its billing will extol the pay-as-you-go model for cloud computing resources, its elasticity to workloads and savings during off-peak use, its granular and illustrative pricing and costs, the ability to impose limits and rates, the one-stop shop of the cost management dashboard, and a few others. Both business unit managers and cloud architects are savvy about when to use the cloud. For example, starting up a new venture or business is easier with the cloud, while established multi-tenant software-as-a-service offerings prefer hybrid cloud computing, if not private datacenters. The cost of a commodity physical desktop might appear lower, but the total cost of ownership of the corresponding virtual machine in the cloud is in fact the lower of the two. There are several benefits and corresponding savings from a virtual machine that do not show up in a comparison between the outright purchase of a desktop for less than five hundred dollars and the flat rate of ninety-three dollars per month for a corresponding virtual machine. Even when a TCO calculator is available from the cloud for a given public cloud resource, its usage and acceptance are hardly exercised to a full study. Organizations and big company departments are averse to their employees increasing their cloud budget beyond a thousand dollars a month, even when their travel and employee training expenses alone exceed that tenfold. This is not the only gap. Business owners cite that existing channels of supply and demand are becoming savvy in their competition with the cloud, while architects do not truly enforce the right practices to keep the overall budget of cloud computing expenses under a limit. Employees and resource users are indeed secured by role-based access control, but the privilege to manage subscriptions granted to those users allows them to disproportionately escalate costs.
Architects, on the other hand, complain that such costs are artificially and unnecessarily estimated, when the trend is easy to observe and corrective actions to restrict costs are easy to take. Let us look at some of the best practices available from the public cloud for all parties involved.

Saturday, September 25, 2021

 Verification of declarative resource templates: 

Problem Statement:  

Azure public cloud resource manifests follow a convention for all resource types. They are used by the resource manager to reconcile a resource instance to what’s described in the manifest. The language used to author the manifests is one that makes use of predefined resource templates, built-ins, system variables, scopes and their bindings, and configurations. The verification of the correctness of the authored manifests is done by the Azure Resource Manager. Similarly, the onboarding of a resource provider to the Azure public cloud is done by deployment templates, also called deployment artifacts. These are also validated, but the rules are limited to general purposes, while the resource provider, as well as the platform that onboards a set of services to the cloud, may require its own set of rules. This article explores the design of a validation tool for that purpose.

Solution:  
The difference between validating deployment templates and validating resource manifests is that, for resources, the final state is known beforehand and described in no uncertain terms by the manifest. The validation of the resource templates is therefore the simpler of the two. The deployment templates, on the other hand, describe workflows that are expressed as a set of orchestrated steps, and their validation goes beyond the syntax and semantics of the steps. Azure deployments are required to be idempotent and declarative so that they can aspire to be zero-touch. The more validation performed on these templates, the more predictable the workflow for their targets.

These additional validations, or custom validations as they are referred to, can be performed at various stages of the processing, beginning before any actions are taken and ending after the processing completes. This article discusses the offline validation of the deployment as described by its templates. The runtime validation can be assumed to be taken care of by validation steps interspersed with the actual steps. Consequently, we discuss only the initial and the final evaluation: when the templates are received and when they have been interpreted.

 

The static evaluation involves pattern matching. Each rule in the rule set for static evaluation can be described by patterns of text whose occurrence indicates a condition to be flagged. These patterns can be easily described by regular expressions. The validation, therefore, runs through the list of regular expressions for all the text in the templates, which usually comprises a set of files. Care must be taken to exclude those configuration files that have hardcoded values and literals, which are used toward substitution of the variables and template intrinsics at this time. They might be evaluated toward the final pass, after the templates have been interpreted.
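The static pass described above can be sketched as a list of regular-expression rules run over the template files. The rules and file contents here are hypothetical examples, not the actual rule set.

```python
import re

# Sketch of the first-pass static evaluation: each custom rule is a
# regular expression whose match flags a condition to report.
RULES = [
    (re.compile(r"(?i)password\s*[:=]\s*\S+"), "hardcoded credential"),
    (re.compile(r"http://"), "insecure transport"),
]

def validate(files: dict) -> list:
    """Scan template text and return (file, message) findings."""
    findings = []
    for name, text in files.items():
        for pattern, message in RULES:
            if pattern.search(text):
                findings.append((name, message))
    return findings

templates = {"deploy.json": '{"endpoint": "http://internal"}'}
print(validate(templates))  # [('deploy.json', 'insecure transport')]
```

Keeping each rule as a (pattern, message) pair makes the rule set data-driven, so providers can extend it without touching the scanner.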

The final evaluation requires this interpretation to proceed once the key values defined in the configurations and settings have been substituted for the variables as part of the binding for a given scope. The scopes can be evaluated one by one and the substitution performed again and again from different scope bindings. They might also involve multi-pass evaluations if the values of a given pass involve other variables that need to be resolved. Since the scopes are usually isolated from each other, the set of variables, their values, and the number of passes are finite, resulting in a cyclical scope binding and resolution that repeats until there are no more unresolved variables. This final set of resolved templates can be written out and evaluated against the rules at this stage in the same way as the initial evaluation. This concludes the design of the tool as a two-stage static evaluation with a distinct set of rules.
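The multi-pass resolution described above can be sketched as a fixed-point substitution. The `${var}` placeholder syntax is an illustrative stand-in for real template intrinsics.

```python
import re

# Sketch of multi-pass variable resolution: scope bindings supply values,
# and substitution repeats until a pass makes no further progress
# (a fixed point), which handles values that themselves contain variables.
VAR = re.compile(r"\$\{(\w+)\}")

def resolve(text: str, bindings: dict) -> str:
    while True:
        new_text = VAR.sub(
            lambda m: bindings.get(m.group(1), m.group(0)), text
        )
        if new_text == text:  # fixed point: nothing more to resolve
            return new_text
        text = new_text

scope = {"region": "eastus", "name": "app-${region}"}
print(resolve("deploy ${name}", scope))  # deploy app-eastus
```

Unresolved placeholders are left intact here, which lets the final rule pass flag them; a stricter tool could raise instead. A real implementation would also cap the pass count to guard against circular bindings.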

Sample two-phase implementation: https://1drv.ms/u/s!Ashlm-Nw-wnWhKVmHyTDt3GspSXKqQ?e=PYLr9Q