Friday, May 31, 2024

 This is a continuation of IaC shortcomings and resolutions. In this section, we focus on the deployment of Azure Machine Learning workspaces with virtual network peerings. When peerings are established, traffic from any source in one virtual network can flow to any destination in another. This is very helpful when egress must be from one virtual network. Any number of virtual networks can be peered in a hub-and-spoke model or as transit, but each arrangement has its drawbacks and advantages. The impact this has on the infrastructure for AZ ML deployments is usually not called out in deployments, and there can be quite a few surprises in the normal functioning of the workspace. Some of the previous articles explained these from the workspace side, but in this section, we describe the network side in more detail, specifically the configuration options with peering.

When a local virtual network is peered with a remote virtual network, four options are presented to the user, of which only the first is selected by default and the rest remain unselected. Unfortunately, the default settings are not always appropriate for every situation and deserve special attention. These four options are:

1. Allow local network to access remote network

2. Allow local network to receive forwarded traffic from remote network

3. Allow gateway or route server in local network to forward traffic to remote network

4. Allow local network to use remote network’s gateway or route server.

Now, local and remote are interchangeable, and these options are repeated for the opposite direction as well, with both sections of four choices each appearing on the ‘Add Peering’ page. This gives complete control over treating the local and remote networks asymmetrically rather than as a symmetric, bidirectionally equal configuration.

Now, let’s revisit the options themselves, assuming we have picked one of the networks as local. If the first option is not selected, there is no effective peering because traffic does not flow at all for the local network. This option is therefore selected by default in both sections and can be overridden selectively by the Network Contributor role, but this is seldom done.

The second option is necessary for Microsoft hosts such as login.microsoftonline.com (Microsoft Entra ID) and management.azure.com (the Azure Portal and Azure Resource Manager) to reach the local network. Failing to select it will result in incomplete handshakes during authentication as users begin to use resources in the local network.

The third and fourth options are for leveraging egress traffic to use a gateway or route server. Often, a designated third remote virtual network is chained behind the remote and local networks for its firewall. When the firewall is enabled, configuring the gateway or route server helps ensure that all resources use it as their next hop. Setting this option allows the local network to use that single gateway or route server for all chained virtual networks. Between the third and the fourth options, the difference is only whether the gateway or route server resides in the local or the remote network. Both can also be selected, with preference given to the local appliance over the remote one because the third option takes effect before the fourth.
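As a sketch, the four checkboxes map one-to-one to four flags on the peering resource in the azure-mgmt-network SDK; the subscription, resource names, and flag values below are hypothetical:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.virtual_network_peerings.begin_create_or_update(
    "my-rg", "local-vnet", "local-to-remote",
    {
        "remote_virtual_network": {"id": "/subscriptions/<subscription-id>"
            "/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/remote-vnet"},
        "allow_virtual_network_access": True,  # option 1: access remote network
        "allow_forwarded_traffic": True,       # option 2: receive forwarded traffic
        "allow_gateway_transit": False,        # option 3: local gateway forwards
        "use_remote_gateways": False,          # option 4: use remote gateway
    },
).result()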

In this way, the peering configuration has complete control over the traffic between the participating networks. Traffic can optionally be observed with the help of a network watcher. This completes the discussion around the network-side and workspace-side configuration options for ensuring full connectivity to the compute and successful code execution on those hosts.


Thursday, May 30, 2024

 

This is a summary of the book titled “Be Data Analytical: How to use analytics to turn data into value” written by Jordan Morrow and published by Kogan Page in 2023. The author is a data expert who empowers organizations by elevating their data literacy levels and supporting an ethos of curiosity and experimentation. He argues that decision-making must comprise both human intuition and data analytics. A data-driven culture that supports curiosity and experimentation must be nurtured. Descriptive analytics must capture and communicate meaningful patterns and trends. Outperform your competition with diagnostic analytics to uncover root causes. Explore multiple outcomes with predictive analytics to improve strategic decision-making. Build better descriptive, diagnostic, predictive, and prescriptive analytics in six steps. Apply your data and analytics mindset to your life.

Data-driven activities involve leveraging data and analytics to assist in decision-making, allowing individuals and organizations to make better data-informed decisions. To improve decision capabilities, progress through four levels of analytics: descriptive, diagnostic, predictive, and prescriptive. Nurture a data-driven culture that supports curiosity and experimentation, aiming to build a "data and analytics mindset" that encourages experimentation and making mistakes.

Data-driven cultures should align with data ethics, embracing transparency and questioning data rigorously. Descriptive analytics can be used to capture and communicate meaningful patterns and trends, with various roles playing a part in generating the data. Data analysts, data scientists, data architects, and leaders can all contribute to generating descriptive analytics.

To create a data-driven culture, embrace the democratization of data, giving everyone access to the information they need. By embracing data ethics and transparency and fostering a culture of data literacy, organizations can problem-solve effectively with data.

Diagnostic analytics is a crucial tool for organizations to uncover root causes and make informed decisions. It helps organizations understand the reasons behind various phenomena, enabling them to make more informed decisions. This can be achieved using tools like Tableau, Microsoft Power BI, and Qlik, as well as coding languages like R and Python. Predictive analytics is another powerful tool for strategic decision-making, allowing organizations to anticipate supply-chain challenges and forecast credit card delinquency rates. Leaders play a significant role in driving better predictive analytics, requiring data literacy and data-driven decision-making. Data science platforms like RapidMiner can be used to perform predictive analytics, allowing users to understand data visually. While not everyone in the organization will build predictive analytics, democratizing predictions can ensure the right parties have access to the necessary information. Prescriptive analytics, which uses machine learning to make recommendations and create action steps, can also be beneficial. However, it's important to remember that predictions are not prophecies and should be communicated clearly.

Prescriptive analytics is a powerful tool that can be used to make decisions based on patterns and trends. However, it is essential to maintain the human element in analytics, as it allows for the freedom to change your workout regimen and downsize your company. Everyone at your company plays a role in building these analytics, from C-suite executives to data analysts, engineers, and data scientists. To build better analytics, follow six steps:

1. Awareness: Ensure staff are familiar with the four levels of analytics, their problems, and solutions.

2. Understanding: Understand how each phase of data analytics fits within the bigger picture, helping you achieve broader goals.

3. Assessing: Evaluate personal skills and the organization as a whole, identifying gaps to fill.

4. Questioning: Improve each phase of analytics by asking questions about data quality, purpose, and future implications.

5. Learning: Gain data literacy and improve problem-solving abilities.

6. Implementation: Don't waste valuable insights and execute data-informed decisions.

Applying a data and analytics mindset to your life is crucial, as failures present opportunities to improve and refine your approach to data analytics.

Previous book summary: BookSummary99.docx

My writing: MLOps3.docx

 

Wednesday, May 29, 2024

 This is a continuation of articles on IaC shortcomings and resolutions. In this section too, we focus on the deployment of Azure Machine Learning workspaces with virtual network peering and securing them with proper connectivity. When peerings are established between virtual networks and the AZ ML workspace is secured with a subnet dedicated to the creation of compute, improper settings of private and service endpoints, firewalls, NSGs, and user-defined routes may cause quite a few surprises in the normal functioning of the workspace. For example, data scientists may encounter an error such as: “Performing interactive authentication. Please follow the instructions on the terminal. To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXYYZZAA to authenticate.” Even if they complete the device login, the resulting message will tell them they cannot be authenticated at this time. Proper configuration of the workspace and the traffic is essential to overcome this error.

One of the main deterrents to the completion of pass-through authentication is the resolution of DNS names and their IP addresses to route the reverse traffic. Since public-plane connectivity is terminated at the workspace, the traffic to and from the compute goes over the private plane. A private DNS lookup is required for the IP address of the private endpoint to the workspace. When the private endpoint is created, DNS zone records for the predetermined domain prefixes and their corresponding private IP addresses, as determined by the private endpoint, must be registered. These records are auto-registered when the endpoint is suitably created; otherwise, they must be added manually.
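When manual addition is needed, an A record can be written to the privatelink zone with the azure-mgmt-privatedns SDK; the zone, record name, and address below are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient

client = PrivateDnsManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.record_sets.create_or_update(
    "my-rg",
    "privatelink.api.azureml.ms",                        # the private DNS zone
    "A",
    "<workspace-identifier-guid>.workspace.<region>",    # relative record name
    {"ttl": 3600, "a_records": [{"ipv4_address": "10.0.0.5"}]},
)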

With just the compute and the clusters having private IP connectivity to the subnet, outbound IP connectivity can be established through the workspace in an unrestricted setting or with a firewall in a conditional-egress setting. The subnet that the compute and clusters are provisioned from must have connectivity to the storage account, key vault, and Azure container registry that are internal to the workspace. A subnet can even have its own NAT gateway so that all outbound access gets the same IP address prefix, which is very helpful for securing the destination with an IP rule on that prefix for incoming traffic. The storage account and key vault can gain access to the compute and cluster’s private IP addresses via their service endpoints, while the container registry must have a private endpoint for private-plane connectivity to the compute. A dedicated image-build compute can be created for designated image-building activities.
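A sketch of such a subnet configuration with the azure-mgmt-network SDK follows; the NAT gateway, resource names, and prefixes are hypothetical:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.subnets.begin_create_or_update(
    "my-rg", "ml-vnet", "compute-subnet",
    {
        "address_prefix": "10.0.2.0/24",
        # one outbound prefix for all compute in the subnet
        "nat_gateway": {"id": "/subscriptions/<subscription-id>/resourceGroups/my-rg"
                              "/providers/Microsoft.Network/natGateways/ml-nat"},
        # storage and key vault reachable over service endpoints
        "service_endpoints": [
            {"service": "Microsoft.Storage"},
            {"service": "Microsoft.KeyVault"},
        ],
    },
).result()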

User-defined routing and the local hosts file become pertinent when a firewall is used to secure outbound traffic. A local hosts file entry with the private IP address of the compute and a name like ‘mycomputeinstance.eastus.instances.azureml.ms’ is one option to connect to the virtual network with the workspace in it. It is also important to set user-defined routes when a firewall is used, and the default route must use ‘0.0.0.0/0’ to send all outbound internet traffic to the private IP address of the firewall as the next hop. This allows the firewall to inspect all outbound traffic, and security policies can kick in to allow or deny traffic selectively.
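A sketch of creating such a default route with the azure-mgmt-network SDK, with hypothetical names and a placeholder firewall address:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.route_tables.begin_create_or_update(
    "my-rg", "ml-udr",
    {
        "location": "eastus",
        "routes": [{
            "name": "default-to-firewall",
            "address_prefix": "0.0.0.0/0",        # all outbound internet traffic
            "next_hop_type": "VirtualAppliance",
            "next_hop_ip_address": "10.0.1.4",    # the firewall's private IP
        }],
    },
).result()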


Tuesday, May 28, 2024

 This is a summary of the book titled “The AI playbook: mastering the art of machine learning deployment” written by Eric Siegel and published by MIT Press in 2024. Prof. Siegel urges business and tech leaders to come out of their silos and collaborate to harness the full potential of machine learning models that will transform their organization and optimize their operations. He provides a step-by-step framework to do that, which includes establishing a value-driven deployment goal by leveraging “backward planning”, collaborating on a specific prediction goal, finding the right evaluation metrics, preparing the data to achieve desired outcomes, training the model to detect patterns, deploying the model such that there is full-stack buy-in from stakeholder departments in the organization, and committing to a strong ethical compass for maintaining the models.

Machine Learning (ML) opportunities require collaboration between business and data professionals. Business professionals need a holistic understanding of the ML process, including models, metrics, and data collection. Data professionals must broaden their perspective on ML to understand its potential to transform the entire business. BizML, a six-step business approach, bridges gaps between the business and data ends of an organization. It focuses on organizational execution and complements the Cross Industry Standard Process for Data Mining (CRISP-DM). Successful ML and AI projects require "backward planning" to establish a value-driven deployment goal. ML's applications extend beyond predicting business outcomes, addressing social issues like abuse or neglect. After choosing how to apply ML, stakeholders with decision-making power should approve it, focusing on the gains ML can make rather than fixating on the technology.

Business and tech leaders should collaborate to specify a prediction goal for machine learning (ML) projects. This involves defining the goal in detail, identifying viable prediction goals, and adhering to the “Law of ML Planning.” Ensure that deployment, and how the predictions will shape business operations, stays at the forefront of the project. Consider potential ethical issues, such as the potential for predictive policing models to inflate the likelihood of Black parolees being rearrested.

For new ML projects, consider creating a binary model or binary classifier that makes predictions by answering yes/no questions. Other predictive models, such as numerical or continuous models, can also be used.

Evaluating the model’s performance is crucial to determining its success. Accuracy is not the best way to measure it: a model with high accuracy may perform only marginally better than random guessing, so metrics such as “lift” and “cost” should be used to evaluate the model’s performance.

To train a machine learning (ML) model, ensure that the data is long, wide, and labeled. This will help the model accurately predict outcomes and identify patterns. Be aware that the data may be structured or unstructured, and be wary of “noise” or corrupt data that may be causing issues.

Teach the ML model to detect patterns in a sensible way, as ML algorithms learn from your data and use patterns to make predictions. Understanding your model is not always straightforward, but if the patterns your model detects and uses to make predictions are reliable, you don't necessarily need to establish causation.

Familiarize yourself with different modeling methods, such as decision trees, linear regression, and logistic regression. Investigate your models to ensure they don’t contain bugs, as some models may combine input variables in problematic ways. For example, a model designed to distinguish huskies from wolves using images may turn out to be labeling all images with snow as “wolves” and all images without snow as “huskies.”

To deploy an AI model, it's crucial to gain full-stack cooperation and buy-in from all team members within your organization. Building trust in the model is essential, as it can automate decision-making processes. Humans still play a role in some processes, and deploying a "human-in-the-loop" approach allows them to make operational decisions after integrating data from the model. Deployment risk can be mitigated by using a control group or incremental deployment. Maintaining the model is essential to prevent model drift, which can occur when the data used degrades. To avoid discrimination, ensure the model doesn't operate in a discriminatory way, aiming to equally represent different groups and avoid inferring sensitive attributes. Aspire to use data ethically and responsibly, based on empathy.


Monday, May 27, 2024

 This is a continuation of articles on IaC shortcomings and resolutions. In this section too, we focus on the deployment of Azure Machine Learning workspaces with virtual network peering and securing them with proper connectivity. When peerings are established, traffic from any source in one virtual network can flow to any destination in another. This is very helpful when egress must be from one virtual network. Any number of virtual networks can be peered in a hub-and-spoke model or as transit, but each arrangement has its drawbacks and advantages. The impact this has on the infrastructure for AZ ML deployments is usually not called out in deployments, and there can be quite a few surprises in the normal functioning of the workspace. The previous article focused on DNS name resolution and the appropriate names and IP addresses to use with A records. This article focuses on private and service endpoints, firewall, NSG, and user-defined routing.

The workspace and the compute can have public and private IP addresses, and when a virtual network is used, it is intended to isolate and secure the connectivity. This can be done in one of two ways: a managed virtual network or a customer-specified virtual network for the compute instances and clusters. Either way, the workspace can retain public IP connectivity while the compute instances and clusters can be assigned public or private connectivity independently. The latter can be provisioned with public IP connectivity disabled, using only private IP addresses from a subnet in the virtual network. It is important to note that the workspace’s IP connectivity can be independent from that of the compute and clusters because this affects end-users’ experience. The workspace can retain both a public and a private IP address simultaneously, but if it were made entirely private, then a jump server and a bastion would be needed to interact with the workspace, including its notebooks, datastores, and compute.

With just the compute and the clusters having private IP connectivity to the subnet, outbound IP connectivity can be established through the workspace in an unrestricted setting or with a firewall in a conditional-egress setting. The subnet that the compute and clusters are provisioned from must have connectivity to the storage account, key vault, and Azure container registry that are internal to the workspace. A subnet can even have its own NAT gateway so that all outbound access gets the same IP address prefix, which is very helpful for securing the destination with an IP rule on that prefix for incoming traffic. The storage account and key vault can gain access to the compute and cluster’s private IP addresses via their service endpoints, while the container registry must have a private endpoint for private-plane connectivity to the compute. A dedicated image-build compute can be created for designated image-building activities.

On the other hand, if the compute and cluster were assigned public IP connectivity, the Azure Batch service would need to be involved, and these would reach the compute and cluster’s IP addresses via a load balancer. If created without a public IP, we get a private link service to accept the inbound access from the Azure Batch Service and the Azure Machine Learning Service without a public IP address. A local hosts file entry with the private IP address of the compute and a name like ‘mycomputeinstance.eastus.instances.azureml.ms’ is one option to connect to the virtual network with the workspace in it. It is also important to set user-defined routes when a firewall is used, and the default route must use ‘0.0.0.0/0’ to send all outbound internet traffic to the private IP address of the firewall as the next hop. This allows the firewall to inspect all outbound traffic, and security policies can kick in to allow or deny traffic selectively.
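As a sketch, provisioning a compute instance without a public IP into a designated subnet can look like the following; it assumes a recent azure-ai-ml SDK that exposes enable_node_public_ip, and all resource names are hypothetical:

from azure.ai.ml import MLClient
from azure.ai.ml.entities import ComputeInstance, NetworkSettings
from azure.identity import DefaultAzureCredential

ml_client = MLClient(DefaultAzureCredential(), "<subscription-id>",
                     "my-rg", "my-workspace")

ci = ComputeInstance(
    name="mycomputeinstance",
    size="Standard_DS3_v2",
    enable_node_public_ip=False,  # private IP only, per the discussion above
    network_settings=NetworkSettings(vnet_name="ml-vnet", subnet="compute-subnet"),
)
ml_client.compute.begin_create_or_update(ci).result()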

Previous article: IaCResolutionsPart126.docx


Sunday, May 26, 2024

 This is a continuation of IaC shortcomings and resolutions. In this section, we focus on the deployment of Azure Machine Learning workspaces with virtual network peerings. When peerings are established, traffic from any source in one virtual network can flow to any destination in another. This is very helpful when egress must be from one virtual network. Any number of virtual networks can be peered in a hub-and-spoke model or as transit, but each arrangement has its drawbacks and advantages. The impact this has on the infrastructure for AZ ML deployments is usually not called out in deployments, and there can be quite a few surprises in the normal functioning of the workspace. This article explains these surprises.

First, the Azure Machine Learning workspace requires certain hosts and ports to reach it, and they are maintained by Microsoft. For example, the hosts login.microsoftonline.com and management.azure.com are necessary for Microsoft Entra ID, the Azure Portal, and Azure Resource Manager to respond to the workspace. Users of the AZ ML workspace might encounter an error such as: “Performing interactive authentication. Please follow the instructions on the terminal. To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXYYZZAA to authenticate.” Such a direction does not result in a successful authentication and leads to the dreaded You-cannot-access-this-right-now with the detailed message “Your sign-in was successful but does not meet the criteria to access this resource”. To resolve this error, ensure that the workspace can be reached back from these hosts. If the compute attached to the workspace has public IP connectivity, the host can reach it back, but if the compute were created with no public IP and was deployed to a subnet, then the reaching back occurs by name resolution. Consequently, the private endpoint associated with the workspace must be linked to the virtual networks that must have access, and the following DNS names must be registered with those zones: <workspace-identifier-guid>.workspace.<region>.privatelink.api.azureml.ms, <workspace-identifier-guid>.workspace.<region>.cert.privatelink.api.azureml.ms, *.<workspace-identifier-guid>.inference.<region>.privatelink.api.azureml.ms, and ml-<workspace-name>-<region>-<workspace-identifier-guid>.<region>.privatelink.notebooks.azure.net. Their corresponding private IP addresses can be found from the private endpoint associated with the workspace, where workspace-identifier-guid is specific to a workspace and the region such as ‘centralus’ is where the workspace is deployed. With peered networks, private DNS zones linked to those networks must resolve these names as well.
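A quick way to confirm resolution from a host inside a peered network is a small lookup script; the placeholder names below must be substituted with the actual workspace values before running:

import socket

names = [
    "<workspace-identifier-guid>.workspace.<region>.privatelink.api.azureml.ms",
    "ml-<workspace-name>-<region>-<workspace-identifier-guid>.<region>.privatelink.notebooks.azure.net",
]
for name in names:
    try:
        print(name, "->", socket.gethostbyname(name))
    except socket.gaierror as error:
        print(name, "failed:", error)  # missing A record or unlinked zone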

Second, a network watcher or similar tool must be used to verify that traffic reaches the public network addresses registered with Microsoft, which are typically well advertised in both documentation and APIs from Azure. These include CIDR ranges like 13.0.0.0/8, 51.0.0.0/8, 52.0.0.0/8, 20.0.0.0/8, and 40.0.0.0/8, and more specific ranges can be obtained via the CLI/API.
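Rather than relying on those broad /8 ranges, the specific prefixes can be pulled programmatically; a sketch using the azure-mgmt-network service tags API with a placeholder subscription:

from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# list service tags for a region and filter to the ML-related ones
tags = client.service_tags.list("centralus")
for tag in tags.values:
    if tag.name.startswith("AzureMachineLearning"):
        print(tag.name, tag.properties.address_prefixes[:5])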

Previous articles: IaCResolutionsPart125.docx


Saturday, May 25, 2024

 

This is a continuation of previous articles on IaC shortcomings and resolutions. In this section, we focus on automation involving external tools and APIs. Almost all mature DevOps pipelines rely on some automation that is facilitated by scripts and executables rather than IaC resources. The home for these scripts usually turns out to be the pipelines themselves, or they gravitate to centralized one-point maintenance destinations such as Azure Automation Account Runbooks or Azure DevOps, depending on scope and reusability.

While deciding where to save automation logic, some considerations often get ignored. For example, Runbooks run either in a sandbox environment or on a Hybrid Runbook Worker.

When the executables are downloadable from the internet, either can be used since internet connectivity is available in both. But when local resources need to be managed, such as an Azure storage account or an on-premises store, they need to be managed via a Hybrid Runbook Worker. The Hybrid Runbook Worker enables us to manage local resources that are not necessarily native to the cloud and bridges the gap between cloud-based automation and on-premises or hybrid scenarios. There are two installation platforms for the Hybrid Runbook Worker: extension-based (v2) and agent-based (v1). The former is the recommended approach because it simplifies installation and management by using a VM extension. It does not rely on the Log Analytics agent and reports directly to an Azure Monitor Log Analytics workspace. The v1 approach requires the Log Analytics agent to be installed first. Both v1 and v2 can coexist on the same machine. Beyond those choices lie limitations, and other options such as Azure DevOps might be considered instead. Webhooks and APIs are left out of this discussion, but they provide the advantage that authentication and encryption become part of each request.
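As a sketch, a minimal Python runbook for a Hybrid Runbook Worker might look like the following; it assumes the Automation account has a managed identity with reader access to the subscription, and the subscription id is a placeholder:

from azure.identity import ManagedIdentityCredential
from azure.mgmt.storage import StorageManagementClient

credential = ManagedIdentityCredential()
client = StorageManagementClient(credential, "<subscription-id>")

# a real runbook would act on these resources rather than list them
for account in client.storage_accounts.list():
    print(account.name, account.primary_endpoints.blob)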

 

Azure DevOps, aka ADO, is a cloud-based service, and it does not have restrictions on its elasticity. The DevOps-based approach is critical to rapid software development cycles. The Azure DevOps project represents a fundamental container where data is stored when added to Azure DevOps. Since it is a repository for packages and a place for users to plan, track progress, and collaborate on building workflows, it must scale with the organization. When a project is created, a team is created with the same name. For enterprises, it is better to use a collection-project-team structure, which provides teams a high level of autonomy and supports administrative tasks occurring at the appropriate level.

Some tenets for organization from ADO have parallels in Workflow management systems:

·       Projects can be added to support different business units 

·       Within a project, teams can be added 

·       Repositories and branches can be added for a team 

·       Agents, agent pools, and deployment pools to support continuous integration and deployment 

·       Many users can be managed using Azure Active Directory.

It might be tempting to use GitOps and third-party automation solutions including Jenkins-based automation, but they only introduce more variety. Consolidating resources and automation in the public cloud is the way to go.

As with all automation, it is important to register them in source control so that their maintenance can become easy. It is also important to secure the credentials with which these scripts run. Finally, lockdown of all resources in terms of network access and private planes is just as important as their accessibility for automation.

 

Previous articles: https://1drv.ms/w/s!Ashlm-Nw-wnWhO4RqzMcKLnR-r_WSw?e=kTQwQd 


Friday, May 24, 2024

 This is a continuation of previous articles on IaC shortcomings and resolutions. In this section, we focus on automation involving external tools and APIs. Almost all mature DevOps pipelines rely on some automation that is facilitated by scripts and executables rather than IaC resources. The home for these scripts usually turns out to be the pipelines themselves, or they gravitate to centralized one-point maintenance destinations such as Azure Automation Account Runbooks or Azure DevOps, depending on scope and reusability.

While deciding where to save automation logic, some considerations often get ignored. For example, Runbooks run either in a sandbox environment or on a Hybrid Runbook Worker.

When the executables are downloadable from the internet, either can be used since internet connectivity is available in both. But when local resources need to be managed, such as an Azure storage account or an on-premises store, they need to be managed via a Hybrid Runbook Worker. The Hybrid Runbook Worker enables us to manage local resources that are not necessarily native to the cloud and bridges the gap between cloud-based automation and on-premises or hybrid scenarios. There are two installation platforms for the Hybrid Runbook Worker: extension-based (v2) and agent-based (v1). The former is the recommended approach because it simplifies installation and management by using a VM extension. It does not rely on the Log Analytics agent and reports directly to an Azure Monitor Log Analytics workspace. The v1 approach requires the Log Analytics agent to be installed first. Both v1 and v2 can coexist on the same machine. Beyond those choices lie limitations, and other options such as Azure DevOps might be considered instead. Webhooks and APIs are left out of this discussion, but they provide the advantage that authentication and encryption become part of each request.

 

Azure DevOps, aka ADO, is a cloud-based service, and it does not have restrictions on its elasticity. The DevOps-based approach is critical to rapid software development cycles. The Azure DevOps project represents a fundamental container where data is stored when added to Azure DevOps. Since it is a repository for packages and a place for users to plan, track progress, and collaborate on building workflows, it must scale with the organization. When a project is created, a team is created with the same name. For enterprises, it is better to use a collection-project-team structure, which provides teams a high level of autonomy and supports administrative tasks occurring at the appropriate level.

Some tenets for organization from ADO have parallels in Workflow management systems: 

· Projects can be added to support different business units  

· Within a project, teams can be added  

· Repositories and branches can be added for a team  

· Agents, agent pools, and deployment pools to support continuous integration and deployment  

· Many users can be managed using Azure Active Directory.

It might be tempting to use GitOps and third-party automation solutions including Jenkins-based automation, but they only introduce more variety. Consolidating resources and automation in the public cloud is the way to go.

As with all automation, it is important to register them in source control so that their maintenance can become easy. It is also important to secure the credentials with which these scripts run. Finally, lockdown of all resources in terms of network access and private planes is just as important as their accessibility for automation. 



Thursday, May 23, 2024

 This is a summary of the book titled “Nonviolent or Compassionate Communication – a language of life” written by Marshall B. Rosenberg and published by PuddleDancer Press in 2003. The author explains how to express needs and feelings in ways that promote respectful, empathic interpersonal communications. This is not about conflict resolution alone but about compassionate communication. It provides a framework about human needs and emotions and ultimately leads to clearer communication, mindfulness, better relationships, and personal growth. Imperfect communication causes misunderstandings and frustrations. NVC is based on language “from the heart”. It has four components: observations, feelings, needs, and requests. We can practice it first by observing without judgment or evaluation. We express our needs without conflating them with our feelings, which can easily be manipulated by environmental factors. Too often, we blame those external factors for our feelings, but we can begin to take responsibility for our needs ourselves first before expecting others to meet them. When we express requests, we can include both needs and feelings but not demands. Checking whether the message behind our requests sank in is good practice. Applying NVC practices can help in dealing with emotions and resolving conflicts. Simple substitutions such as “I choose to” instead of “I have to” help in this regard.

Nonviolent Communication (NVC) is a method of communication that promotes interpersonal connection and empathy. It consists of four components: observations, feelings, needs, and requests. NVC is applied by observing what is happening, sharing how it makes us feel and what we need, and asking for specific actions. NVC can be applied to personal relationships, family, business, and societal conflicts.


Observation should be specific to a time and context, and evaluation should be specific to the behavior observed. Identifying and expressing feelings is crucial, but people may not always support it. It can be improved by distinguishing between emotions and thoughts, and focusing on what is enriching or not enriching our life.


Feelings result from how we receive others' actions and statements, which is a choice made in combination with our needs and expectations. If someone says something negative to us, we have four response options: blaming ourselves, blaming others, paying attention to what we feel and need, or paying attention to what others feel and need. This helps us become aware of what's happening, what people are feeling, and why.

Identifying needs is crucial for emotional liberation, as it helps individuals recognize their physical, spiritual, autonomy, and interdependence needs. This process involves three stages: emotional slavery, where one feels responsible for others' feelings, the obnoxious stage, where one rejects responsibility, and the third stage, emotional liberation, where one takes responsibility for their actions.


NVC's fourth component is requesting, which involves asking others for things that would enrich one's life. Active language is used when making requests, and specific, positive actions are requested. Emphasizing empathy and asking listeners to reflect back on their responses can make requests seem less like demands. It is important to present requests as requests rather than demands, as people may view those who make a demand as criticizing or making them feel guilty. The goal is to build a relationship based on honesty and empathy, rather than presenting a demand.

NVC principles emphasize self-expression and empathy in interactions with others. Listening with our whole being, letting go of preconceptions, and focusing on what people feel and need is crucial. Empathy can be achieved by paraphrasing what we think we've heard, correcting our understanding if we're wrong, and empathizing when someone stays silent. NVC can help develop compassion for oneself, helping to grow rather than reinforcing self-hatred. It helps connect with feelings or needs arising from past actions, allowing for self-forgiveness.


NVC also helps in expressing anger by separating the link between others and their actions. Instead of blaming others, we look inside ourselves to identify unmet needs. Making requests in clear, positive, concrete action language reveals what we really want. When angry, we choose to stop and take a breath, identify judgments, and express our feelings and needs. To get someone to listen, we need to listen to them.

NVC-style conflict resolution focuses on establishing a connection between parties, allowing productive communication and understanding of each other's perspectives. It emphasizes listening to needs, providing empathy, and proposing strategies. Mediation should not be solely intellectual, but also involve playing different roles and avoiding punishment. It helps individuals recognize their feelings and needs and avoid repeating negative judgments. NVC also encourages expressing appreciation without unconscious judgment, avoiding negative compliments that can alienate. Instead, it encourages celebrating actions that enhance well-being and identifying the needs fulfilled by others. This approach helps to move people out of fixed positions and promotes a more positive and productive resolution.


Wednesday, May 22, 2024

 This is a continuation of previous articles on IaC shortcomings and resolutions. With the example of Azure Front Door, we were explaining the use of separate origin groups for logical organization of backend and front-end endpoints. This section talks about route configuration.

A route is the primary directive to Azure Front Door for handling traffic. The route settings define an association between a domain and an origin group. Features such as patterns to match and rule sets enable granular control over traffic to the backend resources.

A routing rule is composed of two major parts, the “left-hand-side” and the “right-hand-side”. Front Door matches the incoming request to the left-hand side of the route while the right-hand side defines how the request gets processed. On the left-hand side, we have the HTTP Protocols, the domain, and the path where these properties are expanded out so that every combination of a protocol, domain and path is a potential match set. On the right-hand side, we have the routing decisions. If caching is not enabled, the requests are routed directly to the backend.

Route matching is all about the “most-specific-request” that matches the “left-hand side”. The order of matching is always protocol first, followed by the domain and then the path. Each match is a yes or a no: either there is a route with an exact match on the frontend host, or there is no such match, in which case a bad request error gets sent. After host matching comes path matching. A similar logic to frontend hosts is used to match the request path; the only difference is that, between a yes and a no, an approximate match based on a wildcard pattern is allowed. And as always, a failed match returns a bad request error.
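To make the matching order concrete, the following illustrative Python sketch (not Front Door's implementation; the route fields and names are hypothetical) walks protocol, then host, then exact path with a wildcard fallback:

def match_route(routes, protocol, host, path):
    candidates = [r for r in routes
                  if protocol in r["protocols"] and host == r["domain"]]
    if not candidates:
        return "400 Bad Request"      # no frontend host match
    exact = [r for r in candidates if r["path"] == path]
    if exact:
        return exact[0]["origin_group"]
    # most-specific wildcard match, e.g. /api/* wins over /*
    wild = [r for r in candidates if r["path"].endswith("/*")
            and path.startswith(r["path"][:-1])]
    if wild:
        return max(wild, key=lambda r: len(r["path"]))["origin_group"]
    return "400 Bad Request"          # no path match

routes = [
    {"protocols": {"https"}, "domain": "www.contoso.com",
     "path": "/api/*", "origin_group": "api-origins"},
    {"protocols": {"https"}, "domain": "www.contoso.com",
     "path": "/*", "origin_group": "web-origins"},
]
print(match_route(routes, "https", "www.contoso.com", "/api/orders"))  # api-origins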

One of the key differences between an application gateway and Front Door is this hybrid custom-domain and path-based routing combination described above. An application gateway is configured for either custom-domain-based or path-based routing in most deployments, but Front Door, by its nature of being global across different regional resource types, allows for combined custom-domain and path-based matches.

The anycast behavior of Front Door requires a comprehensive test matrix to avoid any unpredictability with the low-latency choices made by default. For a choice of host and path, there can be at least four test cases, even for a “/*” path. Predictability also involves trying those requests from various regions.

Thus, separate endpoints, routing and host header all play a role in determining the responses from the Azure Front Door. 

Previous articles: https://1drv.ms/w/s!Ashlm-Nw-wnWhO4RqzMcKLnR-r_WSw?e=kTQwQd 


#codingexercise

Position eight queens on a chess board without conflicts:

    public static void positionEightQueens(int[][] B, int[][] used, int row) throws Exception {

        if (row == 8) {

            if (isAllSafe(B)) {

                printMatrix(B, B.length, B[0].length);

            }

            return;

        }

        for (int k = 0; k < 8; k++) {

            if ( isSafe(B, row, k) && isAllSafe(B)) {

                B[row][k] = 1;

                positionEightQueens(B, used, row + 1);

                B[row][k]  = 0;

            }

        }

    }

    public static boolean isSafe(int[][] B, int p, int q) {

        int row = B.length;

        int col = B[0].length;

        for (int i = 0; i < row; i++) {

            for (int j = 0; j < col; j++) {

                if (i == p && j == q) { continue; }

                if (B[i][j] == 1) {

                    boolean notSafe = isOnDiagonal(B, p, q, i, j) ||

                            isOnVertical(B, p, q, i, j) ||

                            isOnHorizontal(B, p, q, i, j);

                    if(notSafe){

                        return false;

                    }

                }

             }

        }

        return true;

    }

    public static boolean isAllSafe(int[][] B) {

        for (int i = 0; i < B.length; i++) {

            for (int j = 0; j < B[0].length; j++) {

                if (B[i][j]  == 1 && !isSafe(B, i, j)) {

                    return false;

                }

            }

        }

        return true;

    }

    public static boolean isOnDiagonal(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k ++) {

            if (r2 - k >= 0 &&  c2 - k >= 0 && r1 == r2 - k && c1 == c2 - k) {

                return true;

            }

            if (r2 + k < row && c2 + k < col && r1 == r2 + k && c1 == c2 + k) {

                return true;

            }

            if (r2 - k >= 0 && c2 + k < col && r1 == r2 - k && c1 == c2 + k) {

                return true;

            }

            if (r2 + k < row  && c2 - k >= 0 && r1 == r2 + k && c1 == c2 - k) {

                return true;

            }

        }

        return result;

    }

    public static boolean isOnVertical(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k++) {

            if (c2 - k >= 0  && c1 == c2 - k && r1 == r2 ) {

                return true;

            }

            if (c2 + k < col && c1 == c2 + k && r1 == r2) {

                return true;

            }

        }

        return result;

    }

    public static boolean isOnHorizontal(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k++) {

            if (r2 - k >= 0  && r1 == r2 - k && c1 == c2 ) {

                return true;

            }

            if (r2 + k < row && r1 == r2 + k && c1 == c2) {

                return true;

            }

        }

        return result;

    }


Sample output (queens marked as 2):

1 1 2 1 1 1 1 1

1 1 1 1 1 2 1 1

1 1 1 2 1 1 1 1

1 2 1 1 1 1 1 1

1 1 1 1 1 1 1 2

1 1 1 1 2 1 1 1

1 1 1 1 1 1 2 1

2 1 1 1 1 1 1 1




Tuesday, May 21, 2024


This is a summary of the book titled “The Cybersecurity Playbook – How Every Leader and Employee Can Contribute to a Culture of Security” written by Allison Cerra and published by Wiley in 2019. The author draws upon years of fighting hacking and cybercrime to produce a practical checklist for employees at all levels and disciplines, such that the mindset for cybersecurity becomes part of the culture. These good habits can thwart attacks and boost preparedness. She calls on product designers to build security into network-connected products from the ground up. She calls on human resources to increase awareness, capabilities, and resilience. Security breaches must be clearly communicated, and the response plan must be detailed. Since risk management is part of the cybersecurity initiatives, the finance office must also be involved. The CISO, or Chief Information Security Officer, can coordinate and maintain the ultimate responsibility.

Corporate cybersecurity relies heavily on employee good habits, as one in five security breaches involves a negligent employee's mistake. Key practices include creating strong passwords, changing them frequently, and not reusing them. Employees should be familiar with common hacker tactics, such as phishing emails, and should check with IT security before using cloud services and tools. Encrypted thumb drives, reporting suspicious emails, and never leaving sensitive information unattended are essential.

Convincing employees to adopt these practices is challenging, as those responsible for cybersecurity often operate in the shadows. CISOs and their teams must weave safe practices and habits into the organization's culture to prepare for attacks and minimize damage. Cybersecurity preparedness requires the combined efforts of all parts of the organization, led by a CISO. The talent market for cybersecurity professionals is also struggling, with new techniques appearing daily.

Cybercriminals organize online communities on the Dark Web, sharing information and strategies. CEOs and board members must recognize that cybersecurity is a continuous escalating battle with measures and countermeasures, and no single tool can solve the problem.

Cybersecurity is a crucial investment for businesses, and it should be prioritized in every board meeting. The CISO should present and update the board on strategic risk management, explaining how the firm is protecting its most important assets. Regular updates from the CISO can help earmark security budgets for protecting these assets. Product designers must build security into network-connected products and devices from the ground up, as recent hacker attacks have highlighted the greater risk that comes with every adoption of technology. Developers should make security a priority in product design, building security features as requirements and assigning accountability for continuous security monitoring and upkeep throughout the product life cycle.

Human resources play a crucial role in building cybersecurity awareness, capabilities, and resilience. A shortage of IT security talent is prevalent, with HR professionals sourcing candidates from atypical places and with less obvious credentials, including women. HR should lead the charge in training employees in good cybersecurity practices, adjust reward programs, review personnel access to sensitive data, add questions to job interviews, and ensure every executive has at least one cybersecurity-related metric in their performance plan.

Developing and practicing a detailed communications and response plan to major security breaches is essential. Hacker stealth is a frightening aspect of cybersecurity, and firms should report breaches immediately to reduce damage and serve customers ethically. Preparing ahead of a breach involves scenario planning, developing a full communications plan, and preparing responses for tough questions.

CISOs must reframe their conversations with CFOs from a focus on ROI to one of risk management, estimating financial damage and potential avoidance of losses. CFOs should hold CISOs accountable for their past resource use and training.

CFOs and CISOs must ensure the corporate supply chain adheres to IT security standards, including outsourcing partners, suppliers, and new products or platforms. CISOs must balance policing employees with preventing a free-for-all that puts the firm at risk. They must translate threats to strategy and risks, ensuring that potential attacks put revenue and strategic objectives at risk. CISOs should also share phishing test results and maintain basic security best practices. AI is a weapon in both the company's cybersecurity arsenal and its enemies' arsenals. They must work closely with CIOs, agreeing on metrics, penetration testing schedules, and planned purchases. AI can automate threat detection but also results in more false positives, requiring resources to investigate. Organizations must develop a "sixth sense" for detecting threats and breaches, which can only be achieved when cybersecurity infuses the culture.

Previous book summary: BookSummary94.docx

Summarizing Software: SummarizerCodeSnippets.docx 


#codingexercise

Given a string of digits, count the number of subwords (contiguous subsequences) that are anagrams of any palindrome.

import java.util.HashMap;
import java.util.Map;

public class Solution {

    public static int getSubWords(String digits) {
        int count = 0;
        // consider substrings of length >= 2; single digits are trivially
        // palindromes, and the sample test below counts only longer subwords
        for (int k = 2; k <= digits.length(); k++) {
            for (int i = 0; i + k <= digits.length(); i++) {
                String word = digits.substring(i, i + k);
                if (isAnagramOfPalindrome(word)) {
                    count++;
                }
            }
        }
        return count;
    }

    public static boolean isAnagramOfPalindrome(String word) {
        Map<Character, Integer> charMap = new HashMap<>();
        for (int i = 0; i < word.length(); i++) {
            charMap.merge(word.charAt(i), 1, Integer::sum);
        }
        // an anagram of a palindrome has at most one character with an odd count
        long oddCounts = charMap.values().stream()
                .filter(x -> x % 2 == 1).count();
        return oddCounts <= 1;
    }
}

test:

14641

2


 

Monday, May 20, 2024

 Given an integer array arr of distinct integers and an integer k.


A game will be played between the first two elements of the array (i.e. arr[0] and arr[1]). In each round of the game, we compare arr[0] with arr[1], the larger integer wins and remains at position 0 and the smaller integer moves to the end of the array. The game ends when an integer wins k consecutive rounds.


Return the integer which will win the game.


It is guaranteed that there will be a winner of the game.

class Solution {

    public int getWinner(int[] arr, int k) {

        int win = 0;

        if (arr == null || arr.length < 2) { return Integer.MIN_VALUE; }

        if (k > arr.length){ 

            int max = Integer.MIN_VALUE;

            for (int i = 0; i < arr.length; i++) {

                if (arr[i] > max) {

                    max = arr[i];

                }

            }

            return max;

        }

        for (int i = 0; i < arr.length * arr.length; i++) {

            if (win >= k) { 

                break; 

            } 

            if (arr[0] > arr[1]) {

                win++;

                int temp = arr[1];

                for (int j = 2; j < arr.length; j++) {

                    arr[j-1] = arr[j];

                }

                arr[arr.length - 1] = temp;

                continue;

            }

            win = 1;

            int temp = arr[0];

            for (int j = 1; j < arr.length; j++) {

                arr[j-1] = arr[j];

            }

            arr[arr.length - 1] = temp;

        }

        return arr[0];

    }

}


Arr: 2,8,5,6,6 k=3

8,5,6,6,2

8,6,6,2,5

8,6,2,5,6

8


Sunday, May 19, 2024

 Given a string of digits, count the number of subwords (contiguous subsequences) that are anagrams of any palindrome.

import java.util.HashMap;
import java.util.Map;

public class Solution {

    public static int getSubWords(String digits) {
        int count = 0;
        // consider substrings of length >= 2; single digits are trivially
        // palindromes, and the sample test below counts only longer subwords
        for (int k = 2; k <= digits.length(); k++) {
            for (int i = 0; i + k <= digits.length(); i++) {
                String word = digits.substring(i, i + k);
                if (isAnagramOfPalindrome(word)) {
                    count++;
                }
            }
        }
        return count;
    }

    public static boolean isAnagramOfPalindrome(String word) {
        Map<Character, Integer> charMap = new HashMap<>();
        for (int i = 0; i < word.length(); i++) {
            charMap.merge(word.charAt(i), 1, Integer::sum);
        }
        // an anagram of a palindrome has at most one character with an odd count
        long oddCounts = charMap.values().stream()
                .filter(x -> x % 2 == 1).count();
        return oddCounts <= 1;
    }
}

test:

14641

2


Saturday, May 18, 2024

 Error Correction for Drone flight path management

With the popularity of modular composition, many industries are taking advantage of a fleet of functional units that can collectively function as a whole, eliminating the risks of the monoliths that used to serve the purpose earlier. Fleet refers to many drones, bots, or other software automations that are capable of a specific function, such as moving from point A to point B in space.

While single remote-controlled units can follow the handlers’ commands in real time, a fleet usually operates according to a program. Centralized logic maps an initial state to a final state and issues commands to each drone to move from the starting point to the ending point. It is easy for software to map initial coordinates for each drone in a formation on land and command them to move to final coordinates in the sky to form a specific arrangement.

Autonomous drone fleet formation avoids the need for a centralized controller that determines the final coordinates and plots a non-overlapping flight path for each unit. The suggestion is that the computation of the final position from the initial position for each unit does not need to be performed at the controller; that logic can be delegated to the autonomous units. For example, if we wanted to change a fleet forming the surface of a sphere into concentric, Saturn-like rings on a plane, then the final coordinates of each unit must be distinct, and their determination is not restricted to processing at the controller. While autonomous decisions are made by individual drones, they must remain within the overall trajectory tolerance space for the entire fleet. This can be achieved with the popular neural-net approach of softmax classification. The goodness-of-fit for a formation, the sum of squared errors, or the F-score are alternative measures. This is not a one-time adjustment to the formation but a continuous feedback loop where the deviation is monitored and suitable error corrections are performed. Correction can also be translated into optimization problems where an objective function is maximized. It is common to describe optimization problems in terms of local versus global optimization. Local optimization involves finding the optimal solution for a specific region of the search space, while global optimization involves finding the optimal solutions on problems that contain local optima. In some cases, joint local and global optimization recommendations can be computed and applied. The choice of algorithms for local search includes the Nelder-Mead algorithm, the BFGS algorithm, and the hill-climbing algorithm. Global optimization algorithms include the genetic algorithm, simulated annealing, and particle swarm optimization. The sum of squared errors is almost independent of space-time variables and gives a quantitative measure that works well as the objective function in optimization problems. Therefore, the sum of squares and the simulated annealing algorithm are good general-purpose choices that are applicable to drone formations. The formation may be a singleton; otherwise, the divisions can be treated like formations with cohesion and separation. The relationship between cohesion and separation is written as TSS = SSE + SSB, where TSS is the total sum of squares, SSE is the sum of squared errors, and SSB is the between-group sum of squares; the higher the total SSB, the more separated the formations are. Minimizing SSE (cohesion) automatically results in maximizing SSB (separation). Formations can be ranked and processed based on a silhouette coefficient that combines both cohesion and separation. This is done in three steps:

For the i'th object, calculate its average distance to all other objects in its formation and call it ai.

For the i'th object and any formation not containing the object, calculate the object's average distance to all the objects in the given formation. Use the minimum value and call it bi.

For the i'th object, the silhouette coefficient is given by (bi - ai) / max(ai, bi). A minimal sketch follows below.
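As a minimal pure-Python illustration of these three steps, assuming each unit is a coordinate tuple and labels is a list assigning each unit to a formation:

import math

def silhouette_coefficients(points, labels):
    # points: list of coordinate tuples; labels: formation id per point
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    coefficients = []
    for i, p in enumerate(points):
        # step 1: a_i, average distance to the rest of its own formation
        same = [dist(p, q) for j, q in enumerate(points)
                if j != i and labels[j] == labels[i]]
        a_i = sum(same) / len(same) if same else 0.0
        # step 2: b_i, smallest average distance to any other formation
        b_i = min(
            sum(dist(p, q) for q, l in zip(points, labels) if l == other)
            / labels.count(other)
            for other in set(labels) if other != labels[i]
        )
        # step 3: the silhouette coefficient
        coefficients.append((b_i - a_i) / max(a_i, b_i))
    return coefficients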

Sample python implementation: 

#! /usr/bin/python
# helpers such as classify, gen_proposals, get_formation_proposal, team_from,
# goodness_of_fit and the selectors are assumed to be defined elsewhere

def determining_replacement_for_team(nodes):
    return formation_centroids_of_top_formations(nodes)

def batch_formation_repeated_pass(team, nodes):
    team_formation = classify(team, nodes)
    proposals = gen_proposals(team_formation)
    formations = [(FULL, team_formation)]
    for proposal in proposals:
        formation = get_formation_proposal(proposal, team, nodes)
        formations += [(proposal, formation)]
    selections = select_top_formations(formations)
    return team_from(selections)

def select_top_formations(threshold, formations, strategy=goodness_of_fit):
    return formations_greater_than_goodness_of_fit_weighted_size(threshold, formations)


import math
import random

def annealingoptimize(domain,costf,T=10000.0,cool=0.95,step=1):

     # Initialize the values randomly 

     vec=[float(random.randint(domain[i][0],domain[i][1])) 

          for i in range(len(domain))] 

     while T>0.1: 

          # Choose one of the indices 

          i=random.randint(0,len(domain)-1) 

          # Choose a direction to change it 

          dir=random.randint(-step,step) 

          # Create a new list with one of the values changed 

          vecb=vec[:] 

          vecb[i]+=dir 

          if vecb[i]<domain[i][0]: vecb[i]=domain[i][0] 

          elif vecb[i]>domain[i][1]: vecb[i]=domain[i][1] 

          # Calculate the current cost and the new cost 

          ea=costf(vec) 

          eb=costf(vecb) 

          p=pow(math.e,(-eb-ea)/T) 

          # Is it better, or does it make the probability 

          # cutoff? 

          if(eb<ea or random.random( )<p): 

               vec=vecb 

          # Decrease the temperature 

          T=T*cool 

     return vec 
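
As a usage sketch, the cost function below is a hypothetical stand-in that scores one axis of a formation by the sum of squared deviations from a target spacing; the domain bounds and target value are illustrative, not prescribed above:

def spacing_cost(vec):
    # Sum of squared errors against a target spacing of 10 units
    # between consecutive drone positions on one axis
    target = 10.0
    return sum((vec[i + 1] - vec[i] - target) ** 2
               for i in range(len(vec) - 1))

# Each of five drone positions may range over [0, 100]
domain = [(0, 100)] * 5
best = annealingoptimize(domain, spacing_cost)
print(best)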

#codingexercise

Given a sorted integer array nums and an integer n, add/patch elements to the array such that any number in the range [1, n] inclusive can be formed by the sum of some elements in the array.
Return the minimum number of patches required.
import java.util.ArrayList;
import java.util.List;

class Solution {
    public int minPatches(int[] nums, int n) {
        int count = 0;
        // sums[v] == 1 means value v is formable as a subset sum; the empty sum is 0
        int[] sums = new int[n + 1];
        sums[0] = 1;
        List<Integer> elements = new ArrayList<>();
        for (int num : nums) elements.add(num);
        while (!allOnes(sums)) {
            // Enumerate every subset of the current elements and mark its sum
            List<List<Integer>> combinations = new ArrayList<>();
            combine(elements, new ArrayList<>(), 0, combinations);
            for (List<Integer> combination : combinations) {
                int sum = 0;
                for (int v : combination) sum += v;
                if (sum >= 0 && sum <= n) {
                    sums[sum] = 1;
                }
            }
            if (allOnes(sums)) break;
            // Patch with the smallest value in [1, n] that is not yet formable
            addLowestMissingNumber(elements, sums);
            count++;
        }
        return count;
    }

    private boolean allOnes(int[] sums) {
        for (int s : sums) { if (s == 0) return false; }
        return true;
    }

    private void combine(List<Integer> elements, List<Integer> selection,
                         int start, List<List<Integer>> combinations) {
        combinations.add(new ArrayList<>(selection));
        for (int i = start; i < elements.size(); i++) {
            selection.add(elements.get(i));
            combine(elements, selection, i + 1, combinations);
            selection.remove(selection.size() - 1);
        }
    }

    private void addLowestMissingNumber(List<Integer> elements, int[] sums) {
        for (int i = 1; i < sums.length; i++) {
            if (sums[i] == 0) { elements.add(i); return; }
        }
    }
}
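
The enumeration above recomputes all subset sums on every pass and only scales to small inputs. For contrast, here is a sketch in Python of the well-known greedy approach, which tracks the smallest sum (miss) not yet formable and patches with miss itself whenever the next array element cannot extend the range:

def min_patches(nums, n):
    miss = 1       # smallest value in [1, n] not yet formable
    count = 0
    i = 0
    while miss <= n:
        if i < len(nums) and nums[i] <= miss:
            # nums[i] extends the formable range to [1, miss + nums[i])
            miss += nums[i]
            i += 1
        else:
            # patch with miss itself, doubling the formable range
            miss += miss
            count += 1
    return count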


Friday, May 17, 2024


#codingexercise 

Given a linked list, reverse the nodes of a linked list k at a time and return its modified list. 

// Reverses the sublist [start, end), where end is the node that follows the
// last node of the group. Links the previous group's tail (master) to the
// group's new head and returns the group's new tail (the old start).
Node reverse(Node master, Node start, Node end) {
    if (start == null) return null;
    Node prev = end;        // the reversed group should end by pointing at end
    Node cur = start;
    while (cur != end) {
        Node next = cur.next;
        cur.next = prev;
        prev = cur;
        cur = next;
    }
    if (master != null) {
        master.next = prev; // prev is now the head of the reversed group
    }
    return start;           // the old start is the new tail of the group
}

public Node reverse(Node head, int k) {
    Node start = head;
    Node end = head;
    Node master = null;     // tail of the previously reversed group
    while (end != null) {
        // Advance end k nodes past start; a final group shorter than k is left as-is
        int count = 0;
        while (count < k && end != null) {
            end = end.next;
            count++;
        }
        if (count == k) {
            // The last node of the group (just before end) becomes its head
            Node groupHead = start;
            for (int i = 1; i < k; i++) {
                groupHead = groupHead.next;
            }
            master = reverse(master, start, end);
            if (start == head) {
                head = groupHead;
            }
            start = end;
        }
    }
    return head;
}


Thursday, May 16, 2024

 

This is a continuation of previous articles on IaC shortcomings and resolutions. With the example of Azure Front Door, we were explaining the use of separate origin groups for logical organization of backend and front-end endpoints. This section talks about route configuration.

A route is the primary directive to Azure Front Door to handle traffic. The route settings define an association between a domain and an origin group.  Features such as Pattern-to-match and rulesets enable granular control over traffic to the backend resources.

A routing rule is composed of two major parts, the “left-hand-side” and the “right-hand-side”. Front Door matches the incoming request to the left-hand side of the route while the right-hand side defines how the request gets processed. On the left-hand side, we have the HTTP Protocols, the domain, and the path where these properties are expanded out so that every combination of a protocol, domain and path is a potential match set. On the right-hand side, we have the routing decisions. If caching is not enabled, the requests are routed directly to the backend.

Route matching is all about the “most-specific-request” that matches the “left-hand-side”. The order of matching is always protocol first, followed by the domain and then the path. A match is always a yes or a no: yes, there is a route with an exact match on the frontend host, or no, there is no such match. In the case of a “no”, a bad request error gets sent. After host matching comes path matching. A similar logic to frontend hosts is used to match the request path. The only difference is that, in addition to an exact yes or no, an approximate match based on a wildcard pattern is allowed. And as always, a failed match returns a bad request error.
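
A conceptual sketch of this matching order in Python; this models only the behavior described above and is not Front Door's actual implementation, and the route dictionaries and fnmatch-based wildcard handling are assumptions:

from fnmatch import fnmatch

def match_route(routes, protocol, host, path):
    # Protocol first, then exact frontend host, then most-specific path
    candidates = [r for r in routes
                  if protocol in r["protocols"] and r["host"] == host]
    if not candidates:
        return "400 Bad Request"   # no route matches this frontend host
    exact = [r for r in candidates if r["path"] == path]
    if exact:
        return exact[0]
    # Fall back to the longest (most specific) wildcard pattern that matches
    wildcard = [r for r in candidates if fnmatch(path, r["path"])]
    if wildcard:
        return max(wildcard, key=lambda r: len(r["path"]))
    return "400 Bad Request"       # the host matched but no path did

routes = [
    {"protocols": {"http", "https"}, "host": "www.contoso.com", "path": "/*"},
    {"protocols": {"https"}, "host": "www.contoso.com", "path": "/images/*"},
]

print(match_route(routes, "https", "www.contoso.com", "/images/logo.png"))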

One of the key differences between an application gateway and Front Door is this hybrid custom-domain and path-based routing combination matching as described above. An application gateway uses either custom-domain-based or path-based routing in most deployments, but Front Door, by its nature of being global across different regional resource types, allows for both custom-domain and path-based matches.

The anycast behavior of Front Door requires a comprehensive test matrix to avoid any unpredictability with the low-latency choices made by default. For a choice of host and path, there are at least four test cases even for a “/*” path, as enumerated below. Predictability also involves trying those requests from various regions.
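
For instance, a minimal test matrix for a single host with a “/*” route can enumerate the protocol and path variants; the host and paths below are placeholders:

protocols = ["http", "https"]
paths = ["/", "/images/logo.png"]   # the apex path and a nested path
host = "www.contoso.com"

# Four cases at a minimum for one host with a "/*" route;
# repeat the same matrix from clients in multiple regions
test_cases = [(protocol, host, path) for protocol in protocols for path in paths]
for case in test_cases:
    print(case)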

Thus, separate endpoints, routing and host header all play a role in determining the responses from the Azure Front Door.

Previous articles: https://1drv.ms/w/s!Ashlm-Nw-wnWhO4RqzMcKLnR-r_WSw?e=kTQwQd

 

 

Wednesday, May 15, 2024

 This is a continuation of previous articles on IaC shortcomings and resolutions. With the example of Azure Front Door, we were explaining the use of separate origin groups for the logical organization of the backend. This section talks about the organization of endpoints.

An endpoint is a logical grouping of one or more routes associated with domain names. Each endpoint can be assigned a domain name, either built-in or custom. A Front Door profile can contain multiple domains, especially when they have different routes and route paths. Domains can be combined into a single endpoint when they need to be turned on or off collectively.

Azure Front Door can create managed certificates for custom domains even when DNS resolution occurs outside Azure. This makes HTTPS for end-to-end TLS easier to set up, as the signed certificate is universally validated.

The steps taken to create endpoints are similar on both the frontend and the backend. The origin should be viewed as an endpoint for the application backend. When an origin is created in an origin group, Front Door needs to know the origin type and the host header. For an Azure App Service, this could be contoso.azurewebsites.net or a custom domain. Front Door validates that the request hostname matches the hostname in the certificate provided by the origin.
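
A conceptual sketch of that hostname check in Python; this is a simplification that assumes the certificate's subject name is available as a string, and real TLS validation does more (for example, a wildcard only matches a single label):

from fnmatch import fnmatch

def hostname_matches_certificate(request_host, certificate_subject):
    # Wildcard subjects such as *.azurewebsites.net match any label here;
    # production validators restrict the wildcard to one DNS label
    return fnmatch(request_host.lower(), certificate_subject.lower())

print(hostname_matches_certificate("contoso.azurewebsites.net",
                                   "*.azurewebsites.net"))   # True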

Thus, separate endpoints, routing and host header all play a role in determining the responses from the Azure Front Door.

Previous articles: https://1drv.ms/w/s!Ashlm-Nw-wnWhO4RqzMcKLnR-r_WSw?e=kTQwQd 


Tuesday, May 14, 2024

 #codingexercise

Position eight queens on a chess board without conflicts:

    // Backtracking: place one queen per row, then recurse to the next row
    public static void positionEightQueens(int[][] B, int[][] used, int row) throws Exception {

        if (row == 8) {

            if (isAllSafe(B)) {

                printMatrix(B, B.length, B[0].length);

            }

            return;

        }

        for (int k = 0; k < 8; k++) {

            if ( isSafe(B, row, k) && isAllSafe(B)) {

                B[row][k] = 1;

                positionEightQueens(B, used, row + 1);

                B[row][k]  = 0;

            }

        }

    }

    public static boolean isSafe(int[][] B, int p, int q) {

        int row = B.length;

        int col = B[0].length;

        for (int i = 0; i < row; i++) {

            for (int j = 0; j < col; j++) {

                if (i == p && j == q) { continue; }

                if (B[i][j] == 1) {

                    boolean notSafe = isOnDiagonal(B, p, q, i, j) ||

                            isOnVertical(B, p, q, i, j) ||

                            isOnHorizontal(B, p, q, i, j);

                    if(notSafe){

                        return false;

                    }

                }

             }

        }

        return true;

    }

    public static boolean isAllSafe(int[][] B) {

        for (int i = 0; i < B.length; i++) {

            for (int j = 0; j < B[0].length; j++) {

                if (B[i][j]  == 1 && !isSafe(B, i, j)) {

                    return false;

                }

            }

        }

        return true;

    }

    public static boolean isOnDiagonal(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k ++) {

            if (r2 - k >= 0 &&  c2 - k >= 0 && r1 == r2 - k && c1 == c2 - k) {

                return true;

            }

            if (r2 + k < row && c2 + k < col && r1 == r2 + k && c1 == c2 + k) {

                return true;

            }

            if (r2 - k >= 0 && c2 + k < col && r1 == r2 - k && c1 == c2 + k) {

                return true;

            }

            if (r2 + k < row  && c2 - k >= 0 && r1 == r2 + k && c1 == c2 - k) {

                return true;

            }

        }

        return result;

    }

    // Checks an attack along the same row (the column varies)
    public static boolean isOnHorizontal(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k++) {

            if (c2 - k >= 0  && c1 == c2 - k && r1 == r2 ) {

                return true;

            }

            if (c2 + k < col && c1 == c2 + k && r1 == r2) {

                return true;

            }

        }

        return result;

    }

    // Checks an attack along the same column (the row varies)
    public static boolean isOnVertical(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k++) {

            if (r2 - k >= 0  && r1 == r2 - k && c1 == c2 ) {

                return true;

            }

            if (r2 + k < row && r1 == r2 + k && c1 == c2) {

                return true;

            }

        }

        return result;

    }


Sample output (a queen is marked with a 2):

1 1 2 1 1 1 1 1

1 1 1 1 1 2 1 1

1 1 1 2 1 1 1 1

1 2 1 1 1 1 1 1

1 1 1 1 1 1 1 2

1 1 1 1 2 1 1 1

1 1 1 1 1 1 2 1

2 1 1 1 1 1 1 1