Monday, May 27, 2024

 This is a continuation of articles on IaC shortcomings and resolutions. In this section too, we focus on the deployment of Azure Machine Learning workspaces with virtual network peering and on securing their connectivity. When a peering is established, traffic from any source in one virtual network can flow to any destination in the other. This is very helpful when egress must be consolidated in one virtual network. Any number of virtual networks can be peered in a hub-and-spoke model or for transit, each with its own advantages and drawbacks. The impact this has on the infrastructure for Azure ML deployments is usually not called out, and there can be quite a few surprises in the normal functioning of the workspace. The previous article focused on DNS name resolution and the appropriate names and IP addresses to use with A records. This article focuses on private and service endpoints, firewalls, NSGs, and user-defined routing.

The workspace and the compute can have public and private IP addresses, and when a virtual network is used, the intent is to isolate and secure the connectivity. This can be done in one of two ways: a managed virtual network, or a customer-specified virtual network for the compute instances and clusters. Either way, the workspace can retain public IP connectivity while the compute instances and clusters independently choose public or private connectivity. The latter can be provisioned with public IP connectivity disabled, using only private IP addresses from a subnet in the virtual network. It is important to note that the workspace’s IP connectivity can be independent of that of the compute and clusters, because this affects the end-user experience. The workspace can retain both a public and a private IP address simultaneously, but if it were made entirely private, then a jump server and a bastion would be needed to interact with the workspace, including its notebooks, datastores, and compute. With just the compute and clusters having private IP connectivity to the subnet, outbound connectivity can be established through the workspace in an unrestricted setting, or through a firewall in a conditional-egress setting. The subnet that the compute and clusters are provisioned from must have connectivity to the subnet hosting the storage account, key vault, and Azure container registry that are internal to the workspace. A subnet can even have its own NAT gateway so that all outbound access gets the same IP address prefix, which makes it easy to secure incoming traffic at the destination with an IP rule for that prefix. The storage account and key vault can grant access via their service endpoints to the compute and cluster’s private IP addresses, while the container registry must have a private endpoint for private-plane connectivity to the compute. A dedicated image-build compute can be created for designated image-building activities.
On the other hand, if the compute and cluster were assigned public IP connectivity, the Azure Batch service would need to be involved, and it would reach the compute and cluster’s IP addresses via a load balancer. If created without a public IP, we get a private link service that accepts inbound access from the Azure Batch service and the Azure Machine Learning service without a public IP address. A local hosts file with the private IP address of the compute and a name like ‘mycomputeinstance.eastus.instances.azureml.ms’ is an option for connecting to the virtual network that contains the workspace. It is also important to set user-defined routing when a firewall is used: the default route must have ‘0.0.0.0/0’ so that all outbound internet traffic reaches the private IP address of the firewall as its next hop. This allows the firewall to inspect all outbound traffic, and security policies can then allow or deny traffic selectively.
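The effect of that default route can be sketched with a small longest-prefix-match routine. This is an illustrative model only: the addresses, the 10.0.1.4 firewall IP, and the next-hop labels are made-up placeholders, not values from any real deployment.

```java
import java.util.HashMap;
import java.util.Map;

public class UdrDemo {
    // Convert a dotted-quad IPv4 address to a 32-bit value held in a long.
    static long toLong(String ip) {
        long v = 0;
        for (String s : ip.split("\\.")) v = (v << 8) | Integer.parseInt(s);
        return v;
    }

    // Does a CIDR like "10.0.0.0/16" contain the given address?
    static boolean contains(String cidr, String ip) {
        String[] parts = cidr.split("/");
        int prefix = Integer.parseInt(parts[1]);
        long mask = prefix == 0 ? 0 : (~0L << (32 - prefix)) & 0xFFFFFFFFL;
        return (toLong(parts[0]) & mask) == (toLong(ip) & mask);
    }

    // Pick the next hop by longest matching prefix, as a route table does.
    static String nextHop(Map<String, String> routes, String ip) {
        String best = null;
        int bestLen = -1;
        for (Map.Entry<String, String> e : routes.entrySet()) {
            int len = Integer.parseInt(e.getKey().split("/")[1]);
            if (contains(e.getKey(), ip) && len > bestLen) {
                best = e.getValue();
                bestLen = len;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, String> routes = new HashMap<>();
        routes.put("0.0.0.0/0", "10.0.1.4");    // default route to the firewall's private IP (placeholder)
        routes.put("10.0.0.0/16", "VnetLocal"); // intra-vnet traffic bypasses the firewall
        System.out.println(nextHop(routes, "52.168.1.1")); // internet-bound, goes to the firewall
        System.out.println(nextHop(routes, "10.0.2.5"));   // vnet-local
    }
}
```

Because ‘0.0.0.0/0’ matches every address at prefix length zero, it catches exactly the traffic that no more specific route claims, which is why it is the right place to steer all remaining outbound traffic at the firewall.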

Previous article: IaCResolutionsPart126.docx


Sunday, May 26, 2024

 This is a continuation of IaC shortcomings and resolutions. In this section, we focus on the deployment of Azure Machine Learning workspaces with virtual network peerings. When a peering is established, traffic from any source in one virtual network can flow to any destination in the other. This comes in very helpful when egress must be from one virtual network. Any number of virtual networks can be peered in a hub-and-spoke model or for transit, but they have their drawbacks and advantages. The impact this has on the infrastructure for Azure ML deployments is usually not called out, and there can be quite a few surprises in the normal functioning of the workspace. This article explains these.

First, the Azure Machine Learning workspace requires certain hosts and ports to reach it, and these are maintained by Microsoft. For example, the hosts login.microsoftonline.com and management.azure.com are necessary for Microsoft Entra ID, the Azure Portal, and Azure Resource Manager to respond to the workspace. Users of the Azure ML workspace might encounter an error such as: “Performing interactive authentication. Please follow the instructions on the terminal. To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXYYZZAA to authenticate.” Such a direction does not result in a successful authentication and leads to the dreaded You-cannot-access-this-right-now with the detailed message “Your sign-in was successful but does not meet the criteria to access this resource”. To resolve this error, ensure that the workspace can be reached back from these hosts. If the compute attached to the workspace has public IP connectivity, the host can reach it back, but if the compute was created with no public IP and was deployed to a subnet, then the reaching back occurs by name resolution. Consequently, the private endpoint associated with the workspace must be linked to the virtual networks that must have access, and the following DNS names must be registered with those zones: <workspace-identifier-guid>.workspace.<region>.privatelink.api.azureml.ms, <workspace-identifier-guid>.workspace.<region>.cert.privatelink.api.azureml.ms, *.<workspace-identifier-guid>.inference.<region>.privatelink.api.azureml.ms, and ml-ml-pod-innov-centralus-<workspace-identifier-guid>.<region>.privatelink.notebooks.azure.net, whose corresponding private IP addresses can be found from the private endpoint associated with the workspace, where workspace-identifier-guid is specific to a workspace and the region, such as ‘centralus’, is where the workspace is deployed. With peered networks, private DNS zones in those networks must allow reverse lookup of these names.
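As a quick sanity check, the placeholder-style zone names above can be templated from the workspace GUID and region. The GUID below is a made-up placeholder, and the authoritative name set should always be read off the workspace’s private endpoint rather than generated:

```java
public class AzmlDnsNames {
    // Template the private-link DNS names quoted above that must resolve
    // inside peered vnets. This only reproduces the documented patterns;
    // the real names come from the workspace's private endpoint.
    static String[] privateLinkNames(String guid, String region) {
        return new String[] {
            guid + ".workspace." + region + ".privatelink.api.azureml.ms",
            guid + ".workspace." + region + ".cert.privatelink.api.azureml.ms",
            "*." + guid + ".inference." + region + ".privatelink.api.azureml.ms"
        };
    }

    public static void main(String[] args) {
        // Placeholder GUID, not a real workspace identifier.
        for (String name : privateLinkNames("00000000-1111-2222-3333-444444444444", "centralus")) {
            System.out.println(name);
        }
    }
}
```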

Second, a network watcher or similar tool must be used to diagnose whether traffic reaches the public network addresses registered with Microsoft, which are typically well advertised in both documentation and APIs from Azure. These include CIDRs like 13.0.0.0/8, 20.0.0.0/8, 40.0.0.0/8, 51.0.0.0/8, and 52.0.0.0/8, and more specific ranges can be obtained via CLI/API.
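A rough triage step, before reaching for precise service tags, is to check whether a captured destination address falls under one of those broad first-octet ranges. This is only a heuristic sketch — those /8 blocks are not exclusively Azure — so treat a hit as a hint, not a verdict:

```java
public class AzureRangeCheck {
    // The broad first-octet ranges mentioned above. Fetch exact service
    // tags via CLI/API for any real allow/deny decision.
    static final int[] AZURE_FIRST_OCTETS = {13, 20, 40, 51, 52};

    // Heuristic: does the address sit in one of the advertised /8 blocks?
    static boolean mayBeAzure(String ip) {
        int first = Integer.parseInt(ip.split("\\.")[0]);
        for (int octet : AZURE_FIRST_OCTETS) {
            if (octet == first) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(mayBeAzure("52.168.1.1"));  // true
        System.out.println(mayBeAzure("192.168.1.1")); // false
    }
}
```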

Previous articles: IaCResolutionsPart125.docx


Saturday, May 25, 2024

 

This is a continuation of previous articles on IaC shortcomings and resolutions. In this section, we focus on automation involving external tools and APIs. Almost all mature DevOps pipelines rely on some automation that is facilitated by scripts and executables rather than IaC resources. The home for these scripts usually turns out to be the pipelines themselves, or they gravitate to centralized one-point maintenance destinations such as Azure Automation Account runbooks or Azure DevOps, depending on scope and reusability.

While deciding on where to save automation logic, some considerations often get ignored. For example, runbooks run either in a sandbox environment or on a Hybrid Runbook Worker.

When the executables are downloadable from the internet, either can be used, since internet connectivity is available in both. But when local resources need to be managed, such as an Azure storage account or an on-premises store, they need to be managed via a Hybrid Runbook Worker. The Hybrid Runbook Worker enables us to manage local resources that are not necessarily native to the cloud and bridges the gap between cloud-based automation and on-premises or hybrid scenarios. There are two installation platforms for the Hybrid Runbook Worker: extension-based (v2) and agent-based (v1). The former is the recommended approach because it simplifies installation and management by using a VM extension. It does not rely on the Log Analytics agent and reports directly to an Azure Monitor Log Analytics workspace. The v1 approach requires the Log Analytics agent to be installed first. Both v1 and v2 can coexist on the same machine. Beyond those choices are just limitations, and other options such as Azure DevOps might be considered instead. Webhooks and APIs are left out of this discussion, but they provide the advantage that authentication and encryption become part of each request.

 

Azure DevOps, aka ADO, is a cloud-based service, and it does not place restrictions on its elasticity. The DevOps-based approach is critical to rapid software development cycles. The Azure DevOps project represents the fundamental container where data is stored when added to Azure DevOps. Since it is a repository for packages and a place for users to plan, track progress, and collaborate on building workflows, it must scale with the organization. When a project is created, a team is created with the same name. For enterprises, it is better to use the collection-project-team structure, which provides teams a high level of autonomy and supports administrative tasks at the appropriate level.

Some tenets for organization from ADO have parallels in Workflow management systems:

·       Projects can be added to support different business units 

·       Within a project, teams can be added 

·       Repositories and branches can be added for a team 

·       Agents, agent pools, and deployment pools can be added to support continuous integration and deployment 

·       Many users can be managed using Azure Active Directory. 

It might be tempting to use GitOps and third-party automation solutions including Jenkins-based automation, but they only introduce more variety. Consolidating resources and automation in the public cloud is the way to go.

As with all automation, it is important to register these scripts in source control so that their maintenance becomes easy. It is also important to secure the credentials with which they run. Finally, locking down all resources in terms of network access and private planes is just as important as their accessibility for automation.

 

Previous articles: https://1drv.ms/w/s!Ashlm-Nw-wnWhO4RqzMcKLnR-r_WSw?e=kTQwQd 



Thursday, May 23, 2024

 This is a summary of the book titled “Nonviolent Communication: A Language of Life”, written by Marshall B. Rosenberg and published by PuddleDancer Press in 2003. The author explains how to express needs and feelings in ways that promote respectful, empathic interpersonal communication. This is not about conflict resolution alone but about compassionate communication. It provides a framework around human needs and emotions and ultimately leads to clearer communication, mindfulness, better relationships, and personal growth. Imperfect communication causes misunderstandings and frustrations. NVC is based on a language “from the heart”. It has four components: observations, feelings, needs, and requests. We can practice it first by observing without judgment or evaluation. We express our needs without tying them to feelings, which can easily be manipulated by environmental factors. Too often we blame those external factors for our feelings, but we must begin by owning and prioritizing our needs ourselves before looking to others. When we express requests, we can include both needs and feelings, but not demands. Checking whether the message of our requests sank in is good practice. Applying NVC practices can help in dealing with emotions and resolving conflicts. Simple substitutions such as “I choose to” instead of “I have to” help in this regard.

Nonviolent Communication (NVC) is a method of communication that promotes interpersonal connection and empathy. It consists of four components: observations, feelings, needs, and requests. NVC is applied by observing what is happening, sharing how it makes us feel and what we need, and asking for specific actions. NVC can be applied to personal relationships, family, business, and societal conflicts.


Observation should be specific to a time and context, and evaluation should be specific to the behavior observed. Identifying and expressing feelings is crucial, but people may not always support it. It can be improved by distinguishing between emotions and thoughts, and focusing on what is enriching or not enriching our life.


Feelings result from how we receive others' actions and statements, which is a choice made in combination with our needs and expectations. If someone says something negative to us, we have four response options: blaming ourselves, blaming others, paying attention to what we feel and need, or paying attention to what others feel and need. This helps us become aware of what's happening, what people are feeling, and why.

Identifying needs is crucial for emotional liberation, as it helps individuals recognize their physical, spiritual, autonomy, and interdependence needs. This process involves three stages: emotional slavery, where one feels responsible for others' feelings, the obnoxious stage, where one rejects responsibility, and the third stage, emotional liberation, where one takes responsibility for their actions.


NVC's fourth component is requesting, which involves asking others for things that would enrich one's life. Active language is used when making requests, and specific, positive actions are requested. Emphasizing empathy and asking listeners to reflect back on their responses can make requests seem less like demands. It is important to present requests as requests rather than demands, as people may view those who make a demand as criticizing or making them feel guilty. The goal is to build a relationship based on honesty and empathy, rather than presenting a demand.

NVC principles emphasize self-expression and empathy in interactions with others. Listening with our whole being, letting go of preconceptions, and focusing on what people feel and need is crucial. Empathy can be achieved by paraphrasing what we think we've heard, correcting our understanding if we're wrong, and empathizing when someone stays silent. NVC can help develop compassion for oneself, helping to grow rather than reinforcing self-hatred. It helps connect with feelings or needs arising from past actions, allowing for self-forgiveness.


NVC also helps in expressing anger by separating the link between others and their actions. Instead of blaming others, we look inside ourselves to identify unmet needs. Making requests in clear, positive, concrete action language reveals what we really want. When angry, we choose to stop and take a breath, identify judgments, and express our feelings and needs. To get someone to listen, we need to listen to them.

NVC-style conflict resolution focuses on establishing a connection between parties, allowing productive communication and understanding of each other's perspectives. It emphasizes listening to needs, providing empathy, and proposing strategies. Mediation should not be solely intellectual, but also involve playing different roles and avoiding punishment. It helps individuals recognize their feelings and needs and avoid repeating negative judgments. NVC also encourages expressing appreciation without unconscious judgment, avoiding negative compliments that can alienate. Instead, it encourages celebrating actions that enhance well-being and identifying the needs fulfilled by others. This approach helps to move people out of fixed positions and promotes a more positive and productive resolution.


Wednesday, May 22, 2024

 This is a continuation of previous articles on IaC shortcomings and resolutions. With the example of Azure Front Door, we were explaining the use of separate origin groups for logical organization of backend and front-end endpoints. This section talks about route configuration.

A route is the primary directive to Azure Front Door to handle traffic. The route settings define an association between a domain and an origin group.  Features such as Pattern-to-match and rulesets enable granular control over traffic to the backend resources.

A routing rule is composed of two major parts, the “left-hand-side” and the “right-hand-side”. Front Door matches the incoming request to the left-hand side of the route while the right-hand side defines how the request gets processed. On the left-hand side, we have the HTTP Protocols, the domain, and the path where these properties are expanded out so that every combination of a protocol, domain and path is a potential match set. On the right-hand side, we have the routing decisions. If caching is not enabled, the requests are routed directly to the backend.

Route matching is all about the “most-specific-request” that matches with the “left-hand-side”. The order of match is always protocol first, followed by the domain and then the path. The Match is always a yes or a no. Yes, there is a route with an exact match on the frontend host or no there is no such match. In the case of a “No”, a bad request error gets sent. After the host matching comes path matching. A similar logic to frontend hosts is used to match the request path. The only difference is that between a yes or a no, an approximate match based on wild card pattern is allowed. And as always, a failed match returns a bad request error.
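The most-specific-match behavior described above can be mimicked in a few lines. The hosts and origin-group names here are invented for illustration, and real Front Door matching has more nuance (precedence rules, case handling) than this sketch:

```java
import java.util.HashMap;
import java.util.Map;

public class FrontDoorMatch {
    // Routes keyed as "protocol|host|path" mapping to an origin group.
    // Returns the origin group, or null to model the "no route" error.
    static String match(Map<String, String> routes, String protocol, String host, String path) {
        // Exact protocol + host + path wins first.
        String exact = routes.get(protocol + "|" + host + "|" + path);
        if (exact != null) return exact;
        // Otherwise the longest wildcard path prefix, e.g. /images/* over /*.
        String best = null;
        int bestLen = -1;
        for (Map.Entry<String, String> e : routes.entrySet()) {
            String[] key = e.getKey().split("\\|");
            if (!key[0].equals(protocol) || !key[1].equals(host)) continue;
            if (key[2].endsWith("/*")) {
                String prefix = key[2].substring(0, key[2].length() - 1); // keep trailing '/'
                if (path.startsWith(prefix) && prefix.length() > bestLen) {
                    best = e.getValue();
                    bestLen = prefix.length();
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        Map<String, String> routes = new HashMap<>();
        routes.put("https|www.contoso.com|/*", "default-origins");
        routes.put("https|www.contoso.com|/images/*", "image-origins");
        System.out.println(match(routes, "https", "www.contoso.com", "/images/logo.png")); // image-origins
        System.out.println(match(routes, "https", "www.contoso.com", "/about"));           // default-origins
    }
}
```

Note how a request that matches both “/*” and “/images/*” lands on the longer pattern, while a host or protocol with no route at all returns null, mirroring the bad-request case.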

One of the key differences between an Application Gateway and Front Door is this hybrid custom-domain and path-based routing combination matching described above. Application Gateway uses either custom-domain-based or path-based routing in most deployments, but Front Door, by its nature of being global across different regional resource types, allows for both custom-domain and path-based matches. 

The anycast behavior of Front Door requires a comprehensive test matrix to avoid any unpredictability from the low-latency choices made by default. For a choice of host and path, there are at least four test cases even for a “/*” path. Predictability also involves trying those requests from various regions.

Thus, separate endpoints, routing, and the host header all play a role in determining the responses from Azure Front Door. 

Previous articles: https://1drv.ms/w/s!Ashlm-Nw-wnWhO4RqzMcKLnR-r_WSw?e=kTQwQd 


#codingexercise

Position eight queens on a chess board without conflicts:

public class EightQueens {

    public static void main(String[] args) {
        int[][] B = new int[8][8];
        positionEightQueens(B, 0);
    }

    public static void positionEightQueens(int[][] B, int row) {
        if (row == 8) {
            if (isAllSafe(B)) {
                printMatrix(B, B.length, B[0].length);
            }
            return;
        }
        for (int k = 0; k < 8; k++) {
            if (isSafe(B, row, k)) {
                B[row][k] = 1;
                positionEightQueens(B, row + 1);
                B[row][k] = 0; // backtrack
            }
        }
    }

    public static boolean isSafe(int[][] B, int p, int q) {
        for (int i = 0; i < B.length; i++) {
            for (int j = 0; j < B[0].length; j++) {
                if (i == p && j == q) { continue; }
                if (B[i][j] == 1 &&
                        (isOnDiagonal(p, q, i, j) ||
                         isOnVertical(p, q, i, j) ||
                         isOnHorizontal(p, q, i, j))) {
                    return false;
                }
            }
        }
        return true;
    }

    public static boolean isAllSafe(int[][] B) {
        for (int i = 0; i < B.length; i++) {
            for (int j = 0; j < B[0].length; j++) {
                if (B[i][j] == 1 && !isSafe(B, i, j)) {
                    return false;
                }
            }
        }
        return true;
    }

    // Two cells share a diagonal when their row and column distances are equal.
    public static boolean isOnDiagonal(int r1, int c1, int r2, int c2) {
        return Math.abs(r1 - r2) == Math.abs(c1 - c2);
    }

    // Same column.
    public static boolean isOnVertical(int r1, int c1, int r2, int c2) {
        return c1 == c2;
    }

    // Same row.
    public static boolean isOnHorizontal(int r1, int c1, int r2, int c2) {
        return r1 == r2;
    }

    // Prints the board with 2 marking a queen and 1 an empty square.
    public static void printMatrix(int[][] B, int rows, int cols) {
        for (int i = 0; i < rows; i++) {
            StringBuilder sb = new StringBuilder();
            for (int j = 0; j < cols; j++) {
                sb.append(B[i][j] + 1);
                if (j < cols - 1) { sb.append(' '); }
            }
            System.out.println(sb);
        }
        System.out.println();
    }
}


Sample output (one of the printed solutions):

1 1 2 1 1 1 1 1
1 1 1 1 1 2 1 1
1 1 1 2 1 1 1 1
1 2 1 1 1 1 1 1
1 1 1 1 1 1 1 2
1 1 1 1 2 1 1 1
1 1 1 1 1 1 2 1
2 1 1 1 1 1 1 1




Tuesday, May 21, 2024

 

 

This is the summary of the book titled “The Cybersecurity Playbook: How Every Leader and Employee Can Contribute to a Culture of Security”, written by Allison Cerra and published by Wiley in 2019. The author draws upon years of fighting hacking and cybercrime to produce a practical checklist for employees at all levels and disciplines, so that the cybersecurity mindset becomes part of the culture. These good habits can thwart attacks and boost preparedness. She calls on product designers to build security into network-connected products from the ground up. She calls on human resources to increase awareness, capabilities, and resilience. Security breaches must be clearly communicated, and the response plan must be detailed. Since risk management is part of cybersecurity initiatives, the finance office must also be involved. The CISO, or Chief Information Security Officer, can coordinate and maintain the ultimate responsibility.

Corporate cybersecurity relies heavily on employee good habits, as one in five security breaches involves a negligent employee's mistake. Key practices include creating strong passwords, changing them frequently, and not reusing them. Employees should be familiar with common hacker tactics, such as phishing emails, and should check with IT security before using cloud services and tools. Encrypted thumb drives, reporting suspicious emails, and never leaving sensitive information unattended are essential.

Convincing employees to adopt these practices is challenging, as those responsible for cybersecurity often operate in the shadows. CISOs and their teams must weave safe practices and habits into the organization's culture to prepare for attacks and minimize damage. Cybersecurity preparedness requires the combined efforts of all parts of the organization, led by a CISO. The talent market for cybersecurity professionals is also struggling, with new techniques appearing daily.

Cybercriminals organize online communities on the Dark Web, sharing information and strategies. CEOs and board members must recognize that cybersecurity is a continuous escalating battle with measures and countermeasures, and no single tool can solve the problem.

Cybersecurity is a crucial investment for businesses, and it should be prioritized in every board meeting. The CISO should present and update the board on strategic risk management, explaining how the firm is protecting its most important assets. Regular updates from the CISO can help earmark security budgets for protecting these assets. Product designers must build security into network-connected products and devices from the ground up, as recent hacker attacks have highlighted the greater risk that comes with every adoption of technology. Developers should make security a priority in product design, building security features as requirements and assigning accountability for continuous security monitoring and upkeep throughout the product life cycle.

Human resources play a crucial role in building cybersecurity awareness, capabilities, and resilience. A talent shortage in IT security talent is prevalent, with HR professionals sourcing candidates from atypical places and with less obvious credentials, such as women. HR should lead the charge in training employees in good cybersecurity practices, adjust reward programs, review personnel access to sensitive data, add questions to job interviews, and ensure every executive has at least one cybersecurity-related metric in their performance plan.

Developing and practicing a detailed communications and response plan to major security breaches is essential. Hacker stealth is a frightening aspect of cybersecurity, and firms should report breaches immediately to reduce damage and serve customers ethically. Preparing ahead of a breach involves scenario planning, developing a full communications plan, and preparing responses for tough questions.

CISOs must reframe their conversations with CFOs from a focus on ROI to one of risk management, estimating financial damage and potential avoidance of losses. CFOs should hold CISOs accountable for their past resource use and training.

CFOs and CISOs must ensure the corporate supply chain adheres to IT security standards, including outsourcing partners, suppliers, and new products or platforms. CISOs must balance policing employees with preventing a free-for-all that puts the firm at risk. They must translate threats to strategy and risks, ensuring that potential attacks put revenue and strategic objectives at risk. CISOs should also share phishing test results and maintain basic security best practices. AI is a weapon in both the company's cybersecurity arsenal and its enemies' arsenals. They must work closely with CIOs, agreeing on metrics, penetration testing schedules, and planned purchases. AI can automate threat detection but also results in more false positives, requiring resources to investigate. Organizations must develop a "sixth sense" for detecting threats and breaches, which can only be achieved when cybersecurity infuses the culture.

Previous book summary: BookSummary94.docx

Summarizing Software: SummarizerCodeSnippets.docx 


#codingexercise

Given a string of digits, count the number of subwords (consistent subsequences) that are anagrams of any palindrome.

import java.util.HashMap;
import java.util.Map;

public class Solution {

    // Count contiguous subwords of length two or more (including the whole
    // string) that are anagrams of some palindrome. Single characters are
    // trivially palindromes and are excluded, matching the sample test below.
    public static int getSubWords(String digits) {
        int count = 0;
        for (int k = 2; k <= digits.length(); k++) {
            for (int i = 0; i + k <= digits.length(); i++) {
                String word = digits.substring(i, i + k);
                if (isPalindromeAnagram(word)) {
                    count++;
                }
            }
        }
        return count;
    }

    // A word can be rearranged into a palindrome iff at most one character
    // occurs an odd number of times.
    public static boolean isPalindromeAnagram(String word) {
        Map<Character, Integer> charMap = new HashMap<>();
        for (int i = 0; i < word.length(); i++) {
            charMap.merge(word.charAt(i), 1, Integer::sum);
        }
        long oddCounts = charMap.values().stream().filter(x -> x % 2 == 1).count();
        return oddCounts <= 1;
    }

    public static void main(String[] args) {
        System.out.println(getSubWords("14641"));
    }
}

test:

input: 14641
output: 2