Monday, September 30, 2024

 

Data backup of individual databases from relational database servers in the cloud.

Many Azure infrastructure deployments that involve a MySQL or MSSQL server instance tend to rely on the features of the cloud resource for backup and recovery. While these resources support continuous replication to another instance and scheduled backups of the same instance, there is no built-in feature to back up an individual database. Application-engineering teams that rely on the data in a single database must therefore turn to customized automation, and the infrastructure-engineering teams that deploy these database servers on their behalf have a wide range of options for writing it. Most options leverage the command-line utilities from the database publishers. A utility like mysqldump can back up and restore individual databases, and most automations archive those backup files on storage accounts or S3 buckets that are designed for durability.

Depending on where the application-engineering teams host their applications, the automation can leverage the same hosts for creating backups and performing restores; for example, an infrastructure that uses Azure Kubernetes Service (AKS) to host the applications can also run the backup jobs. If the application-engineering teams prefer GitOps over Azure DevOps for transmitting the secrets to the command-line utility, one option is to host the automation as a GitHub Actions workflow. This article demonstrates how to do that with two files: a GitHub Actions workflow that deploys an AKS job to back up or restore a single database, and a manifest describing the AKS job. The automation parameterizes the hosts, the source and destination databases, and the storage accounts so that the process can be repeated for various consumers.

name: "MySql Backup Restore"

on:

  push:

    branches:

      - main

    paths:

      - 'mysqlconfig/U2-Prepay-Non-Prod/**'

defaults:

  run:

    shell: bash

 

# Enforcing OIDC for authentication

permissions:

  actions: read

  checks: read

  contents: read

  deployments: read

  id-token: write

  issues: read

  discussions: read

  packages: read

  pages: read

  pull-requests: write

  repository-projects: read

  security-events: read

  statuses: read

 

jobs:

  pre-deploy:

    name: Prepay NonProd

    runs-on: [ uhg-runner ]

    environment: prod

    strategy:

      matrix:

        subscriptions: [${{ inputs.subscription }}]

    env:

      ARM_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}

      ARM_USE_OIDC: true

      ARM_SUBSCRIPTION_ID: ${{ inputs.subscription }}

      ARM_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}

      GH_PAT: ${{ secrets.GH_PAT }}

    steps:

      #Login to Azure

      - name: 'Az CLI login'

        uses: azure/login@v1

        with:

          client-id: ${{ secrets.AZURE_CLIENT_ID }}

          tenant-id: ${{ secrets.AZURE_TENANT_ID }}

          subscription-id: ${{ inputs.subscription }}

         

      - name: 'Action Checkout'

        uses: actions/checkout@v3

        with:

          fetch-depth: 0   

      - name: 'Git Config URL'

        run: git config --global url."https://${{ secrets.GH_PAT }}@github.com".insteadOf https://github.com

      - name: 'Setup working Directory'

        run: |

             WORK_DIR="./"

             echo "WORK_DIR=${WORK_DIR}" >> $GITHUB_ENV

             echo "Current working directory is $WORK_DIR"   

         

      - name: Get git changes

        id: changes

        run: |

          jsonfile=$(git diff --name-only --diff-filter=A ${{ github.event.before }} ${{ github.event.after }} | grep '\.json$' | head -1 | xargs)

          echo "$jsonfile"

 

          githubjson=$(cat ${jsonfile})

 

          echo $githubjson

 

          DB_HOST=$(jq -r '.DB_HOST' <<< "$githubjson")
          echo "DB_HOST=$DB_HOST" >> $GITHUB_ENV

          DB_USER=$(jq -r '.DB_USER' <<< "$githubjson")
          echo "DB_USER=$DB_USER" >> $GITHUB_ENV

          DB_NAME=$(jq -r '.DB_NAME' <<< "$githubjson")
          echo "DB_NAME=$DB_NAME" >> $GITHUB_ENV

          KV_NAME=$(jq -r '.KV_NAME' <<< "$githubjson")
          echo "KV_NAME=$KV_NAME" >> $GITHUB_ENV

          KV_SECRET_NAME=$(jq -r '.KV_SECRET_NAME' <<< "$githubjson")
          echo "KV_SECRET_NAME=$KV_SECRET_NAME" >> $GITHUB_ENV

          KV_BLOB_CONN_STR=$(jq -r '.KV_BLOB_CONN_STR' <<< "$githubjson")
          echo "KV_BLOB_CONN_STR=$KV_BLOB_CONN_STR" >> $GITHUB_ENV

          FILE_NAME=$(jq -r '.FILE_NAME' <<< "$githubjson")
          echo "FILE_NAME=$FILE_NAME" >> $GITHUB_ENV

          BACKUP_RESTORE=$(jq -r '.BACKUP_RESTORE' <<< "$githubjson")
          echo "BACKUP_RESTORE=$BACKUP_RESTORE" >> $GITHUB_ENV

          AKS_RG=$(jq -r '.AKS_RG' <<< "$githubjson")
          echo "AKS_RG=$AKS_RG" >> $GITHUB_ENV

          AKS_NAME=$(jq -r '.AKS_NAME' <<< "$githubjson")
          echo "AKS_NAME=$AKS_NAME" >> $GITHUB_ENV

          AKS_NAMESPACE=$(jq -r '.AKS_NAMESPACE' <<< "$githubjson")
          echo "AKS_NAMESPACE=$AKS_NAMESPACE" >> $GITHUB_ENV

          BLOB_CONTAINER_NAME=$(jq -r '.BLOB_CONTAINER_NAME' <<< "$githubjson")
          echo "BLOB_CONTAINER_NAME=$BLOB_CONTAINER_NAME" >> $GITHUB_ENV

          EMAIL_RECIPIENTS=$(jq -r '.EMAIL_RECIPIENTS' <<< "$githubjson")
          echo "EMAIL_RECIPIENTS=$EMAIL_RECIPIENTS" >> $GITHUB_ENV

          EMAIL_API=$(jq -r '.EMAIL_API' <<< "$githubjson")
          echo "EMAIL_API=$EMAIL_API" >> $GITHUB_ENV

 

      - name: Setup kubectl

        id: install-kubectl

        uses: Azure/setup-kubectl@v3

 

      - name: Setup kubelogin

        id: install-kubelogin

        uses: azure/use-kubelogin@v1

        with:

          kubelogin-version: 'v0.0.32'

 

      - name: Deploy Job to AKS

        id: aksdeploy

        run: |

          az account set --subscription=$ARM_SUBSCRIPTION_ID

         

          DB_PASSWORD=$(az keyvault secret show --name $KV_SECRET_NAME --vault-name $KV_NAME --query value -o tsv)
          DB_PASSWORD=$(echo "$DB_PASSWORD" | sed -e 's/[\/&]/\\&/g')
          echo "DB_PASSWORD=$DB_PASSWORD" >> $GITHUB_ENV

          BLOB_CONNECTION=$(az keyvault secret show --name $KV_BLOB_CONN_STR --vault-name $KV_NAME --query value -o tsv)
          echo "BLOB_CONNECTION=$BLOB_CONNECTION" >> $GITHUB_ENV

 

          az aks get-credentials --resource-group $AKS_RG --name $AKS_NAME --overwrite-existing --admin

         

          if kubectl get job mysql-backup-job  -n $AKS_NAMESPACE > /dev/null 2>&1; then

             echo 'deleting existing job......'

             kubectl delete jobs mysql-backup-job  -n $AKS_NAMESPACE

          else

             echo 'no job to delete, moving on....'

          fi

 

          kubelogin convert-kubeconfig -l azurecli

          sed -i -e "s#%%HOST%%#$DB_HOST#" ./aks-job.yaml;

          sed -i -e "s#%%USER%%#$DB_USER#" ./aks-job.yaml;

          sed -i -e "s#%%PASS%%#$DB_PASSWORD#" ./aks-job.yaml;

          sed -i -e "s#%%DB%%#$DB_NAME#" ./aks-job.yaml;

          sed -i -e "s#%%RFILE%%#$FILE_NAME#" ./aks-job.yaml;

          sed -i -e "s#%%BACRST%%#$BACKUP_RESTORE#" ./aks-job.yaml;

          sed -i -e "s#%%CONNSTR%%#$BLOB_CONNECTION#" ./aks-job.yaml;

          sed -i -e "s#%%CNTNAME%%#$BLOB_CONTAINER_NAME#" ./aks-job.yaml;

          sed -i -e "s#%%EMAIL%%#$EMAIL_RECIPIENTS#" ./aks-job.yaml;

          sed -i -e "s#%%EMAILAPI%%#$EMAIL_API#" ./aks-job.yaml;

          sed -i -e "s#%%AKSNS%%#$AKS_NAMESPACE#" ./aks-job.yaml;

          kubectl apply -f ./aks-job.yaml

 

aks-job.yaml:

apiVersion: batch/v1

kind: Job

metadata:

  name: mysql-backup-job

  namespace: %%AKSNS%%

spec:

  template:

    metadata:

      labels: {}

    spec:

      containers:

      - env:

        - name: DB_HOST

          value: %%HOST%%

        - name: DB_USER

          value: %%USER%%

        - name: DB_PASSWORD

          value: %%PASS%%

        - name: DB_NAME

          value: %%DB%%

        - name: BLOB_MNT

          value: .

        - name: RESTORE_FILE

          value: %%RFILE%%

        - name: BACKUP_RESTORE

          value: %%BACRST%%

        - name: BLOB_CONNECTION_STRING

          value: %%CONNSTR%%

        - name: BLOB_CONTAINER_NAME

          value: %%CNTNAME%%

        - name: EMAIL_RECIPIENTS

          value: %%EMAIL%%

        - name: EMAIL_API

          value: %%EMAILAPI%%

        image: mycontainerregistry.azurecr.io/mysql-backup-restore

        imagePullPolicy: Always

        name: mysql-backup-job

        resources: {}

      restartPolicy: Never
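
For reference, the "Get git changes" step above expects a JSON file committed under the watched mysqlconfig path, with one key per environment variable it exports. A minimal sketch of such a file, with hypothetical values for every field the workflow extracts:

{
  "DB_HOST": "mysql-nonprod.mysql.database.azure.com",
  "DB_USER": "backupadmin",
  "DB_NAME": "ordersdb",
  "KV_NAME": "kv-backup-nonprod",
  "KV_SECRET_NAME": "mysql-backup-password",
  "KV_BLOB_CONN_STR": "blob-connection-string",
  "FILE_NAME": "ordersdb-2024-09-30.sql",
  "BACKUP_RESTORE": "backup",
  "AKS_RG": "rg-aks-nonprod",
  "AKS_NAME": "aks-nonprod",
  "AKS_NAMESPACE": "backup-jobs",
  "BLOB_CONTAINER_NAME": "mysql-backups",
  "EMAIL_RECIPIENTS": "app-team@example.com",
  "EMAIL_API": "https://notifications.example.com/send"
}

Committing a new file of this shape under the watched path triggers the workflow, which then substitutes each value into the job manifest above.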

 

Previous articles: IaCResolutionsPart175.docx

Sunday, September 29, 2024

 This is a summary of the book titled “Next! The Power of Reinvention in Life and Work,” written by Joanne Lipman and published by Mariner Books in 2023. Transformation, whether prompted by external circumstances or internal motivation, does not have to be stressful. The author draws on remarkable transformations of both people and products, gathered from interviews and scientific research, to uncover the process. Major transitions follow a four-part pattern. Gut instinct is worth trusting. Past failures can be learning opportunities. If something does not suit us well, we forge a new path ahead. Someone who believes in us can help us clarify our goals. And a company that wants to make a transition should look for outside perspectives.

Major career transitions follow a four-part pattern: "Search," "Struggle," "Stop," and "Solution." The COVID pandemic prompted millions to rethink their careers, but most significant changes happen incrementally, and research and personal anecdotes reveal this same four-phase pattern across them.

Deciding to make a career change is often the most challenging part, but it's important to listen to your gut instinct. Gut instincts can guide you in the right direction, stemming from your experience and knowledge. For example, James Patterson, a famous author, used his experience as an adman before becoming a best-selling novelist to appeal to customers and provide what they wanted. By following this pattern, individuals can successfully navigate their career or business transformations.

Past failures can help individuals grow and succeed by focusing on the learning process rather than the final results. Success often comes from going from failure to failure without losing enthusiasm. People who succeed after failure often search for reasons for their failures and work through the adjustment process to find a breakthrough idea. For example, figure skater Nathan Chen readjusted his mentality after losing a gold medal at the 2018 Olympics.

To learn from failure, reflect on the struggle and make small incremental changes. Allow for creative insights when searching for your next step, as decisive breakthroughs, or "aha moments," often emerge when you do something else entirely. Cognitive neuroscientists have found that during an "aha moment," the subconscious brain connects increments of unrelated or distantly related information to create a solution. To tap into the power of subconscious connections, distract yourself and create an incubation time for your subconscious brain to process your research and make connections.

Many people find themselves in need of a new career, often due to societal obstacles. Women, especially those with children, are more likely to embrace reinvention than men, creating companies that empower marginalized people. Women-owned firms have started to change the workplace into a more accepting, flexible environment.

To make a career- or life-changing transformation, it is important to prepare and lay down the groundwork before jumping into a new job. JP Morgan executive Will Brown took over 20 years to transition from being a Wall Street economist to a full-time farmer. Steve Jobs and Whitney Wolfe Herd have also made significant changes without consciously knowing it.

Trust that the struggle you are in will lead to better things and let it become a time of inspiration, research, and discovery. Find someone who believes in you to help clarify your goals. Ina Garten, for example, made her transition from working as a nuclear budget analyst to starting her culinary career by impulsively buying a 400-square-foot food shop in Westhampton Beach, New York.

Garten made that transition with her husband's encouragement, and his support helped her gain a professional reputation, attract celebrity clients, and expand her shops. When she decided to sell the shops, his guidance helped her create best-selling cookbooks and start her TV career. It takes just one person who believes in you to help clarify your goals; someone who champions your strengths and provides a clear perspective can make all the difference in navigating tough career moves. Sharing your goals with trusted people can hold you accountable and support you throughout the process. And if your company plans to make a transition, look for outside perspectives to guide the way: innovative ideas rarely come from the C-suite, and outsiders may see potential where insiders can't.



Friday, September 27, 2024

 

Just-in-time (JIT) access, also known as just-in-time privileged access management (JIT PAM), is a security approach that grants privileged access or permissions only for the finite moments needed. It eliminates always-on, persistent privileged access, known as "standing privileges." The complementary Just Enough Access (JEA) model is essential for implementing the principle of least privilege, and "true least privilege" requires combining both models so that organizations can minimize potential attackers' footholds and the paths to privilege that could escalate an attack. Many enterprises nevertheless struggle with common malpractices: deploying too many accounts with unnecessary privileges, permissions, and entitlements; a standing-access status quo; privilege blindness; and a lack of context around privileged risk. By combining JIT and JEA, organizations can significantly reduce the attack surface and minimize potential vulnerabilities.

In Amazon Web Services (AWS), limiting human access to cloud resources is crucial for security. AWS offers tools like AWS Identity and Access Management (IAM) and AWS IAM Identity Center for managing access. Granting just-in-time access to developers for a limited time based on approval is an effective way to limit the active time frames of assignments to AWS resources. Okta's integration with IAM Identity Center allows customers to access AWS using their Okta identities, with roles corresponding to different job functions within the organization. For example, an “AWS EC2 Admin” role could correspond to a DevOps on-call site reliability engineer (SRE) lead, whereas an “AWS EC2 Read Only” role may apply to members of the development team. The step-by-step configuration involves setting up groups representing different privilege levels, enabling automatic provisioning of groups using the SCIM protocol, assigning access for groups in Okta, creating permission sets in IAM Identity Center, assigning group access in the AWS organization, configuring Okta Identity Governance access requests, and finally testing the configuration. Okta's integration with AWS minimizes persistent access assignments, granting access just in time for specific operational functions. This solution allows empty user groups to be assigned to highly privileged AWS permissions, with Okta Access Requests controlling group membership duration.

In Azure, Conditional Access templates provide a convenient method to deploy new policies aligned with Microsoft recommendations. These templates are designed to provide maximum protection aligned with commonly used policies across various customer types and locations. The templates are organized into secure foundation, zero trust, remote work, protect administrator, and emerging threats. Certain accounts must be excluded from these policies, such as emergency-access or break-glass accounts (to prevent tenant-wide lockout) and service accounts or service principals that are non-interactive and not tied to any particular user.

Thursday, September 26, 2024

 Principle of Just-in-Time (JIT) privileged access:

This is a security model used in the Azure public cloud to grant temporary permissions to users for performing privileged tasks. This approach helps minimize the risk of unauthorized access by ensuring that elevated permissions are only available when needed and for a limited time. Users receive elevated permissions only for the duration necessary to complete specific tasks; once the time expires, the permissions are revoked automatically. A dedicated service in the Azure portfolio, Azure AD Privileged Identity Management (PIM), manages JIT access, allowing administrators to control and monitor privileged access to Azure resources and Azure AD. PIM can also generate alerts for suspicious or unsafe activities, enhancing security monitoring. JIT access is commonly used for administrative tasks, accessing sensitive data, or managing critical infrastructure.

Amazon Web Services (AWS) supports something similar with Privileged Access Management (PAM) solutions, where third-party products can be integrated into AWS to provide ephemeral JIT access, ensuring that users only have the necessary privileges for the duration of their tasks. AWS provides regular fine-grained permissions for users, groups, and roles with its Identity and Access Management (IAM) policies, which can even be used to restrict access to a certain time of day. The single sign-on service can work with different identity providers to enforce JIT access. Finally, the AWS Security Token Service can issue temporary security credentials that provide limited-time access to AWS resources.
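
As a sketch of the time-of-day restriction mentioned above, an IAM policy can gate an action with the aws:CurrentTime condition key; the action, resource, and time window here are hypothetical:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:StartInstances",
      "Resource": "*",
      "Condition": {
        "DateGreaterThan": { "aws:CurrentTime": "2024-09-27T09:00:00Z" },
        "DateLessThan": { "aws:CurrentTime": "2024-09-27T17:00:00Z" }
      }
    }
  ]
}

Attached to a user, group, or role, this statement allows the action only inside the stated window, which is the building block for scheduling-style restrictions on standing permissions.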

To bolster physical security, reduce the risk of malware or unauthorized access, and streamline and restrict the activities that can be performed with escalated privilege, Microsoft hands out Secure Admin Workstations (SAWs): specialized, dedicated devices used exclusively for administrative tasks. They are particularly valuable in high-risk environments where security is paramount. Public clouds happen to be the most widely used clouds, but there are clouds dedicated in scope to governments, defense departments, and others that require tighter access control; these are collectively called sovereign clouds, and they benefit especially from SAW devices. Only authorized personnel can use SAWs, and they are often subject to strict security policies and monitoring. As an example, Microsoft uses approximately 35,000 SAW devices, with a small number dedicated to accessing these high-risk sovereign-cloud environments.

These practices help ensure that Azure remains a secure platform for both administrators and users. 



Wednesday, September 25, 2024

 Manifesting Dependencies:

Among the frequently encountered challenges facing engineers who deploy infrastructure is how to understand, capture, and use dependencies. Imagine a clone army where all entities look alike and a specific one or two need to be replaced. Without a name or identifier at hand, it is difficult to locate those entities, and it becomes even harder when we don't know which of the others are actually using them, so that we are mindful of the consequences of the replacement. Grounding this example with cloud resources in the Azure public cloud, take a set of resources, each with a private endpoint that gives it a unique private IP address, and suppose we want to replace the virtual network that is integrated with these resources. The old and the new virtual networks do not interact with one another, and traffic that was flowing to a resource on the old network is disrupted when that resource moves to the new one. Unless we know all the dependents of the resource that is about to move, we cannot resolve the failures they might encounter.

What adds to the challenge is that the virtual network is like a carpet on which the resources stand; this resource type is always local to an availability zone or region, so there is no built-in redundancy or replica available to ease the migration. One cannot just move the resource as if it were moving from one resource group to another: it must be untethered and tied to another virtual network by deleting the old private endpoint and adding a new one.

Taking the example a little further, IaC does not capture dependencies between usages of resources; it only captures dependencies on creation or modification. For example, a workspace that users access to spin up compute and run their notebooks might be using a container registry over the virtual network, but that dependency never gets manifested because the registry does not maintain a list of addresses or networks to allow. The only way to reverse-engineer the list of dependencies is to check the DNS zone records associated with the private endpoint and the entries added by the callers that resolve the container registry over the virtual network. These entries carry the callers' private IP addresses, and because each address belongs to an address space designated to a sub-network, it is possible to tell whether it came from a connection device associated with a compute belonging to the workspace. By painstaking enumeration of each of these links, it is possible to draw up a list of all workspaces using the container registry. The records that helped us draw the list may contain many stale entries, because callers disappear without cleaning up their records; some pruning might be involved, and the list will change over time, but it remains handy.



Tuesday, September 24, 2024

 

Problem: Given a weighted bidirectional graph with N nodes and M edges and all the weights as distinct positive numbers, find the maximum number of edges that can be visited on traversing the graph such that the weights are ascending.

Solution: Process the edges in ascending order of weight. When edge uv is processed, every edge already seen is lighter, so the longest ascending path that ends at u can be extended through uv to v, and vice versa. Therefore the count accumulated at both of these nodes becomes the maximum of (e[u], e[v] + 1) and (e[v], e[u] + 1), where e is an array holding the number of edges on the longest ascending path found at each vertex so far.

 

    // uses java.util.Arrays and java.util.Comparator
    public static int solution_unique_weights(int N, int[] src, int[] dest, int[] weight) {
        int M = weight.length;
        int[] e = new int[N]; // e[u] = edges on the longest ascending path found at u so far
        Integer[] index = new Integer[M];
        for (int i = 0; i < M; i++) { index[i] = i; }
        // Visit the edges in ascending order of weight.
        Comparator<Integer> comparator = (i, j) -> weight[i] - weight[j];
        Arrays.sort(index, 0, M, comparator);
        for (int i = 0; i < M; i++) {
            int u = src[index[i]];
            int v = dest[index[i]];
            int count = Math.max(Math.max(e[u], e[v] + 1), Math.max(e[v], e[u] + 1));
            e[u] = count;
            e[v] = count;
        }
        return Arrays.stream(e).max().getAsInt();
    }

 

    src[0] = 0    dest[0] = 1    weight[0] = 4

    src[1] = 1    dest[1] = 2    weight[1] = 3

    src[2] = 1    dest[2] = 3    weight[2] = 2

    src[3] = 2    dest[3] = 3    weight[3] = 5

    src[4] = 3    dest[4] = 4    weight[4] = 6

    src[5] = 4    dest[5] = 5    weight[5] = 7

    src[6] = 5    dest[6] = 0    weight[6] = 9

    src[7] = 3    dest[7] = 2    weight[7] = 8

    index:  0 1 2 3 4 5 6 7  // before sort

    index:  2 1 0 3 4 5 7 6  // after sort

    e: 

    0  1  0  1  0  0  0  0

    0  2  2  1  0  0  0  0

    3  3  2  1  0  0  0  0

    3  3  3  4  4  0  0  0

    3  3  3  4  5  5  0  0

    3  3  4  4  5  5  0  0

    6  3  4  4  5  6  0  0

    

With the longest ascending path being nodes 3->1->2->3->4->5->0 and 6 edges

 

Monday, September 23, 2024

 Infrastructure as a top-down approach versus bottom-up growth.

Centralized planning has many benefits for infrastructure as evidenced by parallels in construction industry and public transportation. The top-down approach in this context typically refers to a method where policy decisions and strategies are formulated at a higher, often governmental or organizational level, and then implemented down through various levels of the system. This approach contrasts with a bottom-up approach, where policies and strategies are developed based on input and feedback from lower levels, such as local communities or individual stakeholders.

Such a regulatory approach might involve:

Centralized Planning: High-level authorities set infrastructure policies and plans, which are then executed by regional or local agencies.

Regulation and Standards: Establishing uniform regulations and standards for cloud systems, which must be adhered to by all stakeholders.

Funding Allocation: Decisions on the allocation of funds for infrastructure projects are made at a higher level, often based on broader economic and policy goals.

This approach can ensure consistency and alignment with national or regional objectives, but it may also face challenges such as lack of local adaptability and slower response to specific local needs.

On the other hand, a bottom-up approach typically involves building and configuring resources starting from the lower levels of the infrastructure stack, often driven by the needs and inputs of individual teams or developers. This approach contrasts with a top-down approach, where decisions and designs are made at a higher organizational level and then implemented downwards.

Here are some key aspects of the bottom-up approach in Azure deployments:

Developer-Driven: Individual developers or teams have the autonomy to create and manage their own resources, such as virtual machines, databases, and networking components, based on their specific project requirements.

Incremental Development: Infrastructure is built incrementally, starting with basic components and gradually adding more complex services and configurations as needed. This allows for flexibility and adaptability.

Agility and Innovation: Teams can experiment with new services and technologies without waiting for centralized approval, fostering innovation and rapid iteration.

Infrastructure as Code (IaC): Tools like Terraform and Azure Resource Manager (ARM) templates are often used to define and manage infrastructure programmatically. This allows for version control, repeatability, and collaboration.

Feedback Loops: Continuous feedback from the deployment and operation of resources helps teams to quickly identify and address issues, optimizing the infrastructure over time.

This approach can be particularly effective in dynamic environments where requirements change frequently, and rapid deployment and scaling are essential.

The right approach blends what suits the workloads demanded by the business in the most expedient manner, with iterative improvements, and what can be curated as patterns and best practices toward a system architecture that will best serve the organization in the long run across changes in business requirements and direction.


Sunday, September 22, 2024

 This is a summary of the book titled “Cash is King,” written by Peter W. Kingma and published by Wiley in 2024. Founders usually chase revenue, treating cash as a secondary concern, but firms with strong cash positions can seize new opportunities and remain flexible. Using a fictional company, Owens Inc., the author makes this point through a comprehensive treatise on cash management. The procurement process, from order placement to payment, affects a company’s cash position. Business functions such as marketing and warehousing can also help optimize the cash position. Logistics, which is usually dynamic in nature, can help with inventory management and reduce tied-up cash. A firm's cash position benefits from working capital management, and performance measurement metrics can aid managers. Improved cash management can boost a business’s resilience and guide it through bad times.

A business should prioritize cash flow over revenue generation to sustain growth. For example, Owens Inc., a manufacturer of electrical equipment, found that its sales terms were too favorable and its internal processes too complicated, affecting invoicing and collections. The company's growth was driven by risk-taking and sales growth, but it neglected inventory management and internal processes. To manage sales and client relationships, companies should segment customers, implement credit review policies, track invoice payments, set collector targets, and adopt electronic payment methods. The procurement process, from order placement to payment, also impacts a company's cash position. The procurement team must manage routine processes and deal with emergencies daily; under that pressure, it may not notice trade-offs that affect the company's cash position, such as lead times, minimum order quantities, and delivery times.

Business functions like marketing and warehousing can optimize cash position by synchronizing their interests and goals. Procurement personnel should focus on negotiating the best prices, while logistics management should be dynamic and adaptable to changing customer needs, transportation costs, and innovation. Marketing and engineering functions should monitor inventory to identify lost demand and ensure legitimate demand for new products. Logistics and warehousing should aim for higher service levels, requiring more inventory.

Logistics can affect a firm's cash flow through variations in batch size, use of technology, standardized terms of trade, customer-negotiated service terms, optimal warehouse management, and linking customer status updates to billing functions. These factors can disrupt existing dynamics and impact inventory management. The COVID-19 pandemic and global supply chains have also impacted inventory management.

Plant management procedures can optimize inventory investment for optimal returns. Investing in inventory that sells quickly and at a high margin yields more favorable returns than unused inventory. Safety stock is the level of inventory required to meet customer service standards, calculated based on historical variations. Minimal stock on hand for made-to-order products and minimal stock in transit can help reduce transportation time and minimum order requirements. Working capital management can improve a firm's cash position. A good financial controller can help businesses tackle accounting and financial reporting, following best practices like absorption costing and weighted average cost of capital (WACC). A company's stock price is affected by debt, and controllers should be cautious of using short-term debt costs without considering equity costs. Strong performance in one area can mask poor performance in another.

Managers should effectively use performance measurement metrics to gauge business performance and make informed decisions. Common metrics include inventory turns and cost per unit. However, they often do not align with operational metrics, leading to data integration issues or lack of review. Leadership metrics should serve as warning lights, guiding the company's health before it is too late. Operating metrics should capture the input management needs to measure, and key performance indicators and bonuses should be aligned with cash performance.

Improved cash management can boost a business's resilience and guide it through bad times. Recognizing the importance of cash flow is crucial, but many businesses consider it an afterthought. Companies with above-average working capital management tend to bounce back faster from setbacks and preserve shareholder capital better. Cash management is equally important for service sector firms, but the considerations are different.

To bring about sustainable changes, a cash leadership office should be formed, focusing on both cash position and growth. This ensures that the entire management team is on the same page and can advise the business on trade-offs or compromises.


Saturday, September 21, 2024

 Given clock-hand positions for different points in time as pairs A[i][0] and A[i][1], where the order of the hands does not matter but the angle they enclose does, count the number of pairs of points in time at which the enclosed angles are the same.

    public static int[] getClockHandsDelta(int[][] A) {

        int[] angles = new int[A.length];

        for (int i = 0; i < A.length; i++){

            angles[i] = Math.max(A[i][0], A[i][1]) - Math.min(A[i][0],A[i][1]);

        }

        return angles;

    }

    public static int NChooseK(int n, int k)

    {

        if (k < 0 || k > n || n == 0) return 0;

        if ( k == 0 || k == n) return 1;

        return Factorial(n) / (Factorial(n-k) * Factorial(k));

    }

 

    public static int Factorial(int n) {

        if (n <= 1) return 1;

        return n * Factorial(n-1);

    }


    public static int countPairsWithIdenticalAnglesDelta(int[] angles){

        Arrays.sort(angles);

        int count = 1;

        int result = 0;

        for (int i = 1; i < angles.length; i++) {

            if (angles[i] == angles[i-1]) {

                count += 1;

            } else {

                if (count > 0) {

                    result += NChooseK(count, 2);

                }

                count = 1;

            }

        }

        if (count > 0) {

            result += NChooseK(count, 2);

            count = 0;

        }

        return result;

    }


        int [][] A = new int[5][2];

         A[0][0] = 1;    A[0][1] = 2;

         A[1][0] = 2;    A[1][1] = 4;

         A[2][0] = 4;    A[2][1] = 3;

         A[3][0] = 2;    A[3][1] = 3;

         A[4][0] = 1;    A[4][1] = 3;

 1 2 1 1 2 

1 1 1 2 2 

4


Friday, September 20, 2024

 Decode ways:

A message containing letters from A-Z can be encoded into numbers using the following mapping:

'A' -> "1"

'B' -> "2"

...

'Z' -> "26"

To decode an encoded message, all the digits must be grouped then mapped back into letters using the reverse of the mapping above (there may be multiple ways). For example, "11106" can be mapped into:

"AAJF" with the grouping (1 1 10 6)

"KJF" with the grouping (11 10 6)

Note that the grouping (1 11 06) is invalid because "06" cannot be mapped into 'F' since "6" is different from "06".

Given a string s containing only digits, return the number of ways to decode it.

The test cases are generated so that the answer fits in a 32-bit integer.

 

Example 1:

Input: s = "12"

Output: 2

Explanation: "12" could be decoded as "AB" (1 2) or "L" (12).

Example 2:

Input: s = "226"

Output: 3

Explanation: "226" could be decoded as "BZ" (2 26), "VF" (22 6), or "BBF" (2 2 6).

Example 3:

Input: s = "06"

Output: 0

Explanation: "06" cannot be mapped to "F" because of the leading zero ("6" is different from "06").

 

Constraints:

1 <= s.length <= 100

s contains only digits and may contain leading zero(s).


import java.util.HashMap;
import java.util.Map;

class Solution {
    // Memo of suffix start index -> number of decodings of that suffix.
    private final Map<Integer, Integer> memo = new HashMap<>();

    public int numDecodings(String s) {
        memo.clear();
        return traverse(s, 0);
    }

    // Returns the number of ways to decode the suffix of s starting at index i.
    private int traverse(String s, int i) {
        if (i == s.length()) {
            return 1; // consumed the whole string: one complete decoding
        }
        Integer cached = memo.get(i);
        if (cached != null) {
            return cached;
        }
        int count = 0;
        if (isValid(s.substring(i, i + 1))) {
            count += traverse(s, i + 1);
        }
        if (i + 2 <= s.length() && isValid(s.substring(i, i + 2))) {
            count += traverse(s, i + 2);
        }
        memo.put(i, count);
        return count;
    }

    public boolean isValid(String s) {
        if (s.length() == 1) {
            // a lone '0' maps to no letter
            return s.charAt(0) >= '1' && s.charAt(0) <= '9';
        }
        if (s.length() == 2) {
            return (s.charAt(0) == '1' && s.charAt(1) >= '0' && s.charAt(1) <= '9')
                || (s.charAt(0) == '2' && s.charAt(1) >= '0' && s.charAt(1) <= '6');
        }
        return false;
    }
}
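
As a quick check against the worked examples above, a hypothetical driver:

public class Main {
    public static void main(String[] args) {
        Solution solution = new Solution();
        System.out.println(solution.numDecodings("11106")); // 2: "AAJF" and "KJF"
        System.out.println(solution.numDecodings("226"));   // 3: "BZ", "VF" and "BBF"
        System.out.println(solution.numDecodings("06"));    // 0: leading zero
    }
}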


Thursday, September 19, 2024

 Given a wire grid of size N * N, with N-1 horizontal edges along each row and N-1 vertical edges along each column, and a wire burning out at every instant in the order given by three arrays A, B, and C, such that the wire that burns at instant T joins node (A[T], B[T]) to node

(A[T], B[T] + 1), if C[T] = 0 or

(A[T] + 1, B[T]), if C[T] = 1

Determine the instant after which the circuit is broken 

     public static boolean checkConnections(int[] h, int[] v, int N) {

        boolean[][] visited = new boolean[N][N];

        dfs(h, v, visited,0,0);

        return visited[N-1][N-1];

    }

    public static void dfs(int[]h, int[]v, boolean[][] visited, int i, int j) {

        int N = visited.length;

        if (i < N && j < N && i>= 0 && j >= 0 && !visited[i][j]) {

            visited[i][j] = true;

            if (v[i * (N-1) + j] == 1) {

                dfs(h, v, visited, i, j+1);

            }

            if (h[i * (N-1) + j] == 1) {

                dfs(h, v, visited, i+1, j);

            }

            if (i > 0 && h[(i-1)*(N-1) + j] == 1) {

                dfs(h,v, visited, i-1, j);

            }

            if (j > 0 && v[(i * (N-1) + (j-1))] == 1) {

                dfs(h,v, visited, i, j-1);

            }

        }

    }

    public static int burnout(int N, int[] A, int[] B, int[] C) {

        int[] h = new int[N*N];

        int[] v = new int[N*N];

        for (int i = 0; i < N*N; i++) { h[i] = 1; v[i] = 1; }

        for (int i = 0; i < N; i++) {

            h[(i * (N)) + N - 1] = 0;

            v[(N-1) * (N) + i] = 0;

        }

        System.out.println(printArray(h));

        System.out.println(printArray(v));

        for (int i = 0; i < A.length; i++) {

            if (C[i] == 0) {

                v[A[i] * (N-1) + B[i]] = 0;

            } else {

                h[A[i] * (N-1) + B[i]] = 0;

            }

            if (!checkConnections(h,v, N)) {

                return i+1;

            }

        }

        return -1;

    }
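
The printArray helper referenced above is not shown in the post; a minimal version consistent with the printed output below might be:

    public static String printArray(int[] a) {
        StringBuilder sb = new StringBuilder();
        for (int x : a) {
            sb.append(x).append(' ');
        }
        return sb.toString();
    }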

        int[] A = new int[9];

        int[] B = new int[9];

        int[] C = new int[9];

        A[0] = 0;    B [0] = 0;    C[0] = 0;

        A[1] = 1;    B [1] = 1;    C[1] = 1;

        A[2] = 1;    B [2] = 1;    C[2] = 0;

        A[3] = 2;    B [3] = 1;    C[3] = 0;

        A[4] = 3;    B [4] = 2;    C[4] = 0;

        A[5] = 2;    B [5] = 2;    C[5] = 1;

        A[6] = 1;    B [6] = 3;    C[6] = 1;

        A[7] = 0;    B [7] = 1;    C[7] = 0;

        A[8] = 0;    B [8] = 0;    C[8] = 1;

        System.out.println(burnout(9, A, B, C));

1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 

1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 

8

Alternatively,

    public static boolean burnWiresAtT(int N, int[] A, int[] B, int[] C, int t) {

        int[] h = new int[N*N];

        int[] v = new int[N*N];

        for (int i = 0; i < N*N; i++) { h[i] = 1; v[i] = 1; }

        for (int i = 0; i < N; i++) {

            h[(i * (N)) + N - 1] = 0;

            v[(N-1) * (N) + i] = 0;

        }

        System.out.println(printArray(h));

        System.out.println(printArray(v));

        for (int i = 0; i < t; i++) {

            if (C[i] == 0) {

                v[A[i] * (N-1) + B[i]] = 0;

            } else {

                h[A[i] * (N-1) + B[i]] = 0;

            }

        }

        return checkConnections(h, v, N);

    }

    public static int binarySearch(int N, int[] A, int[] B, int[] C, int start, int end) {

        if (start == end) {

            if (!burnWiresAtT(N, A, B, C, end)){

                return end;

            }

            return  -1;

        } else {

            int mid = (start + end)/2;

            if (burnWiresAtT(N, A, B, C, mid)) {

                return binarySearch(N, A, B, C, mid + 1, end);

            } else {

                return binarySearch(N, A, B, C, start, mid);

            }

        }

    }
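
Invoked over the full range of instants, with the same A, B, and C arrays as the linear version above, a hypothetical driver reproduces the answer printed below:

        System.out.println(binarySearch(9, A, B, C, 0, A.length)); // expected to print 8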

1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 0 

1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 

8



Wednesday, September 18, 2024

 There is a cake factory producing K-flavored cakes. Flavors are numbered from 1 to K. A cake should consist of exactly K layers, each of a different flavor. It is very important that every flavor appears in exactly one cake layer and that the flavor layers are ordered from 1 to K from bottom to top. Otherwise the cake doesn't taste good enough to be sold. For example, for K = 3, cake [1, 2, 3] is well-prepared and can be sold, whereas cakes [1, 3, 2] and [1, 2, 3, 3] are not well-prepared.

 

The factory has N cake forms arranged in a row, numbered from 1 to N. Initially, all forms are empty. At the beginning of the day a machine for producing cakes executes a sequence of M instructions (numbered from 0 to M−1) one by one. The J-th instruction adds a layer of flavor C[J] to all forms from A[J] to B[J], inclusive.

 

What is the number of well-prepared cakes after executing the sequence of M instructions?

 

Write a function:

 

class Solution { public int solution(int N, int K, int[] A, int[] B, int[] C); }

 

that, given two integers N and K and three arrays of integers A, B, C describing the sequence, returns the number of well-prepared cakes after executing the sequence of instructions.

 

Examples:

 

1. Given N = 5, K = 3, A = [1, 1, 4, 1, 4], B = [5, 2, 5, 5, 4] and C = [1, 2, 2, 3, 3].

 

There is a sequence of five instructions:

 

The 0th instruction puts a layer of flavor 1 in all forms from 1 to 5.

The 1st instruction puts a layer of flavor 2 in all forms from 1 to 2.

The 2nd instruction puts a layer of flavor 2 in all forms from 4 to 5.

The 3rd instruction puts a layer of flavor 3 in all forms from 1 to 5.

The 4th instruction puts a layer of flavor 3 in the 4th form.


 

The function should return 3. The cake in form 3 is missing flavor 2, and the cake in form 5 has additional flavor 3. The well-prepared cakes are forms 1, 2 and 5.

 

2. Given N = 6, K = 4, A = [1, 2, 1, 1], B = [3, 3, 6, 6] and C = [1, 2, 3, 4],

 

the function should return 2. The 2nd and 3rd cakes are well-prepared.

 

3. Given N = 3, K = 2, A = [1, 3, 3, 1, 1], B = [2, 3, 3, 1, 2] and C = [1, 2, 1, 2, 2],

 

the function should return 1. Only the 2nd cake is well-prepared.

 

4. Given N = 5, K = 2, A = [1, 1, 2], B = [5, 5, 3] and C = [1, 2, 1]

 

the function should return 3. The 1st, 4th and 5th cakes are well-prepared.

 

Write an efficient algorithm for the following assumptions:

 

N is an integer within the range [1..100,000];

M is an integer within the range [1..200,000];

each element of arrays A, B is an integer within the range [1..N];

each element of array C is an integer within the range [1..K];

for every integer J, A[J] ≤ B[J];

arrays A, B and C have the same length, equal to M.

// import java.util.*;

 

 

class Solution {
    public int solution(int N, int K, int[] A, int[] B, int[] C) {
        int[] first = new int[N]; // flavor of the bottom layer poured into each form
        int[] last = new int[N];  // flavor of the top layer, or Integer.MAX_VALUE once spoiled
        int[] num = new int[N];   // total number of layers poured into each form
        for (int i = 0; i < A.length; i++) {
            for (int current = A[i] - 1; current <= B[i] - 1; current++) {
                num[current]++;
                if (first[current] == 0) {
                    first[current] = C[i];
                    last[current] = C[i];
                    continue;
                }
                if (C[i] == last[current] + 1) {
                    last[current] = C[i];
                } else {
                    // repeated, skipped, or out-of-order flavor: the form can no longer be well-prepared
                    last[current] = Integer.MAX_VALUE;
                }
            }
        }
        int count = 0;
        for (int i = 0; i < N; i++) {
            if (((last[i] - first[i]) == (K - 1)) && (num[i] == K)) {
                count++;
            }
        }
        return count;
    }
}

Example test:   (5, 3, [1, 1, 4, 1, 4], [5, 2, 5, 5, 4], [1, 2, 2, 3, 3])

OK

 

Example test:   (6, 4, [1, 2, 1, 1], [3, 3, 6, 6], [1, 2, 3, 4])

OK

 

Example test:   (3, 2, [1, 3, 3, 1, 1], [2, 3, 3, 1, 2], [1, 2, 1, 2, 2])

OK

 

Example test:   (5, 2, [1, 1, 2], [5, 5, 3], [1, 2, 1])

OK


n_equal_to_1

OK

k_equal_to_1

OK

m_equal_to_1

OK

interval_contains_one_cake

OK

none_correct

OK


Tuesday, September 17, 2024

 


Given an array of varying heights above sea level for adjacent plots, and an array of water levels on consecutive days, find the number of islands on each day. An island is a slice of the array in which every plot is above water and whose plots adjacent to its boundary are under water.

import java.util.Arrays;
import java.util.stream.IntStream;

class Solution {
    public int[] solution(int[] A, int[] B) {
        int N = A.length;
        // Each island contributes exactly one plot j that is above water while
        // its right neighbor is under water (or j is the last plot), so counting
        // those plots counts the islands for each day's water level.
        return Arrays.stream(B)
            .map(water -> IntStream.range(0, N)
                .filter(j -> (A[j] > water) && (j == N - 1 || A[j + 1] <= water))
                .map(i -> 1)
                .sum())
            .toArray();
    }
}

For example, given the following arrays A and B:

    A[0] = 2    B[0] = 0

    A[1] = 1    B[1] = 1

    A[2] = 3    B[2] = 2

    A[3] = 2    B[3] = 3

    A[4] = 3    B[4] = 1

Solution: 

result[0] = 1

result[1] = 2

result[2] = 2

result[3] = 0

result[4] = 2


For a given water level, the number of islands is the accumulated change in the number of islands as the water level decreases, so after one precomputation pass each day's query reduces to a single array lookup.

Optimized solution:

import java.util.Arrays;
import java.util.stream.IntStream;

class Solution {
    public int[] solution(int[] A, int[] B) {
        int limit = Math.max(maxLevel(A), maxLevel(B));
        int[] island = new int[limit + 2];
        // A descent from A[j] to A[j+1] adds one island for every water level in
        // [A[j+1], A[j] - 1]; record the delta at the endpoints and accumulate later.
        IntStream.range(0, A.length - 1)
                 .filter(j -> A[j] > A[j + 1])
                 .forEach(j -> {
                     island[A[j]] += 1;
                     island[A[j + 1]] -= 1;
                 });
        island[A[A.length - 1]] += 1;
        // Accumulate from the highest level down, so island[w] holds the number
        // of islands when the water level is w - 1.
        IntStream.range(-limit, 0)
                 .forEach(i -> island[-i] += island[-i + 1]);
        return Arrays.stream(B).map(water -> island[water + 1]).toArray();
    }

    public int maxLevel(int[] A) {
        return Arrays.stream(A).max().getAsInt();
    }
}


// before cumulation

island[0] = 0

island[1] = -1

island[2] = 0

island[3] = 2

island[4] = 0

// after cumulation

island[0] = 0

island[1] = 1

island[2] = 2

island[3] = 2

island[4] = 0


result[0] = 1

result[1] = 2

result[2] = 2

result[3] = 0

result[4] = 2