Saturday, August 31, 2024

  A self-organizing map algorithm for scheduling meeting times as availabilities and bookings. A map is a low-dimensional representation of a training sample comprising elements e, and it is represented by nodes n. The map is transformed by a regression operation that modifies the positions of the nodes one element (e) from the sample at a time. With preferences translating to nodes and availabilities to elements, the map moves closer to the sample space with each epoch/iteration.

from sys import argv


import numpy as np


from io_helper import read_xyz, normalize

from neuron import generate_network, get_neighborhood, get_boundary

from distance import select_closest, euclidean_distance, boundary_distance

from plot import plot_network, plot_boundary


def main():

    if len(argv) != 2:

        print("Correct use: python src/main.py <filename>.xyz")

        return -1


    problem = read_xyz(argv[1])


    boundary = som(problem, 100000)


    problem = problem.reindex(boundary)


    distance = boundary_distance(problem)


    print('Boundary found of length {}'.format(distance))



def som(problem, iterations, learning_rate=0.8):

    """Solve the xyz using a Self-Organizing Map."""


    # Obtain the normalized set of timeslots (w/ coord in [0,1])

    timeslots = problem.copy()

    # print(timeslots)

    #timeslots[['X', 'Y', 'Z']] = normalize(timeslots[['X', 'Y', 'Z']])


    # The population size is 8 times the number of timeslots

    n = timeslots.shape[0] * 8


    # Generate an adequate network of neurons:

    network = generate_network(n)

    print('Network of {} neurons created. Starting the iterations:'.format(n))


    for i in range(iterations):

        if not i % 100:

            print('\t> Iteration {}/{}'.format(i, iterations), end="\r")

        # Choose a random timeslot

        timeslot = timeslots.sample(1)[['X', 'Y', 'Z']].values

        winner_idx = select_closest(network, timeslot)

        # Generate a filter that applies changes to the winner's gaussian

        gaussian = get_neighborhood(winner_idx, n//10, network.shape[0])

        # Update the network's weights (closer to the timeslot)

        network += gaussian[:,np.newaxis] * learning_rate * (timeslot - network)

        # Decay the variables

        learning_rate = learning_rate * 0.99997

        n = n * 0.9997


        # Check for plotting interval

        if not i % 1000:

            plot_network(timeslots, network, name='diagrams/{:05d}.png'.format(i))


        # Check if any parameter has completely decayed.

        if n < 1:

            print('Radius has completely decayed, finishing execution',

            'at {} iterations'.format(i))

            break

        if learning_rate < 0.001:

            print('Learning rate has completely decayed, finishing execution',

            'at {} iterations'.format(i))

            break

    else:

        print('Completed {} iterations.'.format(iterations))


    # plot_network(timeslots, network, name='diagrams/final.png')


    boundary = get_boundary(timeslots, network)

    plot_boundary(timeslots, boundary, 'diagrams/boundary.png')

    return boundary


if __name__ == '__main__':

    main()
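
The helper functions imported at the top (generate_network, select_closest, get_neighborhood) are not shown in this post. Below is a plausible NumPy sketch of what they might look like, assuming the network and the timeslots are arrays of 3-D coordinates and the neurons form a closed ring; the bodies are illustrative rather than the exact implementations behind the imports.

import numpy as np

def generate_network(size):
    # Random initial neuron positions in the unit cube.
    return np.random.rand(size, 3)

def select_closest(candidates, origin):
    # Index of the neuron closest to the sampled timeslot.
    return np.linalg.norm(candidates - origin, axis=1).argmin()

def get_neighborhood(center, radix, domain):
    # Gaussian bump of influence around the winning neuron, wrapping around
    # the ring of neurons so that neighbors on both sides are pulled along.
    radix = max(radix, 1)
    deltas = np.absolute(center - np.arange(domain))
    distances = np.minimum(deltas, domain - deltas)
    return np.exp(-(distances ** 2) / (2 * (radix ** 2)))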


Reference: 

https://github.com/raja0034/som4drones


#codingexercise

https://1drv.ms/w/s!Ashlm-Nw-wnWhPBaE87l8j0YBv5OFQ?e=uCIAp9


 This is a summary of the book titled “ESG Mindset” written by Matthew Sekol and published by Kogan Page in 2024. The author evaluates “Environmental, Social and Governance” aka ESG practices for the long-term sustainability of corporations and the challenge they pose to corporate culture. The author finds that deployments can raise issues which might affect transformation and growth, and most companies interpret these practices to suit their needs. This poses a challenge to even a standard definition and acceptance of associated norms. Leaders are also quick to get at the intangible behind these practices by cutting them down to their simplest form, which risks diluting their relevance. The author concludes that to realize the ESG mindset fully, companies must be committed to go all the way. He asserts that these practices are not merely data, and that technology is the invisible “fourth” pillar in ESG. There is demonstrated success in the campaigns of companies that have embraced ESG, but the mindset goes beyond operations. As with most practices in the modern world, ESG must remain flexible.

Environmental, Social, and Governance (ESG) practices are rooted in Corporate Social Responsibility (CSR) and Socially Responsible Investing (SRI), but ESG differentiated itself by 2004 with its broader definition of "material value" and willingness to deal with intangibles. ESG is difficult to define as it links intangible values with material results. Companies must align their ESG mindset to manage crises in an increasingly complex world. ESG is not merely data; it requires companies to prioritize, interpret, and communicate their data to stakeholders. Companies must inventory their "data estate" by reviewing internal and external data sets to ensure transparency and sustainability. Challenges faced by companies include global emissions increasing by 70% between 1970 and 2004, climate change, and public pressure from stakeholders. Publicly traded companies can provide guidelines on how their boards make decisions, including those involving ESG or affecting stakeholders.

Globalization has led to systemic issues such as child welfare, climate change, forced labor, equity, and justice, resulting in crises. Boards must shift their decision-making practices from short-term to long-term to pursue their material goals. Technology, such as blockchain, the metaverse, and generative AI, can support ESG transformation by solving problems and facilitating goals. However, companies must modernize legacy technology, break down internal silos, and solve complex cultural fears of change. Technology also produces data that is integral to ESG analysis and decision-making, but it exposes companies to cybersecurity risks. Critics and controversy can hinder ESG, especially in the United States, where polarization and activism from both the left and right complicate the issues ESG already faces. Companies must collaborate to ensure ESG's relevance and address the accuracy and fairness of ESG scores.

ESG pillars interconnect and can be analyzed to uncover new issues and improve resilience in a crisis. Companies must recognize that long-term interconnected crises will become material to every company over time. Changes addressing systemic problems can influence both internal workings and external stakeholders. Companies like PepsiCo, Lego, and Target have successfully leveraged their investment in ESG goals in various ways. PepsiCo founded the Beverage Industry Environmental Roundtable (BIER) to address systemic industry issues, particularly around water use. Lego committed to switching to sustainable materials by 2030, while Target leveraged the social pillar of ESG by hiring a diverse workforce and practicing community outreach. Paramount aligned stakeholder engagement with its core product, storytelling, demonstrating its commitment to addressing systemic issues with an ESG mindset. The ESG mindset goes beyond operations, as large-scale disruptions in the Environmental and Social dimensions may leave businesses struggling to react. Companies can leverage their ESG goals while remaining profitable through B Corps, value chain improvements, and industry collaboration.

ESG must adapt to a complex and volatile world, addressing systemic issues, intangible value, and global economic development. Companies must move from merely following the data to promoting measurable change. Technology can help address complexity but requires stakeholder buy-in and coordination. Companies face pressure to standardize ESG goals, define the ESG mindset, and demonstrate how to implement it, especially in the face of political agendas and pushback against DEI programs.

It is interesting that there can be so many parallels to draw between organizations and data science projects from an ESG perspective. The same sets of benefits and challenges apply to the long-term sustainability of these projects and charters. It is not just about the analysis, operations and predictions but also how it is presented to stakeholders.


Friday, August 30, 2024

 DevOps for IaC

As with any DevOps practice, the principles on which they are founded must always include a focus on people, process, and technology. With the help of Infrastructure-as-Code and blueprints, resources, policies, and accesses can be packaged together and become a unit of provisioning the environment. 

The DevOps Adoption RoadMap has evolved over time. What used to be Feature Driven Development around 1999 gave way to Lean thinking and Lean software development around 2003, which was followed by Product development flows in 2009 and Continuous Integration/Delivery in 2010. The DevOps Handbook and the DevOps Adoption Playbook are recent as of the last 5-6 years. Principles that inform practices that resolve challenges also align accordingly. For example, risk is eliminated with automated testing and deployments, which replace manual testing, processes, deployments, and releases. 

The people involved in bringing builds and deployments to the cloud and making use of them instead of outdated and cumbersome enterprise systems must be given roles and a clear separation of responsibility. For example, developers can initiate the promotion of a code package to the next environment, but only a set of people other than the developers should allow it to propagate to production systems, and only with signoffs. Fortunately, this is well-understood and there is existing software such as ITSM, ITBM, ITOM and CMDB. These are fancy acronyms for situations such as:  

1. If you have a desired state you want to transition to, use a workflow,  

2. If you have a problem, open a service ticket. 

3. If you want orchestration and subscribe to events, use events monitoring and alerts. 

4. If you want a logical model of the inventory, use a configuration management database. 

Almost all IT businesses are concerned about ITOM such as with alerts and events, ITSM such as with incidents and service requests, and intelligence in operations. The only difference is that they have not been used or made available for our stated purposes, but this is still a good start. 

The process that needs to be streamlined is unprecedented at this scale and sensitivity. The unnecessary control points, waste and overhead must be removed, and usability must be one of the foremost considerations for improving adoption. 

The technology is inherently different between cloud and enterprise. While they have a lot in common when it comes to principles of storage, computing and networking, the division and organization in the cloud has many more knobs and levers that require due diligence. 

These concerns around people, process and technology are what distinguish this landscape and make it so fertile for improvements.


Thursday, August 29, 2024

 

Technical Debt in IaC:

A case study might be a great introduction to this subject.  A team in an enterprise wanted to set up a new network in compliance with the security standards of the organization and migrate resources from the existing network to the new one. When they started out allocating subnets from the virtual network address space and deploying the first few resources such as an analytical workspace and its dependencies, they found that the exact same provisioning method used for the old network did not create a resource that was on par with the functionality of the old one. For example, a compute instance could not be provisioned into the workspace in the new subnet because there was an error message that said, “could not get workspace info, please check the virtual network and associated rules”. It turned out that the subnets were created with an old version of their definition from the IaC provider and lacked the new settings that were introduced more recently and were required for compatibility with the recent workspace definitions also published by the same IaC provider. The documentation on the IaC provider’s website suggests that the public cloud that provides those resources had introduced breaking changes and newer versions required newer definitions. This forced the team to update the subnet definition in its IaC to the most recent from the provider and redo all the allocations and deployments after a teardown. Fortunately, the resources introduced to the new virtual network were only pilots and represented a tiny fraction of the bulk of the resources supporting the workloads to migrate.

The software engineering industry is rife with versioning problems in all artifacts that are published and maintained in a registry for public consumption, ranging across types as diverse as languages, packages, libraries, jars, vulnerability definitions, and images. In IaC, the challenge is somewhat different because deployments are usually tiered and the priority and severity of a technical debt differs from case to case, with infrastructure teams maintaining a wide inventory of deployments, their constituent resources and customers. It just so happens in this example that the failures are detected early and the resolutions are narrow and specific; otherwise rehosting, and even more so restructuring, is not an easy task because it requires complex deployments and steps.

While cost estimation, ROI and planning apply as usual to any software engineering upgrade and project management, we have the advantage of breaking down deployments and their redeployments into contained boundaries so that they can be independently implemented and tested. Scoping and enumerating dependencies come with this way of handling the technical debt in IaC. A graph of dependencies between deployments can be immensely helpful to curate for the effort, both now and in the near future. A sample way of determining the order in which deployments can be redone is sketched below.
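
The sketch below illustrates one way such a dependency graph could be curated and ordered; the deployment names and edges are hypothetical, and a topological sort simply surfaces which deployments can be redeployed first and which must wait on their dependencies.

from collections import defaultdict, deque

# Hypothetical deployments and their dependencies (an edge a -> b means b depends on a).
edges = [
    ("virtual-network", "subnet"),
    ("subnet", "analytics-workspace"),
    ("subnet", "compute-instance"),
    ("analytics-workspace", "compute-instance"),
]

graph = defaultdict(list)
indegree = defaultdict(int)
nodes = set()
for upstream, downstream in edges:
    graph[upstream].append(downstream)
    indegree[downstream] += 1
    nodes.update((upstream, downstream))

# Kahn's algorithm: deployments with no unresolved dependencies can be redeployed first.
queue = deque(sorted(n for n in nodes if indegree[n] == 0))
order = []
while queue:
    node = queue.popleft()
    order.append(node)
    for nxt in graph[node]:
        indegree[nxt] -= 1
        if indegree[nxt] == 0:
            queue.append(nxt)

print(" -> ".join(order))  # virtual-network -> subnet -> analytics-workspace -> compute-instance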

Wednesday, August 28, 2024

 # REQUIRES -Version 2.0

<#

Synopsis: The following Powershell script serves as a partial example 

towards backup and restore of an AKS cluster.

The concept behind this form of BCDR solution is described here:

https://learn.microsoft.com/en-us/azure/backup/azure-kubernetes-service-cluster-backup-concept

#>

param (

    [Parameter(Mandatory=$true)][string]$resourceGroupName,

    [Parameter(Mandatory=$true)][string]$accountName,

    [Parameter(Mandatory=$true)][string]$subscriptionId,

    [Parameter(Mandatory=$true)][string]$aksClusterName,

    [Parameter(Mandatory=$true)][string]$aksClusterRG,

    [string]$backupVaultRG = "testBkpVaultRG",

    [string]$backupVaultName = "TestBkpVault",

    [string]$location = "westus",

    [string]$containerName = "backupc",

    [string]$storageAccountName = "sabackup",

    [string]$storageAccountRG = "rgbackup",

    [string]$environment = "AzureCloud"

)


Connect-AzAccount -Environment "$environment"

Set-AzContext -SubscriptionId "$subscriptionId"

$storageSetting = New-AzDataProtectionBackupVaultStorageSettingObject -Type LocallyRedundant -DataStoreType OperationalStore

New-AzDataProtectionBackupVault -ResourceGroupName $backupVaultRG -VaultName $backupVaultName -Location $location -StorageSetting $storageSetting

$TestBkpVault = Get-AzDataProtectionBackupVault -VaultName $backupVaultName

$policyDefn = Get-AzDataProtectionPolicyTemplate -DatasourceType AzureKubernetesService

$policyDefn.PolicyRule[0].Trigger | fl

<# Sample output:

ObjectType: ScheduleBasedTriggerContext
ScheduleRepeatingTimeInterval: {R/2023-04-05T13:00:00+00:00/PT4H}
TaggingCriterion: {Default}
#>

$policyDefn.PolicyRule[1].Lifecycle | fl

<# Sample output:

DeleteAfterDuration: P7D
DeleteAfterObjectType: AbsoluteDeleteOption
SourceDataStoreObjectType: DataStoreInfoBase
SourceDataStoreType: OperationalStore
TargetDataStoreCopySetting:
#>


New-AzDataProtectionBackupPolicy -ResourceGroupName $backupVaultRG -VaultName $TestBkpVault.Name -Name aksBkpPolicy -Policy $policyDefn


$aksBkpPol = Get-AzDataProtectionBackupPolicy -ResourceGroupName $backupVaultRG -VaultName $TestBkpVault.Name -Name "aksBkpPolicy"


Write-Host "Installing Extension with cli"

az k8s-extension create --name azure-aks-backup --extension-type microsoft.dataprotection.kubernetes --scope cluster --cluster-type managedClusters --cluster-name $aksClusterName --resource-group $aksClusterRG --release-train stable --configuration-settings blobContainer=$containerName storageAccount=$storageAccountName storageAccountResourceGroup=$storageAccountRG storageAccountSubscriptionId=$subscriptionId


az k8s-extension show --name azure-aks-backup --cluster-type managedClusters --cluster-name $aksClusterName --resource-group $aksClusterRG


az k8s-extension update --name azure-aks-backup --cluster-type managedClusters --cluster-name $aksClusterName --resource-group $aksClusterRG --release-train stable --config-settings blobContainer=$containerName storageAccount=$storageAccountName storageAccountResourceGroup=$storageAccountRG storageAccountSubscriptionId=$subscriptionId # [cpuLimit=1] [memoryLimit=1Gi]


az role assignment create --assignee-object-id $(az k8s-extension show --name azure-aks-backup --cluster-name $aksClusterName --resource-group $aksClusterRG --cluster-type managedClusters --query identity.principalId --output tsv) --role 'Storage Account Contributor' --scope /subscriptions/$subscriptionId/resourceGroups/$storageAccountRG/providers/Microsoft.Storage/storageAccounts/$storageAccountName


az aks trustedaccess rolebinding create `
    -g $aksClusterRG `
    --cluster-name $aksClusterName `
    -n randomRoleBindingName `
    --source-resource-id $TestBkpVault.Id `
    --roles Microsoft.DataProtection/backupVaults/backup-operator


Write-Host "This section is detailed overview of TrustedAccess"

az extension add --name aks-preview

az extension update --name aks-preview

az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"

az feature show --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"

az provider register --namespace Microsoft.ContainerService

# Create a Trusted Access RoleBinding in an AKS cluster


az aks trustedaccess rolebinding create --resource-group $aksClusterRG --cluster-name $aksClusterName -n randomRoleBindingName -s $connectedServiceResourceId --roles backup-operator,backup-contributor #,Microsoft.Compute/virtualMachineScaleSets/test-node-reader,Microsoft.Compute/virtualMachineScaleSets/test-admin



Write-Host "Update an existing Trusted Access Role Binding with new roles"

# Update RoleBinding command


az aks trustedaccess rolebinding update --resource-group $aksClusterRG --cluster-name $aksClusterName -n randomRoleBindingName  --roles backup-operator,backup-contributor



Write-Host "Configure Backup"

$sourceClusterId = "/subscriptions/$subscriptionId/resourcegroups/$aksClusterRG/providers/Microsoft.ContainerService/managedClusters/$aksClusterName"


Write-Host "Snapshot resource group"

$snapshotRG = "/subscriptions/$subscriptionId/resourcegroups/snapshotrg"


Write-Host "The configuration of backup is performed in two steps"

$backupConfig = New-AzDataProtectionBackupConfigurationClientObject -SnapshotVolume $true -IncludeClusterScopeResource $true -DatasourceType AzureKubernetesService -LabelSelector "env=$environment"

$backupInstance = Initialize-AzDataProtectionBackupInstance -DatasourceType AzureKubernetesService -DatasourceLocation $location -PolicyId $aksBkpPol.Id -DatasourceId $sourceClusterId -SnapshotResourceGroupId $snapshotRG -FriendlyName "Backup of AKS Cluster $aksClusterName" -BackupConfiguration $backupConfig


Write-Host "Assign required permissions and validate"

$aksCluster = $(Get-AzAksCluster -Id $sourceClusterId)

Set-AzDataProtectionMSIPermission -BackupInstance $backupInstance -VaultResourceGroup $backupVaultRG -VaultName $backupVaultName -PermissionsScope "ResourceGroup"

Test-AzDataProtectionBackupInstanceReadiness -ResourceGroupName $backupVaultRG -VaultName $backupVaultName -BackupInstance $backupInstance.Property


Write-Host "Protect the AKS cluster"

New-AzDataProtectionBackupInstance -ResourceGroupName $backupVaultRG -VaultName $TestBkpVault.Name -BackupInstance $backupInstance


Write-Host "Run on-demand backup"

$instance = Get-AzDataProtectionBackupInstance -SubscriptionId $subscriptionId -ResourceGroupName $backupVaultRG -VaultName $TestBkpVault.Name -Name $aksClusterName


Write-Host "Specify Retention Rule"

$policyDefn.PolicyRule | fl

<# Sample output:

BackupParameter: Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.AzureBackupParams
BackupParameterObjectType: AzureBackupParams
DataStoreObjectType: DataStoreInfoBase
DataStoreType: OperationalStore
Name: BackupHourly
ObjectType: AzureBackupRule
Trigger: Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.ScheduleBasedTriggerContext
TriggerObjectType: ScheduleBasedTriggerContext
IsDefault: True
Lifecycle: {Microsoft.Azure.PowerShell.Cmdlets.DataProtection.Models.Api20210201Preview.SourceLifeCycle}
Name: Default
ObjectType: AzureRetentionRule
#>


Write-Host "Trigger on-demand backup"

$AllInstances = Get-AzDataProtectionBackupInstance -ResourceGroupName $backupVaultRG -VaultName $TestBkpVault.Name


Backup-AzDataProtectionBackupInstanceAdhoc -BackupInstanceName $AllInstances[0].Name -ResourceGroupName $backupVaultRG -VaultName $TestBkpVault.Name -BackupRuleOptionRuleName "Default"


Write-Host "Tracking all the backup jobs"

$job = Search-AzDataProtectionJobInAzGraph -Subscription $subscriptionId -ResourceGroupName $backupVaultRG -Vault $TestBkpVault.Name -DatasourceType AzureKubernetesService -Operation OnDemandBackup


Tuesday, August 27, 2024

 Subarray Sum equals K 

Given an array of integers nums and an integer k, return the total number of subarrays whose sum equals to k. 

A subarray is a contiguous non-empty sequence of elements within an array. 

Example 1: 

Input: nums = [1,1,1], k = 2 

Output: 2 

Example 2: 

Input: nums = [1,2,3], k = 3 

Output: 2 

Constraints: 

1 <= nums.length <= 2 * 10^4 

-1000 <= nums[i] <= 1000 

-10^7 <= k <= 10^7 

 

import java.util.HashMap;

class Solution {
    public int subarraySum(int[] numbers, int sum) {
        int result = 0;
        int current = 0;
        // Map of prefix sum -> number of times that prefix sum has been seen.
        HashMap<Integer, Integer> sumMap = new HashMap<>();
        sumMap.put(0, 1);
        for (int i = 0; i < numbers.length; i++) {
            current += numbers[i];
            // Every earlier prefix equal to current - sum closes a subarray that sums to sum.
            if (sumMap.containsKey(current - sum)) {
                result += sumMap.get(current - sum);
            }
            sumMap.put(current, sumMap.getOrDefault(current, 0) + 1);
        }
        return result;
    }
}

 

[1,3], k=1 => 1 

[1,3], k=3 => 1 

[1,3], k=4 => 1 

[2,2], k=4 => 1 

[2,2], k=2 => 2 

[2,0,2], k=2 => 4 

[0,0,1], k=1=> 3 

[0,1,0], k=1=> 4 

[0,1,1], k=1=> 3 

[1,0,0], k=1=> 3 

[1,0,1], k=1=> 4 

[1,1,0], k=1=> 3 

[1,1,1], k=1=> 3 

[-1,0,1], k=0 => 2 

[-1,1,0], k=0 => 3 

[1,0,-1], k=0 => 2 

[1,-1,0], k=0 => 3 

[0,-1,1], k=0 => 3 

[0,1,-1], k=0 => 3 

 

 

Alternative:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Solution {
    public int subarraySum(int[] numbers, int sum) {
        int result = 0;
        int current = 0;
        // Running list of prefix sums seen so far; the leading 0 stands for the empty prefix.
        List<Integer> prefixSums = new ArrayList<>();
        prefixSums.add(0);
        for (int i = 0; i < numbers.length; i++) {
            current += numbers[i];
            // Count every earlier prefix that differs from the running sum by exactly sum.
            result += Collections.frequency(prefixSums, current - sum);
            prefixSums.add(current);
        }
        return result;
    }
}


Sample: targetSum = -3; Answer: 1

Numbers: 2, 2, -4, 1, 1, 2

prefixSum:  2, 4,  0, 1, 2, 4


Monday, August 26, 2024

 This section of a series of articles on drone information management explores the non-linear dependencies involved in the flight path management of individual units of a drone fleet. Treating subgrids as nodes in a graph is not new, and previous approaches have leveraged depth-first graph traversal to discover a topological sort of these nodes. However, a drone fleet that does not know the landscape must explore and build the aforementioned graph incrementally, though it can accumulate the learnings via state recall. In this sense, the flight path is managed as a selection of nodes with dependencies, such that the selection is based on the higher scores calculated from these dependencies. A linear relationship between dependencies implies a page-ranking algorithm for the selection of candidates. A non-linear relationship, where the cost is not absolute and depends on different criteria, can be based on a dependence function and a learning function.
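
For the linear case mentioned above, candidate selection reduces to a PageRank-style scoring of the subgrid nodes. A minimal sketch follows; the adjacency matrix is made up for illustration, and the non-linear formulation with a dependence and a learning function is developed next.

import numpy as np

# Hypothetical subgrid-to-subgrid dependencies discovered during exploration.
adjacency = np.array([
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

out_degree = adjacency.sum(axis=1, keepdims=True)
transition = adjacency / np.where(out_degree == 0, 1, out_degree)  # row-stochastic transition matrix

damping = 0.85
n = adjacency.shape[0]
scores = np.full(n, 1.0 / n)
for _ in range(100):                              # power iteration
    updated = (1 - damping) / n + damping * transition.T @ scores
    delta = np.abs(updated - scores).sum()
    scores = updated
    if delta < 1e-9:
        break

print(np.argsort(-scores))                        # subgrid nodes ranked by score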

A vector x_n, belonging to a vector space R, is assigned to each unit n from the global set of units N and is called the state of that unit. The state is a collective representation of unit n derived from its neighborhood:

   x_n = Σ_{u ∈ ne[n]} h_w(l_n, x_u, l_u),   n ∈ N

where h_w is a feed-forward neural network that expresses the dependence of a unit on its neighborhood and is parameterized by a set of weights w, and l_n denotes the label of unit n.

The state x_n is obtained as the solution of a system defined by two functions:

1. A dependence function, and

2. A learning function.

The dependence function produces an output o_n, also belonging to a vector space R, which depends on the state x_n and the label l_n. It uses an output network g_w and is written as:

   o_n = g_w(x_n, l_n),   n ∈ N

The learning function is one that minimizes errors, and this error function can be some variation of the sum-of-squares error function.

The solution x_n, o_n can be obtained by iterating the above two equations over epochs. Iterations on transition networks converge exponentially when used with some form of fixed-point method such as Jacobi iteration.

The Jacobi eigenvalue algorithm, in turn, yields eigenvalues and eigenvectors.
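
A minimal sketch of that iteration is below, assuming linear stand-ins for h_w and g_w, a made-up adjacency matrix, and random labels; it runs the Jacobi-style fixed-point update until the states stop changing.

import numpy as np

# Illustrative only: linear h_w and g_w with a random neighborhood structure
# stand in for the learned networks described above.
rng = np.random.default_rng(0)
num_units, state_dim, label_dim = 5, 4, 3

adjacency = (rng.random((num_units, num_units)) < 0.4).astype(float)
np.fill_diagonal(adjacency, 0.0)                  # no self-dependence
row_sums = adjacency.sum(axis=1, keepdims=True)
adjacency = adjacency / np.maximum(row_sums, 1)   # average over the neighborhood to keep the map contractive
labels = rng.normal(size=(num_units, label_dim))  # l_n for every unit

W_state = 0.1 * rng.normal(size=(state_dim, state_dim))  # small weights so the iteration converges
W_label = 0.1 * rng.normal(size=(state_dim, label_dim))
W_out = rng.normal(size=(1, state_dim))

states = np.zeros((num_units, state_dim))
for epoch in range(100):
    prev = states.copy()
    # x_n = aggregate over neighbors u of h_w(x_u, l_u), here a linear map of (x_u, l_u)
    states = adjacency @ (prev @ W_state.T + labels @ W_label.T)
    if np.max(np.abs(states - prev)) < 1e-6:
        break

outputs = states @ W_out.T                        # o_n = g_w(x_n), label term omitted for brevity
print(epoch, outputs.ravel())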


Sunday, August 25, 2024

 Problem: Design a parking lot

Solution:

public class ParkingLot 

{

    Vector<ParkingSpace> vacantParkingSpaces = new Vector<ParkingSpace>();

    Vector<ParkingSpace> fullParkingSpaces = new Vector<ParkingSpace>();


    int parkingSpaceCount = 0;


    boolean isFull;

    boolean isEmpty;


    ParkingSpace findNearestVacant(ParkingType type)

    {

        Iterator<ParkingSpace> itr = vacantParkingSpaces.iterator();


        while(itr.hasNext())

        {

            ParkingSpace parkingSpace = itr.next();


            if(parkingSpace.parkingType == type)

            {

                return parkingSpace;

            }

        }

        return null;

    }


    void parkVehicle(ParkingType type, Vehicle vehicle)

    {

        if(!isFull())

        {

            ParkingSpace parkingSpace = findNearestVacant(type);


            if(parkingSpace != null)

            {

                parkingSpace.vehicle = vehicle;

                parkingSpace.isVacant = false;


                vacantParkingSpaces.remove(parkingSpace);

                fullParkingSpaces.add(parkingSpace);


                if(fullParkingSpaces.size() == parkingSpaceCount)

                    isFull = true;


                isEmpty = false;

            }

        }

    }


    void releaseVehicle(Vehicle vehicle)

    {

        if(!isEmpty())

        {

            Iterator<ParkingSpace> itr = fullParkingSpaces.iterator();


            while(itr.hasNext())

            {

                ParkingSpace parkingSpace = itr.next();


                if(parkingSpace.vehicle.equals(vehicle))

                {

                    fullParkingSpaces.remove(parkingSpace);

                    vacantParkingSpaces.add(parkingSpace);


                    parkingSpace.isVacant = true;

                    parkingSpace.vehicle = null;


                    if(vacantParkingSpaces.size() == parkingSpaceCount)

                        isEmpty = true;


                    isFull = false;

                }

            }

        }

    }


    boolean isFull()

    {

        return isFull;

    }


    boolean isEmpty()

    {

        return isEmpty;

    }

}


public class ParkingSpace 

{

    boolean isVacant;

    Vehicle vehicle;

    ParkingType parkingType;

    int distance;

}


public class Vehicle 

{

    int num;

}


public enum ParkingType

{

    REGULAR,

    HANDICAPPED,

    COMPACT,

    MAX_PARKING_TYPE,

}


Reference: https://1drv.ms/w/s!Ashlm-Nw-wnWhPNF_hc6CSSXzigYww?e=4dqi2m 


 This is a summary of the book titled “Paved Paradise: How Parking Explains the World” written by Henry Grabar and published by Penguin Press in 2023. It is a detailed and humorous compilation of the history of American parking and its modern solutions. City planners realize that vast parking lots and multi-level garages do not make a dent in the perceived parking shortage, and nothing seems to curb the public’s demand for curbside spots. Instead, they question the habits that draw people toward parking and offer new solutions. Drivers struggle to find a good parking spot, and cities have been contending with parking shortages for as long as cars have plied the roads. The parking-focused approach to city planning has also not worked. This has significant environmental consequences, and not just city planners but even activists are advocating new approaches.

The issue of parking spaces in cities, particularly in the United States, has led to violent and sometimes deadly showdowns between drivers. Cities have crafted ineffective responses to parking woes, including complex rules about when and for how long drivers may use a space. Municipalities seek to ease these challenges by requiring new buildings to provide a minimum number of parking spaces according to the size and function of each building. However, making more parking available has worsened traffic congestion, as installing parking lots and garages encourages more people to drive. Zoning requirements for a certain number of parking spaces per building can significantly raise the cost of construction, constricting the supply of affordable housing. Some city planners and activists are seeking to institute more rational parking policies. Cities have contended with perceived parking shortages for nearly as long as automobiles have existed.

Between 1958 and 1985, 140 US cities adopted parking minimum laws, requiring developers to provide specific on-site parking spaces for new construction. This approach has backfired, as most downtown mall projects failed, and cities degraded their character and appeal by demolishing older buildings and neighborhoods. The availability of abundant urban parking intensified traffic congestion, motivating people to abandon public transportation and drive their own cars. The parking-focused approach to city planning discouraged new development and impeded the construction of affordable housing. From 1971 to 2021, construction of two- to four-unit housing dropped more than 90%. Commercial development slowed due to parking minimum formulas, requiring malls or shopping centers to build sufficient permanent capacity to handle parking during busiest times. Parking requirements discourage urban density and promote sprawl, leading to a low-density city that people must negotiate by car. 

Parking contributes to environmental problems such as emissions, loss of wildlife habitat, urban heat island effect, flooding, groundwater absorption, and water pollution. Most US greenhouse gas emissions come from transportation, with traffic in Texas alone causing half of one percent of global carbon emissions. Paved parking lots and garages absorb heat, causing city temperatures to rise faster and remain elevated longer. Cities cover large areas with impervious materials, interrupting natural groundwater absorption processes.

Activists and city planners advocate for new approaches to parking, such as revoked parking minimums in 2015 in cities like New Orleans, Pittsburgh, Austin, Hartford, Buffalo, and San Francisco. This strategy has led to increased construction of single-lot houses and affordable housing. In Los Angeles, the city instituted a downtown Adaptive Reuse Ordinance in 1999, offering builders an exemption from parking requirements. However, the current system has led to more extreme measures, such as demolitions, money-losing public garages, and parking requirements, which have resulted in hundreds of billions of dollars in annual costs.

Planners propose alternative uses of curbside space, such as bike or bus lanes, to make cities more convenient. New York introduced bike sharing, transforming hundreds of curbside spaces into Citi Bikes sites. Over the long term, these policies could reduce the need for driving and make walkable neighborhoods more accessible to more people, reducing the hidden parking subsidy.

References: 

1. Previous book summary: https://1drv.ms/w/s!Ashlm-Nw-wnWhPMqgW00GRBjcefBNQ?e=PzrTbd    

2. ParkingSoftwareImplementation.docx: https://1drv.ms/w/s!Ashlm-Nw-wnWhMY77xvhyatq2qIKFA?e=RZxERO

3. SummarizerCodeSnippets.docx 

Friday, August 23, 2024

 Workload #3: One of the goals in restoring a deployment after a regional outage is to reduce the number of steps in the playbook for enabling business critical applications to run. Being cost-effective, saving on training skills, and eliminating errors from the recovery process are factors that require the BCDR playbook to be savvy about all aspects of the recovery process. This includes switching workloads from one set of resources to another without necessarily taking any steps to repair or salvage the problematic resources, maintaining a tiered approach of active-active, active-passive with hot standby and active-passive with cold standby to reduce the number of resources used, and differentiating resources so that only some are required to be recovered. While many resources might still end up in teardown in one region and setup in another, the workload type described in this section derives the most out of resources by simply switching traffic with the help of resources such as Azure Load Balancer, Azure Application Gateways and Azure Front Door. Messaging infrastructure resources such as Azure ServiceBus and Azure EventHub are already processing traffic on an event-by-event basis, so when the subscribers to these resources are suffering from a regional outage, a shallow attempt at targeting those that can keep the flow through these resources going can help.  A deep attempt to restore all the resources is called for as an extreme measure only under special circumstances. This way, there is optimum use of time and effort in the recovery.

Reference: 

1. Business Continuity and Disaster Recovery.docx

2. BCDRBestPractices.docx

3. DRTerminology.docx

4. BCDRPatterns.docx


Thursday, August 22, 2024

 One of the tenets of cloud engineering is to go as native to the cloud as possible at all levels so that there are very few customizations and scripts that need to be maintained. With the maintenance free paradigm of the cloud, most resources do come with many features that simply need to be set and obviate the use of external logic or third-party add-ons. Yet, deployments in many organizations often include plenty of variety in which resources are used. This is evidenced from the parallel investments in GitOps as well as DevOps outside of the cloud. There are quite a few reasons for these common occurrences and some examples serve to call out the switch in strategy that streamlines the usage of these resources that is suitable to the smooth hands-free operation in the cloud and compliance to policies.

In fact, the first call-out is merely that. When the resources and the investments made in the deployed solution are visible to cloud management and governance, continual evaluation, operational streamlining, and best-practice conformance become possible by virtue of the policies recommended by the cloud provider. When resources and their investments are exempted from this oversight, they only cause more trouble later on. Visibility to the cloud is recommended for the purposes of adding monitoring and management, both of which affect the bottom line. Some organizations even go to the lengths of adding their own policies, and while there is no right or wrong about that, the cost of what goes unrecognized is always an unknown, and when that cost grows out of proportion it is, by the same argument, also an unknown.

Another example of waste is when resources are created via IaC and are conveniently removed from the state maintained by the IaC pipelines as well as exempted from the cloud policies. When this is done, organizations tend to tout IaC for what it is aware of and how best practices are continually met and costs kept in check, but the bookkeeping is skewed and, again, a convenient way is found to shelter investments. This jeopardizes the overall efficiency the organization wishes to achieve, and small fiefdoms will tend to run away with reports that could otherwise all have been consistent. In fact, there are many creative ways in which departments within organizations can tweak the dashboards, but a benevolent central control might be better than decentralizing everything, especially when costs roll up.

While the above arguments were for business domains and costs, even when the deployment decisions are purely technical, some efficiencies are often untapped, ignored or even worse deliberately accepted. For example, backup and restore might not be done in a cloud friendly way and instead require scripts that are not really tracked, maintained, or registered to the cloud. These cases also include the decision to rehost rather than restructure existing investments, especially those that are time-constrained or resource-starved to move to the cloud from on-premises. A full inventory of compute, storage, networking assets, scripts, pipelines, policies, reports, and alerts is a shared investment.


Previous articles: IaCResolutionsPart156.docx


Wednesday, August 21, 2024

 

Ownership:

When deployments become complex, the maintenance of their IaC calls for human resource allocations. While there are many factors that are significant to the planning of this allocation, one of the characteristics of the deployments is that there is repeated copy-and-paste involved across different deployment stamps. When a person is allowed to focus on one set of resources within a stamp in a subscription, very few changes are required, which avoids the inadvertent errors that come from copy-and-paste across subscriptions. Every resource is named with a convention, and complex deployments increase the number of edits made across resources. When there are name variations introduced by the lines of business to differentiate deployment stamps and resources, even a modification of a resource across subscription channels involves more copying than when everything is self-contained within a subscription.

Another characteristic is that public cloud resource types require in-depth knowledge of how they work, and some of them have such sophisticated feature sets that it takes a while before a definition in the IaC for that resource type becomes the norm for deployment. It is in this regard that cloud engineering expertise in certain resource types becomes a sought-after skill for many teams and a convenience for the infrastructure management team, which can consolidate and direct questions and support requests to the same group of individuals. Usually, two people can act as primary and secondary owners of these resource types. When the resource type is complex, such as analytics workspaces that come with their own compute and storage ecosystem, designating pairs of individuals, if not more, can help bring industry and community perspectives to the team via trainings and conferences.

A third characteristic of working public cloud deployments with IaC, from the management’s point of view, is the creation of Active Directory groups for individuals dedicated to working in owner, contributor and reader modes on deployment stamps, and making those groups universal rather than global. The difference between groups created in these two modes is that a universal group permits multi-domain environment access, and changes to its membership trigger forest-wide replication, which helps ensure that permissions remain consistent across the forest. On-premises environments have traditionally used global groups since they are domain specific, but with the migration to cloud resources, universal groups hold more appeal.

Securing access to resources via Active Directory groups also helps with the propagation of permissions and the ease of one-time registration to membership by individuals. When they leave, the access is automatically removed everywhere by the removal of membership and while this has remained true for most workplaces, it is even more pertinent when groups tend to be many for different purposes and creating well-known groups whose scope is tightly coupled to the resources they secure, help with less maintenance activities as individuals become empowered as needed to control the deployments of resources to the cloud.

 

References: 

Previous article in this series: IaCResolutionsPart156.docx

Tuesday, August 20, 2024

 Subarray Sum equals K 

Given an array of integers nums and an integer k, return the total number of subarrays whose sum equals to k. 

A subarray is a contiguous non-empty sequence of elements within an array. 

Example 1: 

Input: nums = [1,1,1], k = 2 

Output: 2 

Example 2: 

Input: nums = [1,2,3], k = 3 

Output: 2 

Constraints: 

1 <= nums.length <= 2 * 10^4 

-1000 <= nums[i] <= 1000 

-10^7 <= k <= 10^7 

 

import java.util.HashMap;

class Solution {
    public int subarraySum(int[] numbers, int sum) {
        int result = 0;
        int current = 0;
        // Map of prefix sum -> number of times that prefix sum has been seen.
        HashMap<Integer, Integer> sumMap = new HashMap<>();
        sumMap.put(0, 1);
        for (int i = 0; i < numbers.length; i++) {
            current += numbers[i];
            // Every earlier prefix equal to current - sum closes a subarray that sums to sum.
            if (sumMap.containsKey(current - sum)) {
                result += sumMap.get(current - sum);
            }
            sumMap.put(current, sumMap.getOrDefault(current, 0) + 1);
        }
        return result;
    }
}

 

[1,3], k=1 => 1 

[1,3], k=3 => 1 

[1,3], k=4 => 1 

[2,2], k=4 => 1 

[2,2], k=2 => 2 

[2,0,2], k=2 => 4 

[0,0,1], k=1=> 3 

[0,1,0], k=1=> 4 

[0,1,1], k=1=> 3 

[1,0,0], k=1=> 3 

[1,0,1], k=1=> 4 

[1,1,0], k=1=> 3 

[1,1,1], k=1=> 3 

[-1,0,1], k=0 => 2 

[-1,1,0], k=0 => 3 

[1,0,-1], k=0 => 2 

[1,-1,0], k=0 => 3 

[0,-1,1], k=0 => 3 

[0,1,-1], k=0 => 3 

 

 

Alternative:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class Solution {
    public int subarraySum(int[] numbers, int sum) {
        int result = 0;
        int current = 0;
        // Running list of prefix sums seen so far; the leading 0 stands for the empty prefix.
        List<Integer> prefixSums = new ArrayList<>();
        prefixSums.add(0);
        for (int i = 0; i < numbers.length; i++) {
            current += numbers[i];
            // Count every earlier prefix that differs from the running sum by exactly sum.
            result += Collections.frequency(prefixSums, current - sum);
            prefixSums.add(current);
        }
        return result;
    }
}


Sample: targetSum = -3; Answer: 1

Numbers: 2, 2, -4, 1, 1, 2

prefixSum:  2, 4,  0, 1, 2, 4


Alternative 3:

Use nested loops to exhaust every start index and every range length to determine the count of subarrays with the given sum, as sketched below.
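
A brief sketch of that brute-force alternative, written in Python for brevity: every start index is paired with every end index, and the running sum of the window is compared against k.

def subarray_sum_bruteforce(nums, k):
    # Exhaust every start index and extend the range one element at a time,
    # counting each window whose running sum equals k. O(n^2) time, O(1) extra space.
    count = 0
    for start in range(len(nums)):
        running = 0
        for end in range(start, len(nums)):
            running += nums[end]
            if running == k:
                count += 1
    return count

print(subarray_sum_bruteforce([1, 1, 1], 2))  # 2
print(subarray_sum_bruteforce([1, 2, 3], 3))  # 2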




Monday, August 19, 2024

 This is a summary of the book titled “Better Than Before” written by Gretchen Rubin and published by Crown in 2015. It helps us to master the habits of our everyday lives. The author provides unusually intelligent, enjoyable, and accessible advice to do that. She offers the disclaimer that no single piece of advice works for everyone, but these apply widely. 

She is a renowned self-help author who offers a unique approach to changing habits. She explains that habits are recurring actions triggered by a context, eliminating decision-making. Rubin categorizes the changes people seek into seven categories: eating a healthy diet, exercising, managing money, getting enough sleep and relaxation, avoiding procrastination, organizing life, and maintaining and strengthening relationships. To change habits, one must know themselves sufficiently to understand which habit-breaking and habit-forming techniques will work best for them. Rubin emphasizes dealing with internal and external expectations. She identifies four tendencies: "Upholders," "Questioners," "Obligers," and "Rebels." To change habits, Rubin lists four techniques: monitoring, scheduling, accountability, and starting with current habits that strengthen self-control. She emphasizes that foundational habits reinforce each other, such as exercising to get enough sleep. Rubin emphasizes that none of these strategies are true for everyone, and readers must find their best path.

Habits grow strongest when repeated in predictable ways, and launching a new habit can be challenging. Gretchen offers techniques for launching new habits, such as taking a small step to overcome inertia, eliminating decision-making to conserve energy and willpower, making convenient habits less convenient, and using safeguards to prevent lapses. Monitoring activities helps identify areas for improvement. Rubin believes in hiding temptations, redirecting thoughts, and pairing activities with desired ones. Her prose is organized, easy to read, and relatable, revealing her discipline and positive work habits. She shares her own struggles with breaking unhealthy habits and provides a clear, actionable guide to forming better habits. Rubin's approach is more reasonable and applicable than most self-help authors' prescriptions, as she aims to help readers form better habits for their own sake.

#codingexercise
Find the count of subarrays with a given target sum:
public int getCountSubarraysWithTargetSum(int[] numbers, int sum) 
{
   int result = 0;
   int current = 0;
   // Running list of prefix sums seen so far; the leading 0 stands for the empty prefix.
   List<Integer> prefixSums = new ArrayList<>();
   prefixSums.add(0);
   for (int i = 0; i < numbers.length; i++) {
      current += numbers[i];
      // Count every earlier prefix that differs from the running sum by exactly sum.
      result += Collections.frequency(prefixSums, current - sum);
      prefixSums.add(current);
   }
   return result;
}

Sunday, August 18, 2024

 This is a continuation of the BCDR articles on strategies by workloads:

The Azure public cloud provides native capabilities for the purposes of business continuity and disaster recovery, some of which are built into the features of the resource types used for the workload. Aside from features within the resource type to reduce RTO/RPO (for a discussion of the terms used throughout the BCDR literature, please see the references), there are dedicated resources such as Azure Backup, Azure Site Recovery, and various data migration services such as Azure Data Factory and Azure Database Migration Service that provide a wizard for configuring BCDR policies, which are usually specified in a file-and-forget way. Finally, customizations are possible outside of those available from the features of the resource types and the BCDR resources, and these can be maintained with Azure DevOps.

Organizations may find that they can be more efficient and cost-effective by taking a coarser approach at a deployment stamp level higher than the native cloud resource level and one that is tailored to their workload. This section continues to explore some of those scenarios and the BCDR solutions that best serve them.

Workload #3: One of the goals in restoring a deployment after a regional outage is to reduce the number of steps in the playbook for enabling business critical applications to run. Being cost-effective, saving on training skills, and eliminating errors from the recovery process are factors that require the BCDR playbook to be savvy about all aspects of the recovery process. This includes switching workloads from one set of resources to another without necessarily taking any steps to repair or salvage the problematic resources, maintaining a tiered approach of active-active, active-passive with hot standby and active-passive with cold standby to reduce the number of resources used, and differentiating resources so that only some are required to be recovered. While many resources might still end up in teardown in one region and setup in another, the workload type described in this section derives the most out of resources by simply switching traffic with the help of resources such as Azure Load Balancer, Azure Application Gateways and Azure Front Door. Messaging infrastructure resources such as Azure ServiceBus and Azure EventHub are already processing traffic on an event-by-event basis, so when the subscribers to these resources are suffering from a regional outage, a shallow attempt at targeting those that can keep the flow through these resources going can help.  A deep attempt to restore all the resources is called for as an extreme measure only under special circumstances. This way, there is optimum use of time and effort in the recovery.

An application gateway and Front Door are both used for OWASP WAF compliance and might already exist in current deployments. With slight differences between the two, both can be leveraged to switch traffic to an alternate deployment, but only one of them is preferred for switching to a different region. Front Door has the capability to register a unique domain per backend pool member so that the application receives all traffic addressed to the domain at the root “/” path as if it were sent to it directly. It also comes with the ability to switch regions, such as between Central US and East US 2. Application Gateway, on the other hand, is pretty much regional, with one instance per region. Both can be confined to a region by directing all traffic between their frontend and backends through the same virtual network. Networking infrastructure is probably the biggest investment that needs to be made up front for BCDR planning because each virtual network is specific to a region. Having the network up and running allows resources to be created on demand, so that the entire deployment for another region can be created only when needed. As such, an Azure Application Gateway or Front Door must be considered a part of the workload along with the other app services and planned for migration.

Saturday, August 17, 2024

Problem: Count the number of ways to climb up a staircase of n steps when each move can be either 1 or 2 steps.

Solution: int getCount(int n)

{

    int [] dp = new int[n+2];

    dp [0] = 0;

    dp [1] = 1;

    dp [2] = 2;

    for (int k = 3; k <= n; k++) {

                 dp [k] = dp [k-1] + dp [k-2];

    }

   return dp [n];

}


Problem: Rotate an n x n matrix by 90 degrees clockwise, in place:

Solution: 

static void matrixRotate(int[][] A, int r0, int c0, int rt, int ct)
        {
            // Rotate the ring of the square matrix bounded by rows r0..rt and columns c0..ct
            // by 90 degrees clockwise, then recurse on the next inner ring.
            if (r0 >= rt) return;

            if (c0 >= ct) return;

            // Save the top row of the current ring before it is overwritten.
            var top = new int[ct - c0 + 1];

            for (int j = c0; j <= ct; j++) {

                  top[j - c0] = A[r0][j];

            }

            // Top row <- left column (read from bottom to top).
            for (int j = ct; j >= c0; j--)

            A[r0][j] = A[c0 + ct - j][c0];

            // Left column <- bottom row (read from left to right).
            for (int i = r0; i <= rt; i++)

            A[i][c0] = A[rt][i];

            // Bottom row <- right column (read from bottom to top).
            for (int j = c0; j <= ct; j++)

            A[rt][j] = A[c0 + ct - j][ct];

            // Right column <- saved top row (read from left to right).
            for (int i = r0; i <= rt; i++)

            A[i][ct] = top[i - c0];

            matrixRotate(A, r0 + 1, c0 + 1, rt - 1, ct - 1);

        }

 

 

 

// Before:

1 2 3

4 5 6

7 8 9

 

 

 

// After:

7 4 1

8 5 2

9 6 3

 

// Before

1 2

3 4

// After

3 1

4 2


Friday, August 16, 2024

 This is a summary of the book titled “Better Business Speech” written by Paul Geiger and published by Rowman and Littlefield Publishing Group Inc in 2017. The book comes from a voice coach and public speaking expert who provides confidence-boosting tutorials about speech preparation, including vocalization and breathing exercises. His techniques, tips and shortcuts apply widely to various public speaking scenarios but are all the more pertinent to the workplace. He suggests keeping the message short, controlling breathing, preparing for and connecting with the audience, presenting one’s ideas better by drawing attention to what drives results, avoiding presentation traps and challenges, balancing focus with slowing down, and ultimately gaining trust. One could even listen to one’s own voice to fix what might sound jarring or offbeat. Our breath is what pours power into our presentation.

Great public speaking requires controlled breathing and a concise message. Being authentic and physically and mentally ready to contribute are crucial for delivering a confident and poised speech. Two techniques to achieve this include preparing and connecting in meetings, maintaining eye contact, and creating a memorable slogan.

Before speaking, prepare yourself by composing comments, taking deep breaths, and standing tall. Make concise, vivid statements to command attention and avoid off-topic points. Channel the energy of attention, maintaining eye contact 80% of the time while listening and 50% during speaking.

Speak slowly and deliberately to demonstrate confidence and steadfastness. Create a memorable slogan that succinctly summarizes your main point, allowing you to connect with your audience. Create your slogan by brainstorming, interviewing yourself, and being bold and brief. By following these techniques, you can deliver a speech with confidence, poise, and composure.

To avoid presentation traps and challenges, focus, practice, slow down, and keep it short. Limit your presentation to three main points and rely on your slogan for clarity. Avoid speeding up and keep it concise to connect with your audience. Master your presentation by rehearsing, thinking on your feet, and polishing slogans. Stay focused by speaking with deliberation, being authentic, and rehearsing out loud.

Build trust during sales calls by discovering common interests, watching body language, and maintaining a warm expression. Focus on responses, avoid overly enthusiastic or fake responses, and be yourself. Trustworthiness is essential for making a sale, and building trust during sales meetings can be achieved through research, personalized responses, and careful body language. Remember to commit to your words and not play it safe.

To improve sales results, learn and recognize the steps of a proper sales presentation. The persuasion process should include liveliness, precision, security, assuredness, progression, and influence. Listen to your voice and address any discomfort. Rapid speakers may mistakenly link fast speech to intelligence or excitement, but this can lead to negative feedback. Fast speech can be patronizing, domineering, or lacking control. Factors contributing to hurried speech include discomfort, lack of breath control, and poor body language. Adequate oxygen levels help manage the pace of your speech. Good body language is essential for a full-body experience. Other challenging issues may hamper public speaking, such as a thin, soft, or faint voice, nasal or brash tones, stuttering, or confusion or insecurity. Addressing these issues can help you create a more persuasive presentation and increase sales results.

To improve vocal skills, speakers can practice mindfulness and deliberate speech through daily conversations, deep breathing exercises, diaphragm use, and vowel combinations. Vocalization exercises can relax the lower face, slow speech, and regulate breathing. Exercises can also help cure extreme nasal tones, improve tonal qualities, and help with speech preparation. Additionally, practicing and preparing presentations can help overcome vocal problems such as rapid speech, spiking tones, and uneven delivery.

SummarizerCodeSnippets.docx: https://1drv.ms/w/s!Ashlm-Nw-wnWhOYMyD1A8aq_fBqraA?e=9G8HD9



Thursday, August 15, 2024

 Workload #2: An Azure Kubernetes Service (AKS) instance, which serves the rehosting of on-premises apps and services more than the restructuring that workload #1 serves. In this case there is more consolidation, and there is also a significant encumbrance from the now so-called “traditional” way of hosting applications: the logic that had become part of the kube-api server, along with the data and state saved to persistent volume claims, must now become part of the BCDR protection. A sample architecture that serves this workload is shown in the accompanying diagram:



In this case, the BCDR can follow the pattern for AKS called out in the best practice patterns specific to this resource type.

Wednesday, August 14, 2024

Understanding Workloads for business continuity and disaster recovery (aka BCDR), continued

 One aspect that is not often called out is that these app services must be protected by a web application firewall that conforms to the OWASP specifications. This is addressed with the use of an Application Gateway or Azure Front Door. With slight differences between the two, both can be leveraged to switch traffic to an alternate deployment, but only one of them is preferred for switching to a different region. Front Door has the capability to register a unique domain per backend pool member, so that the application receives all traffic addressed to the domain at the root “/” path as if it were sent to it directly. It also comes with the ability to switch regions, such as between Central US and East US 2. Application Gateway, on the other hand, is essentially regional, with one instance per region. Both can be confined to a region by directing all traffic between their frontends and backends through the same virtual network. Networking infrastructure is probably the biggest investment that needs to be made up front for BCDR planning, because each virtual network is specific to a region. Having the network up and running allows resources to be created on demand, so that the entire deployment for another region can be created only when needed. As such, an Azure Application Gateway or Front Door must be considered part of the workload, along with the other app services, and planned for migration.

Workload #3: Analytical workspaces. As with most data science efforts, some deployments must be interactive while others can be scheduled to run non-interactively. Examples of these workspaces include Azure Databricks and Azure Machine Learning. The characteristic of this kind of workload is that the workspaces are veritable ecosystems by themselves, relying heavily on compute and externalizing storage. Many workspaces come with external storage accounts, databases and Snowflake warehouses. Another characteristic is that these resources often require both public and private plane connectivity, so a workspace created in another region must re-establish connectivity to all of its dependencies, including but not limited to private and public source depots, container image repositories, and external databases and warehouses; and because those dependencies live in different virtual networks, new private endpoints from those virtual networks become necessary. Just like AKS, the previous workload discussed above, manifesting all the dependencies that have accrued over time can be difficult when they are not captured in IaC. More importantly, it is the diverse set of artifacts the workspace makes use of, such as experiments, models, jobs and pipelines, that may live as objects in the workspace catalog; importing and exporting those objects to another workspace might not pan out the way IaC does. With a diverse and distinct set of notebooks from different users and their associated dependencies, even listing them can be hard, much less migrating them to a new region. Users can only be encouraged to leverage the Unity Catalog and to keep all artifacts under version control external to the workspace, but those lack the rigor of databases. That said, spinning up a new workspace and re-connecting the different data stores gives users a way to be selective in what they bring to the new workspace.

Tuesday, August 13, 2024

 Problem Statement: A 0-indexed integer array nums is given.

You are allowed to swap adjacent elements of nums.

A valid array meets the following conditions:

The largest element (any of the largest elements if there are multiple) is at the rightmost position in the array.

The smallest element (any of the smallest elements if there are multiple) is at the leftmost position in the array.

Return the minimum swaps required to make nums a valid array.

 

Example 1:

Input: nums = [3,4,5,5,3,1]

Output: 6

Explanation: Perform the following swaps:

- Swap 1: Swap the 3rd and 4th elements, nums is then [3,4,5,3,5,1].

- Swap 2: Swap the 4th and 5th elements, nums is then [3,4,5,3,1,5].

- Swap 3: Swap the 3rd and 4th elements, nums is then [3,4,5,1,3,5].

- Swap 4: Swap the 2nd and 3rd elements, nums is then [3,4,1,5,3,5].

- Swap 5: Swap the 1st and 2nd elements, nums is then [3,1,4,5,3,5].

- Swap 6: Swap the 0th and 1st elements, nums is then [1,3,4,5,3,5].

It can be shown that 6 swaps is the minimum swaps required to make a valid array.

Example 2:

Input: nums = [9]

Output: 0

Explanation: The array is already valid, so we return 0.

 

Constraints:

1 <= nums.length <= 10^5

1 <= nums[i] <= 10^5

Solution: 

import java.util.Arrays;
import java.util.stream.Collectors;

class Solution {

    public int minimumSwaps(int[] nums) {
        int min = Arrays.stream(nums).min().getAsInt();
        int max = Arrays.stream(nums).max().getAsInt();
        int count = 0;
        // Loop until the smallest value is at index 0 AND the largest value is at the last index;
        // the count guard bounds the work in case both are already in place.
        while ((nums[0] != min || nums[nums.length - 1] != max) && count < 2 * nums.length) {
            // Bubble the last occurrence of the maximum to the right end.
            var numsList = Arrays.stream(nums).boxed().collect(Collectors.toList());
            var end = numsList.lastIndexOf(max);
            for (int i = end; i < nums.length - 1; i++) {
                swap(nums, i, i + 1);
                count++;
            }

            // Bubble the first occurrence of the minimum to the left end.
            numsList = Arrays.stream(nums).boxed().collect(Collectors.toList());
            var start = numsList.indexOf(min);
            for (int j = start; j >= 1; j--) {
                swap(nums, j, j - 1);
                count++;
            }
        }

        return count;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[j];
        nums[j] = nums[i];
        nums[i] = temp;
    }
}


Input: nums = [3,4,5,5,3,1]
Output: 6
Expected: 6

Input: nums = [9]
Output: 0
Expected: 0
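
For comparison, the answer can also be computed in O(n) without simulating any swaps: the first occurrence of the minimum needs minIdx swaps to reach the front, the last occurrence of the maximum needs (n - 1 - maxIdx) swaps to reach the back, and one swap is saved when the minimum starts to the right of the maximum because their paths cross. A minimal sketch under those assumptions (the class name below is illustrative, not part of the submitted solution):

class MinimumSwapsDirect {
    // Count swaps directly from the positions of the first minimum and the last maximum.
    public static int minimumSwaps(int[] nums) {
        int n = nums.length;
        int minIdx = 0;   // first occurrence of the smallest value
        int maxIdx = 0;   // last occurrence of the largest value
        for (int i = 0; i < n; i++) {
            if (nums[i] < nums[minIdx]) minIdx = i;
            if (nums[i] >= nums[maxIdx]) maxIdx = i;
        }
        int swaps = minIdx + (n - 1 - maxIdx);
        // When the minimum sits to the right of the maximum, one adjacent swap moves both toward place.
        return (minIdx > maxIdx) ? swaps - 1 : swaps;
    }

    public static void main(String[] args) {
        System.out.println(minimumSwaps(new int[]{3, 4, 5, 5, 3, 1})); // 6
        System.out.println(minimumSwaps(new int[]{9}));                // 0
    }
}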


Monday, August 12, 2024

 Understanding Workloads for business continuity and disaster recovery (aka BCDR).

The Azure public cloud provides native capabilities for business continuity and disaster recovery, some of which are built into the features of the resource types used for the workload. Aside from the features within a resource type that reduce RTO/RPO (see the references for a discussion of the terms used throughout the BCDR literature), there are dedicated resources such as Azure Backup, Azure Site Recovery and various data migration services such as Azure Data Factory and Azure Database Migration Service that provide a wizard for configuring BCDR policies, which are usually specified in a set-and-forget way. Finally, there are customizations possible outside of those available from the features of the resource types and the BCDR resources, and these can be maintained with Azure DevOps.

Organizations may find that they can be more efficient and cost-effective by taking a coarser approach at a deployment stamp level higher than the native cloud resource level and one that is tailored to their workload. This article explores some of those scenarios and the BCDR solutions that best serve them.

Scenario 1: Microservices framework. This form of deployment is preferred when the workload wants to update the various services hosted as API/UI independently of one another over their lifetimes. Usually there are many web applications, and a resource is dedicated to each of them in the form of an app service or a container framework. The code is either deployed directly from source via a pipeline or published to an image that the resource pulls. One of the most important aspects peculiar to this workload is the dependencies between the various applications. When a disaster strikes the entire deployment, the applications will not all work together, even when restored individually in a different region, without re-establishing these links. Take, for example, the private endpoints that provide private connectivity between caller-callee pairs of these services. Sometimes the callee is external to the network and even to the subscription, and the endpoint establishing the connectivity is usually registered manually. There is no single button or pipeline that can recreate the deployment stamp, and certainly none that can replace the manual approval required to commission the private link. Since individual app services maintain their distinctive dependencies and fulfil their own functionality but cannot work without the whole set of app services, it is important to make them exportable and importable via Infrastructure-as-Code (IaC) that takes into account parameters such as subscription, resource group, virtual network, and the prefixes and suffixes in the naming convention, and recreates a stamp, as sketched below.
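
As a rough illustration of that parameterization (the prefixes, workload names and region suffixes below are hypothetical, not taken from any particular template), the stamp only needs the region-specific pieces factored out so that the same definitions can be replayed in another region:

// Hypothetical sketch: derive region-specific resource names for a redeployed stamp
// from a shared naming convention (prefix + workload + resource type + region suffix).
import java.util.Map;

public class StampNaming {
    public static String resourceName(String prefix, String workload, String resourceType, String regionSuffix) {
        return String.join("-", prefix, workload, resourceType, regionSuffix).toLowerCase();
    }

    public static void main(String[] args) {
        // Assumed parameters; in practice these come from the IaC parameter file per subscription and resource group.
        Map<String, String> regions = Map.of("primary", "cus", "secondary", "eus2");
        for (var entry : regions.entrySet()) {
            System.out.println(entry.getKey() + " app service: " + resourceName("contoso", "orders", "app", entry.getValue()));
            System.out.println(entry.getKey() + " vnet: " + resourceName("contoso", "orders", "vnet", entry.getValue()));
        }
    }
}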

The second characteristic of this workload is that it typically involves a diverse set of dependencies and stacks to host its various web applications. There won't be much consistency, so the dependencies could range from a MySQL database server to producing and consuming jobs on a Databricks analytical workspace or an Airflow automation. Consequently, the dependencies must be part of the BCDR story. Since this usually involves data and scripts, both should be migrated to the new instance. Migration and renaming are two pervasive activities for the BCDR of this workload type. Scripts that are registered in a source code repository like GitHub must be pulled and spun into an on-demand triggered or scheduled workflow.

Lastly, the data used by these resources is usually proprietary and territorial in terms of ownership. This implies that the backup and restore of data might have to exist independently and proceed per the consensus with the owner and the users. MySQL data can be transferred to and from another instance via the Azure Database Migration Service, so as to avoid using the mysqldump command line with credentials, or via GitOps with the az command against the database server instance using implicit login. An approach that suits the owner and users can be implemented outside the IaC.

Reference:

Sunday, August 11, 2024

 Find minimum in a rotated sorted array:

class Solution {

    public int findMin(int[] A) {
        if (A == null || A.length == 0) { return Integer.MIN_VALUE; }
        int start = 0;
        int end = A.length - 1;
        while (start < end) {
            int mid = start + (end - start) / 2; // avoids int overflow on large indices

            // check monotonically increasing series
            if (A[start] <= A[end] && A[start] <= A[mid] && A[mid] <= A[end]) { return A[start]; }

            // check if only [start, end] remain
            if (mid == start || mid == end) { if (A[start] < A[end]) return A[start]; else return A[end]; }

            // detect rotation point
            if (A[start] > A[mid]) {
                end = mid;
            } else {
                if (A[mid] > A[mid + 1]) return A[mid + 1];
                start = mid + 1;
            }
        }
        return A[0];
    }
}

Works for:

[0 1 4 4 5 6 7]

[7 0 1 4 4 5 6]

[6 7 0 1 4 4 5]

[5 6 7 0 1 4 4]

[4 5 6 7 0 1 4]

[4 4 5 6 7 0 1]

[1 4 4 5 6 7 0]

[1 0 0 0 0 0 1]
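
A small driver, assuming the Solution class above, to spot-check the rotations listed (each should print 0):

public class FindMinDemo {
    public static void main(String[] args) {
        int[][] tests = {
            {0, 1, 4, 4, 5, 6, 7},
            {7, 0, 1, 4, 4, 5, 6},
            {6, 7, 0, 1, 4, 4, 5},
            {5, 6, 7, 0, 1, 4, 4},
            {4, 5, 6, 7, 0, 1, 4},
            {4, 4, 5, 6, 7, 0, 1},
            {1, 4, 4, 5, 6, 7, 0},
            {1, 0, 0, 0, 0, 0, 1}
        };
        Solution solution = new Solution();
        for (int[] t : tests) {
            // Every rotation above has 0 as its minimum.
            System.out.println(java.util.Arrays.toString(t) + " -> " + solution.findMin(t));
        }
    }
}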



Saturday, August 10, 2024

 A self-organizing map algorithm for scheduling meeting times as availabilities and bookings. A map is a low-dimensional representation of a training sample comprising elements e. It is represented by nodes n. The map is transformed by a regression operation that modifies the nodes' positions one element (e) from the sample at a time. With preferences translating to nodes and availabilities as elements, this allows the map to get a closer match to the sample space with each epoch/iteration.

from sys import argv


import numpy as np


from io_helper import read_xyz, normalize

from neuron import generate_network, get_neighborhood, get_boundary

from distance import select_closest, euclidean_distance, boundary_distance

from plot import plot_network, plot_boundary


def main():

    if len(argv) != 2:

        print("Correct use: python src/main.py <filename>.xyz")

        return -1


    problem = read_xyz(argv[1])


    boundary = som(problem, 100000)


    problem = problem.reindex(boundary)


    distance = boundary_distance(problem)


    print('Boundary found of length {}'.format(distance))



def som(problem, iterations, learning_rate=0.8):

    """Solve the xyz using a Self-Organizing Map."""


    # Obtain the normalized set of timeslots (w/ coord in [0,1])

    timeslots = problem.copy()

    # print(timeslots)

    #timeslots[['X', 'Y', 'Z']] = normalize(timeslots[['X', 'Y', 'Z']])


    # The population size is 8 times the number of timeslots

    n = timeslots.shape[0] * 8


    # Generate an adequate network of neurons:

    network = generate_network(n)

    print('Network of {} neurons created. Starting the iterations:'.format(n))


    for i in range(iterations):

        if not i % 100:

            print('\t> Iteration {}/{}'.format(i, iterations), end="\r")

        # Choose a random timeslot

        timeslot = timeslots.sample(1)[['X', 'Y', 'Z']].values

        winner_idx = select_closest(network, timeslot)

        # Generate a filter that applies changes to the winner's gaussian

        gaussian = get_neighborhood(winner_idx, n//10, network.shape[0])

        # Update the network's weights (closer to the timeslot)

        network += gaussian[:,np.newaxis] * learning_rate * (timeslot - network)

        # Decay the variables

        learning_rate = learning_rate * 0.99997

        n = n * 0.9997


        # Check for plotting interval

        if not i % 1000:

            plot_network(timeslots, network, name='diagrams/{:05d}.png'.format(i))


        # Check if any parameter has completely decayed.

        if n < 1:

            print('Radius has completely decayed, finishing execution',

            'at {} iterations'.format(i))

            break

        if learning_rate < 0.001:

            print('Learning rate has completely decayed, finishing execution',

            'at {} iterations'.format(i))

            break

    else:

        print('Completed {} iterations.'.format(iterations))


    # plot_network(timeslots, network, name='diagrams/final.png')


    boundary = get_boundary(timeslots, network)

    plot_boundary(timeslots, boundary, 'diagrams/boundary.png')

    return boundary


if __name__ == '__main__':

    main()


Reference: 

https://github.com/raja0034/som4drones


#codingexercise

https://1drv.ms/w/s!Ashlm-Nw-wnWhPBaE87l8j0YBv5OFQ?e=uCIAp9


Thursday, August 8, 2024

 This is the Knuth-Morris-Pratt method of string matching

public class KMPMatcher {

    // Knuth-Morris-Pratt matcher: prints every index at which pattern occurs in text.
    public static void kmpMatcher(String text, String pattern) {
        int n = text.length();
        int m = pattern.length();
        int[] prefixes = computePrefixFunction(pattern);
        int matched = 0; // number of pattern characters matched so far

        for (int i = 0; i < n; i++) {
            while (matched > 0 && pattern.charAt(matched) != text.charAt(i)) {
                matched = prefixes[matched - 1]; // fall back to the longest proper prefix that is also a suffix
            }
            if (pattern.charAt(matched) == text.charAt(i)) {
                matched++;
            }
            if (matched == m) {
                System.out.println("Pattern occurs at " + (i - m + 1));
                matched = prefixes[matched - 1]; // keep scanning for further (possibly overlapping) matches
            }
        }
    }

    // prefixes[q] = length of the longest proper prefix of pattern[0..q] that is also a suffix of it.
    public static int[] computePrefixFunction(String pattern) {
        int m = pattern.length();
        int[] prefixes = new int[m];
        int k = 0;
        for (int q = 1; q < m; q++) {
            while (k > 0 && pattern.charAt(k) != pattern.charAt(q)) {
                k = prefixes[k - 1];
            }
            if (pattern.charAt(k) == pattern.charAt(q)) {
                k++;
            }
            prefixes[q] = k;
        }
        return prefixes;
    }
}
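
A quick driver for the matcher above; with overlapping occurrences the prefix table lets the search continue without rescanning the text:

public class KMPMatcherDemo {
    public static void main(String[] args) {
        // Expected output: "Pattern occurs at 0" and "Pattern occurs at 5".
        KMPMatcher.kmpMatcher("ababcaba", "aba");
    }
}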


Reference: for drone data: https://1drv.ms/w/s!Ashlm-Nw-wnWhPFoQ0k-mnjii2Gs3Q?e=cbET9N 


Tuesday, August 6, 2024

 -- Demonstrate dynamic tagging for drone data vectors


USE master;

GO


IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'DroneFleetUser')

BEGIN

CREATE LOGIN DroneFleetUser

WITH PASSWORD = N'LuvDr0ne!',

     CHECK_POLICY = OFF,

CHECK_EXPIRATION = OFF,

DEFAULT_DATABASE = DroneCatalog;

END;

GO


IF NOT EXISTS (SELECT 1 FROM sys.server_principals WHERE name = N'DroneFleetAdmin')

BEGIN

CREATE LOGIN DroneFleetAdmin

WITH PASSWORD = N'LuvDr0neFl@@t!',

     CHECK_POLICY = OFF,

CHECK_EXPIRATION = OFF,

DEFAULT_DATABASE = DroneCatalog;

END;

GO


USE DroneCatalog;

GO


CREATE USER DroneFleetUser FOR LOGIN DroneFleetUser;

GO


CREATE USER DroneFleetAdmin FOR LOGIN DroneFleetAdmin;

GO


ALTER ROLE [Drone Operators] ADD MEMBER DroneFleetUser;

GO


-- Ensure that the policy has been applied

EXEC [Application].Configuration_ApplyDynamicTagging;

GO


-- The function that has been applied is as follows:

--

-- CREATE FUNCTION [Application].DetermineDroneUserAccess(@TeamID int)

-- RETURNS TABLE

-- WITH SCHEMABINDING

-- AS

-- RETURN (SELECT 1 AS AccessResult

--         WHERE IS_ROLEMEMBER(N'db_owner') <> 0

--         OR IS_ROLEMEMBER((SELECT sp.FlightsTerritory

--                           FROM [Application].Teams AS c

--                           INNER JOIN [Application].Fleets AS sp

--                           ON c.FleetID = sp.FleetID

--                           WHERE c.TeamID = @TeamID) + N' Flights') <> 0

--     OR (ORIGINAL_LOGIN() = N'DroneFleetAdmin'

--     AND EXISTS (SELECT 1

--                 FROM [Application].Teams AS c

--         INNER JOIN [Application].Fleets AS sp

--         ON c.FleetID = sp.FleetID

--         WHERE c.TeamID = @TeamID

--         AND sp.FlightsTerritory = SESSION_CONTEXT(N'FlightsTerritory'))));

-- GO


-- The security policy that has been applied is as follows:

--

-- CREATE SECURITY POLICY [Application].FilterDroneUsersByFlightsTerritoryRole

-- ADD FILTER PREDICATE [Application].DetermineDroneUserAccess(DeliveryTeamID)

-- ON Flights.DroneUsers,

-- ADD BLOCK PREDICATE [Application].DetermineDroneUserAccess(DeliveryTeamID)

-- ON Flights.DroneUsers AFTER UPDATE;

-- GO


SELECT * FROM sys.database_principals; -- note the role for Pacific and the user for Pacific

GO


SELECT * FROM Flights.DroneUsers; -- and note count

GO


GRANT SELECT, UPDATE ON Flights.DroneUsers TO [Drone Operators];

GRANT SELECT ON [Application].Teams TO [Drone Operators];

GRANT SELECT ON [Application].Fleets TO [Drone Operators];

GRANT SELECT ON [Application].Inventories TO [Drone Operators];

GO


-- impersonate the user DroneFleetUser

EXECUTE AS USER = 'DroneFleetUser';

GO


-- Now note the count and which rows are returned

-- even though we have not changed the command


SELECT * FROM Flights.DroneUsers;

GO


-- where are those drones?

-- note the spatial results tab


SELECT c.Border

FROM [Application].Inventories AS c

WHERE c.InventoryName = N'Northwest'

UNION ALL

SELECT c.DeliveryLocation

FROM Flights.DroneUsers AS c

GO


-----------------------------------------------------------------------

-- updating rows that are accessible to a non-accessible row is blocked

-----------------------------------------------------------------------

DECLARE @DroneFleetDroneUserID INT

DECLARE @NonDroneFleetTeamID INT


-- pick a drone in the Pacific flights territory

SELECT TOP 1 @DroneFleetDroneUserID=c.DroneUserID

FROM Flights.DroneUsers c JOIN Application.Teams ci ON c.DeliveryTeamID=ci.TeamID

JOIN Application.Fleets sp ON ci.FleetID=sp.FleetID

WHERE sp.FlightsTerritory=N'Pacific'


-- pick a Team outside of the Pacific flights territory

SELECT @NonDroneFleetTeamID=c.TeamID

FROM Application.Teams c JOIN Application.Fleets sp ON c.FleetID=sp.FleetID

WHERE TeamName=N'Seattle' AND sp.FleetCode=N'WA'


UPDATE Flights.DroneUsers                    -- Attempt to update

SET DeliveryTeamID = @NonDroneFleetTeamID -- to a team that is not in the Drone Operators Territory

WHERE DroneUserID = @DroneFleetDroneUserID; -- for a drone that is in the Drone Operators Territory

GO


-- revert the impersonation

REVERT;

GO


-- Remove the user from the role

ALTER ROLE [Drone Operators] DROP MEMBER DroneFleetUser;

GO


-- Instead of permission for a role, let's give permissions to the website user

GRANT SELECT, UPDATE ON Flights.DroneUsers TO [DroneFleetAdmin];

GRANT SELECT ON [Application].Teams TO [DroneFleetAdmin];

GRANT SELECT ON [Application].Inventories TO [DroneFleetAdmin];

GO



-- Finally, tidy up (optional)

/*

REVOKE SELECT, UPDATE ON Flights.DroneUsers FROM [Drone Operators];

REVOKE SELECT ON [Application].Teams FROM [Drone Operators];

REVOKE SELECT ON [Application].Inventories FROM [Drone Operators];

REVOKE SELECT, UPDATE ON Flights.DroneUsers FROM [DroneFleetAdmin];

REVOKE SELECT ON [Application].Teams FROM [DroneFleetAdmin];

REVOKE SELECT ON [Application].Inventories FROM [DroneFleetAdmin];

GO


DROP USER DroneFleetUser;

GO


DROP USER DroneFleetAdmin;

GO


USE master;

GO


DROP LOGIN DroneFleetUser;

GO


DROP LOGIN DroneFleetAdmin;

GO
*/


-- Reference: DroneData: https://1drv.ms/w/s!Ashlm-Nw-wnWhPJAFzVxJMWI2f_eKw?e=BDtnPM 

#codingexercise 

https://1drv.ms/w/s!Ashlm-Nw-wnWhM0bmlY_ggTBTNTYxQ?e=K8GuKL