Friday, August 16, 2024

 This is a summary of the book “Better Business Speech” by Paul Geiger, published by Rowman & Littlefield in 2017. Geiger is a voice coach and public-speaking expert who offers confidence-boosting tutorials on speech preparation, including vocalization and breathing exercises. His techniques, tips, and shortcuts apply to public speaking in general but are all the more pertinent to the workplace. He suggests keeping the message short, controlling breathing, preparing and connecting with the audience, presenting one's ideas by drawing attention to what drives results, avoiding presentation traps and challenges, balancing focus with a slower pace, and ultimately gaining trust. One can even listen to one's own voice to fix what sounds jarring or offbeat. Our breath is what pours power into our presentation.

Great public speaking requires controlled breathing and a concise message. Being authentic and physically and mentally ready to contribute is crucial for delivering a confident and poised speech. Techniques to achieve this include preparing and connecting in meetings, maintaining eye contact, and creating a memorable slogan.

Before speaking, prepare yourself by composing comments, taking deep breaths, and standing tall. Make concise, vivid statements to command attention and avoid off-topic points. Channel the energy of attention, maintaining eye contact 80% of the time while listening and 50% during speaking.

Speak slowly and deliberately to demonstrate confidence and steadfastness. Create a memorable slogan that succinctly summarizes your main point, allowing you to connect with your audience. Create your slogan by brainstorming, interviewing yourself, and being bold and brief. By following these techniques, you can deliver a speech with confidence, poise, and composure.

To avoid presentation traps and challenges, focus, practice, slow down, and keep it short. Limit your presentation to three main points and rely on your slogan for clarity. Avoid speeding up and keep it concise to connect with your audience. Master your presentation by rehearsing, thinking on your feet, and polishing slogans. Stay focused by speaking with deliberation, being authentic, and rehearsing out loud.

Build trust during sales calls by discovering common interests, watching body language, and maintaining a warm expression. Focus on responses, avoid overly enthusiastic or fake responses, and be yourself. Trustworthiness is essential for making a sale, and building trust during sales meetings can be achieved through research, personalized responses, and careful body language. Remember to commit to your words and not play it safe.

To improve sales results, learn and recognize the steps of a proper sales presentation. The persuasion process should include liveliness, precision, security, assuredness, progression, and influence. Listen to your voice and address any discomfort. Rapid speakers may mistakenly associate fast speech with intelligence or excitement, but it can draw negative feedback. Fast speech can come across as patronizing, domineering, or lacking control. Factors contributing to hurried speech include discomfort, lack of breath control, and poor body language. Adequate oxygen helps manage the pace of your speech, and good body language makes speaking a full-body experience. Other issues may also hamper public speaking, such as a thin, soft, or faint voice, nasal or brash tones, stuttering, or a delivery that conveys confusion or insecurity. Addressing these issues can help you create a more persuasive presentation and improve sales results.

To improve vocal skills, speakers can practice mindfulness and deliberate speech through daily conversations, deep breathing exercises, diaphragm use, and vowel combinations. Vocalization exercises can relax the lower face, slow speech, and regulate breathing. Exercises can also help cure extreme nasal tones, improve tonal qualities, and help with speech preparation. Additionally, practicing and preparing presentations can help overcome vocal problems such as rapid speech, spiking tones, and uneven delivery.

SummarizerCodeSnippets.docx: https://1drv.ms/w/s!Ashlm-Nw-wnWhOYMyD1A8aq_fBqraA?e=9G8HD9



Thursday, August 15, 2024

 Workload #2: An Azure Kubernetes Service instance suits the rehosting of on-premises apps and services more than the restructuring that workload #1 serves. In this case there is more consolidation, and there is also a significant encumbrance from the now so-called “traditional” way of hosting applications: the logic that has become part of the kube-apiserver, and the data and state saved to persistent volume claims, must now become part of the BCDR protection. A sample architecture that serves this workload is shown in the diagram below:



In this case, BCDR can follow the best-practice patterns called out for the AKS resource type.

Wednesday, August 14, 2024

Understanding Workloads for business continuity and disaster recovery (aka BCDR), continued

 One aspect that is not often called out is that these app services must be protected by a web application firewall that conforms to the OWASP specifications. This is addressed with the use of an Application Gateway or Front Door. With slight differences between the two, both can be leveraged to switch traffic to an alternate deployment, but only one of them is preferred for switching to a different region. Front Door can register a unique domain per backend pool member, so that the application receives all traffic addressed to the domain at the root “/” path as if it were sent to it directly. It also comes with the ability to switch regions, such as between Central US and East US 2. Application Gateway, on the other hand, is essentially regional, with one instance per region. Both can be confined to a region by directing all traffic between their frontends and backends through the same virtual network. Networking infrastructure is probably the biggest up-front investment for BCDR planning, because each virtual network is specific to a region. Having the network up and running allows resources to be created on demand, so that the entire deployment for another region can be created only when needed. As such, an Azure Application Gateway or Front Door must be considered part of the workload, along with the other app services, and planned for migration.
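As an illustrative sketch only (the pairing table and region names below are assumptions drawn from the Central US / East US 2 example, not prescriptions), failover automation can encode its region pairs so that the alternate deployment is resolved deterministically:

```python
# Hypothetical region pairing for failover, e.g. Central US <-> East US 2.
REGION_PAIRS = {"centralus": "eastus2", "eastus2": "centralus"}

def failover_region(current):
    """Return the paired region to which traffic should be switched."""
    if current not in REGION_PAIRS:
        raise ValueError("no failover pair configured for " + current)
    return REGION_PAIRS[current]

print(failover_region("centralus"))  # eastus2
```

Keeping the pairing in one table, rather than scattering region names across pipelines, makes the switch auditable and easy to rehearse.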

Workload #3: Analytical workspaces. As with most data-science efforts, some workspaces require interactive deployment while others can be scheduled to run non-interactively. Examples include Azure Databricks and Azure Machine Learning. One characteristic of this kind of workload is that each workspace is a veritable ecosystem by itself, one that relies heavily on compute and externalizes storage. Many workspaces come with external storage accounts, databases, and Snowflake warehouses. Another characteristic is that these resources often require both public and private plane connectivity, so a workspace created in another region must re-establish connectivity to all dependencies, including but not limited to private and public source depots, container image repositories, and external databases and warehouses; and because these dependencies live in different virtual networks, new private endpoints from those virtual networks become necessary. Just like AKS, the previous workload discussed above, manifesting all the dependencies accrued over time can be difficult when they are not captured in IaC. More importantly, the workspace uses a diverse set of artifacts (experiments, models, jobs, and pipelines) that may live as objects in the workspace's catalog, but importing and exporting those objects to another workspace may not pan out the way IaC does. With a diverse and distinct set of notebooks from different users and their associated dependencies, even listing them can be hard, much less migrating them to a new region. Users can only be encouraged to leverage the Unity Catalog and to keep all artifacts under version control external to the workspace, though these lack the rigor of databases. That said, spinning up a new workspace and re-connecting the different data stores gives users a way to be selective in what they bring to the new workspace.
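Since dependencies accrue outside IaC, one low-tech mitigation is to keep an explicit inventory next to the IaC so a new-region workspace knows what to re-connect. A minimal sketch, with hypothetical resource names:

```python
# Hypothetical inventory of a workspace's external dependencies, kept
# alongside IaC. All names are illustrative, not from a real deployment.
workspace_inventory = {
    "storage_accounts": ["adlsmain"],
    "databases": ["ordersdb"],
    "warehouses": ["snowflake-analytics"],
    "image_repositories": ["acr-shared"],
    "private_endpoints": ["pe-adlsmain", "pe-ordersdb"],
}

def reconnect_plan(inventory):
    """Flatten the inventory into the list of dependencies a
    new-region workspace must re-establish."""
    return [item for deps in inventory.values() for item in deps]

for dep in reconnect_plan(workspace_inventory):
    print(dep)
```

Such a checklist does not replace IaC, but it captures the connectivity that IaC alone cannot recreate, such as manually approved private endpoints.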

Tuesday, August 13, 2024

 Problem Statement: A 0-indexed integer array nums is given.

You may swap any two adjacent elements of nums.

A valid array meets the following conditions:

The largest element (any of the largest elements if there are multiple) is at the rightmost position in the array.

The smallest element (any of the smallest elements if there are multiple) is at the leftmost position in the array.

Return the minimum swaps required to make nums a valid array.

 

Example 1:

Input: nums = [3,4,5,5,3,1]

Output: 6

Explanation: Perform the following swaps:

- Swap 1: Swap the 3rd and 4th elements, nums is then [3,4,5,3,5,1].

- Swap 2: Swap the 4th and 5th elements, nums is then [3,4,5,3,1,5].

- Swap 3: Swap the 3rd and 4th elements, nums is then [3,4,5,1,3,5].

- Swap 4: Swap the 2nd and 3rd elements, nums is then [3,4,1,5,3,5].

- Swap 5: Swap the 1st and 2nd elements, nums is then [3,1,4,5,3,5].

- Swap 6: Swap the 0th and 1st elements, nums is then [1,3,4,5,3,5].

It can be shown that 6 swaps is the minimum swaps required to make a valid array.

Example 2:

Input: nums = [9]

Output: 0

Explanation: The array is already valid, so we return 0.

 

Constraints:

1 <= nums.length <= 10^5

1 <= nums[i] <= 10^5

Solution: 

import java.util.Arrays;
import java.util.stream.Collectors;

class Solution {
    public int minimumSwaps(int[] nums) {
        int min = Arrays.stream(nums).min().getAsInt();
        int max = Arrays.stream(nums).max().getAsInt();
        int count = 0;
        // The array is valid only when the first element is a minimum
        // and the last element is a maximum, so continue while either
        // end is not yet in place.
        while (nums[0] != min || nums[nums.length - 1] != max) {
            // Bubble the rightmost maximum to the last position.
            var numsList = Arrays.stream(nums).boxed().collect(Collectors.toList());
            int end = numsList.lastIndexOf(max);
            for (int i = end; i < nums.length - 1; i++) {
                swap(nums, i, i + 1);
                count++;
            }
            // Bubble the leftmost minimum to the first position.
            numsList = Arrays.stream(nums).boxed().collect(Collectors.toList());
            int start = numsList.indexOf(min);
            for (int j = start; j >= 1; j--) {
                swap(nums, j, j - 1);
                count++;
            }
        }
        return count;
    }

    public void swap(int[] nums, int i, int j) {
        int temp = nums[j];
        nums[j] = nums[i];
        nums[i] = temp;
    }
}


Input

nums =

[3,4,5,5,3,1]

Output

6

Expected

6


Input

nums =

[9]

Output

0

Expected

0
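The simulation above counts every adjacent swap explicitly; the same answer can also be computed in O(n) directly from the positions of the extremes. A minimal sketch in Python (the function name minimum_swaps is illustrative):

```python
def minimum_swaps(nums):
    # Index of the first occurrence of the minimum and the
    # last occurrence of the maximum.
    lo = nums.index(min(nums))
    hi = len(nums) - 1 - nums[::-1].index(max(nums))
    # Moving the minimum to the front costs lo swaps and moving the
    # maximum to the back costs len(nums) - 1 - hi swaps; if the
    # minimum starts to the right of the maximum, their paths cross
    # and one swap is shared.
    swaps = lo + (len(nums) - 1 - hi)
    return swaps - 1 if lo > hi else swaps

print(minimum_swaps([3, 4, 5, 5, 3, 1]))  # 6
print(minimum_swaps([9]))                 # 0
```

For [3,4,5,5,3,1] the minimum sits at index 5 and the last maximum at index 3, giving 5 + 2 swaps, minus 1 for the crossing, which matches the 6 swaps traced in Example 1.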


Monday, August 12, 2024

 Understanding Workloads for business continuity and disaster recovery (aka BCDR).

The Azure public cloud provides native capabilities for business continuity and disaster recovery, some of which are built into the features of the resource types used by the workload. Aside from features within the resource type that reduce RTO/RPO (for a discussion of the terms used throughout the BCDR literature, please see the references), there are dedicated resources such as Azure Backup, Azure Site Recovery, and various data migration services such as Azure Data Factory and Azure Database Migration Service, which provide a wizard for configuring BCDR policies, usually in a set-and-forget way. Finally, customizations are possible beyond the features of the resource types and BCDR resources, and these can be maintained with Azure DevOps.

Organizations may find that they can be more efficient and cost-effective by taking a coarser approach at a deployment stamp level higher than the native cloud resource level and one that is tailored to their workload. This article explores some of those scenarios and the BCDR solutions that best serve them.

Scenario 1: Microservices framework. This form of deployment is preferred when the workload wants to update the various services hosted as API/UI independently of one another over their lifetimes. Usually there are many web applications, and a resource is dedicated to each of them in the form of an app service or a container framework. The code is either deployed via a pipeline directly as source code or published to an image that the resource pulls. One of the most important aspects peculiar to this workload is the dependencies between the various applications. When a disaster strikes the entire deployment, the services will not all work together, even when restored individually in a different region, without reestablishing these links. Take, for example, the private endpoints that provide private connectivity between caller-callee pairs of these services. Sometimes the callee is external to the network and even to the subscription, and usually the endpoint establishing the connectivity is manually registered. There is no single button or pipeline that can recreate the deployment stamp, and certainly none that can replace the manual approval required to commission the private link. Since individual app services maintain their distinctive dependencies and fulfil their own functionality, yet cannot work without the whole set of app services, it is important to make them exportable and importable via Infrastructure-as-Code (aka IaC) that takes into account parameters such as subscription, resource group, virtual network, and the prefixes and suffixes of the naming convention, and recreates a stamp.
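The parameterization described above can be sketched minimally. The naming convention, prefix, and regions below are hypothetical, standing in for whatever convention the IaC actually encodes:

```python
# Hypothetical sketch: deriving per-region resource names for a
# deployment stamp, so exported IaC can be re-applied in a new region
# with region-specific parameters. All names here are illustrative.
def stamp_names(prefix, region, services):
    """Derive resource names from a shared naming convention."""
    short = region.replace(" ", "").lower()  # e.g. "East US 2" -> "eastus2"
    return {svc: f"{prefix}-{svc}-{short}" for svc in services}

primary = stamp_names("contoso", "Central US", ["web", "api", "db"])
failover = stamp_names("contoso", "East US 2", ["web", "api", "db"])
print(failover["api"])  # contoso-api-eastus2
```

Driving every name through one function keeps the primary and failover stamps structurally identical, so the IaC only varies in its parameters.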

The second characteristic of this workload is that it typically involves a diverse set of dependencies and stacks to host its various web applications. There won't be much consistency, so the dependencies could range from a MySQL database server to producing and consuming jobs on a Databricks analytical workspace or an Airflow automation. Consequently, the dependencies must be part of the BCDR story. Since this usually involves data and scripts, they should be migrated to the new instance. Migration and renaming are two pervasive activities in the BCDR of this workload type. Scripts registered in a source code repository like GitHub must be pulled and spun into an on-demand, triggered, or scheduled workflow.

Lastly, the data used by these resources is usually proprietary and territorial in terms of ownership. This implies that the backup and restore of data might have to exist independently, as per the consensus between the owner and the users. MySQL data can be transferred to and from another instance via the Azure Database Migration Service, avoiding the use of the mysqldump command line with credentials, or via GitOps with az commands issued to the database server instance using implicit login. An approach that suits the owner and users can be implemented outside the IaC.


Sunday, August 11, 2024

 Find minimum in a rotated sorted array:

class Solution {
    public int findMin(int[] A) {
        if (A == null || A.length == 0) { return Integer.MIN_VALUE; }
        int start = 0;
        int end = A.length - 1;
        while (start < end) {
            int mid = (start + end) / 2;

            // check monotonically increasing series
            if (A[start] <= A[end] && A[start] <= A[mid] && A[mid] <= A[end]) {
                return A[start];
            }

            // check if only [start, end] remains
            if (mid == start || mid == end) {
                return Math.min(A[start], A[end]);
            }

            // detect rotation point
            if (A[start] > A[mid]) {
                end = mid;
            } else {
                if (A[mid] > A[mid + 1]) return A[mid + 1];
                start = mid + 1;
            }
        }
        return A[0];
    }
}

Works for:

[0 1 4 4 5 6 7]

[7 0 1 4 4 5 6]

[6 7 0 1 4 4 5]

[5 6 7 0 1 4 4]

[4 5 6 7 0 1 4]

[4 4 5 6 7 0 1]

[1 4 4 5 6 7 0]

[1 0 0 0 0 0 1]
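For comparison, the same rotation-point binary search can be sketched in Python and checked against the arrays listed above (the helper name find_min is illustrative):

```python
def find_min(a):
    # Binary search for the rotation point of a rotated sorted array.
    if not a:
        return None
    start, end = 0, len(a) - 1
    while start < end:
        mid = (start + end) // 2
        # Monotonically increasing span: the first element is the minimum.
        if a[start] <= a[mid] <= a[end]:
            return a[start]
        # Only [start, end] remains to compare.
        if mid in (start, end):
            return min(a[start], a[end])
        # Detect which half holds the rotation point.
        if a[start] > a[mid]:
            end = mid
        else:
            if a[mid] > a[mid + 1]:
                return a[mid + 1]
            start = mid + 1
    return a[0]

tests = [[0,1,4,4,5,6,7], [7,0,1,4,4,5,6], [6,7,0,1,4,4,5],
         [5,6,7,0,1,4,4], [4,5,6,7,0,1,4], [4,4,5,6,7,0,1],
         [1,4,4,5,6,7,0], [1,0,0,0,0,0,1]]
for t in tests:
    assert find_min(t) == min(t)
```

The assertion loop exercises every rotation listed above, including the duplicate-heavy [1,0,0,0,0,0,1] case.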



Saturday, August 10, 2024

 A self-organizing map algorithm for scheduling meeting times as availabilities and bookings. A map is a low-dimensional representation of a training sample comprising elements e, and it is represented by nodes n. The map is transformed by a regression operation that modifies the nodes' positions one element of the model (e) at a time. With preferences translating to nodes and availabilities to elements, the map draws closer to the sample space with each epoch/iteration.

from sys import argv

import numpy as np

from io_helper import read_xyz, normalize
from neuron import generate_network, get_neighborhood, get_boundary
from distance import select_closest, euclidean_distance, boundary_distance
from plot import plot_network, plot_boundary


def main():
    if len(argv) != 2:
        print("Correct use: python src/main.py <filename>.xyz")
        return -1

    problem = read_xyz(argv[1])
    boundary = som(problem, 100000)
    problem = problem.reindex(boundary)
    distance = boundary_distance(problem)
    print('Boundary found of length {}'.format(distance))


def som(problem, iterations, learning_rate=0.8):
    """Solve the xyz using a Self-Organizing Map."""

    # Obtain the normalized set of timeslots (w/ coord in [0,1])
    timeslots = problem.copy()
    # timeslots[['X', 'Y', 'Z']] = normalize(timeslots[['X', 'Y', 'Z']])

    # The population size is 8 times the number of timeslots
    n = timeslots.shape[0] * 8

    # Generate an adequate network of neurons:
    network = generate_network(n)
    print('Network of {} neurons created. Starting the iterations:'.format(n))

    for i in range(iterations):
        if not i % 100:
            print('\t> Iteration {}/{}'.format(i, iterations), end="\r")
        # Choose a random timeslot
        timeslot = timeslots.sample(1)[['X', 'Y', 'Z']].values
        winner_idx = select_closest(network, timeslot)
        # Generate a filter that applies changes to the winner's gaussian
        gaussian = get_neighborhood(winner_idx, n//10, network.shape[0])
        # Update the network's weights (closer to the timeslot)
        network += gaussian[:,np.newaxis] * learning_rate * (timeslot - network)
        # Decay the variables
        learning_rate = learning_rate * 0.99997
        n = n * 0.9997

        # Check for plotting interval
        if not i % 1000:
            plot_network(timeslots, network, name='diagrams/{:05d}.png'.format(i))

        # Check if any parameter has completely decayed.
        if n < 1:
            print('Radius has completely decayed, finishing execution',
                  'at {} iterations'.format(i))
            break
        if learning_rate < 0.001:
            print('Learning rate has completely decayed, finishing execution',
                  'at {} iterations'.format(i))
            break
    else:
        print('Completed {} iterations.'.format(iterations))

    # plot_network(timeslots, network, name='diagrams/final.png')

    boundary = get_boundary(timeslots, network)
    plot_boundary(timeslots, boundary, 'diagrams/boundary.png')
    return boundary


if __name__ == '__main__':
    main()


Reference: 

https://github.com/raja0034/som4drones


#codingexercise

https://1drv.ms/w/s!Ashlm-Nw-wnWhPBaE87l8j0YBv5OFQ?e=uCIAp9