Thursday, July 6, 2023

 

Problem Statement: A 0-indexed integer array nums is given.

You are allowed to swap any two adjacent elements of nums.

A valid array meets the following conditions:

·       The largest element (any of the largest elements if there are multiple) is at the rightmost position in the array.

·       The smallest element (any of the smallest elements if there are multiple) is at the leftmost position in the array.

Return the minimum number of swaps required to make nums a valid array.

 

Example 1:

Input: nums = [3,4,5,5,3,1]

Output: 6

Explanation: Perform the following swaps:

- Swap 1: Swap the 3rd and 4th elements, nums is then [3,4,5,3,5,1].

- Swap 2: Swap the 4th and 5th elements, nums is then [3,4,5,3,1,5].

- Swap 3: Swap the 3rd and 4th elements, nums is then [3,4,5,1,3,5].

- Swap 4: Swap the 2nd and 3rd elements, nums is then [3,4,1,5,3,5].

- Swap 5: Swap the 1st and 2nd elements, nums is then [3,1,4,5,3,5].

- Swap 6: Swap the 0th and 1st elements, nums is then [1,3,4,5,3,5].

It can be shown that 6 is the minimum number of swaps required to make the array valid.

Example 2:

Input: nums = [9]

Output: 0

Explanation: The array is already valid, so we return 0.

 

Constraints:

·         1 <= nums.length <= 10^5

·         1 <= nums[i] <= 10^5

Solution:

import java.util.Arrays;
import java.util.stream.Collectors;

class Solution {

    public int minimumSwaps(int[] nums) {

        int min = Arrays.stream(nums).min().getAsInt();

        int max = Arrays.stream(nums).max().getAsInt();

        int count = 0;

        // Loop until the smallest value is first and the largest is last; one pass of the body fixes both.
        while (nums[0] != min || nums[nums.length-1] != max) {

            var numsList = Arrays.stream(nums).boxed().collect(Collectors.toList());

            var end = numsList.lastIndexOf(max);

            for (int i = end; i < nums.length-1; i++) {

                swap(nums, i, i+1);

                count++;

            }

 

            numsList = Arrays.stream(nums).boxed().collect(Collectors.toList());

            var start = numsList.indexOf(min);

            for (int j = start; j >= 1; j--) {

                swap(nums, j, j-1);

                count++;

            }

        }

 

        return count;

    }

 

    public void swap (int[] nums, int i, int j) {

        int temp = nums[j];

        nums[j] = nums[i];

        nums[i] = temp;

    }

}

 

Input

nums =

[3,4,5,5,3,1]

Output

6

Expected

6

 

Input

nums =

[9]

Output

0

Expected

0
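Since the loop above only ever moves one maximum to the end and one minimum to the front, the count can also be computed directly in O(n) without mutating the array. A minimal sketch of that idea (the class name MinimumSwapsDirect is just for illustration):

```java
import java.util.Arrays;

class MinimumSwapsDirect {
    // Move the last occurrence of the max right and the first occurrence of the
    // min left; if the min sits to the right of the max, their paths cross and
    // one swap does double duty, so we subtract 1.
    static int minimumSwaps(int[] nums) {
        int n = nums.length;
        int minIdx = 0, maxIdx = 0;
        for (int i = 0; i < n; i++) {
            if (nums[i] < nums[minIdx]) minIdx = i;   // first (leftmost) minimum
            if (nums[i] >= nums[maxIdx]) maxIdx = i;  // last (rightmost) maximum
        }
        int swaps = minIdx + (n - 1 - maxIdx);
        return minIdx > maxIdx ? swaps - 1 : swaps;
    }

    public static void main(String[] args) {
        System.out.println(minimumSwaps(new int[]{3, 4, 5, 5, 3, 1})); // 6
        System.out.println(minimumSwaps(new int[]{9}));                // 0
    }
}
```

This agrees with the simulation on both examples above while avoiding the repeated list conversions.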

 

 

Tuesday, July 4, 2023

 

Fixing path-based routing in Application Gateways:

The Azure Application Gateway is a resource that can consolidate web traffic to a diverse set of resources such as App Services and Function Apps. When there are multiple resources, the traffic can be routed independently to each of them. Typically, a custom probe is created to test these independent traffic flows. When a custom probe is created, the most frequent and sometimes frustrating error response is a 404 Not Found status code, even when a direct request to the resource returns a successful 200 status code. This article explains how to configure the listener, the routing rules, the backend pool members, and the backend HTTP setting so that each resource returns the same successful response through the gateway as it would when reached directly.

One of the harder concepts to visualize is that the traffic flowing to a backend pool member through the gateway is not pass-through traffic, even when TLS is configured end-to-end. The client-to-gateway and gateway-to-backend portions of the flow are initiated separately: the client initiates the first, and the gateway initiates the second, even though the request body and certain parameters are preserved so that the data propagates through the gateway to the backend pool member. Throughout this article, we will assume that this break in the flow is seamless and invisible to the user and that it occurs over HTTPS, to cover the more general case of targeting a resource via the gateway as if it were targeted directly. Add a certificate, self-signed or CA-signed, to enable the client to connect over HTTPS.

The first step is configuring the listener properly. A listener listens to a specific combination of IP address and port. There can be only as many probes as there are listeners, so we can start with one listener that accepts HTTPS traffic at the gateway's front-end public IP address. Since each App Service and Function App has a default public endpoint, it is important that the listener be configured as multi-site rather than basic. Since all App Services and Function Apps have endpoints with the “azurewebsites.net” suffix, at least one hostname the listener must listen for is “*.azurewebsites.net”.

The next step is to configure the backend pool members. Path-based routing can route the traffic consolidated by the listener to various backend pool members. Note that we use the word route rather than redirect because, while HTTP/S redirection is possible, here we are merely assigning the traffic to different destinations based on the relative paths mentioned in a routing rule that we will configure next. The prerequisite to differentiating the targets for that purpose is to separate out multiple backend pools, preferably one for each target if they are hosted on different endpoints.

The step after that is to configure the backend setting. This setting specifies the port, and it is best to pick the hostname from the backend pool. It requires a certificate for the data transfer between the gateway and the backend target, so it can take a public key certificate downloaded from the trusted publisher of azurewebsites.net certificates.

The step after that is to configure the route. This step asks for the listener as well as the target. When we specify path-based routing, every relative path has its own target and HTTP setting. Since the HTTP setting is specific to the port, and all backend pool members in this case leverage the same HTTPS port, they can share the same backend setting while the targets differ for each path-based route. There is also a default case to specify.

With this configuration, when the probe is created to reach the listener at the root path, each flow to a target is routed independently. This is demonstrated below.



Reference: https://gw-temp-2.booksonsoftware.com

curl -i -k -H "Host: fn-app-temp-1.azurewebsites.net" "https://gw-temp-2.booksonsoftware.com/api/HttpExample/?name=Ravi" > root.html

curl -i -k -H "Host: web-app-temp-1.azurewebsites.net" "https://gw-temp-2.booksonsoftware.com/web/" > root.html

C:\Users\ravib.DESKTOP-1K6OB4E>curl -i -k -H "Host: fn-app-temp-1.azurewebsites.net" "https://gw-temp-2.booksonsoftware.com/api/HttpExample/?name=Ravi"

HTTP/1.1 200 OK

Date: Tue, 04 Jul 2023 02:18:13 GMT

Content-Type: text/plain; charset=utf-8

Transfer-Encoding: chunked

Connection: keep-alive

Request-Context: appId=cid-v1:78e72797-a318-46f1-8401-29719dbd5478

 

Hello, Ravi. This HTTP triggered function executed successfully.

Monday, July 3, 2023

            

Bidding is inherently a form of game with tangible rewards that has become as important as shopping and eCommerce. This software makes it easy for agencies to build auction campaigns. This form of human interaction is one of the most engaging, and often more fulfilling than video games and reward points, by virtue of the products sold. In fact, the revenue from the proceeds of an auction and the benefits of an ecosystem around a software offering are as significant as eCommerce, with the exception that shopping is inherently personal while bidding is inherently social. Bidding is widely recognized as a significant contributor to increasing morale, both for employees of an organization and for individual shoppers seeking alternatives that are not usually covered by seller platforms. Also, products can be sourced to appeal to the demographics targeted by the software. Bids have been another factor in improving customer endearment and loyalty. This article explores how a platform can bring the best practices of cloud solutions to the bidding industry to make it more local and organic, with the facility to make it more interesting. The technical solution involves a cloud-based offering comprising publishers and subscribers, a ledger, and a set of routines to support programmability, scripting, and a user interface.

Most software makers are focused on trying to lead the market and improve the value offerings of their products to businesses and individuals. Bid point accumulation and redemption services in favor of purchases are delegated to companies that develop and integrate these services for organizations and their employees. Unlike reward points for loyalty, auctions are more engaging and more satisfying than redeeming or shopping with reward points. The bid points can also be offered as products for purchase or bidding. Commercial applications for bidding are not a new concept, as demonstrated by features of eCommerce companies and dedicated applications like DealDash, but taking the concept directly to small and medium businesses or the common person is realized by virtue of repeatable infrastructure or multitenant solutions, and that is the objective of this proposal. End-users are not required to use a specific product, and their usage of that product is not collected to determine their bid point grants. The market is full of companies that excel in certain segments of the ordering and fulfilment of reward points or provide digital redemption services, but none focus on developing a software development kit that can integrate with businesses and organizations to engage their employees, customers, and end-users. The development of a technical solution and its branding provides immense opportunities for revenue generation.

Sunday, July 2, 2023

 Problem Statement: A 0-indexed integer array nums is given.

You are allowed to swap any two adjacent elements of nums.

A valid array meets the following conditions:

·        The largest element (any of the largest elements if there are multiple) is at the rightmost position in the array.

·        The smallest element (any of the smallest elements if there are multiple) is at the leftmost position in the array.

Return the minimum number of swaps required to make nums a valid array.

 

Example 1:

Input: nums = [3,4,5,5,3,1]

Output: 6

Explanation: Perform the following swaps:

- Swap 1: Swap the 3rd and 4th elements, nums is then [3,4,5,3,5,1].

- Swap 2: Swap the 4th and 5th elements, nums is then [3,4,5,3,1,5].

- Swap 3: Swap the 3rd and 4th elements, nums is then [3,4,5,1,3,5].

- Swap 4: Swap the 2nd and 3rd elements, nums is then [3,4,1,5,3,5].

- Swap 5: Swap the 1st and 2nd elements, nums is then [3,1,4,5,3,5].

- Swap 6: Swap the 0th and 1st elements, nums is then [1,3,4,5,3,5].

It can be shown that 6 is the minimum number of swaps required to make the array valid.

Example 2:

Input: nums = [9]

Output: 0

Explanation: The array is already valid, so we return 0.

 

Constraints:

·         1 <= nums.length <= 10^5

·         1 <= nums[i] <= 10^5

Solution:

import java.util.Arrays;
import java.util.stream.Collectors;

class Solution {

    public int minimumSwaps(int[] nums) {

        int min = Arrays.stream(nums).min().getAsInt();

        int max = Arrays.stream(nums).max().getAsInt();

        int count = 0;

        // Loop until the smallest value is first and the largest is last; one pass of the body fixes both.
        while (nums[0] != min || nums[nums.length-1] != max) {

            var numsList = Arrays.stream(nums).boxed().collect(Collectors.toList());

            var end = numsList.lastIndexOf(max);

            for (int i = end; i < nums.length-1; i++) {

                swap(nums, i, i+1);

                count++;

            }

 

            numsList = Arrays.stream(nums).boxed().collect(Collectors.toList());

            var start = numsList.indexOf(min);

            for (int j = start; j >= 1; j--) {

                swap(nums, j, j-1);

                count++;

            }

        }

 

        return count;

    }

 

    public void swap (int[] nums, int i, int j) {

        int temp = nums[j];

        nums[j] = nums[i];

        nums[i] = temp;

    }

}

 

Input

nums =

[3,4,5,5,3,1]

Output

6

Expected

6

 

Input

nums =

[9]

Output

0

Expected

0

 

Saturday, July 1, 2023

Problem statement: There is a growing need for dynamic, reliable, and repeatable infrastructure as the scope expands from small-footprint deployments to cloud scale. Some of the manual approaches and management practices cannot keep up. There are two popular ways to meet these demands on the Azure public cloud: Terraform and ARM templates. This article compares these two frameworks and their use cases. Specifically, we include a use case for DevSecOps and its applicability to the development and operation of trustworthy infrastructure-as-code.

Terraform is universally extendable through providers that furnish IaC for resource types. It is a one-stop shop for any infrastructure, service, and application configuration. It can handle complex order-of-operations and the composability of individual resources and encapsulated models. It is also backed by an open-source community for many providers and their modules, with public documentation and examples. Microsoft also works directly with HashiCorp, the maker of Terraform, on building and maintaining related providers, and this partnership has gained widespread acceptance and usage. Perhaps one of its best features is that it tracks the state of real-world resources, which makes Day-2 and onward operations easier and more powerful.

ARM templates come entirely from Microsoft and are consumed both internally and externally as the de facto standard for describing resources on Azure, with import and export options. A dedicated cloud service, the Azure Resource Manager, expects and enforces this convention for all resources to provide effective validation, idempotency, and repeatability.

Azure Blueprints can be leveraged to allow an engineer or architect to sketch a project's design parameters and define a repeatable set of resources that implements and adheres to an organization's standards, patterns, and requirements. It is a declarative way to orchestrate the deployment of various resource templates and other artifacts, such as role assignments, policy assignments, ARM templates, and resource groups. Blueprint objects are stored in Cosmos DB and replicated to multiple Azure regions. Since it is designed to set up the environment, it is different from resource provisioning. This package fits nicely into a CI/CD pipeline.

With ARM templates, one or more Azure resources can be described in a document, but that document does not exist natively in Azure and must be stored locally or in source control. Once those resources deploy, there is no active connection or relationship to the template.

Other IaC providers like Terraform track the state of real-world resources, which makes Day-2 and onward operations easier and more powerful; with Azure Blueprints, the relationship between what should be deployed and what was deployed is preserved. This connection supports improved tracking and auditing of deployments. It even works across several subscriptions with the same blueprint.

Typically, the choice is not between a blueprint and a resource template, because one comprises the other, but between an Azure Blueprint and a Terraform tfstate. They differ in their organizational methodology, top-down versus bottom-up. Blueprints are great candidates for compliance and regulation, while Terraform is preferred by developers for its flexibility. Blueprints manage Azure resources only, while Terraform can work with various resource providers.

Once the choice is made, some challenges must be tackled next. The account with which the IaC is deployed, and the secrets it must know for those deployments to occur correctly, are managed centrally rather than left in the hands of individual end-users. Packaging and distributing solutions for end-users is easier when these can be read from a single source of truth in the cloud, so at least the location in the cloud from which the solution reads and deploys the infrastructure must be known beforehand.

The DevSecOps workflow has a double loop across stages including create -> plan -> monitor -> configure -> release -> package -> verify, where the create, plan, verify, and package stages belong to Dev, or design time, and the monitor, configure, and release stages belong to Ops, or runtime. SecOps sits at the cusp between these two halves of Dev and Ops and participates in the planning, package, and release stages.

Some of the greatest challenges of DevSecOps are, firstly, cultural, stemming from market fragmentation in terms of IaC providers, and secondly, the wide variety of skills required for such IaC. Others include the definition of well-known code or design patterns, difficulty in replicating errors, IaC language specifics and a diverse toolset, security and trustworthiness, configuration drift, and changing infrastructure requirements.