Thursday, May 16, 2024

 

This is a continuation of previous articles on IaC shortcomings and resolutions. Using Azure Front Door as the example, earlier sections explained the use of separate origin groups for the logical organization of backend and front-end endpoints. This section discusses route configuration.

A route is the primary directive that tells Azure Front Door how to handle traffic. The route settings define an association between a domain and an origin group. Features such as patterns to match and rule sets enable granular control over traffic to the backend resources.

A routing rule is composed of two major parts: the “left-hand side” and the “right-hand side”. Front Door matches the incoming request against the left-hand side of the route, while the right-hand side defines how the request gets processed. On the left-hand side are the HTTP protocols, the domain, and the path; these properties are expanded out so that every combination of a protocol, domain, and path is a potential match set. On the right-hand side are the routing decisions. If caching is not enabled, requests are routed directly to the backend.

Route matching selects the “most specific request” that matches the “left-hand side”. The order of matching is always protocol first, followed by the domain and then the path. The host match is a strict yes or no: either there is a route with an exact match on the frontend host, or there is not, and in the latter case a bad request error is returned. After host matching comes path matching, which uses similar logic, except that in addition to an exact match, an approximate match based on a wildcard pattern is allowed. As always, a failed match returns a bad request error.
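As a rough illustration of this match order (protocol, then host, then path, with the most specific path pattern preferred), the following plain-Java sketch models the left-hand-side evaluation. It is a simplified assumption for illustration only, not the actual Front Door implementation; the Route and RouteMatcher names are hypothetical.

import java.util.List;
import java.util.Optional;

class Route {
    String protocol; // e.g. "HTTPS"
    String host;     // e.g. "www.contoso.com"
    String path;     // e.g. "/images/*" or "/account"
    Route(String protocol, String host, String path) {
        this.protocol = protocol;
        this.host = host;
        this.path = path;
    }
}

class RouteMatcher {
    // Simplified model of the left-hand-side match: protocol and host must match
    // exactly; the path matches either exactly or by wildcard prefix. When several
    // patterns qualify, the most specific (longest) one wins. An empty result
    // corresponds to the bad request error described above.
    static Optional<Route> match(List<Route> routes, String protocol, String host, String path) {
        return routes.stream()
                .filter(r -> r.protocol.equalsIgnoreCase(protocol))
                .filter(r -> r.host.equalsIgnoreCase(host))
                .filter(r -> r.path.endsWith("*")
                        ? path.startsWith(r.path.substring(0, r.path.length() - 1))
                        : r.path.equals(path))
                .max((a, b) -> Integer.compare(a.path.length(), b.path.length()));
    }
}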

One of the key differences between an application gateway and Front Door is this hybrid combination of custom-domain and path-based routing described above. An application gateway is typically deployed with either custom-domain-based or path-based routing, but Front Door, by virtue of being global across different regional resource types, allows both custom-domain and path-based matches.

The anycast behavior of Front Door calls for a comprehensive test matrix to avoid any unpredictability in the low-latency choices made by default. For a given host and path, there are at least four test cases, even for a “/*” path. Predictability also involves trying those requests from various regions.

Thus, separate endpoints, routing and host header all play a role in determining the responses from the Azure Front Door.

Previous articles: https://1drv.ms/w/s!Ashlm-Nw-wnWhO4RqzMcKLnR-r_WSw?e=kTQwQd

 

 

Wednesday, May 15, 2024

This is a continuation of previous articles on IaC shortcomings and resolutions. Using Azure Front Door as the example, earlier sections explained the use of separate origin groups for the logical organization of the backend. This section talks about the organization of endpoints.

An endpoint is a logical grouping of one or more routes associated with domain names. Each endpoint can be assigned a domain name, either built-in or custom. A Front Door profile can contain multiple domains, especially when they have different routes and route paths. Domains can be combined into a single endpoint when they need to be turned on or off collectively.

Azure Front Door can create managed certificates for custom domains even when the DNS resolution occurs outside Azure. This makes HTTPS for end-to-end SSL easier to set up, because the signed certificate is universally validated.

The steps taken to create endpoints are similar on both the frontend and the backend. The origin should be viewed as an endpoint for the application backend. When an origin is created in an origin group, Front Door requires the origin type and host header. For an Azure App Service, this could be contoso.azurewebsites.net or a custom domain. Front Door validates that the request hostname matches the host name in the certificate provided by the origin.
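A minimal sketch of the origin settings described above, using hypothetical class and field names rather than any Azure SDK, might look like the following; the certificate check is modeled as a plain string comparison for illustration.

class Origin {
    String originType;       // e.g. "App Service"
    String hostName;         // e.g. "contoso.azurewebsites.net" or a custom domain
    String originHostHeader; // host header forwarded to the backend

    Origin(String originType, String hostName, String originHostHeader) {
        this.originType = originType;
        this.hostName = hostName;
        this.originHostHeader = originHostHeader;
    }

    // Front Door rejects the origin when the request host name does not match
    // the name in the certificate presented by the origin; modeled here as a
    // simple case-insensitive string check.
    boolean certificateMatches(String certificateSubjectName) {
        return hostName.equalsIgnoreCase(certificateSubjectName);
    }
}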

Thus, separate endpoints, routing and host header all play a role in determining the responses from the Azure Front Door.

Previous articles: https://1drv.ms/w/s!Ashlm-Nw-wnWhO4RqzMcKLnR-r_WSw?e=kTQwQd 


Tuesday, May 14, 2024

 #codingexercise

Position eight queens on a chess board without conflicts:

    public static void positionEightQueens(int[][] B, int[][] used, int row) throws Exception {

        if (row == 8) {

            if (isAllSafe(B)) {

                printMatrix(B, B.length, B[0].length);

            }

            return;

        }

        for (int k = 0; k < 8; k++) {

            if ( isSafe(B, row, k) && isAllSafe(B)) {

                B[row][k] = 1;

                positionEightQueens(B, used, row + 1);

                B[row][k]  = 0;

            }

        }

    }

    public static boolean isSafe(int[][] B, int p, int q) {

        int row = B.length;

        int col = B[0].length;

        for (int i = 0; i < row; i++) {

            for (int j = 0; j < col; j++) {

                if (i == p && j == q) { continue; }

                if (B[i][j] == 1) {

                    boolean notSafe = isOnDiagonal(B, p, q, i, j) ||

                            isOnVertical(B, p, q, i, j) ||

                            isOnHorizontal(B, p, q, i, j);

                    if(notSafe){

                        return false;

                    }

                }

             }

        }

        return true;

    }

    public static boolean isAllSafe(int[][] B) {

        for (int i = 0; i < B.length; i++) {

            for (int j = 0; j < B[0].length; j++) {

                if (B[i][j]  == 1 && !isSafe(B, i, j)) {

                    return false;

                }

            }

        }

        return true;

    }

    public static boolean isOnDiagonal(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k ++) {

            if (r2 - k >= 0 &&  c2 - k >= 0 && r1 == r2 - k && c1 == c2 - k) {

                return true;

            }

            if (r2 + k < row && c2 + k < col && r1 == r2 + k && c1 == c2 + k) {

                return true;

            }

            if (r2 - k >= 0 && c2 + k < col && r1 == r2 - k && c1 == c2 + k) {

                return true;

            }

            if (r2 + k < row  && c2 - k >= 0 && r1 == r2 + k && c1 == c2 - k) {

                return true;

            }

        }

        return result;

    }

    public static boolean isOnVertical(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k++) {

            if (c2 - k >= 0  && c1 == c2 - k && r1 == r2 ) {

                return true;

            }

            if (c2 + k < col && c1 == c2 + k && r1 == r2) {

                return true;

            }

        }

        return result;

    }

    public static boolean isOnHorizontal(int[][] used, int r1, int c1, int r2, int c2) {

        boolean result = false;

        int row = used.length;

        int col = used[0].length;

        for (int k = 0; k < 8; k++) {

            if (r2 - k >= 0  && r1 == r2 - k && c1 == c2 ) {

                return true;

            }

            if (r2 + k < row && r1 == r2 + k && c1 == c2) {

                return true;

            }

        }

        return result;

    }
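The listing above calls a printMatrix helper that is not shown. A minimal version consistent with the sample output below, assuming each cell is printed incremented by one (so empty squares show as 1 and queens as 2), could be:

    // Assumed helper, not part of the original listing: prints each cell value
    // plus one, which matches the 1/2 pattern in the sample output below.
    public static void printMatrix(int[][] B, int rows, int cols) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                sb.append(B[i][j] + 1).append(" ");
            }
            sb.append(System.lineSeparator());
        }
        System.out.print(sb);
    }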


Sample output:

1 1 2 1 1 1 1 1

1 1 1 1 1 2 1 1

1 1 1 2 1 1 1 1

1 2 1 1 1 1 1 1

1 1 1 1 1 1 1 2

1 1 1 1 2 1 1 1

1 1 1 1 1 1 2 1

2 1 1 1 1 1 1 1


Monday, May 13, 2024

This is a summary of the book titled “Practical Fairness: Achieving Fair and Secure Data Models” written by Aileen Nielsen and published by O’Reilly in 2020. The author is a software engineer and attorney who examines various kinds of fairness and how both training data and algorithms can promote them. Machine learning developers and MLOps practitioners can benefit from this discussion, and, as an O’Reilly title, the book comes with Python examples. Fairness in this book is about who gets what and how that is decided. Fair results start with fair data, and there must be an all-round effort to increase fairness at various stages of the process. Privacy and fairness are vulnerable to attacks. Product design should also be fair and make a place for fair models. Industry standards and regulations can demand fairness from the market in all the relevant products.

Fairness in technology is crucial for ensuring that users receive fair treatment, and that technology is used responsibly. It is essential for software developers to differentiate between equity and equality, security, and privacy, and to avoid legal issues and consumer backlash. People tend to prefer equity over equality, as it implies that people should not receive different treatment for belonging to a certain group. However, equity is not straightforward, as privacy metrics can be undercut by human error.


To ensure fairness, machine learning models should start with fair data, which should be high-quality, suited to the model's intended purposes, and correctly labeled. Technology is neither good nor bad, and data quality can suffer from biased sampling and incomplete data. A fairness mandate can stimulate ideas in mathematics, computer science, and law, but it cannot guarantee fairness in all respects.

Data models can be trained to increase fairness throughout their development process. Pre-processing is the most flexible and powerful option, offering the most opportunities for downstream metrics. Techniques to increase fairness include deleting parts of data that could be exploited to discriminate against people, such as gender, or attaching weightings to different data about a person. However, individual fairness can lead to unfairness for a group, so techniques like learned fair representation and optimized pre-processing balance the two. Adversarial de-biasing involves having a second model analyze the output of the first, ensuring non-discriminatory outcomes.


Sometimes, neither pre-processing data nor training a model for fairness is possible or allowable. Users can process the output of a model to make it fairer, providing transparency. To gauge whether a model generates fair outcomes, audit it using black-box auditing or white-box auditing. Interpretable models or black-box models that explain the basis of decisions can help avoid arbitrary decisions. Privacy and fairness are vulnerable to attacks, as modern technologies may undercut anonymization and new concepts emerge, such as the "right to be forgotten."

Privacy is an evolving legal norm, and machine learning models are vulnerable to attacks that aim to subvert their output. Attacks can be evasion attacks, where attackers feed model data that forces it to err, or poisoning attacks, where the attackers make the model malfunction or classify certain data in a desired way. Fair models should be integrated into fair products, satisfying customer expectations, and ensuring that companies do not harm those who contributed data. Companies should also consider how their products could be misused and not roll out updates too frequently. Even if a product works well, it can have fairness problems if it works better for some than for others. The market will not force companies to deliver fairness in their products without the correct laws. The EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are two major laws concerning data use, providing citizens with the right to data portability, erasure, and correction of personal data.

The GDPR and CCPA have prompted organizations to consider privacy, transparency, and accountability. The GDPR prohibits algorithms from making significant decisions affecting EU citizens. In the US, laws regulating algorithm use have not passed. Some states, like California, set rules for chatbots, ensuring users are not communicating with humans. As machine learning advances, technology and fairness laws will evolve.


Sunday, May 12, 2024

While large tech-sector businesses are developing proprietary warehouse robotics and drone fleet software, the rest of the industry looks to them for solutions. Unfortunately, these same businesses failed to deliver even on cloud migration and modernization solutions for various industries, which were left to outsource strategy and implementation to vendors. While a boutique drone solution can be highly customizable, need-based, and highly effective, a Shopify-like platform for drone fleet management can bring best practices to the industry without significant development costs. Businesses recognize the value of drone formation software that expands their options for aerial activities such as delivery networks. From event exhibits such as the Lady Gaga show at the 2016 Super Bowl halftime, to home-delivery automation, aerial transport of goods, and forestation activities, drone-related software automation will only grow. It is this absence of a managed service for drone formation that inspires the technical design and use case presented in this document. A B2B solution provided by a business that implements a rich and robust drone formation handling mechanism not only gets it right once but also enables smoother and easier investment for the businesses that subscribe to these services, without them having to reinvent the wheel. Competition in this field is little to non-existent, given the breakthroughs possible in wiring up drone activities to a variety of drones. When end users do not have to rewrite automations for changes in business purpose or technological advances in drone units, they become laser-focused on their mission. With the use of the public cloud and a pay-as-you-go model, the ability to isolate and scale drone-formation-specific computing and data storage is unparalleled and not yet mainstream in this market. This gives a strong tailwind to drone formation and planning services.

Saturday, May 11, 2024

 

This is a summary of the book titled “Rewired: The McKinsey guide to outcompeting in the age of Digital and AI” written by Rodney Zemmel, Eric Lamarre and Kate Smaje and published by Wiley, 2023. It is a relevant and authoritative playbook and reference to the transformation brought on by AI. For those who might question the source, McKinsey is a global leader and leaders like Sundar Pichai, James Gorman, and Sheryl Sandberg have all worked there. Written for executives, this book’s recommendations apply to organizations of all sizes.

This book suggests that digital and AI transformations are  here to stay. They will be a constant and never-ending change that can be embraced with a careful roadmap, and one built on foundations. Organizations will require core digital talent in-house and the company’s operating model must support rapid development. Such an in-house technology environment needs seven capabilities to support digital innovation across the organization. Digital leaders must establish a data architecture that facilitates the flow of data from source to use. Strategy forged by these leaders must drive customer or user adoption.

Digital and AI transformations are crucial for businesses to remain competitive. With 89% of leaders having undertaken some form of digital transformation, it is essential for leaders to extend their initiatives beyond tech to encompass all organizational capabilities. A successful transformation depends on foundational work, including a clear, detailed road map that outlines business domains, solutions, programs, and key performance indicators. A domain-based approach is recommended to right-size the transformation's scope, prioritizing domains based on value and feasibility. Leaders should avoid being distracted by low-value pet projects and focus on long-term capability building. Digital leaders should prioritize people and capabilities over tech solutions, focusing on long-term improvement and customer experience. The CEO should also take personal responsibility for the transformation, as every member of the executive team plays a role in driving it. By laying the groundwork and implementing a robust digital roadmap, businesses can achieve meaningful change and measurable results.

To support continuous transformation, organizations should focus on core digital talent in-house, with 70% to 80% of their digital talent residing in-house. Many organizations have established a Talent Win Room (TWR) to focus on digital talent, including executive sponsors, tech recruiters, HR specialists, and part-time functional specialists. Digital leaders offer dual career paths and align compensation with employee value. The company's operating model must enable fast, flexible technology development, with agile pods being a key component. Leaders must deepen their understanding of agile beyond processes and rituals to ensure pods deliver their potential value. Three dominant operating model designs are digital factory, product, and platform (P&P), and enterprise-wide agile. The digital transformation must also include user experience design capabilities to ensure solutions meet customer needs and wants. Agile pods should include design experts and leaders who should understand how customer experience links to value.

The enterprise's technology environment needs seven capabilities to support digital innovation across the organization. These capabilities include decoupled architecture, cloud, engineering practices, developer tools, reliable production environments, automated security, and machine learning operations (MLOps) automation. A distributed, decoupled architecture enables agility and scaling, while a cloud approach and data platform are essential for reducing costs. Engineering practices, such as automation of the software development lifecycle (SDLC), DevOps, and coding standards, are crucial for agility and quality. Developer tools should be provided in sandbox environments, and a reliable production environment must be secure and available. Automated security is essential for moving to the cloud, and machine learning operations (MLOps) automation can help exploit AI's potential. A data architecture facilitates the flow of data from source to use, and data products are essential for standardization, scaling, and speed. A data strategy sets out the organization's data requirements and plans for cleaning and delivering its data.

To maximize the benefits of digital and AI transformation, leaders must implement strategies to drive customer or user adoption. Adoption depends on two factors: user experience and change management. Strategies include adapting the business model, designing in replication, tracking progress, establishing digital trust, and creating a digital culture. A CEO or division head should ensure alignment across the business, plan a replication approach, and assetize solutions. Leaders should track progress through a five-stage process, assess risks, review digital trust policies, and ensure operational capabilities support digital trust. Leaders should also display attributes that support a digital culture, such as customer-centricity, collaboration, and urgency.

Previous summaries: BookSummary88.docx
Summarizing Software: SummarizerCodeSnippets.docx. 

Friday, May 10, 2024

This is a continuation of articles on IaC shortcomings and resolutions. One of the pitfalls of IaC modernization is the copy-and-paste mindset when transferring existing rules from one resource type to another. Take the case of dedicated deployments for resources like Azure Front Door and Azure Application Gateway. The default traffic corresponds to the “/*” rule. Clients expecting a response from a zonal resource such as a virtual machine scale set might expect it to come from a specific instance in a given region and zone, regardless of whether the resource in front is switched from Azure Application Gateway to Azure Front Door as a drop-in replacement between clients and hosted applications, without nesting one behind the other. Yet the two resources differ in how they handle default traffic.

1. Azure Application Gateway:

o Azure Application Gateway is a layer 7 load balancer that provides application-level routing and load balancing services.

o By default, when traffic is sent to the root path ("/") of the domain, Azure Application Gateway uses the "default backend pool" to handle the request.

o The default backend pool can be configured to point to a specific backend pool or virtual machine scale set. It acts as a fallback when no specific path-based routing rules match the request.

o If you have defined any path-based routing rules for other paths, they will take precedence over the default backend pool when matching requests.

2. Azure Front Door:

o Azure Front Door is a global, scalable entry point for web applications that provides path-based routing, SSL offloading, and other features.

o When traffic is sent to the root path ("/") of the domain, Azure Front Door uses the "default routing rule" to handle the request.

o The default routing rule in Azure Front Door allows you to define a set of backend pools and associated routing conditions for requests that don't match any specific path-based routing rules.

o You can configure the default routing rule to redirect or route traffic to a specific backend pool, providing flexibility in handling default requests.

In summary, both Azure Application Gateway and Azure Front Door offer path-based routing capabilities, but they handle default traffic sent to the root path differently. Azure Application Gateway uses a default backend pool as a fallback, while Azure Front Door uses a default routing rule to handle such requests.

Now let us consider the case where two application gateways, one per region, are placed as backends to a global Azure Front Door. Furthermore, say each application gateway routes to different backend pool members for “/images/” and “/videos” respectively. If the traffic always went to the same application gateway, there would be predictability in which backend answers either route, but the default routing rule of “/*” in Front Door means either application gateway could be targeted, and the response might come unexpectedly from another region. In this case, the proper configuration defines distinct routes to each application gateway, and these routes can carry route-path qualifiers for images and videos. In fact, it might even be better to consolidate all images behind one application gateway and all videos behind the other, if the latency differences can be tolerated. In this way, the resolution to the target becomes predictable.
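A sketch of the two configurations, expressed as plain Java data rather than actual IaC, may help illustrate the difference; the class name, path patterns, and gateway names are hypothetical.

import java.util.List;

class RouteConfig {
    String pathPattern; // e.g. "/images/*"
    String backend;     // e.g. an application gateway in a specific region
    RouteConfig(String pathPattern, String backend) {
        this.pathPattern = pathPattern;
        this.backend = backend;
    }
}

class FrontDoorRouteExamples {
    // Ambiguous: a single catch-all route, so either regional gateway may answer.
    static final List<RouteConfig> catchAllOnly = List.of(
            new RouteConfig("/*", "appgw-eastus-or-appgw-westus"));

    // Predictable: each path pattern is pinned to one regional gateway,
    // with the catch-all kept only as an explicit, known fallback.
    static final List<RouteConfig> explicitRoutes = List.of(
            new RouteConfig("/images/*", "appgw-eastus"),
            new RouteConfig("/videos/*", "appgw-westus"),
            new RouteConfig("/*", "appgw-eastus"));
}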


Thursday, May 9, 2024

 There is a cake factory producing K-flavored cakes. Flavors are numbered from 1 to K. A cake should consist of exactly K layers, each of a different flavor. It is very important that every flavor appears in exactly one cake layer and that the flavor layers are ordered from 1 to K from bottom to top. Otherwise the cake doesn't taste good enough to be sold. For example, for K = 3, cake [1, 2, 3] is well-prepared and can be sold, whereas cakes [1, 3, 2] and [1, 2, 3, 3] are not well-prepared.

 

The factory has N cake forms arranged in a row, numbered from 1 to N. Initially, all forms are empty. At the beginning of the day a machine for producing cakes executes a sequence of M instructions (numbered from 0 to M−1) one by one. The J-th instruction adds a layer of flavor C[J] to all forms from A[J] to B[J], inclusive.

 

What is the number of well-prepared cakes after executing the sequence of M instructions?

 

Write a function:

 

class Solution { public int solution(int N, int K, int[] A, int[] B, int[] C); }

 

that, given two integers N and K and three arrays of integers A, B, C describing the sequence, returns the number of well-prepared cakes after executing the sequence of instructions.

 

Examples:

 

1. Given N = 5, K = 3, A = [1, 1, 4, 1, 4], B = [5, 2, 5, 5, 4] and C = [1, 2, 2, 3, 3].

 

There is a sequence of five instructions:

 

The 0th instruction puts a layer of flavor 1 in all forms from 1 to 5.

The 1st instruction puts a layer of flavor 2 in all forms from 1 to 2.

The 2nd instruction puts a layer of flavor 2 in all forms from 4 to 5.

The 3rd instruction puts a layer of flavor 3 in all forms from 1 to 5.

The 4th instruction puts a layer of flavor 3 in the 4th form.

The picture describes the first example test.

 

The function should return 3. The cake in form 3 is missing flavor 2, and the cake in form 5 has additional flavor 3. The well-prepared cakes are forms 1, 2 and 5.

 

2. Given N = 6, K = 4, A = [1, 2, 1, 1], B = [3, 3, 6, 6] and C = [1, 2, 3, 4],

 

the function should return 2. The 2nd and 3rd cakes are well-prepared.

 

3. Given N = 3, K = 2, A = [1, 3, 3, 1, 1], B = [2, 3, 3, 1, 2] and C = [1, 2, 1, 2, 2],

 

the function should return 1. Only the 2nd cake is well-prepared.

 

4. Given N = 5, K = 2, A = [1, 1, 2], B = [5, 5, 3] and C = [1, 2, 1]

 

the function should return 3. The 1st, 4th and 5th cakes are well-prepared.

 

Write an efficient algorithm for the following assumptions:

 

N is an integer within the range [1..100,000];

M is an integer within the range [1..200,000];

each element of arrays A, B is an integer within the range [1..N];

each element of array C is an integer within the range [1..K];

for every integer J, A[J] ≤ B[J];

arrays A, B and C have the same length, equal to M.

// import java.util.*;

 

 

class Solution {

    public int solution(int N, int K, int[] A, int[] B, int[] C) {

        int[]  first = new int[N]; // first flavor layered into each form

        int[]  last = new int[N]; // most recent flavor, or a sentinel when layers arrive out of order

        int[]  num = new int[N]; // number of layers added to each form

        for (int i = 0; i < A.length; i++) {

            for (int current = A[i]-1; current <= B[i]-1; current++) {

                num[current]++;

                if (first[current] == 0) {

                    first[current] = C[i];

                    last[current] = C[i];

                    continue;

                }

                if (last[current] >= C[i]) {

                     last[current] = Integer.MAX_VALUE;

                } else {

                     last[current] = C[i];

               }

            }

        }

        int count = 0;

        for (int i = 0; i < N; i++) {

            if (((last[i] - first[i]) == (K - 1)) && (num[i] == K)) {

                count++;

            }

        }        

        // StringBuilder sb = new StringBuilder();

        // for (int i = 0; i < N; i++) {

        //     sb.append(last[i] + " ");

        // }

        // System.out.println(sb.toString());

        return count;

    }

}

Example test:   (5, 3, [1, 1, 4, 1, 4], [5, 2, 5, 5, 4], [1, 2, 2, 3, 3])

OK

 

Example test:   (6, 4, [1, 2, 1, 1], [3, 3, 6, 6], [1, 2, 3, 4])

OK

 

Example test:   (3, 2, [1, 3, 3, 1, 1], [2, 3, 3, 1, 2], [1, 2, 1, 2, 2])

OK

 

Example test:   (5, 2, [1, 1, 2], [5, 5, 3], [1, 2, 1])

OK


Wednesday, May 8, 2024

 Image processing is a field of study that involves analyzing, manipulating, and enhancing digital images using various algorithms and techniques. These techniques can be broadly categorized into two main categories: image enhancement and image restoration.

1. Image Enhancement:

o Contrast Adjustment: Techniques like histogram equalization, contrast stretching, and gamma correction are used to enhance the dynamic range of an image.

o Filtering: Filtering techniques such as linear filters (e.g., mean, median, and Gaussian filters) and non-linear filters (e.g., edge-preserving filters) can be applied to suppress noise and enhance image details.

o Sharpening: Techniques like unsharp masking and high-pass filtering can enhance the sharpness and details of an image.

o Color Correction: Methods like color balance, color transfer, and color grading can adjust the color appearance of an image.

2. Image Restoration:

o Denoising: Various denoising algorithms, such as median filtering, wavelet-based methods, and total variation denoising, can be used to remove noise from images.

o Deblurring: Techniques like blind deconvolution and Wiener deconvolution are used to recover the original image from blurred versions.

o Super-resolution: Super-resolution techniques aim to enhance the resolution and details of low-resolution images by utilizing information from multiple images or prior knowledge about the image degradation process.

o Image Inpainting: Inpainting algorithms fill in missing or corrupted regions in an image by estimating the content from the surrounding areas.

Apart from these, there are several other advanced image processing techniques, such as image segmentation, object recognition, image registration, and feature extraction, which are widely used in fields like computer vision, medical imaging, and remote sensing.

Let’s review these in detail:

1. Image Filtering: This algorithm involves modifying the pixel values of an image based on a specific filter or kernel. Filters like Gaussian, median, and Sobel are used for tasks like smoothing, noise reduction, and edge detection (a brief sketch follows this list).

2. Histogram Equalization: It is a technique used to enhance the contrast of an image by redistributing the pixel intensities. This algorithm is often used to improve the visibility of details in an image.

3. Image Segmentation: This algorithm partitions an image into multiple regions or segments based on specific criteria such as color, texture, or intensity. Segmentation is useful for tasks like object recognition, image understanding, and computer vision applications.

4. Edge Detection: Edge detection algorithms identify and highlight the boundaries between different regions in an image. Commonly used edge detection algorithms include Sobel, Canny, and Laplacian of Gaussian (LoG).

5. Image Compression: Image compression algorithms reduce the file size of an image by removing redundant or irrelevant information. Popular compression algorithms include JPEG, PNG, and GIF.

6. Morphological Operations: These algorithms are used for processing binary or grayscale images, mainly focusing on shape analysis and image enhancement. Operations such as dilation, erosion, opening, and closing are commonly used.

7. Feature Extraction: Feature extraction algorithms extract meaningful information or features from an image, which can be used for tasks like object recognition, pattern matching, and image classification. Techniques like Scale-Invariant Feature Transform (SIFT) and Histogram of Oriented Gradients (HOG) are commonly used.

8. Neural Networks: Deep learning algorithms, such as Convolutional Neural Networks (CNNs), are widely used for image processing tasks. CNNs can automatically learn and extract features from images, making them highly effective for tasks like object detection, image classification, and image generation.
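To make the filtering item above concrete, here is a minimal sketch of a 3x3 mean (box) filter over a grayscale image using the standard java.awt.image API. It illustrates the general idea only and is not a production implementation; the class name is arbitrary.

import java.awt.image.BufferedImage;

public class MeanFilter {
    // Minimal sketch: 3x3 mean (box) filter over a grayscale BufferedImage.
    // Border pixels are copied unchanged to keep the example short.
    public static BufferedImage apply(BufferedImage src) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (x == 0 || y == 0 || x == w - 1 || y == h - 1) {
                    out.setRGB(x, y, src.getRGB(x, y));
                    continue;
                }
                int sum = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        sum += src.getRGB(x + dx, y + dy) & 0xFF; // gray level from the blue channel
                    }
                }
                int mean = sum / 9;
                out.setRGB(x, y, (0xFF << 24) | (mean << 16) | (mean << 8) | mean);
            }
        }
        return out;
    }
}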

As with most algorithms, the quality of the data plays an immense role in the output of image processing. Image capture, continuous capture, lighting, and the best match among captures are some of the factors to weigh when comparing choices for the same image processing task. The use of lighting for better results in high-contrast images is a significant area of research. For example:

Recently, an embedded system was proposed that leverages image processing techniques for intelligent ambient lighting. The focus is on reference-color-based illumination for object detection and positioning within robotic handling scenarios.

Key points from this research:

o Objective: To improve object detection accuracy and energy utilization.

o Methodology: The system uses LED-based lighting controlled via pulse-width modulation (PWM). Instead of external sensors, it calibrates lighting based on predetermined red, green, blue, and yellow (RGBY) reference objects.

o Color Choice: Yellow was identified as the optimal color for minimal illumination while achieving successful object detection.

o Illuminance Level: Object detection was demonstrated at an illuminance level of approximately 50 lx.

o Energy Savings: Energy savings were achieved based on ambient lighting conditions.

This study highlights the importance of color choice and intelligent lighting systems in computer vision applications.

Another topic involves improving energy efficiency of indoor lighting:

This proposes an intelligent lighting control system based on computer vision. It aims to reduce energy consumption and initial installation costs.

The system utilizes real-time video stream data from existing building surveillance systems instead of traditional sensors for perception.

By dynamically adjusting lighting based on visual cues, energy efficiency can be improved.

The book "Active Lighting and Its Application for Computer Vision" covers various active lighting techniques. Photometric stereo and structured light are some examples. Actively controlling lighting conditions helps to enhance the quality of captured images and improve subsequent processing.

Previous articles on data processing: DM.docx 

 

Subarray Sum equals K 

Given an array of integers nums and an integer k, return the total number of subarrays whose sum equals to k. 

A subarray is a contiguous non-empty sequence of elements within an array. 

Example 1: 

Input: nums = [1,1,1], k = 2 

Output: 2 

Example 2: 

Input: nums = [1,2,3], k = 3 

Output: 2 

Constraints: 

1 <= nums.length <= 2 * 10^4 

-1000 <= nums[i] <= 1000 

-10^7 <= k <= 10^7 

 

class Solution { 

    public int subarraySum(int[] nums, int k) { 

        if (nums == null || nums.length == 0) return -1; 

        int[] sums = new int[nums.length];    

        int sum = 0; 

        for (int i = 0; i < nums.length; i++){ 

            sum += nums[i]; 

            sums[i] = sum; 

        } 

        int count = 0; 

        for (int i = 0; i < nums.length; i++) { 

            for (int j = i; j < nums.length; j++) { 

                int current = nums[i] + (sums[j] - sums[i]); 

                if (current == k){ 

                    count += 1; 

                } 

            } 

        } 

        return count; 

    }

}

 

[1,3], k=1 => 1 

[1,3], k=3 => 1 

[1,3], k=4 => 1 

[2,2], k=4 => 1 

[2,2], k=2 => 2 

[2,0,2], k=2 => 4 

[0,0,1], k=1=> 3 

[0,1,0], k=1=> 2 

[0,1,1], k=1=> 3 

[1,0,0], k=1=> 3 

[1,0,1], k=1=> 4 

[1,1,0], k=1=> 2 

[1,1,1], k=1=> 3 

[-1,0,1], k=0 => 2 

[-1,1,0], k=0 => 3 

[1,0,-1], k=0 => 2 

[1,-1,0], k=0 => 3 

[0,-1,1], k=0 => 3 

[0,1,-1], k=0 => 3 
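For reference, a common linear-time alternative (not part of the original post) counts prefix sums with a hash map: for each running prefix sum, it adds the number of earlier prefixes that differ from it by exactly k. The class name here is hypothetical.

import java.util.HashMap;
import java.util.Map;

class SubarraySumLinear {
    public int subarraySum(int[] nums, int k) {
        Map<Integer, Integer> prefixCounts = new HashMap<>();
        prefixCounts.put(0, 1); // the empty prefix
        int running = 0, count = 0;
        for (int value : nums) {
            running += value;
            // every earlier prefix equal to (running - k) ends a subarray summing to k
            count += prefixCounts.getOrDefault(running - k, 0);
            prefixCounts.merge(running, 1, Integer::sum);
        }
        return count;
    }
}

For example, with nums = [2, 0, 2] and k = 2 this returns 4, matching the hand-traced case above.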

 

 

 






Tuesday, May 7, 2024

 This is a summary of the book titled “The BRAVE Leader” written by David McQueen and published by Practical Inspirational Publishing in 2024. The author is a leadership coach who asserts that failing to model inclusivity has dire consequences for a leader no matter how busy they might get. They must empower all their people and create systems that serve a wide range of stakeholders’ needs. They can do so and more by following the Bold, Resilient, Agile, Visionary, and Ethical Leadership style.

The framework in this book tackles root causes and expands emerging possibilities. It helps to drive innovation while maintaining strategic thinking. Inclusive practices can be embedded into the "DNA" of the organization. Honesty, transparency, and a culture that drives antifragility will help with systemic change.

Good leaders inspire and empower others in various contexts, including community projects, sports games, and faith groups. Leadership is not just about management, but also involves understanding an organization's norms, values, and external factors. Leaders need followers to buy into their vision and actively participate in the work. Inclusive leadership involves attracting, empowering, and supporting talented individuals to achieve common goals without marginalizing them. To achieve this, leaders must be "BRAVE" - bold, resilient, agile, visionary, and ethical. This requires systems thinking and the ability to sense emerging possibilities. To be a BRAVE leader, leaders should focus on creating a culture where team members can develop their leadership qualities. They should resist the temptation to position themselves as an omnipotent "hero leader" and consider their decision-making approach. To align with the BRAVE framework, leaders should consider boldness, resilience, agility, vision, and ethicalness in their decision-making approaches.

BRAVE leaders use the "five W's" approach to problem-solving, which involves identifying the issue, identifying the business area, determining the deadline, identifying the most affected stakeholders, and defining what success looks like. This approach helps in identifying the root cause of the problem and addressing it.

Strategic thinking is crucial for driving innovation and thriving amid uncertainty. It involves examining complex problems, identifying potential issues and opportunities, and crafting action plans for achieving big-picture goals. Inclusive leadership is essential for organizations to avoid homogeneous decision-making and foster a culture that combines inclusivity and strategic thinking.

Implementing inclusive practices into the organization's DNA includes rethinking recruitment, hiring practices, performance management, offboarding, and mapping customer segments. This involves rethinking the "best" applicants, updating hiring practices, and fostering a culture that combines inclusivity and strategic thinking. By embracing diversity and fostering a culture of inclusivity, organizations can thrive in the face of uncertainty and drive innovation.

Inclusive practices should be incorporated into an organization's DNA, including recruitment, performance management, offboarding, mapping customer segments, and product development. Expand the scope of applicants and provide inclusive hiring training to team members. Be more inclusive in performance management by asking questions and viewing performance reviews as opportunities for improvement. Treat exit interviews as learning experiences and consider customer needs and characteristics. Ensure stakeholders feel included in product development, ensuring they feel part of a two-way relationship.

Self-leadership is essential for effective leadership, as it involves understanding oneself, identifying desired experiences, and intentionally guiding oneself towards them. BRAVE leaders model excellence, embracing self-discipline, consistency, active listening, and impulse control. They prioritize their mental and physical health, taking breaks and vacations to show team members that self-care is crucial. Leadership coaching can help develop BRAVE characteristics and identify interventions for long-term changes.

Inclusive leadership requires a positive organizational climate, where employees feel valued, respected, and included. Building a BRAVE organizational culture involves setting quantifiable goals and holding managers and leaders accountable for meeting them. Diverse teams benefit from a broader range of insights, perspectives, and talents, and problem-solving more effectively by approaching challenges from multiple angles.

By embracing courage and overcoming fear, leaders drive systemic change, leading to courageous decision-making, better management, and a systematic approach to leadership. By cultivating positive characteristics like generosity, transparency, and accountability, leaders can drive sustainable growth and foster a more inclusive environment.

Previous Book Summaries: BookSummary86.docx 

Summarizing Software: SummarizerCodeSnippets.docx: https://1drv.ms/w/s!Ashlm-Nw-wnWhOYMyD1A8aq_fBqraA?e=BGTkR7


Monday, May 6, 2024

 Data mining algorithms are powerful tools used in various fields to analyze and extract valuable insights from large datasets. These algorithms are designed to automatically discover patterns, relationships, and trends in data, enabling organizations and researchers to make informed decisions.

Here are some commonly used data mining algorithms:

1. Decision Trees: Decision trees are tree-like structures that represent decisions and their possible consequences. They are used to classify data based on a set of rules derived from the features of the dataset.

2. Random Forests: Random forests are an ensemble learning method that combines multiple decision trees to improve accuracy and reduce overfitting. Each tree in the forest is trained on a random subset of the data.

3. Naive Bayes: Naive Bayes is a probabilistic classifier based on Bayes' theorem. It assumes that the features are independent of each other, which simplifies the calculations. Naive Bayes is commonly used for text classification and spam filtering.

4. Support Vector Machines (SVM): SVM is a supervised learning model used for classification and regression analysis. It separates data points into different classes by finding an optimal hyperplane that maximizes the margin between the classes.

5. K-means Clustering: K-means is an unsupervised learning algorithm used for clustering analysis. It partitions data into K clusters based on their similarity, where K is a predefined number. It aims to minimize the intra-cluster variance and maximize the inter-cluster variance (a minimal sketch follows this list).

6. Neural Networks: Neural networks are artificial intelligence models inspired by the human brain's structure and function. They consist of interconnected nodes (neurons) organized in layers. Neural networks can be trained to recognize patterns, make predictions, and classify data.

7. Deep Learning: Deep learning is a subset of neural networks that involves training models with multiple layers. It has achieved significant breakthroughs in image recognition, natural language processing, and other complex tasks.

8. Association Rule Mining: Association rule mining is used to discover relationships and dependencies between items in a dataset. It identifies frequent itemsets and generates rules based on their co-occurrence.

9. Reinforcement Learning: Reinforcement learning is an AI technique where an agent learns to make optimal decisions by interacting with an environment. The agent receives feedback in the form of rewards or penalties, which guide its learning process.

10. Genetic Algorithms: Genetic algorithms are optimization techniques inspired by the process of natural selection. They use principles of genetics and evolution to iteratively search for the best solution in a large solution space.

These algorithms are just a small sample of the vast array of techniques available in data mining and artificial intelligence. Each algorithm has its strengths and weaknesses, and the choice depends on the specific problem and dataset at hand.
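As a concrete illustration of the clustering idea in item 5 above, here is a minimal one-dimensional k-means sketch in plain Java. The data, random seed, and iteration count are arbitrary choices for illustration, not a recommendation.

import java.util.Arrays;
import java.util.Random;

public class KMeans {
    // Minimal 1-D k-means: repeatedly assign each point to its nearest centroid,
    // then move each centroid to the mean of its assigned points.
    public static double[] cluster(double[] points, int k, int iterations) {
        Random rng = new Random(42);
        double[] centroids = new double[k];
        for (int i = 0; i < k; i++) {
            centroids[i] = points[rng.nextInt(points.length)]; // random initial centroids
        }
        int[] assignment = new int[points.length];
        for (int iter = 0; iter < iterations; iter++) {
            // Assignment step: nearest centroid for each point.
            for (int p = 0; p < points.length; p++) {
                int best = 0;
                for (int c = 1; c < k; c++) {
                    if (Math.abs(points[p] - centroids[c]) < Math.abs(points[p] - centroids[best])) {
                        best = c;
                    }
                }
                assignment[p] = best;
            }
            // Update step: recompute each centroid as the mean of its points.
            double[] sums = new double[k];
            int[] counts = new int[k];
            for (int p = 0; p < points.length; p++) {
                sums[assignment[p]] += points[p];
                counts[assignment[p]]++;
            }
            for (int c = 0; c < k; c++) {
                if (counts[c] > 0) centroids[c] = sums[c] / counts[c];
            }
        }
        return centroids;
    }

    public static void main(String[] args) {
        double[] data = {1.0, 1.2, 0.8, 8.0, 8.5, 7.9};
        System.out.println(Arrays.toString(cluster(data, 2, 10)));
    }
}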

Reference: https://1drv.ms/w/s!Ashlm-Nw-wnWxBFlhCtfFkoVDRDa?e=aVT37e  


Sunday, May 5, 2024

 This is a summary of the book titled “Reputation Analytics: Public opinion for companies” written by Daniel Diermeier and published by University of Chicago Press in 2023. This book outlines the necessity and method for a corporation to protect itself from a corporate reputation crisis. The author explains how small actions and even inactions can cascade into a massive crisis and potentially harm the business, even in the long run. By providing examples and learnings, the author provides a step-by-step framework to achieve that goal. Some of the highlights are that managing a corporate reputation is like thinking as a political strategist. People form both specific and general impressions and they do so in six primary ways. Companies face reputational crises when they trigger a “moral outrage”. It is difficult to fight perceptions that a brand is causing harm, so taking accountability becomes a consideration. The tasks in an activist’s campaign are something that a company must be comfortable managing. Leveraging a deep understanding of media and social network influence and harnessing emerging technologies are necessary. A risk management mindset that avoids common mistakes also helps.

Managing a corporate reputation is similar to managing public opinion, but companies must consider various publics, including customers, employees, investors, business partners, suppliers, and external groups like regulators and the media. Successful reputation management requires assuming external actors' perspectives and viewpoints, as public perceptions are not always rooted in direct experiences and may differ across constituencies, products, and markets. People form specific and general impressions of a brand in six primary ways: repetition, relevance, attention, affect, concordance, and online processing. Companies face reputational crises when they trigger "moral outrage," which is an emotional response to a brand's break with ethical norms or values. Moral judgment hinges on three main principles: the duty to avoid causing others harm, upholding fairness, justice, and rights, and respecting moral conventions and values. People employ two modes of thinking when making moral judgments: experiential (emotion-based) and analytical (logic-based). Companies must make reputation management an integral part of their strategic operations to avoid reputational crises and maintain a positive brand image.

Brands must take accountability for their actions and consider "folk economics" before taking action. The public's perception of commerce and industry can affect a company's reputation. Companies should fight against accusations with clear, easy-to-understand arguments and apologize for any harm caused. Leaders should demonstrate commitment to handling crises and empathy towards those harmed by the company's actions. Modern companies are more likely to face activist campaigns that damage reputations due to increased ethical expectations, media criticism, and trust-based business models. Social activism is more common and less localized, thanks to social media. Companies should adopt corporate social responsibility (CSR) practices but not be afraid of activist attacks. Statistical modeling should consider these factors to avoid misinterpretation. Companies should also leverage a deep understanding of media and social network influence to avoid negative media coverage that can trigger a reputational crisis. For example, Toyota's stock prices plummeted after a car crash, despite the company's overall safety record.

Perceptions and attitudes are influenced by peers, third-party experts, and media, both traditional and user-generated. Building and maintaining a successful reputation in the marketplace requires a deep understanding of these channels of influence. Media outlets can play a significant role in determining the issues to which people pay attention, and when one company in a particular industry or product area comes under media scrutiny, the potential for reputational damage increases for all businesses in that sector and those in closely related sectors. Social media also wields influence over public opinion, and using linear regression models can help identify triggers for a rise in certain variables.

To manage corporate reputation proactively, organizations should explore alternative ways of collecting and analyzing consumer data, such as sentiment analysis, machine learning algorithms, text-analytic scores, and supervised learning models. A risk management mindset is essential, as people will consider a company's current actions and past actions when under public scrutiny.

To avoid reputational crises, shift from reactive crisis management to proactive risk management. Build a reputation management system into your corporate strategy and appoint a tactical team to oversee it. Regularly update leadership on potential risks and employ preparation strategies for those you cannot avoid. Invest time in assessing important issues that could risk reputational damage. Monitor emerging issues and respond accordingly. By developing a proactive reputation management capability, you increase the likelihood of preventing crises before they occur.


Summarizing Software: 

SummarizerCodeSnippets.docx

##codingexercise https://1drv.ms/w/s!Ashlm-Nw-wnWhO1TAZ1Y860-W7-vGw?e=s3pvmb

Saturday, May 4, 2024

This is a summary of the book titled “Leveraged: The New Economics of Debt and Financial Fragility,” edited by Prof. Moritz Schularick and published by University of Chicago Press in 2022. This collection of essays presents an overview of the latest thinking on debt and financial fragility and its practical implications. Assumptions, such as the pre-2008 belief that financial institutions would be just fine, are questioned in several contexts, and the work of Hyman Minsky, who explained human nature’s tendency toward boom-and-bust cycles, is a recurring theme and inspiration.

Credit and leverage are fundamental factors in recent crises. Credit booms distort economies, and slowdowns follow. A banking system with higher capital-to-lending ratios does not reduce the likelihood of a financial crisis. Financial sector expectations drive lending booms and busts. When credit grows, the price of risk falls. A comprehensive historical categorization of financial crises is worthwhile; it suggests that the Great Depression may have been a credit boom gone wrong. Even though credit plays such a big role in creating instability, its policy implications are far from straightforward.

Credit booms distort economies and lead to economic slowdowns. Current financial system regulation is too focused on minimizing the risk of banks getting into trouble, which leads to a dramatic drop in consumer spending and a loss of confidence in the wider economy. To address this, the structure of banking regulation should split risk between creditors and debtors in a socially beneficial manner. One way to do this is with "state-contingent contracts" (SCCs), which automatically reduce the amount a borrower needs to pay back during a downturn. Examples of SCCs include student loans and loans to countries indexed to GDP growth. Credit booms generate distortions and vulnerabilities that often end in crises. The 2008 financial crisis revealed that both executives and shareholders take risks underwritten by the taxpayer. To address this, "lockups" or "debt-based compensation" for bankers' pay could be created, with the condition that there be no bankruptcies or taxpayer bailouts for some time after the remuneration period.

Excessive subprime lending was the popular narrative for the 2008 US financial crisis. However, borrowers outside the subprime segment, such as real estate investors, often had other, non-real-estate loans in distress, leading to policy implications that differ from those based on the notion that subprime borrowers drove the crisis. Young professionals, who were approximately 14% of all borrowers, represented almost 50% of foreclosures during the crisis's peak. A banking system with higher capital-to-lending ratios does not affect the likelihood of a financial crisis. Despite regulations increasing capital after previous crises, no evidence suggests that banks with more capital suffered less during that period. Research does show that better capital ratios influence recovery from a crisis. Financial sector expectations drive lending booms and busts, as they amplify trends of the recent past and neglect the mean reversion that long-term data suggests.

Investment industry methodology could improve the process of assessing the riskiness of banks, as recent crises have shown. Portfolio-assessing methodology, which combines market data and bank accounting data, could be a useful tool for banks to assess their risk. Studies show that low asset volatility in the past can predict credit growth, as agents update their views on risk based on the past and are overoptimistic about risk going forward. This could lead to excessive risk, resulting in fragility and raising the likelihood of a bad event.

A comprehensive historical categorization of financial crises is valuable, as it focuses on real-time metrics like bank equity returns, credit spread measures, credit distress metrics, nonperforming loan rates, and other bank data. This quantitative approach contrasts with the vagaries of commentators reporting on financial crises and the filtration of narratives by historians.

Narrative accounts of crises are still valuable, but research reveals that some "quiet crises" with less impact on the general economy have been forgotten or misunderstood. The spread of government-backed deposit insurance and the shift in lending from businesses to real estate were significant developments around the US Great Depression.

The US Great Depression may have been a credit boom gone wrong, as credit played a crucial role in generating the bubble. The growth of the money supply continued until 1926, but credit growth continued for a few more manic years. Total private credit reached 156% of GDP in 1929, more than in other developed countries. The New York Fed pressured member banks to cap brokers' loans, but the interest rates on brokers' loans proved attractive, so nonmember banks, financial institutions, companies, and individuals filled the gap. The Federal Reserve raised interest rates in 1928 to contain the boom, but the stock market continued to rise, attracting money from abroad. Interest in the role of credit in creating financial instability has revived since the 2008 crisis. Evidence suggests that the allocation of credit matters as much as its quantity, and excessive credit directed toward real estate is more likely to precede a financial crisis.


Thursday, May 2, 2024

This is a continuation of a previous article on cloud resources, their IaC, shortcomings, and resolutions, with some more exciting challenges to talk about. The previous article cited challenges and resolutions with regard to Azure Front Door and its backend services, aka origins. This article focuses on the IP access restrictions of those origins, such as App Services, but we resume from the best practice mentioned earlier: a good access restriction not only specifies the IP address range of the sender but also verifies a header, which in the case of Azure Front Door is X-Azure-FDID, stamped by the Front Door with its GUID. Since the GUID is specific to the Front Door instance, which is typically unique and global in most deployments, a rule that checks this header needs only one value to compare against. The Front Door sets this header on every request, so the access restriction rule applies to all traffic.

In this case, the App Services must be configured to filter IP addresses so that they accept traffic only from the Front Door’s backend IP address space and Azure’s infrastructure services. As pointed out earlier, this does not mean the IP addresses to which the Front Door’s endpoint resolves. Instead, the complete list of backend IP addresses is available through the service tag named AzureFrontDoor.Backend, which is helpful not only for finding the addresses but also for configuring rules in a network security group, if desired. The backend IP ranges are published at https://www.microsoft.com/download/details.aspx?id=56519, and appropriate CIDR ranges can be determined to encompass them all. Note that these ranges cover a large number of locations, specifically metros spread the world over. Should an IPv6 CIDR be needed, these ranges can be succinctly denoted by 2a01:111:2050::/44.
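
To make the CIDR determination concrete, the following Python sketch parses the service tag file published at the link above, assuming the weekly ServiceTags_Public JSON has been downloaded locally (the file name used here is an assumption), and extracts the AzureFrontDoor.Backend prefixes, split into IPv4 and IPv6 ranges.

import json
import ipaddress

# Parse the Azure service tags JSON (downloaded from the link above, e.g.
# ServiceTags_Public_<date>.json) and pull out the AzureFrontDoor.Backend
# prefixes, separating IPv4 and IPv6 CIDR ranges. The default file name
# below is an assumption; use whatever name the download produced.

def frontdoor_backend_prefixes(path="ServiceTags_Public.json"):
    with open(path) as f:
        tags = json.load(f)
    for entry in tags.get("values", []):
        if entry.get("name") == "AzureFrontDoor.Backend":
            prefixes = entry["properties"]["addressPrefixes"]
            v4 = [p for p in prefixes if ipaddress.ip_network(p).version == 4]
            v6 = [p for p in prefixes if ipaddress.ip_network(p).version == 6]
            return v4, v6
    raise ValueError("AzureFrontDoor.Backend tag not found in the file")

if __name__ == "__main__":
    ipv4, ipv6 = frontdoor_backend_prefixes()
    print(f"{len(ipv4)} IPv4 prefixes, {len(ipv6)} IPv6 prefixes")
    for prefix in ipv4 + ipv6:
        print(prefix)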

On the other hand, traffic from Azure’s basic infrastructure services will originate from the virtualized host IP addresses 168.63.129.16 and 169.254.169.254.
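
A minimal sketch of the resulting filtering logic, assuming the allowed IPv4 prefixes come from the service tag file above, might look like the following. The sample IPv4 prefix is a placeholder, while the IPv6 range and the two infrastructure addresses are the ones cited in this article.

import ipaddress

# Allowed source ranges: Front Door backend prefixes plus the two
# Azure infrastructure host addresses mentioned above.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),    # placeholder; replace with AzureFrontDoor.Backend prefixes
    ipaddress.ip_network("2a01:111:2050::/44"), # Front Door backend IPv6 range cited above
]
INFRASTRUCTURE_ADDRESSES = {
    ipaddress.ip_address("168.63.129.16"),
    ipaddress.ip_address("169.254.169.254"),
}

def is_allowed_source(source_ip: str) -> bool:
    # Accept the two infrastructure addresses directly, otherwise require
    # membership in one of the allowed CIDR ranges.
    addr = ipaddress.ip_address(source_ip)
    if addr in INFRASTRUCTURE_ADDRESSES:
        return True
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed_source("168.63.129.16"))  # True: Azure infrastructure
print(is_allowed_source("203.0.113.7"))    # False: arbitrary internet address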



Wednesday, May 1, 2024

This is a continuation of a previous article on cloud resources, their IaC, shortcomings, and resolutions, with some more exciting challenges to talk about. When compared with the variety of load balancer options, the Azure Front Door aka AFD cited in the previous article often evokes misunderstanding about the term global. It is true that an instance of the Azure Front Door and CDN profile is not tied to a region and, in fact, appears with its location property set to global. But it really caters to edge load balancing. When clients connect from a variety of locations, AFD provides an entry point at whichever edge location is nearest to each client. In contrast, for a cross-region or global load balancer, traffic always enters Azure at the same endpoint, and routing decisions are based on what is closest to that endpoint. Consequently, clients from two different locations will be routed in the same exact way by an Azure cross-region or global load balancer, whereas AFD determines the nearest edge location: it does not matter where the call was made, only which Front Door edge location is closest, and this provides greater control over latency. Having called out the difference, the similarity is the use of anycast; AFD additionally uses split TCP, is a layer 7 technology, and is solely internet facing.

One of the challenges with an internet-facing resource is its addressability, and the best practice for overcoming the limitations of IP addresses is to always use DNS names. This brings DNS caching into consideration: when a name server or endpoint becomes unavailable, records cached for the duration of the time-to-live aka TTL keep routing traffic, albeit possibly to an unhealthy endpoint. A retry with re-resolution fixes this, and that too falls under the best practices. Other best practices involve determining whether the client needs a global or a regional solution, where the traffic enters Azure, whether it is latency sensitive, and what type of workload is involved: on-premises, cloud, or hybrid.
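
A sketch of the re-resolve-and-retry practice, using only the Python standard library, is shown below; the hostname is a placeholder, and the retry count and delay are arbitrary illustrative values.

import socket
import time

# Resolve the Front Door hostname fresh on each attempt instead of pinning
# a previously returned address, so a stale record pointing at an unhealthy
# endpoint is not reused indefinitely.

def resolve_fresh(hostname: str, port: int = 443, attempts: int = 3, delay: float = 2.0):
    last_error = None
    for _ in range(attempts):
        try:
            # getaddrinfo performs a new lookup and returns
            # (family, type, proto, canonname, sockaddr) tuples.
            infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
            return [info[4][0] for info in infos]
        except socket.gaierror as err:
            last_error = err
            time.sleep(delay)
    raise last_error

if __name__ == "__main__":
    # Placeholder endpoint name; replace with the actual Front Door endpoint.
    print(resolve_fresh("contoso.azurefd.net"))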

When Azure Front Door is the chosen option, the above plays a big role in connecting destination cloud resources as origins in an origin group. Cloud solution architects are often surprised when they put App Services with IP access restrictions behind the Front Door: no matter which rules they specify, even including the IPv4 and IPv6 addresses that the Front Door endpoint resolves to, they encounter a 403. AFD leverages 192 edge locations across 109 metro cities, a vast global network of points of presence (POPs) that brings applications closer to end users. With that many POP servers involved, all of them must be allowed in the IP access restrictions on the Azure App Services. It is also possible to allowlist based on virtual networks.

Lastly, securing the access restrictions on the App Services, when they involve IP ACLs, is not complete without a check that the X-Azure-FDID header carries the Front Door’s unique identifier, a globally unique identifier (GUID). This check prevents spoofing.
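
Azure App Service can enforce this header match directly within an access restriction rule, but for an origin that needs to enforce it at the application layer, a minimal sketch as WSGI middleware could look like the following; the expected GUID is a placeholder to be replaced with the Front Door profile's identifier.

# Application-side guard for the X-Azure-FDID check, written as WSGI middleware.
EXPECTED_FDID = "00000000-0000-0000-0000-000000000000"  # placeholder GUID

class FrontDoorIdMiddleware:
    def __init__(self, app, expected_fdid=EXPECTED_FDID):
        self.app = app
        self.expected_fdid = expected_fdid

    def __call__(self, environ, start_response):
        # WSGI exposes the X-Azure-FDID header as HTTP_X_AZURE_FDID.
        fdid = environ.get("HTTP_X_AZURE_FDID", "")
        if fdid != self.expected_fdid:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden: request did not arrive via the expected Front Door"]
        return self.app(environ, start_response)

# Usage: wrap an existing WSGI app, e.g. app = FrontDoorIdMiddleware(app, expected_fdid="<front door id>")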