Monday, May 13, 2024

This is a summary of the book titled “Practical Fairness: Achieving Fair and Secure Data Models,” written by Aileen Nielsen and published by O’Reilly in 2020. The author is a software engineer and attorney who examines various kinds of fairness and how both training data and algorithms can promote them. Machine learning developers and MLOps practitioners can benefit from this discussion, and, as an O’Reilly book, it comes with Python examples. Fairness in this book is about who gets what and how that is decided. Fair results start with fair data, and there must be an all-round effort to increase fairness at every stage of the process. Privacy and fairness are vulnerable to attacks. Product design should also be fair and make a place for fair models. Industry standards and regulations can demand fairness from the market in all relevant products.

Fairness in technology is crucial for ensuring that users receive fair treatment and that technology is used responsibly. It is essential for software developers to differentiate between equity and equality, and between security and privacy, and to avoid legal issues and consumer backlash. People tend to prefer equity over equality: equity implies that people should not receive worse treatment for belonging to a certain group. However, equity is not straightforward to achieve, as even well-designed privacy and fairness metrics can be undercut by human error.


To ensure fairness, machine learning models should start with fair data, which should be high-quality, suited to the model's intended purposes, and correctly labeled. Technology is neither inherently good nor bad, but data quality can suffer from biased sampling and incomplete data. A fairness mandate can stimulate ideas in mathematics, computer science, and law, but it cannot guarantee fairness in all respects.

Data models can be trained to increase fairness throughout their development process. Pre-processing is the most flexible and powerful option, offering the most opportunities to improve downstream metrics. Techniques to increase fairness include deleting parts of the data that could be exploited to discriminate against people, such as gender, or attaching weightings to different data about a person. However, individual fairness can lead to unfairness for a group, so techniques like learned fair representations and optimized pre-processing balance the two. Adversarial de-biasing involves having a second model analyze the output of the first, pushing it toward non-discriminatory outcomes.
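To make the weighting idea concrete, here is a minimal Java sketch of reweighing in the style of Kamiran and Calders, one common pre-processing technique in this family. The class and method names are illustrative, and it assumes a binary protected attribute g[] and a binary label y[]; each example is weighted by expected frequency over observed frequency for its (group, label) cell, so that group and label look statistically independent to a downstream learner.

import java.util.HashMap;
import java.util.Map;

class Reweigher {
    // Returns one weight per training example (hypothetical helper).
    static double[] weights(int[] g, int[] y) {
        int n = g.length;
        int[] gCount = new int[2], yCount = new int[2];
        Map<String, Integer> cell = new HashMap<>(); // counts per (group, label) pair
        for (int i = 0; i < n; i++) {
            gCount[g[i]]++;
            yCount[y[i]]++;
            cell.merge(g[i] + "," + y[i], 1, Integer::sum);
        }
        double[] w = new double[n];
        for (int i = 0; i < n; i++) {
            int joint = cell.get(g[i] + "," + y[i]);
            // expected frequency under independence divided by observed frequency
            w[i] = ((double) gCount[g[i]] * yCount[y[i]]) / ((double) n * joint);
        }
        return w;
    }
}

A learner that supports per-sample weights can then train on these weights instead of simply dropping the protected attribute.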


Sometimes, neither pre-processing data nor training a model for fairness is possible or allowable. Users can process the output of a model to make it fairer, providing transparency. To gauge whether a model generates fair outcomes, audit it using black-box auditing or white-box auditing. Interpretable models or black-box models that explain the basis of decisions can help avoid arbitrary decisions. Privacy and fairness are vulnerable to attacks, as modern technologies may undercut anonymization and new concepts emerge, such as the "right to be forgotten."

Privacy is an evolving legal norm, and machine learning models are vulnerable to attacks that aim to subvert their output. Attacks can be evasion attacks, where attackers feed the model data that forces it to err, or poisoning attacks, where attackers corrupt training data so the model malfunctions or classifies certain data in a desired way. Fair models should be integrated into fair products, satisfying customer expectations and ensuring that companies do not harm those who contributed data. Companies should also consider how their products could be misused and should not roll out updates too frequently. Even if a product works well, it can have fairness problems if it works better for some users than for others. The market will not force companies to deliver fairness in their products without the right laws. The EU's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are two major laws concerning data use, providing citizens with rights to data portability, erasure, and correction of personal data.

The GDPR and CCPA have prompted organizations to consider privacy, transparency, and accountability. The GDPR restricts purely algorithmic decisions that significantly affect EU citizens. In the US, laws regulating algorithm use have not passed. Some states, like California, set rules for chatbots, ensuring users know when they are not communicating with a human. As machine learning advances, technology and fairness laws will evolve.


Sunday, May 12, 2024

While large tech-sector businesses are developing proprietary warehouse robotics and drone fleet software, the rest of the industry looks to them for solutions. Unfortunately, these same businesses failed to deliver even cloud migration and modernization solutions for various industries, which were left to outsource strategy and implementation to vendors. While a boutique drone solution can be highly customizable, need-based, and highly effective, a Shopify-like platform for drone fleet management can bring best practices to the industry without significant development costs. Businesses recognize the value of drone formation software that expands their options for delivery-network aerial activities. From event exhibits, such as those for Lady Gaga's 2016 Super Bowl halftime show, to home-delivery automation, aerial transport of goods, and forestation activities, drone-related software automation will only grow. It is this absence of a managed service for drone formation that inspires the technical design and use case presented in this document. The B2B solution provided by a business that implements a rich and robust drone-formation-handling mechanism not only gets it right once but also enables smoother and easier investment for the businesses that subscribe to these services, without their having to reinvent the wheel. Competition in this field is little to non-existent, given the breakthroughs possible in wiring up drone activities across a variety of drones. When end users do not have to rewrite automations for changes in business purpose or for technological advances in drone units, they can stay laser-focused on their mission. With the use of the public cloud and a pay-as-you-go model, the ability to isolate and scale drone-formation-specific computing and data storage is unparalleled and not yet mainstream in this market. This gives a lot of wind to the sails of drone formation and planning services.

Saturday, May 11, 2024

 

This is a summary of the book titled “Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI,” written by Rodney Zemmel, Eric Lamarre, and Kate Smaje and published by Wiley in 2023. It is a relevant and authoritative playbook and reference for the transformation brought on by AI. For those who might question the source, McKinsey is a global leader, and leaders like Sundar Pichai, James Gorman, and Sheryl Sandberg have all worked there. Written for executives, this book’s recommendations apply to organizations of all sizes.

This book suggests that digital and AI transformations are here to stay. They will be a constant, never-ending change that can be embraced with a careful road map built on solid foundations. Organizations will require core digital talent in-house, and the company’s operating model must support rapid development. Such an in-house technology environment needs seven capabilities to support digital innovation across the organization. Digital leaders must establish a data architecture that facilitates the flow of data from source to use. Strategy forged by these leaders must drive customer or user adoption.

Digital and AI transformations are crucial for businesses to remain competitive. With 89% of leaders having undertaken some form of digital transformation, it is essential for leaders to extend their initiatives beyond tech to encompass all organizational capabilities. A successful transformation depends on foundational work, including a clear, detailed road map that outlines business domains, solutions, programs, and key performance indicators. A domain-based approach is recommended to right-size the transformation's scope, prioritizing domains based on value and feasibility. Leaders should avoid being distracted by low-value pet projects and focus on long-term capability building. Digital leaders should prioritize people and capabilities over tech solutions, focusing on long-term improvement and customer experience. The CEO should also take personal responsibility for the transformation, as every member of the executive team plays a role in driving it. By laying the groundwork and implementing a robust digital roadmap, businesses can achieve meaningful change and measurable results.

To support continuous transformation, organizations should keep core digital talent in-house, with 70% to 80% of their digital talent residing there. Many organizations have established a Talent Win Room (TWR) to focus on digital talent, including executive sponsors, tech recruiters, HR specialists, and part-time functional specialists. Digital leaders offer dual career paths and align compensation with employee value. The company's operating model must enable fast, flexible technology development, with agile pods being a key component. Leaders must deepen their understanding of agile beyond processes and rituals to ensure pods deliver their potential value. The three dominant operating-model designs are digital factory; product and platform (P&P); and enterprise-wide agile. The digital transformation must also include user-experience design capabilities to ensure solutions meet customer needs and wants. Agile pods should include design experts, and leaders should understand how customer experience links to value.

The enterprise's technology environment needs seven capabilities to support digital innovation across the organization. These capabilities include decoupled architecture, cloud, engineering practices, developer tools, reliable production environments, automated security, and machine learning operations (MLOps) automation. A distributed, decoupled architecture enables agility and scaling, while a cloud approach and data platform are essential for reducing costs. Engineering practices, such as automation of the software development lifecycle (SDLC), DevOps, and coding standards, are crucial for agility and quality. Developer tools should be provided in sandbox environments, and a reliable production environment must be secure and available. Automated security is essential for moving to the cloud, and machine learning operations (MLOps) automation can help exploit AI's potential. A data architecture facilitates the flow of data from source to use, and data products are essential for standardization, scaling, and speed. A data strategy sets out the organization's data requirements and plans for cleaning and delivering its data.

To maximize the benefits of digital and AI transformation, leaders must implement strategies to drive customer or user adoption. Adoption depends on two factors: user experience and change management. Strategies include adapting the business model, designing in replication, tracking progress, establishing digital trust, and creating a digital culture. A CEO or division head should ensure alignment across the business, plan a replication approach, and assetize solutions. Leaders should track progress through a five-stage process, assess risks, review digital trust policies, and ensure operational capabilities support digital trust. Leaders should also display attributes that support a digital culture, such as customer-centricity, collaboration, and urgency.

Previous summaries: BookSummary88.docx
Summarizing Software: SummarizerCodeSnippets.docx. 

Friday, May 10, 2024

This is a continuation of articles on IaC shortcomings and resolutions. One of the pitfalls of IaC modernization is the copy-and-paste mindset when transferring existing rules from one resource type to another. Take the case of dedicated deployments for resources like Azure Front Door and Azure Application Gateway. The default traffic corresponds to the “/*” rule. Clients expecting a response from a zonal resource, such as a virtual machine scale set, might expect it to come from a specific instance in a given region and zone, regardless of whether the resource in front of it is switched from Azure Application Gateway to Azure Front Door as a drop-in replacement between clients and hosted applications, without nesting one behind the other. Yet the two resources differ in how they handle default traffic.

1. Azure Application Gateway:

o Azure Application Gateway is a layer 7 load balancer that provides application-level routing and load balancing services.

o By default, when traffic is sent to the root path ("/") of the domain, Azure Application Gateway uses the "default backend pool" to handle the request.

o The default backend pool can be configured to point to a specific backend pool or virtual machine scale set. It acts as a fallback when no specific path-based routing rules match the request.

o If you have defined any path-based routing rules for other paths, they will take precedence over the default backend pool when matching requests.

2. Azure Front Door:

o Azure Front Door is a global, scalable entry point for web applications that provides path-based routing, SSL offloading, and other features.

o When traffic is sent to the root path ("/") of the domain, Azure Front Door uses the "default routing rule" to handle the request.

o The default routing rule in Azure Front Door allows you to define a set of backend pools and associated routing conditions for requests that don't match any specific path-based routing rules.

o You can configure the default routing rule to redirect or route traffic to a specific backend pool, providing flexibility in handling default requests.

In summary, both Azure Application Gateway and Azure Front Door offer path-based routing capabilities, but they handle default traffic sent to the root path differently. Azure Application Gateway uses a default backend pool as a fallback, while Azure Front Door uses a default routing rule to handle such requests.

Now let us consider the case where two application gateways, one per region, are placed as backends to a global Azure Front Door. Furthermore, let us say each application gateway routes to different backend pool members for “/images/” and “/videos/” respectively. If the traffic always went to the same application gateway, there would be predictability in who answers either route, but the default routing rule of “/*” in the Front Door means either application gateway could be targeted, and the response might come unexpectedly from another region. In this case, the proper configuration would define distinct routes to each application gateway, and these routes can carry route-path qualifiers for images and videos. In fact, it might even be better to consolidate all images behind one application gateway and all videos behind the other, if the latency differences can be tolerated. In this way, resolution to the target becomes predictable.
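To see why explicit routes restore predictability, here is a small, self-contained Java sketch of longest-prefix route matching with a “/*” fallback. The route table and gateway names (appgw-east, appgw-west) are hypothetical stand-ins for Front Door routes, not actual Azure APIs; the point is that explicit “/images/*” and “/videos/*” routes always beat the default rule, so the responding gateway becomes deterministic.

import java.util.LinkedHashMap;
import java.util.Map;

class RouteDemo {
    // Longest matching prefix wins; "/*" catches everything else.
    static String resolve(Map<String, String> routes, String path) {
        String best = null, target = null;
        for (Map.Entry<String, String> e : routes.entrySet()) {
            String prefix = e.getKey().replace("*", "");
            if (path.startsWith(prefix) && (best == null || prefix.length() > best.length())) {
                best = prefix;
                target = e.getValue();
            }
        }
        return target;
    }

    public static void main(String[] args) {
        Map<String, String> frontDoor = new LinkedHashMap<>();
        frontDoor.put("/images/*", "appgw-east"); // all images behind one gateway
        frontDoor.put("/videos/*", "appgw-west"); // all videos behind the other
        frontDoor.put("/*", "appgw-east");        // default pinned explicitly
        System.out.println(resolve(frontDoor, "/images/logo.png"));  // appgw-east
        System.out.println(resolve(frontDoor, "/videos/intro.mp4")); // appgw-west
        System.out.println(resolve(frontDoor, "/index.html"));       // appgw-east
    }
}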


Thursday, May 9, 2024

 There is a cake factory producing K-flavored cakes. Flavors are numbered from 1 to K. A cake should consist of exactly K layers, each of a different flavor. It is very important that every flavor appears in exactly one cake layer and that the flavor layers are ordered from 1 to K from bottom to top. Otherwise the cake doesn't taste good enough to be sold. For example, for K = 3, cake [1, 2, 3] is well-prepared and can be sold, whereas cakes [1, 3, 2] and [1, 2, 3, 3] are not well-prepared.

 

The factory has N cake forms arranged in a row, numbered from 1 to N. Initially, all forms are empty. At the beginning of the day a machine for producing cakes executes a sequence of M instructions (numbered from 0 to M−1) one by one. The J-th instruction adds a layer of flavor C[J] to all forms from A[J] to B[J], inclusive.

 

What is the number of well-prepared cakes after executing the sequence of M instructions?

 

Write a function:

 

class Solution { public int solution(int N, int K, int[] A, int[] B, int[] C); }

 

that, given two integers N and K and three arrays of integers A, B, C describing the sequence, returns the number of well-prepared cakes after executing the sequence of instructions.

 

Examples:

 

1. Given N = 5, K = 3, A = [1, 1, 4, 1, 4], B = [5, 2, 5, 5, 4] and C = [1, 2, 2, 3, 3].

 

There is a sequence of five instructions:

 

The 0th instruction puts a layer of flavor 1 in all forms from 1 to 5.

The 1st instruction puts a layer of flavor 2 in all forms from 1 to 2.

The 2nd instruction puts a layer of flavor 2 in all forms from 4 to 5.

The 3rd instruction puts a layer of flavor 3 in all forms from 1 to 5.

The 4th instruction puts a layer of flavor 3 in the 4th form.


 

The function should return 3. The cake in form 3 is missing flavor 2, and the cake in form 4 has an additional flavor-3 layer. The well-prepared cakes are in forms 1, 2 and 5.

 

2. Given N = 6, K = 4, A = [1, 2, 1, 1], B = [3, 3, 6, 6] and C = [1, 2, 3, 4],

 

the function should return 2. The 2nd and 3rd cakes are well-prepared.

 

3. Given N = 3, K = 2, A = [1, 3, 3, 1, 1], B = [2, 3, 3, 1, 2] and C = [1, 2, 1, 2, 2],

 

the function should return 1. Only the 2nd cake is well-prepared.

 

4. Given N = 5, K = 2, A = [1, 1, 2], B = [5, 5, 3] and C = [1, 2, 1]

 

the function should return 3. The 1st, 4th and 5th cakes are well-prepared.

 

Write an efficient algorithm for the following assumptions:

 

N is an integer within the range [1..100,000];

M is an integer within the range [1..200,000];

each element of arrays A, B is an integer within the range [1..N];

each element of array C is an integer within the range [1..K];

for every integer J, A[J] ≤ B[J];

arrays A, B and C have the same length, equal to M.

// Per form, track the first layer, the last layer (invalidated when a new
// layer does not strictly increase), and the number of layers. A cake is
// well-prepared when it has exactly K strictly increasing layers spanning
// first..first+K-1, which forces the sequence 1, 2, ..., K.
class Solution {
    public int solution(int N, int K, int[] A, int[] B, int[] C) {
        int[] first = new int[N]; // first flavor poured into each form (0 = still empty)
        int[] last = new int[N];  // most recent flavor, or Integer.MAX_VALUE once the form is spoiled
        int[] num = new int[N];   // number of layers poured into each form
        for (int i = 0; i < A.length; i++) {
            for (int current = A[i] - 1; current <= B[i] - 1; current++) {
                num[current]++;
                if (first[current] == 0) {
                    first[current] = C[i];
                    last[current] = C[i];
                    continue;
                }
                if (last[current] >= C[i]) {
                    // layers must strictly increase; a repeat or a decrease spoils the form
                    last[current] = Integer.MAX_VALUE;
                } else {
                    last[current] = C[i];
                }
            }
        }
        int count = 0;
        for (int i = 0; i < N; i++) {
            // K layers spanning exactly first..first+K-1 with no violations
            if (((last[i] - first[i]) == (K - 1)) && (num[i] == K)) {
                count++;
            }
        }
        return count;
    }
}

Example test:   (5, 3, [1, 1, 4, 1, 4], [5, 2, 5, 5, 4], [1, 2, 2, 3, 3])

OK

 

Example test:   (6, 4, [1, 2, 1, 1], [3, 3, 6, 6], [1, 2, 3, 4])

OK

 

Example test:   (3, 2, [1, 3, 3, 1, 1], [2, 3, 3, 1, 2], [1, 2, 1, 2, 2])

OK

 

Example test:   (5, 2, [1, 1, 2], [5, 5, 3], [1, 2, 1])

OK


Wednesday, May 8, 2024

 Image processing is a field of study that involves analyzing, manipulating, and enhancing digital images using various algorithms and techniques. These techniques can be broadly categorized into two main categories: image enhancement and image restoration.

1. Image Enhancement:

o Contrast Adjustment: Techniques like histogram equalization, contrast stretching, and gamma correction are used to enhance the dynamic range of an image.

o Filtering: Filtering techniques such as linear filters (e.g., mean, median, and Gaussian filters) and non-linear filters (e.g., edge-preserving filters) can be applied to suppress noise and enhance image details.

o Sharpening: Techniques like unsharp masking and high-pass filtering can enhance the sharpness and details of an image.

o Color Correction: Methods like color balance, color transfer, and color grading can adjust the color appearance of an image.

2. Image Restoration:

o Denoising: Various denoising algorithms, such as median filtering, wavelet-based methods, and total variation denoising, can be used to remove noise from images.

o Deblurring: Techniques like blind deconvolution and Wiener deconvolution are used to recover the original image from blurred versions.

o Super-resolution: Super-resolution techniques aim to enhance the resolution and details of low-resolution images by utilizing information from multiple images or prior knowledge about the image degradation process.

o Image Inpainting: Inpainting algorithms fill in missing or corrupted regions in an image by estimating the content from the surrounding areas.

Apart from these, there are several other advanced image processing techniques, such as image segmentation, object recognition, image registration, and feature extraction, which are widely used in fields like computer vision, medical imaging, and remote sensing.

Let’s review these in detail:

1. Image Filtering: This algorithm involves modifying the pixel values of an image based on a specific filter or kernel. Filters like Gaussian, median, and Sobel are used for tasks like smoothing, noise reduction, and edge detection.

2. Histogram Equalization: It is a technique used to enhance the contrast of an image by redistributing the pixel intensities. This algorithm is often used to improve the visibility of details in an image (a code sketch follows this list).

3. Image Segmentation: This algorithm partitions an image into multiple regions or segments based on specific criteria such as color, texture, or intensity. Segmentation is useful for tasks like object recognition, image understanding, and computer vision applications.

4. Edge Detection: Edge detection algorithms identify and highlight the boundaries between different regions in an image. Commonly used edge detection algorithms include Sobel, Canny, and Laplacian of Gaussian (LoG).

5. Image Compression: Image compression algorithms reduce the file size of an image by removing redundant or irrelevant information. Popular compression algorithms include JPEG, PNG, and GIF.

6. Morphological Operations: These algorithms are used for processing binary or grayscale images, mainly focusing on shape analysis and image enhancement. Operations such as dilation, erosion, opening, and closing are commonly used.

7. Feature Extraction: Feature extraction algorithms extract meaningful information or features from an image, which can be used for tasks like object recognition, pattern matching, and image classification. Techniques like Scale-Invariant Feature Transform (SIFT) and Histogram of Oriented Gradients (HOG) are commonly used.

8. Neural Networks: Deep learning algorithms, such as Convolutional Neural Networks (CNNs), are widely used for image processing tasks. CNNs can automatically learn and extract features from images, making them highly effective for tasks like object detection, image classification, and image generation.
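As a concrete example of one of these techniques, here is a minimal Java sketch of histogram equalization, assuming an 8-bit grayscale image supplied as an int[][] with values in 0..255. It applies the classic mapping 255 * (cdf(v) - cdfMin) / (total - cdfMin):

class HistogramEqualizer {
    static int[][] equalize(int[][] img) {
        int h = img.length, w = img[0].length, total = h * w;
        int[] hist = new int[256];
        for (int[] row : img)
            for (int p : row) hist[p]++; // intensity histogram
        int[] cdf = new int[256];
        int running = 0;
        for (int i = 0; i < 256; i++) { // cumulative distribution
            running += hist[i];
            cdf[i] = running;
        }
        int cdfMin = 0;
        for (int i = 0; i < 256; i++) {
            if (hist[i] > 0) { cdfMin = cdf[i]; break; } // first occupied bin
        }
        int[][] out = new int[h][w];
        for (int r = 0; r < h; r++)
            for (int c = 0; c < w; c++) // spread intensities across the full range
                out[r][c] = (int) Math.round(255.0 * (cdf[img[r][c]] - cdfMin)
                        / Math.max(1, total - cdfMin));
        return out;
    }
}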

As with most algorithms, data quality plays an immense role in the output of image processing. Image capture, continuous capture, lighting, and the best match among captures are some of the factors to weigh when comparing choices for the same image-processing task. The use of lighting for better results in high-contrast images is a significant area of research. For example, an embedded system was recently proposed that leverages image processing techniques for intelligent ambient lighting, focusing on reference-color-based illumination for object detection and positioning in robotic handling scenarios.

Key points from this research:

o Objective: To improve object detection accuracy and energy utilization.

o Methodology: The system uses LED-based lighting controlled via pulse-width modulation (PWM). Instead of external sensors, it calibrates lighting based on predetermined red, green, blue, and yellow (RGBY) reference objects.

o Color Choice: Yellow was identified as the optimal color for minimal illumination while achieving successful object detection.

o Illuminance Level: Object detection was demonstrated at an illuminance level of approximately 50 lx.

o Energy Savings: Energy savings were achieved based on ambient lighting conditions.

This study highlights the importance of color choice and intelligent lighting systems in computer vision applications.

Another topic involves improving energy efficiency of indoor lighting:

This proposes an intelligent lighting control system based on computer vision. It aims to reduce energy consumption and initial installation costs.

The system utilizes real-time video stream data from existing building surveillance systems instead of traditional sensors for perception.

By dynamically adjusting lighting based on visual cues, energy efficiency can be improved.

The book "Active Lighting and Its Application for Computer Vision" covers various active lighting techniques. Photometric stereo and structured light are some examples. Actively controlling lighting conditions helps to enhance the quality of captured images and improve subsequent processing.

Previous articles on data processing: DM.docx

 

Subarray Sum equals K 

Given an array of integers nums and an integer k, return the total number of subarrays whose sum equals to k. 

A subarray is a contiguous non-empty sequence of elements within an array. 

Example 1: 

Input: nums = [1,1,1], k = 2 

Output: 2 

Example 2: 

Input: nums = [1,2,3], k = 3 

Output: 2 

Constraints: 

1 <= nums.length <= 2 * 10^4

-1000 <= nums[i] <= 1000

-10^7 <= k <= 10^7

 

class Solution {
    public int subarraySum(int[] nums, int k) {
        if (nums == null || nums.length == 0) return 0; // no elements, no subarrays
        // prefix sums: sums[i] = nums[0] + ... + nums[i]
        int[] sums = new int[nums.length];
        int sum = 0;
        for (int i = 0; i < nums.length; i++) {
            sum += nums[i];
            sums[i] = sum;
        }
        int count = 0;
        // check every subarray nums[i..j] via prefix-sum differences
        for (int i = 0; i < nums.length; i++) {
            for (int j = i; j < nums.length; j++) {
                int current = nums[i] + (sums[j] - sums[i]);
                if (current == k) {
                    count += 1;
                }
            }
        }
        return count;
    }
}

 

[1,3], k=1 => 1 

[1,3], k=3 => 1 

[1,3], k=4 => 1 

[2,2], k=4 => 1 

[2,2], k=2 => 2 

[2,0,2], k=2 => 4 

[0,0,1], k=1=> 3 

[0,1,0], k=1=> 4 

[0,1,1], k=1=> 3 

[1,0,0], k=1=> 3 

[1,0,1], k=1=> 4 

[1,1,0], k=1=> 3 

[1,1,1], k=1=> 3 

[-1,0,1], k=0 => 2 

[-1,1,0], k=0 => 3 

[1,0,-1], k=0 => 2 

[1,-1,0], k=0 => 3 

[0,-1,1], k=0 => 3 

[0,1,-1], k=0 => 3 
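The nested loops above are O(n^2) in the array length. A standard linear-time alternative counts prefix sums with a hash map: for each running prefix sum s, the number of earlier prefixes equal to s - k is the number of subarrays ending at the current index that sum to k. A sketch, with an illustrative class name:

import java.util.HashMap;
import java.util.Map;

class SolutionLinear {
    public int subarraySum(int[] nums, int k) {
        Map<Integer, Integer> seen = new HashMap<>();
        seen.put(0, 1); // the empty prefix
        int sum = 0, count = 0;
        for (int n : nums) {
            sum += n;
            count += seen.getOrDefault(sum - k, 0); // subarrays ending here
            seen.merge(sum, 1, Integer::sum);       // record this prefix sum
        }
        return count;
    }
}

Running it against the hand-worked cases above produces the same counts as the brute-force version.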

 

 

 






Tuesday, May 7, 2024

This is a summary of the book titled “The BRAVE Leader,” written by David McQueen and published by Practical Inspiration Publishing in 2024. The author is a leadership coach who asserts that failing to model inclusivity has dire consequences for a leader, no matter how busy they might get. Leaders must empower all their people and create systems that serve a wide range of stakeholders’ needs. They can do so and more by following the Bold, Resilient, Agile, Visionary, and Ethical (BRAVE) leadership style.

The framework in this book tackles root causes and expands emerging possibilities. It helps drive innovation while sustaining strategic thinking. Inclusive practices can be embedded into the “DNA” of the organization. Honesty, transparency, and a culture that drives antifragility will help with systemic change.

Good leaders inspire and empower others in various contexts, including community projects, sports games, and faith groups. Leadership is not just about management; it also involves understanding an organization's norms, values, and external factors. Leaders need followers to buy into their vision and actively participate in the work. Inclusive leadership involves attracting, empowering, and supporting talented individuals to achieve common goals without marginalizing them. To achieve this, leaders must be "BRAVE" - bold, resilient, agile, visionary, and ethical. This requires systems thinking and the ability to sense emerging possibilities. To be a BRAVE leader, leaders should focus on creating a culture where team members can develop their own leadership qualities. They should resist the temptation to position themselves as an omnipotent "hero leader" and instead weigh boldness, resilience, agility, vision, and ethics in their decision-making approach.

BRAVE leaders use the "five W's" approach to problem-solving, which involves identifying the issue, the business area involved, the deadline, the most affected stakeholders, and what success would look like. This approach helps identify the root cause of the problem and address it.

Strategic thinking is crucial for driving innovation and thriving amid uncertainty. It involves examining complex problems, identifying potential issues and opportunities, and crafting action plans for achieving big-picture goals. Inclusive leadership is essential for organizations to avoid homogeneous decision-making and foster a culture that combines inclusivity and strategic thinking.

Embedding inclusive practices in the organization's DNA includes rethinking recruitment, hiring practices, performance management, offboarding, and mapping customer segments. This involves rethinking who the "best" applicants are, updating hiring practices, and fostering a culture that combines inclusivity and strategic thinking. By embracing diversity and fostering a culture of inclusivity, organizations can thrive in the face of uncertainty and drive innovation.

Inclusive practices should be incorporated into an organization's DNA, including recruitment, performance management, offboarding, mapping customer segments, and product development. Expand the scope of applicants and provide inclusive hiring training to team members. Be more inclusive in performance management by asking questions and viewing performance reviews as opportunities for improvement. Treat exit interviews as learning experiences and consider customer needs and characteristics. Ensure stakeholders feel included in product development, ensuring they feel part of a two-way relationship.

Self-leadership is essential for effective leadership, as it involves understanding oneself, identifying desired experiences, and intentionally guiding oneself towards them. BRAVE leaders model excellence, embracing self-discipline, consistency, active listening, and impulse control. They prioritize their mental and physical health, taking breaks and vacations to show team members that self-care is crucial. Leadership coaching can help develop BRAVE characteristics and identify interventions for long-term changes.

Inclusive leadership requires a positive organizational climate, where employees feel valued, respected, and included. Building a BRAVE organizational culture involves setting quantifiable goals and holding managers and leaders accountable for meeting them. Diverse teams benefit from a broader range of insights, perspectives, and talents, and problem-solving more effectively by approaching challenges from multiple angles.

By embracing courage and overcoming fear, leaders drive systemic change, leading to courageous decision-making, better management, and a systematic approach to leadership. By cultivating positive characteristics like generosity, transparency, and accountability, leaders can drive sustainable growth and foster a more inclusive environment.

Previous Book Summaries: BookSummary86.docx 

Summarizing Software: SummarizerCodeSnippets.docx: https://1drv.ms/w/s!Ashlm-Nw-wnWhOYMyD1A8aq_fBqraA?e=BGTkR7