While large technology companies are developing proprietary warehouse robotics and drone fleet software, the rest of the industry looks to them for solutions. Unfortunately, these same companies failed to deliver even on cloud migration and modernization for many industries, which were left to outsource both strategy and implementation to vendors. While a boutique drone solution can be highly customizable, need-based, and effective, a Shopify-like platform for drone fleet management can bring best practices to the industry without significant development costs. Businesses recognize the value of drone formation software that expands their options for aerial delivery-network activities. From event exhibits such as Lady Gaga's 2017 Super Bowl halftime show, to home-delivery automation, aerial transport of goods, and reforestation, drone-related software automation will only grow. It is this absence of a managed service for drone formation that inspires the technical design and use case presented in this document. A B2B provider that implements a rich and robust drone formation handling mechanism not only gets it right once but also makes investment smoother and easier for subscribing businesses, sparing them from reinventing the wheel. Competition in this field is little to non-existent, given the breakthroughs possible in wiring up drone activities across a variety of drone models. When end users do not have to rewrite automations for changes in business purpose or advances in drone hardware, they can stay laser-focused on their mission. With the public cloud and a pay-as-you-go model, the ability to isolate and scale formation-specific compute and data storage is unparalleled and not yet mainstream in this market. This gives the drone formation and planning services plenty of wind in their sails.
Sunday, May 12, 2024
Saturday, May 11, 2024
This is a summary of the book titled “Rewired: The McKinsey
Guide to Outcompeting in the Age of Digital and AI” written by Rodney Zemmel,
Eric Lamarre and Kate Smaje and published by Wiley, 2023. It is a relevant and
authoritative playbook and reference to the transformation brought on by AI.
For those who might question the source, McKinsey is a global leader and
leaders like Sundar Pichai, James Gorman, and Sheryl Sandberg have all worked
there. Written for executives, this book’s recommendations apply to
organizations of all sizes.
This book suggests that digital and AI transformations are here to stay. They will be a constant,
never-ending change, one best embraced with a careful roadmap built
on foundations. Organizations will require core digital talent in-house and the
company’s operating model must support rapid development. Such an in-house
technology environment needs seven capabilities to support digital innovation
across the organization. Digital leaders must establish a data architecture
that facilitates the flow of data from source to use. Strategy forged by these
leaders must drive customer or user adoption.
Digital and AI transformations are crucial for businesses to
remain competitive. With 89% of leaders having undertaken some form of digital
transformation, it is essential for leaders to extend their initiatives beyond
tech to encompass all organizational capabilities. A successful transformation
depends on foundational work, including a clear, detailed road map that
outlines business domains, solutions, programs, and key performance indicators.
A domain-based approach is recommended to right-size the transformation's
scope, prioritizing domains based on value and feasibility. Leaders should
avoid being distracted by low-value pet projects and focus on long-term
capability building. Digital leaders should prioritize people and capabilities
over tech solutions, focusing on long-term improvement and customer experience.
The CEO should also take personal responsibility for the transformation, as
every member of the executive team plays a role in driving it. By laying the
groundwork and implementing a robust digital roadmap, businesses can achieve
meaningful change and measurable results.
To support continuous transformation, organizations should
build core digital talent in-house, with 70% to 80% of their digital talent
residing there. Many organizations have established a Talent Win Room (TWR)
to focus on digital talent, including executive sponsors, tech recruiters, HR
specialists, and part-time functional specialists. Digital leaders offer dual
career paths and align compensation with employee value. The company's
operating model must enable fast, flexible technology development, with agile
pods being a key component. Leaders must deepen their understanding of agile
beyond processes and rituals to ensure pods deliver their potential value.
Three dominant operating model designs are digital factory, product, and
platform (P&P), and enterprise-wide agile. The digital transformation must
also include user experience design capabilities to ensure solutions meet
customer needs and wants. Agile pods should include design experts and leaders
who should understand how customer experience links to value.
The enterprise's technology environment needs seven
capabilities to support digital innovation across the organization. These
capabilities include decoupled architecture, cloud, engineering practices,
developer tools, reliable production environments, automated security, and
machine learning operations (MLOps) automation. A distributed, decoupled
architecture enables agility and scaling, while a cloud approach and data
platform are essential for reducing costs. Engineering practices, such as
automation of the software development lifecycle (SDLC), DevOps, and coding
standards, are crucial for agility and quality. Developer tools should be
provided in sandbox environments, and a reliable production environment must be
secure and available. Automated security is essential for moving to the cloud,
and machine learning operations (MLOps) automation can help exploit AI's
potential. A data architecture facilitates the flow of data from source to use,
and data products are essential for standardization, scaling, and speed. A data
strategy sets out the organization's data requirements and plans for cleaning
and delivering its data.
To maximize the benefits of digital and AI transformation,
leaders must implement strategies to drive customer or user adoption. Adoption
depends on two factors: user experience and change management. Strategies
include adapting the business model, designing in replication, tracking
progress, establishing digital trust, and creating a digital culture. A CEO or
division head should ensure alignment across the business, plan a replication
approach, and assetize solutions. Leaders should track progress through a
five-stage process, assess risks, review digital trust policies, and ensure
operational capabilities support digital trust. Leaders should also display
attributes that support a digital culture, such as customer-centricity,
collaboration, and urgency.
Previous summaries: BookSummary88.docx
Summarizing Software:
SummarizerCodeSnippets.docx.
Friday, May 10, 2024
This is a continuation of the articles on IaC shortcomings and resolutions. One pitfall of IaC modernization is a copy-and-paste mindset when transferring existing rules from one resource type to another. Take the case of dedicated deployments for resources like Azure Front Door and Azure Application Gateway. The default traffic corresponds to the “/*” rule. Clients expecting a response from a zonal resource, such as a virtual machine scale set, might expect it to come from a specific instance in a given region and zone, regardless of whether an Azure Application Gateway is swapped for an Azure Front Door as a drop-in replacement between clients and hosted applications, without nesting one behind the other. Yet the two resources differ in how they handle default traffic.
1. Azure Application Gateway:
o Azure Application Gateway is a layer 7 load balancer that provides application-level routing and load balancing services.
o By default, when traffic is sent to the root path ("/") of the domain, Azure Application Gateway uses the "default backend pool" to handle the request.
o The default backend pool can be configured to point to a specific backend pool or virtual machine scale set. It acts as a fallback when no specific path-based routing rules match the request.
o If you have defined any path-based routing rules for other paths, they will take precedence over the default backend pool when matching requests.
2. Azure Front Door:
o Azure Front Door is a global, scalable entry point for web applications that provides path-based routing, SSL offloading, and other features.
o When traffic is sent to the root path ("/") of the domain, Azure Front Door uses the "default routing rule" to handle the request.
o The default routing rule in Azure Front Door allows you to define a set of backend pools and associated routing conditions for requests that don't match any specific path-based routing rules.
o You can configure the default routing rule to redirect or route traffic to a specific backend pool, providing flexibility in handling default requests.
In summary, both Azure Application Gateway and Azure Front Door offer path-based routing capabilities, but they handle default traffic sent to the root path differently. Azure Application Gateway uses a default backend pool as a fallback, while Azure Front Door uses a default routing rule to handle such requests.
Now let us consider the case where two application gateways, one per region, are placed as backends to a global Azure Front Door, and each application gateway routes “/images/” and “/videos/” to different backend pool members. If traffic always went to the same application gateway, there would be predictability in which backend answers either route, but the default “/*” routing rule in Front Door means either application gateway could be targeted, and the response might come unexpectedly from another region. The proper configuration creates distinct routes to each application gateway, and these routes can carry route path qualifiers for images and videos. It might even be better to consolidate all images behind one application gateway and all videos behind the other, if the latency differences can be tolerated. In this way, resolution to the target becomes predictable.
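The predictable setup described here can be illustrated with a small longest-prefix route matcher. This is a simulation sketch, not an Azure SDK call; the route prefixes and backend names are assumptions chosen to mirror the scenario above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: resolve a request path to a backend by longest-prefix
// match, the way distinct Front Door routes for /images/ and /videos/ make
// the answering application gateway predictable, with "/" as the "/*" fallback.
public class RouteResolver {
    private final Map<String, String> routes = new LinkedHashMap<>();

    public void addRoute(String prefix, String backend) {
        routes.put(prefix, backend);
    }

    // The longest matching prefix wins; the "/" entry acts as the default rule.
    public String resolve(String path) {
        String best = null;
        int bestLen = -1;
        for (Map.Entry<String, String> e : routes.entrySet()) {
            if (path.startsWith(e.getKey()) && e.getKey().length() > bestLen) {
                best = e.getValue();
                bestLen = e.getKey().length();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        RouteResolver fd = new RouteResolver();
        fd.addRoute("/", "either-region-appgw");   // default "/*" rule: unpredictable region
        fd.addRoute("/images/", "appgw-region-1"); // all images behind one gateway
        fd.addRoute("/videos/", "appgw-region-2"); // all videos behind the other
        System.out.println(fd.resolve("/images/logo.png")); // appgw-region-1
        System.out.println(fd.resolve("/videos/intro.mp4")); // appgw-region-2
        System.out.println(fd.resolve("/index.html"));       // either-region-appgw
    }
}
```

With explicit routes in place, only unmatched paths fall through to the default rule, which is exactly the property the consolidated configuration relies on.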
Thursday, May 9, 2024
There is a cake factory producing K-flavored cakes. Flavors are numbered from 1 to K. A cake should consist of exactly K layers, each of a different flavor. It is very important that every flavor appears in exactly one cake layer and that the flavor layers are ordered from 1 to K from bottom to top. Otherwise the cake doesn't taste good enough to be sold. For example, for K = 3, cake [1, 2, 3] is well-prepared and can be sold, whereas cakes [1, 3, 2] and [1, 2, 3, 3] are not well-prepared.
The factory has N cake forms arranged in a row, numbered from 1 to N. Initially, all forms are empty. At the beginning of the day a machine for producing cakes executes a sequence of M instructions (numbered from 0 to M−1) one by one. The J-th instruction adds a layer of flavor C[J] to all forms from A[J] to B[J], inclusive.
What is the number of well-prepared cakes after executing the sequence of M instructions?
Write a function:
class Solution { public int solution(int N, int K, int[] A, int[] B, int[] C); }
that, given two integers N and K and three arrays of integers A, B, C describing the sequence, returns the number of well-prepared cakes after executing the sequence of instructions.
Examples:
1. Given N = 5, K = 3, A = [1, 1, 4, 1, 4], B = [5, 2, 5, 5, 4] and C = [1, 2, 2, 3, 3].
There is a sequence of five instructions:
The 0th instruction puts a layer of flavor 1 in all forms from 1 to 5.
The 1st instruction puts a layer of flavor 2 in all forms from 1 to 2.
The 2nd instruction puts a layer of flavor 2 in all forms from 4 to 5.
The 3rd instruction puts a layer of flavor 3 in all forms from 1 to 5.
The 4th instruction puts a layer of flavor 3 in the 4th form.
The picture describes the first example test.
The function should return 3. The cake in form 3 is missing flavor 2, and the cake in form 4 has an additional layer of flavor 3. The well-prepared cakes are in forms 1, 2 and 5.
2. Given N = 6, K = 4, A = [1, 2, 1, 1], B = [3, 3, 6, 6] and C = [1, 2, 3, 4],
the function should return 2. The 2nd and 3rd cakes are well-prepared.
3. Given N = 3, K = 2, A = [1, 3, 3, 1, 1], B = [2, 3, 3, 1, 2] and C = [1, 2, 1, 2, 2],
the function should return 1. Only the 2nd cake is well-prepared.
4. Given N = 5, K = 2, A = [1, 1, 2], B = [5, 5, 3] and C = [1, 2, 1]
the function should return 3. The 1st, 4th and 5th cakes are well-prepared.
Write an efficient algorithm for the following assumptions:
N is an integer within the range [1..100,000];
M is an integer within the range [1..200,000];
each element of arrays A, B is an integer within the range [1..N];
each element of array C is an integer within the range [1..K];
for every integer J, A[J] ≤ B[J];
arrays A, B and C have the same length, equal to M.
class Solution {
    public int solution(int N, int K, int[] A, int[] B, int[] C) {
        int[] first = new int[N]; // flavor of the bottom layer in each form (0 = empty)
        int[] last = new int[N];  // flavor of the top layer, or MAX_VALUE once the order breaks
        int[] num = new int[N];   // number of layers poured into each form
        for (int i = 0; i < A.length; i++) {
            for (int current = A[i] - 1; current <= B[i] - 1; current++) {
                num[current]++;
                if (first[current] == 0) {
                    first[current] = C[i];
                    last[current] = C[i];
                    continue;
                }
                if (last[current] >= C[i]) {
                    // a repeated or out-of-order flavor invalidates this form
                    last[current] = Integer.MAX_VALUE;
                } else {
                    last[current] = C[i];
                }
            }
        }
        int count = 0;
        for (int i = 0; i < N; i++) {
            // K strictly increasing layers spanning exactly K - 1 flavors
            // forces first == 1 and last == K
            if (((last[i] - first[i]) == (K - 1)) && (num[i] == K)) {
                count++;
            }
        }
        return count;
    }
}
Example test: (5, 3, [1, 1, 4, 1, 4], [5, 2, 5, 5, 4], [1, 2, 2, 3, 3])
OK
Example test: (6, 4, [1, 2, 1, 1], [3, 3, 6, 6], [1, 2, 3, 4])
OK
Example test: (3, 2, [1, 3, 3, 1, 1], [2, 3, 3, 1, 2], [1, 2, 1, 2, 2])
OK
Example test: (5, 2, [1, 1, 2], [5, 5, 3], [1, 2, 1])
OK
Wednesday, May 8, 2024
Image processing is a field of study that involves analyzing, manipulating, and enhancing digital images using various algorithms and techniques. These techniques can be broadly categorized into two main categories: image enhancement and image restoration.
1. Image Enhancement:
o Contrast Adjustment: Techniques like histogram equalization, contrast stretching, and gamma correction are used to enhance the dynamic range of an image.
o Filtering: Filtering techniques such as linear filters (e.g., mean, median, and Gaussian filters) and non-linear filters (e.g., edge-preserving filters) can be applied to suppress noise and enhance image details.
o Sharpening: Techniques like unsharp masking and high-pass filtering can enhance the sharpness and details of an image.
o Color Correction: Methods like color balance, color transfer, and color grading can adjust the color appearance of an image.
2. Image Restoration:
o Denoising: Various denoising algorithms, such as median filtering, wavelet-based methods, and total variation denoising, can be used to remove noise from images.
o Deblurring: Techniques like blind deconvolution and Wiener deconvolution are used to recover the original image from blurred versions.
o Super-resolution: Super-resolution techniques aim to enhance the resolution and details of low-resolution images by utilizing information from multiple images or prior knowledge about the image degradation process.
o Image Inpainting: Inpainting algorithms fill in missing or corrupted regions in an image by estimating the content from the surrounding areas.
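As a concrete instance of the denoising techniques listed under restoration, here is a minimal median-filter sketch on a 1-D signal; a 2-D image uses the same idea with a 3×3 window. The class and method names are illustrative, not a library API.

```java
import java.util.Arrays;

// Minimal sketch of median-filter denoising on a 1-D signal (window size 3).
// Each interior sample is replaced by the median of its 3-sample neighborhood,
// which removes isolated noise spikes while preserving edges.
public class MedianFilter {
    public static int[] filter(int[] signal) {
        int[] out = signal.clone(); // edges are copied through unchanged
        for (int i = 1; i < signal.length - 1; i++) {
            int[] window = { signal[i - 1], signal[i], signal[i + 1] };
            Arrays.sort(window);
            out[i] = window[1]; // the median replaces the center sample
        }
        return out;
    }

    public static void main(String[] args) {
        // A flat signal with one noise spike: the spike is removed.
        int[] noisy = { 10, 10, 90, 10, 10 };
        System.out.println(Arrays.toString(filter(noisy))); // [10, 10, 10, 10, 10]
    }
}
```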
Apart from these, there are several other advanced image processing techniques, such as image segmentation, object recognition, image registration, and feature extraction, which are widely used in fields like computer vision, medical imaging, and remote sensing.
Let’s review these in detail:
1. Image Filtering: This algorithm involves modifying the pixel values of an image based on a specific filter or kernel. Filters like Gaussian, median, and Sobel are used for tasks like smoothing, noise reduction, and edge detection.
2. Histogram Equalization: It is a technique used to enhance the contrast of an image by redistributing the pixel intensities. This algorithm is often used to improve the visibility of details in an image.
3. Image Segmentation: This algorithm partitions an image into multiple regions or segments based on specific criteria such as color, texture, or intensity. Segmentation is useful for tasks like object recognition, image understanding, and computer vision applications.
4. Edge Detection: Edge detection algorithms identify and highlight the boundaries between different regions in an image. Commonly used edge detection algorithms include Sobel, Canny, and Laplacian of Gaussian (LoG).
5. Image Compression: Image compression algorithms reduce the file size of an image by removing redundant or irrelevant information. Popular compression algorithms include JPEG, PNG, and GIF.
6. Morphological Operations: These algorithms are used for processing binary or grayscale images, mainly focusing on shape analysis and image enhancement. Operations such as dilation, erosion, opening, and closing are commonly used.
7. Feature Extraction: Feature extraction algorithms extract meaningful information or features from an image, which can be used for tasks like object recognition, pattern matching, and image classification. Techniques like Scale-Invariant Feature Transform (SIFT) and Histogram of Oriented Gradients (HOG) are commonly used.
8. Neural Networks: Deep learning algorithms, such as Convolutional Neural Networks (CNNs), are widely used for image processing tasks. CNNs can automatically learn and extract features from images, making them highly effective for tasks like object detection, image classification, and image generation.
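Histogram equalization (item 2 above) can be made concrete with a short sketch. The 8-bit range and the CDF-based remapping follow the textbook formulation; the class name and sample pixel values are illustrative.

```java
// Sketch of histogram equalization for an 8-bit grayscale image, given as a
// flat array of pixel intensities in 0..255: build the histogram, take its
// cumulative distribution function (CDF), and remap intensities so they
// spread across the full dynamic range.
public class HistogramEqualization {
    public static int[] equalize(int[] pixels) {
        int[] hist = new int[256];
        for (int p : pixels) hist[p]++;

        // Cumulative distribution function of the histogram.
        int[] cdf = new int[256];
        int running = 0;
        for (int v = 0; v < 256; v++) {
            running += hist[v];
            cdf[v] = running;
        }

        // Smallest non-zero CDF value, used as the normalization offset.
        int cdfMin = 0;
        for (int v = 0; v < 256; v++) {
            if (cdf[v] > 0) { cdfMin = cdf[v]; break; }
        }

        int n = pixels.length;
        if (n == cdfMin) return pixels.clone(); // flat image: nothing to spread

        // Remap each pixel onto 0..255.
        int[] out = new int[n];
        for (int i = 0; i < n; i++) {
            out[i] = (int) Math.round(255.0 * (cdf[pixels[i]] - cdfMin) / (n - cdfMin));
        }
        return out;
    }

    public static void main(String[] args) {
        int[] pixels = { 0, 64, 128, 255 };
        System.out.println(java.util.Arrays.toString(equalize(pixels))); // [0, 85, 170, 255]
    }
}
```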
As with most algorithms, the quality of data plays an immense role in the output of image processing. Image capture, continuous capture, lighting, and the best match among captures are some of the factors to weigh when comparing choices for the same image processing task. The use of lighting for better results in high-contrast images is a significant area of research. For example,
• Recently, an embedded system was proposed that leverages image processing techniques for intelligent ambient lighting. The focus is on reference-color-based illumination for object detection and positioning within robotic handling scenarios.
• Key points from this research:
o Objective: To improve object detection accuracy and energy utilization.
o Methodology: The system uses LED-based lighting controlled via pulse-width modulation (PWM). Instead of external sensors, it calibrates lighting based on predetermined red, green, blue, and yellow (RGBY) reference objects.
o Color Choice: Yellow was identified as the optimal color for minimal illumination while achieving successful object detection.
o Illuminance Level: Object detection was demonstrated at an illuminance level of approximately 50 lx.
o Energy Savings: Energy savings were achieved based on ambient lighting conditions.
• This study highlights the importance of color choice and intelligent lighting systems in computer vision applications.
Another topic involves improving energy efficiency of indoor lighting:
• This proposes an intelligent lighting control system based on computer vision. It aims to reduce energy consumption and initial installation costs.
• The system utilizes real-time video stream data from existing building surveillance systems instead of traditional sensors for perception.
• By dynamically adjusting lighting based on visual cues, energy efficiency can be improved.
The book "Active Lighting and Its Application for Computer Vision" covers various active lighting techniques. Photometric stereo and structured light are some examples. Actively controlling lighting conditions helps to enhance the quality of captured images and improve subsequent processing.
Previous articles on data processing: DM.docx
Subarray Sum equals K
Given an array of integers nums and an integer k, return the total number of subarrays whose sum equals to k.
A subarray is a contiguous non-empty sequence of elements within an array.
Example 1:
Input: nums = [1,1,1], k = 2
Output: 2
Example 2:
Input: nums = [1,2,3], k = 3
Output: 2
Constraints:
• 1 <= nums.length <= 2 * 10^4
• -1000 <= nums[i] <= 1000
• -10^7 <= k <= 10^7
class Solution {
    public int subarraySum(int[] nums, int k) {
        if (nums == null || nums.length == 0) return 0; // no subarrays to count
        // prefix sums: sums[i] = nums[0] + ... + nums[i]
        int[] sums = new int[nums.length];
        int sum = 0;
        for (int i = 0; i < nums.length; i++) {
            sum += nums[i];
            sums[i] = sum;
        }
        // check every subarray nums[i..j] against k
        int count = 0;
        for (int i = 0; i < nums.length; i++) {
            for (int j = i; j < nums.length; j++) {
                int current = sums[j] - (i > 0 ? sums[i - 1] : 0); // sum of nums[i..j]
                if (current == k) {
                    count++;
                }
            }
        }
        return count;
    }
}
[1,3], k=1 => 1
[1,3], k=3 => 1
[1,3], k=4 => 1
[2,2], k=4 => 1
[2,2], k=2 => 2
[2,0,2], k=2 => 4
[0,0,1], k=1=> 3
[0,1,0], k=1=> 4
[0,1,1], k=1=> 3
[1,0,0], k=1=> 3
[1,0,1], k=1=> 4
[1,1,0], k=1=> 3
[1,1,1], k=1=> 3
[-1,0,1], k=0 => 2
[-1,1,0], k=0 => 3
[1,0,-1], k=0 => 2
[1,-1,0], k=0 => 3
[0,-1,1], k=0 => 3
[0,1,-1], k=0 => 3
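The brute-force solution above runs in O(n^2). A standard linear-time alternative uses a running prefix sum plus a hash map counting previously seen prefix sums: whenever the current prefix sum minus k has been seen before, each earlier occurrence marks the start of a subarray summing to k. The class name SolutionLinear below is just for illustration.

```java
import java.util.HashMap;
import java.util.Map;

class SolutionLinear {
    public int subarraySum(int[] nums, int k) {
        // prefixCount maps a running prefix sum to how many times it has occurred
        Map<Integer, Integer> prefixCount = new HashMap<>();
        prefixCount.put(0, 1); // the empty prefix
        int sum = 0, count = 0;
        for (int num : nums) {
            sum += num;
            // each earlier prefix equal to (sum - k) closes a subarray summing to k
            count += prefixCount.getOrDefault(sum - k, 0);
            prefixCount.merge(sum, 1, Integer::sum);
        }
        return count;
    }
}
```

This handles negative numbers and zeros correctly (unlike sliding-window approaches), which matters given the constraint -1000 <= nums[i] <= 1000.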
Tuesday, May 7, 2024
This is a summary of the book titled “The BRAVE Leader” written by David McQueen and published by Practical Inspirational Publishing in 2024. The author is a leadership coach who asserts that failing to model inclusivity has dire consequences for a leader no matter how busy they might get. They must empower all their people and create systems that serve a wide range of stakeholders’ needs. They can do so and more by following the Bold, Resilient, Agile, Visionary, and Ethical Leadership style.
The framework in this book tackles root causes and expands emerging possibilities. It helps drive innovation while fostering strategic thinking. Inclusive practices can be embedded into the "DNA" of the organization. Honesty, transparency, and a culture of antifragility will help drive systemic change.
Good leaders inspire and empower others in various contexts, including community projects, sports teams, and faith groups. Leadership is not just about management; it also involves understanding an organization's norms, values, and external factors. Leaders need followers to buy into their vision and actively participate in the work. Inclusive leadership involves attracting, empowering, and supporting talented individuals to achieve common goals without marginalizing them. To achieve this, leaders must be "BRAVE" - bold, resilient, agile, visionary, and ethical - which requires systems thinking and the ability to sense emerging possibilities. BRAVE leaders focus on creating a culture where team members can develop their own leadership qualities, resist the temptation to position themselves as an omnipotent "hero leader," and weigh boldness, resilience, agility, vision, and ethics in their decision-making.
BRAVE leaders use the "five W's" approach to problem-solving: identifying the issue, identifying the business area, determining the deadline, identifying the most affected stakeholders, and defining what success would look like. This approach helps in identifying the root cause of the problem and addressing it.
Strategic thinking is crucial for driving innovation and thriving amid uncertainty. It involves examining complex problems, identifying potential issues and opportunities, and crafting action plans for achieving big-picture goals. Inclusive leadership is essential for organizations to avoid homogeneous decision-making and foster a culture that combines inclusivity and strategic thinking.
Implementing inclusive practices into the organization's DNA includes rethinking recruitment, hiring practices, performance management, offboarding, and mapping customer segments. This involves rethinking the "best" applicants, updating hiring practices, and fostering a culture that combines inclusivity and strategic thinking. By embracing diversity and fostering a culture of inclusivity, organizations can thrive in the face of uncertainty and drive innovation.
Inclusive practices should be incorporated into an organization's DNA, including recruitment, performance management, offboarding, mapping customer segments, and product development. Organizations should expand the pool of applicants and provide inclusive-hiring training to team members, make performance management more inclusive by asking questions and treating reviews as opportunities for improvement, treat exit interviews as learning experiences, and consider customer needs and characteristics. In product development, stakeholders should feel included and part of a two-way relationship.
Self-leadership is essential for effective leadership, as it involves understanding oneself, identifying desired experiences, and intentionally guiding oneself towards them. BRAVE leaders model excellence, embracing self-discipline, consistency, active listening, and impulse control. They prioritize their mental and physical health, taking breaks and vacations to show team members that self-care is crucial. Leadership coaching can help develop BRAVE characteristics and identify interventions for long-term changes.
Inclusive leadership requires a positive organizational climate, where employees feel valued, respected, and included. Building a BRAVE organizational culture involves setting quantifiable goals and holding managers and leaders accountable for meeting them. Diverse teams benefit from a broader range of insights, perspectives, and talents, and problem-solving more effectively by approaching challenges from multiple angles.
By embracing courage and overcoming fear, leaders drive systemic change, leading to courageous decision-making, better management, and a systematic approach to leadership. By cultivating positive characteristics like generosity, transparency, and accountability, leaders can drive sustainable growth and foster a more inclusive environment.
Previous Book Summaries: BookSummary86.docx
Summarizing Software: SummarizerCodeSnippets.docx: https://1drv.ms/w/s!Ashlm-Nw-wnWhOYMyD1A8aq_fBqraA?e=BGTkR7
Monday, May 6, 2024
Data mining algorithms are powerful tools used in various fields to analyze and extract valuable insights from large datasets. These algorithms are designed to automatically discover patterns, relationships, and trends in data, enabling organizations and researchers to make informed decisions.
Here are some commonly used data mining algorithms:
1. Decision Trees: Decision trees are tree-like structures that represent decisions and their possible consequences. They are used to classify data based on a set of rules derived from the features of the dataset.
2. Random Forests: Random forests are an ensemble learning method that combines multiple decision trees to improve accuracy and reduce overfitting. Each tree in the forest is trained on a random subset of the data.
3. Naive Bayes: Naive Bayes is a probabilistic classifier based on Bayes' theorem. It assumes that the features are independent of each other, which simplifies the calculations. Naive Bayes is commonly used for text classification and spam filtering.
4. Support Vector Machines (SVM): SVM is a supervised learning model used for classification and regression analysis. It separates data points into different classes by finding an optimal hyperplane that maximizes the margin between the classes.
5. K-means Clustering: K-means is an unsupervised learning algorithm used for clustering analysis. It partitions data into K clusters based on their similarity, where K is a predefined number. It aims to minimize the intra-cluster variance and maximize the inter-cluster variance.
6. Neural Networks: Neural networks are artificial intelligence models inspired by the human brain's structure and function. They consist of interconnected nodes (neurons) organized in layers. Neural networks can be trained to recognize patterns, make predictions, and classify data.
7. Deep Learning: Deep learning is a subset of neural networks that involves training models with multiple layers. It has achieved significant breakthroughs in image recognition, natural language processing, and other complex tasks.
8. Association Rule Mining: Association rule mining is used to discover relationships and dependencies between items in a dataset. It identifies frequent itemsets and generates rules based on their co-occurrence.
9. Reinforcement Learning: Reinforcement learning is an AI technique where an agent learns to make optimal decisions by interacting with an environment. The agent receives feedback in the form of rewards or penalties, which guide its learning process.
10. Genetic Algorithms: Genetic algorithms are optimization techniques inspired by the process of natural selection. They use principles of genetics and evolution to iteratively search for the best solution in a large solution space.
These algorithms are just a small sample of the vast array of techniques available in data mining and artificial intelligence. Each algorithm has its strengths and weaknesses, and the choice depends on the specific problem and dataset at hand.
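As a concrete illustration of one algorithm from the list, here is a minimal one-dimensional k-means sketch (item 5): it alternates an assignment step (each point joins its nearest centroid) with an update step (each centroid moves to the mean of its points). This is illustrative only; production work would use an established library and k-means++ style initialization, and the class and method names are hypothetical.

```java
// Minimal 1-D k-means sketch: assignment step + update step, repeated.
class KMeans1D {
    static double[] fit(double[] data, int k, int iters) {
        // deterministic init for reproducibility: evenly spaced samples
        double[] centroids = new double[k];
        for (int c = 0; c < k; c++)
            centroids[c] = data[c * (data.length - 1) / Math.max(1, k - 1)];
        int[] assign = new int[data.length];
        for (int it = 0; it < iters; it++) {
            // assignment step: each point joins its nearest centroid
            for (int i = 0; i < data.length; i++) {
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (Math.abs(data[i] - centroids[c]) < Math.abs(data[i] - centroids[best]))
                        best = c;
                assign[i] = best;
            }
            // update step: move each centroid to the mean of its assigned points
            double[] sum = new double[k];
            int[] cnt = new int[k];
            for (int i = 0; i < data.length; i++) {
                sum[assign[i]] += data[i];
                cnt[assign[i]]++;
            }
            for (int c = 0; c < k; c++)
                if (cnt[c] > 0) centroids[c] = sum[c] / cnt[c];
        }
        return centroids;
    }
}
```

On well-separated data such as {1, 2, 11, 12} with k = 2, the centroids settle at the two cluster means (1.5 and 11.5), minimizing intra-cluster variance as described above.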
Reference: https://1drv.ms/w/s!Ashlm-Nw-wnWxBFlhCtfFkoVDRDa?e=aVT37e