Friday, December 26, 2025

 Solving Jumble:

# Find a common word that is a jumble of the letters RGYUEN

Solution:

import java.util.*;

import java.lang.*;

import java.io.*;

/* Name of the class has to be "Main" only if the class is public. */

class Ideone

{

 public static void main (String[] args) throws java.lang.Exception

 {

  String a = "RGYUEN";

  StringBuilder b = new StringBuilder();

  List<String> permutations = new ArrayList<String>();

  boolean[] used = new boolean[a.length()];

  permute(a, b, used, 0, a.length()-1, permutations);

  for (int i = 0; i < permutations.size(); i++) {

   if (isValid(permutations.get(i))) {

    System.out.println(permutations.get(i));

   }

  }

 }

 public static void permute(String a, StringBuilder b, boolean[] used, int start, int end, List<String> permutations) {

  if (b.length() == end - start + 1) {

   permutations.add(b.toString());

   return;

  }

  for (int i = start; i <= end; i++) {

   if (used[i]) continue;

   used[i] = true;

   b.append(a.charAt(i));

   permute(a, b, used, start, end, permutations);

   b.deleteCharAt(b.length()-1);

   used[i] = false;

  }

 }

 public static boolean isValid(String s) {

  // Heuristic filter on letter positions for candidate words.

  return s.charAt(0) == 'G' || s.charAt(2) == 'Y' || s.charAt(5) == 'R';

 }

}

Answer: Gurney

#codingexercise: CodingExercise-12-26-2025.docx 

#booksummary: BookSummary401.docx

Thursday, December 25, 2025

 This is a continuation of the article from the day before yesterday on benchmark cases for aerial drone image analytics:

Case 3: Types of hazards detected (a labeling sketch follows the list):

• Cars Parked in Bicycle Lanes: Vehicles obstructing dedicated bike lanes can force cyclists into traffic.

• Pedestrians Crossing Intersections: Pedestrians may cross at intersections unpredictably, sometimes against traffic signals.

• Car Crossings at Intersections: Vehicles turning or crossing at intersections can pose risks to cyclists and pedestrians.

• Improperly Marked Crosswalks: Lack of clear signage or faded markings can lead to confusion for pedestrians and drivers.

• Construction Zones: Temporary constructions can create obstacles and require detours, increasing risks.

• Poor Visibility Areas: Curves or poorly lit areas can reduce visibility for both cyclists and drivers.

• Cyclists Riding on Sidewalks: In some areas, cyclists riding on sidewalks can surprise pedestrians.

• Vehicle Door Zones: Cyclists are at risk from opened car doors when riding near parked vehicles.

• Inadequate Lighting: Poorly lit areas can make it difficult for drivers to see cyclists and pedestrians.

• Obstructed Views: Trees, signs, or buildings may block sightlines at intersections.

• Weather Conditions: Rain, snow, or ice can affect road conditions and visibility.

• Bicycle Infrastructure: Inadequate or poorly designed bike paths can create hazardous situations.
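
A minimal sketch of how this taxonomy could be encoded for the analytics layer, assuming each frame’s detections carry free-form tags; the category names and tag sets below are hypothetical:

# Hypothetical hazard taxonomy: each category is matched when all of its required tags
# appear among the detected-object tags of a frame.
HAZARD_LABELS = {
    "car_in_bike_lane": {"car", "bicycle lane"},
    "pedestrian_at_intersection": {"pedestrian", "intersection"},
    "vehicle_crossing": {"car", "intersection"},
    "construction_zone": {"construction", "barrier"},
    "door_zone": {"parked car", "cyclist"},
}

def hazards_in_frame(detected_tags):
    """Return hazard categories whose required tags all appear in the frame."""
    tags = set(detected_tags)
    return [name for name, required in HAZARD_LABELS.items() if required <= tags]

# Example: a frame whose detections were tagged with these labels.
print(hazards_in_frame(["car", "bicycle lane", "pedestrian", "intersection"]))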



Wednesday, December 24, 2025

 This is a summary of a book titled “Buyable: your guide to building a self-managing, fast-growing, and a high-profit business” written and self-published by Steve Preda in 2021. The ultimate dream for many founders is not just to build a thriving business, but to one day profitably cash out—reaping the rewards of years of hard work. Yet, as Steve Preda reveals in his book, this dream is elusive for most. The reality is stark: only a small fraction of business owners manage to sell their companies for the value they desire. The reason? Too many entrepreneurs become so absorbed in the daily grind and the relentless pursuit of profit that they neglect to plan for the eventual sale of their business.

Preda prescribes a set of management blueprints: to maximize the value of your business and keep your options open for the future, you must build a “buyable” company. A buyable business is not just profitable and growing—it is structured, predictable, and operates with processes that can be replicated by others. Such a company is attractive to buyers because it offers stability, regular cash flows, and the promise of continued success even after the founder steps away. In contrast, businesses that are overly dependent on their founders or lack clear systems are often deemed “unbuyable,” and their owners may struggle to find buyers willing to pay a premium.

The statistics are sobering: most business owners face long odds—just a one-in-ten chance—of selling their company for the price they want. However, those who proactively “groom” their businesses for sale can achieve prices 30% to 50% higher than those who do not prepare. The key is to start with the end in mind, making strategic decisions that enhance the company’s marketability from the outset.

Preda outlines three primary paths to building a buyable business. The first is creative entrepreneurship, where founders launch independent ventures, often learning through trial and error. This route is rewarding but risky, with a steep learning curve—statistics show that 90% of startups don’t survive to their tenth year. The second path is franchise ownership, which offers a turnkey operation with proven systems but less room for innovation and a share of profits going to the parent company. The third, and perhaps most strategic, is to follow a tested management blueprint—leveraging the collective wisdom of business experts to build a company that is both independent and scalable.

He lists seven foundational management pillars: culture, structure, vision, strategy, execution, process, and alignment. A strong culture unites employees around a shared purpose, while a clear structure ensures accountability and smart decision-making. Vision gives the company direction, inspired by Maslow’s hierarchy of needs, motivating people to strive for higher goals once the basics are met. Strategy involves defining the company’s mission and understanding customer needs, while execution is about setting objectives and achieving measurable results—exemplified by Andy Grove’s leadership at Intel. Process design, as advocated by Frederick Winslow Taylor, ensures that operations are systematic and knowledge is passed on efficiently. Finally, alignment—championed by Jim Collins—ensures that everyone in the organization is moving in the same direction, preventing chaos and maximizing effectiveness.

To help entrepreneurs put these pillars into practice, Preda introduces ten leading management blueprints, each distilled from successful business books and real-world experience. These include Michael Gerber’s “E-Myth,” which urges founders to work on their business, not just in it; Jack Stack’s “The Great Game of Business,” which gamifies operations to engage employees; Verne Harnish’s “Rockefeller Habits,” which emphasizes priorities, data, and regular meetings; Gino Wickman’s “Entrepreneurial Operating System (EOS),” which focuses on vision, people, and execution; and several others, each offering practical frameworks for building a resilient, scalable company.

Preda’s narrative is one of proactive leadership. Savvy founders begin with the end in mind, understanding that selling a business is a process that can take 12 to 18 months. They know their “magic number”—the profit they need from a sale—and they prepare meticulously, maintaining records, building loyal customers, and strengthening contractual relationships. They seek out strategic buyers who can benefit from synergies, and they surround themselves with experienced advisors. In contrast, reactive founders who fail to plan may find themselves unable to sell or forced to accept far less than their business is worth.

#codingexercise: CodingExercise-12-24-2025.docx

Tuesday, December 23, 2025

 The following are distance measurement studies from DVSA benchmarks:

1. Determine scale of each frame.

2. Identification of the largest built-up structure encountered during the drone tour

3. Estimating the size of that largest built-up structure

4. Identification of the largest free space encountered during the drone tour

5. Estimating the size of that largest free space.

6. Identifying the count of similar sized structures within a scene.

7. Identifying the count of objects that can occupy a given free space

8. Identifying the distance between two points of interest across disparate frames, such as the length traversed by the drone in a specific direction prior to a turning point.

9. Total distance covered prior to revisits to a point of interest.

Methodology:

1. Scale (e.g., 1:100, i.e., 1 unit in the image = 100 units in real life). This needs to be determined only once.

a. Each frame has a location and timestamp before it is vectorized and stored in the vector store along with its insights on objects and bounding boxes. Therefore, scale resolution can be achieved in a few ways:

i. Using Ground Sample Distance (GSD), e.g., 2 cm/pixel, which expresses the real-world distance represented by the smallest resolvable point (pixel) in an image; a smaller GSD captures finer detail.

1. With GSD either known in advance or computed as (Flight Altitude × Sensor Dimension) / (Focal Length × Image Dimension), return the scale as the inverse of the GSD (a sketch appears after the example below).

ii. Using well-known objects or landmarks:

1. Given the bounding box of a well-known object in the frame, say a mid-size sedan or a known landmark, compute the scale as a representative fraction: its length in pixels divided by its actual length on the ground, such as that of a semi-trailer.

2. Width of road: Given the width of the road in pixels and the ground distance from a city record or Google Maps, we can determine the scale.

iii. Using GPS co-ordinates:

1. Using overall tour:

a. Get the overall tour bounding box width and height in terms of latitude and longitude by computing (min Latitude, min Longitude, max Latitude, max Longitude)

b. Calculate the fraction of the tour area covered by the current frame:

c. Proportionately distribute height to width given the frame width and height, or take the square root of (fw × fh) / (tw × th), where fw, fh are the frame dimensions and tw, th the tour bounding box dimensions

d. Emit the scale

2. Using GPS co-ordinates of two points in the same frame:

a. Take two points in the frame such as one pertaining to the center of the frame given by the drone and another found from Google Maps and compute the actual distance using Haversine Formula.

Note: Since every frame has a GPS co-ordinate to begin with, to find another GPS co-ordinate in the same frame, detect, clip and vectorize an object in that frame, locate it in the Google Maps view of the scene at that latitude and longitude, and read off its GPS co-ordinates. Haversine can then be used to compute the actual distance, while the pixel distance between the two points gives the image-based distance.

b. Emit the scale

For example:

from math import radians, cos, sin, asin, sqrt

# Step 1: Haversine function to compute distances in meters

def haversine(lat1, lon1, lat2, lon2):

    R = 6371000 # Earth's radius in meters

    dlat = radians(lat2 - lat1)

    dlon = radians(lon2 - lon1)

    a = sin(dlat/2)**2 + cos(radians(lat1))*cos(radians(lat2))*sin(dlon/2)**2

    c = 2 * asin(sqrt(a))

    return R * c

# Bounding rectangle corners (south-west and north-east)

lat_min, lon_min = 42.37043, -71.12165

lat_max, lon_max = 42.37125, -71.11733

# Compute east-west (width) and north-south (height) ground distances, in meters

height_m = haversine(lat_min, lon_min, lat_max, lon_min)

width_m = haversine(lat_min, lon_min, lat_min, lon_max)

# Step 2: Area in square meters

area_m2 = width_m * height_m

# Step 3: Convert to square feet (1 m = 3.28084 ft)

area_ft2 = area_m2 * (3.28084 ** 2)

# Step 4: Convert to square miles (1 sq mile = 27,878,400 sq ft)

area_miles2 = area_ft2 / 27878400

print(f"Ground area covered: {area_miles2:.6f} square miles")
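
A similar sketch for the GSD-based approach in 1.a.i above; the altitude, sensor, and lens values are placeholders to be replaced with the drone camera’s actual parameters:

# Scale from Ground Sample Distance (GSD).
# GSD = (flight altitude x sensor width) / (focal length x image width in pixels)
flight_altitude_m = 100.0   # metres above ground (placeholder)
sensor_width_m = 0.0132     # 13.2 mm sensor width (placeholder)
focal_length_m = 0.0088     # 8.8 mm focal length (placeholder)
image_width_px = 5472       # image width in pixels (placeholder)

gsd_m_per_px = (flight_altitude_m * sensor_width_m) / (focal_length_m * image_width_px)
pixels_per_metre = 1 / gsd_m_per_px   # scale as the inverse of GSD

print(f"GSD: {gsd_m_per_px * 100:.2f} cm/pixel")
print(f"Scale: {pixels_per_metre:.1f} pixels per metre on the ground")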

2. Largest built-up find:

a. The bounding boxes of all detected objects in a scene give the area of each

b. sort and filter these to include only the buildings

c. Return the topmost entry after sorting in descending order

3. Largest built-up area:

a. Using step 2, find the bounding box of the corresponding object in the scene and calculate its width and height

b. With the scale computed in step 1 and the width and height from the previous step, calculate the area as (width × scale) × (height × scale), as sketched below
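
A minimal sketch of this step, assuming the scale from step 1 is expressed in metres per pixel and the bounding box is in pixel coordinates; both values below are placeholders:

# Ground area of the largest detected building from its bounding box and the scale.
scale_m_per_px = 0.0274                  # placeholder, e.g. from the GSD sketch above
bbox = (1200, 800, 2600, 1900)           # placeholder (x_min, y_min, x_max, y_max) in pixels

width_px = bbox[2] - bbox[0]
height_px = bbox[3] - bbox[1]
area_m2 = (width_px * scale_m_per_px) * (height_px * scale_m_per_px)
print(f"Largest built-up structure: ~{area_m2:,.0f} square metres")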

4. Largest free-space find:

a. If the detected objects are tagged as one of park, street intersection, courtyard, parking lot, transit center, grass, pavement, lake, river, etc., pick the largest one as shown in step 2 above

b. Use color histogram based analysis to classify land cover

5. Largest free-space area:

a. If the free space is one of the detected objects, then its bounding box and scale give the largest free-space area

b. Otherwise, get the color histogram and proportionately divide the area of the scene according to the chosen color

6. Counting objects in a scene can be done with trained models, or with clustering such as HDBSCAN, as sketched below
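
One way to sketch the clustering route is with the hdbscan package applied to detection centroids; the centroid coordinates below are synthetic:

import numpy as np
import hdbscan  # pip install hdbscan

# Synthetic pixel-coordinate centroids of detections of one object class in a scene.
centroids = np.array([
    [100, 100], [102, 98], [99, 103],      # one tight group of detections
    [400, 380], [405, 390], [398, 385],    # another group
    [800, 120],                            # an isolated detection
])

clusterer = hdbscan.HDBSCAN(min_cluster_size=2)
labels = clusterer.fit_predict(centroids)

# Labels of -1 are noise; the number of distinct non-negative labels estimates the object count.
object_count = len(set(labels) - {-1})
print(f"Estimated object groups in the scene: {object_count}")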

7. Given that the object size is found from its bounding box and scale, and the free-space area from its bounding box and scale, this is just a simple ratio of the two, as sketched below
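
A back-of-the-envelope sketch of that ratio, with placeholder measurements for both footprints:

# How many objects of a given footprint can occupy a free space.
# Both areas come from bounding boxes multiplied by the scale (steps 3 and 5).
free_space_area_m2 = 1800.0    # placeholder, e.g. a parking lot
object_area_m2 = 12.5          # placeholder, e.g. a parked-car footprint

count = int(free_space_area_m2 // object_area_m2)
print(f"Roughly {count} such objects fit in the free space (ignoring spacing and layout)")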

8. Distance calculation across disparate frames is straightforward with the GPS co-ordinates of each frame, which are a given, and a Haversine computation. The trick is to find the nearest and the farthest frames from the scene catalog; either a ground truth such as Google Maps or Geodnet can be relied upon, or, preferably, turning-point frames can be identified from the video and correlated with timestamps and the velocity of the drone to find the displacement in that direction.

9. Accumulating the above over all directions traversed by the drone provides the total distance covered, or it can be approximated as the speed of the drone × (flight time − hover time), as sketched below.
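
A sketch of this accumulation that reuses the haversine function defined above on per-frame GPS co-ordinates; the waypoints are placeholders:

# Total distance covered by summing haversine distances between consecutive frame co-ordinates.
frame_coords = [
    (42.37043, -71.12165),
    (42.37080, -71.12000),
    (42.37125, -71.11733),
    (42.37060, -71.11800),
]

total_m = sum(
    haversine(lat1, lon1, lat2, lon2)
    for (lat1, lon1), (lat2, lon2) in zip(frame_coords, frame_coords[1:])
)
print(f"Total distance covered: {total_m:.1f} metres")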

The operators implementing the logic above are reusable and should be curated into a library of the DVSA application or framework. Improvements to object detection and counting in a scene can be accomplished by better training and fine-tuning of the corresponding model.


Monday, December 22, 2025

 As drones evolve toward higher levels of autonomy, the need for contextual intelligence—beyond raw sensor fusion and rule-based planning—becomes increasingly critical. While these drones excel in structured environments using LiDAR, radar, and HD maps, they often lack the semantic depth and temporal foresight that a vision-driven analytics layer can provide. This is where our drone-based video sensing architecture, enriched by importance sampling, online overlays, and agentic retrieval, offers transformative potential: a contextual copilot that augments autonomy with memory, judgment, and adaptive feedback. As a non-invasive overlay over existing drone operations and platforms, this architecture brings down cost substantially with a dual approach: it makes on-board enhancements unnecessary by providing parallel, often uncontested capabilities in the overlay plane, and it runs on commodity and cloud infrastructure.

Drones operate with modular autonomy stacks: perception, localization, prediction, planning, and control. These modules rely heavily on real-time sensor input and preloaded maps, which can falter in dynamic or degraded conditions—poor visibility, occlusions, or unexpected traffic behavior. Our system introduces a complementary layer: a selective sampling engine that curates high-value video frames from vehicle-mounted or aerial cameras, forming a spatiotemporal catalog of environmental states and trajectory outcomes. This catalog becomes a living memory of the tour, encoding not just what was seen, but how the drone responded and what alternatives existed.

By applying importance sampling, our copilot prioritizes frames with semantic richness—intersections, merges, pedestrian zones, or adverse weather—creating a dense vector space of contextually significant moments. These vectors are indexed by time, location, and scenario type, enabling retrospective analysis and predictive planning. For example, if a drone needs to calculate the distance to a detour waypoint, the copilot could retrieve prior frames with similar geometry, overlay ground data, and suggest trajectory adjustments based on historical success rates.
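
A minimal sketch of the sampling step, assuming each frame already carries a semantic-richness score from upstream detection; the frame ids and scores below are placeholders:

import random

# Importance sampling of frames by a semantic-richness weight. The scores would normally
# come from detections (intersections, pedestrian zones, adverse weather, and so on).
frames = [("f001", 0.1), ("f002", 0.9), ("f003", 0.4), ("f004", 0.8), ("f005", 0.2)]

ids = [frame_id for frame_id, _ in frames]
weights = [score for _, score in frames]

# Draw a small sample biased toward semantically rich frames for vectorization and cataloging.
sample = random.choices(ids, weights=weights, k=3)
print(sample)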

This retrieval is powered by agentic query framing, where the copilot interprets system or user intent—“What’s the safest merge strategy here?” or “How did similar vehicles handle this turn during rain?”—and matches it against cataloged vectors and online traffic feeds. The result is a semantic response, not just a path: a recommendation grounded in prior information, enriched by real-time data, and tailored to current conditions.

Our analytics framework respects both autonomous and non-autonomous drone or swarm architectures, acting as a non-invasive overlay that feeds contextual insights into the planning module. It does not replace the planner—it informs it, offering scores, grounded preferences, and fallback strategies when primary sensors degrade.

Moreover, our system’s integration with online maps and traffic information allows for enriched drone video sensing applications. By leveraging a standard 100 m high point of reference for aerial images, adjusted from online satellite maps of urban scenes, we detect objects beyond what custom models are trained for. In addition, by using catalogued objects, ground truth, and commodity models for analysis, we keep this cost-effective. With our architecture offering a plug-and-play intelligence layer, drones can evolve from perceive-and-plan to remember, compare, and adapt, which is aligned with the future of agentic mobility.


Sunday, December 21, 2025

 Absolute Difference Between Maximum and Minimum K Elements

You are given an integer array nums and an integer k.

Find the absolute difference between:

the sum of the k largest elements in the array; and

the sum of the k smallest elements in the array.

Return an integer denoting this difference.

Example 1:

Input: nums = [5,2,2,4], k = 2

Output: 5

Explanation:

The k = 2 largest elements are 4 and 5. Their sum is 4 + 5 = 9.

The k = 2 smallest elements are 2 and 2. Their sum is 2 + 2 = 4.

The absolute difference is abs(9 - 4) = 5.

Example 2:

Input: nums = [100], k = 1

Output: 0

Explanation:

The largest element is 100.

The smallest element is 100.

The absolute difference is abs(100 - 100) = 0.

Constraints:

1 <= n == nums.length <= 100

1 <= nums[i] <= 100

1 <= k <= n

import java.util.Comparator;

import java.util.stream.IntStream;

class Solution {

    public int absDifference(int[] nums, int k) {

        int[] sortedNums = IntStream.of(nums)

                                   .boxed()

                                   .sorted(Comparator.reverseOrder())

                                   .mapToInt(Integer::intValue)

                                   .toArray();

        long max = 0;

        long min = 0;

        for (int i = 0; i < k; i++) {

            max += (long) sortedNums[i];

        }

        for (int i = nums.length - 1; i >= nums.length - k; i--) {

            min += (long) sortedNums[i];

        }

        return (int) Math.abs(max - min);

    }

}

994 / 994 testcases passed


Saturday, December 20, 2025

 Many of the drone vision analytics queries are about objects located in a scene. For example, a search for a “parking garage” in a scene should yield a result with a clipped image showing the garage.  

As a multimodal search, this does not always yield the correct answer, but a few techniques can help. This article lists those.

  1. When the scenes are vectorized frame by frame, they could also be analyzed to detect as many objects as possible along with their bounding boxes and saved with the scenes as documents with id, vector, captions, title, location, bounding box and tags. 

  2. The search over these accumulated scenes and objects can make use of various search options to narrow down the search. For example:

  a. Create a vector from the text:

# Assumes dest_search_client is an existing azure.search.documents.SearchClient for the scene index
from azure.search.documents.models import (
    VectorizableTextQuery,
    QueryType,
    QueryCaptionType,
    QueryAnswerType,
)

search_text = "parking garage"

vector_query = VectorizableTextQuery(text=search_text, exhaustive=True, k_nearest_neighbors=50, fields="vector", weight=0.5) 

results = dest_search_client.search( 

    search_text=search_text, 

    vector_queries=[vector_query], 

    query_type=QueryType.SEMANTIC, 

    select=["id", "description","vector"], 

    filter = f"description ne null and search.ismatch('{search_text}', 'description')", 

    semantic_configuration_name="mysemantic", 

    query_caption=QueryCaptionType.EXTRACTIVE, 

    query_answer=QueryAnswerType.EXTRACTIVE, 

    top=10, 

) 

  b. Use a semantic configuration: it leverages the text-based content in fields such as title, description, and tags for keyword and semantic search.

  c. Specify Hierarchical Navigable Small World (HNSW) search or Exhaustive KNN search as appropriate. HNSW offers high accuracy and low latency but might miss some neighbors, while exhaustive KNN evaluates all neighbors at higher cost. With large datasets, HNSW usually performs better.

  3. Filter the results: you can always leverage the text associated with the images to narrow down your results.

  4. Even if the match is not at the top of the list, the top ten results retrieved as vectors can still be used in a subsequent clustering step to find the centroid (a sketch follows).
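
A minimal sketch of that clustering step with NumPy, assuming the top results were returned with their stored vectors; the vectors and ids below are synthetic:

import numpy as np

# Use the centroid of the top-k result vectors to pick the most representative hit.
result_vectors = np.array([
    [0.12, 0.80, 0.05, 0.33],
    [0.10, 0.78, 0.07, 0.30],
    [0.90, 0.10, 0.60, 0.01],   # an off-topic outlier
    [0.11, 0.82, 0.06, 0.35],
])
result_ids = ["doc1", "doc2", "doc3", "doc4"]

centroid = result_vectors.mean(axis=0)
distances = np.linalg.norm(result_vectors - centroid, axis=1)
best = result_ids[int(np.argmin(distances))]
print(f"Result closest to the centroid of the top hits: {best}")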

These are some of the tips to make the results of a multimodal search more deterministic and of higher quality, as judged on a 1-to-5 relevance scale.