Sunday, March 8, 2026

 This is the summary of a book titled “People Glue: Hold on to your best people by setting them free” written by Helen Beedham and published by Practical Inspiration Publishing in 2026. This book looks at a simple but often misunderstood question: why people stay at work, and why they leave. Helen Beedham argues that money matters, but it is rarely the main reason people commit to an organization long term. What keeps people is a sense of freedom in how they work, paired with clear expectations about what needs to be done. When freedom is handled well, it becomes a strong force that helps organizations hold on to their best people.

The cost of losing employees is high. Replacing someone can cost anywhere from a large fraction of their annual salary to double it, once recruitment, onboarding, and lost productivity are taken into account. Beyond cost, frequent turnover weakens client relationships, slows teams down, and makes it harder for organizations to build the skills they will need in the future. Despite this, most workers in the US and the UK stay with an employer for fewer than four years. Research consistently shows that higher pay is not the main driver of job changes. Many people leave because they want more flexibility, more interesting work, and better opportunities to grow. For most workers, work–life balance and a sense of control over their time matter more than compensation alone.

Through research and surveys, Beedham and her colleagues identified four kinds of freedom that matter most to people at work. The first, and by far the most important across all demographic groups, is autonomy. People want a say in when, where, and how they do their work. After the COVID-19 pandemic, organizations that forced a full return to the office saw higher turnover than those that offered remote or hybrid options. Flexibility has become a baseline expectation for many workers. Importantly, autonomy does not mean chaos or a lack of standards. It means trusting people to decide how best to meet agreed goals. When people feel overly monitored or micromanaged, their motivation drops. When they have room to set priorities and make decisions, they are more likely to hold themselves to high performance standards.

Meaningful work is the second major freedom. Most people want to feel that what they do matters, even if they define “meaning” differently. For some, it is about contributing to society. For others, it is about solving interesting problems, learning, or feeling part of a team. Many workplaces unintentionally strip meaning from work by filling schedules with meetings and urgent tasks, leaving little time for focused thinking. Research shows that people need dedicated time each week to work deeply, yet most get far less than they need. When organizations reduce unnecessary meetings, stress drops and productivity rises sharply. Meaningful work is less about constant happiness and more about being energized and focused on solving real problems together.

The third freedom is self-expression. People need to know that their ideas and perspectives are taken seriously. When someone speaks up and is ignored or dismissed, they are far less likely to contribute again. This problem affects many workers, but especially women and people from underrepresented groups. A lack of respect and belonging is a major reason people leave jobs. At the same time, self-expression does not mean saying everything without restraint. It depends on mutual respect, thoughtful communication, and an environment where disagreement is handled constructively. When people feel safe to speak honestly, they help surface problems early and often offer solutions leaders would otherwise miss.

The final freedom is growth. While survey respondents ranked it lower than the others, it still plays an important role, especially as skill shortages grow worldwide. Many workers feel their employers focus more on hiring new talent than developing the people they already have. Nearly half of employees say learning opportunities influence whether they stay. People want to grow in ways that fit their goals, not just the needs of their current role. They value challenging assignments, room to fail and learn, and visibility into possible future paths. Organizations that support internal movement, mentoring, and skill development tend to see higher engagement and retention.

A key message of the book is that freedom only works when expectations are clear. Giving people freedom without structure leads to confusion, uneven treatment, and frustration. Leaders need to be explicit about what needs to be done, who is responsible, what decisions people can make on their own, and where boundaries lie. Beedham emphasizes that enabling freedom does not mean letting everyone do whatever they want. It means being clear about goals, roles, timelines, and standards, while trusting people to decide how to meet them.

Freedom is also not a one time initiative. It requires ongoing adjustment. Leaders should pay attention to what works, what does not, and why. When people push boundaries, it is not always a problem. Sometimes it signals innovation or unclear expectations rather than bad intent. Overreacting by removing freedom or assigning blame damages trust and can drive high performers away. When standards truly matter, such as in areas like safety or data privacy, leaders need to explain why rules exist and enforce them consistently.

The book makes the case that retaining people is less about control and perks and more about trust, clarity, and respect. When people are given room to work in ways that suit them, feel their work has purpose, know their voices matter, and see opportunities to grow, they are far more likely to stay.

Saturday, March 7, 2026

 This is a summary of a book: “The DOSE Effect: Optimize Your Brain and Body by Boosting Your Dopamine, Oxytocin, Serotonin, and Endorphins” written by TJ Power, a neuroscientist and founder of DOSE Lab, and published by Dey Street in 2025. This book examines how modern lifestyles disrupt the neurochemical systems that regulate motivation, mood, social connection, and stress resilience. Drawing on neuroscience and behavioral research, Power focuses on four key neurotransmitters—dopamine, oxytocin, serotonin, and endorphins—and explains how everyday habits influence their balance. He argues that chronic stress, insufficient sleep, poor diet, and constant digital stimulation interfere with these systems, leading to reduced motivation, emotional instability, and diminished well-being, and he proposes that healthier behaviors and environments can bring these stimuli and our responses to them back into balance.

Dopamine is presented as the primary driver of motivation and goal-directed behavior. It operates through a pleasure–pain mechanism in which effortful or uncomfortable actions initially produce strain but are followed by a sense of reward upon completion. This system evolved to reinforce survival-related behaviors, but in contemporary environments it is frequently overstimulated by effortless rewards such as highly processed food, alcohol, online shopping, and social media. These activities produce rapid dopamine spikes without corresponding effort, often followed by declines in mood and motivation. Repeated exposure to such stimuli narrows the range of activities that feel rewarding, contributing to compulsive behavior and reduced drive. In contrast, dopamine regulation is strengthened through sustained effort, structured routines, and engagement in meaningful pursuits. Consistently completing demanding tasks, maintaining order in one’s environment, and working toward long-term goals reinforces the association between effort and reward, gradually restoring motivation and psychological resilience.

He emphasizes that discipline is central to maintaining a stable dopamine system. Small, repeatable actions—such as maintaining personal routines or completing routine responsibilities—condition the brain to tolerate effort and delay gratification. Over time, this process supports a broader capacity for sustained focus and perseverance. Equally important is the presence of a clearly defined pursuit that provides direction and anticipation. Without an ongoing sense of purpose, achievements alone may fail to produce lasting satisfaction, whereas engagement in the pursuit itself supports motivation and emotional stability.

Oxytocin is described as the neurochemical foundation of social bonding, trust, and self-confidence. It is released during moments of affection, cooperation, and emotional connection, and it plays a critical role in forming and maintaining relationships. Low oxytocin levels are associated with loneliness, self-doubt, and social withdrawal, conditions that are exacerbated by habits such as excessive phone use, superficial online comparison, and reduced face-to-face interaction. Chronic deficits in social connection are portrayed as having significant psychological and physiological consequences. Conversely, oxytocin levels increase through acts of service, supportive relationships, and physical touch, all of which promote feelings of safety, belonging, and emotional stability. Regular interpersonal engagement and contribution to others’ well-being are presented as essential components of long-term mental health.

Serotonin is examined primarily through its connection to physical health and nutrition. Unlike other neurotransmitters, the majority of serotonin is produced in the gut, making dietary patterns and digestion central to emotional regulation. Diets high in ultra-processed foods and refined sugars are associated with fluctuations in mood, energy, and anxiety, while consistent, nutrient-dense eating supports more stable serotonin production. Sleep and exposure to natural light further influence serotonin levels, reinforcing circadian rhythms that promote calmness and sustained energy. Time spent outdoors, particularly in low-stimulation environments, is identified as a reliable way to improve mood, focus, and overall physiological balance.

Endorphins are characterized as the body’s primary mechanism for managing stress and physical discomfort. They evolved to mitigate pain and regulate emotional responses during periods of intense physical demand. In modern contexts, insufficient physical activity and prolonged sedentary behavior reduce endorphin release, leaving individuals more vulnerable to chronic stress and tension. Regular movement, particularly activities that combine strength, endurance, and short periods of high exertion, stimulates endorphin production and improves stress tolerance. Stretching and mobility practices further support this system by reducing physical tension and promoting relaxation.

Overall, Power presents mental and emotional well-being as the outcome of interacting biological systems that are shaped by daily behavior. Rather than emphasizing short-term interventions or external solutions, the book argues for sustained, effort-based habits that align with the brain’s underlying neurochemistry. By prioritizing purposeful work, meaningful relationships, nutritious food, regular movement, adequate sleep, and time in natural environments, individuals can create conditions that support more stable motivation, emotional regulation, and long-term psychological health.


#codingexercise: https://1drv.ms/b/c/d609fb70e39b65c8/IQBBH30P0VQQQpbR9PdMI2mHAcj-baxH_XBgJ14c9j42tXI?e=Xc8Kok 

Friday, March 6, 2026

 Subarray Sum Equals K

Given an array of integers nums and an integer k, return the total number of subarrays whose sum equals k.

A subarray is a contiguous non-empty sequence of elements within an array.

Example 1:

Input: nums = [1,1,1], k = 2

Output: 2

Example 2:

Input: nums = [1,2,3], k = 3

Output: 2

Constraints:

• 1 <= nums.length <= 2 * 10^4

• -1000 <= nums[i] <= 1000

• -10^7 <= k <= 10^7

class Solution {
    public int subarraySum(int[] nums, int k) {
        if (nums == null || nums.length == 0) return 0; // no subarrays to count

        // prefix sums: sums[i] = nums[0] + ... + nums[i]
        int[] sums = new int[nums.length];
        int sum = 0;
        for (int i = 0; i < nums.length; i++) {
            sum += nums[i];
            sums[i] = sum;
        }

        // check every subarray nums[i..j] against k
        int count = 0;
        for (int i = 0; i < nums.length; i++) {
            for (int j = i; j < nums.length; j++) {
                int current = nums[i] + (sums[j] - sums[i]); // sum of nums[i..j]
                if (current == k) {
                    count += 1;
                }
            }
        }
        return count;
    }
}

[1,3], k=1 => 1

[1,3], k=3 => 1

[1,3], k=4 => 1

[2,2], k=4 => 1

[2,2], k=2 => 2

[2,0,2], k=2 => 4

[0,0,1], k=1=> 3

[0,1,0], k=1=> 4

[0,1,1], k=1=> 3

[1,0,0], k=1=> 3

[1,0,1], k=1=> 4

[1,1,0], k=1=> 3

[1,1,1], k=1=> 3

[-1,0,1], k=0 => 2

[-1,1,0], k=0 => 3

[1,0,-1], k=0 => 2

[1,-1,0], k=0 => 3

[0,-1,1], k=0 => 3

[0,1,-1], k=0 => 3
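The nested loops above run in O(n²). For reference, the standard O(n) approach (a sketch, using prefix sums and a hash map; not part of the original exercise) counts, for each position, how many earlier prefixes differ from the running sum by exactly k:

```python
from collections import defaultdict

def subarray_sum(nums, k):
    # prefix_counts[s] = how many prefixes seen so far sum to s
    prefix_counts = defaultdict(int)
    prefix_counts[0] = 1  # the empty prefix
    running = 0
    count = 0
    for x in nums:
        running += x
        # a subarray ending here sums to k iff some earlier prefix
        # summed to running - k
        count += prefix_counts[running - k]
        prefix_counts[running] += 1
    return count
```

This reproduces the expected outputs above, for example subarray_sum([2,0,2], 2) == 4.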


Thursday, March 5, 2026

 Agentic retrieval is considered reliable only when users can verify not just the final answer but the entire chain of decisions that produced it. The most mature systems treat verification as an integral part of the workflow, giving users visibility into what the agent saw, how it interpreted that information, which tools it invoked, and why it converged on a particular conclusion. When these mechanisms work together, they transform a stochastic, improvisational agent into something that behaves more like an auditable, instrumented pipeline.

The first layer of verification comes from detailed traces of the agent’s reasoning steps. These traces reveal the sequence of tool calls, the inputs and outputs of each step, and the logic that guided the agent’s choices. Even though the internal chain of thought remains abstracted, the user still sees a faithful record of the agent’s actions: how it decomposed the query, which retrieval strategies it attempted, and where it may have misinterpreted evidence. In a drone analytics context, this might show the exact detector invoked, the confidence thresholds applied, and the SQL filters used to isolate a particular geospatial slice. This level of transparency allows users to diagnose inconsistencies and understand why the agent behaved differently across runs.

A second layer comes from grounding and citation tools that force the agent to tie its conclusions to specific pieces of retrieved evidence. Instead of producing free-floating assertions, the agent must show which documents, image regions, database rows, or vector-search neighbors support its answer. This grounding is especially important in multimodal settings, where a single misinterpreted bounding box or misaligned embedding can change the meaning of an entire mission. By exposing the provenance of each claim, the system ensures that users can trace the answer back to its source and evaluate whether the evidence truly supports the conclusion.

Deterministic tool wrappers add another stabilizing force. Even if the model’s reasoning is probabilistic, the tools it calls—detectors, SQL templates, vector-search functions—behave deterministically. Fixed seeds, fixed thresholds, and fixed schemas ensure that once the agent decides to call a tool, the tool’s behavior is predictable and reproducible. This separation between stochastic planning and deterministic execution is what allows agentic retrieval to feel stable even when the underlying model is not.
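As a small illustration of that separation (the detector, parameter names, and defaults here are hypothetical, not from any specific system), a deterministic wrapper pins the random seed and threshold so that once the planner decides to call the tool, repeated calls with the same input yield identical results:

```python
import random

# Hypothetical wrapper: pins the seed and threshold so the same input
# always yields the same detections, no matter when or how often the
# stochastic planner decides to invoke the tool.
def run_detector(frame_id, detect_fn, seed=42, threshold=0.5):
    rng = random.Random(seed)  # fixed seed, isolated from global state
    detections = detect_fn(frame_id, rng)
    # the fixed threshold is applied in one place and recorded with
    # the result, so the call is reproducible and auditable
    kept = [d for d in detections if d["score"] >= threshold]
    return {"frame": frame_id, "threshold": threshold, "detections": kept}
```

Because the seed and threshold travel with the wrapper rather than the prompt, two runs that reach the same tool call produce byte-identical outputs.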

Schema and contract validators reinforce this stability by ensuring that every tool call conforms to expected formats. They reject malformed SQL, incorrect parameter types, invalid geospatial bounds, or unsafe API calls. When a validator blocks a step, the agent must correct its plan and try again, preventing silent failures and reducing the variability that comes from poorly structured queries. These validators act as guardrails that keep the agent’s behavior within predictable bounds.
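A contract validator for a hypothetical geospatial query tool might look like the following sketch, where malformed or out-of-range parameters are rejected before the call ever reaches execution:

```python
# Hypothetical contract check for a bounding-box query tool: every
# tool call is validated against the expected schema, and a non-empty
# error list blocks the step so the agent must correct its plan.
def validate_bbox_call(params):
    errors = []
    for key in ("min_lat", "max_lat", "min_lon", "max_lon"):
        if key not in params:
            errors.append(f"missing parameter: {key}")
    if errors:
        return errors
    if not -90 <= params["min_lat"] <= params["max_lat"] <= 90:
        errors.append("latitude bounds invalid or out of range")
    if not -180 <= params["min_lon"] <= params["max_lon"] <= 180:
        errors.append("longitude bounds invalid or out of range")
    return errors  # empty list means the call conforms
```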

Some systems go further by introducing counterfactual evaluators that explore alternative retrieval paths. These evaluators run parallel or fallback queries—different detectors, different chunking strategies, different retrieval prompts—and compare the results. If the agent’s initial path diverges too far from these alternatives, it can revise its reasoning or adjust its confidence. This reduces sensitivity to small prompt variations and helps the agent converge on answers that are robust across multiple retrieval strategies.

Self-critique layers add yet another dimension. These evaluators score the agent’s output using task-specific rubrics, consistency checks, cross-model agreement, or domain constraints. In aerial imagery, for example, a rubric might flag an object that is physically impossible given the frame’s scale or context. By forcing the agent to evaluate its own output before presenting it to the user, the system catches errors that would otherwise appear as unpredictable behavior.

All of these mechanisms culminate in human-readable execution summaries that distill the entire process into a coherent narrative. These summaries explain which tools were used, what evidence was retrieved, how the agent reasoned through the problem, and where uncertainty remains. They give users a clear sense of the workflow without overwhelming them with raw traces, and they reinforce the perception that the system behaves consistently even when the underlying model is improvisational.

Together, these verification tools form a feedback loop in which the agent proposes a plan, validators check it, deterministic tools execute it, grounding ties it to evidence, counterfactuals test its robustness, evaluators critique it, and summaries explain it. This loop transforms agentic retrieval from a black-box improvisation into a transparent, auditable process. The deeper shift is that users stop relying on the agent’s answers alone and begin trusting the process that produced them. In operational domains like drone analytics, that shift is what makes agentic retrieval predictable enough to use with confidence.

Alternate sources of truth and observability pipelines are often left out of verification mechanisms, but they are powerful reinforcers. Traditional mechanisms that query structured and unstructured data directly can provide a grounding basis, much as online literature can be brought in through a grounding API call. Custom metrics and observability pipelines also provide a way to measure drift when none is anticipated. Lastly, error corrections and their root causes help explain the underlying failures, which keeps a system verified and operating successfully.
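As one example of such a custom metric (a minimal sketch; the windows and threshold are illustrative, not a production calibration), drift can be flagged by comparing the centroid of a recent window of embeddings against a baseline centroid:

```python
import math

# Flag drift by comparing the centroid of a recent embedding window
# against a baseline centroid; a cosine distance above the threshold
# suggests the input distribution has shifted when none was expected.
def centroid(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def drift_detected(baseline, recent, threshold=0.2):
    return cosine_distance(centroid(baseline), centroid(recent)) > threshold
```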


Wednesday, March 4, 2026

 TorchLean from Caltech is an attempt to close a long‑standing gap between how neural networks are built and how they are formally reasoned about. Instead of treating models as opaque numerical engines, it treats them as mathematical objects with precise, inspectable semantics. The work begins from a simple but powerful observation: most verification pipelines analyze a network outside the environment in which it runs, which means that subtle differences in operator definitions, tensor layouts, or floating‑point behavior can undermine the guarantees we think we have. TorchLean eliminates that gap by embedding a PyTorch‑style modeling API directly inside the Lean theorem prover and giving both execution and verification a single shared intermediate representation. This ensures that the network we verify is exactly the network we run.

The framework builds its foundation on a fully executable IEEE‑754 Float32 semantics, making every rounding behavior explicit and proof‑relevant. On top of this, it layers a tensor system with precise shape and indexing rules, a computation‑graph IR, and a dual execution model that supports both eager evaluation and compiled lowering. Verification is not an afterthought but a first‑class capability: TorchLean integrates interval bound propagation, CROWN/LiRPA linear relaxations, and α,β‑CROWN branch‑and‑bound, all with certificate generation and checking. These tools allow one to derive certified robustness bounds, stability guarantees for neural controllers, and derivative bounds for physics‑informed neural networks. The project’s authors demonstrate these capabilities through case studies ranging from classifier robustness to Lyapunov‑style safety verification and even a mechanized proof of the universal approximation theorem.
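To make interval bound propagation concrete (this is a generic sketch of the technique, not TorchLean's API), an affine layer y = Wx + b maps an input box [lower, upper] to output bounds by splitting each weight on its sign:

```python
# Interval bound propagation through one affine layer y = Wx + b.
# Each output bound takes the weight's sign into account: a positive
# weight propagates lower -> lower, a negative weight swaps the ends.
def ibp_affine(W, b, lower, upper):
    out_lower, out_upper = [], []
    for row, bias in zip(W, b):
        lo = hi = bias
        for w, l, u in zip(row, lower, upper):
            if w >= 0:
                lo += w * l
                hi += w * u
            else:
                lo += w * u
                hi += w * l
        out_lower.append(lo)
        out_upper.append(hi)
    return out_lower, out_upper
```

Composing this layer by layer, with interval rules for the activations in between, yields the certified output bounds that robustness results of this kind build on.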

What makes TorchLean particularly striking is its ambition to unify the entire lifecycle of a neural network—definition, training, execution, and verification—under a single semantics‑first umbrella. Instead of relying on empirical testing or post‑hoc analysis, the framework encourages a world where neural networks can be reasoned about with the same rigor as classical algorithms. The Caltech team emphasizes that this is a step toward a fully verified machine‑learning stack, where floating‑point behavior, tensor transformations, and verification algorithms all live within the same formal universe.

For our drone video sensing analytics framework, TorchLean offers a kind of structural clarity that aligns naturally with the way we already think about operational intelligence. Our system treats drone video as a continuous spatio‑temporal signal, fusing geolocation, transformer‑based detection, and multimodal vector search. TorchLean gives us a way to formalize the neural components of that pipeline so that robustness, stability, and safety guarantees are not just empirical observations but mathematically certified properties. For example, we could use its bound‑propagation tools to certify that our object‑detection backbone remains stable under small perturbations in lighting, altitude, or camera jitter—conditions that are unavoidable in aerial operations. Its explicit floating‑point semantics could help us reason about numerical drift in long‑duration flights or edge‑device inference. And its Lyapunov‑style verification tools could extend naturally to flight‑path prediction, collision‑avoidance modules, or any learned controller we integrate into our analytics stack.

More broadly, TorchLean’s semantics‑first approach complements our emphasis on reproducibility, benchmarking, and operational rigor. It gives us a way to turn parts of our pipeline into formally verified components, which strengthens our publication‑grade narratives and positions our framework as not just high‑performance but certifiably reliable. It also opens the door to hybrid workflows where our agentic retrieval and vision‑LLM layers can be paired with verified perception modules, creating a pipeline that is both intelligent and provably safe.


Tuesday, March 3, 2026

 This is a continuation of the previous article.

Supporting code snippets:

1. Fetch_issues.py:

import requests, os, json
from datetime import datetime, timedelta

repo = os.environ["GITHUB_REPOSITORY"]
token = os.environ["GH_TOKEN"]
since = (datetime.utcnow() - timedelta(days=30)).isoformat()
headers = {"Authorization": f"token {token}"}

# Follow pagination: the issues endpoint returns at most 100 items per page.
issues = []
page = 1
while True:
    url = f"https://api.github.com/repos/{repo}/issues?state=all&since={since}&per_page=100&page={page}"
    batch = requests.get(url, headers=headers).json()
    if not batch:
        break
    # the issues endpoint also returns pull requests; keep issues only
    issues.extend(i for i in batch if "pull_request" not in i)
    page += 1

print(json.dumps(issues, indent=2))

2. embed_and_cluster.py:

import json, os
import numpy as np
from sklearn.cluster import KMeans
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-15-preview"
)

issues = json.load(open("issues.json"))
# The GitHub API returns null bodies for some issues, so guard against None.
texts = [i["title"] + "\n" + (i.get("body") or "") for i in issues]

# Embed each issue with Azure OpenAI.
embeddings = []
for t in texts:
    e = client.embeddings.create(
        model="text-embedding-3-large",
        input=t
    ).data[0].embedding
    embeddings.append(e)

# Cluster the embeddings into recurring themes.
X = np.array(embeddings)
kmeans = KMeans(n_clusters=5, random_state=42).fit(X)
labels = kmeans.labels_

clusters = {}
for label, issue in zip(labels, issues):
    clusters.setdefault(int(label), []).append(issue)

print(json.dumps(clusters, indent=2))

3. generate_report.py:

import json, os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_version="2024-02-15-preview"
)

clusters = json.load(open("clusters.json"))

prompt = f"""
You are an expert Terraform and Databricks architect.
Generate a monthly insights report with:
- Executive summary
- Top recurring problems
- Modules with the most issues
- Common root causes
- Suggested improvements to Terraform modules
- Hotspots in Databricks workspace deployments
- Action plan for next month

Data:
{json.dumps(clusters)}
"""

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    max_tokens=2000,
    temperature=0.2
)

print(resp.choices[0].message.content)


Monday, March 2, 2026

 An AI-generated monthly insights report for a Terraform GitHub repository can be realized by building a small automated pipeline that pulls all GitHub issues for the past month, embeds them into vectors, clusters and analyzes them, feeds the structured data into an LLM, produces a leadership-friendly markdown report, and publishes it automatically via a Teams message. These steps are explained in detail below:

1. Data ingestion: A scheduled GitHub Action that runs monthly and fetches all issues created or updated in the last 30 days, along with their comments, labels, module references, and severity or impact indicators. This produces a JSON dataset like:

[
  {
    "id": 1234,
    "title": "Databricks workspace recreation on VNet change",
    "body": "Changing the VNet CIDR causes full workspace recreation...",
    "labels": ["bug", "module/databricks-workspace"],
    "comments": ["We hit this again last week..."],
    "created_at": "2026-02-01",
    "updated_at": "2026-02-05"
  }
]

2. Embedding: Use Azure OpenAI embeddings (text-embedding-3-large) to convert each issue into a vector. The store holds issue_id, embedding, module (parsed from labels or text), and text (title + body + comments); these can be kept in Pinecone or a dedicated Azure AI Search vector index. For a simple implementation, pgvector is enough.
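For the pgvector option, a minimal sketch might look like the following (the table and column names are illustrative, and text-embedding-3-large returns 3072-dimensional vectors; actually executing these statements requires a Postgres instance with the pgvector extension, so this only composes them):

```python
# Minimal pgvector-backed store sketch. Table and column names are
# illustrative; the statements are composed here and would be run
# through a Postgres driver such as psycopg against a database with
# the pgvector extension installed.
DDL = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE IF NOT EXISTS issue_embeddings (
    issue_id  BIGINT PRIMARY KEY,
    module    TEXT,
    text      TEXT,
    embedding VECTOR(3072)
);
"""

def upsert_statement(issue_id, module, text, embedding):
    # pgvector accepts vectors as a bracketed, comma-separated literal
    vec = "[" + ",".join(str(x) for x in embedding) + "]"
    return (
        "INSERT INTO issue_embeddings (issue_id, module, text, embedding) "
        "VALUES (%s, %s, %s, %s) "
        "ON CONFLICT (issue_id) DO UPDATE SET embedding = EXCLUDED.embedding",
        (issue_id, module, text, vec),
    )
```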

3. Clustering: We can use unsupervised clustering to detect recurring themes: K-means, HDBSCAN, or agglomerative clustering. This lets you identify recurring problems, common root causes, hotspots in Databricks deployments, and modules with repeated issues.

Sample output:

Cluster 0: Databricks workspace recreation issues (7 issues)

Cluster 1: Private endpoint misconfiguration (4 issues)

Cluster 2: Missing tags / policy violations (5 issues)

Cluster 3: Module version drift (3 issues)

4. This structured data is then fed into an LLM with a prompt like:

You are an expert Terraform and Azure Databricks architect.

Summarize the following issue clusters into a leadership-friendly monthly report.

Include:

- Top recurring problems

- Modules with the most issues

- Common root causes

- Suggested improvements to Terraform modules

- Hotspots in Databricks workspace deployments

- A short executive summary

- A recommended action plan for the next month

Data:

<insert JSON clusters + issue summaries>

The LLM then produces a polished Markdown report.

5. Sample output, as presented to leadership:

# Monthly Terraform Insights Report — February 2026

## Executive Summary

This month saw 19 issues across 7 Terraform modules. The majority were related to Databricks workspace networking, private endpoints, and tag compliance. Workspace recreation remains the most disruptive pattern.

## Top Recurring Problems

- Databricks workspace recreation due to VNet CIDR changes (7 issues)

- Private endpoint misconfiguration (4 issues)

- Missing required tags (5 issues)

- Module version drift (3 issues)

## Modules with the Most Issues

- module/databricks-workspace (9 issues)

- module/private-endpoints (4 issues)

- module/networking (3 issues)

## Common Root Causes

- Inconsistent module usage patterns

- Lack of lifecycle rules preventing accidental recreation

- Missing validation rules in modules

- Insufficient documentation around networking constraints

## Suggested Improvements

- Add `prevent_destroy` lifecycle blocks to workspace modules

- Introduce schema validation for required tags

- Add automated tests for private endpoint creation

- Publish module usage examples for networking patterns

## Hotspots in Databricks Deployments

- Workspace recreation triggered by minor networking changes

- Cluster policy misalignment with workspace settings

- Missing Unity Catalog configuration in new workspaces

## Action Plan for Next Month

- Refactor workspace module to isolate networking dependencies

- Add tag validation to all modules

- Create a “safe update” guide for Databricks workspaces

- Introduce CI checks for module version drift

6. That’s all!