Wednesday, April 1, 2026

This is a summary of “Between You and AI: Unlock the Power of Human Skills to Thrive in an AI-Driven World” by Andrea Iorio (Wiley, 2025). The book argues that the most durable advantage in an AI-saturated workplace comes from combining machine efficiency with distinctly human judgment.

Iorio frames AI as a powerful accelerator of structured work—searching, summarizing, classifying, drafting, and pattern-finding—but he cautions that automation alone rarely differentiates a person or an organization. He points to forecasts that a substantial share of work may be automated and notes that the winners will be those who use AI to amplify what machines do not supply on their own: meaning, context, relationships, ethical reflection, and creative reframing.

Iorio argues that a “hybrid” skill set is a must: delegating well-bounded tasks to AI while strengthening emotional intelligence, critical thinking, and creativity. “The way forward is not about choosing between AI and human expertise — it is about integrating both into a new hybrid set of skills that leverages the best of each.” To illustrate, Iorio revisits the moment IBM’s Deep Blue defeated chess champion Garry Kasparov and the subsequent rise of “advanced chess,” where human players use AI analysis but still shape strategy and decide when to depart from the model’s suggestions.

He extends that example to everyday business decisions. At Nubank, for instance, customer service representatives work with an AI co-pilot that offers real-time suggestions. The system improves speed and consistency, while the human agent contributes empathy and situational awareness—qualities that matter when someone is frustrated, confused, or dealing with a sensitive issue.

Because AI can surface information that once required years of specialized study, Iorio argues that advantage increasingly comes from knowing how to work with these systems, not from memorizing what they can retrieve. He cites research from his team suggesting many leaders would rather collaborate with someone who can use AI well to find and synthesize answers than with someone who relies on expertise alone. In that sense, prompting becomes a practical craft: “The more thought you put into your prompt from the start, the more time and productivity you will save later.”

When Iorio turns to prompt design, his guidance is straightforward: be deliberate about the role you want the system to play, the specificity of the question, the context that shapes what “good” looks like, and the format you need back. Instead of asking for a generic report, you might ask the model to respond as a consultant, define the industry and constraints, describe the audience, and request an output structure that you can review and refine.
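Those four elements can be assembled into a simple prompt template. A minimal sketch (the helper name and example wording are mine, not from the book):

```python
def build_prompt(role, task, context, output_format):
    """Assemble a prompt from the four elements Iorio highlights:
    a role for the system, a specific task, the shaping context,
    and the output format you need back."""
    return (
        f"Act as {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Format: {output_format}"
    )

prompt = build_prompt(
    role="a retail strategy consultant",
    task="identify three risks in launching online grocery delivery",
    context="a mid-sized supermarket chain with thin margins and no logistics arm",
    output_format="a numbered list with one sentence of rationale per risk",
)
print(prompt)
```

The point is not the template itself but the habit: deciding role, task, context, and format up front instead of iterating on a vague one-liner.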

From there, the book emphasizes what Iorio calls “data sensemaking.” AI can process huge volumes of information, detect patterns, and generate predictions, but it cannot decide what matters most in a particular environment. Sensemaking means choosing the questions worth asking, defining indicators that connect to real decisions, and interpreting outputs in light of goals, constraints, and lived experience. It also includes actively looking for surprising relationships in the data, distinguishing vanity metrics from signals that should change priorities, and connecting past performance to leading indicators that hint at where the market is moving.
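The vanity-metric point can be made concrete with a toy example: a cumulative count that only ever grows versus a week-over-week indicator that can actually change a decision. The numbers below are invented for illustration:

```python
# Invented weekly data: cumulative sign-ups always rise,
# even while the underlying engagement trend is deteriorating.
weekly_signups = [120, 110, 95, 80]
weekly_active = [900, 870, 810, 720]

cumulative_signups = []
running = 0
for n in weekly_signups:
    running += n
    cumulative_signups.append(running)

# The "vanity" view: monotonically increasing, so it always looks good.
print("cumulative sign-ups:", cumulative_signups)

# The signal that should change priorities: week-over-week % change in active users.
wow_change = [round((b - a) / a * 100, 1) for a, b in zip(weekly_active, weekly_active[1:])]
print("active users, WoW % change:", wow_change)
```

The cumulative series climbs every week while the week-over-week series shows an accelerating decline, which is exactly the kind of leading indicator Iorio says sensemaking should surface.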

In 1997, IBM’s Deep Blue beat reigning world chess champion Garry Kasparov. In the aftermath of his loss, Kasparov began playing advanced chess, in which human players collaborate with an AI: players weigh the AI’s advice but intervene with their own strategies.

Historically, people gained a competitive advantage by acquiring highly specialized knowledge. For example, lawyers charge high fees because they dedicate years to becoming experts in the law. But nowadays, AIs such as GPT-4 can pass bar exams and explain legal matters, such as data privacy policies, to laypeople. This doesn’t mean human lawyers — or other human experts — are going away. However, according to a survey by Andrea Iorio and his team, nearly 60% of leaders would rather collaborate with people skilled in using AI to find answers than with people who have strong expertise but don’t use AI.

Sensemaking also requires skepticism about where outputs come from and how they generalize. Iorio notes that models can overreach when information is thin or when training data reflects historical bias. The remedy he recommends is continuous review: checking whether data is current, whether it represents the populations affected by the decision, documenting known limitations, and building human review into workflows—especially where the stakes are high.

Another theme is “reperception,” Iorio’s term for deliberately letting go of inherited assumptions to make room for new possibilities. He describes common cognitive traps—such as seeking only confirming evidence, getting overwhelmed by abundant information, defaulting to familiar “safe” strategies, and mistaking slow early progress for a sign that change will never accelerate. In practice, reperception can look like intentionally exposing yourself to viewpoints outside your usual feed, using frameworks to narrow attention to what is truly decision-relevant, and regularly posing questions that challenge what you take for granted.

To show how a single “impossible” question can reopen a problem, Iorio retells the story of Edwin Land being asked by his young daughter why she could not see a photo immediately—a moment that helped spur the invention of instant photography. He pairs that mindset shift with adaptability: noticing emerging curves early and acting on what you learn. John Deere, for example, moved beyond selling equipment toward using sensors and AI to provide farmers with guidance on planting and yield, expanding into software and services rather than relying only on its historical product line.

Iorio then draws on the concept of “antifragility”: not merely withstanding shocks, but improving because of them. Citing research on decades of innovative projects, he argues that failure is a common feature of eventual success when teams extract lessons quickly and apply them to the next iteration. AI, in his view, can lower the cost of learning by helping prevent routine errors through automation, manage unavoidable risks through prediction and monitoring, and accelerate experimentation by analyzing patterns across large sets of past failures.

He highlights how simulation and pattern analysis can compress feedback loops. Automotive firms that once relied on a limited number of expensive physical crash tests can now run many virtual scenarios, learn faster, and refine designs earlier. In a different domain, NotCo’s AI system, “Giuseppe,” searches through vast ingredient combinations to propose plant-based recipes that human teams can then test and adjust, turning unusual suggestions into practical prototypes.

Plant-based food developer NotCo developed a proprietary AI, “Giuseppe,” that analyzes the texture, structure, and flavor properties of 300,000 potential ingredients and suggests recipes for vegan products. Even though some of “Giuseppe’s” ideas seem unusual — like using pineapple and cauliflower as part of plant-based milk — the AI allowed NotCo to generate and test new products, such as a vegan custard for Shake Shack, in far less time than traditional approaches required.

A later section focuses on trust. Iorio notes that people are often wary of AI in sensitive settings such as healthcare, even when the technology can improve detection and treatment. He describes research in multiple sclerosis care in which systems can scan records and imaging for patterns clinicians might miss, and he argues that the value of such tools depends on making their use understandable and accountable to the people affected.

A 2023 study by the Pew Research Center found that 60% of Americans are concerned about their medical providers using AI. AI can significantly improve the diagnosis and treatment of diseases, particularly for complex conditions that lack a definitive diagnostic test, such as multiple sclerosis (MS). People with MS may experience a variety of symptoms, including blurry vision and difficulty walking. They often visit different specialists for each problem. Research published in a 2023 issue of the International Journal of MS Care found that AI can detect patterns across health records and identify signs of MS that individual doctors might overlook. Researchers at University College London used the AI MindGlide to detect patterns in MS patients’ MRI scans and — in a matter of seconds — recommend treatment plans that are most likely to be effective.

He returns repeatedly to the “black box” problem: when a model produces an output that neither users nor even developers can readily explain, organizations may not be able to justify decisions or detect errors. For regulated decisions—such as credit and lending—he points to the importance of transparency and “explainable AI,” meaning systems and processes that allow humans to trace the logic, challenge results, and correct them when necessary.
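The property explainable AI aims to preserve can be illustrated with a toy credit scorecard whose logic a human can trace term by term. The features and weights below are invented for illustration, not an actual lending model:

```python
# Toy, human-traceable credit scorecard: each feature contributes an
# inspectable number of points, so a decision can be explained,
# challenged, and corrected factor by factor.
WEIGHTS = {"on_time_payment_rate": 40, "debt_to_income_ok": 25, "years_of_history": 3}

def score(applicant):
    contributions = {
        "on_time_payment_rate": WEIGHTS["on_time_payment_rate"] * applicant["on_time_payment_rate"],
        "debt_to_income_ok": WEIGHTS["debt_to_income_ok"] * applicant["debt_to_income_ok"],
        # History is capped at 10 years so one factor cannot dominate.
        "years_of_history": WEIGHTS["years_of_history"] * min(applicant["years_of_history"], 10),
    }
    return sum(contributions.values()), contributions

total, parts = score({"on_time_payment_rate": 0.9, "debt_to_income_ok": 1, "years_of_history": 4})
# Every point in `total` is attributable to a named factor — the trace-the-logic
# property that black-box models lose and explainable-AI techniques try to recover.
for factor, pts in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"{factor}: {pts:+.1f}")
print(f"total: {total:.1f}")
```

A real credit model is far more complex, but the regulatory requirement is the same: a human must be able to point at the factors behind a decision and contest them.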

Finally, Iorio argues that responsibility cannot be delegated to a tool. Using the 2018 fatal crash involving an Uber self-driving vehicle as an example, he shows how accountability tends to fall back on humans and organizations even when automated systems are involved. “AI can execute, but it cannot care… it cannot be held responsible.” For that reason, he recommends defining who owns AI-assisted decisions, building checkpoints for human review, and testing outputs against organizational values so that efficiency does not override fairness, safety, or long-term trust.

Andrea Iorio hosts the Metanoia Lab podcast and NVIDIA’s Vem AI podcast in Brazil. He is an MBA professor at Fundação Dom Cabral, a columnist for MIT Technology Review Brazil, and a frequent speaker on AI and leadership.

Tuesday, March 31, 2026

The following is sample code for getting custom insights into GitHub issues opened against a repository on a periodic basis. It is a sequence of standalone scripts: collect recently closed issues and their linked PRs, tally open/closed activity, map issues to the Terraform modules their PRs touch, and plot the results.

#! /usr/bin/python

import os, requests, json, datetime, re

REPO = os.environ["REPO"]

TOKEN = os.environ["GH_TOKEN"]

WINDOW_DAYS = int(os.environ.get("WINDOW_DAYS","7"))

HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json, application/vnd.github.mockingbird-preview+json", "X-GitHub-Api-Version": "2026-03-10"}

since = (datetime.datetime.utcnow() - datetime.timedelta(days=WINDOW_DAYS)).isoformat() + "Z"

# ---- Helpers ----

def gh_get(url, params=None):

  r = requests.get(url, headers=HEADERS, params=params)

  r.raise_for_status()

  return r.json()

def gh_get_text(url):

  r = requests.get(url, headers=HEADERS)

  r.raise_for_status()

  return r.text

issues_url = f"https://api.github.com/repos/{REPO}/issues"

params = {"state":"closed","since":since,"per_page":100}

items = gh_get(issues_url, params=params)  # NOTE: first page only (up to 100); paginate for larger windows

issues = []

for i in items:

  if "pull_request" in i:

    continue

  comments = gh_get(i["comments_url"], params={"per_page":100})

  pr_urls = set()

  for c in comments:

    body = c.get("body","") or ""

    for m in re.findall(r"https://github\.com/[^/\s]+/[^/\s]+/pull/\d+", body):

      pr_urls.add(m)

    for m in re.findall(r"(?:^|\s)#(\d+)\b", body):  # assumes bare #N shorthand refers to a PR in this repo

      pr_urls.add(f"https://github.com/{REPO}/pull/{m}")

  issues.append({

    "number": i["number"],

    "title": i.get("title",""),

    "user": i.get("user",{}).get("login",""),

    "created_at": i.get("created_at"),

    "closed_at": i.get("closed_at"),

    "html_url": i.get("html_url"),

    "comments": [{"id":c.get("id"), "body":c.get("body",""), "created_at":c.get("created_at")} for c in comments],

    "pr_urls": sorted(pr_urls)

  })

with open("issues.json","w") as f:

  json.dump(issues, f, indent=2)

print(f"WROTE_ISSUES={len(issues)}")

import os, requests, datetime, pandas as pd

REPO = os.environ["REPO"]

TOKEN = os.environ["GH_TOKEN"]

WINDOW_DAYS = int(os.environ.get("WINDOW_DAYS", "7"))

headers = {

  "Authorization": f"Bearer {TOKEN}",

  "Accept": "application/vnd.github+json",

}

since = (datetime.datetime.utcnow() - datetime.timedelta(days=WINDOW_DAYS)).isoformat() + "Z"

url = f"https://api.github.com/repos/{REPO}/issues"

def fetch(state):

  items = []

  page = 1

  while True:

    r = requests.get(

      url,

      headers=headers,

      params={"state": state, "since": since, "per_page": 100, "page": page},

    )

    r.raise_for_status()

    page_items = r.json()

    # Filter out PRs, but paginate on the raw page size so a page containing
    # only PRs does not end the loop early.
    batch = [i for i in page_items if "pull_request" not in i]

    items.extend(batch)

    if len(page_items) < 100:

      break

    page += 1

  return items

opened = fetch("open")

closed = fetch("closed")

df = pd.DataFrame(

  [

    {"metric": "opened", "count": len(opened)},

    {"metric": "closed", "count": len(closed)},

  ]

)

df.to_csv("issue_activity.csv", index=False)

print(df)

# NOTE: this script reuses the gh_get() and gh_get_text() helpers defined in the first script above.
import os, re, json, datetime, requests

import hcl2

import pandas as pd

REPO = os.environ["GITHUB_REPOSITORY"]

GH_TOKEN = os.environ["GH_TOKEN"]

HEADERS = {"Authorization": f"Bearer {GH_TOKEN}", "Accept": "application/vnd.github+json, application/vnd.github.mockingbird-preview+json", "X-GitHub-Api-Version": "2026-03-10"}

# ---- Time window (last 7 days) ----

since = (datetime.datetime.utcnow() - datetime.timedelta(days=7)).isoformat() + "Z"

# ---- Helpers ----

def list_closed_issues():

  # Issues API returns both issues and PRs; filter out PRs.

  url = f"https://api.github.com/repos/{REPO}/issues"

  items = gh_get(url, params={"state":"closed","since":since,"per_page":100})  # first page only

  return [i for i in items if "pull_request" not in i]

PR_HTML_URL_RE = re.compile(

    r"https?://github\.com/(?P<owner>[^/\s]+)/(?P<repo>[^/\s]+)/pull/(?P<num>\d+)",

    re.IGNORECASE,

)

PR_API_URL_RE = re.compile(

    r"https?://api\.github\.com/repos/(?P<owner>[^/\s]+)/(?P<repo>[^/\s]+)/pulls/(?P<num>\d+)",

    re.IGNORECASE,

)

# Shorthand references that might appear in text:

# - #123 (assumed to be same repo)

# - owner/repo#123 (explicit cross-repo)

SHORTHAND_SAME_REPO_RE = re.compile(r"(?<!\w)#(?P<num>\d+)\b")

SHORTHAND_CROSS_REPO_RE = re.compile(

    r"(?P<owner>[A-Za-z0-9_.-]+)/(?P<repo>[A-Za-z0-9_.-]+)#(?P<num>\d+)\b"

)

def _normalize_html_pr_url(owner: str, repo: str, num: int) -> str:

    return f"https://github.com/{owner}/{repo}/pull/{int(num)}"

def _collect_from_text(text: str, default_owner: str, default_repo: str) -> set:

    """Extract candidate PR URLs from free text (body/comments/events text)."""

    found = set()

    if not text:

        return found

    # 1) Direct HTML PR URLs

    for m in PR_HTML_URL_RE.finditer(text):

        found.add(_normalize_html_pr_url(m.group("owner"), m.group("repo"), m.group("num")))

    # 2) API PR URLs -> convert to HTML

    for m in PR_API_URL_RE.finditer(text):

        found.add(_normalize_html_pr_url(m.group("owner"), m.group("repo"), m.group("num")))

    # 3) Cross-repo shorthand: owner/repo#123 (we will treat it as PR URL candidate)

    for m in SHORTHAND_CROSS_REPO_RE.finditer(text):

        found.add(_normalize_html_pr_url(m.group("owner"), m.group("repo"), m.group("num")))

    # 4) Same-repo shorthand: #123

    for m in SHORTHAND_SAME_REPO_RE.finditer(text):

        found.add(_normalize_html_pr_url(default_owner, default_repo, m.group("num")))

    return found

def _paginate_gh_get(url, headers=None, per_page=100):

    """Generator: fetch all pages until fewer than per_page are returned."""

    page = 1

    while True:

        data = gh_get(url, params={"per_page": per_page, "page": page})

        if not isinstance(data, list) or len(data) == 0:

            break

        for item in data:

            yield item

        if len(data) < per_page:

            break

        page += 1

def extract_pr_urls_from_issue(issue_number: int):

    """

    Extract PR URLs associated with an issue by scanning:

      - Issue body

      - Issue comments

      - Issue events (including 'mentioned', 'cross-referenced', etc.)

      - Issue timeline (most reliable for cross references)

    Returns a sorted list of unique, normalized HTML PR URLs.

    Requires:

      - REPO = "owner/repo"

      - gh_get(url, params=None, headers=None) is available

    """

    owner, repo = REPO.split("/", 1)

    pr_urls = set()

    # Baseline Accept header for REST v3 + timeline support.

    # The timeline historically required a preview header. Keep both for compatibility.

    base_headers = {

        "Accept": "application/vnd.github+json, application/vnd.github.mockingbird-preview+json"

    }

    # 1) Issue body

    issue_url = f"https://api.github.com/repos/{REPO}/issues/{issue_number}"

    issue = gh_get(issue_url)

    if isinstance(issue, dict):

        body = issue.get("body") or ""

        pr_urls |= _collect_from_text(body, owner, repo)

        # If this issue IS itself a PR (when called with a PR number), make sure we don't add itself erroneously

        # We won't add unless text contains it anyway; still fine.

    # 2) All comments

    comments_url = f"https://api.github.com/repos/{REPO}/issues/{issue_number}/comments"

    for c in _paginate_gh_get(comments_url):

        body = c.get("body") or ""

        pr_urls |= _collect_from_text(body, owner, repo)

    # 3) Issue events (event stream can have 'mentioned', 'cross-referenced', etc.)

    events_url = f"https://api.github.com/repos/{REPO}/issues/{issue_number}/events"

    for ev in _paginate_gh_get(events_url):

        # (a) Free-text fields: some events carry body/commit messages, etc.

        if isinstance(ev, dict):

            body = ev.get("body") or ""

            pr_urls |= _collect_from_text(body, owner, repo)

            # (b) Structured cross-reference (best: 'cross-referenced' events)

            # If the source.issue has 'pull_request' key, it's a PR; use its html_url.

            if ev.get("event") == "cross-referenced":

                src = ev.get("source") or {}

                issue_obj = src.get("issue") or {}

                pr_obj = issue_obj.get("pull_request") or {}

                html_url = issue_obj.get("html_url")

                if pr_obj and html_url and "/pull/" in html_url:

                    pr_urls.add(html_url)

                # Fallback: If not marked but looks like a PR in URL

                elif html_url and "/pull/" in html_url:

                    pr_urls.add(html_url)

        # (c) Also include 'mentioned' events (broadened): inspect whatever text fields exist

        # Already covered via 'body' text extraction

    # 4) Timeline API (the most complete for references)

    timeline_url = f"https://api.github.com/repos/{REPO}/issues/{issue_number}/timeline"

    for item in _paginate_gh_get(timeline_url):

        if not isinstance(item, dict):

            continue

        # Free-text scan on any plausible string field

        for key in ("body", "message", "title", "commit_message", "subject"):

            val = item.get(key)

            if isinstance(val, str):

                pr_urls |= _collect_from_text(val, owner, repo)

        # Structured cross-reference payloads

        if item.get("event") == "cross-referenced":

            src = item.get("source") or {}

            issue_obj = src.get("issue") or {}

            pr_obj = issue_obj.get("pull_request") or {}

            html_url = issue_obj.get("html_url")

            if pr_obj and html_url and "/pull/" in html_url:

                pr_urls.add(html_url)

            elif html_url and "/pull/" in html_url:

                pr_urls.add(html_url)

        # Some timeline items are themselves issues/PRs with html_url

        html_url = item.get("html_url")

        if isinstance(html_url, str) and "/pull/" in html_url:

            pr_urls.add(html_url)

        # Occasionally the timeline includes API-style URLs

        api_url = item.get("url")

        if isinstance(api_url, str):

            m = PR_API_URL_RE.search(api_url)

            if m:

                pr_urls.add(_normalize_html_pr_url(m.group("owner"), m.group("repo"), m.group("num")))

    # Final normalization: keep only HTML PR URLs and sort

    pr_urls = {m.group(0) for url in pr_urls for m in [PR_HTML_URL_RE.search(url)] if m}

    return sorted(pr_urls)

def pr_number_from_url(u):

  m = re.search(r"/pull/(\d+)", u)

  return int(m.group(1)) if m else None

def list_pr_files(pr_number):

  url = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}/files"

  files = []

  page = 1

  while True:

    batch = gh_get(url, params={"per_page":100,"page":page})

    if not batch:

      break

    files.extend(batch)

    page += 1

  return files

def get_pr_head_sha(pr_number):

  url = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}"

  pr = gh_get(url)

  return pr["head"]["sha"]

def get_file_at_sha(path, sha):

  # Use contents API to fetch file at a specific ref (sha).

  url = f"https://api.github.com/repos/{REPO}/contents/{path}"

  r = requests.get(url, headers=HEADERS, params={"ref": sha})

  if r.status_code == 404:

    return None

  r.raise_for_status()

  data = r.json()

  if isinstance(data, dict) and data.get("type") == "file" and data.get("download_url"):

    return gh_get_text(data["download_url"])

  return None

def extract_module_term_from_source(src: str) -> str | None:

    """

    Given a module 'source' string, return the last path segment between the

    final '/' and the '?' (or end of string if '?' is absent).

    Examples:

      git::https://...//modules/container/kubernetes-service?ref=v4.0.15 -> 'kubernetes-service'

      ../modules/network/vnet -> 'vnet'

      registry- or other sources with no '/' -> returns None

    """

    if not isinstance(src, str) or not src:

        return None

    # Strip query string

    path = src.split('?', 1)[0]

    # For git:: URLs that include a double-slash path component ("//modules/..."),

    # keep the right-most path component regardless of scheme.

    # Normalize backslashes just in case.

    path = path.replace('\\', '/')

    # Remove trailing slashes

    path = path.rstrip('/')

    # Split and take last non-empty part

    parts = [p for p in path.split('/') if p]

    if not parts:

        return None

    return parts[-1]

def parse_module_terms_from_tf(tf_text):

    """

    Parse HCL to find module blocks and return the set of module 'terms'

    extracted from their 'source' attribute (last segment before '?').

    """

    terms = set()

    try:

        obj = hcl2.loads(tf_text)

    except Exception:

        return terms

    mods = obj.get("module", [])

    # module is usually list of dicts: [{ "name": { "source": "...", ... }}, ...]

    def add_src_term(src_str: str):

        term = extract_module_term_from_source(src_str)

        if term:

            terms.add(term)

    if isinstance(mods, list):

        for item in mods:

            if isinstance(item, dict):

                for _, body in item.items():

                    if isinstance(body, dict):

                        src = body.get("source")

                        if isinstance(src, str):

                            add_src_term(src)

    elif isinstance(mods, dict):

        for _, body in mods.items():

            if isinstance(body, dict):

                src = body.get("source")

                if isinstance(src, str):

                    add_src_term(src)

    return terms

def parse_module_sources_from_tf(tf_text):

  # Extract module "x" { source = "..." } blocks.

  sources = set()

  try:

    obj = hcl2.loads(tf_text)

  except Exception:

    return sources

  mods = obj.get("module", [])

  # module is usually list of dicts: [{ "name": { "source": "...", ... }}, ...]

  if isinstance(mods, list):

    for item in mods:

      if isinstance(item, dict):

        for _, body in item.items():

          if isinstance(body, dict):

            src = body.get("source")

            if isinstance(src, str):

              sources.add(src)

  elif isinstance(mods, dict):

    for _, body in mods.items():

      if isinstance(body, dict):

        src = body.get("source")

        if isinstance(src, str):

          sources.add(src)

  return sources

def normalize_local_module_path(source, app_dir):

  # Only resolve local paths within repo; ignore registry/git/http sources.

  if source.startswith("./") or source.startswith("../"):

    # app_dir is like "workload/appA"

    import posixpath

    return posixpath.normpath(posixpath.join(app_dir, source))

  return None

def list_repo_tf_files_under(dir_path, sha):

  # Best-effort: use git (checked out main) for listing; then fetch content at sha.

  # We only need paths; use `git ls-tree` against sha for accuracy.

  import subprocess

  try:

    out = subprocess.check_output(["git","ls-tree","-r","--name-only",sha,dir_path], text=True)

    paths = [p.strip() for p in out.splitlines() if p.strip().endswith(".tf")]

    return paths

  except Exception:

    return []

def collect_module_terms_for_app(app_dir, sha):

    """

    Scan all .tf in the app dir at PR head sha; extract:

      1) module terms directly used by the app

      2) for any local module sources, recurse one level and extract module terms defined there

    """

    terms = set()

    module_dirs = set()

    tf_paths = list_repo_tf_files_under(app_dir, sha)

    for p in tf_paths:

        txt = get_file_at_sha(p, sha)

        if not txt:

            continue

        # Collect module terms directly in the app

        terms |= parse_module_terms_from_tf(txt)

        # Track local modules so we can scan their contents

        for src in parse_module_sources_from_tf(txt):

            local = normalize_local_module_path(src, app_dir)

            if local:

                module_dirs.add(local)

    # Scan local module dirs for additional module terms (one level deep)

    for mdir in sorted(module_dirs):

        m_tf_paths = list_repo_tf_files_under(mdir, sha)

        for p in m_tf_paths:

            txt = get_file_at_sha(p, sha)

            if not txt:

                continue

            terms |= parse_module_terms_from_tf(txt)

    return terms

# ---- Main: issues -> PRs -> touched apps -> module terms ----

issues = list_closed_issues()

issue_to_terms = {} # issue_number -> set(module_terms)

for issue in issues:

  inum = issue["number"]

  pr_urls = extract_pr_urls_from_issue(inum)

  pr_numbers = sorted({pr_number_from_url(u) for u in pr_urls if pr_number_from_url(u)})

  if not pr_numbers:

    continue

  terms_for_issue = set()

  for prn in pr_numbers:

    sha = get_pr_head_sha(prn)

    files = list_pr_files(prn)

    # Identify which workload apps are touched by this PR.

    # Requirement: multiple application folders within "workload/".

    touched_apps = set()

    for f in files:

      path = f.get("filename","")

      if not path.startswith("workload/"):

        continue

      parts = path.split("/")

      if len(parts) >= 2:

        touched_apps.add("/".join(parts[:2])) # workload/<app>

    # For each touched app, compute module terms by scanning app + local modules.

    for app_dir in sorted(touched_apps):

      terms_for_issue |= collect_module_terms_for_app(app_dir, sha)

  if terms_for_issue:

    issue_to_terms[inum] = sorted(terms_for_issue)

# Build severity distribution: "severity" = number of issues touching each module term.

rows = []

for inum, terms in issue_to_terms.items():

  for t in set(terms):

    rows.append({"issue": inum, "module_term": t})

print(f"rows={len(rows)}")

df = pd.DataFrame(rows)

df.to_csv("severity_data.csv", index=False)

# Also write a compact JSON for debugging/audit.

with open("issue_to_module_terms.json","w") as f:

  json.dump(issue_to_terms, f, indent=2, sort_keys=True)

print(f"Closed issues considered: {len(issues)}")

print(f"Issues with PR-linked module impact: {len(issue_to_terms)}")

# NOTE: this script reuses helpers defined in the scripts above (gh_get, gh_get_text,
# pr_number_from_url, get_pr_head_sha, list_pr_files, list_repo_tf_files_under,
# get_file_at_sha, parse_module_sources_from_tf, normalize_local_module_path,
# collect_module_terms_for_app).
import os, json, re, requests, subprocess

import hcl2

REPO = os.environ["REPO"]

TOKEN = os.environ["GH_TOKEN"]

HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json, application/vnd.github.mockingbird-preview+json", "X-GitHub-Api-Version": "2026-03-10"}

with open("issues.json") as f:

  issues = json.load(f)

issue_to_terms = {}

issue_turnaround = {}

module_deps = {} # app_dir -> set(module paths it references)

for issue in issues:

  inum = issue["number"]

  created = issue.get("created_at")

  closed = issue.get("closed_at")

  if created and closed:

    from datetime import datetime

    fmt = "%Y-%m-%dT%H:%M:%SZ"

    try:

      dt_created = datetime.strptime(created, fmt)

      dt_closed = datetime.strptime(closed, fmt)

      delta_days = (dt_closed - dt_created).total_seconds() / 86400.0

    except Exception:

      delta_days = None

  else:

    delta_days = None

  issue_turnaround[inum] = delta_days

  pr_urls = issue.get("pr_urls",[])

  pr_numbers = sorted({pr_number_from_url(u) for u in pr_urls if pr_number_from_url(u)})

  terms_for_issue = set()

  for prn in pr_numbers:

    sha = get_pr_head_sha(prn)

    files = list_pr_files(prn)

    touched_apps = set()

    for f in files:

      path = f.get("filename","")

      if path.startswith("workload/"):

        parts = path.split("/")

        if len(parts) >= 2:

          touched_apps.add("/".join(parts[:2]))

    for app_dir in sorted(touched_apps):

      terms_for_issue |= collect_module_terms_for_app(app_dir, sha)

      # collect module sources for dependency graph

      # scan app tf files for module sources at PR head

      tf_paths = list_repo_tf_files_under(app_dir, sha)

      for p in tf_paths:

        txt = get_file_at_sha(p, sha)

        if not txt:

          continue

        for src in parse_module_sources_from_tf(txt):

          local = normalize_local_module_path(src, app_dir)

          if local:

            module_deps.setdefault(app_dir, set()).add(local)

  if terms_for_issue:

    issue_to_terms[inum] = sorted(terms_for_issue)

rows = []

for inum, terms in issue_to_terms.items():

  for t in set(terms):

    rows.append({"issue": inum, "module_term": t})

import pandas as pd

df = pd.DataFrame(rows)

df.to_csv("severity_data.csv", index=False)

ta_rows = []

for inum, days in issue_turnaround.items():

  ta_rows.append({"issue": inum, "turnaround_days": days})

pd.DataFrame(ta_rows).to_csv("turnaround.csv", index=False)

with open("issue_to_module_terms.json","w") as f:

  json.dump(issue_to_terms, f, indent=2)

with open("issue_turnaround.json","w") as f:

  json.dump(issue_turnaround, f, indent=2)

with open("module_deps.json","w") as f:

  json.dump({k: sorted(list(v)) for k,v in module_deps.items()}, f, indent=2)

print(f"ISSUES_WITH_TYPES={len(issue_to_terms)}")

import os, json, datetime, glob

import pandas as pd

import matplotlib.pyplot as plt

import seaborn as sns

import networkx as nx

ts = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")

os.makedirs("history", exist_ok=True)

def read_csv(file_name):

    # Read the given CSV if present; return None on a missing file or parse error.
    df = None

    if os.path.exists(file_name):

        try:

            df = pd.read_csv(file_name)

        except Exception:

            df = None

    return df

# --- Severity bar (existing) ---

if os.path.exists("severity_data.csv"):

  df = read_csv("severity_data.csv")

  if df is None:

     df = pd.DataFrame(columns=["issue", "module_term"])

  counts = df.groupby("module_term")["issue"].nunique().sort_values(ascending=False)

else:

  counts = pd.Series(dtype=int)

png_sev = f"history/severity-by-module-{ts}.png"

plt.figure(figsize=(12,6))

if not counts.empty:

  counts.plot(kind="bar")

  plt.title("Issue frequency by module term")

  plt.xlabel("module_term")

  plt.ylabel("number of closed issues touching module term")

else:

  plt.text(0.5, 0.5, "No module-impacting issues in window", ha="center", va="center")

  plt.axis("off")

plt.tight_layout()

plt.savefig(png_sev)

plt.clf()

# --- Heatmap: module_term x issue (binary or counts) ---

heat_png = f"history/heatmap-module-issues-{ts}.png"

if os.path.exists("severity_data.csv"):

  mat = read_csv("severity_data.csv")

  if mat is None:

     mat = pd.DataFrame(columns=["issue", "module_term"])

  if not mat.empty:

    pivot = mat.pivot_table(index="module_term", columns="issue", aggfunc='size', fill_value=0)

    # Optionally cluster or sort by total counts

    pivot['total'] = pivot.sum(axis=1)

    pivot = pivot.sort_values('total', ascending=False).drop(columns=['total'])

    # limit columns for readability (most recent/top issues)

    if pivot.shape[1] > 100:

      pivot = pivot.iloc[:, :100]

    plt.figure(figsize=(14, max(6, 0.2 * pivot.shape[0])))

    sns.heatmap(pivot, cmap="YlOrRd", cbar=True)

    plt.title("Heatmap: module terms (rows) vs issues (columns)")

    plt.xlabel("Issue number (truncated)")

    plt.ylabel("module terms")

    plt.tight_layout()

    plt.savefig(heat_png)

    plt.clf()

  else:

    plt.figure(figsize=(6,2))

    plt.text(0.5,0.5,"No data for heatmap",ha="center",va="center")

    plt.axis("off")

    plt.savefig(heat_png)

    plt.clf()

else:

  plt.figure(figsize=(6,2))

  plt.text(0.5,0.5,"No data for heatmap",ha="center",va="center")

  plt.axis("off")

  plt.savefig(heat_png)

  plt.clf()

# --- Trend lines: aggregate historical severity_data.csv files in history/ ---

trend_png = f"history/trendlines-module-{ts}.png"

# collect historical CSVs that match severity_data pattern

hist_files = sorted(glob.glob("history/*severity-data-*.csv") + glob.glob("history/*severity_data.csv") + glob.glob("history/*severity-by-module-*.csv"))

# also include current run's severity_data.csv

if os.path.exists("severity_data.csv"):

  hist_files.append("severity_data.csv")

# Build weekly counts per module terms by deriving timestamp from filenames where possible

trend_df = pd.DataFrame()

for f in hist_files:

  try:

    # attempt to extract timestamp from filename

    import re

    m = re.search(r"(\d{8}-\d{6})", f)

    ts_label = m.group(1) if m else os.path.getmtime(f)

    tmp = read_csv(f)

    if tmp is None or tmp.empty:

      continue

    counts_tmp = tmp.groupby("module_term")["issue"].nunique().rename(ts_label)

    trend_df = pd.concat([trend_df, counts_tmp], axis=1)

  except Exception:

    continue

if not trend_df.empty:

  trend_df = trend_df.fillna(0).T

  # convert index to datetime where possible

  try:

    trend_df.index = pd.to_datetime(trend_df.index, format="%Y%m%d-%H%M%S", errors='coerce').fillna(pd.to_datetime(trend_df.index, unit='s'))

  except Exception:

    pass

  plt.figure(figsize=(14,6))

  # plot top N module_terms by latest total

  latest = trend_df.iloc[-1].sort_values(ascending=False).head(8).index.tolist()

  for col in latest:

    plt.plot(trend_df.index, trend_df[col], marker='o', label=col)

  plt.legend(loc='best', fontsize='small')

  plt.title("Trend lines: issue frequency over time for top module_terms")

  plt.xlabel("time")

  plt.ylabel("issue count")

  plt.xticks(rotation=45)

  plt.tight_layout()

  plt.savefig(trend_png)

  plt.clf()

else:

  plt.figure(figsize=(8,2))

  plt.text(0.5,0.5,"No historical data for trend lines",ha="center",va="center")

  plt.axis("off")

  plt.savefig(trend_png)

  plt.clf()

# --- Dependency graph: build directed graph from module_deps.json ---

dep_png = f"history/dependency-graph-{ts}.png"

if os.path.exists("module_deps.json"):

  with open("module_deps.json") as f:

    deps = json.load(f)

  G = nx.DiGraph()

  # add edges app -> module

  for app, mods in deps.items():

    G.add_node(app, type='app')

    for m in mods:

      G.add_node(m, type='module')

      G.add_edge(app, m)

  if len(G.nodes) == 0:

    plt.figure(figsize=(6,2))

    plt.text(0.5,0.5,"No dependency data",ha="center",va="center")

    plt.axis("off")

    plt.savefig(dep_png)

    plt.clf()

  else:

    plt.figure(figsize=(12,8))

    pos = nx.spring_layout(G, k=0.5, iterations=50)

    node_colors = ['#1f78b4' if G.nodes[n].get('type')=='app' else '#33a02c' for n in G.nodes()]

    nx.draw_networkx_nodes(G, pos, node_size=600, node_color=node_colors)

    nx.draw_networkx_edges(G, pos, arrows=True, arrowstyle='->', arrowsize=12, edge_color='#888888')

    nx.draw_networkx_labels(G, pos, font_size=8)

    plt.title("Module dependency graph (apps -> local modules)")

    plt.axis('off')

    plt.tight_layout()

    plt.savefig(dep_png)

    plt.clf()

else:

  plt.figure(figsize=(6,2))

  plt.text(0.5,0.5,"No dependency data",ha="center",va="center")

  plt.axis("off")

  plt.savefig(dep_png)

  plt.clf()

# --- Turnaround chart (existing) ---

ta_png = f"history/turnaround-by-issue-{ts}.png"

if os.path.exists("turnaround.csv"):

  ta = read_csv("turnaround.csv")

  if ta is None:

     ta = pd.DataFrame(columns=["issue", "turnaround_days"])

  ta = ta.dropna(subset=["turnaround_days"])

  if not ta.empty:

    ta_sorted = ta.sort_values("turnaround_days", ascending=False).head(50)

    plt.figure(figsize=(12,6))

    plt.bar(ta_sorted["issue"].astype(str), ta_sorted["turnaround_days"])

    plt.xticks(rotation=90)

    plt.title("Turnaround time (days) for closed issues in window")

    plt.xlabel("Issue number")

    plt.ylabel("Turnaround (days)")

    plt.tight_layout()

    plt.savefig(ta_png)

    plt.clf()

  else:

    plt.figure(figsize=(8,2))

    plt.text(0.5,0.5,"No turnaround data available",ha="center",va="center")

    plt.axis("off")

    plt.savefig(ta_png)

    plt.clf()

else:

  plt.figure(figsize=(8,2))

  plt.text(0.5,0.5,"No turnaround data available",ha="center",va="center")

  plt.axis("off")

  plt.savefig(ta_png)

  plt.clf()

# --- Issue activity charts (opened vs closed) ---

activity_png = f"history/issue-activity-{ts}.png"

if os.path.exists("issue_activity.csv"):

    act = pd.read_csv("issue_activity.csv")

    plt.figure(figsize=(6,4))

    plt.bar(act["metric"], act["count"], color=["#1f78b4", "#33a02c"])

    plt.title("GitHub issue activity in last window")

    plt.xlabel("Issue state")

    plt.ylabel("Count")

    plt.tight_layout()

    plt.savefig(activity_png)

    plt.clf()

else:

    plt.figure(figsize=(6,2))

    plt.text(0.5, 0.5, "No issue activity data", ha="center", va="center")

    plt.axis("off")

    plt.savefig(activity_png)

    plt.clf()

# --- AI summary (who wants what) ---

if os.path.exists("issues.json"):

  with open("issues.json") as f:

    issues = json.load(f)

else:

  issues = []

condensed = []

for i in issues:

  condensed.append({

    "number": i.get("number"),

    "user": i.get("user"),

    "title": i.get("title"),

    "html_url": i.get("html_url")

  })

with open("issues_for_ai.json","w") as f:

  json.dump(condensed, f, indent=2)

# call OpenAI if key present (same approach as before)

import subprocess

OPENAI_KEY = os.environ.get("OPENAI_API_KEY")

ai_text = "AI summary skipped (no OPENAI_API_KEY)."

if OPENAI_KEY:

  prompt = ("You are given a JSON array of GitHub issues with fields: number, user, title, html_url. "

            "Produce a concise list of one-line 'who wants what' statements, one per issue, in plain text. "

            "Format: '#<number> — <user> wants <succinct request derived from title>'. "

            "Do not add commentary.")

  payload = {

    "model": "gpt-4o-mini",

    "messages": [{"role":"system","content":"You are a concise summarizer."},

                 {"role":"user","content": prompt + "\n\nJSON:\n" + json.dumps(condensed)[:15000]}],

    "temperature":0.2,

    "max_tokens":400

  }

  proc = subprocess.run([

    "curl","-sS","https://api.openai.com/v1/chat/completions",

    "-H", "Content-Type: application/json",

    "-H", f"Authorization: Bearer {OPENAI_KEY}",

    "-d", json.dumps(payload)

  ], capture_output=True, text=True)

  if proc.returncode == 0 and proc.stdout:

    try:

      resp = json.loads(proc.stdout)

      ai_text = resp["choices"][0]["message"]["content"].strip()

    except Exception:

      ai_text = "AI summary unavailable (parsing error)."

# --- Write markdown report combining all visuals ---

md_path = f"history/severity-report-{ts}.md"

with open(md_path, "w") as f:

  f.write("# Weekly Terraform module hotspot report\n\n")

  f.write(f"**Window (days):** {os.environ.get('WINDOW_DAYS','7')}\n\n")

  f.write("## AI Summary (who wants what)\n\n")

  f.write("```\n")

  f.write(ai_text + "\n")

  f.write("```\n\n")

  f.write("## GitHub issue activity (last window)\n\n")

  f.write(f"![{os.path.basename(activity_png)}]"

          f"({os.path.basename(activity_png)})\n\n")

  if os.path.exists("issue_activity.csv"):

      act = pd.read_csv("issue_activity.csv")

      f.write(act.to_markdown(index=False) + "\n\n")

  f.write("## Top module terms by issue frequency\n\n")

  if not counts.empty:

    f.write("![" + os.path.basename(png_sev) + "](" + os.path.basename(png_sev) + ")\n\n")

    f.write(counts.head(30).to_frame("issues").to_markdown() + "\n\n")

  else:

    f.write("No module-impacting issues found in the selected window.\n\n")

  f.write("## Heatmap: module terms vs issues\n\n")

  f.write("![" + os.path.basename(heat_png) + "](" + os.path.basename(heat_png) + ")\n\n")

  f.write("## Trend lines: historical issue frequency for top module terms\n\n")

  f.write("![" + os.path.basename(trend_png) + "](" + os.path.basename(trend_png) + ")\n\n")

  f.write("## Dependency graph: apps -> local modules\n\n")

  f.write("![" + os.path.basename(dep_png) + "](" + os.path.basename(dep_png) + ")\n\n")

  f.write("## Turnaround time for closed issues (days)\n\n")

  f.write("![" + os.path.basename(ta_png) + "](" + os.path.basename(ta_png) + ")\n\n")

  f.write("## Data artifacts\n\n")

  f.write("- `severity_data.csv` — per-issue module term mapping\n")

  f.write("- `turnaround.csv` — per-issue turnaround in days\n")

  f.write("- `issue_to_module_terms.json` — mapping used to build charts\n")

  f.write("- `module_deps.json` — module dependency data used for graph\n")

# Save current CSVs into history with timestamp for future trend aggregation

try:

  import shutil

  if os.path.exists("severity_data.csv"):

    shutil.copy("severity_data.csv", f"history/severity-data-{ts}.csv")

  if os.path.exists("turnaround.csv"):

    shutil.copy("turnaround.csv", f"history/turnaround-{ts}.csv")

except Exception:

  pass

print(f"REPORT_MD={md_path}")

print(f"REPORT_PNG={png_sev}")

print(f"REPORT_HEAT={heat_png}")

print(f"REPORT_TREND={trend_png}")

print(f"REPORT_DEP={dep_png}")

print(f"REPORT_TA={ta_png}")

import os, re

from pathlib import Path

hist = Path("history")

hist.mkdir(exist_ok=True)

# Group report artifacts by the run timestamp in the filename, e.g.

# severity-report-YYYYMMDD-HHMMSS.md and severity-by-module-YYYYMMDD-HHMMSS.png

pat = re.compile(r"-(\d{8}-\d{6})\.(md|png)$")

groups = {}

for p in hist.iterdir():

  m = pat.search(p.name)

  if not m:

    continue

  ts = m.group(1)

  groups.setdefault(ts, []).append(p)

# Keep newest 10 timestamps

timestamps = sorted(groups.keys(), reverse=True)

keep = set(timestamps[:10])

drop = [p for ts, files in groups.items() if ts not in keep for p in files]

for p in drop:

  p.unlink()

print(f"Kept {len(keep)} report sets; pruned {len(drop)} files.")

---

This produces sample output, including the various JSON and CSV files mentioned above. We list just one of them, the issue-activity table:

     metric  count

0   #opened      8

1   #closed      8

Care must be taken not to run into API rate limits. A throttled GitHub request, for example, returns:

{"message": "API rate limit exceeded for <client-ip-address>", "documentation_url": "https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting"}
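One defensive pattern is to inspect the rate-limit headers that the GitHub API returns and pause before the quota is exhausted. A minimal sketch, assuming the documented `X-RateLimit-Remaining` and `X-RateLimit-Reset` response headers (the helper name and the `floor` threshold are our own):

```python
import time

def seconds_to_wait(headers, now=None, floor=5):
    """How long to sleep before the next GitHub API call.

    headers: response headers as a dict; returns 0 while quota remains.
    """
    now = time.time() if now is None else now
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    reset = float(headers.get("X-RateLimit-Reset", now))
    if remaining > floor:
        return 0
    # Sleep until the quota window resets, plus a one-second buffer.
    return max(0.0, reset - now) + 1
```

Calling `time.sleep(seconds_to_wait(resp.headers))` between requests keeps the collection loop under the limit, at the cost of some latency.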


Monday, March 30, 2026

 In drone-based video sensing, the captured image stream can be understood as a temporally ordered sequence of highly correlated visual frames, where consecutive frames differ only incrementally due to the drone’s smooth motion and relatively stable environment. This continuity induces substantial redundancy, making it computationally advantageous to model frame progression in a formal, automata-theoretic framework. By conceptualizing frames as symbols in a string, the video stream can be treated analogously to a sequence of characters subjected to pattern recognition techniques such as the Knuth–Morris–Pratt (KMP) algorithm. In KMP, the presence of repeating substrings enables efficient pattern matching through the construction of partial match tables that avoid redundant computations. Similarly, in video data, repeated or near-identical frames may be interpreted as recurring “symbols” within an input sequence, suggesting a structural parallel between image repetition and substring recurrence.
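The partial match table that KMP relies on can be built in linear time; a standard Python sketch:

```python
def kmp_failure_table(pattern):
    """failure[i] = length of the longest proper prefix of pattern[:i+1]
    that is also a suffix of it; this lets matching resume after a
    mismatch without rescanning earlier symbols."""
    failure = [0] * len(pattern)
    k = 0  # length of the currently matched prefix
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = failure[k - 1]  # fall back to a shorter border
        if pattern[i] == pattern[k]:
            k += 1
        failure[i] = k
    return failure
```

Nothing in the construction assumes textual characters: the same table can be built over any comparable symbols, such as hashes or cluster labels of video frames.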

An automaton defined over this sequence of frames can function as a state machine capturing the evolution of visual contexts during the drone’s flight. Each state in the automaton corresponds to a distinct visual configuration or stationary context, while transitions between states are triggered by detectable deviations in the input data, such as changes in color distribution, object presence, or spatial structure. Thus, the automaton abstracts the continuous video feed into a discrete set of states and transitions, effectively summarizing the perceptual variation encountered during the observation period.

The utility of this model lies in its ability to produce a compact representation of the entire flight. Rather than retaining every frame, which largely encodes redundant information, the automaton emphasizes transition points—moments when the state sequence changes—thereby isolating salient frames corresponding to significant environmental or positional changes. This process induces a “signature” of the flight, a compressed temporal trace that preserves the structural pattern of observed changes while discarding repetitive content.
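As a toy illustration of this idea, the automaton's transition test can be approximated by thresholding a frame-difference metric; everything below (the mean-absolute-difference metric, the threshold, the function name) is an illustrative assumption rather than a prescribed design:

```python
def flight_signature(frames, threshold=10.0):
    """Compress a frame stream into a list of (index, frame) transition points.

    frames: iterable of equally sized sequences of pixel intensities.
    A new automaton state opens whenever the mean absolute pixel difference
    from the current state's representative frame exceeds `threshold`;
    all other frames are absorbed as redundant repetitions of that state.
    """
    signature = []
    rep = None
    for i, frame in enumerate(frames):
        if rep is None:
            changed = True  # first frame always opens a state
        else:
            diff = sum(abs(a - b) for a, b in zip(frame, rep)) / len(frame)
            changed = diff > threshold
        if changed:
            rep = frame  # transition: open a new state
            signature.append((i, frame))
    return signature
```

The indices returned are the transition points; the frames between them are the redundant content the automaton discards.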

From a computational perspective, the method provides both efficiency and interpretability. It reduces temporal redundancy by formalizing similarity relations among frames and yields a mathematically grounded representation suitable for downstream tasks such as indexing, retrieval, or anomaly detection. The resulting automaton-based abstraction thus serves as a formal mechanism for encoding, analyzing, and interpreting dynamic visual data, capturing the essential structure of the drone’s perceptual experience through the lens of automata theory and pattern matching.


Sunday, March 29, 2026

 Problem statement: Determine the maximum-area rectangle enclosed in a bar chart whose bars arrive in a streaming manner.

Solution:

import java.util.ArrayList;

import java.util.List;

public class MaxRectangleStreaming {

    public static int getMaxRectangleStreaming(List<Integer> A) {

        int maxArea = Integer.MIN_VALUE;

        int maxHeight = Integer.MIN_VALUE;

        List<Integer> heights = new ArrayList<>();

        // parallel sums of unit-areas of bounding boxes with top-left at incremental heights of the current bar in a barchart

        List<Integer> areas = new ArrayList<>();

        int prev = 0;

        for (int i = 0; i < A.size(); i++) {

            if (A.get(i) > maxHeight) {

                maxHeight = A.get(i);

            }

            if (prev < A.get(i)) {

                for (int j = 0; j < A.get(i); j++) {

                    if (heights.size() < j+1) heights.add(0);

                    if (areas.size() < j+1) areas.add(0);

                    heights.set(j, (j+1) * 1);

                    if ( j < areas.size()) {

                        int newArea = areas.get(j) + (j + 1) * 1;

                        areas.set(j, newArea);

                        if (newArea > maxArea) {

                            maxArea = newArea;

                        }

                    } else {

                        areas.add((j + 1) * 1); // set(j, ...) would throw when j == areas.size()

                    }

                }

            } else {

                for (int j = 0; j < A.get(i); j++) {

                    heights.set(j, (j+1) *1);

                    if ( j < areas.size()) {

                        int newArea = areas.get(j) + (j + 1) * 1;

                        areas.set(j, newArea);

                        if (newArea > maxArea) {

                            maxArea = newArea;

                        }

                    } else {

                        areas.add((j + 1) * 1); // set(j, ...) would throw when j == areas.size()

                    }

                }

                for (int j = A.get(i); j < prev; j++){

                    heights.set(j, 0);

                    if (areas.size() > j && areas.get(j) > maxArea) {

                        maxArea = areas.get(j);

                    }

                    areas.set(j, 0);

                }

            }

            prev = A.get(i);

            System.out.println("heights:" + print(heights));

            System.out.println("areas:" + print(areas));

        }

        return maxArea;

    }

    public static String print(List<Integer> A){

        StringBuilder sb = new StringBuilder();

        for (Integer a : A) {

            sb.append(a + " ");

        }

        return sb.toString();

    }

}

//Output:

heights:1 2 3 4

areas:1 2 3 4

heights:1 2 3 4 5 6

areas:2 4 6 8 5 6

heights:1 2 0 0 0 0

areas:3 6 0 0 0 0

heights:1 2 3 4 0 0

areas:4 8 3 4 0 0

heights:1 2 3 4 5 6 7 8 9 10 11 12

areas:5 10 6 8 5 6 7 8 9 10 11 12

heights:1 2 3 4 5 6 7 0 0 0 0 0

areas:6 12 9 12 10 12 14 0 0 0 0 0

heights:1 2 3 4 0 0 0 0 0 0 0 0

areas:7 14 12 16 0 0 0 0 0 0 0 0

heights:1 2 0 0 0 0 0 0 0 0 0 0

areas:8 16 0 0 0 0 0 0 0 0 0 0

heights:1 2 0 0 0 0 0 0 0 0 0 0

areas:9 18 0 0 0 0 0 0 0 0 0 0

heights:1 2 0 0 0 0 0 0 0 0 0 0

areas:10 20 0 0 0 0 0 0 0 0 0 0

20
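For comparison, the classic monotonic-stack solution computes the same maximum in O(n) when the whole histogram is available; a Python sketch (the input in the test is the bar sequence that appears consistent with the trace above):

```python
def largest_rectangle(heights):
    """Largest axis-aligned rectangle under a histogram, via a monotonic stack."""
    stack = []  # indices of bars with strictly increasing heights
    best = 0
    for i, h in enumerate(list(heights) + [0]):  # trailing 0 flushes the stack
        while stack and heights[stack[-1]] >= h:
            top = stack.pop()
            # Width spans from just past the new stack top to the current bar.
            width = i - (stack[-1] + 1 if stack else 0)
            best = max(best, heights[top] * width)
        stack.append(i)
    return best
```

The streaming version above trades this simplicity for the ability to report a running maximum after every bar arrives.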


Saturday, March 28, 2026

 This is a summary of a book titled “The AI Revolution in Customer Service and Support: A Practical Guide to Impactful Deployment of AI to Best Serve Your Customers” written by Ross Smith, Emily McKeon and Mayte Cubino and published by Pearson Education (USA) in 2024. This book examines how artificial intelligence is reshaping customer service at a moment when expectations for speed, personalization, and convenience are higher than ever. The authors argue that customer service has become a defining factor in how organizations are judged, often as important as the products or services themselves. Many traditional support models struggle to meet contemporary demands, leaving customers frustrated by long wait times and inefficient interactions. Against this backdrop, the authors position AI as a tool capable of transforming customer service into something more responsive, consistent, and closely aligned with individual customer needs.

Drawing parallels with earlier technological shifts such as electrification and industrial automation, the book situates AI within a broader pattern of innovation that alters how work is organized and value is delivered. In customer service, AI systems can process vast amounts of data to provide personalized assistance at scale, often more quickly and reliably than human agents alone. While implementing such systems can require significant upfront investment, the authors suggest that long-term efficiencies and improved customer satisfaction can offset these costs.

Organizations are encouraged to develop a clear vision for how AI fits into their long-term strategy rather than treating it as a short-term efficiency fix. This vision should articulate what success looks like several years into the future and should be communicated clearly to all stakeholders, including employees and customers. The authors emphasize that leadership commitment must be visible and consistent, and that AI initiatives should be grounded in a realistic understanding of both technological capabilities and organizational needs. Setting concrete, measurable goals allows companies to move beyond abstract enthusiasm and toward meaningful outcomes.

Before deploying AI, the authors stress the need to understand existing customer service operations. Establishing a baseline helps organizations evaluate whether AI adoption is actually improving performance. This involves identifying gaps between current service levels and customer expectations, prioritizing areas for improvement, and quantifying desired changes in metrics such as customer satisfaction. During development, AI systems should be tested iteratively with different customer segments, assessed for integration with existing tools, and reviewed regularly from an ethical standpoint. Validation should include basic accuracy checks, stress testing under real-world conditions, and confirmation that systems comply with regulatory and internal ethical standards.

Once deployed, AI systems must be accessible across the channels customers already use and adaptable to the needs of both customers and employees. Successful integration depends not only on technical infrastructure but also on education and change management. The authors note that while customers ultimately benefit from faster and more consistent service, some may be concerned about losing human interaction. Transparency about when and how AI is used, along with clear pathways to human support, can help address these concerns. Employee responses to AI adoption also vary, ranging from enthusiasm to anxiety about job security. The book emphasizes that AI should be framed as a tool that supports human work rather than replaces it, and that employees should be encouraged to engage with and learn from the technology.

Ethical considerations run throughout the authors’ discussion. As AI systems become more influential, the risks associated with bias, lack of accountability, and opaque decision-making increase. The book argues that responsible AI use must be grounded in human values, with explicit commitments to fairness, transparency, security, and accountability. Organizations are urged to take responsibility for the outputs of their AI systems and to address any harms that arise from their use, rather than treating ethical issues as secondary or abstract concerns.

Cultural factors also play a significant role in how AI is received. Resistance to new technology often stems from fear or misunderstanding, and the authors suggest that organizational culture can either amplify or mitigate these reactions. A culture that values learning and adaptation is more likely to view AI as an opportunity rather than a threat. Generational differences can shape expectations as well, with younger customers and employees generally more comfortable with automation than older ones. Addressing these differences thoughtfully, such as by showing how AI can reduce routine work and allow for deeper human engagement, can ease adoption.

The book also explores how AI changes the nature of customer support roles. As organizations map their customer journeys and introduce AI into specific touchpoints, employee responsibilities shift toward more complex, judgment-based tasks. Training becomes essential, particularly in teaching staff how to work effectively with AI systems and interpret their outputs. At the same time, new roles emerge, including specialists focused on data, model performance, ethics, and content management. These roles help ensure that AI systems remain aligned with organizational goals and customer needs.

The authors argue that leadership itself must evolve. Leaders in customer service are tasked not only with managing operations but also with guiding their organizations through ongoing technological change. This requires openness to learning, attentiveness to employee concerns, and a willingness to address the broader social implications of AI use. By emphasizing transparency, accountability, and respect for data privacy, leaders can build trust among customers, employees, and other stakeholders as AI becomes an integral part of customer service and support.


Friday, March 27, 2026

 

Daniel F. Spulber’s The Market Makers: How Leading Companies Create and Win Markets (McGraw-Hill, 1998) argues that the real contest in modern business is not primarily about making better products or delivering better services—it is about enabling better transactions. As technology makes exchange faster and cheaper, Spulber urges leaders to rethink what firms fundamentally do: the winners are the organizations that consistently reduce the cost and friction of exchange for both customers and suppliers, and that deliberately position themselves where value is being transferred.

Viewed through this lens, a company’s primary mission becomes building “market bridges” that connect buyers and sellers more effectively than anyone else. Spulber’s strategic framework treats firms as intermediaries and transaction facilitators—institutions of exchange—rather than as standalone producers. That shift in perspective changes how you think about strategy, growth, innovation, and even technology adoption: instead of benchmarking leaders or piling on tools, winning firms learn how to use what they have to communicate and coordinate with the outside world, tightening the link between supply and demand.

Spulber emphasizes that competition plays out across the entire chain of transactions required to make, move, price, and deliver an offering. As he puts it, “Firms achieve success not only by offering better prices and products, but also by reducing the costs of transactions for their customers and suppliers.” In practice, that means treating exchange itself as the arena of innovation: a firm wins by designing smoother, clearer, and more reliable ways for counterparties to find each other, decide, contract, and complete the deal.

The book makes a hard-edged claim about ambition: in many markets, settling for “good enough” is a slow path to irrelevance. If you are not striving to be the best bridge in your market, someone else will be—and once customers and suppliers coordinate around a dominant intermediary, that position can reinforce itself. The payoff for leadership is not just higher returns; it is a stronger reputation, easier recruiting, more stable supplier relationships, and a lower “search cost” for customers who do not have time to shop around.

To lead a firm toward that kind of advantage, Spulber recommends recasting strategy around exchange: make it your job to create innovative transactions and to participate actively in the institutions where trade happens. A firm that focuses on growth through better market-making leaves behind a defensive past of endless cutbacks and retrenchment. It stops looking inward and starts coordinating outward—helping customers and suppliers meet, communicate, and commit with less uncertainty and delay. In that spirit, Spulber cautions that maximizing shareholder value cannot be the corporation’s only objective; sustained market success is what ultimately raises long-run value.

The book returns repeatedly to the logic of being number one. Spulber compares markets to tournaments: the “gold medal” can justify the risk, time, and investment because leadership makes the bridge more valuable to everyone involved. “Competition means much more than manufacturing a better widget,” he writes. “It means carrying out the entire set of economic transactions needed to make and distribute that widget.” In that broader contest, a leading intermediary becomes the default choice for customers in a hurry and for suppliers seeking stability. Blockbuster Video, for example, built a near-universal bridge between filmmakers and viewers by making rentals reliably available at scale—a transaction innovation as much as a retail footprint.

Rather than trying to win by “cutting, scrimping, and saving,” Spulber argues that market leaders expand what is possible by pushing against four boundaries: scale, scope, span, and speed. Scale is not simply about getting bigger; it is about building the largest operation you can coordinate well, where the limiting factor is often the efficiency of communication. Scope concerns variety—time-pressed customers value one-stop solutions and broad selection, so increasing the range of offerings can strengthen the bridge. Span forces decisions about what to do in-house versus what to contract out, whether you manufacture, retail, distribute, or integrate across multiple stages of the value chain. Speed, finally, is the modern imperative: innovation depends on faster information, better technology, and quicker execution.

Pricing, in Spulber’s account, is not an afterthought imposed by an “invisible outside force,” but a strategic instrument that can reinforce the market bridge. Because the firm mediates between suppliers and customers, purchasing and marketing must stay tightly coordinated so the channel of exchange remains smooth. Price is also a form of service—your greeting and your handshake—so clarity matters. Complicated contracts that save a seller pennies while costing customers time ultimately weaken the relationship. Clear pricing, by contrast, becomes a way to learn: customers’ reactions reveal what they value, and early prices can be adjusted as the firm gathers real market feedback.

Spulber presses this point directly: customers are not fools, and complicated or obscure pricing contracts will not trick them into buying, so keep price information clear and save them time. Nor should a firm agonize over the launch price of a new product. A rival may bring a similar offering to market first, and once the product is out at whatever price was set, customers’ reactions yield invaluable information that can be used to adjust.

The mechanics of pricing also shape how a firm manages broad product lines and diverse customer segments. A discount on a “traffic-driving” item can lift sales of complementary goods—Spulber points to familiar fast-food dynamics, where cheaper burgers can increase purchases of fries and soft drinks, while potentially cannibalizing other menu items. Pricing can also segment by quantity through volume discounts, letting customers self-select into larger purchases if per-unit savings are attractive. Or it can segment by quality, as with graded gasoline offerings. Across these approaches, the goal is the same: capture and retain different groups before rivals do, while keeping the bridge easy to use for the customers who rely on it.

To make the idea operational, Spulber proposes the MAIN framework—Market Making, Arbitrage, Intermediation, and Networking—as a practical way to think about winning markets by strengthening market bridges. Market making focuses on creating new, simple ways for many buyers and many sellers to transact quickly and reliably; supermarkets exemplify this by giving numerous suppliers efficient access to shoppers who save time by consolidating trips. Arbitrage emphasizes information and movement—obtaining timely exchange data and improving the ability to buy and sell across places, times, or conditions in ways that create value. Intermediation highlights the multiple roles a firm can play as an agent, monitor, broker, and communicator across marketing, purchasing, hiring, financing, and research—often with price and other exchange terms carrying more information than branding claims alone. Networking ties it together by maintaining the relationships and infrastructure that keep participants connected, sometimes by stepping aside so counterparties can interact directly, and sometimes by improving distribution so the whole system runs with less friction.

In the end, The Market Makers asks readers to re-envision competition itself. Rivals are not merely alternative product sellers; they are alternative transaction facilitators, including channels where suppliers bypass you to reach customers directly or venues that permanently undercut your terms. The path to success, Spulber suggests, is both more complex and more straightforward than it appears: find where capital is changing hands, earn the right to stand in the middle of that exchange, and then make the transfer faster, clearer, and less costly for everyone involved.


Thursday, March 26, 2026

 The following sample script shows how to control access to containers and folders inside a storage account such as Azure Data Lake Storage, so that users who hold only Reader access on the control plane can still be granted access to individual folders and files at that finer granularity.
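Each group variable in the script accepts either an Entra ID group display name or an object id (a GUID): a bash regex decides which form was supplied, and display names are translated to object ids via `az ad group list`. That detection logic, extracted on its own with hypothetical sample values, looks like this:

```shell
#!/usr/bin/env bash
# Succeeds when the argument looks like a GUID (an object id),
# fails when it is presumably a group display name.
is_guid() {
  [[ "$1" =~ ^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$ ]]
}

is_guid "d39e64e3-4c72-4d7b-83fd-5bdba321629b" && echo "already an object id"
is_guid "AZU_PI_Domestic_RO_DSS_U2" || echo "display name, needs translation"
```

Running the snippet prints "already an object id" followed by "display name, needs translation"; in the full script below, the second case triggers the `az ad group list` lookup.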

Script begins:

#!/usr/bin/bash

subscriptionid=

az account set --subscription "$subscriptionid"

accountkey=

accountname=

cradle=

domesticrw=

domesticro=

globalro=

globalrw=

removegroup1=

removegroup2=

if [[ -n "$domesticrw" ]] && ! [[ "$domesticrw" =~ ^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$ ]]; then

  echo "translating domesticrw=$domesticrw"

  domesticrw=$(az ad group list --filter "displayName eq '$domesticrw'" --query "[0].id" --output tsv)

  echo "domesticrw=$domesticrw"

fi

if [[ -n "$domesticro" ]] && ! [[ "$domesticro" =~ ^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$ ]]; then

  echo "translating domesticro=$domesticro"

  domesticro=$(az ad group list --filter "displayName eq '$domesticro'" --query "[0].id" --output tsv)

  echo "domesticro=$domesticro"

fi

if [[ -n "$globalrw" ]] && ! [[ "$globalrw" =~ ^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$ ]]; then

  echo "translating globalrw=$globalrw"

  globalrw=$(az ad group list --filter "displayName eq '$globalrw'" --query "[0].id" --output tsv)

  echo "globalrw=$globalrw"

fi

if [[ -n "$globalro" ]] && ! [[ "$globalro" =~ ^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$ ]]; then

  echo "translating globalro=$globalro"

  globalro=$(az ad group list --filter "displayName eq '$globalro'" --query "[0].id" --output tsv)

  echo "globalro=$globalro"

fi

if [[ -n "$removegroup1" ]] && ! [[ "$removegroup1" =~ ^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$ ]]; then

  echo "translating removegroup1=$removegroup1"

  removegroup1=$(az ad group list --filter "displayName eq '$removegroup1'" --query "[0].id" --output tsv)

  echo "removegroup1=$removegroup1"

fi

if [[ -n "$removegroup2" ]] && ! [[ "$removegroup2" =~ ^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$ ]]; then

  echo "translating removegroup2=$removegroup2"

  removegroup2=$(az ad group list --filter "displayName eq '$removegroup2'" --query "[0].id" --output tsv)

  echo "removegroup2=$removegroup2"

fi

echo "domesticrw=$domesticrw"

echo "domesticro=$domesticro"

echo "globalro=$globalro"

echo "globalrw=$globalrw"

echo "removegroup1=$removegroup1"

echo "removegroup2=$removegroup2"

echo "create container, if not exists"

az storage container create -n "$cradle" --account-name "$accountname" --account-key "$accountkey"

echo "container exists, acling..."

[[ -n "$globalro" ]] && [[ -n "$domesticro" ]] && az storage fs access set --acl "group:$globalro:r-x,group:$domesticro:r-x" -p "/" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

[[ -n "$domesticro" ]] && az storage fs access update-recursive --acl "group:$domesticro:r-x,default:group:$domesticro:r-x" -p "/" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

echo "container acl'ed."

echo "creating global and domestic folders..."

az storage fs directory create -n domestic -f "$cradle" --account-name "$accountname" --account-key "$accountkey" --only-show-errors

az storage fs directory create -n global -f "$cradle" --account-name "$accountname" --account-key "$accountkey" --only-show-errors

echo "folders exist, remove existing acls..."

echo "beginning remove"

[[ -n "$removegroup1" ]] && az storage fs access remove-recursive --acl "group:$removegroup1" -p "domestic" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

[[ -n "$removegroup2" ]] && az storage fs access remove-recursive --acl "group:$removegroup2" -p "domestic" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

[[ -n "$removegroup1" ]] && az storage fs access remove-recursive --acl "group:$removegroup1" -p "global" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

[[ -n "$removegroup2" ]] && az storage fs access remove-recursive --acl "group:$removegroup2" -p "global" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

echo "ending remove"

echo "folders exist, acling..."

[[ -n "$domesticrw" ]] && az storage fs access update-recursive --acl "group:$domesticrw:rwx,default:group:$domesticrw:rwx" -p "domestic" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

[[ -n "$domesticro" ]] && az storage fs access update-recursive --acl "group:$domesticro:r-x,default:group:$domesticro:r-x" -p "domestic" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

[[ -n "$globalrw" ]] && az storage fs access update-recursive --acl "group:$globalrw:rwx,default:group:$globalrw:rwx" -p "global" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

[[ -n "$globalro" ]] && az storage fs access update-recursive --acl "group:$globalro:r-x,default:group:$globalro:r-x" -p "global" -f "$cradle" --account-name "$accountname" --account-key "$accountkey"

echo "folders acl'ed."

Sample log:

-----

translating domesticro=AZU_PI_Domestic_RO_DSS_U2

domesticro=d39e64e3-4c72-4d7b-83fd-5bdba321629b

translating globalro=AZU_PI_Global_RO_DSS_U2

globalro=d2683e46-9f59-4cc4-9a77-f95e5bdf8a6d

translating removegroup1=AZU_PI_Domestic_RW_DM

removegroup1=20be57b0-157a-4b59-88ce-086dab652d57

translating removegroup2=AZU_PI_Global_RW_DM

removegroup2=5b5ed4b3-9462-43fc-94aa-80dc00d3c02d

domesticrw=

domesticro=d39e64e3-4c72-4d7b-83fd-5bdba321629b

globalro=d2683e46-9f59-4cc4-9a77-f95e5bdf8a6d

globalrw=

removegroup1=20be57b0-157a-4b59-88ce-086dab652d57

removegroup2=5b5ed4b3-9462-43fc-94aa-80dc00d3c02d

create container, if not exists

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "created": false

}

container exists, acling...

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "client_request_id": "5761d34a-27cd-11f1-97a9-8ef5922a9147",

  "date": "2026-03-24T22:03:51+00:00",

  "etag": "\"0x8DE89ED2F7CA9FC\"",

  "last_modified": "2026-03-24T21:34:53+00:00",

  "request_id": "b72d61e2-c01f-0038-26da-bb0186000000",

  "version": "2021-08-06"

}

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "continuation": null,

  "counters": {

    "directoriesSuccessful": 3,

    "failureCount": 0,

    "filesSuccessful": 0

  },

  "failedEntries": []

}

container acl'ed.

creating global and domestic folders...

{

  "content_length": 0,

  "continuation": null,

  "date": "2026-03-24T22:03:54+00:00",

  "encryption_key_sha256": null,

  "etag": "\"0x8DE89F13E05F710\"",

  "last_modified": "2026-03-24T22:03:55+00:00",

  "request_id": "63228947-f01f-0019-58da-bb6cb7000000",

  "request_server_encrypted": true,

  "version": "2021-08-06"

}

{

  "content_length": 0,

  "continuation": null,

  "date": "2026-03-24T22:03:56+00:00",

  "encryption_key_sha256": null,

  "etag": "\"0x8DE89F13EF8F98C\"",

  "last_modified": "2026-03-24T22:03:56+00:00",

  "request_id": "89334994-001f-0084-34da-bb16f7000000",

  "request_server_encrypted": true,

  "version": "2021-08-06"

}

folders exist, remove existing acls...

beginning remove

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "continuation": null,

  "counters": {

    "directoriesSuccessful": 1,

    "failureCount": 0,

    "filesSuccessful": 0

  },

  "failedEntries": []

}

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "continuation": null,

  "counters": {

    "directoriesSuccessful": 1,

    "failureCount": 0,

    "filesSuccessful": 0

  },

  "failedEntries": []

}

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "continuation": null,

  "counters": {

    "directoriesSuccessful": 1,

    "failureCount": 0,

    "filesSuccessful": 0

  },

  "failedEntries": []

}

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "continuation": null,

  "counters": {

    "directoriesSuccessful": 1,

    "failureCount": 0,

    "filesSuccessful": 0

  },

  "failedEntries": []

}

ending remove

folders exist, acling...

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "continuation": null,

  "counters": {

    "directoriesSuccessful": 1,

    "failureCount": 0,

    "filesSuccessful": 0

  },

  "failedEntries": []

}

Command group 'az storage' is in preview and under development. Reference and support levels: https://aka.ms/CLI_refstatus

{

  "continuation": null,

  "counters": {

    "directoriesSuccessful": 1,

    "failureCount": 0,

    "filesSuccessful": 0

  },

  "failedEntries": []

}

folders acl'ed.

Reference:

1. Access control lists (ACLs) in Azure Data Lake Storage: https://learn.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-access-control