Wednesday, March 11, 2026

This extends the workflow we discussed in the previous two articles to include the following:

• Heatmaps built from the same issue-to-type matrix

• Trend lines built from the same CSVs once history accumulates

• Dependency graphs built from the already implemented module-source parsing

All three are staples of issue-management dashboards and serve as well-known eye candy.
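As a quick sketch of the matrix behind the heatmap (issue numbers and types here are hypothetical; the real rows come from the workflow's `severity_data.csv`):

```python
import pandas as pd

# Hypothetical (issue, type) pairs mirroring severity_data.csv rows
rows = [
    {"issue": 101, "azurerm_type": "azurerm_storage_account"},
    {"issue": 101, "azurerm_type": "azurerm_key_vault"},
    {"issue": 102, "azurerm_type": "azurerm_storage_account"},
]
df = pd.DataFrame(rows)

# Types as rows, issues as columns; each cell counts how often the pair occurs.
# This matrix is what seaborn's heatmap() renders later in the workflow.
pivot = df.pivot_table(index="azurerm_type", columns="issue",
                       aggfunc="size", fill_value=0)
```

`fill_value=0` matters: a type/issue pair that never occurs should render as a cold cell, not as a missing value.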

The YAML now follows:

---

name: Weekly Terraform azurerm hotspot report with AI summary and advanced visuals

on:

  workflow_dispatch:

    inputs:

      window_days:

        description: "Number of days back to collect closed issues (integer)"

        required: false

        default: "7"

  schedule:

    - cron: "0 13 * * MON"

permissions:

  contents: write

  pull-requests: write

  issues: read

jobs:

  report:

    runs-on: ubuntu-latest

    steps:

      - name: Check out repository

        uses: actions/checkout@v4

        with:

          fetch-depth: 0

      - name: Set up Python

        uses: actions/setup-python@v5

        with:

          python-version: "3.11"

      - name: Install dependencies

        run: |

          python -m pip install --upgrade pip

          pip install requests pandas matplotlib seaborn networkx python-hcl2

      - name: Prepare environment variables

        id: env

        run: |

          echo "WINDOW_DAYS=${{ github.event.inputs.window_days || '7' }}" >> $GITHUB_ENV

          echo "REPO=${GITHUB_REPOSITORY}" >> $GITHUB_ENV

      - name: Fetch closed issues and linked PRs (window)

        id: fetch

        env:

          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

          REPO: ${{ env.REPO }}

          WINDOW_DAYS: ${{ env.WINDOW_DAYS }}

        run: |

          python - <<'PY'

          import os, requests, json, datetime, re

          REPO = os.environ["REPO"]

          TOKEN = os.environ["GH_TOKEN"]

          WINDOW_DAYS = int(os.environ.get("WINDOW_DAYS","7"))

          HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

          since = (datetime.datetime.utcnow() - datetime.timedelta(days=WINDOW_DAYS)).isoformat() + "Z"

          def gh_get(url, params=None):

            r = requests.get(url, headers=HEADERS, params=params)

            r.raise_for_status()

            return r.json()

          issues_url = f"https://api.github.com/repos/{REPO}/issues"

          params = {"state":"closed","since":since,"per_page":100}

          # paginate: a single call caps out at 100 issues

          items = []

          page = 1

          while True:

            batch = gh_get(issues_url, params={**params, "page": page})

            if not batch:

              break

            items.extend(batch)

            page += 1

          issues = []

          for i in items:

            if "pull_request" in i:

              continue

            comments = gh_get(i["comments_url"], params={"per_page":100})

            pr_urls = set()

            for c in comments:

              body = c.get("body","") or ""

              for m in re.findall(r"https://github\.com/[^/\s]+/[^/\s]+/pull/\d+", body):

                pr_urls.add(m)

              for m in re.findall(r"(?:^|\s)#(\d+)\b", body):

                pr_urls.add(f"https://github.com/{REPO}/pull/{m}")

            issues.append({

              "number": i["number"],

              "title": i.get("title",""),

              "user": i.get("user",{}).get("login",""),

              "created_at": i.get("created_at"),

              "closed_at": i.get("closed_at"),

              "html_url": i.get("html_url"),

              "comments": [{"id":c.get("id"), "body":c.get("body",""), "created_at":c.get("created_at")} for c in comments],

              "pr_urls": sorted(pr_urls)

            })

          with open("issues.json","w") as f:

            json.dump(issues, f, indent=2)

          print(f"WROTE_ISSUES={len(issues)}")

          PY

      - name: Resolve PRs, collect touched workload apps and azurerm types

        id: analyze

        env:

          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

          REPO: ${{ env.REPO }}

        run: |

          python - <<'PY'

          import os, json, re, requests, subprocess

          import hcl2

          REPO = os.environ["REPO"]

          TOKEN = os.environ["GH_TOKEN"]

          HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

          def gh_get(url, params=None):

            r = requests.get(url, headers=HEADERS, params=params)

            r.raise_for_status()

            return r.json()

          def gh_get_text(url):

            r = requests.get(url, headers=HEADERS)

            r.raise_for_status()

            return r.text

          def pr_number_from_url(u):

            m = re.search(r"/pull/(\d+)", u)

            return int(m.group(1)) if m else None

          def list_pr_files(pr_number):

            url = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}/files"

            files = []

            page = 1

            while True:

              batch = gh_get(url, params={"per_page":100,"page":page})

              if not batch:

                break

              files.extend(batch)

              page += 1

            return files

          def get_pr_head_sha(pr_number):

            url = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}"

            pr = gh_get(url)

            return pr["head"]["sha"]

          def get_file_at_sha(path, sha):

            url = f"https://api.github.com/repos/{REPO}/contents/{path}"

            r = requests.get(url, headers=HEADERS, params={"ref": sha})

            if r.status_code == 404:

              return None

            r.raise_for_status()

            data = r.json()

            if isinstance(data, dict) and data.get("type") == "file" and data.get("download_url"):

              return gh_get_text(data["download_url"])

            return None

          def parse_azurerm_resource_types_from_tf(tf_text):

            types = set()

            try:

              obj = hcl2.loads(tf_text)

            except Exception:

              return types

            res = obj.get("resource", [])

            if isinstance(res, list):

              for item in res:

                if isinstance(item, dict):

                  for rtype in item.keys():

                    if isinstance(rtype, str) and rtype.startswith("azurerm_"):

                      types.add(rtype)

            elif isinstance(res, dict):

              for rtype in res.keys():

                if isinstance(rtype, str) and rtype.startswith("azurerm_"):

                  types.add(rtype)

            return types

          def parse_module_sources_from_tf(tf_text):

            sources = set()

            try:

              obj = hcl2.loads(tf_text)

            except Exception:

              return sources

            mods = obj.get("module", [])

            if isinstance(mods, list):

              for item in mods:

                if isinstance(item, dict):

                  for _, body in item.items():

                    if isinstance(body, dict):

                      src = body.get("source")

                      if isinstance(src, str):

                        sources.add(src)

            elif isinstance(mods, dict):

              for _, body in mods.items():

                if isinstance(body, dict):

                  src = body.get("source")

                  if isinstance(src, str):

                    sources.add(src)

            return sources

          def normalize_local_module_path(source, app_dir):

            if source.startswith("./") or source.startswith("../"):

              import posixpath

              return posixpath.normpath(posixpath.join(app_dir, source))

            return None

          def list_repo_tf_files_under(dir_path, sha):

            try:

              out = subprocess.check_output(["git","ls-tree","-r","--name-only",sha,dir_path], text=True)

              paths = [p.strip() for p in out.splitlines() if p.strip().endswith(".tf")]

              return paths

            except Exception:

              return []

          def collect_azurerm_types_for_app(app_dir, sha):

            az_types = set()

            module_dirs = set()

            tf_paths = list_repo_tf_files_under(app_dir, sha)

            for p in tf_paths:

              txt = get_file_at_sha(p, sha)

              if not txt:

                continue

              az_types |= parse_azurerm_resource_types_from_tf(txt)

              for src in parse_module_sources_from_tf(txt):

                local = normalize_local_module_path(src, app_dir)

                if local:

                  module_dirs.add(local)

            for mdir in sorted(module_dirs):

              m_tf_paths = list_repo_tf_files_under(mdir, sha)

              for p in m_tf_paths:

                txt = get_file_at_sha(p, sha)

                if not txt:

                  continue

                az_types |= parse_azurerm_resource_types_from_tf(txt)

            return az_types

          with open("issues.json") as f:

            issues = json.load(f)

          issue_to_types = {}

          issue_turnaround = {}

          module_deps = {} # app_dir -> set(module paths it references)

          for issue in issues:

            inum = issue["number"]

            created = issue.get("created_at")

            closed = issue.get("closed_at")

            if created and closed:

              from datetime import datetime

              fmt = "%Y-%m-%dT%H:%M:%SZ"

              try:

                dt_created = datetime.strptime(created, fmt)

                dt_closed = datetime.strptime(closed, fmt)

                delta_days = (dt_closed - dt_created).total_seconds() / 86400.0

              except Exception:

                delta_days = None

            else:

              delta_days = None

            issue_turnaround[inum] = delta_days

            pr_urls = issue.get("pr_urls",[])

            pr_numbers = sorted({pr_number_from_url(u) for u in pr_urls if pr_number_from_url(u)})

            types_for_issue = set()

            for prn in pr_numbers:

              try:

                sha = get_pr_head_sha(prn)

              except requests.HTTPError:

                continue  # referenced number may be an issue, not a PR

              files = list_pr_files(prn)

              touched_apps = set()

              for f in files:

                path = f.get("filename","")

                if path.startswith("workload/"):

                  parts = path.split("/")

                  if len(parts) >= 2:

                    touched_apps.add("/".join(parts[:2]))

              for app_dir in sorted(touched_apps):

                types_for_issue |= collect_azurerm_types_for_app(app_dir, sha)

                # collect module sources for dependency graph

                # scan app tf files for module sources at PR head

                tf_paths = list_repo_tf_files_under(app_dir, sha)

                for p in tf_paths:

                  txt = get_file_at_sha(p, sha)

                  if not txt:

                    continue

                  for src in parse_module_sources_from_tf(txt):

                    local = normalize_local_module_path(src, app_dir)

                    if local:

                      module_deps.setdefault(app_dir, set()).add(local)

            if types_for_issue:

              issue_to_types[inum] = sorted(types_for_issue)

          rows = []

          for inum, types in issue_to_types.items():

            for t in set(types):

              rows.append({"issue": inum, "azurerm_type": t})

          import pandas as pd

          df = pd.DataFrame(rows)

          df.to_csv("severity_data.csv", index=False)

          ta_rows = []

          for inum, days in issue_turnaround.items():

            ta_rows.append({"issue": inum, "turnaround_days": days})

          pd.DataFrame(ta_rows).to_csv("turnaround.csv", index=False)

          with open("issue_to_azurerm_types.json","w") as f:

            json.dump(issue_to_types, f, indent=2)

          with open("issue_turnaround.json","w") as f:

            json.dump(issue_turnaround, f, indent=2)

          with open("module_deps.json","w") as f:

            json.dump({k: sorted(list(v)) for k,v in module_deps.items()}, f, indent=2)

          print(f"ISSUES_WITH_TYPES={len(issue_to_types)}")

          PY

      - name: Generate charts and markdown (severity, heatmap, trend, dependency, turnaround) and include AI summary

        id: report

        env:

          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

        run: |

          set -euo pipefail

          python - <<'PY'

          import os, json, datetime, glob

          import pandas as pd

          import matplotlib.pyplot as plt

          import seaborn as sns

          import networkx as nx

          ts = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")

          os.makedirs("history", exist_ok=True)

          # --- Severity bar (existing) ---

          if os.path.exists("severity_data.csv"):

            df = pd.read_csv("severity_data.csv")

            counts = df.groupby("azurerm_type")["issue"].nunique().sort_values(ascending=False)

          else:

            counts = pd.Series(dtype=int)

          png_sev = f"history/severity-by-azurerm-{ts}.png"

          plt.figure(figsize=(12,6))

          if not counts.empty:

            counts.plot(kind="bar")

            plt.title("Issue frequency by azurerm resource type")

            plt.xlabel("azurerm resource type")

            plt.ylabel("number of closed issues touching type")

          else:

            plt.text(0.5, 0.5, "No azurerm-impacting issues in window", ha="center", va="center")

            plt.axis("off")

          plt.tight_layout()

          plt.savefig(png_sev)

          plt.clf()

          # --- Heatmap: azurerm_type x issue (binary or counts) ---

          heat_png = f"history/heatmap-azurerm-issues-{ts}.png"

          if os.path.exists("severity_data.csv"):

            mat = pd.read_csv("severity_data.csv")

            if not mat.empty:

              pivot = mat.pivot_table(index="azurerm_type", columns="issue", aggfunc='size', fill_value=0)

              # Optionally cluster or sort by total counts

              pivot['total'] = pivot.sum(axis=1)

              pivot = pivot.sort_values('total', ascending=False).drop(columns=['total'])

              # limit columns for readability (most recent/top issues)

              if pivot.shape[1] > 100:

                pivot = pivot.iloc[:, :100]

              plt.figure(figsize=(14, max(6, 0.2 * pivot.shape[0])))

              sns.heatmap(pivot, cmap="YlOrRd", cbar=True)

              plt.title("Heatmap: azurerm resource types (rows) vs issues (columns)")

              plt.xlabel("Issue number (truncated)")

              plt.ylabel("azurerm resource type")

              plt.tight_layout()

              plt.savefig(heat_png)

              plt.clf()

            else:

              plt.figure(figsize=(6,2))

              plt.text(0.5,0.5,"No data for heatmap",ha="center",va="center")

              plt.axis("off")

              plt.savefig(heat_png)

              plt.clf()

          else:

            plt.figure(figsize=(6,2))

            plt.text(0.5,0.5,"No data for heatmap",ha="center",va="center")

            plt.axis("off")

            plt.savefig(heat_png)

            plt.clf()

          # --- Trend lines: aggregate historical severity_data.csv files in history/ ---

          trend_png = f"history/trendlines-azurerm-{ts}.png"

          # collect historical CSVs that match severity_data pattern

          hist_files = sorted(glob.glob("history/*severity-data-*.csv") + glob.glob("history/*severity_data.csv") + glob.glob("history/*severity-by-azurerm-*.csv"))

          # also include current run's severity_data.csv

          if os.path.exists("severity_data.csv"):

            hist_files.append("severity_data.csv")

          # Build weekly counts per azurerm_type by deriving timestamp from filenames where possible

          trend_df = pd.DataFrame()

          for f in hist_files:

            try:

              # attempt to extract timestamp from filename

              import re

              m = re.search(r"(\d{8}-\d{6})", f)

              ts_label = m.group(1) if m else os.path.getmtime(f)

              tmp = pd.read_csv(f)

              if tmp.empty:

                continue

              counts_tmp = tmp.groupby("azurerm_type")["issue"].nunique().rename(ts_label)

              trend_df = pd.concat([trend_df, counts_tmp], axis=1)

            except Exception:

              continue

          if not trend_df.empty:

            trend_df = trend_df.fillna(0).T

            # convert index to datetime where possible

            try:

              trend_df.index = pd.to_datetime(trend_df.index, format="%Y%m%d-%H%M%S", errors='coerce').fillna(pd.to_datetime(trend_df.index, unit='s'))

            except Exception:

              pass

            plt.figure(figsize=(14,6))

            # plot top N azurerm types by latest total

            latest = trend_df.iloc[-1].sort_values(ascending=False).head(8).index.tolist()

            for col in latest:

              plt.plot(trend_df.index, trend_df[col], marker='o', label=col)

            plt.legend(loc='best', fontsize='small')

            plt.title("Trend lines: issue frequency over time for top azurerm types")

            plt.xlabel("time")

            plt.ylabel("issue count")

            plt.xticks(rotation=45)

            plt.tight_layout()

            plt.savefig(trend_png)

            plt.clf()

          else:

            plt.figure(figsize=(8,2))

            plt.text(0.5,0.5,"No historical data for trend lines",ha="center",va="center")

            plt.axis("off")

            plt.savefig(trend_png)

            plt.clf()

          # --- Dependency graph: build directed graph from module_deps.json ---

          dep_png = f"history/dependency-graph-{ts}.png"

          if os.path.exists("module_deps.json"):

            with open("module_deps.json") as f:

              deps = json.load(f)

            G = nx.DiGraph()

            # add edges app -> module

            for app, mods in deps.items():

              G.add_node(app, type='app')

              for m in mods:

                G.add_node(m, type='module')

                G.add_edge(app, m)

            if len(G.nodes) == 0:

              plt.figure(figsize=(6,2))

              plt.text(0.5,0.5,"No dependency data",ha="center",va="center")

              plt.axis("off")

              plt.savefig(dep_png)

              plt.clf()

            else:

              plt.figure(figsize=(12,8))

              pos = nx.spring_layout(G, k=0.5, iterations=50)

              node_colors = ['#1f78b4' if G.nodes[n].get('type')=='app' else '#33a02c' for n in G.nodes()]

              nx.draw_networkx_nodes(G, pos, node_size=600, node_color=node_colors)

              nx.draw_networkx_edges(G, pos, arrows=True, arrowstyle='->', arrowsize=12, edge_color='#888888')

              nx.draw_networkx_labels(G, pos, font_size=8)

              plt.title("Module dependency graph (apps -> local modules)")

              plt.axis('off')

              plt.tight_layout()

              plt.savefig(dep_png)

              plt.clf()

          else:

            plt.figure(figsize=(6,2))

            plt.text(0.5,0.5,"No dependency data",ha="center",va="center")

            plt.axis("off")

            plt.savefig(dep_png)

            plt.clf()

          # --- Turnaround chart (existing) ---

          ta_png = f"history/turnaround-by-issue-{ts}.png"

          if os.path.exists("turnaround.csv"):

            ta = pd.read_csv("turnaround.csv")

            ta = ta.dropna(subset=["turnaround_days"])

            if not ta.empty:

              ta_sorted = ta.sort_values("turnaround_days", ascending=False).head(50)

              plt.figure(figsize=(12,6))

              plt.bar(ta_sorted["issue"].astype(str), ta_sorted["turnaround_days"])

              plt.xticks(rotation=90)

              plt.title("Turnaround time (days) for closed issues in window")

              plt.xlabel("Issue number")

              plt.ylabel("Turnaround (days)")

              plt.tight_layout()

              plt.savefig(ta_png)

              plt.clf()

            else:

              plt.figure(figsize=(8,2))

              plt.text(0.5,0.5,"No turnaround data available",ha="center",va="center")

              plt.axis("off")

              plt.savefig(ta_png)

              plt.clf()

          else:

            plt.figure(figsize=(8,2))

            plt.text(0.5,0.5,"No turnaround data available",ha="center",va="center")

            plt.axis("off")

            plt.savefig(ta_png)

            plt.clf()

          # --- AI summary (who wants what) ---

          if os.path.exists("issues.json"):

            with open("issues.json") as f:

              issues = json.load(f)

          else:

            issues = []

          condensed = []

          for i in issues:

            condensed.append({

              "number": i.get("number"),

              "user": i.get("user"),

              "title": i.get("title"),

              "html_url": i.get("html_url")

            })

          with open("issues_for_ai.json","w") as f:

            json.dump(condensed, f, indent=2)

          # call OpenAI if key present (same approach as before)

          import subprocess, os

          OPENAI_KEY = os.environ.get("OPENAI_API_KEY")

          ai_text = "AI summary skipped (no OPENAI_API_KEY)."

          if OPENAI_KEY:

            prompt = ("You are given a JSON array of GitHub issues with fields: number, user, title, html_url. "

                      "Produce a concise list of one-line 'who wants what' statements, one per issue, in plain text. "

                      "Format: '#<number> — <user> wants <succinct request derived from title>'. "

                      "Do not add commentary.")

            payload = {

              "model": "gpt-4o-mini",

              "messages": [{"role":"system","content":"You are a concise summarizer."},

                           {"role":"user","content": prompt + "\n\nJSON:\n" + json.dumps(condensed)[:15000]}],

              "temperature":0.2,

              "max_tokens":400

            }

            proc = subprocess.run([

              "curl","-sS","https://api.openai.com/v1/chat/completions",

              "-H", "Content-Type: application/json",

              "-H", f"Authorization: Bearer {OPENAI_KEY}",

              "-d", json.dumps(payload)

            ], capture_output=True, text=True)

            if proc.returncode == 0 and proc.stdout:

              try:

                resp = json.loads(proc.stdout)

                ai_text = resp["choices"][0]["message"]["content"].strip()

              except Exception:

                ai_text = "AI summary unavailable (parsing error)."

          # --- Write markdown report combining all visuals ---

          md_path = f"history/severity-report-{ts}.md"

          with open(md_path, "w") as f:

            f.write("# Weekly Terraform azurerm hotspot report\n\n")

            f.write(f"**Window (days):** {os.environ.get('WINDOW_DAYS','7')}\n\n")

            f.write("## AI Summary (who wants what)\n\n")

            f.write("```\n")

            f.write(ai_text + "\n")

            f.write("```\n\n")

            f.write("## Top azurerm resource types by issue frequency\n\n")

            if not counts.empty:

              f.write("![" + os.path.basename(png_sev) + "](" + os.path.basename(png_sev) + ")\n\n")

              f.write(counts.head(30).to_frame("issues").to_markdown() + "\n\n")

            else:

              f.write("No azurerm-impacting issues found in the selected window.\n\n")

            f.write("## Heatmap: azurerm types vs issues\n\n")

            f.write("![" + os.path.basename(heat_png) + "](" + os.path.basename(heat_png) + ")\n\n")

            f.write("## Trend lines: historical issue frequency for top azurerm types\n\n")

            f.write("![" + os.path.basename(trend_png) + "](" + os.path.basename(trend_png) + ")\n\n")

            f.write("## Dependency graph: apps -> local modules\n\n")

            f.write("![" + os.path.basename(dep_png) + "](" + os.path.basename(dep_png) + ")\n\n")

            f.write("## Turnaround time for closed issues (days)\n\n")

            f.write("![" + os.path.basename(ta_png) + "](" + os.path.basename(ta_png) + ")\n\n")

            f.write("## Data artifacts\n\n")

            f.write("- `severity_data.csv` — per-issue azurerm type mapping\n")

            f.write("- `turnaround.csv` — per-issue turnaround in days\n")

            f.write("- `issue_to_azurerm_types.json` — mapping used to build charts\n")

            f.write("- `module_deps.json` — module dependency data used for graph\n")

          # Save current CSVs into history with timestamp for future trend aggregation

          try:

            import shutil

            if os.path.exists("severity_data.csv"):

              shutil.copy("severity_data.csv", f"history/severity-data-{ts}.csv")

            if os.path.exists("turnaround.csv"):

              shutil.copy("turnaround.csv", f"history/turnaround-{ts}.csv")

          except Exception:

            pass

          print(f"REPORT_MD={md_path}")

          print(f"REPORT_PNG={png_sev}")

          print(f"REPORT_HEAT={heat_png}")

          print(f"REPORT_TREND={trend_png}")

          print(f"REPORT_DEP={dep_png}")

          print(f"REPORT_TA={ta_png}")

          PY

      - name: Add report files to history and commit via PR

        id: create_pr

        uses: peter-evans/create-pull-request@v6

        with:

          commit-message: "Add weekly Terraform azurerm hotspot report and advanced visuals (prune to last 10)"

          title: "Weekly Terraform azurerm hotspot report"

          body: |

            This PR adds the latest weekly azurerm hotspot report and charts under `history/`.

            The workflow prunes older reports to keep at most 10 report sets.

          branch: "weekly-terraform-azurerm-hotspots"

          base: "main"

          add-paths: "history/**"

      - name: Prune history to max 10 report sets (post-commit)

        if: steps.create_pr.outcome == 'success'

        run: |

          python - <<'PY'

          import os, re

          from pathlib import Path

          hist = Path("history")

          hist.mkdir(exist_ok=True)

          groups = {}

          for p in hist.iterdir():

            m = re.search(r"(\d{8}-\d{6})", p.name)

            if not m:

              continue

            ts = m.group(1)

            groups.setdefault(ts, []).append(p)

          timestamps = sorted(groups.keys(), reverse=True)

          keep = set(timestamps[:10])

          drop = [p for ts, files in groups.items() if ts not in keep for p in files]

          for p in drop:

            try:

              p.unlink()

            except Exception:

              pass

          print(f"Pruned {len(drop)} files; kept {len(keep)} report sets.")

          PY

      - name: Notify runbook webhook (which will send az communication email)

        if: steps.create_pr.outcome == 'success'

        env:

          RUNBOOK_WEBHOOK_URL: ${{ secrets.RUNBOOK_WEBHOOK_URL }}

          PR_URL: ${{ steps.create_pr.outputs.pull-request-url }}

          WINDOW_DAYS: ${{ env.WINDOW_DAYS }}

        run: |

          payload=$(jq -n \

            --arg pr "$PR_URL" \

            --arg window "$WINDOW_DAYS" \

            '{subject: ("Weekly Terraform azurerm hotspot report - " + $window + "d"), body: ("A new weekly azurerm hotspot report has been generated. Review the PR: " + $pr), pr_url: $pr, window_days: $window}')

          curl -sS -X POST "$RUNBOOK_WEBHOOK_URL" \

            -H "Content-Type: application/json" \

            -d "$payload"

      - name: Output artifact list

        if: always()

        run: |

          echo "Generated files in history/:"

          ls -la history || true
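The dependency graph hinges on resolving relative module `source` strings to repo paths. The workflow's `normalize_local_module_path` helper can be sketched in isolation (the example paths are hypothetical; registry or git sources return `None` and are skipped):

```python
import posixpath

def normalize_local_module_path(source, app_dir):
    """Resolve a relative Terraform module source against the app directory;
    non-local sources (registry, git) return None and are ignored."""
    if source.startswith("./") or source.startswith("../"):
        return posixpath.normpath(posixpath.join(app_dir, source))
    return None

# A local module under the app, and one shared module a level up
print(normalize_local_module_path("./modules/net", "workload/app1"))
# → workload/app1/modules/net
print(normalize_local_module_path("../shared/vnet", "workload/app1"))
# → workload/shared/vnet
print(normalize_local_module_path("Azure/aks/azurerm", "workload/app1"))
# → None (registry source)
```

Using `posixpath` rather than `os.path` keeps the normalization stable regardless of the runner's OS, since git paths are always forward-slash separated.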


Tuesday, March 10, 2026

This is a continuation of the previous article; see it for context. Here we extend that workflow so the generated report includes the following:

- LLM-generated summary of “who wants what”

- Turnaround-time bar chart for closed issues

- Flexible sampling window as an input parameter
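The turnaround computation is simple enough to sketch up front (the timestamps below are hypothetical; the workflow reads them from each issue's `created_at` and `closed_at`):

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%SZ"  # GitHub API timestamp format

def turnaround_days(created_at, closed_at):
    """Fractional days between issue creation and close."""
    delta = datetime.strptime(closed_at, FMT) - datetime.strptime(created_at, FMT)
    return delta.total_seconds() / 86400.0

days = turnaround_days("2026-03-02T09:00:00Z", "2026-03-05T21:00:00Z")  # 3.5
```

Dividing total seconds by 86400 keeps fractional days, so a half-day fix is distinguishable from a same-minute close in the bar chart.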


The YAML now follows:

---

name: Weekly Terraform azurerm hotspot report with AI summary and turnaround chart

on:

  workflow_dispatch:

    inputs:

      window_days:

        description: "Number of days back to collect closed issues (integer)"

        required: false

        default: "7"

  schedule:

    - cron: "0 13 * * MON"

permissions:

  contents: write

  pull-requests: write

  issues: read

jobs:

  report:

    runs-on: ubuntu-latest

    steps:

      - name: Check out repository

        uses: actions/checkout@v4

        with:

          fetch-depth: 0

      - name: Set up Python

        uses: actions/setup-python@v5

        with:

          python-version: "3.11"

      - name: Install dependencies

        run: |

          python -m pip install --upgrade pip

          pip install requests pandas matplotlib python-hcl2

      - name: Prepare environment variables

        id: env

        run: |

          echo "WINDOW_DAYS=${{ github.event.inputs.window_days || '7' }}" >> $GITHUB_ENV

          echo "REPO=${GITHUB_REPOSITORY}" >> $GITHUB_ENV

      - name: Fetch closed issues and linked PRs (window)

        id: fetch

        env:

          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

          REPO: ${{ env.REPO }}

          WINDOW_DAYS: ${{ env.WINDOW_DAYS }}

        run: |

          python - <<'PY'

          import os, requests, json, datetime, re

          REPO = os.environ["REPO"]

          TOKEN = os.environ["GH_TOKEN"]

          WINDOW_DAYS = int(os.environ.get("WINDOW_DAYS","7"))

          HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

          since = (datetime.datetime.utcnow() - datetime.timedelta(days=WINDOW_DAYS)).isoformat() + "Z"

          def gh_get(url, params=None):

            r = requests.get(url, headers=HEADERS, params=params)

            r.raise_for_status()

            return r.json()

          issues_url = f"https://api.github.com/repos/{REPO}/issues"

          params = {"state":"closed","since":since,"per_page":100}

          # paginate: a single call caps out at 100 issues

          items = []

          page = 1

          while True:

            batch = gh_get(issues_url, params={**params, "page": page})

            if not batch:

              break

            items.extend(batch)

            page += 1

          issues = []

          for i in items:

            if "pull_request" in i:

              continue

            # fetch comments to find any PR links

            comments = gh_get(i["comments_url"], params={"per_page":100})

            pr_urls = set()

            for c in comments:

              body = c.get("body","") or ""

              for m in re.findall(r"https://github\.com/[^/\s]+/[^/\s]+/pull/\d+", body):

                pr_urls.add(m)

              for m in re.findall(r"(?:^|\s)#(\d+)\b", body):

                pr_urls.add(f"https://github.com/{REPO}/pull/{m}")

            issues.append({

              "number": i["number"],

              "title": i.get("title",""),

              "user": i.get("user",{}).get("login",""),

              "created_at": i.get("created_at"),

              "closed_at": i.get("closed_at"),

              "html_url": i.get("html_url"),

              "comments": [{"id":c.get("id"), "body":c.get("body",""), "created_at":c.get("created_at")} for c in comments],

              "pr_urls": sorted(pr_urls)

            })

          with open("issues.json","w") as f:

            json.dump(issues, f, indent=2)

          print(f"WROTE_ISSUES={len(issues)}")

          PY

      - name: Resolve PRs, collect touched workload apps and azurerm types

        id: analyze

        env:

          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

          REPO: ${{ env.REPO }}

        run: |

          python - <<'PY'

          import os, json, re, requests, subprocess

          import hcl2

          from collections import defaultdict

          REPO = os.environ["REPO"]

          TOKEN = os.environ["GH_TOKEN"]

          HEADERS = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"}

          def gh_get(url, params=None):

            r = requests.get(url, headers=HEADERS, params=params)

            r.raise_for_status()

            return r.json()

          def gh_get_text(url):

            r = requests.get(url, headers=HEADERS)

            r.raise_for_status()

            return r.text

          def pr_number_from_url(u):

            m = re.search(r"/pull/(\d+)", u)

            return int(m.group(1)) if m else None

          def list_pr_files(pr_number):

            url = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}/files"

            files = []

            page = 1

            while True:

              batch = gh_get(url, params={"per_page":100,"page":page})

              if not batch:

                break

              files.extend(batch)

              page += 1

            return files

          def get_pr_head_sha(pr_number):

            url = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}"

            pr = gh_get(url)

            return pr["head"]["sha"]

          def get_file_at_sha(path, sha):

            url = f"https://api.github.com/repos/{REPO}/contents/{path}"

            r = requests.get(url, headers=HEADERS, params={"ref": sha})

            if r.status_code == 404:

              return None

            r.raise_for_status()

            data = r.json()

            if isinstance(data, dict) and data.get("type") == "file" and data.get("download_url"):

              return gh_get_text(data["download_url"])

            return None

          def parse_azurerm_resource_types_from_tf(tf_text):

            types = set()

            try:

              obj = hcl2.loads(tf_text)

            except Exception:

              return types

            res = obj.get("resource", [])

            if isinstance(res, list):

              for item in res:

                if isinstance(item, dict):

                  for rtype in item.keys():

                    if isinstance(rtype, str) and rtype.startswith("azurerm_"):

                      types.add(rtype)

            elif isinstance(res, dict):

              for rtype in res.keys():

                if isinstance(rtype, str) and rtype.startswith("azurerm_"):

                  types.add(rtype)

            return types

          def parse_module_sources_from_tf(tf_text):

            sources = set()

            try:

              obj = hcl2.loads(tf_text)

            except Exception:

              return sources

            mods = obj.get("module", [])

            if isinstance(mods, list):

              for item in mods:

                if isinstance(item, dict):

                  for _, body in item.items():

                    if isinstance(body, dict):

                      src = body.get("source")

                      if isinstance(src, str):

                        sources.add(src)

            elif isinstance(mods, dict):

              for _, body in mods.items():

                if isinstance(body, dict):

                  src = body.get("source")

                  if isinstance(src, str):

                    sources.add(src)

            return sources

          def normalize_local_module_path(source, app_dir):

            if source.startswith("./") or source.startswith("../"):

              import posixpath

              return posixpath.normpath(posixpath.join(app_dir, source))

            return None

          def list_repo_tf_files_under(dir_path, sha):

            # Requires the commit to be present in the local clone (fetch-depth: 0);

            # if the sha was never fetched, git ls-tree fails and we return [].

            try:

              out = subprocess.check_output(["git","ls-tree","-r","--name-only",sha,dir_path], text=True)

              paths = [p.strip() for p in out.splitlines() if p.strip().endswith(".tf")]

              return paths

            except Exception:

              return []

          def collect_azurerm_types_for_app(app_dir, sha):

            az_types = set()

            module_dirs = set()

            tf_paths = list_repo_tf_files_under(app_dir, sha)

            for p in tf_paths:

              txt = get_file_at_sha(p, sha)

              if not txt:

                continue

              az_types |= parse_azurerm_resource_types_from_tf(txt)

              for src in parse_module_sources_from_tf(txt):

                local = normalize_local_module_path(src, app_dir)

                if local:

                  module_dirs.add(local)

            for mdir in sorted(module_dirs):

              m_tf_paths = list_repo_tf_files_under(mdir, sha)

              for p in m_tf_paths:

                txt = get_file_at_sha(p, sha)

                if not txt:

                  continue

                az_types |= parse_azurerm_resource_types_from_tf(txt)

            return az_types

          with open("issues.json") as f:

            issues = json.load(f)

          issue_to_types = {}

          issue_turnaround = {} # issue -> days (float)

          for issue in issues:

            inum = issue["number"]

            created = issue.get("created_at")

            closed = issue.get("closed_at")

            if created and closed:

              from datetime import datetime

              fmt = "%Y-%m-%dT%H:%M:%SZ"

              try:

                dt_created = datetime.strptime(created, fmt)

                dt_closed = datetime.strptime(closed, fmt)

                delta_days = (dt_closed - dt_created).total_seconds() / 86400.0

              except Exception:

                delta_days = None

            else:

              delta_days = None

            issue_turnaround[inum] = delta_days

            pr_urls = issue.get("pr_urls",[])

            pr_numbers = sorted({pr_number_from_url(u) for u in pr_urls if pr_number_from_url(u)})

            types_for_issue = set()

            for prn in pr_numbers:

              sha = get_pr_head_sha(prn)

              files = list_pr_files(prn)

              touched_apps = set()

              for f in files:

                path = f.get("filename","")

                if path.startswith("workload/"):

                  parts = path.split("/")

                  if len(parts) >= 2:

                    touched_apps.add("/".join(parts[:2]))

              for app_dir in sorted(touched_apps):

                types_for_issue |= collect_azurerm_types_for_app(app_dir, sha)

            if types_for_issue:

              issue_to_types[inum] = sorted(types_for_issue)

          # Build CSV rows: issue, azurerm_type

          rows = []

          for inum, types in issue_to_types.items():

            for t in set(types):

              rows.append({"issue": inum, "azurerm_type": t})

          import pandas as pd

          df = pd.DataFrame(rows)

          df.to_csv("severity_data.csv", index=False)

          # Turnaround CSV

          ta_rows = []

          for inum, days in issue_turnaround.items():

            ta_rows.append({"issue": inum, "turnaround_days": days})

          pd.DataFrame(ta_rows).to_csv("turnaround.csv", index=False)

          with open("issue_to_azurerm_types.json","w") as f:

            json.dump(issue_to_types, f, indent=2)

          with open("issue_turnaround.json","w") as f:

            json.dump(issue_turnaround, f, indent=2)

          print(f"ISSUES_WITH_TYPES={len(issue_to_types)}")

          PY

      - name: Generate charts and markdown (severity + turnaround) and include AI summary

        id: report

        env:

          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}

        run: |

          set -euo pipefail

          python - <<'PY'

          import os, json, datetime

          import pandas as pd

          import matplotlib.pyplot as plt

          ts = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")

          os.makedirs("history", exist_ok=True)

          # Severity chart (issue frequency by azurerm type)

          if os.path.exists("severity_data.csv"):

            df = pd.read_csv("severity_data.csv")

            if not df.empty:

              counts = df.groupby("azurerm_type")["issue"].nunique().sort_values(ascending=False)

            else:

              counts = pd.Series(dtype=int)

          else:

            counts = pd.Series(dtype=int)

          png_sev = f"history/severity-by-azurerm-{ts}.png"

          plt.figure(figsize=(12,6))

          if not counts.empty:

            counts.plot(kind="bar")

            plt.title("Issue frequency by azurerm resource type")

            plt.xlabel("azurerm resource type")

            plt.ylabel("number of closed issues touching type")

          else:

            plt.text(0.5, 0.5, "No azurerm-impacting issues in window", ha="center", va="center")

            plt.axis("off")

          plt.tight_layout()

          plt.savefig(png_sev)

          plt.clf()

          # Turnaround bar chart for issues closed in window

          ta_png = f"history/turnaround-by-issue-{ts}.png"

          if os.path.exists("turnaround.csv"):

            ta = pd.read_csv("turnaround.csv")

            ta = ta.dropna(subset=["turnaround_days"])

            if not ta.empty:

              # sort by turnaround descending for visibility

              ta_sorted = ta.sort_values("turnaround_days", ascending=False).head(50)

              plt.figure(figsize=(12,6))

              plt.bar(ta_sorted["issue"].astype(str), ta_sorted["turnaround_days"])

              plt.xticks(rotation=90)

              plt.title("Turnaround time (days) for closed issues in window")

              plt.xlabel("Issue number")

              plt.ylabel("Turnaround (days)")

              plt.tight_layout()

              plt.savefig(ta_png)

              plt.clf()

            else:

              plt.figure(figsize=(8,2))

              plt.text(0.5,0.5,"No turnaround data available",ha="center",va="center")

              plt.axis("off")

              plt.savefig(ta_png)

              plt.clf()

          else:

            plt.figure(figsize=(8,2))

            plt.text(0.5,0.5,"No turnaround data available",ha="center",va="center")

            plt.axis("off")

            plt.savefig(ta_png)

            plt.clf()

          # Prepare condensed issues JSON for AI: one-line per issue with number, user, title

          if os.path.exists("issues.json"):

            with open("issues.json") as f:

              issues = json.load(f)

          else:

            issues = []

          condensed = []

          for i in issues:

            condensed.append({

              "number": i.get("number"),

              "user": i.get("user"),

              "title": i.get("title"),

              "html_url": i.get("html_url")

            })

          with open("issues_for_ai.json","w") as f:

            json.dump(condensed, f, indent=2)

          # Call OpenAI Chat to produce "who wants what" one-liners

          import subprocess

          ai_input = json.dumps(condensed)[:15000] # bound payload size; note a hard slice can truncate the JSON mid-token

          prompt = (

            "You are given a JSON array of GitHub issues with fields: number, user, title, html_url. "

            "Produce a concise list of one-line 'who wants what' statements, one per issue, in plain text. "

            "Format: '#<number> — <user> wants <succinct request derived from title>'. "

            "Do not add commentary. If the title is ambiguous, produce a best-effort short paraphrase."

            f"\n\nJSON:\n{ai_input}"

          )

          # Use curl to call OpenAI Chat Completions (Chat API). Adjust model as appropriate.


          OPENAI_KEY = os.environ.get("OPENAI_API_KEY")

          if OPENAI_KEY:

            payload = {

              "model": "gpt-4o-mini",

              "messages": [{"role":"system","content":"You are a concise summarizer."},

                           {"role":"user","content":prompt}],

              "temperature":0.2,

              "max_tokens":400

            }

            proc = subprocess.run([

              "curl","-sS","https://api.openai.com/v1/chat/completions",

              "-H", "Content-Type: application/json",

              "-H", f"Authorization: Bearer {OPENAI_KEY}",

              "-d", json.dumps(payload)

            ], capture_output=True, text=True)

            if proc.returncode == 0 and proc.stdout:

              try:

                resp = json.loads(proc.stdout)

                ai_text = resp["choices"][0]["message"]["content"].strip()

              except Exception:

                ai_text = "AI summary unavailable (parsing error)."

            else:

              ai_text = "AI summary unavailable (request failed)."

          else:

            ai_text = "AI summary skipped (no OPENAI_API_KEY)."

          # Write markdown report combining charts and AI summary

          md_path = f"history/severity-report-{ts}.md"

          with open(md_path, "w") as f:

            f.write("# Weekly Terraform azurerm hotspot report\n\n")

            f.write(f"**Window (days):** {os.environ.get('WINDOW_DAYS','7')}\n\n")

            f.write("## AI Summary (who wants what)\n\n")

            f.write("```\n")

            f.write(ai_text + "\n")

            f.write("```\n\n")

            f.write("## Top azurerm resource types by issue frequency\n\n")

            if not counts.empty:

              f.write("![" + os.path.basename(png_sev) + "](" + os.path.basename(png_sev) + ")\n\n")

              f.write(counts.head(30).to_frame("issues").to_markdown() + "\n\n")

            else:

              f.write("No azurerm-impacting issues found in the selected window.\n\n")

            f.write("## Turnaround time for closed issues (days)\n\n")

            f.write("![" + os.path.basename(ta_png) + "](" + os.path.basename(ta_png) + ")\n\n")

            f.write("## Data artifacts\n\n")

            f.write("- `severity_data.csv` — per-issue azurerm type mapping\n")

            f.write("- `turnaround.csv` — per-issue turnaround in days\n")

          # log generated artifact paths (informational; not formal step outputs)

          print(f"REPORT_MD={md_path}")

          print(f"REPORT_PNG={png_sev}")

          print(f"REPORT_TA_PNG={ta_png}")

          PY

      - name: Add report files to history and commit via PR

        id: create_pr

        uses: peter-evans/create-pull-request@v6

        with:

          commit-message: "Add weekly Terraform azurerm hotspot report and charts (prune to last 10)"

          title: "Weekly Terraform azurerm hotspot report"

          body: |

            This PR adds the latest weekly azurerm hotspot report and charts under `history/`.

            The workflow prunes older reports to keep at most 10 report sets.

          branch: "weekly-terraform-azurerm-hotspots"

          base: "main"

          add-paths: "history/**"

      - name: Prune history to max 10 report sets (post-commit)

        if: steps.create_pr.outcome == 'success'

        run: |

          python - <<'PY'

          import os, re

          from pathlib import Path

          hist = Path("history")

          hist.mkdir(exist_ok=True)


          # Collect timestamps by grouping filenames that contain the same timestamp

          groups = {}

          for p in hist.iterdir():

            m = re.search(r"(\d{8}-\d{6})", p.name)

            if not m:

              continue

            ts = m.group(1)

            groups.setdefault(ts, []).append(p)

          timestamps = sorted(groups.keys(), reverse=True)

          keep = set(timestamps[:10])

          drop = [p for ts, files in groups.items() if ts not in keep for p in files]

          for p in drop:

            try:

              p.unlink()

            except Exception:

              pass

          print(f"Pruned {len(drop)} files; kept {len(keep)} report sets.")

          PY

      - name: Notify runbook webhook (which will send az communication email)

        if: steps.create_pr.outcome == 'success'

        env:

          RUNBOOK_WEBHOOK_URL: ${{ secrets.RUNBOOK_WEBHOOK_URL }}

          PR_URL: ${{ steps.create_pr.outputs.pull-request-url }}

          WINDOW_DAYS: ${{ env.WINDOW_DAYS }}

        run: |

          # The runbook webhook is expected to accept a JSON payload and perform the az communication email send.

          # Adjust payload keys to match your runbook's expected schema.

          payload=$(jq -n \

            --arg pr "$PR_URL" \

            --arg window "$WINDOW_DAYS" \

            '{subject: ("Weekly Terraform azurerm hotspot report - " + $window + "d"), body: ("A new weekly azurerm hotspot report has been generated. Review the PR: " + $pr), pr_url: $pr, window_days: $window}')

          curl -sS -X POST "$RUNBOOK_WEBHOOK_URL" \

            -H "Content-Type: application/json" \

            -d "$payload"

      - name: Output artifact list

        if: always()

        run: |

          echo "Generated files in history/:"

          ls -la history || true


Full article: https://1drv.ms/w/c/d609fb70e39b65c8/IQCgI3nUNXvURaFZyMS0qLj5AUMpj2i7dlU_xryBURBX_eA?e=ZgqfcO


Monday, March 9, 2026

 

A GitHub blog [1] describes how GitHub Models can be invoked directly inside GitHub Actions so that issue triage, summarization, and decision-making happen automatically as part of a repository's workflow. It begins by establishing that GitHub Actions must be granted explicit permissions to read issues, update them, and access AI models, because the AI inference step is treated like any other privileged operation inside a workflow. Once permissions are in place, the article walks through examples that show how an AI model can read the title and body of a newly opened issue, evaluate whether the information is complete, and then decide whether to request more details from the user. The workflow fetches the issue content, sends it to the model, and uses the model's output to branch the workflow: if the model determines the issue lacks reproduction steps, the workflow posts a comment asking for more information; if the issue is complete, the workflow proceeds without intervention. This pattern demonstrates how AI can be embedded as a decision-making component inside GitHub Actions, using the model's output to drive conditional logic. The article emphasizes security considerations such as minimizing permissions to reduce the risk of prompt injection, and it shows how the AI inference action can be used repeatedly to analyze text, generate summaries, or classify issues.
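
The branch-on-model-output pattern reduces to a few lines once the model is asked to reply with structured JSON. The sketch below is illustrative only: the `{"complete": ..., "missing": [...]}` reply shape is an assumption, not the blog's actual schema, and posting the resulting comment back to GitHub is left out.

```python
import json

def triage_issue(model_reply: str) -> dict:
    """Map a model's JSON verdict to the next workflow action.

    Assumes (hypothetically) the model was instructed to answer with
    {"complete": bool, "missing": [list of absent details]}.
    """
    try:
        verdict = json.loads(model_reply)
    except json.JSONDecodeError:
        # Unparseable output: fail safe and take no automatic action.
        return {"action": "skip", "reason": "unparseable model output"}
    if verdict.get("complete"):
        return {"action": "proceed"}
    missing = ", ".join(verdict.get("missing", [])) or "more detail"
    return {"action": "comment",
            "body": f"Thanks for the report! Could you add: {missing}?"}

print(triage_issue('{"complete": false, "missing": ["reproduction steps"]}'))
```

A real workflow would pass `body` to a `gh issue comment` step only when `action` is `comment`; the complete-issue path simply continues.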

 

The workflow described in the article is oriented toward evaluating issue completeness and generating a priority and summary, but the same mechanism can be adapted to produce deeper operational intelligence such as hotspots by module or component, time-to-close trends, and severity distribution. Instead of prompting the model to determine whether a bug report is actionable, the workflow can fetch a batch of issues, either all open issues or those created within a defined time window, and pass them to the model with instructions to categorize each issue by component, module, or subsystem. The model can be prompted to extract structured fields such as inferred component, likely root cause category, and severity level based on the issue description. Once the model returns structured JSON, the workflow can aggregate the results using a script step that counts issues per component, computes severity distribution, and calculates the average or median time to close by comparing creation and closing timestamps. These aggregated metrics can then be written into a JSON or CSV artifact that the workflow commits back into the repository or uses to generate an HTML or markdown report.
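
Once the model has returned structured records, the aggregation itself needs no AI. A minimal stdlib sketch, assuming hypothetical per-issue records that merge the model-inferred `component` and `severity` with the issue's real timestamps:

```python
from collections import Counter
from datetime import datetime
from statistics import median

# Hypothetical records: model-inferred fields merged with issue timestamps.
classified = [
    {"component": "networking", "severity": "high",
     "created_at": "2026-03-01T10:00:00Z", "closed_at": "2026-03-03T10:00:00Z"},
    {"component": "storage", "severity": "low",
     "created_at": "2026-03-02T09:00:00Z", "closed_at": "2026-03-02T21:00:00Z"},
    {"component": "networking", "severity": "medium",
     "created_at": "2026-03-04T08:00:00Z", "closed_at": "2026-03-05T08:00:00Z"},
]

FMT = "%Y-%m-%dT%H:%M:%SZ"

def days_to_close(rec):
    # Duration between creation and closure, in fractional days.
    delta = (datetime.strptime(rec["closed_at"], FMT)
             - datetime.strptime(rec["created_at"], FMT))
    return delta.total_seconds() / 86400.0

issues_per_component = Counter(r["component"] for r in classified)
severity_distribution = Counter(r["severity"] for r in classified)
median_days = median(days_to_close(r) for r in classified)

print(issues_per_component.most_common(1))   # -> [('networking', 2)]
print(dict(severity_distribution))           # -> {'high': 1, 'low': 1, 'medium': 1}
print(median_days)                           # -> 1.0
```

The three results map directly onto the report sections above: hotspot ranking, severity distribution, and a time-to-close statistic ready for the CSV artifact.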

 

To compute hotspots by module or component, the workflow would prompt the model with instructions to classify each issue into one of the repository's known modules or to infer the module from filenames, stack traces, or keywords. The output can be tallied to reveal which modules accumulate the most issues over a given period. For time-to-close trends, the workflow can fetch closed issues, compute the duration between creation and closure, and then ask the model to summarize patterns or anomalies in those durations. Severity distribution can be generated by prompting the model to assign a severity level to each issue based on its description and then aggregating the counts. These results can be visualized by adding a step that uses a scripting language such as Python or Node.js to generate charts, which can be embedded into an HTML dashboard or markdown file. The AI model's role becomes classification, extraction, and summarization, while the workflow handles data retrieval, aggregation, and visualization. The resulting output can also be committed back into the repository or published through GitHub Pages so leadership can view the trends without accessing raw issue data.
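
For the time-to-close trend specifically, the bucketing is again pure standard library. A sketch with hypothetical timestamps, grouping turnaround by the ISO week in which each issue was closed:

```python
from collections import defaultdict
from datetime import datetime
from statistics import median

# Hypothetical (created_at, closed_at) pairs for recently closed issues.
closed_issues = [
    ("2026-02-16T09:00:00Z", "2026-02-18T09:00:00Z"),
    ("2026-02-17T09:00:00Z", "2026-02-21T09:00:00Z"),
    ("2026-02-24T09:00:00Z", "2026-02-25T09:00:00Z"),
]

FMT = "%Y-%m-%dT%H:%M:%SZ"

# Bucket turnaround (fractional days) by the ISO week the issue was closed in.
weekly = defaultdict(list)
for created, closed in closed_issues:
    dt_created = datetime.strptime(created, FMT)
    dt_closed = datetime.strptime(closed, FMT)
    days = (dt_closed - dt_created).total_seconds() / 86400.0
    iso_year, iso_week, _ = dt_closed.isocalendar()
    weekly[f"{iso_year}-W{iso_week:02d}"].append(days)

# Median turnaround per week, oldest first: the series a trend line would plot.
trend = {week: median(vals) for week, vals in sorted(weekly.items())}
print(trend)
```

Each weekly median becomes one point on the trend chart; feeding successive runs' CSVs into the same bucketing yields the accumulated history mentioned earlier.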

 

Sample GitHub workflow for severity distribution:

name: Weekly Severity Distribution Report

 

on:

  workflow_dispatch:

  schedule:

    - cron: "0 13 * * MON"

 

permissions:

  contents: write

  pull-requests: write

  issues: write

  id-token: write

 

jobs:

  severity-report:

    runs-on: ubuntu-latest

 

    steps:

      - name: Check out repository

        uses: actions/checkout@v4

 

      - name: Set up Python

        uses: actions/setup-python@v5

        with:

          python-version: "3.11"

 

      - name: Install Python dependencies

        run: |

          pip install matplotlib pandas requests

 

      - name: Fetch closed issues and associated PRs

        id: fetch

        env:

          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

        run: |

          python << 'EOF'

          import os, requests, json, datetime

 

          repo = os.environ["GITHUB_REPOSITORY"]

          token = os.environ["GH_TOKEN"]

          headers = {"Authorization": f"Bearer {token}"}

 

          since = (datetime.datetime.utcnow() - datetime.timedelta(days=7)).isoformat() + "Z"

          issues_url = f"https://api.github.com/repos/{repo}/issues?state=closed&since={since}"

          resp = requests.get(issues_url, headers=headers)

          resp.raise_for_status()

          issues = resp.json()

 

          results = []

          for issue in issues:

              if "pull_request" in issue:

                  continue

 

              comments = requests.get(issue["comments_url"], headers=headers).json()

              closing_pr = None

              for c in comments:

                  if "pull request" in c.get("body","").lower():

                      closing_pr = c["body"]

 

              results.append({"issue": issue, "closing_pr": closing_pr})

 

          with open("issues.json","w") as f:

              json.dump(results,f,indent=2)

          EOF

 

      - name: Compute module-touch counts (severity = issue frequency)

        id: analyze

        env:

          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

        run: |

          python << 'EOF'

          import os, json, requests, re, pandas as pd

 

          repo = os.environ["GITHUB_REPOSITORY"]

          token = os.environ["GH_TOKEN"]

          headers = {"Authorization": f"Bearer {token}"}

 

          with open("issues.json") as f:

              issues = json.load(f)

 

          rows = []

          for entry in issues:

              issue = entry["issue"]

              pr_number = None

 

              if entry["closing_pr"]:

                  m = re.search(r"#(\d+)", entry["closing_pr"])

                  if m:

                      pr_number = m.group(1)

 

              if not pr_number:

                  continue

 

              pr_files = requests.get(

                  f"https://api.github.com/repos/{repo}/pulls/{pr_number}/files",

                  headers=headers

              ).json()

 

              modules = set()

              for f in pr_files:

                  path = f["filename"]

                  module = path.split("/")[0]

                  modules.add(module)

 

              for m in modules:

                  rows.append({"module": m, "issue_id": issue["number"]})

 

          df = pd.DataFrame(rows)

          df.to_csv("severity_data.csv", index=False)

          EOF

 

      - name: Generate severity distribution chart

        run: |

          python << 'EOF'

          import pandas as pd

          import matplotlib.pyplot as plt

 

          df = pd.read_csv("severity_data.csv")

          pivot = df.groupby("module").size()

 

          plt.figure(figsize=(10,6))

          pivot.plot(kind="bar")

          plt.title("Issue Frequency by Module (Past Week)")

          plt.xlabel("Module")

          plt.ylabel("Number of Issues Touching Module")

          plt.tight_layout()

          plt.savefig("severity_distribution.png")

          EOF

 

      - name: Prepare history folder

        run: |

          mkdir -p history

          cp severity_distribution.png history/

 

      - name: Create pull request with chart

        id: create_pr

        uses: peter-evans/create-pull-request@v6

        with:

          commit-message: "Add weekly module issue-frequency chart"

          title: "Weekly Module Issue-Frequency Report"

          body: "This PR adds the latest issue-frequency chart (severity = number of issues touching each module)."

          branch: "weekly-severity-report"

          base: "main"

 

      - name: Azure CLI login

        uses: azure/login@v2

        with:

          client-id: ${{ secrets.AZURE_CLIENT_ID }}

          tenant-id: ${{ secrets.AZURE_TENANT_ID }}

          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}


 

      - name: Install ACS extension

        run: |

          az extension add --name communication

 

      - name: Email Teams channel with PR link

        if: success()

        env:

          ACS_CONNECTION_STRING: ${{ secrets.ACS_CONNECTION_STRING }}

          TEAMS_CHANNEL_EMAIL: ${{ secrets.TEAMS_CHANNEL_EMAIL }}

          PR_URL: ${{ steps.create_pr.outputs.pull-request-url }}

        run: |

          az communication email send \

            --connection-string "$ACS_CONNECTION_STRING" \

            --sender "DoNotReply@yourdomain.com" \

            --to "$TEAMS_CHANNEL_EMAIL" \

            --subject "Weekly Module Issue-Frequency Report" \

            --body-plain "The weekly module issue-frequency chart has been generated and is available in this pull request: $PR_URL"

References:

1. [The GitHub Blog](https://github.blog/ai-and-ml/generative-ai/automate-your-project-with-github-models-in-actions/): "Automate your project with GitHub Models in Actions"

Sunday, March 8, 2026

This is the summary of a book titled “People Glue: Hold on to your best people by setting them free” written by Helen Beedham and published by Practical Inspiration Publishing in 2026. This book looks at a simple but often misunderstood question: why people stay at work, and why they leave. Helen Beedham argues that money matters, but it is rarely the main reason people commit to an organization long term. What keeps people is a sense of freedom in how they work, paired with clear expectations about what needs to be done. When freedom is handled well, it becomes a strong force that helps organizations hold on to their best people.

The cost of losing employees is high. Replacing someone can cost anywhere from a large fraction of their annual salary to double it, once recruitment, onboarding, and lost productivity are taken into account. Beyond cost, frequent turnover weakens client relationships, slows teams down, and makes it harder for organizations to build the skills they will need in the future. Despite this, most workers in the US and the UK stay with an employer for fewer than four years. Research consistently shows that higher pay is not the main driver of job changes. Many people leave because they want more flexibility, more interesting work, and better opportunities to grow. For most workers, work–life balance and a sense of control over their time matter more than compensation alone.

Through research and surveys, Beedham and her colleagues identified four kinds of freedom that matter most to people at work. The first, and by far the most important across all demographic groups, is autonomy. People want a say in when, where, and how they do their work. After the COVID-19 pandemic, organizations that forced a full return to the office saw higher turnover than those that offered remote or hybrid options. Flexibility has become a baseline expectation for many workers. Importantly, autonomy does not mean chaos or a lack of standards. It means trusting people to decide how best to meet agreed goals. When people feel overly monitored or micromanaged, their motivation drops. When they have room to set priorities and make decisions, they are more likely to hold themselves to high performance standards.

Meaningful work is the second major freedom. Most people want to feel that what they do matters, even if they define “meaning” differently. For some, it is about contributing to society. For others, it is about solving interesting problems, learning, or feeling part of a team. Many workplaces unintentionally strip meaning from work by filling schedules with meetings and urgent tasks, leaving little time for focused thinking. Research shows that people need dedicated time each week to work deeply, yet most get far less than they need. When organizations reduce unnecessary meetings, stress drops and productivity rises sharply. Meaningful work is less about constant happiness and more about being energized and focused on solving real problems together.

The third freedom is self-expression. People need to know that their ideas and perspectives are taken seriously. When someone speaks up and is ignored or dismissed, they are far less likely to contribute again. This problem affects many workers, but especially women and people from underrepresented groups. A lack of respect and belonging is a major reason people leave jobs. At the same time, self-expression does not mean saying everything without restraint. It depends on mutual respect, thoughtful communication, and an environment where disagreement is handled constructively. When people feel safe to speak honestly, they help surface problems early and often offer solutions leaders would otherwise miss.

The final freedom is growth. While survey respondents ranked it lower than the others, it still plays an important role, especially as skill shortages grow worldwide. Many workers feel their employers focus more on hiring new talent than developing the people they already have. Nearly half of employees say learning opportunities influence whether they stay. People want to grow in ways that fit their goals, not just the needs of their current role. They value challenging assignments, room to fail and learn, and visibility into possible future paths. Organizations that support internal movement, mentoring, and skill development tend to see higher engagement and retention.

A key message of the book is that freedom only works when expectations are clear. Giving people freedom without structure leads to confusion, uneven treatment, and frustration. Leaders need to be explicit about what needs to be done, who is responsible, what decisions people can make on their own, and where boundaries lie. Beedham emphasizes that enabling freedom does not mean letting everyone do whatever they want. It means being clear about goals, roles, timelines, and standards, while trusting people to decide how to meet them.

Freedom is also not a one time initiative. It requires ongoing adjustment. Leaders should pay attention to what works, what does not, and why. When people push boundaries, it is not always a problem. Sometimes it signals innovation or unclear expectations rather than bad intent. Overreacting by removing freedom or assigning blame damages trust and can drive high performers away. When standards truly matter, such as in areas like safety or data privacy, leaders need to explain why rules exist and enforce them consistently.

The book makes the case that retaining people is less about control and perks and more about trust, clarity, and respect. When people are given room to work in ways that suit them, feel their work has purpose, know their voices matter, and see opportunities to grow, they are far more likely to stay.

Saturday, March 7, 2026

 This is a summary of a book: “The DOSE Effect: Optimize Your Brain and Body by Boosting Your Dopamine, Oxytocin, Serotonin, and Endorphins” written by Tj Power, a neuroscientist and founder of DOSE Lab, and published by Dey Street in 2025. The book examines how modern lifestyles disrupt the neurochemical systems that regulate motivation, mood, social connection, and stress resilience. Drawing on neuroscience and behavioral research, Power focuses on four key neurotransmitters—dopamine, oxytocin, serotonin, and endorphins—and explains how everyday habits influence their balance. He argues that chronic stress, insufficient sleep, poor diet, and constant digital stimulation interfere with these systems, leading to reduced motivation, emotional instability, and diminished well-being. He proposes that healthier behaviors and environments can restore a better balance between modern stimuli and the brain’s responses to them.

Dopamine is presented as the primary driver of motivation and goal-directed behavior. It operates through a pleasure–pain mechanism in which effortful or uncomfortable actions initially produce strain but are followed by a sense of reward upon completion. This system evolved to reinforce survival-related behaviors, but in contemporary environments it is frequently overstimulated by effortless rewards such as highly processed food, alcohol, online shopping, and social media. These activities produce rapid dopamine spikes without corresponding effort, often followed by declines in mood and motivation. Repeated exposure to such stimuli narrows the range of activities that feel rewarding, contributing to compulsive behavior and reduced drive. In contrast, dopamine regulation is strengthened through sustained effort, structured routines, and engagement in meaningful pursuits. Consistently completing demanding tasks, maintaining order in one’s environment, and working toward long-term goals reinforces the association between effort and reward, gradually restoring motivation and psychological resilience.

He emphasizes that discipline is central to maintaining a stable dopamine system. Small, repeatable actions—such as maintaining personal routines or completing routine responsibilities—condition the brain to tolerate effort and delay gratification. Over time, this process supports a broader capacity for sustained focus and perseverance. Equally important is the presence of a clearly defined pursuit that provides direction and anticipation. Without an ongoing sense of purpose, achievements alone may fail to produce lasting satisfaction, whereas engagement in the pursuit itself supports motivation and emotional stability.

Oxytocin is described as the neurochemical foundation of social bonding, trust, and self-confidence. It is released during moments of affection, cooperation, and emotional connection, and it plays a critical role in forming and maintaining relationships. Low oxytocin levels are associated with loneliness, self-doubt, and social withdrawal, conditions that are exacerbated by habits such as excessive phone use, superficial online comparison, and reduced face-to-face interaction. Chronic deficits in social connection are portrayed as having significant psychological and physiological consequences. Conversely, oxytocin levels increase through acts of service, supportive relationships, and physical touch, all of which promote feelings of safety, belonging, and emotional stability. Regular interpersonal engagement and contribution to others’ well-being are presented as essential components of long-term mental health.

Serotonin is examined primarily through its connection to physical health and nutrition. Unlike other neurotransmitters, the majority of serotonin is produced in the gut, making dietary patterns and digestion central to emotional regulation. Diets high in ultra-processed foods and refined sugars are associated with fluctuations in mood, energy, and anxiety, while consistent, nutrient-dense eating supports more stable serotonin production. Sleep and exposure to natural light further influence serotonin levels, reinforcing circadian rhythms that promote calmness and sustained energy. Time spent outdoors, particularly in low-stimulation environments, is identified as a reliable way to improve mood, focus, and overall physiological balance.

Endorphins are characterized as the body’s primary mechanism for managing stress and physical discomfort. They evolved to mitigate pain and regulate emotional responses during periods of intense physical demand. In modern contexts, insufficient physical activity and prolonged sedentary behavior reduce endorphin release, leaving individuals more vulnerable to chronic stress and tension. Regular movement, particularly activities that combine strength, endurance, and short periods of high exertion, stimulates endorphin production and improves stress tolerance. Stretching and mobility practices further support this system by reducing physical tension and promoting relaxation.

Overall, he presents mental and emotional well-being as the outcome of interacting biological systems that are shaped by daily behavior. Rather than emphasizing short-term interventions or external solutions, the book argues for sustained, effort-based habits that align with the brain’s underlying neurochemistry. By prioritizing purposeful work, meaningful relationships, nutritious food, regular movement, adequate sleep, and time in natural environments, individuals can create conditions that support more stable motivation, emotional regulation, and long-term psychological health.


#codingexercise: https://1drv.ms/b/c/d609fb70e39b65c8/IQBBH30P0VQQQpbR9PdMI2mHAcj-baxH_XBgJ14c9j42tXI?e=Xc8Kok 

Friday, March 6, 2026

 Subarray Sum equals K

Given an array of integers nums and an integer k, return the total number of subarrays whose sum equals k.

A subarray is a contiguous non-empty sequence of elements within an array.

Example 1:

Input: nums = [1,1,1], k = 2

Output: 2

Example 2:

Input: nums = [1,2,3], k = 3

Output: 2

Constraints:

• 1 <= nums.length <= 2 * 10^4

• -1000 <= nums[i] <= 1000

• -10^7 <= k <= 10^7

class Solution {

    public int subarraySum(int[] nums, int k) {

        // No elements means no subarrays to count.
        if (nums == null || nums.length == 0) return 0;

        // sums[i] holds the prefix sum nums[0] + ... + nums[i].
        int[] sums = new int[nums.length];
        int sum = 0;
        for (int i = 0; i < nums.length; i++) {
            sum += nums[i];
            sums[i] = sum;
        }

        // Check every subarray nums[i..j] in O(n^2).
        int count = 0;
        for (int i = 0; i < nums.length; i++) {
            for (int j = i; j < nums.length; j++) {
                // Sum of nums[i..j] = sums[j] - sums[i-1] = nums[i] + (sums[j] - sums[i]).
                int current = nums[i] + (sums[j] - sums[i]);
                if (current == k) {
                    count += 1;
                }
            }
        }

        return count;
    }
}

[1,3], k=1 => 1

[1,3], k=3 => 1

[1,3], k=4 => 1

[2,2], k=4 => 1

[2,2], k=2 => 2

[2,0,2], k=2 => 4

[0,0,1], k=1 => 3

[0,1,0], k=1 => 2

[0,1,1], k=1 => 3

[1,0,0], k=1 => 3

[1,0,1], k=1 => 4

[1,1,0], k=1 => 2

[1,1,1], k=1 => 3

[-1,0,1], k=0 => 2

[-1,1,0], k=0 => 3

[1,0,-1], k=0 => 2

[1,-1,0], k=0 => 3

[0,-1,1], k=0 => 3

[0,1,-1], k=0 => 3
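The O(n^2) scan above passes, but the same counting can be done in a single O(n) pass with a hash map of prefix-sum frequencies: a subarray ending at the current index sums to k exactly when some earlier prefix equals (running sum - k). A minimal sketch of that approach (the class name `SolutionOptimal` is mine, not from the original post):

```java
import java.util.HashMap;
import java.util.Map;

class SolutionOptimal {

    public int subarraySum(int[] nums, int k) {
        // freq maps a running prefix sum to how many times it has occurred so far.
        Map<Integer, Integer> freq = new HashMap<>();
        freq.put(0, 1); // the empty prefix, so subarrays starting at index 0 are counted

        int sum = 0, count = 0;
        for (int num : nums) {
            sum += num;
            // A subarray ending here sums to k iff an earlier prefix equals sum - k.
            count += freq.getOrDefault(sum - k, 0);
            freq.merge(sum, 1, Integer::sum);
        }
        return count;
    }
}
```

Seeding the map with {0: 1} is the step that handles subarrays beginning at index 0; it also makes cases with zeros and negatives (such as [2,0,2] with k=2, or k=0 inputs above) fall out correctly without special handling.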