Monday, March 9, 2026

A GitHub blog post [1] describes how GitHub Models can be invoked directly inside GitHub Actions so that issue triage, summarization, and decision-making happen automatically as part of a repository's workflow. It begins by establishing that GitHub Actions must be granted explicit permissions to read issues, update them, and access AI models, because the AI inference step is treated like any other privileged operation inside a workflow. Once permissions are in place, the article walks through examples that show how an AI model can read the title and body of a newly opened issue, evaluate whether the information is complete, and then decide whether to request more details from the user. The workflow fetches the issue content, sends it to the model, and uses the model's output to branch the workflow: if the model determines the issue lacks reproduction steps, the workflow posts a comment asking for more information; if the issue is complete, the workflow proceeds without intervention. This pattern demonstrates how AI can be embedded as a decision-making component inside GitHub Actions, using the model's output to drive conditional logic. The article emphasizes security considerations such as minimizing permissions to reduce the risk of prompt injection, and it shows how the AI inference action can be used repeatedly to analyze text, generate summaries, or classify issues.
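The fetch-ask-branch pattern can be sketched in plain Python. Note that the endpoint URL, model name, and prompt below are illustrative assumptions, not taken from the article; the blog itself drives this through an AI inference action inside the workflow rather than a hand-rolled script.

```python
import json
import urllib.request

# Assumed GitHub Models inference endpoint and model name -- substitute
# the values your organization actually uses.
MODELS_URL = "https://models.github.ai/inference/chat/completions"
MODEL = "openai/gpt-4o-mini"

PROMPT = (
    "You are triaging a bug report. Reply with exactly COMPLETE if the "
    "issue below contains reproduction steps; otherwise reply NEEDS_INFO.\n\n"
)


def ask_model(issue_title: str, issue_body: str, token: str) -> str:
    """Send the issue title and body to the model and return its raw reply."""
    payload = json.dumps({
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": PROMPT + issue_title + "\n" + issue_body,
        }],
    }).encode()
    req = urllib.request.Request(
        MODELS_URL,
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


def needs_more_info(model_reply: str) -> bool:
    """The branching decision: the workflow comments on the issue only if True."""
    return "NEEDS_INFO" in model_reply.upper()

# In a workflow step this would look like:
#   reply = ask_model(issue["title"], issue["body"], os.environ["GH_TOKEN"])
#   if needs_more_info(reply):
#       post a comment asking for reproduction steps
```

The key design point is that the model's free-text reply is reduced to a constrained verdict before any conditional logic runs, which keeps the branch deterministic even if the model adds extra wording.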

The workflow described in the article is oriented toward evaluating issue completeness and generating a priority and summary, but the same mechanism can be adapted to produce deeper operational intelligence such as hotspots by module or component, time-to-close trends, and severity distribution. Instead of prompting the model to determine whether a bug report is actionable, the workflow can fetch a batch of issues (either all open issues or those created within a defined time window) and pass them to the model with instructions to categorize each issue by component, module, or subsystem. The model can be prompted to extract structured fields such as inferred component, likely root-cause category, and severity level based on the issue description. Once the model returns structured JSON, the workflow can aggregate the results using a script step that counts issues per component, computes severity distribution, and calculates the average or median time to close by comparing creation and closing timestamps. These aggregated metrics can then be written into a JSON or CSV artifact that the workflow commits back into the repository or uses to generate an HTML or Markdown report.
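Assuming the model returns one structured record per issue (the field names and sample values here are hypothetical), the aggregation step reduces to a short script along these lines:

```python
import json
import statistics
from collections import Counter
from datetime import datetime

# Hypothetical structured output from the model, merged with the issue's
# own timestamps from the GitHub API.
model_output = [
    {"number": 101, "component": "auth", "severity": "high",
     "created_at": "2026-02-20T10:00:00Z", "closed_at": "2026-02-22T10:00:00Z"},
    {"number": 102, "component": "auth", "severity": "low",
     "created_at": "2026-02-21T09:00:00Z", "closed_at": "2026-02-21T21:00:00Z"},
    {"number": 103, "component": "billing", "severity": "high",
     "created_at": "2026-02-19T08:00:00Z", "closed_at": "2026-02-25T08:00:00Z"},
]


def parse_ts(ts: str) -> datetime:
    """Parse GitHub's ISO-8601 timestamps (trailing Z) into datetimes."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))


def aggregate(records):
    """Count issues per component, tally severities, and compute median time to close."""
    per_component = Counter(r["component"] for r in records)
    severity = Counter(r["severity"] for r in records)
    hours = [
        (parse_ts(r["closed_at"]) - parse_ts(r["created_at"])).total_seconds() / 3600
        for r in records if r.get("closed_at")
    ]
    return {
        "issues_per_component": dict(per_component),
        "severity_distribution": dict(severity),
        "median_hours_to_close": statistics.median(hours) if hours else None,
    }


report = aggregate(model_output)
# The workflow would write this dict out as the JSON artifact it commits back.
print(json.dumps(report, indent=2))
```

The aggregation itself needs no AI: once the model has emitted clean JSON, counting and median computation are ordinary script work, which keeps the expensive inference step to one pass over the issue text.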

To compute hotspots by module or component, the workflow would prompt the model with instructions to classify each issue into one of the repository's known modules or to infer the module from filenames, stack traces, or keywords. The output can be tallied to reveal which modules accumulate the most issues over a given period. For time-to-close trends, the workflow can fetch closed issues, compute the duration between creation and closure, and then ask the model to summarize patterns or anomalies in those durations. Severity distribution can be generated by prompting the model to assign a severity level to each issue based on its description and then aggregating the counts. These results can be visualized by adding a step that uses a scripting language such as Python or Node.js to generate charts, which can be embedded into an HTML dashboard or Markdown file. The AI model's role becomes classification, extraction, and summarization, while the workflow handles data retrieval, aggregation, and visualization. The resulting output can also be committed back into the repository or published through GitHub Pages so leadership can view the trends without accessing raw issue data.
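In the workflow, the model itself would perform the classification from filenames, stack traces, or keywords; as a stand-in for the model, a minimal keyword heuristic illustrates the tally that surfaces hotspots (the keyword map and module names are made up for this sketch):

```python
from collections import Counter

# Hypothetical mapping from issue-text keywords to repository modules.
# In the real workflow the model infers the module instead.
MODULE_KEYWORDS = {
    "login": "auth",
    "token": "auth",
    "invoice": "billing",
    "chart": "reporting",
}


def infer_module(issue_text: str) -> str:
    """Assign an issue to the first module whose keyword appears in its text."""
    text = issue_text.lower()
    for keyword, module in MODULE_KEYWORDS.items():
        if keyword in text:
            return module
    return "unknown"


def hotspots(issue_texts):
    """Tally inferred modules to reveal which accumulate the most issues."""
    return Counter(infer_module(t) for t in issue_texts)


issues = [
    "Login fails after password reset",
    "Expired token not refreshed",
    "Invoice totals off by one cent",
]
print(hotspots(issues).most_common())  # auth leads with 2 issues, billing 1
```

Swapping the heuristic for a model call changes only `infer_module`; the tallying, ranking, and downstream charting stay identical, which is what makes the classification step easy to delegate to AI.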

Sample GitHub Actions workflow for the severity distribution (module issue-frequency) report:

```yaml
name: Weekly Severity Distribution Report

on:
  workflow_dispatch:
  schedule:
    - cron: "0 13 * * MON"

permissions:
  contents: write
  pull-requests: write
  issues: write
  id-token: write   # required for the OIDC-based azure/login step below

jobs:
  severity-report:
    runs-on: ubuntu-latest

    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install Python dependencies
        run: pip install matplotlib pandas requests

      - name: Fetch closed issues and associated PRs
        id: fetch
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          python << 'EOF'
          import os, requests, json, datetime

          repo = os.environ["GITHUB_REPOSITORY"]
          token = os.environ["GH_TOKEN"]
          headers = {"Authorization": f"Bearer {token}"}

          # Note: "since" filters by last-updated time, which is close
          # enough to closure time for a weekly report.
          since = (datetime.datetime.utcnow() - datetime.timedelta(days=7)).isoformat() + "Z"
          issues_url = f"https://api.github.com/repos/{repo}/issues?state=closed&since={since}&per_page=100"
          issues = requests.get(issues_url, headers=headers).json()

          results = []
          for issue in issues:
              # The issues endpoint also returns pull requests; skip them.
              if "pull_request" in issue:
                  continue

              # Heuristic: look for a comment mentioning the closing pull request.
              comments = requests.get(issue["comments_url"], headers=headers).json()
              closing_pr = None
              for c in comments:
                  if "pull request" in c.get("body", "").lower():
                      closing_pr = c["body"]

              results.append({"issue": issue, "closing_pr": closing_pr})

          with open("issues.json", "w") as f:
              json.dump(results, f, indent=2)
          EOF

      - name: Compute module-touch counts (severity = issue frequency)
        id: analyze
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          python << 'EOF'
          import os, json, requests, re, pandas as pd

          repo = os.environ["GITHUB_REPOSITORY"]
          token = os.environ["GH_TOKEN"]
          headers = {"Authorization": f"Bearer {token}"}

          with open("issues.json") as f:
              issues = json.load(f)

          rows = []
          for entry in issues:
              issue = entry["issue"]
              pr_number = None

              if entry["closing_pr"]:
                  m = re.search(r"#(\d+)", entry["closing_pr"])
                  if m:
                      pr_number = m.group(1)

              if not pr_number:
                  continue

              pr_files = requests.get(
                  f"https://api.github.com/repos/{repo}/pulls/{pr_number}/files",
                  headers=headers
              ).json()

              # Treat the top-level directory of each changed file as its module.
              modules = set()
              for pf in pr_files:
                  modules.add(pf["filename"].split("/")[0])

              for mod in modules:
                  rows.append({"module": mod, "issue_id": issue["number"]})

          df = pd.DataFrame(rows)
          df.to_csv("severity_data.csv", index=False)
          EOF

      - name: Generate severity distribution chart
        run: |
          python << 'EOF'
          import pandas as pd
          import matplotlib.pyplot as plt

          df = pd.read_csv("severity_data.csv")
          counts = df.groupby("module").size()

          plt.figure(figsize=(10, 6))
          counts.plot(kind="bar")
          plt.title("Issue Frequency by Module (Past Week)")
          plt.xlabel("Module")
          plt.ylabel("Number of Issues Touching Module")
          plt.tight_layout()
          plt.savefig("severity_distribution.png")
          EOF

      - name: Prepare history folder
        run: |
          mkdir -p history
          cp severity_distribution.png history/

      - name: Create pull request with chart
        id: create_pr
        uses: peter-evans/create-pull-request@v6
        with:
          commit-message: "Add weekly module issue-frequency chart"
          title: "Weekly Module Issue-Frequency Report"
          body: "This PR adds the latest issue-frequency chart (severity = number of issues touching each module)."
          branch: "weekly-severity-report"
          base: "main"

      # azure/login@v2 authenticates via OIDC federated credentials and takes
      # no client secret; the id-token: write permission above enables this.
      - name: Azure CLI login
        uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}

      - name: Install ACS extension
        run: az extension add --name communication

      - name: Email Teams channel with PR link
        if: success()
        env:
          ACS_CONNECTION_STRING: ${{ secrets.ACS_CONNECTION_STRING }}
          TEAMS_CHANNEL_EMAIL: ${{ secrets.TEAMS_CHANNEL_EMAIL }}
          PR_URL: ${{ steps.create_pr.outputs.pull-request-url }}
        run: |
          az communication email send \
            --connection-string "$ACS_CONNECTION_STRING" \
            --sender "DoNotReply@yourdomain.com" \
            --to "$TEAMS_CHANNEL_EMAIL" \
            --subject "Weekly Module Issue-Frequency Report" \
            --text "The weekly module issue-frequency chart has been generated and is available in this pull request: $PR_URL"
```

References:

1. [The GitHub Blog](https://github.blog/ai-and-ml/generative-ai/automate-your-project-with-github-models-in-actions/)
