Monday, April 13, 2026

 This is a summary of the book “From Panic to Profit: Uncover Value, Boost Revenue, and Grow Your Business with the 80/20 Principle,” written by Bill Canady and published by Wiley in 2025. The book traces the transformation of a struggling business, beginning with the emotional reality leaders face when their companies stall or decline. Canady acknowledges that fear, uncertainty, and doubt can quickly distort judgment, but insists that panic is unnecessary when a disciplined operating system exists to restore clarity and momentum. That system is the Profitable Growth Operating System, or PGOS, a framework built on the 80/20 principle—the idea that a small fraction of actions, customers, products, or processes generates the majority of results. Canady treats this principle not as a clever heuristic but as a natural law that leaders must learn to see and apply if they want to redirect their organizations toward profitable growth.

Most companies do not naturally operate with an 80/20 mindset. They spread their attention across too many initiatives, customers, and internal processes, diluting their ability to generate meaningful results. Canady argues that the first step in reversing this pattern is to redefine the organization’s current state through the lens of 80/20 analysis. This requires gathering real data—about customers, employees, products, and markets—so that leaders can distinguish the vital few from the trivial many. As he puts it, “in the absence of data, knowledge, and understanding, there is an intellectual and emotional vacuum almost instantly filled by FUD: Fear, Uncertainty and Doubt,” a line that captures his belief that disciplined measurement is the antidote to fear-driven decision-making.
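The 80/20 analysis Canady describes can be made concrete in a few lines of shell. A hypothetical sketch (the customer data is invented) that finds how many customers account for 80% of revenue:

```shell
#!/usr/bin/env bash
# Hypothetical 80/20 cut: how many customers cover 80% of revenue?
set -euo pipefail

# Invented example data: customer,revenue
cat > /tmp/revenue.csv <<'EOF'
acme,500
globex,300
initech,100
umbrella,60
hooli,40
EOF

sort -t, -k2,2 -rn /tmp/revenue.csv | awk -F, '
  { name[NR] = $1; rev[NR] = $2; total += $2 }
  END {
    run = 0
    for (i = 1; i <= NR; i++) {
      run += rev[i]
      printf "%-10s cumulative %.0f%%\n", name[i], 100 * run / total
      if (run / total >= 0.80) {
        printf "%d of %d customers cover 80%% of revenue\n", i, NR
        exit
      }
    }
  }'
```

Run against real customer, product, or SKU revenue extracts, the same cut surfaces the vital few that PGOS says deserve the bulk of attention.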

But data alone is not enough. Canady describes a leadership structure essential for PGOS to work: a visionary who sets direction and enforces a culture of pace, transparency, and results; one or more prophets who translate that vision into strategy and processes; and operators who run the business day to day while applying 80/20 thinking to their own domains. This triumvirate must be aligned, active, and committed, because PGOS is not a theoretical exercise—it is a system that must be lived through daily decisions, trade-offs, and disciplined execution.

Once the need for change is established, Canady turns to the importance of simplification. He argues that many companies instinctively cut staff when they need to streamline, but the real inefficiencies often lie elsewhere: in unproductive product lines, unprofitable customers, outdated processes, or organizational habits that create friction. Simplification, in his view, means stripping away the activities that consume energy without contributing to growth. This may involve reducing product variety, narrowing customer focus, or redesigning workflows. Such decisions are rarely easy, and they often provoke resistance from people invested in the old ways, but Canady insists that simplification is essential for freeing resources to invest in the activities that truly matter.

With the organization’s priorities clarified, the next step is to set a concrete goal—one that expresses success in the simplest possible terms: money. Canady encourages leaders to articulate a financial target that can guide decisions over the next several years, even if the underlying data is imperfect. The goal becomes the anchor for strategy development, which must be grounded in updated assumptions about the company’s environment and a clear understanding of how revenue translates into profit. He stresses the importance of both short-term wins and long-term positioning, arguing that early successes build confidence and “earn the right to grow.”

Execution then becomes the central challenge. Canady describes the need to structure the organization so that critical initiatives have clear owners, defined responsibilities, and the resources required to succeed. He urges leaders to confront the “brutal facts” of their situation, borrowing Admiral Jim Stockdale’s reminder that optimism must never obscure reality. The action plan that emerges should translate goals and strategies into specific steps, timelines, and accountabilities. It is only through action—imperfect, iterative, and grounded in the 80/20 principle—that the plan becomes real.

The first 100 days of a PGOS transformation are particularly important. During this period, the company must gather data, refine goals, launch initiatives, and begin generating measurable improvements. These early efforts create the momentum that allows the organization to claim its “right to grow.” But Canady warns that vigilance must continue beyond the initial phase. Leaders must keep their eyes open, continuously evaluating whether their goals, strategies, and actions are truly driving revenue and profitability. They must be willing to revise plans, eliminate underperforming offerings, and make difficult decisions that keep the company aligned with its most productive activities.

By the end of the book, Canady presents PGOS not as a rigid formula but as a disciplined way of thinking—one that combines simplification, data-driven insight, aligned leadership, and relentless focus on the activities that generate the greatest impact. The 80/20 principle becomes both a diagnostic tool and a compass, guiding leaders through uncertainty toward a business that grows not by doing more, but by doing what matters most.


Sunday, April 12, 2026

 This is a runbook for migrating a Databricks workload from one region to another.

1. Step 1. Create the workspace to receive the workloads in the destination.

#!/usr/bin/env bash

set -euo pipefail

# Source subscription / RG

SOURCE_SUBSCRIPTION_ID="<SOURCE_SUBSCRIPTION_ID>"

SOURCE_RG="<SOURCE_DATABRICKS_RG>" # e.g., rg-dbx-prod

SOURCE_LOCATION="<SOURCE_REGION>" # e.g., westus2

# DR subscription / RG / region

TARGET_SUBSCRIPTION_ID="<TARGET_SUBSCRIPTION_ID>"

TARGET_LOCATION="eastus2"

SUFFIX="eus2"

TARGET_RG="${SOURCE_RG}-${SUFFIX}"

# 1. Set source subscription and export

az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

EXPORT_DIR="./tfexport-${SOURCE_RG}-${SUFFIX}"

mkdir -p "${EXPORT_DIR}"

echo "Exporting all resources from ${SOURCE_RG} using aztfexport..."

# Note: subcommand and flag spellings vary across aztfexport releases; recent
# versions use "resource-group" with --output-dir. Check --help for yours.

aztfexport resource-group \

  --output-dir "${EXPORT_DIR}" \

  --append \

  --non-interactive \

  "${SOURCE_RG}"

echo "Export complete. Files in ${EXPORT_DIR}"

# 2. Create target RG in target subscription

az account set --subscription "${TARGET_SUBSCRIPTION_ID}"

echo "Creating target resource group ${TARGET_RG} in ${TARGET_LOCATION}..."

az group create \

  --name "${TARGET_RG}" \

  --location "${TARGET_LOCATION}" \

  --output none

# 3. Rewrite names and locations in Terraform files

# - Add suffix to resource names

# - Change location to eastus2

# - Optionally change resource_group_name references

echo "Rewriting Terraform for DR region and names..."

find "${EXPORT_DIR}" -type f -name "*.tf" | while read -r FILE; do

  # Example: append -eus2 to name fields and change location

  # This is simplistic; refine with more precise sed/regex as needed.

  # Change location

  sed -i "s/\"${SOURCE_LOCATION}\"/\"${TARGET_LOCATION}\"/g" "${FILE}"

  # Append suffix to resource names (name = "xyz" → "xyz-eus2")

  # Be careful not to touch things like SKU names, etc.

  sed -i -E "s/(name *= *\"[a-zA-Z0-9_-]+)\"/\1-${SUFFIX}\"/g" "${FILE}"

  # If resource_group_name is hard-coded, retarget it

  sed -i "s/\"${SOURCE_RG}\"/\"${TARGET_RG}\"/g" "${FILE}"

done

echo "Terraform rewrite done. Review ${EXPORT_DIR} before applying."

# 4. (Optional) Initialize and apply Terraform in target subscription

# cd "${EXPORT_DIR}"

# terraform init

# terraform apply
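Before running terraform apply, it is worth checking that the sed rewrite left no stale references behind. A minimal sketch, with example values standing in for the placeholders above:

```shell
#!/usr/bin/env bash
# Hypothetical pre-apply check: report any exported .tf file that still
# mentions the source region or source resource group.
set -euo pipefail

EXPORT_DIR="./tfexport-rg-dbx-prod-eus2"   # example; match your run
SOURCE_LOCATION="westus2"                  # example source region
SOURCE_RG="rg-dbx-prod"                    # example source RG

LEFTOVERS=$(grep -rl -e "${SOURCE_LOCATION}" -e "${SOURCE_RG}\"" "${EXPORT_DIR}" 2>/dev/null || true)

if [[ -n "${LEFTOVERS}" ]]; then
  echo "Files still referencing the source region/RG:"
  echo "${LEFTOVERS}"
else
  echo "No leftover source references found."
fi
```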

2. Step 2. Copy all the workloads for the workspace.

#!/usr/bin/env bash

set -euo pipefail

# Databricks CLI profiles

SOURCE_PROFILE="src-dbx"

TARGET_PROFILE="dr-dbx"

# Temp export directory

EXPORT_DIR="./dbx-migration-eus2"

NOTEBOOKS_DIR="${EXPORT_DIR}/notebooks"

JOBS_FILE="${EXPORT_DIR}/jobs.json"

mkdir -p "${NOTEBOOKS_DIR}"

echo "Using Databricks profiles:"

echo " Source: ${SOURCE_PROFILE}"

echo " Target: ${TARGET_PROFILE}"

echo ""

# 1. Export all notebooks from source workspace

# export_dir is recursive, so one call from the workspace root replaces a
# per-path loop (and avoids a loop variable named PATH, which would clobber
# the shell's command lookup). The newer CLI spells this "export-dir".

echo "Exporting notebooks from source workspace..."

databricks --profile "${SOURCE_PROFILE}" workspace export_dir \

  / \

  "${NOTEBOOKS_DIR}" \

  --overwrite

echo "Notebook export complete."

# 2. Import notebooks into target workspace

echo "Importing notebooks into target workspace..."

# import_dir mirrors the directory tree and strips source-file extensions,
# replacing separate mkdirs and per-file import calls. The newer CLI spells
# this "import-dir".

databricks --profile "${TARGET_PROFILE}" workspace import_dir \

  "${NOTEBOOKS_DIR}" \

  / \

  --overwrite

echo "Notebook import complete."

# 3. Export jobs from source workspace

echo "Exporting jobs from source workspace..."

databricks --profile "${SOURCE_PROFILE}" jobs list --output JSON > "${JOBS_FILE}"

# 4. Recreate jobs in target workspace

echo "Recreating jobs in target workspace..."

jq -c '.jobs[]' "${JOBS_FILE}" | while read -r JOB; do

  # jobs create expects the settings object itself; strip server-assigned
  # fields (job_id lives outside .settings, timestamps inside it).

  SETTINGS=$(echo "${JOB}" | jq '.settings | del(.created_time, .modified_time, .schedule_status)')

  echo "Creating job: $(echo "${SETTINGS}" | jq -r '.name')"

  databricks --profile "${TARGET_PROFILE}" jobs create --json "${SETTINGS}"

done

done

echo "Job migration complete."

3. Step 3. Find allowed storage accounts and copy data to eus2

#!/usr/bin/env bash

set -euo pipefail

SOURCE_SUBSCRIPTION_ID="<SOURCE_SUBSCRIPTION_ID>"

TARGET_SUBSCRIPTION_ID="<TARGET_SUBSCRIPTION_ID>"

SOURCE_RG="<SOURCE_DATABRICKS_RG>"

SUFFIX="eus2"

# Databricks workspace info (source)

DATABRICKS_WS_NAME="<SOURCE_DATABRICKS_WORKSPACE_NAME>"

DATABRICKS_WS_RG="${SOURCE_RG}"

# 1. Get workspace VNet/subnets (assuming VNet injection)

az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

WS_INFO=$(az databricks workspace show -g "${DATABRICKS_WS_RG}" -n "${DATABRICKS_WS_NAME}")

VNET_ID=$(echo "${WS_INFO}" | jq -r '.parameters.customVirtualNetworkId.value // empty')

SUBNET_IDS=$(echo "${WS_INFO}" | jq -r '(.parameters.customPublicSubnetName.value // empty), (.parameters.customPrivateSubnetName.value // empty)' | sed "/^$/d")

echo "Workspace VNet: ${VNET_ID}"

echo "Workspace subnets (names):"

echo "${SUBNET_IDS}"

echo ""

# 2. Find storage accounts whose network rules allow these subnets

echo "Finding storage accounts with network rules allowing workspace subnets..."

STORAGE_ACCOUNTS=$(az storage account list --query "[].id" -o tsv)

MATCHED_SA=()

for SA_ID in ${STORAGE_ACCOUNTS}; do

  RULES=$(az storage account network-rule list --account-name "$(basename "${SA_ID}")" --resource-group "$(echo "${SA_ID}" | awk -F/ '{print $5}')" 2>/dev/null || echo "")

  if [[ -z "${RULES}" ]]; then

    continue

  fi

  for SUBNET in ${SUBNET_IDS}; do

    # Check if subnet name appears in virtualNetworkRules

    if echo "${RULES}" | jq -e --arg sn "${SUBNET}" 'any(.virtualNetworkRules[]?; .virtualNetworkResourceId | contains($sn))' >/dev/null 2>&1; then

      echo "Matched storage account: ${SA_ID} for subnet: ${SUBNET}"

      MATCHED_SA+=("${SA_ID}")

      break

    fi

  done

done

# Deduplicate (the :- default keeps set -u happy when nothing matched)

MATCHED_SA_UNIQ=($(printf "%s\n" "${MATCHED_SA[@]:-}" | sort -u | sed "/^$/d"))

echo ""

echo "Matched storage accounts:"

printf "%s\n" "${MATCHED_SA_UNIQ[@]:-}"

echo ""

# 3. For each matched storage account, copy data to corresponding DR storage account with eus2 suffix

for SA_ID in "${MATCHED_SA_UNIQ[@]}"; do

  SA_NAME=$(basename "${SA_ID}")

  SA_RG=$(echo "${SA_ID}" | awk -F/ '{print $5}')

  TARGET_SA_NAME="${SA_NAME}${SUFFIX}" # storage names must be 3-24 lowercase alphanumerics; trim if needed

  echo "Processing storage account:"

  echo " Source: ${SA_NAME} (RG: ${SA_RG})"

  echo " Target: ${TARGET_SA_NAME}"

  echo ""

  # Get source key

  SRC_KEY=$(az storage account keys list \

    --account-name "${SA_NAME}" \

    --resource-group "${SA_RG}" \

    --query "[0].value" -o tsv)

  # Switch to target subscription and get target key

  az account set --subscription "${TARGET_SUBSCRIPTION_ID}"

  TARGET_SA_RG="<TARGET_SA_RG_FOR_${TARGET_SA_NAME}>" # or derive if same naming pattern

  TGT_KEY=$(az storage account keys list \

    --account-name "${TARGET_SA_NAME}" \

    --resource-group "${TARGET_SA_RG}" \

    --query "[0].value" -o tsv)

  # Build connection strings

  SRC_CONN="DefaultEndpointsProtocol=https;AccountName=${SA_NAME};AccountKey=${SRC_KEY};EndpointSuffix=core.windows.net"

  TGT_CONN="DefaultEndpointsProtocol=https;AccountName=${TARGET_SA_NAME};AccountKey=${TGT_KEY};EndpointSuffix=core.windows.net"

  # List containers in source

  az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

  CONTAINERS=$(az storage container list \

    --connection-string "${SRC_CONN}" \

    --query "[].name" -o tsv)

  for CONT in ${CONTAINERS}; do

    echo "Copying container: ${CONT}"

    SRC_URL="https://${SA_NAME}.blob.core.windows.net/${CONT}"

    TGT_URL="https://${TARGET_SA_NAME}.blob.core.windows.net/${CONT}"

    # azcopy needs credentials for both endpoints. With AAD RBAC in place it
    # can reuse the current Azure CLI login; other auto-login types include
    # SPN, MSI, and DEVICE. Alternatively, append SAS tokens to the URLs.

    export AZCOPY_AUTO_LOGIN_TYPE=AZCLI

    azcopy copy "${SRC_URL}" "${TGT_URL}" \

      --recursive=true

    echo "Completed copy for container ${CONT}"

  done

  # Reset to source subscription for next iteration

  az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

done

echo "All matched storage accounts copied to DR counterparts."
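One pitfall in the ${SA_NAME}${SUFFIX} convention above: storage account names are limited to 3-24 lowercase alphanumeric characters, so a long source name plus a suffix can produce an invalid name. A small hypothetical helper, with made-up names for illustration:

```shell
#!/usr/bin/env bash
# Hypothetical helper: derive a valid target storage-account name by
# lowercasing, dropping non-alphanumerics, and trimming to fit the suffix.
set -euo pipefail

make_target_sa_name() {
  local src="$1" suffix="$2"
  local cleaned max_base
  cleaned=$(printf '%s' "${src}" | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9')
  max_base=$((24 - ${#suffix}))
  printf '%s%s\n' "${cleaned:0:max_base}" "${suffix}"
}

make_target_sa_name "mylongstorageaccountname01" "eus2"   # prints mylongstorageaccounteus2
make_target_sa_name "MyData" "eus2"                       # prints mydataeus2
```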




#codingexercises: CodingExercise-04-12-2026.docx

Saturday, April 11, 2026

 This is a runbook for migrating a set of microservices in the form of a UI and many APIs from one region to another

1. Step 1. Create all the containers in the destination.

#!/usr/bin/env bash

set -euo pipefail

# -----------------------------

# CONFIGURATION

# -----------------------------

SOURCE_RG="rg-microservices"

DEST_SUFFIX="eus2"

DEST_LOCATION="eastus2"

DEST_RG="${SOURCE_RG}-${DEST_SUFFIX}"

echo "Source RG: ${SOURCE_RG}"

echo "Destination RG: ${DEST_RG}"

echo "Destination Region: ${DEST_LOCATION}"

echo ""

# -----------------------------

# CREATE DESTINATION RG

# -----------------------------

echo "Creating destination resource group..."

az group create \

  --name "${DEST_RG}" \

  --location "${DEST_LOCATION}" \

  --output none

echo "Destination RG created."

echo ""

# -----------------------------

# ENUMERATE RESOURCES

# -----------------------------

echo "Enumerating resources in ${SOURCE_RG}..."

RESOURCE_IDS=$(az resource list -g "${SOURCE_RG}" --query "[].id" -o tsv)

if [[ -z "${RESOURCE_IDS}" ]]; then

  echo "No resources found in ${SOURCE_RG}"

  exit 1

fi

echo "Found resources:"

echo "${RESOURCE_IDS}"

echo ""

# -----------------------------

# EXPORT EACH RESOURCE WITH aztfexport

# -----------------------------

EXPORT_DIR="./export-${SOURCE_RG}-${DEST_SUFFIX}"

mkdir -p "${EXPORT_DIR}"

for RID in ${RESOURCE_IDS}; do

  NAME=$(basename "${RID}")

  NEW_NAME="${NAME}-${DEST_SUFFIX}"

  echo "Exporting resource:"

  echo " Source ID: ${RID}"

  echo " New Name: ${NEW_NAME}"

  echo ""

  # aztfexport takes the resource ID as a positional argument; renaming to
  # ${NEW_NAME} and relocating to ${DEST_LOCATION} is done afterward by
  # editing the generated Terraform, since the tool exports resources as-is.
  # Flag spellings vary by release; check "aztfexport resource --help".

  aztfexport resource \

    --output-dir "${EXPORT_DIR}" \

    --append \

    --non-interactive \

    "${RID}"

done

echo ""

echo "----------------------------------------"

echo "Export completed. Terraform files stored in:"

echo " ${EXPORT_DIR}"

echo "----------------------------------------"

2. Step 2. Copy all the configurations for each web service.

#!/usr/bin/env bash

set -euo pipefail

SOURCE_RG="rg-microservices"

DEST_SUFFIX="eus2"

DEST_RG="${SOURCE_RG}-${DEST_SUFFIX}"

echo "Source RG: ${SOURCE_RG}"

echo "Destination RG: ${DEST_RG}"

echo ""

# ---------------------------------------------

# ENUMERATE ALL WEB APPS IN SOURCE RG

# ---------------------------------------------

echo "Enumerating Web Apps in ${SOURCE_RG}..."

APPS=$(az webapp list -g "${SOURCE_RG}" --query "[].name" -o tsv)

if [[ -z "${APPS}" ]]; then

  echo "No Web Apps found in ${SOURCE_RG}"

  exit 1

fi

echo "Found Web Apps:"

echo "${APPS}"

echo ""

# ---------------------------------------------

# COPY CONFIG FOR EACH APP

# ---------------------------------------------

for APP in ${APPS}; do

  DEST_APP="${APP}-${DEST_SUFFIX}"

  echo "---------------------------------------------"

  echo "Copying configuration:"

  echo " Source: ${APP}"

  echo " Dest: ${DEST_APP}"

  echo "---------------------------------------------"

  # -----------------------------

  # GET SOURCE CONFIG

  # -----------------------------

  CONFIG=$(az webapp config show -g "${SOURCE_RG}" -n "${APP}")

  KIND=$(echo "${CONFIG}" | jq -r '.kind')

  LINUX_FX=$(echo "${CONFIG}" | jq -r '.linuxFxVersion')

  WINDOWS_FX=$(echo "${CONFIG}" | jq -r '.windowsFxVersion')

  STARTUP_CMD=$(echo "${CONFIG}" | jq -r '.appCommandLine // empty') # empty, not the string "null"

  # -----------------------------

  # COPY RUNTIME STACK

  # -----------------------------

  if [[ "${LINUX_FX}" != "null" && "${LINUX_FX}" != "" ]]; then

    echo "Applying Linux runtime stack: ${LINUX_FX}"

    az webapp config set \

      -g "${DEST_RG}" \

      -n "${DEST_APP}" \

      --linux-fx-version "${LINUX_FX}" \

      --startup-file "${STARTUP_CMD}" \

      --output none

  fi

  if [[ "${WINDOWS_FX}" != "null" && "${WINDOWS_FX}" != "" ]]; then

    echo "Applying Windows runtime stack: ${WINDOWS_FX}"

    az webapp config set \

      -g "${DEST_RG}" \

      -n "${DEST_APP}" \

      --windows-fx-version "${WINDOWS_FX}" \

      --output none

  fi

  # -----------------------------

  # COPY APP SETTINGS

  # -----------------------------

  echo "Copying app settings..."

  # Export as JSON and pass it with @file; an unquoted $(cat ...) expansion
  # would word-split any value containing spaces.

  az webapp config appsettings list \

    -g "${SOURCE_RG}" \

    -n "${APP}" \

    > /tmp/appsettings.json

  if [[ $(jq 'length' /tmp/appsettings.json) -gt 0 ]]; then

    az webapp config appsettings set \

      -g "${DEST_RG}" \

      -n "${DEST_APP}" \

      --settings @/tmp/appsettings.json \

      --output none

  fi

  # -----------------------------

  # COPY CONTAINER SETTINGS (IF ANY)

  # -----------------------------

  if [[ "${LINUX_FX}" == *"DOCKER"* ]]; then

    echo "Detected container-based app. Copying container settings..."

    IMAGE=$(echo "${LINUX_FX}" | sed 's/DOCKER|//')

    # Note: some az CLI versions return a name/value list here rather than
    # an object; adjust the jq selectors below accordingly.

    REGISTRY=$(az webapp config container show -g "${SOURCE_RG}" -n "${APP}")

    SERVER=$(echo "${REGISTRY}" | jq -r '.dockerRegistryServerUrl')

    USER=$(echo "${REGISTRY}" | jq -r '.dockerRegistryServerUserName')

    PASS=$(echo "${REGISTRY}" | jq -r '.dockerRegistryServerPassword')

    az webapp config container set \

      -g "${DEST_RG}" \

      -n "${DEST_APP}" \

      --docker-custom-image-name "${IMAGE}" \

      --docker-registry-server-url "${SERVER}" \

      --docker-registry-server-user "${USER}" \

      --docker-registry-server-password "${PASS}" \

      --output none

  fi

  echo "Completed configuration copy for ${APP} → ${DEST_APP}"

  echo ""

done

echo "---------------------------------------------"

echo "All Web App configurations copied successfully."

echo "---------------------------------------------"
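A hazard worth noting in settings copies like the one above: an unquoted $(cat ...) expansion word-splits values that contain spaces. A minimal sketch of array-based handling (the settings data is invented; the az call is commented out):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: collect KEY=VALUE settings into a bash array so each
# entry stays one argument even when its value contains spaces.
set -euo pipefail

SETTINGS=()
while IFS= read -r LINE; do
  if [[ -n "${LINE}" ]]; then
    SETTINGS+=("${LINE}")
  fi
done <<'EOF'
ConnStr=Server=db;User Id=app user
FeatureFlag=true
EOF

echo "Would set ${#SETTINGS[@]} settings"   # prints: Would set 2 settings
# az webapp config appsettings set -g "$DEST_RG" -n "$DEST_APP" --settings "${SETTINGS[@]}"
```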

3. Step 3. Repoint the DNS aliases, rebind certificates, and deploy images (the script below covers the DNS step).

#!/usr/bin/env bash

set -euo pipefail

SOURCE_RG="rg-microservices"

DEST_SUFFIX="eus2"

DEST_RG="${SOURCE_RG}-${DEST_SUFFIX}"

# DNS zone (example: example.com)

DNS_ZONE_RG="rg-dns"

DNS_ZONE_NAME="example.com"

echo "Source RG: ${SOURCE_RG}"

echo "Destination RG: ${DEST_RG}"

echo "DNS Zone: ${DNS_ZONE_NAME}"

echo ""

# ---------------------------------------------

# ENUMERATE ALL WEB APPS IN SOURCE RG

# ---------------------------------------------

echo "Enumerating Web Apps in ${SOURCE_RG}..."

APPS=$(az webapp list -g "${SOURCE_RG}" --query "[].name" -o tsv)

if [[ -z "${APPS}" ]]; then

  echo "No Web Apps found in ${SOURCE_RG}"

  exit 1

fi

echo "Found Web Apps:"

echo "${APPS}"

echo ""

# ---------------------------------------------

# PROCESS EACH APP

# ---------------------------------------------

for APP in ${APPS}; do

  DEST_APP="${APP}-${DEST_SUFFIX}"

  echo "---------------------------------------------"

  echo "Repointing DNS:"

  echo " Source App: ${APP}"

  echo " Dest App: ${DEST_APP}"

  echo "---------------------------------------------"

  # Get default hostnames

  SRC_HOST="${APP}.azurewebsites.net"

  DEST_HOST="${DEST_APP}.azurewebsites.net"

  # Get custom hostnames bound to the source app

  HOSTNAMES=$(az webapp config hostname list \

    -g "${SOURCE_RG}" \

    -n "${APP}" \

    --query "[].name" -o tsv)

  if [[ -z "${HOSTNAMES}" ]]; then

    echo "No custom hostnames found for ${APP}"

    continue

  fi

  echo "Custom hostnames:"

  echo "${HOSTNAMES}"

  echo ""

  # ---------------------------------------------

  # UPDATE DNS CNAME RECORDS

  # ---------------------------------------------

  for HOST in ${HOSTNAMES}; do

    # Extract the relative record name (e.g., api.example.com → api);
    # parameter expansion treats the zone's dot literally, unlike sed.

    RECORD_NAME="${HOST%.${DNS_ZONE_NAME}}"

    echo "Updating DNS CNAME:"

    echo " Host: ${HOST}"

    echo " Record name: ${RECORD_NAME}"

    echo " New target: ${DEST_HOST}"

    # Create or update the CNAME record

    az network dns record-set cname set-record \

      --resource-group "${DNS_ZONE_RG}" \

      --zone-name "${DNS_ZONE_NAME}" \

      --record-set-name "${RECORD_NAME}" \

      --cname "${DEST_HOST}" \

      --output none

    echo "DNS updated: ${HOST} → ${DEST_HOST}"

    echo ""

  done

done

echo "---------------------------------------------"

echo "All DNS aliases repointed successfully."

echo "---------------------------------------------"
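The record-name extraction in the DNS step deserves care: in a sed pattern, the dot of the zone name matches any character. A minimal sketch using bash suffix stripping instead, with example hostnames:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: derive a DNS record name from a hostname by stripping
# the zone suffix with parameter expansion (the dot stays literal).
set -euo pipefail

zone="example.com"

host="api.example.com"
record="${host%.${zone}}"
if [[ "${record}" == "${host}" ]]; then
  record="@"   # hostname equals the zone itself: the apex record
fi
echo "${record}"   # prints: api
```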


#Codingexercise: https://1drv.ms/w/c/d609fb70e39b65c8/IQBsEmahUOniS6mjF1KpHShiAXrShCZDA26CITHTjDsJzp4?e=CTKIwy

