Thursday, April 30, 2026

 This is a summary of a book titled “Wait, You Need It When?!?: The Essential Guide to Time Management, Productivity, and Powerful Habits That Get Things Done” written by Peter Economy and published by Career Press in 2026. This book argues that time is the one resource you can never replenish, yet many people treat it as if it were infinite. The result is a workday filled with drift: low-value tasks, constant interruptions, and habits that quietly consume hours. One estimate suggests employees spend about 51% of the workday on tasks that add little value, while social media, email checking, and unnecessary meetings further erode focus. The author stresses that this isn’t merely an efficiency issue; it is a life-management issue. “Money you can get more of, belongings come and go, but once you’ve burned through a particular piece of time, you can never retrieve it….There’s no going back, only forward.”

When time management breaks down, the consequences show up everywhere. Individually, it can mean rushed or sloppy work, missed deadlines, and fewer opportunities to grow. For organizations, it translates into productivity losses, lower quality, delayed delivery, and higher turnover. The damage can ripple outward to customers when follow-through falters, and to colleagues who may feel they are compensating for someone else’s disorganization. The author also highlights a less visible cost: when work expands to fill evenings and weekends, personal relationships and basic self-care are often the first to be squeezed out, leaving people both less present at home and less effective at work.

To regain control, the book emphasizes making deliberate choices about attention and priorities. That starts with ranking tasks by importance and urgency, setting goals that are challenging but realistic, and then translating those goals into small, actionable steps. It also means protecting concentration by eliminating distractions, delegating where appropriate, and using breaks strategically so focus can recover before it collapses. Practical tactics—like scheduling blocks of uninterrupted time for demanding work, tracking how you actually spend your hours, and learning to say no to nonessential requests—create the conditions for consistent progress. He encourages mindfulness as well: noticing the patterns that sabotage your intentions and staying flexible enough to adapt when circumstances change.

Because time feels different depending on what you’re doing, the author recommends building awareness of your subjective experience of it. Meaningful work can make hours pass quickly, while monotonous tasks can feel endless; stress and feeling “behind” can warp your sense of the day. A brief reset—such as a short mindfulness practice—can reduce the sensation of rushing and help you return to the present, where better choices are easier to make.

The author calls for a “serious business mindset”—a purpose-driven attitude that builds credibility and keeps your efforts aligned with your goals. One concrete way to support that mindset is to design a workspace that signals focus. Ergonomic tools, lighting and noise adjustments, and an organized layout all reduce friction. Even small environmental choices matter: research cited in the book suggests that the freedom to personalize a workspace can raise productivity, while plants can provide a modest boost; clutter, by contrast, makes sustained attention harder. He also notes that productivity is not simply a function of longer hours. Regular breaks and clear boundaries protect both performance and work-life balance, and they prevent others from assuming you are available at all times.

Interruptions are especially costly because each shift of attention has a recovery price; the book cites an average of 23 minutes and 15 seconds to fully return to a task after an interruption. To reduce that tax, he advises setting expectations with colleagues by blocking deep-work periods and clearly communicating when you will and won’t be reachable. Technology can reinforce these boundaries through “do not disturb” settings and website blockers, while collaboration tools can replace meetings that don’t require real-time discussion. Physical cues—like closing a door or using headphones—can help others recognize focus time. Just as important is practicing single-tasking: scheduling one to three hours for a single priority rather than bouncing between demands, and keeping “digital hygiene” strong by unsubscribing from unwanted lists, turning off nonessential notifications, and maintaining an orderly file system.

Sustained performance, the book suggests, comes from routines that balance structure with adaptability. By identifying your peak energy windows and building time blocks around them, you can create consistency without becoming rigid. Techniques like the Pomodoro method—working in focused 20- to 30-minute intervals followed by short breaks, with a longer break after several rounds—provide a simple rhythm that prevents burnout while keeping momentum. Goal setting, too, should be both disciplined and flexible. The author highlights the CLEAR framework (Collaborative, Limited, Emotional, Appreciable, Refinable), which encourages seeking input, keeping goals to a manageable number, tying them to what genuinely matters to you, breaking them into milestones you can recognize and celebrate, and refining them as conditions evolve.
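
The Pomodoro rhythm can be sketched as a small schedule generator. The interval lengths are parameters (the book cites 20- to 30-minute work blocks), and the class and method names here are illustrative, not from the book:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of a Pomodoro-style schedule: focused work intervals separated by
 *  short breaks, with a longer break after every few rounds. */
class PomodoroSketch {

    /** Build a flat plan of work and break intervals. */
    static List<String> schedule(int rounds, int workMin, int shortBreakMin,
                                 int longBreakMin, int roundsPerLongBreak) {
        List<String> plan = new ArrayList<>();
        for (int i = 1; i <= rounds; i++) {
            plan.add("work " + workMin + "m");
            if (i % roundsPerLongBreak == 0) {
                plan.add("long break " + longBreakMin + "m"); // recovery after several rounds
            } else if (i < rounds) {
                plan.add("short break " + shortBreakMin + "m");
            }
        }
        return plan;
    }
}
```

For example, `schedule(4, 25, 5, 20, 4)` yields four work blocks with short breaks in between and a long break at the end.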

Daily to-do lists play an important supporting role by freeing mental bandwidth and making priorities explicit. To make lists actionable rather than overwhelming, he draws on David Allen’s Getting Things Done approach: capture everything that demands attention, clarify the next action and desired outcome, organize tasks in a system that fits your contexts and deadlines, reflect regularly to delete, delegate, or reprioritize, and then engage with the items that will have the greatest impact. The same respect for time applies to meetings. With a significant portion of meetings viewed as ineffective and many running longer than an hour, the book recommends clarifying purpose, using a timed agenda, limiting attendance to the people who can decide or contribute meaningfully, and ending with clear action items and follow-up dates. Finally, he connects productivity to intrinsic motivation: when your work aligns with values, passions, and purpose, focus becomes easier to sustain. He encourages experimentation—trying new classes, volunteering, or networking in inspiring spaces—and reflecting on what energizes you, because “As long as you’re still living and breathing, you can do something different. So if you need to make a change, don’t hesitate: The time is now.”


Wednesday, April 29, 2026

 What can Confluent tell us about video sensing applications?

Confluent’s Streaming Data platform is a cloud-native, fully managed event streaming system built on Apache Kafka but rearchitected from the ground up for elastic scalability, real-time processing, and enterprise-grade governance. At its heart, the platform turns raw data in motion into reliable, governed data products that power real-time applications, analytics, and AI.

The Foundation: KORA, Confluent’s Cloud-Native Kafka Engine

Everything starts with KORA, Confluent’s custom-engineered version of Apache Kafka. Unlike traditional Kafka deployments, KORA is designed for a multi-tenant, serverless cloud architecture. It delivers millions of messages per second with sub-10ms latency and guarantees 99.99% uptime through multi-availability-zone clustering. Topics are partitioned across brokers for horizontal scale and fault tolerance, and producers and consumers are fully decoupled—meaning you can add or evolve services without breaking dependencies.
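
The key-to-partition idea behind those partitioned topics can be sketched in plain Java. This is an illustrative simplification, not Kafka's actual partitioner (which by default uses a murmur2 hash of the serialized key):

```java
import java.nio.charset.StandardCharsets;

/** Simplified sketch of key-based partitioning across a topic's partitions. */
class PartitionSketch {

    /** Map a record key onto one of numPartitions partitions.
     *  Kafka's default partitioner uses murmur2; a simple polynomial
     *  hash is used here purely for illustration. */
    static int partitionFor(String key, int numPartitions) {
        int hash = 0;
        for (byte b : key.getBytes(StandardCharsets.UTF_8)) {
            hash = 31 * hash + b;
        }
        // Same key always maps to the same partition, which is what
        // preserves per-key ordering in a partitioned log.
        return Math.abs(hash % numPartitions);
    }
}
```

Because the mapping is deterministic, all events for one key (say, one drone's telemetry stream) land on the same partition and stay ordered, while different keys spread across brokers for horizontal scale.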

Storage That Scales: Tiered Storage Architecture

One of KORA’s most powerful innovations is its three-tier storage system, which replaces Kafka’s traditional single-layer local-disk storage:

• Hot tier (memory/SSD): Stores recent data for ultra-low-latency access.

• Warm tier (local SSD cache): Handles intermediate retention.

• Cold tier (cloud object storage like S3, GCS, or Azure Blob): Provides infinite, cost-effective retention.

After data segments are flushed, they’re automatically moved to colder, cheaper storage while metadata is tracked internally. This separation of compute and storage lets you scale each independently and retain data for months or years at a fraction of the cost—something vanilla Kafka can’t do efficiently.
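
The age-based routing idea can be sketched as follows; the thresholds are made-up placeholders, not Confluent's actual tiering policy:

```java
import java.time.Duration;

/** Sketch of age-based tier selection for log segments.
 *  The retention boundaries below are hypothetical, for illustration only. */
class TierRouter {

    enum Tier { HOT, WARM, COLD }

    static final Duration HOT_LIMIT = Duration.ofHours(1);  // memory/SSD
    static final Duration WARM_LIMIT = Duration.ofDays(7);  // local SSD cache

    /** Pick a storage tier for a log segment based on its age. */
    static Tier tierFor(Duration segmentAge) {
        if (segmentAge.compareTo(HOT_LIMIT) <= 0) return Tier.HOT;
        if (segmentAge.compareTo(WARM_LIMIT) <= 0) return Tier.WARM;
        return Tier.COLD; // object storage: cheap, effectively unbounded retention
    }
}
```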

Governance and Quality: Schema Registry and Stream Governance

To keep streaming data trustworthy, Confluent includes a centralized Schema Registry that manages Avro, Protobuf, and JSON Schema with strict compatibility rules (backward, forward, full, or none). This ensures producers and consumers stay in sync even as schemas evolve.
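
The notion of backward compatibility can be illustrated with a toy check: a new schema stays backward compatible if every field it adds has a default, so readers on the new schema can still decode records written under the old one. This sketch only compares field names and defaults; a real registry compares complete Avro, Protobuf, or JSON schemas:

```java
import java.util.Map;
import java.util.Set;

/** Toy backward-compatibility check, for illustration only. */
class CompatSketch {

    /** Returns true if consumers using the new schema can still read data
     *  written with the old schema: every newly added field has a default.
     *  newFieldsHasDefault maps each field of the new schema to whether
     *  it carries a default value. */
    static boolean isBackwardCompatible(Set<String> oldFields,
                                        Map<String, Boolean> newFieldsHasDefault) {
        return newFieldsHasDefault.entrySet().stream()
                .allMatch(e -> oldFields.contains(e.getKey()) || e.getValue());
    }
}
```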

Built on top is the Stream Governance suite, which delivers three critical capabilities:

1. Stream Quality: Enforces data contracts with schema validation and business rule checks.

2. Stream Catalog: Provides data discovery with tagging and rich business metadata.

3. Stream Lineage: Maps end-to-end event flows, showing exactly where data comes from and where it goes.

Together, these tools turn chaotic data streams into governed, high-quality data products.

Connectors: Plug-and-Play Data Integration

Confluent ships with 120+ pre-built Kafka connectors for databases, data warehouses, cloud services, and more. These source and sink connectors abstract away the complexity of data integration. You can also apply transformations on the fly using Single Message Transformations (SMTs), making it easy to clean, enrich, or reformat data as it moves through the platform.
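
Conceptually, an SMT is just a per-record function applied in flight. The field names below (`_internalId`, `region`) are invented for illustration; real Kafka Connect SMTs implement a `Transformation` interface rather than a bare function:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of a Single Message Transformation: clean and enrich one record
 *  at a time as it moves through the pipeline. */
class SmtSketch {

    /** Drop an internal field and add a routing field (values illustrative). */
    static Map<String, String> transform(Map<String, String> record) {
        Map<String, String> out = new HashMap<>(record);
        out.remove("_internalId");    // strip a field downstream consumers should not see
        out.put("region", "eu-west"); // enrich with a static field, e.g. for routing
        return out;
    }
}
```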

Stream Processing: Real-Time Computation at Scale

For real-time computation, the platform supports multiple processing engines:

• Apache Flink®: A powerful engine for stateful stream processing with automatic schema evolution handling.

• Kafka Streams: A lightweight client library for building stream processing applications with a processor topology of source, processor, and sink nodes. It uses a depth-first processing strategy and partition-based state stores, avoiding backpressure issues.

• ksqlDB: A streaming SQL engine that lets you query and transform data using familiar SQL syntax.

• Tableflow: Creates materialized views for real-time analytics.

The platform even supports LLM and ML model inference directly inside stream processing, enabling streaming agents that can invoke external tools—bringing AI capabilities into real-time data pipelines.
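
The essence of the stateful processing these engines provide — the per-key aggregate behind a Kafka Streams `count()` or a Flink keyed aggregation — can be sketched in plain Java:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of stateful stream processing: a per-key running count backed by
 *  a partition-local state store, the simplest streaming aggregation. */
class CountSketch {

    private final Map<String, Long> store = new HashMap<>(); // stands in for a state store

    /** Process one event; returns the updated count for its key. */
    long onEvent(String key) {
        return store.merge(key, 1L, Long::sum);
    }
}
```

In a real engine the store is partitioned alongside the topic, checkpointed, and recoverable; this sketch only shows the event-at-a-time update pattern.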

Multi-Cloud, Hybrid, and Geo-Replication

Confluent is built for modern cloud realities:

• Cluster Linking enables geo-replication across clusters and clouds.

• Multi-cloud support includes native integration with S3, GCS, and Azure Blob, plus a Bring-Your-Own-Cloud (BYOC) option.

• Networking security includes private linking, VPC peering, and end-to-end in-transit encryption.

• The platform supports the Kappa architecture, unifying operational and analytical workloads in a single pipeline.

This flexibility lets you run Confluent consistently across AWS, GCP, Azure, or on-premises environments.

How Data Flows Through the Platform

Imagine this journey:

1. Producers send events into Kafka topics, which are partitioned distributed logs.

2. Schema Registry validates each event against its schema.

3. Data lands in tiered storage, automatically moving from hot to cold as it ages.

4. Connectors pull data in from or push data out to external systems.

5. Flink, Kafka Streams, or ksqlDB process the data in real time.

6. Processed data flows to consumer applications, data warehouses, analytics dashboards, or AI models.

Because producers and consumers are decoupled, you can add, remove, or scale any part of this pipeline without disrupting the rest.
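
That decoupling can be illustrated with a toy append-only log in which producers only append and each consumer tracks its own read offset (a minimal sketch, not Kafka's storage format):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of producer/consumer decoupling: the log retains events, and each
 *  consumer advances its own offset independently, so either side can be
 *  added, removed, or scaled without the other noticing. */
class LogSketch {

    private final List<String> log = new ArrayList<>(); // one append-only partition

    void produce(String event) {
        log.add(event); // producers never wait for consumers
    }

    /** Read the event at the given consumer offset, or null past the end. */
    String consume(int offset) {
        return offset < log.size() ? log.get(offset) : null;
    }
}
```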

Why Confluent Stands Out

Compared to vanilla Kafka, Confluent delivers:

• Tiered storage for infinite retention at low cost.

• Auto-scaling that’s 30× faster than manual Kafka rebalancing.

• Built-in governance with Schema Registry and Stream Governance.

• Fully managed operations with a 99.99% uptime SLA.

• Multiple processing engines: Flink + ksqlDB + Kafka Streams, not just one.

In short, Confluent’s Streaming Data platform transforms the challenge of managing real-time data into a seamless, governed, and scalable experience—enabling event-driven architectures, real-time analytics, and AI applications powered by high-quality, trusted data in motion.

What this architecture suggests for AI agents, specifically for drone video sensing applications

AI agents are usually arranged in one of the following patterns:

• Automatic query decomposition: one coordinating agent fans a query out to other agents, each sub-query invoked in parallel and each agent incurring its own token costs.

• Lambda or function-app agents: scale with the workload, executing predefined routines on a task-by-task basis.

• Reasoning agents: break a request into step-by-step tasks for execution, then reconstitute the query response.

• Model Context Protocol (MCP)-enabled agents: reach one another independently for fulfillment.

• Grounding agents: connected to online or specific data sources or services.

What the Confluent architecture suggests is performing this work on an event-by-event basis with a perpetual agent, as follows:

package com.sms.event;

/**
 * This represents an observable notification source.
 * @param <T> The type of event that is to be observed.
 */
public interface Notifier<T> {

    /**
     * Attach a listener for notification type T.
     * @param listener This is the listener.
     */
    void subscribe(final Listener<T> listener);

    /**
     * Detach a listener.
     */
    void unsubscribe();

    /**
     * Finished notifying.
     */
    void onCompleted();

    /**
     * Regular event processing.
     * @param notification The notification being delivered.
     */
    void onNext(T notification);

    /**
     * Failed event processing.
     * @param exception The cause of the failure.
     */
    void onError(Throwable exception);
}

package com.sms.event;

/**
 * Listener interface for receiving notifications.
 * Note: with several abstract methods this interface cannot be annotated
 * {@code @FunctionalInterface}, so that annotation is omitted.
 * @param <T> Notification type.
 */
public interface Listener<T> {

    /**
     * Attach to a notifier for notification type T.
     * @param notifier This is the notifier.
     */
    void subscribe(final Notifier<T> notifier);

    /**
     * Detach from the notifier.
     */
    void unsubscribe();

    /**
     * Finished notifying.
     */
    void onCompleted();

    /**
     * Regular event processing.
     * @param notification The notification being delivered.
     */
    void onNext(T notification);

    /**
     * Failed event processing.
     * @param exception The cause of the failure.
     */
    void onError(Throwable exception);
}

package com.sms.event;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.annotation.concurrent.GuardedBy;

import lombok.Synchronized;
import lombok.extern.slf4j.Slf4j;

/**
 * Equivalent of a message broker.
 * @param <T> Type of notification.
 */
@Slf4j
public class NotificationSystem<T extends Notification> {

    @GuardedBy("$lock")
    private final Map<String, Notifier<T>> notifierMap = new HashMap<>();

    @GuardedBy("$lock")
    private final Map<String, Listener<T>> listenerMap = new HashMap<>();

    private final ExecutorService executorService = Executors.newFixedThreadPool(1);

    @Synchronized
    public void addListener(final String type, final Listener<T> listener) {
        if (!isListenerPresent(listener)) {
            listenerMap.put(type, listener);
        }
    }

    /**
     * Notifies the listener registered for the runtime type of the
     * notification, dispatching on the executor so the caller never blocks.
     *
     * @param notification Notification.
     */
    @Synchronized
    public void notify(final T notification) {
        final String type = notification.getClass().getSimpleName();
        final Listener<T> listener = listenerMap.get(type);
        if (listener == null) {
            log.warn("No listener registered for notification type: {}", type);
            return;
        }
        log.info("Executing listener of type: {} for notification: {}", type, notification);
        executorService.submit(() -> {
            try {
                listener.onNext(notification);
            } catch (Throwable ex) {
                listener.onError(ex);
            }
        });
    }

    @Synchronized
    public void removeListener(final String type, final Listener<T> listener) {
        // Remove only if the mapped listener is the one being detached.
        listenerMap.remove(type, listener);
    }

    @Synchronized
    private boolean isListenerPresent(final Listener<T> listener) {
        return listenerMap.values().stream().anyMatch(l -> l.equals(listener));
    }

    @Synchronized
    public void addNotifier(final String type, final Notifier<T> notifier) {
        if (!isNotifierPresent(notifier)) {
            notifierMap.put(type, notifier);
        }
    }

    @Synchronized
    public void removeNotifier(final String type, final Notifier<T> notifier) {
        notifierMap.remove(type, notifier);
    }

    @Synchronized
    public boolean isNotifierPresent(final Notifier<T> notifier) {
        return notifierMap.values().stream().anyMatch(n -> n.equals(notifier));
    }

    @Synchronized
    public boolean isSubscriberPresent(final Listener<T> listener) {
        return isListenerPresent(listener);
    }
}


Tuesday, April 28, 2026

 This is a summary of a book titled “Trust Agents: Using the Web to Build Influence, Improve Reputation, and Earn Trust” written by Chris Brogan and Julien Smith and published by Wiley in 2009. People have grown less trusting over time, and public skepticism toward institutions runs high. In this environment, traditional marketing and polished corporate messaging don’t build confidence; they often deepen suspicion. The authors argue that the web—because it is connective, searchable, and radically transparent—offers companies a different path. Instead of trying to control the message or hide imperfections, organizations can earn credibility by showing up as real participants in online communities. The people who make this work are what Brogan and Smith call “trust agents”: individuals who represent a business without acting like salespeople, who trade pressure for presence, and who build influence by being useful and genuine. For implementers of the OAuth protocol, this is akin to bringing an audience over from third-party websites.

A trust agent’s influence comes from understanding a core shift: online, people don’t want to be “managed” by brands; they want to be cared for by humans. The authors stress that effective participants are not infiltrators who join groups to extract value, and they are not loud promoters trying to “convert” every interaction into a transaction. They are power users of modern web tools—blogs, feeds, social networks, audio and video platforms—but the tools matter less than the approach. The web is described as a gigantic lever: once you publish something helpful publicly, it can continue to reach new people long after you press “post,” and one thoughtful answer can save you from repeating the same response in countless private emails. Over time, that visible generosity becomes reputation, and reputation becomes trust.

To act with credibility, the book says, you first have to listen. Brogan and Smith recommend building a “listening station” so you can understand what online communities already believe about your company and your competitors—what they praise, what they distrust, and what questions keep resurfacing. Their 2009 instructions are anchored in the tools of that moment (Google services, feed readers, and blog search engines like Technorati), but the underlying practice is timeless: set up a system that continuously surfaces mentions of your organization, your products, and the themes your customers care about. The goal is not surveillance for its own sake; it is awareness. Only by paying attention can you participate in ways that feel responsive rather than performative.

Once you can “see the map” of what people are saying, you can begin to contribute. The authors emphasize that the content you create online—whether posts, videos, podcasts, or simple comments—has durability. Because it remains discoverable, it can keep answering questions and demonstrating your expertise long after the moment has passed. This is where social capital forms: when you repeatedly help people solve problems, clarify confusing topics, or point them toward useful resources, the community starts to recognize you as someone worth listening to. That recognition is not merely popularity; it is a kind of stored goodwill you can draw on later when you need to introduce an idea, request feedback, or rally people around a project.

From there, Brogan and Smith organize the trust agent’s mindset into six interlocking principles. Each principle is less a rigid rule than a way of behaving that makes trust more likely to form in public spaces where anyone can evaluate you. Together they encourage experimentation, belonging, leverage, relationship-building, empathy, and collective action—skills that turn the web from a broadcasting channel into a place where influence is earned.

The first principle, “Make Your Own Game,” argues that the internet rewards those willing to challenge industry habits. Online you can set new terms, reach audiences directly, and bypass gatekeepers who once controlled distribution. The book highlights musicians who rewrote the rules: the Arctic Monkeys built momentum through MySpace, and Radiohead experimented with a pay-what-you-want release that still generated massive sales. These examples illustrate the broader point: trust agents don’t wait for permission. They watch what the community values, take smart risks, and create approaches that feel fresh rather than formulaic.

To support that spirit of experimentation, the authors borrow a framework from Douglas Rushkoff: treating culture—and the web—as a kind of game you can learn, hack, and even redesign. At first you “play,” learning the norms and feedback signals of your space: links, comments, followers, revenue, and the general sentiment people express in public. Then you begin to “cheat,” not by being dishonest but by thinking laterally—finding unusual, effective ways to use familiar tools or sell familiar offerings. Finally, you may move into “programming,” building something new entirely and discovering its rules through trial, error, and persistence. In the trust agent’s world, that willingness to learn and iterate becomes a visible marker of competence and confidence.

The second principle, “One of Us,” focuses on belonging. Trust online is rarely granted to outsiders who sound like advertisements, and it is quickly withdrawn from anyone who appears self-serving. The book points to an early and influential example: Microsoft employee Robert Scoble, who blogged candidly about his company—even criticizing products. That openness helped him gain standing in technical communities, not because he was perfect, but because he was plainly real. Brogan and Smith connect this to the “trust equation” described in The Trusted Advisor: credibility, reliability, and intimacy raise trust, while self-orientation lowers it. Online, these factors still apply, but they are shaped by what other people publicly say about you, by the consistency of your visible actions over time, and by the surprising power of “verbal intimacy” in a world with fewer nonverbal cues.

The third principle, the “Archimedes Effect,” explains how the web turns small efforts into outsized outcomes. Like a lever, online platforms amplify reputation, relationships, and time: a single introduction can connect networks, and a single well-placed resource can help thousands. Yet the authors warn that leverage collapses the moment you treat your audience as targets. Trust agents serve as helpful gatekeepers for their communities, curating information, connecting people, and staying focused on long-term value rather than short-term selling.

Using that leverage well requires what the authors call “multicapitalism”: the ability to recognize different forms of value—money, attention, credibility, access, goodwill—and to exchange them intelligently. They offer Donald Trump as an example of turning one kind of capital into another: wealth into visibility, visibility into new ventures. For a trust agent, the more ethical version of this is building a presence online, meeting people in person when possible, and then sustaining the relationship with ongoing online touches. Over time, those repeated, generous interactions become the compounding force behind influence.

The fourth principle, “Agent Zero,” describes a particular kind of network position. Trust agents often sit at the hub of conversations, not because they demand attention, but because they continuously connect people and ideas. They comment, respond, congratulate, and share—quickly and sincerely. They use their network to solve problems, introduce collaborators, and spotlight other people’s work. Ironically, by staying out of the spotlight and acting with a service mindset, they become highly visible in the way that matters: as dependable human links within a community.

The fifth principle, “Human Artist,” is about interpersonal skill—especially empathy, observation, and respect for social norms. Brogan and Smith argue that trust agents succeed because they are good at reading the room, even when “the room” is a comment thread, a forum, or a fast-moving social feed. They take time to learn which communities matter to them, what those people value, and what behavior is considered acceptable. They listen before they speak, match the tone of the space, and follow a web-friendly version of the Golden Rule: treat online contacts the way you would want to be treated. Most importantly, they resist the temptation to market to new online friends. In community settings, aggressive selling is often treated as a violation, and it can damage reputation faster than any single mistake.

The sixth principle, “Build an Army,” highlights the web’s ability to coordinate people at scale. With platforms such as wikis, review sites, and social networks, trust agents can gather large groups around a shared purpose, helping them collaborate in ways that were once impractical. Wikipedia is an obvious example of crowdsourcing’s potential, but the authors also point to corporate efforts that succeed when they prioritize participation over persuasion. General Motors’ GMNext.com, for instance, gave customers wiki-style tools and space to share stories about vehicles they loved. The initiative worked precisely because GM didn’t treat the community like a pipeline for hard sales; it treated it as a place where customers could express identity and enthusiasm in their own words—marketing that feels credible because it isn’t forced.

In the final pages, the advice becomes practical and immediate: show up where your communities already gather, and communicate more than you think you need to. Join relevant networks, build a base of contacts, and don’t be overly cautious about connecting with people you haven’t met yet—online, relationships often begin as lightweight interactions that deepen over time. Use tools like Twitter (and today’s equivalents) to learn what people care about in real time. Comment thoughtfully on blogs and forums, answer questions, and “check in” regularly so your presence is steady rather than sporadic. The authors’ challenge is simple: aim to become the best communicator the web has ever seen, not by talking the most, but by listening well, contributing generously, and earning trust one visible interaction at a time.


Monday, April 27, 2026

 Azure Web App Logging

An Azure Web App can log in two broad ways: locally on the app host for quick troubleshooting, or externally through Azure Monitor diagnostic settings for longer-lived and downstream analytics use. The best choice depends on what you prioritize: speed and simplicity, or durability, integration, and centralized operations.

Logging options

Local logging writes logs to the App Service file system, where you can download them or access them over FTPS. This is the lightest-weight option for development and short investigations, and Azure App Service supports FTPS-only mode so you can avoid plain FTP; if you are using file-system logging, a common optimization is to keep retention at 0 days and size quota around 35 MB so you do not accumulate unnecessary storage or incur avoidable cost on the app resource.

Diagnostic settings send logs to a Storage account, Event Hub, or Log Analytics. This is the better fit when you need centralized retention, querying, or forwarding to operational tools such as Splunk through Event Hub or another ingestion pipeline, but it can generate meaningful storage and ingestion volume depending on how verbose the selected log categories are.

Practical trade-offs

Local file-system logging is usually faster to access and easier for developers because the logs sit close to the app and can be pulled immediately. The downside is that it is not designed for long-term retention or enterprise-scale observability, and the footprint should be kept intentionally small so it does not compete with the app for space or create unnecessary overhead.

Diagnostic settings are better for compliance, analytics, and cross-team access because they move data out of the app into durable Azure services. The trade-off is cost and volume: app logs, HTTP logs, and platform logs can grow quickly, and sending all categories to Storage or Event Hub increases both ingestion and downstream processing costs, especially if a SIEM such as Splunk also charges for indexed volume.

Blob storage option

Sending logs to Azure Blob Storage is often the middle ground between local-only logs and a full streaming pipeline. Compared with keeping logs on the app host, blob storage gives you better retention, easier central access, and stronger separation of duties; compared with Event Hub, it is simpler and usually cheaper for archive-style retention, but less suitable for real-time operational forwarding.

From a security perspective, blob storage is preferable when you want to restrict access with managed identities, RBAC, and private networking rather than exposing the app host file system or broadly granting FTPS access. In general, the more external the log destination, the better your control plane story becomes, but the more important it is to secure identities, network paths, and storage permissions.

Cost impact

When logging is turned on for all log types, the monthly cost increases in two places: the App Service side and the destination side. On the app side, local logging can consume file-system quota and operational overhead, while external logging can add Azure Monitor, Storage, Event Hub, and downstream SIEM costs; in practice, the biggest cost driver is usually log volume rather than the mere act of enabling logging.

A full “everything on” configuration can become expensive if verbose application logs, HTTP logs, and platform diagnostics are all emitted continuously. The right way to manage cost is to limit categories to what is actually needed, reduce verbosity in production, and set retention policies that match the business need instead of defaulting to indefinite collection.
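
As a back-of-envelope illustration of how volume dominates cost, consider a simple estimate; the per-GB rates below are hypothetical placeholders, not Azure pricing:

```java
/** Toy monthly log-cost estimate: daily volume times per-GB rates.
 *  Rates are illustrative placeholders; consult current Azure pricing. */
class LogCostSketch {

    /** Monthly cost = (GB/day * 30) * (ingestion rate + retention rate). */
    static double monthlyCost(double gbPerDay, double ingestPerGb, double retainPerGbMonth) {
        double gbPerMonth = gbPerDay * 30;
        return gbPerMonth * ingestPerGb + gbPerMonth * retainPerGbMonth;
    }
}
```

The point of the arithmetic: halving the categories you emit roughly halves both terms, whereas toggling logging on or off at the platform level changes almost nothing if volume stays the same.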

Premium tier considerations

If the app service plan is upgraded to the lowest Premium tier, turning on logging through diagnostic settings is generally a better production pattern than relying on only local file logging. Premium gives more headroom for performance-sensitive workloads, but logging still adds CPU, I/O, and network overhead, especially if the destination is remote and every write must be exported out of the app path.

The main security concern is not the Premium tier itself, but the expanded data flow: logs may contain request paths, headers, identifiers, or exception details, so access to the destination must be tightly limited. The main performance concern is bursty log generation, which can increase latency if the app spends too much time serializing and exporting log data rather than serving requests.

Dev and ops access

A good pattern is to optimize for both developer and operational needs by splitting access modes. Developers can use local logs or near-real-time access for low-latency troubleshooting and faster iteration, while operations teams consume the same data centrally with read-only access, least privilege, and controlled retention in Storage, Event Hub, or a SIEM pipeline.

This reduces friction because developers get interactive access without waiting on a downstream pipeline, while operations gets governed, durable visibility with auditability and restricted permissions. In practice, that usually means keeping local logs small and temporary, and pushing only the logs needed for production observability into centralized destinations.

Recommendations

Azure’s general direction for App Service logging is to use local logs for short-lived troubleshooting, diagnostic settings for durable monitoring, and secure transport and access controls for anything beyond the app host. FTP access should be set to FTPS-only or disabled when not needed, detailed error pages should not be exposed to clients in production, and logging categories should be scoped narrowly to reduce cost and noise.

A popular policy posture is:

• Keep local file-system logs small, temporary, and developer-focused.

• Use diagnostic settings for production retention and centralized monitoring.

• Route only necessary categories to Storage or Event Hub.

• Restrict destination access with least privilege and private connectivity where possible.

• Treat log content as sensitive operational data and control retention accordingly.

Sunday, April 26, 2026

 Continued from previous article 


Some replicas are asynchronous by nature and are called observers. They do not participate in the in-sync replica set or become the partition leader, but they restore availability to the partition and allow producers to produce data again. Connected clusters might involve clusters in distinct and different geographic regions and usually involve linking between the clusters. Linking is an extension of the replica fetching protocol that is inherent to a single cluster. A link contains all the connection information necessary for the destination cluster to connect to the source cluster. A topic on the destination cluster that fetches data over the cluster link is called a mirror topic. A mirror topic may keep the same name as the source topic or take a prefixed name, sync its configuration, maintain a byte-for-byte copy of the data, and replicate consumer offsets as well as access control lists.

Managed broker services deliver more value to the business than standalone broker deployments because cluster sizing, over-provisioning, failover design, and infrastructure management are automated. They are known to amplify availability, often backed by a 99.99% uptime service-level agreement. Often, they involve a replicator, a worker that executes a connector and its tasks to coordinate data streaming between source and destination broker clusters. A replicator has a source consumer that consumes the records from the source cluster and then passes these records to the Connect framework. The Connect framework has a built-in producer that then produces these records to the destination cluster. It might also have dedicated clients to propagate overall metadata updates to the destination cluster.
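The consume-then-produce flow described above can be sketched with in-memory stand-ins. This is a conceptual illustration only; the `Partition` class and `replicate` function are hypothetical, not the actual Replicator or Connect API.

```python
class Partition:
    """Minimal in-memory stand-in for a topic partition's append-only log."""
    def __init__(self):
        self.records = []

    def produce(self, record):
        self.records.append(record)

    def fetch(self, offset):
        # return everything at or after the given offset
        return self.records[offset:]

def replicate(source, destination, dest_offset):
    """One replicator pass: the source consumer reads past what the
    destination already holds, and the built-in producer writes the same
    records, byte for byte, to the destination cluster."""
    for record in source.fetch(dest_offset):
        destination.produce(record)
    return len(destination.records)  # new destination end offset

src, dst = Partition(), Partition()
for r in (b"a", b"b", b"c"):
    src.produce(r)
end = replicate(src, dst, dest_offset=0)  # dst now mirrors src, byte for byte
```

Because each pass starts from the destination's current end offset, repeating the pass copies nothing new, which is the property that makes the replication offset-preserving and restartable.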

In a geographically distributed replication for business continuity and disaster recovery, the primary region has the active cluster that the producers and consumers write to and read from, and the secondary region has read-only clusters with replicated topics for read-only consumers. It is also possible to configure two clusters to replicate to each other so that both of them have their own sets of producers and consumers, but even in these cases, the replicated topic on either side will only have read-only consumers. Fan-in and fan-out are other possible arrangements for such replication.

Disaster recovery almost always occurs with a failover of the primary active cluster to a secondary cluster. The Recovery Point Objective is the maximum amount of data, usually measured in terms of time, that can be lost after a recovery; this replication minimizes it. The Recovery Time Objective is the targeted duration until the service level is restored to the expectations of the business process. The recovery helps the system to be brought back to operational mode. Cost, business requirements, use cases, and regulatory and compliance requirements mandate this replication, and the considerations made for the data in motion for replication often stand out as best practice for the overall solution.
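A small worked example of the two objectives, using a hypothetical incident timeline (all timestamps and targets below are illustrative assumptions):

```python
def rpo_seconds(last_replicated_ts, failure_ts):
    """RPO actually achieved: the window of data lost between the last
    replicated write and the moment of failure."""
    return failure_ts - last_replicated_ts

def rto_seconds(service_restored_ts, failure_ts):
    """RTO actually achieved: time from failure until service is restored."""
    return service_restored_ts - failure_ts

# Hypothetical incident timeline (epoch seconds)
failure = 10_000
achieved_rpo = rpo_seconds(last_replicated_ts=9_940, failure_ts=failure)    # 60s of data lost
achieved_rto = rto_seconds(service_restored_ts=10_900, failure_ts=failure)  # 900s to restore
meets_objectives = achieved_rpo <= 120 and achieved_rto <= 1800  # 2-min RPO, 30-min RTO targets
```

Tighter replication (smaller lag behind the primary) shrinks the achieved RPO, while automated failover shrinks the achieved RTO; both are compared against targets set by the business.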

One of the toughest challenges in data engineering has been the diversity of stacks, platforms, products, and logic, to the detriment of smooth operations, business continuity, and disaster recovery. The problem stems from the dichotomy between assets and debt. When developers spend time writing to, say, SQL Edge, they then find a greater debt in moving to an open-source stack, because the data operations proliferate and there is very little curating. That is why planning for all the Ops considerations is just as necessary at design time as the feature itself.


#codingexercise: CodingExercise-04-26-2026.docx

Saturday, April 25, 2026

 (Continued from previous article)

When these IoT resources are shared, the isolation model, the impact of scaling on performance, state management, and the security of the IoT resources become complex. Scaling resources helps meet the changing demand from the growing number of consumers and the increase in the amount of traffic. We might need to increase the capacity of the resources to maintain an acceptable performance rate. Scaling depends on the number of producers and consumers, payload size, partition count, egress request rate, and usage of IoT hub capture, schema registry, and other advanced features. When additional IoT capacity is provisioned or a rate limit is adjusted, the multitenant solution can perform retries to overcome the transient failures from requests. When the number of active users reduces or there is a decrease in the traffic, the IoT resources could be released to reduce costs. Data isolation depends on the scope of isolation. When the storage for IoT data is a relational database server, the IoT solution can make use of IoT Hub. Varying levels and scopes of sharing of IoT resources demand simplicity from the architecture. Patterns such as the deployment stamp pattern, the IoT resource consolidation pattern, and the dedicated IoT resources pattern help to optimize the operational cost and management with little or no impact on the usages.

Edge computing relies heavily on asynchronous backend processing. Some form of message broker becomes necessary to maintain order between events, retries and dead-letter queues. The storage for the data must follow the data partitioning guidance where the partitions can be managed and accessed separately. Horizontal, vertical, and functional partitioning strategies must be suitably applied. In the analytics space, a typical scenario is to build solutions that integrate data from many IoT devices into a comprehensive data analysis architecture to improve and automate decision making.

Event Hubs, blob storage, and IoT hubs can collect data on the ingestion side, while the results are distributed after analysis via alerts and notifications, dynamic dashboarding, data warehousing, and storage/archival. The fan-out of data to different services is itself a value addition, but the ability to transform events into processed events also generates more possibilities for downstream usages, including reporting and visualizations.

One of the main considerations for data pipelines involving ingestion capabilities for IoT scale data is the business continuity and disaster recovery scenario. This is achieved with replication.  A broker stores messages in a topic which is a logical group of one or more partitions. The broker guarantees message ordering within a partition and provides a persistent log-based storage layer where the append-only logs inherently guarantee message ordering. By deploying brokers over more than one cluster, geo-replication is introduced to address disaster recovery strategies.

Each partition is associated with an append-only log, so messages appended to the log are ordered by arrival time and carry important offsets: the first available offset in the log; the high watermark, which is the offset of the last message that was successfully written and committed to the log by the brokers; and the end offset, where the last message was written to the log, which can meet or exceed the high watermark. When a broker goes down, durability and availability must be addressed with replicas. Each partition has many replicas that are evenly distributed, but one replica is elected as the leader and the rest are followers. The leader is where all the produce and consume requests go, and followers replicate the writes from the leader.
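The offset bookkeeping described above can be modeled with a toy class. The names are illustrative and the semantics simplified; for instance, real brokers track the log end offset as the position of the next write.

```python
class PartitionLog:
    """Toy append-only log distinguishing the end offset from the
    high watermark (illustrative, simplified semantics)."""
    def __init__(self, first_offset=0):
        self.first_offset = first_offset    # first available offset in the log
        self.high_watermark = first_offset  # everything below this is committed
        self.entries = []

    @property
    def end_offset(self):
        # position of the next write, one past the last appended message
        return self.first_offset + len(self.entries)

    def append(self, msg):
        self.entries.append(msg)            # written, but not yet committed
        return self.end_offset

    def commit_up_to(self, offset):
        # the leader advances the high watermark once the ISR has replicated
        self.high_watermark = min(offset, self.end_offset)

log = PartitionLog()
log.append("m1")
log.append("m2")
log.commit_up_to(1)  # only m1 is committed; the end offset exceeds the watermark
```

Consumers only ever read below the high watermark, so the gap between the two offsets is exactly the window of writes still awaiting replication.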

A pull-based replication model is the norm for brokers, where dedicated fetcher threads periodically pull data between broker pairs. The replicas are byte-for-byte copies of one another, which makes this replication offset preserving. The number of replicas is determined by the replication factor. The leader maintains a list called the in-sync replica (ISR) set, and messages are committed by the leader only after all replicas in the ISR set replicate the message. Global availability demands that brokers are deployed with different deployment modes. Two popular deployment modes are 1) a single cluster that stretches over multiple data centers and 2) a federation of connected clusters.
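The ISR commit rule reduces to a minimum over replicated offsets; a minimal sketch, assuming each follower reports the offset up to which it has fully replicated:

```python
def high_watermark(leader_end_offset, isr_follower_offsets):
    """A message is committed only after every replica in the ISR has
    replicated it, so the committed frontier is the minimum replicated
    offset across the leader and the ISR followers."""
    return min([leader_end_offset] + list(isr_follower_offsets))

# The leader has written up to offset 100; ISR followers report 98 and 100,
# so consumers may read only up to offset 98.
hw = high_watermark(100, [98, 100])
```

This is also why a slow ISR member holds back the commit point for the whole partition: the minimum is only as high as the laggiest in-sync replica.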


Thursday, April 23, 2026

 Data in motion – IoT solution and data replication

The transition of data from edge sensors to the cloud is a data engineering pattern that does not always get a proper resolution with the boilerplate event-driven architectural design proposed by the public clouds, because much of the fine-tuning is left to the choice of the resources, event hubs, and infrastructure involved in the streaming of events. This article explores the design and data-in-motion considerations for an IoT solution, beginning with an introduction to the public cloud's proposed design, the choices between products, and the considerations for handling and tuning distributed, real-time data streaming systems, with particular emphasis on data replication for business continuity and disaster recovery. A sample use case is continuous events for geospatial analytics in fleet management, where the data can include driverless-vehicle web logs.

Event Driven architecture consists of event producers and consumers. Event producers are those that generate a stream of events and event consumers are ones that listen for events. The right choice of architectural style plays a big role in the total cost of ownership for a solution involving events.

The scale out can be adjusted to suit the demands of the workload and the events can be responded to in real time. Producers and consumers are isolated from one another. IoT requires events to be ingested at very high volumes. The producer-consumer design has scope for a high degree of parallelism since the consumers are run independently and in parallel, but they are tightly coupled to the events. Network latency for message exchanges between producers and consumers is kept to a minimum. Consumers can be added as necessary without impacting existing ones.

Some of the benefits of this architecture include the following: The publishers and subscribers are decoupled. There are no point-to-point integrations. It's easy to add new consumers to the system. Consumers can respond to events immediately as they arrive. They are highly scalable and distributed. There are subsystems that have independent views of the event stream.

Some of the challenges faced with this architecture include the following: Event loss is tolerated, so guaranteed delivery poses a challenge, and IoT traffic mandates guaranteed delivery. Some scenarios require events to be processed in exactly the order they arrive. Each consumer type typically runs in multiple instances, for resiliency and scalability. This can pose a challenge if the processing logic is not idempotent, or the events must be processed in order.

The benefits and the challenges suggest some of these best practices. Events should be lean and mean and not bloated. Services should share only IDs and/or a timestamp. Large data transfer between services is an antipattern. Loosely coupled event driven systems are best.
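A minimal sketch of a “lean” event envelope following the ID-plus-timestamp guidance above; the field names are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorEvent:
    """Lean event envelope: identifiers and a timestamp only. Consumers
    look up the full payload in storage by ID when they actually need it,
    so no large blob travels between services."""
    event_id: str
    device_id: str
    occurred_at: float  # epoch seconds

evt = SensorEvent(event_id="evt-123", device_id="truck-7",
                  occurred_at=1_745_000_000.0)
```

Keeping the bus payload to references like this avoids the large-data-transfer antipattern and keeps producers and consumers loosely coupled to each other's internal schemas.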

IoT solutions can be proposed either with an event-driven stack involving open-source technologies or via a dedicated and optimized storage product such as a relational engine that is geared towards edge computing. Either way, modern IoT applications expect capabilities to stream, process, and analyze data. IoT systems vary in flavor and size. Not all IoT systems have the same certifications or capabilities.


Wednesday, April 22, 2026

 Derived metrics in observability pipelines for Inflection signatures

If we assume an immovable, straight-down (nadir) camera with no pitch, yaw, roll, or zoom, the geometry of the problem simplifies in a way that is almost ideal for defining observability metrics. The drone’s motion is now the primary source of variation across frames: translation along straight edges, and a change in translation direction at corners. That means we can design metrics that are explicitly sensitive to changes in planar motion and scene displacement while being largely invariant to viewpoint distortions. Those metrics can be computed per frame or per short window, aggregated over time, and then reintroduced into the observability pipeline as custom events that act as “inflection hints” for downstream agents.

The starting point is to treat each frame as a node in a temporal sequence with associated observability features. With a nadir camera, the dominant effect of motion is a shift of the ground texture in the image plane. Along a straight edge, this shift is approximately constant in direction and magnitude (modulo speed variations), while at a corner, the direction of shift changes. We can capture this with a simple but powerful family of metrics based on inter-frame displacement. For each pair of consecutive frames, we compute a dense or block-based optical flow field and summarize it into a mean flow vector and a dispersion measure. The mean flow magnitude reflects how fast the ground is moving under the camera; the mean flow direction reflects the direction of travel. The dispersion (e.g., standard deviation of flow vectors) reflects local inconsistencies due to parallax, moving objects, or noise. Over straight edges, we expect the mean flow direction to be stable and the dispersion to be relatively low and slowly varying. At corners, the mean direction will rotate over a short sequence of frames, and dispersion may spike as the motion field transitions.
This gives us three basic observability metrics per frame or per window: average flow magnitude, average flow direction, and flow dispersion. These can be logged as metrics in the observability pipeline and then aggregated over sliding windows to produce higher-level signals: direction stability (e.g., variance of direction over the last N frames), magnitude stability, and dispersion anomalies. Because the camera is fixed in orientation, we can also exploit frame differencing and spatial alignment more aggressively. For example, we can compute a global translational alignment between consecutive frames using phase correlation or template matching. The resulting translation vector is a robust proxy for the drone’s planar motion. Again, along straight edges, the translation vector’s direction is stable; at corners, it rotates.
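The three per-frame metrics can be computed directly from a block-based flow field. The sketch below uses a hand-built list of (dx, dy) vectors in place of a real optical-flow output:

```python
import math

def flow_summary(flow_vectors):
    """Summarize a block-based flow field into the three per-frame metrics:
    mean flow magnitude, mean flow direction (degrees), and dispersion
    (root-mean-square deviation of the vectors around the mean flow)."""
    n = len(flow_vectors)
    mx = sum(dx for dx, _ in flow_vectors) / n
    my = sum(dy for _, dy in flow_vectors) / n
    magnitude = math.hypot(mx, my)
    direction = math.degrees(math.atan2(my, mx))
    dispersion = math.sqrt(sum((dx - mx) ** 2 + (dy - my) ** 2
                               for dx, dy in flow_vectors) / n)
    return magnitude, direction, dispersion

# Straight-edge flight: block flows agree, so direction is stable and
# dispersion stays small.
straight = [(1.0, 0.0), (1.1, 0.0), (0.9, 0.0)]
mag, ang, disp = flow_summary(straight)
```

In a real pipeline these three values would be emitted per frame and then windowed into the direction-stability and dispersion-anomaly signals described above.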

Tuesday, April 21, 2026

 Smallest stable index:

You are given an integer array nums of length n and an integer k.

For each index i, define its instability score as max(nums[0..i]) - min(nums[i..n - 1]).

In other words:

• max(nums[0..i]) is the largest value among the elements from index 0 to index i.

• min(nums[i..n - 1]) is the smallest value among the elements from index i to index n - 1.

An index i is called stable if its instability score is less than or equal to k.

Return the smallest stable index. If no such index exists, return -1.

Example 1:

Input: nums = [5,0,1,4], k = 3

Output: 3

Explanation:

• At index 0: The maximum in [5] is 5, and the minimum in [5, 0, 1, 4] is 0, so the instability score is 5 - 0 = 5.

• At index 1: The maximum in [5, 0] is 5, and the minimum in [0, 1, 4] is 0, so the instability score is 5 - 0 = 5.

• At index 2: The maximum in [5, 0, 1] is 5, and the minimum in [1, 4] is 1, so the instability score is 5 - 1 = 4.

• At index 3: The maximum in [5, 0, 1, 4] is 5, and the minimum in [4] is 4, so the instability score is 5 - 4 = 1.

• This is the first index with an instability score less than or equal to k = 3. Thus, the answer is 3.

Example 2:

Input: nums = [3,2,1], k = 1

Output: -1

Explanation:

• At index 0, the instability score is 3 - 1 = 2.

• At index 1, the instability score is 3 - 1 = 2.

• At index 2, the instability score is 3 - 1 = 2.

• None of these values is less than or equal to k = 1, so the answer is -1.

Example 3:

Input: nums = [0], k = 0

Output: 0

Explanation:

At index 0, the instability score is 0 - 0 = 0, which is less than or equal to k = 0. Therefore, the answer is 0.

Constraints:

• 1 <= nums.length <= 100

• 0 <= nums[i] <= 10^9

• 0 <= k <= 10^9

class Solution {

    public int firstStableIndex(int[] nums, int k) {

        long[] scores = new long[nums.length];

        for (int i = 0; i < nums.length; i++) {

            int max = Integer.MIN_VALUE;

            int min = Integer.MAX_VALUE;

            for (int j = 0; j <= i; j++) {

                if (nums[j] > max) {

                    max = nums[j];

                }

            }

            for (int j = i; j < nums.length; j++) {

                if (nums[j] < min) {

                    min = nums[j];

                }

            }

            // System.out.println("max="+max+"&min="+min);

            scores[i] = (long) max - min;

        }

        for (int i = 0; i < scores.length; i++) {

            if (scores[i] <= k) {

                return i; // first index whose instability score is at most k

            }

        }

        return -1;

    }

}
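The quadratic scan above can be reduced to a single linear pass with suffix minima and a running prefix maximum; a Python sketch of the same logic:

```python
def first_stable_index(nums, k):
    """O(n): precompute suffix minima, track the prefix maximum on the fly,
    and return the first index whose instability score is <= k."""
    n = len(nums)
    suffix_min = [0] * n
    suffix_min[-1] = nums[-1]
    for i in range(n - 2, -1, -1):
        suffix_min[i] = min(nums[i], suffix_min[i + 1])
    prefix_max = float("-inf")
    for i, v in enumerate(nums):
        prefix_max = max(prefix_max, v)          # max(nums[0..i])
        if prefix_max - suffix_min[i] <= k:      # instability score at index i
            return i
    return -1
```

With n capped at 100 the quadratic version is fine, but the linear variant shows the score decomposes into two independent prefix/suffix quantities.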

Test cases:

Case 1:

Input

nums =

[5,0,1,4]

k =

3

Output

3

Expected

3

Case 2:

Input

nums =

[3,2,1]

k =

1

Output

-1

Expected

-1

Case 3:

Input

nums =

[0]

k =

0

Output

0

Expected

0

#Codingexercise: Codingexercise-04-21-2026.docx

Today's article: Derived Metrics 

Sunday, April 19, 2026

 Longest Balanced Substring After One Swap

You are given a binary string s consisting only of characters '0' and '1'.

A string is balanced if it contains an equal number of '0's and '1's.

You can perform at most one swap between any two characters in s. Then, you select a balanced substring from s.

Return an integer representing the maximum length of the balanced substring you can select.

Example 1:

Input: s = "100001"

Output: 4

Explanation:

• Swap s[2] and s[5]. The string becomes "101000".

• Select the substring "1010", which is balanced because it has two '0's and two '1's.

Example 2:

Input: s = "111"

Output: 0

Explanation:

• Choose not to perform any swaps.

• Select the empty substring, which is balanced because it has zero '0's and zero '1's.

Constraints:

• 1 <= s.length <= 10^5

• s consists only of the characters '0' and '1'

class Solution {

    public int longestBalanced(String s) {

        int max = 0;

        for (int i = 0; i < s.length(); i++) {

            for (int j = i+1; j < s.length(); j++) {

                int count0 = 0;

                int count1 = 0;

                for (int k = i; k <= j; k++) {

                    if (s.charAt(k) == '1') {

                        count1++;

                    } else {

                        count0++;

                    }

                }

                if (count0 == count1 && (j-i+1) > max) {

                    max = j - i + 1;

                }

                // not balanced but off by exactly one: a single swap with a
                // minority-type character outside the window can balance a window one longer
                else if ((j - i + 1) <= (2 * Math.min(count0, count1) + 1)) {

                    for (int m = 0; m < i; m++) {

                        if (s.charAt(m) == '0' && Math.min(count0, count1) == count0 && (j-i+2) > max) { max = (j-i+2);}

                        if (s.charAt(m) == '1' && Math.min(count0, count1) == count1 && (j-i+2) > max) { max = (j-i+2);}

                    }

                    for (int n = j+1; n < s.length(); n++) {

                        if (s.charAt(n) == '0' && Math.min(count0, count1) == count0 && (j-i+2) > max) { max = (j-i+2);}

                        if (s.charAt(n) == '1' && Math.min(count0, count1) == count1 && (j-i+2) > max) { max = (j-i+2);}

                    }

                } else {

                    // skip

                }

            }

        }

        return max;

    }

}

Test cases:

Case 1:

Input

s =

"100001"

Output

4

Expected

4

Case 2:

Input

s =

"111"

Output

0

Expected

0


Saturday, April 18, 2026

 Detecting structural transitions in continuous visual data streams is a foundational challenge in online video analytics, particularly when the underlying physical process exhibits long periods of repetitive behavior punctuated by brief but critical inflection events. This paper introduces a principled framework for inflection point detection in streaming aerial imagery, motivated by the practical requirement of identifying the four corner events in a drone’s rectangular survey flight path using only the video stream itself, without reliance on GPS, IMU, or external telemetry. The problem is challenging because the majority of the flight consists of highly repetitive, low variation frames captured along straight edges of the rectangle, while the corner events—though visually distinct—occur over a short temporal span and must be detected with 100% recall to ensure the integrity of downstream spatial reasoning tasks such as survey tiling, mosaic alignment, and trajectory reconstruction.

We propose an online clustering and evolution analysis framework inspired by the principles of Ocean (ICDE 2024), which models the streaming feature space using a composite window and tracks the lifecycle of evolving clusters representing stable orientation regimes of the drone. Each frame is transformed into a compact orientation–motion embedding, derived from optical-flow-based dominant motion direction, homography-based rotation cues, and low-dimensional CNN features capturing scene layout stability. These embeddings form a continuous stream over which we maintain a set of micro-clusters that summarize local density, cohesion, and temporal persistence. The straight-line segments of the flight correspond to long-lived, high-cohesion clusters with stable centroids and minimal drift, while the corners manifest as abrupt transitions in cluster membership, density, and orientation statistics. We formalize these transitions as cluster lifetime inflection points, defined by a conjunction of (i) a sharp change in the dominant orientation component, (ii) a rapid decay in the density of the current cluster, and (iii) the emergence of a new cluster with increasing density and decreasing intra-cluster variance.
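As a simplified stand-in for the cluster-lifetime test, an inflection can be flagged when the mean orientation over a leading window departs from the trailing window by more than a threshold; the window length and 30° threshold below are illustrative assumptions, not the paper's actual detector.

```python
def corner_frames(directions_deg, min_turn_deg=30.0, window=3):
    """Flag frames where the mean direction over the leading window differs
    from the mean over the trailing window by more than min_turn_deg --
    a simplified stand-in for the cluster-lifetime inflection test."""
    def mean(xs):
        return sum(xs) / len(xs)
    hits = []
    for i in range(window, len(directions_deg) - window):
        before = mean(directions_deg[i - window:i])
        after = mean(directions_deg[i:i + window])
        delta = abs(after - before) % 360.0
        delta = min(delta, 360.0 - delta)  # smallest angular difference
        if delta >= min_turn_deg:
            hits.append(i)
    return hits

# A straight edge heading 0 degrees, then a 90-degree corner
headings = [0.0] * 6 + [90.0] * 6
corners = corner_frames(headings)  # frames spanning the turn are flagged
```

The real framework replaces the fixed threshold with adaptive statistics over the composite window, but the core signal is the same: direction stays flat along edges and rotates sharply at corners.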

A key contribution of this work is a thresholding strategy that differentiates true corner events from background repetitive conformance. By modeling the temporal evolution of cluster statistics within a sliding composite window, we derive adaptive thresholds that remain robust to noise, illumination changes, and minor camera jitter while guaranteeing that any genuine orientation transition exceeding a minimal angular displacement is detected. We prove that under mild assumptions about the smoothness of motion along straight edges and the bounded duration of corner rotations, the proposed method achieves perfect recall of all four corners. Extensive conceptual analysis demonstrates that even if the drone’s speed varies, the camera experiences minor vibrations, or the rectangular path is imperfectly executed, the cluster lifetime inflection signature remains uniquely identifiable.

This framework provides a generalizable foundation for online structural change detection in video streams, applicable beyond drone navigation to domains such as autonomous driving, robotic inspection, and surveillance analytics. The corner detection use case serves as a concrete and rigorous anchor for the methodology, ensuring that the proposed approach is both theoretically grounded and practically verifiable. The resulting system is capable of selecting the exact frames corresponding to the four corners from the continuous first person video stream, even when the full tiling of the survey area is not attempted, thereby satisfying the validation requirements of real world aerial analytics pipelines.


Friday, April 17, 2026

 Problem 2:


 Sides of a triangle


You are given a positive integer array sides of length 3.


Determine if there exists a triangle with positive area whose three side lengths are given by the elements of sides.


If such a triangle exists, return an array of three floating-point numbers representing its internal angles (in degrees), sorted in non-decreasing order. Otherwise, return an empty array.


Answers within 10^-5 of the actual answer will be accepted.


Example 1:


Input: sides = [3,4,5]


Output: [36.86990,53.13010,90.00000]


Explanation:


You can form a right-angled triangle with side lengths 3, 4, and 5. The internal angles of this triangle are approximately 36.869897646, 53.130102354, and 90 degrees respectively.


Example 2:


Input: sides = [2,4,2]


Output: []


Explanation:


You cannot form a triangle with positive area using side lengths 2, 4, and 2.


Constraints:


• sides.length == 3


• 1 <= sides[i] <= 1000


class Solution {


    public double[] internalAngles(int[] sides) {


        Arrays.sort(sides);


        if (sides[0] + sides[1] > sides[2] &&


            sides[1] + sides[2] > sides[0] &&


            sides[0] + sides[2] > sides[1]) {


            double A = angleFromSides(sides[1], sides[2], sides[0]);


            double B = angleFromSides(sides[0], sides[2], sides[1]);


            double C = angleFromSides(sides[0], sides[1], sides[2]);


            double[] angles = {A, B, C};


            Arrays.sort(angles); // non-decreasing order


            return angles;


        } else {


            return new double[0];


        }


    }


    private static double angleFromSides(int side1, int side2, int opposite) {


        double numerator = (side1 * side1) + (side2 * side2) - (opposite * opposite);


        double denominator = 2.0 * side1 * side2;


        double cosValue = numerator / denominator;


        // Numerical safety: clamp to [-1, 1]


        cosValue = Math.max(-1.0, Math.min(1.0, cosValue));


        return Math.toDegrees(Math.acos(cosValue));


    }


}


Test cases:


Case 0:


Input


sides =


[3,4,5]


Output


[36.86990,53.13010,90.00000]


Expected


[36.86990,53.13010,90.00000]


Case 1:


Input


sides =


[2,4,2]


Output


[]


Expected


[]



Thursday, April 16, 2026

 Problem 1: Find the degree of each vertex.

You are given a 2D integer array matrix of size n x n representing the adjacency matrix of an undirected graph with n vertices labeled from 0 to n - 1.

• matrix[i][j] = 1 indicates that there is an edge between vertices i and j.

• matrix[i][j] = 0 indicates that there is no edge between vertices i and j.

The degree of a vertex is the number of edges connected to it.

Return an integer array ans of size n where ans[i] represents the degree of vertex i.

Example 1:

       1

      / \

     0--2

Input: matrix = [[0,1,1],[1,0,1],[1,1,0]]

Output: [2,2,2]

Explanation:

• Vertex 0 is connected to vertices 1 and 2, so its degree is 2.

• Vertex 1 is connected to vertices 0 and 2, so its degree is 2.

• Vertex 2 is connected to vertices 0 and 1, so its degree is 2.

Thus, the answer is [2, 2, 2].

Example 2:

   0 --- 1

       2

Input: matrix = [[0,1,0],[1,0,0],[0,0,0]]

Output: [1,1,0]

Explanation:

• Vertex 0 is connected to vertex 1, so its degree is 1.

• Vertex 1 is connected to vertex 0, so its degree is 1.

• Vertex 2 is not connected to any vertex, so its degree is 0.

Thus, the answer is [1, 1, 0].

Example 3:

Input: matrix = [[0]]

Output: [0]

Explanation:

There is only one vertex and it has no edges connected to it. Thus, the answer is [0].

Constraints:

• 1 <= n == matrix.length == matrix[i].length <= 100

• matrix[i][i] == 0

• matrix[i][j] is either 0 or 1

• matrix[i][j] == matrix[j][i]

class Solution {

    public int[] findDegrees(int[][] matrix) {

        int[] degree = new int[matrix.length];

        for (int i = 0; i < matrix.length; i++) {

            degree[i] = 0;

            for (int j = 0; j < matrix[0].length; j++) {

                if (matrix[i][j] == 1) {

                    degree[i] += 1;

                }

            }

        }

        return degree;

    }

}

Test cases:

Case 0:

Input

matrix =

[[0,1,1],[1,0,1],[1,1,0]]

Output

[2,2,2]

Expected

[2,2,2]

Case 1:

Input

matrix =

[[0,1,0],[1,0,0],[0,0,0]]

Output

[1,1,0]

Expected

[1,1,0]

Case 2:

Input

matrix =

[[0]]

Output

[0]

Expected

[0]


Wednesday, April 15, 2026

 This is a summary of the book titled “The Transformation Myth: Leading Your Organization through Uncertain Times (Management on the Cutting Edge)” written by Anh Nguyen Phillips, Rich Nanda, Jonathan R. Copulsky and Gerald C Kane and published by MIT Press in 2023. The book traces how the COVID‑19 pandemic exposed the fragility of long‑standing organizational assumptions while simultaneously revealing how disruption can become a catalyst for renewal. It argues that the companies that adapted most effectively were those that treated the crisis not as an interruption to be endured but as an inflection point demanding experimentation, reflection and long‑term reinvention. As the authors note, many leaders responded the way clinicians do when confronting acute and chronic conditions, trying rapid fixes where necessary while also laying the groundwork for more durable transformation. This shift in mindset—away from waiting for normalcy to return and toward embracing uncertainty as a space for opportunity—anchors the book’s central claim that growth‑oriented organizations are better positioned to navigate upheaval. As one line in the book puts it, “Leaders and organizations with a growth mindset will be better positioned to cope with disruption.”

From this foundation, the narrative emphasizes that clarity of purpose, values and mission becomes indispensable when teams face ambiguity. Purpose gives people a reason to stay engaged; values ensure that decisions remain principled even under pressure; and mission provides direction when circumstances are shifting too quickly for detailed plans to hold. The authors pair this with a call for rigorous scenario planning, urging leaders to examine long‑term trends, map uncertainties and guard against biases such as the “status quo bias” or the “bandwagon effect,” both mentioned explicitly in the text. By exploring multiple plausible futures and identifying “no regrets moves,” optional bets and transformative opportunities, organizations can avoid being blindsided by change.

The book also stresses that technology alone does not drive transformation; rather, it is the ecosystem of people, partners and capabilities that determines whether digital tools actually solve meaningful problems. Cloud computing becomes a vivid example of this principle, described as a flexible “Lego set” that allows companies to scale, pivot and innovate without heavy fixed investments. Data and machine learning similarly offer advantages only when paired with thoughtful questions, strong data literacy and a culture that values insight over infrastructure.

Finally, the book argues that crises reshape leaders as much as organizations. They heighten empathy, sharpen awareness of customer needs and reveal how deeply habits shape human behavior. As the book notes, “Crises have a way of bringing people together,” and the leaders who rose to the moment during the pandemic did so through authenticity, transparency and a willingness to experiment boldly. The authors conclude that disruption, while destabilizing, can leave organizations more resilient and leaders more human if they approach uncertainty with curiosity, discipline and a commitment to continuous learning.


Tuesday, April 14, 2026

 This is a runbook for migrating a GenAI workload, comprising an AKS cluster and its Langfuse namespaces, from one region to another:

1. Step 1: Export the source resource group with aztfexport:

#!/usr/bin/env bash

set -euo pipefail

# ---- CONFIG ----

SOURCE_SUBSCRIPTION_ID="<SOURCE_SUBSCRIPTION_ID>"

SOURCE_RG="<SOURCE_GENAI_RG>" # e.g., rg-genai-aks

SOURCE_LOCATION="<SOURCE_REGION>" # e.g., westus2

TARGET_SUBSCRIPTION_ID="<TARGET_SUBSCRIPTION_ID>"

TARGET_LOCATION="eastus2"

SUFFIX="eus2"

TARGET_RG="${SOURCE_RG}-${SUFFIX}"

EXPORT_DIR="./tfexport-${SOURCE_RG}-${SUFFIX}"

# ---- EXPORT FROM SOURCE ----

az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

mkdir -p "${EXPORT_DIR}"

echo "Exporting all resources from ${SOURCE_RG} using aztfexport..."

aztfexport group \

  --resource-group "${SOURCE_RG}" \

  --output-directory "${EXPORT_DIR}" \

  --append

echo "Export complete: ${EXPORT_DIR}"

# ---- CREATE TARGET RG ----

az account set --subscription "${TARGET_SUBSCRIPTION_ID}"

echo "Creating target RG ${TARGET_RG} in ${TARGET_LOCATION}..."

az group create \

  --name "${TARGET_RG}" \

  --location "${TARGET_LOCATION}" \

  --output none

# ---- REWRITE TF FOR DR ----

echo "Rewriting Terraform for ${TARGET_LOCATION} and -${SUFFIX} names..."

find "${EXPORT_DIR}" -type f -name "*.tf" | while read -r FILE; do

  # Change region

  sed -i "s/\"${SOURCE_LOCATION}\"/\"${TARGET_LOCATION}\"/g" "${FILE}"

  # Append suffix to resource names (simple heuristic; review before apply)

  sed -i -E "s/(name *= *\"[a-zA-Z0-9_-]+)\"/\1-${SUFFIX}\"/g" "${FILE}"

  # Retarget RG references

  sed -i "s/\"${SOURCE_RG}\"/\"${TARGET_RG}\"/g" "${FILE}"

done

echo "Rewrite done. Review ${EXPORT_DIR} and then:"

echo " cd ${EXPORT_DIR}"

echo " terraform init && terraform apply"
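The name-suffix sed above is a blunt heuristic: it rewrites every `name = "..."` value, including keys that merely end in `name` (such as `resource_group_name`). It is worth previewing on sample input before running it against the export. A minimal offline check, using made-up values in the shape aztfexport emits:

```shell
#!/usr/bin/env bash
set -euo pipefail

SUFFIX="eus2"

# Hypothetical lines in the shape aztfexport emits; not from a real export
SAMPLE='name                = "aks-genai-prod"
sku_tier            = "Standard"'

# Same heuristic as the runbook: append -eus2 to every name = "..." value
echo "${SAMPLE}" | sed -E "s/(name *= *\"[a-zA-Z0-9_-]+)\"/\1-${SUFFIX}\"/g"
```

The first line becomes `name                = "aks-genai-prod-eus2"` while `sku_tier` is left untouched, because the pattern anchors on the literal key text `name =`.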

2. Step 2: Migrate namespaces and workloads:

#!/usr/bin/env bash

set -euo pipefail

# ---- CONFIG ----

SOURCE_SUBSCRIPTION_ID="<SOURCE_SUBSCRIPTION_ID>"

TARGET_SUBSCRIPTION_ID="<TARGET_SUBSCRIPTION_ID>"

SRC_AKS_RG="<SRC_AKS_RG>"

SRC_AKS_NAME="<SRC_AKS_NAME>"

DST_AKS_RG="<DST_AKS_RG>"

DST_AKS_NAME="<DST_AKS_NAME>"

# Namespaces to exclude (system)

EXCLUDE_NS_REGEX="^(kube-system|kube-public|kube-node-lease|gatekeeper-system|azure-arc|default)$"

# ---- GET CONTEXTS ----

echo "Getting kubeconfig for source AKS..."

az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

az aks get-credentials -g "${SRC_AKS_RG}" -n "${SRC_AKS_NAME}" --overwrite-existing

SRC_CONTEXT=$(kubectl config current-context)

echo "Getting kubeconfig for destination AKS..."

az account set --subscription "${TARGET_SUBSCRIPTION_ID}"

az aks get-credentials -g "${DST_AKS_RG}" -n "${DST_AKS_NAME}" --overwrite-existing

DST_CONTEXT=$(kubectl config current-context)

echo "Source context: ${SRC_CONTEXT}"

echo "Destination context: ${DST_CONTEXT}"

echo ""

# ---- EXPORT NAMESPACES & WORKLOADS FROM SOURCE ----

EXPORT_DIR="./aks-migration-eus2"

mkdir -p "${EXPORT_DIR}"

kubectl config use-context "${SRC_CONTEXT}"

echo "Exporting namespaces and workloads from source cluster..."

NAMESPACES=$(kubectl get ns -o jsonpath='{.items[*].metadata.name}')

for NS in ${NAMESPACES}; do

  if [[ "${NS}" =~ ${EXCLUDE_NS_REGEX} ]]; then

    echo "Skipping system namespace: ${NS}"

    continue

  fi

  NS_DIR="${EXPORT_DIR}/${NS}"

  mkdir -p "${NS_DIR}"

  echo "Exporting namespace: ${NS}"

  # Namespace definition

  kubectl get ns "${NS}" -o yaml > "${NS_DIR}/namespace.yaml"

  # Core workload types (adjust as needed). Note: exported YAML still carries
  # cluster-specific fields (resourceVersion, uid, clusterIP, status); review
  # or strip them before applying to the destination cluster.

  for KIND in deployment statefulset daemonset service configmap secret ingress cronjob job; do

    kubectl get "${KIND}" -n "${NS}" -o yaml > "${NS_DIR}/${KIND}.yaml" || true

  done

done

echo "Export complete: ${EXPORT_DIR}"

# ---- APPLY TO DESTINATION CLUSTER ----

kubectl config use-context "${DST_CONTEXT}"

echo "Applying namespaces and workloads to destination cluster..."

for NS in ${NAMESPACES}; do

  if [[ "${NS}" =~ ${EXCLUDE_NS_REGEX} ]]; then

    continue

  fi

  NS_DIR="${EXPORT_DIR}/${NS}"

  if [[ ! -d "${NS_DIR}" ]]; then

    continue

  fi

  echo "Creating namespace: ${NS}"

  kubectl apply -f "${NS_DIR}/namespace.yaml" || true

  # Apply configs and secrets before the workloads that mount them

  for KIND in configmap secret service deployment statefulset daemonset ingress cronjob job; do

    FILE="${NS_DIR}/${KIND}.yaml"

    if [[ -s "${FILE}" ]]; then

      echo "Applying ${KIND} in ${NS}"

      kubectl apply -n "${NS}" -f "${FILE}"

    fi

  done

done

echo "AKS namespace/workload migration complete."
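The EXCLUDE_NS_REGEX above anchors on exact namespace names, so it can be exercised offline, with no cluster, before running the export. A quick check against a few sample namespaces (langfuse and genai-api are illustrative application namespaces):

```shell
#!/usr/bin/env bash
set -euo pipefail

EXCLUDE_NS_REGEX="^(kube-system|kube-public|kube-node-lease|gatekeeper-system|azure-arc|default)$"

# Two system namespaces and two hypothetical application namespaces
for NS in kube-system langfuse default genai-api; do
  if [[ "${NS}" =~ ${EXCLUDE_NS_REGEX} ]]; then
    echo "skip ${NS}"
  else
    echo "copy ${NS}"
  fi
done
```

Because the pattern is anchored with `^...$`, a namespace like `default-api` would still be copied; only exact matches are skipped.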

3. Step 3: Find storage accounts that allow the AKS subnets and migrate their data:

#!/usr/bin/env bash

set -euo pipefail

# ---- CONFIG ----

SOURCE_SUBSCRIPTION_ID="<SOURCE_SUBSCRIPTION_ID>"

TARGET_SUBSCRIPTION_ID="<TARGET_SUBSCRIPTION_ID>"

SRC_AKS_RG="<SRC_AKS_RG>"

SRC_AKS_NAME="<SRC_AKS_NAME>"

SUFFIX="eus2"

# ---- GET AKS VNET/SUBNETS ----

az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

AKS_INFO=$(az aks show -g "${SRC_AKS_RG}" -n "${SRC_AKS_NAME}")

# For Azure CNI with custom VNet:

VNET_SUBNET_IDS=$(echo "${AKS_INFO}" | jq -r '.agentPoolProfiles[].vnetSubnetId' | sort -u)

echo "AKS subnets:"

echo "${VNET_SUBNET_IDS}"

echo ""

# ---- FIND MATCHING STORAGE ACCOUNTS ----

echo "Finding storage accounts whose network rules allow these subnets..."

STORAGE_ACCOUNTS=$(az storage account list --query "[].id" -o tsv)

MATCHED_SA=()

for SA_ID in ${STORAGE_ACCOUNTS}; do

  SA_NAME=$(basename "${SA_ID}")

  SA_RG=$(echo "${SA_ID}" | awk -F/ '{print $5}')

  RULES=$(az storage account network-rule list \

    --account-name "${SA_NAME}" \

    --resource-group "${SA_RG}" 2>/dev/null || echo "")

  if [[ -z "${RULES}" ]]; then

    continue

  fi

  for SUBNET_ID in ${VNET_SUBNET_IDS}; do

    if echo "${RULES}" | jq -e --arg sn "${SUBNET_ID}" 'any(.virtualNetworkRules[]?; .virtualNetworkResourceId == $sn)' >/dev/null 2>&1; then

      echo "Matched storage account: ${SA_NAME} (RG: ${SA_RG}) for subnet: ${SUBNET_ID}"

      MATCHED_SA+=("${SA_ID}")

      break

    fi

  done

done

# ${MATCHED_SA[@]-} tolerates an empty array under set -u on older bash

MATCHED_SA_UNIQ=($(printf "%s\n" "${MATCHED_SA[@]-}" | sort -u))

echo ""

echo "Matched storage accounts:"

printf "%s\n" "${MATCHED_SA_UNIQ[@]}"

echo ""

# ---- COPY DATA TO DR STORAGE ACCOUNTS ----

for SA_ID in "${MATCHED_SA_UNIQ[@]}"; do

  SA_NAME=$(basename "${SA_ID}")

  SA_RG=$(echo "${SA_ID}" | awk -F/ '{print $5}')

  TARGET_SA_NAME="${SA_NAME}${SUFFIX}"

  echo "Processing storage account:"

  echo " Source: ${SA_NAME} (RG: ${SA_RG})"

  echo " Target: ${TARGET_SA_NAME}"

  echo ""

  # Source key

  az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

  SRC_KEY=$(az storage account keys list \

    --account-name "${SA_NAME}" \

    --resource-group "${SA_RG}" \

    --query "[0].value" -o tsv)

  SRC_CONN="DefaultEndpointsProtocol=https;AccountName=${SA_NAME};AccountKey=${SRC_KEY};EndpointSuffix=core.windows.net"

  # Target key

  az account set --subscription "${TARGET_SUBSCRIPTION_ID}"

  # Adjust RG derivation if needed

  TARGET_SA_RG="<TARGET_RG_FOR_${TARGET_SA_NAME}>"

  TGT_KEY=$(az storage account keys list \

    --account-name "${TARGET_SA_NAME}" \

    --resource-group "${TARGET_SA_RG}" \

    --query "[0].value" -o tsv)

  TGT_CONN="DefaultEndpointsProtocol=https;AccountName=${TARGET_SA_NAME};AccountKey=${TGT_KEY};EndpointSuffix=core.windows.net"

  # List containers in source

  az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

  CONTAINERS=$(az storage container list \

    --connection-string "${SRC_CONN}" \

    --query "[].name" -o tsv)

  for CONT in ${CONTAINERS}; do

    echo "Copying container: ${CONT}"

    SRC_URL="https://${SA_NAME}.blob.core.windows.net/${CONT}"

    TGT_URL="https://${TARGET_SA_NAME}.blob.core.windows.net/${CONT}"

    # azcopy needs credentials on both URLs: append SAS tokens here, or run
    # `azcopy login` first with Storage Blob Data roles on both accounts

    azcopy copy "${SRC_URL}" "${TGT_URL}" --recursive=true

    echo "Completed copy for container ${CONT}"

  done

done

echo "Storage data copy to DR accounts complete."
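The `basename`/`awk` field extraction used throughout this step relies on the fixed shape of an Azure resource ID (`/subscriptions/<id>/resourceGroups/<rg>/providers/...`). It can be verified offline; the GUID and names below are placeholders, not real resources:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical storage account resource ID (GUID and names are placeholders)
SA_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-genai-aks/providers/Microsoft.Storage/storageAccounts/stgenaiprod"

SA_NAME=$(basename "${SA_ID}")                     # last path segment
SA_RG=$(echo "${SA_ID}" | awk -F/ '{print $5}')    # 5th '/'-delimited field

echo "account=${SA_NAME} rg=${SA_RG}"
# prints: account=stgenaiprod rg=rg-genai-aks
```

Field 5 is the resource group because the leading `/` makes field 1 empty; `subscriptions` is field 2, the subscription ID field 3, and `resourceGroups` field 4.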



Monday, April 13, 2026

 This is the summary of a book titled “From Panic to Profit: Uncover Value, Boost Revenue, and Grow Your Business with the 80/20 Principle” written by Bill Canady and published by Wiley in 2025. This book guides readers through the transformation of a struggling business, beginning with the emotional reality leaders face when their companies stall or decline. Canady acknowledges that fear, uncertainty, and doubt can quickly distort judgment, but insists that panic is unnecessary when a disciplined operating system exists to restore clarity and momentum. That system is the Profitable Growth Operating System, or PGOS, a framework built on the 80/20 principle—the idea that a small fraction of actions, customers, products, or processes generate the majority of results. Canady treats this principle not as a clever heuristic but as a natural law that leaders must learn to see and apply if they want to redirect their organizations toward profitable growth.

Most companies do not naturally operate with an 80/20 mindset. They spread their attention across too many initiatives, customers, and internal processes, diluting their ability to generate meaningful results. Canady argues that the first step in reversing this pattern is to redefine the organization’s current state through the lens of 80/20 analysis. This requires gathering real data—about customers, employees, products, and markets—so that leaders can distinguish the vital few from the trivial many. As he puts it, “in the absence of data, knowledge, and understanding, there is an intellectual and emotional vacuum almost instantly filled by FUD: Fear, Uncertainty and Doubt,” a line that captures his belief that disciplined measurement is the antidote to fear-driven decision-making.

But data alone is not enough. Canady describes a leadership structure essential for PGOS to work: a visionary who sets direction and enforces a culture of pace, transparency, and results; one or more prophets who translate that vision into strategy and processes; and operators who run the business day to day while applying 80/20 thinking to their own domains. This triumvirate must be aligned, active, and committed, because PGOS is not a theoretical exercise—it is a system that must be lived through daily decisions, trade-offs, and disciplined execution.

Once the need for change is established, Canady turns to the importance of simplification. He argues that many companies instinctively cut staff when they need to streamline, but the real inefficiencies often lie elsewhere: in unproductive product lines, unprofitable customers, outdated processes, or organizational habits that create friction. Simplification, in his view, means stripping away the activities that consume energy without contributing to growth. This may involve reducing product variety, narrowing customer focus, or redesigning workflows. Such decisions are rarely easy, and they often provoke resistance from people invested in the old ways, but Canady insists that simplification is essential for freeing resources to invest in the activities that truly matter.

With the organization’s priorities clarified, the next step is to set a concrete goal—one that expresses success in the simplest possible terms: money. Canady encourages leaders to articulate a financial target that can guide decisions over the next several years, even if the underlying data is imperfect. The goal becomes the anchor for strategy development, which must be grounded in updated assumptions about the company’s environment and a clear understanding of how revenue translates into profit. He stresses the importance of both short-term wins and long-term positioning, arguing that early successes build confidence and “earn the right to grow.”

Execution then becomes the central challenge. Canady describes the need to structure the organization so that critical initiatives have clear owners, defined responsibilities, and the resources required to succeed. He urges leaders to confront the “brutal facts” of their situation, borrowing Admiral Jim Stockdale’s reminder that optimism must never obscure reality. The action plan that emerges should translate goals and strategies into specific steps, timelines, and accountabilities. It is only through action—imperfect, iterative, and grounded in the 80/20 principle—that the plan becomes real.

The first 100 days of a PGOS transformation are particularly important. During this period, the company must gather data, refine goals, launch initiatives, and begin generating measurable improvements. These early efforts create the momentum that allows the organization to claim its “right to grow.” But Canady warns that vigilance must continue beyond the initial phase. Leaders must keep their eyes open, continuously evaluating whether their goals, strategies, and actions are truly driving revenue and profitability. They must be willing to revise plans, eliminate underperforming offerings, and make difficult decisions that keep the company aligned with its most productive activities.

By the end of the book, Canady presents PGOS not as a rigid formula but as a disciplined way of thinking—one that combines simplification, data-driven insight, aligned leadership, and relentless focus on the activities that generate the greatest impact. The 80/20 principle becomes both a diagnostic tool and a compass, guiding leaders through uncertainty toward a business that grows not by doing more, but by doing what matters most.


Sunday, April 12, 2026

 This is a runbook for migrating a Databricks workload from one region to another:

1. Step 1. Create the workspace to receive the workloads in the destination.

#!/usr/bin/env bash

set -euo pipefail

# Source subscription / RG

SOURCE_SUBSCRIPTION_ID="<SOURCE_SUBSCRIPTION_ID>"

SOURCE_RG="<SOURCE_DATABRICKS_RG>" # e.g., rg-dbx-prod

SOURCE_LOCATION="<SOURCE_REGION>" # e.g., westus2

# DR subscription / RG / region

TARGET_SUBSCRIPTION_ID="<TARGET_SUBSCRIPTION_ID>"

TARGET_LOCATION="eastus2"

SUFFIX="eus2"

TARGET_RG="${SOURCE_RG}-${SUFFIX}"

# 1. Set source subscription and export

az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

EXPORT_DIR="./tfexport-${SOURCE_RG}-${SUFFIX}"

mkdir -p "${EXPORT_DIR}"

echo "Exporting all resources from ${SOURCE_RG} using aztfexport..."

aztfexport group \

  --resource-group "${SOURCE_RG}" \

  --output-directory "${EXPORT_DIR}" \

  --append

echo "Export complete. Files in ${EXPORT_DIR}"

# 2. Create target RG in target subscription

az account set --subscription "${TARGET_SUBSCRIPTION_ID}"

echo "Creating target resource group ${TARGET_RG} in ${TARGET_LOCATION}..."

az group create \

  --name "${TARGET_RG}" \

  --location "${TARGET_LOCATION}" \

  --output none

# 3. Rewrite names and locations in Terraform files

# - Add suffix to resource names

# - Change location to eastus2

# - Optionally change resource_group_name references

echo "Rewriting Terraform for DR region and names..."

find "${EXPORT_DIR}" -type f -name "*.tf" | while read -r FILE; do

  # Example: append -eus2 to name fields and change location

  # This is simplistic; refine with more precise sed/regex as needed.

  # Change location

  sed -i "s/\"${SOURCE_LOCATION}\"/\"${TARGET_LOCATION}\"/g" "${FILE}"

  # Append suffix to resource names (name = "xyz" → "xyz-eus2")

  # Be careful not to touch things like SKU names, etc.

  sed -i -E "s/(name *= *\"[a-zA-Z0-9_-]+)\"/\1-${SUFFIX}\"/g" "${FILE}"

  # If resource_group_name is hard-coded, retarget it

  sed -i "s/\"${SOURCE_RG}\"/\"${TARGET_RG}\"/g" "${FILE}"

done

echo "Terraform rewrite done. Review ${EXPORT_DIR} before applying."

# 4. (Optional) Initialize and apply Terraform in target subscription

# cd "${EXPORT_DIR}"

# terraform init

# terraform apply

2. Step 2. Copy all the workloads for the workspace.

#!/usr/bin/env bash

set -euo pipefail

# Databricks CLI profiles

SOURCE_PROFILE="src-dbx"

TARGET_PROFILE="dr-dbx"

# Temp export directory

EXPORT_DIR="./dbx-migration-eus2"

NOTEBOOKS_DIR="${EXPORT_DIR}/notebooks"

JOBS_FILE="${EXPORT_DIR}/jobs.json"

mkdir -p "${NOTEBOOKS_DIR}"

echo "Using Databricks profiles:"

echo " Source: ${SOURCE_PROFILE}"

echo " Target: ${TARGET_PROFILE}"

echo ""

# 1. Export all notebooks from source workspace

# This uses workspace export with recursive flag.

echo "Exporting notebooks from source workspace..."

# export_dir is recursive, so a single call from the workspace root exports
# every notebook (avoid per-path loops that shadow the shell's $PATH variable)

databricks --profile "${SOURCE_PROFILE}" workspace export_dir \

  / \

  "${NOTEBOOKS_DIR}" \

  --overwrite

echo "Notebook export complete."

# 2. Import notebooks into target workspace

echo "Importing notebooks into target workspace..."

find "${NOTEBOOKS_DIR}" -type d | while read -r DIR; do

  REL_PATH="${DIR#${NOTEBOOKS_DIR}}"

  if [[ -n "${REL_PATH}" ]]; then

    databricks --profile "${TARGET_PROFILE}" workspace mkdirs "${REL_PATH}"

  fi

done

find "${NOTEBOOKS_DIR}" -type f | while read -r FILE; do

  REL_PATH="${FILE#${NOTEBOOKS_DIR}}"

  TARGET_PATH="${REL_PATH}"

  echo "Importing ${TARGET_PATH}"

  # With --format AUTO the language is inferred from the file extension

  databricks --profile "${TARGET_PROFILE}" workspace import \

    --format AUTO \

    --overwrite \

    "${FILE}" \

    "${TARGET_PATH}"

done

echo "Notebook import complete."

# 3. Export jobs from source workspace

echo "Exporting jobs from source workspace..."

databricks --profile "${SOURCE_PROFILE}" jobs list --output JSON > "${JOBS_FILE}"

# 4. Recreate jobs in target workspace

echo "Recreating jobs in target workspace..."

jq -c '.jobs[]' "${JOBS_FILE}" | while read -r JOB; do

  # jobs create expects the job settings object itself; strip workspace-specific
  # fields (IDs, timestamps, schedule status) and keep only .settings

  CLEANED=$(echo "${JOB}" | jq -c '.settings | del(.schedule_status, .created_time, .modified_time)')

  echo "Creating job: $(echo "${CLEANED}" | jq -r '.name')"

  databricks --profile "${TARGET_PROFILE}" jobs create --json "${CLEANED}"

done

echo "Job migration complete."
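The field-stripping step can be sanity-checked offline against a synthetic job record in the shape that `jobs list --output JSON` returns (all values below are invented; requires jq):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical record in the shape of `jobs list --output JSON` (values invented)
JOB='{"job_id":123,"created_time":1700000000,"creator_user_name":"a@b.c","settings":{"name":"nightly-etl","max_concurrent_runs":1}}'

# Drop fields the target workspace will reassign; only the settings carry over
echo "${JOB}" | jq -c 'del(.job_id, .created_time, .creator_user_name)'
# prints: {"settings":{"name":"nightly-etl","max_concurrent_runs":1}}
```

Whatever survives the deletion is exactly what the target workspace has to reconstruct; IDs and timestamps are always reassigned on creation.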

3. Step 3. Find storage accounts reachable from the workspace subnets and copy their data to eastus2:

#!/usr/bin/env bash

set -euo pipefail

SOURCE_SUBSCRIPTION_ID="<SOURCE_SUBSCRIPTION_ID>"

TARGET_SUBSCRIPTION_ID="<TARGET_SUBSCRIPTION_ID>"

SOURCE_RG="<SOURCE_DATABRICKS_RG>"

SUFFIX="eus2"

# Databricks workspace info (source)

DATABRICKS_WS_NAME="<SOURCE_DATABRICKS_WORKSPACE_NAME>"

DATABRICKS_WS_RG="${SOURCE_RG}"

# 1. Get workspace VNet/subnets (assuming VNet injection)

az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

WS_INFO=$(az databricks workspace show -g "${DATABRICKS_WS_RG}" -n "${DATABRICKS_WS_NAME}")

VNET_ID=$(echo "${WS_INFO}" | jq -r '.parameters.customVirtualNetworkId.value // empty')

SUBNET_IDS=$(echo "${WS_INFO}" | jq -r '(.parameters.customPublicSubnetName.value // empty), (.parameters.customPrivateSubnetName.value // empty)' | sed "/^$/d")

echo "Workspace VNet: ${VNET_ID}"

echo "Workspace subnets (names):"

echo "${SUBNET_IDS}"

echo ""

# 2. Find storage accounts whose network rules allow these subnets

echo "Finding storage accounts with network rules allowing workspace subnets..."

STORAGE_ACCOUNTS=$(az storage account list --query "[].id" -o tsv)

MATCHED_SA=()

for SA_ID in ${STORAGE_ACCOUNTS}; do

  RULES=$(az storage account network-rule list --account-name "$(basename "${SA_ID}")" --resource-group "$(echo "${SA_ID}" | awk -F/ '{print $5}')" 2>/dev/null || echo "")

  if [[ -z "${RULES}" ]]; then

    continue

  fi

  for SUBNET in ${SUBNET_IDS}; do

    # Check if subnet name appears in virtualNetworkRules

    if echo "${RULES}" | jq -e --arg sn "${SUBNET}" 'any(.virtualNetworkRules[]?; .virtualNetworkResourceId | contains($sn))' >/dev/null 2>&1; then

      echo "Matched storage account: ${SA_ID} for subnet: ${SUBNET}"

      MATCHED_SA+=("${SA_ID}")

      break

    fi

  done

done

# Deduplicate

MATCHED_SA_UNIQ=($(printf "%s\n" "${MATCHED_SA[@]-}" | sort -u))

echo ""

echo "Matched storage accounts:"

printf "%s\n" "${MATCHED_SA_UNIQ[@]}"

echo ""

# 3. For each matched storage account, copy data to corresponding DR storage account with eus2 suffix

for SA_ID in "${MATCHED_SA_UNIQ[@]}"; do

  SA_NAME=$(basename "${SA_ID}")

  SA_RG=$(echo "${SA_ID}" | awk -F/ '{print $5}')

  TARGET_SA_NAME="${SA_NAME}${SUFFIX}"

  echo "Processing storage account:"

  echo " Source: ${SA_NAME} (RG: ${SA_RG})"

  echo " Target: ${TARGET_SA_NAME}"

  echo ""

  # Get source key

  SRC_KEY=$(az storage account keys list \

    --account-name "${SA_NAME}" \

    --resource-group "${SA_RG}" \

    --query "[0].value" -o tsv)

  # Switch to target subscription and get target key

  az account set --subscription "${TARGET_SUBSCRIPTION_ID}"

  TARGET_SA_RG="<TARGET_SA_RG_FOR_${TARGET_SA_NAME}>" # or derive if same naming pattern

  TGT_KEY=$(az storage account keys list \

    --account-name "${TARGET_SA_NAME}" \

    --resource-group "${TARGET_SA_RG}" \

    --query "[0].value" -o tsv)

  # Build connection strings

  SRC_CONN="DefaultEndpointsProtocol=https;AccountName=${SA_NAME};AccountKey=${SRC_KEY};EndpointSuffix=core.windows.net"

  TGT_CONN="DefaultEndpointsProtocol=https;AccountName=${TARGET_SA_NAME};AccountKey=${TGT_KEY};EndpointSuffix=core.windows.net"

  # List containers in source

  az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

  CONTAINERS=$(az storage container list \

    --connection-string "${SRC_CONN}" \

    --query "[].name" -o tsv)

  for CONT in ${CONTAINERS}; do

    echo "Copying container: ${CONT}"

    SRC_URL="https://${SA_NAME}.blob.core.windows.net/${CONT}"

    TGT_URL="https://${TARGET_SA_NAME}.blob.core.windows.net/${CONT}"

    # Generate SAS tokens or use connection string; here we use account key via azcopy env vars

    export AZCOPY_AUTO_LOGIN_TYPE=AZURE_AD # or use SAS if preferred

    # If using key-based auth:

    # export AZCOPY_ACCOUNT_KEY="${SRC_KEY}" for source and then TGT_KEY for target in separate runs

    # For simplicity, assume managed identity / AAD with proper RBAC.

    azcopy copy "${SRC_URL}" "${TGT_URL}" \

      --recursive=true

    echo "Completed copy for container ${CONT}"

  done

  # Reset to source subscription for next iteration

  az account set --subscription "${SOURCE_SUBSCRIPTION_ID}"

done

echo "All matched storage accounts copied to DR counterparts."
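One caveat with the `${SA_NAME}${SUFFIX}` naming convention used in both storage steps: Azure storage account names are limited to 3–24 lowercase alphanumeric characters, so appending a suffix can produce an invalid name. A cheap offline guard (the account names below are invented):

```shell
#!/usr/bin/env bash
set -euo pipefail

SUFFIX="eus2"

# Hypothetical source account names; the suffixed name must stay within
# Azure's 24-character lowercase-alphanumeric limit for storage accounts
for SA_NAME in stgenaiprod stverylongaccountname12; do
  TARGET_SA_NAME="${SA_NAME}${SUFFIX}"
  if [[ ${#TARGET_SA_NAME} -le 24 && "${TARGET_SA_NAME}" =~ ^[a-z0-9]+$ ]]; then
    echo "ok   ${TARGET_SA_NAME}"
  else
    echo "BAD  ${TARGET_SA_NAME} (len=${#TARGET_SA_NAME})"
  fi
done
```

The first name passes (15 characters); the second is flagged at 27 characters and would need a shorter base name or suffix before the DR account can be created.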




#codingexercises: CodingExercise-04-12-2026.docx