Thursday, June 19, 2025

 In continuation of the previous posts, this shows how to deploy an agent that searches our drone images and detected objects in response to user queries: 

 

from azure.search.documents.indexes import SearchIndexClient  

 

from azure.search.documents.indexes.models import (  

    KnowledgeAgent,
    KnowledgeAgentAzureOpenAIModel,
    KnowledgeAgentRequestLimits,
    KnowledgeAgentTargetIndex,
    AzureOpenAIVectorizerParameters

) 

from azure.ai.agents.models import AzureAISearchTool, AzureAISearchQueryType, MessageRole, ListSortOrder 

 

from azure.ai.agents import AgentsClient 

from dotenv import load_dotenv 

from azure.identity import DefaultAzureCredential, get_bearer_token_provider 

from azure.core.credentials import AzureKeyCredential 

import os 

 

load_dotenv(override=True) 

 

project_endpoint = os.environ["AZURE_PROJECT_ENDPOINT"] 

project_api_key = os.environ["AZURE_PROJECT_API_KEY"] 

agent_model = os.getenv("AZURE_AGENT_MODEL", "gpt-4o-mini") 

search_endpoint = os.environ["AZURE_SEARCH_SERVICE_ENDPOINT"] 

api_version = os.getenv("AZURE_SEARCH_API_VERSION", "2025-05-01-Preview")

search_api_key = os.getenv("AZURE_SEARCH_ADMIN_KEY") 

credential = AzureKeyCredential(search_api_key) 

token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://search.azure.com/.default") 

index_name = os.getenv("AZURE_SEARCH_02_INDEX_NAME", "index02") 

azure_openai_endpoint = os.environ["AZURE_OPENAI_ENDPOINT"] 

azure_openai_api_key = os.getenv("AZURE_OPENAI_API_KEY") 

azure_openai_gpt_deployment = os.getenv("AZURE_OPENAI_GPT_DEPLOYMENT", "gpt-4o-mini") 

azure_openai_gpt_model = os.getenv("AZURE_OPENAI_GPT_MODEL", "gpt-4o-mini") 

azure_openai_embedding_deployment = os.getenv("AZURE_OPENAI_EMBEDDING_DEPLOYMENT", "text-embedding-ada-002") 

azure_openai_embedding_model = os.getenv("AZURE_OPENAI_EMBEDDING_MODEL", "text-embedding-ada-002") 

agent_name = os.getenv("AZURE_SEARCH_AGENT_NAME", "objects-search-agent") 

api_version = "2025-05-01-Preview" 

agent_max_output_tokens = 10000

 

# The search_tool object can now be used within an Azure AI project, 

# typically as part of an agent or flow, to perform search operations 

# against the specified Azure AI Search index. 

# For example, if you are building an agent, this tool could be invoked 

# when the agent needs to retrieve information from your search index. 

agents_client = AgentsClient(endpoint=project_endpoint, credential=DefaultAzureCredential())  # agents are served from the AI project endpoint

index_client = SearchIndexClient(endpoint=search_endpoint, credential=AzureKeyCredential(search_api_key))  

instructions = """ 

You are an AI assistant that answers questions about the stored and indexed drone images and objects. 

The data source is an Azure AI Search resource whose schema has a JSON description field, a vector field, and an id field; this id field must be cited in your answer.
If you do not find a match for the query, respond with "I don't know"; otherwise cite references with the value of the id field.

""" 

 

# NOTE: this should be the AI project's Azure AI Search connection ID; the search service URL below is only a placeholder default.
connection_id = os.getenv("AI_AZURE_AI_CONNECTION_ID", "https://srch-vision-01.search.windows.net")

# Initialize the agent's Azure AI Search tool with the search index connection id.
# Optional parameters include query_type, filter, and top_k.

search_tool = AzureAISearchTool( 

    index_connection_id=connection_id, 

    index_name=index_name, 

    query_type=AzureAISearchQueryType.VECTOR_SEMANTIC_HYBRID, 

    filter="",  # Optional filter expression 

    top_k=3  # Number of results to return 

) 


 


agent = agents_client.create_agent( 

    model=agent_model,

    name=agent_name, 

    instructions=instructions, 

    tools=search_tool.definitions, 

    tool_resources=search_tool.resources 

) 

 

# Create a thread for the conversation 

thread = agents_client.threads.create() 

 

# Send a user message (the query text) 

query_text = "How many red cars can be found?" 

message = agents_client.messages.create( 

    thread_id=thread.id, 

    role=MessageRole.USER, 

    content=query_text, 

) 

# Run the agent to process the query 

run = agents_client.runs.create_and_process(thread_id=thread.id, agent_id=agent.id) 

 

# Check run status 

if run.status == "failed": 

    print(f"Run failed: {run.last_error}") 

 

# Retrieve and print all messages in the thread (including agent's answer) 

messages = agents_client.messages.list(thread_id=thread.id, order=ListSortOrder.ASCENDING) 

for message in messages: 

    print(f"Role: {message.role}, Content: {message.content}") 

 

 


Wednesday, June 18, 2025

 This highlights the need for, and a method of, reducing the workload of populating the drone world catalog from aerial drone imagery.

#! /usr/bin/python

import json

from azure.search.documents import SearchClient

from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError  # used by the retrying upload() below

from azure.ai.vision.imageanalysis import ImageAnalysisClient

from azure.search.documents.models import (

    VectorizedQuery,

    VectorizableTextQuery

)

from dedup import ImageDeduplicator

from tenacity import retry, stop_after_attempt, wait_fixed

import os

import re

import sys

import time

search_endpoint = os.environ["AZURE_SEARCH_SERVICE_ENDPOINT"]

api_version = os.getenv("AZURE_SEARCH_API_VERSION")

search_api_key = os.getenv("AZURE_SEARCH_ADMIN_KEY")

index_name = os.getenv("AZURE_SEARCH_INDEX_NAME", "index00")

credential = AzureKeyCredential(search_api_key)

dest_index_name = os.getenv("AZURE_SEARCH_02_INDEX_NAME", "index02")

vision_api_key = os.getenv("AZURE_AI_VISION_API_KEY")

vision_api_version = os.getenv("AZURE_AI_VISION_API_VERSION")

vision_region = os.getenv("AZURE_AI_VISION_REGION")

vision_endpoint = os.getenv("AZURE_AI_VISION_ENDPOINT")

source_url_template = os.getenv("AZURE_SOURCE_SAS_URI")

destination_url_template = os.getenv("AZURE_DESTINATION_SAS_URI")

sys.path.insert(0, os.path.abspath(".."))

from visionprocessor.vectorizer import vectorize_image, analyze_image

deduplicator = ImageDeduplicator()

# Initialize SearchClient

search_client = SearchClient(

    endpoint=search_endpoint,

    index_name=index_name,

    credential=AzureKeyCredential(search_api_key)

)

destination_client = SearchClient(

    endpoint=search_endpoint,

    index_name=dest_index_name,

    credential=AzureKeyCredential(search_api_key)

)

vision_credential = AzureKeyCredential(vision_api_key)

analysis_client = ImageAnalysisClient(vision_endpoint, vision_credential)

import cv2

import numpy as np

import requests

from io import BytesIO

from azure.storage.blob import BlobClient

def read_image_from_blob(sas_url):

    """Reads an image from Azure Blob Storage using its SAS URL."""

    response = None

    try:

        response = requests.get(sas_url)

    except Exception as e:

        print(f"Error from requests.get: {e}")

    if response is not None and response.status_code == 200:

        image_array = np.asarray(bytearray(response.content), dtype=np.uint8)

        image = cv2.imdecode(image_array, cv2.IMREAD_COLOR)

        return image

    else:

        # raise Exception(f"Failed to fetch image. Status code: {response.status_code}")

        return None

def upload_image_to_blob(clipped_image, sas_url):

    """Uploads the clipped image to Azure Blob Storage using its SAS URL."""

    _, encoded_image = cv2.imencode(".jpg", clipped_image)

    blob_client = BlobClient.from_blob_url(sas_url)

    blob_client.upload_blob(encoded_image.tobytes(), overwrite=True)

    # print("Clipped image uploaded successfully.")

def save_or_display(clipped_image, destination_file):

    cv2.imwrite(destination_file, clipped_image)

    cv2.imshow("Clipped Image", clipped_image)

    cv2.waitKey(0)

    cv2.destroyAllWindows()

def clip_image(image, bounding_box):

    # Extract bounding box parameters

    x, y, width, height = bounding_box

    # Clip the region using slicing

    clipped_image = image[y:y+height, x:x+width]

    return clipped_image

def prepare_json_string_for_load(text):
  # Best-effort normalization of a single-quoted pseudo-JSON string into parseable JSON,
  # e.g. {'x': 986} becomes {"x": 986}, with embedded newlines removed.

  text = text.replace("\"", "'")

  text = text.replace("{'", "{\"")

  text = text.replace("'}", "\"}")

  text = text.replace(" '", " \"")

  text = text.replace("' ", "\" ")

  text = text.replace(":'", ":\"")

  text = text.replace("':", "\":")

  text = text.replace(",'", ",\"")

  text = text.replace("',", "\",")

  return re.sub(r'\n\s*', '', text)

def to_string(bounding_box):

    return f"{bounding_box['x']},{bounding_box['y']},{bounding_box['w']},{bounding_box['h']}"

def is_duplicate_image(deduplicator, image):

    value = deduplicator.is_duplicate(image)

    return value

def is_visited(deduplicator, vector):

    value = deduplicator.is_visited(vector)

    return value

def is_existing(deduplicator, vector):

    start_time = time.time()

    value = deduplicator.is_existing(destination_client, vector)

    end_time = time.time()

    elapsed_time = end_time - start_time

    print(f"Elapsed time for is_existing: {elapsed_time:.3f} seconds")

    return value

@retry(stop=stop_after_attempt(5), wait=wait_fixed(60))

def upload(document):

    try:

        upload_results = destination_client.upload_documents([document])

        error = ','.join([upload_result.error_message for upload_result in upload_results if upload_result.error_message]).strip(",")

        if error:

            print(error)

    except HttpResponseError as e:

        print(f"Error from upload: {e}")

        raise

# Example usage

def shred(entry_id):

        source_file=entry_id

        source_sas_url = source_url_template.replace("{source_file}", source_file)

        print(entry_id)

        entry = search_client.get_document(key=entry_id) # , select=["id", "description"])

        id=entry['id']

        description_text=entry['description']

        tags = entry['tags']

        title = entry['title']

        description_json = None

        try:

            description_text = prepare_json_string_for_load(entry["description"]).replace('""','')

            description_json = json.loads(description_text)

        except Exception as e:

            print(description_text)

            print(f"{entry_id}: parsing error: {e}")

        if description_json is None:

            print("Description could not be parsed.")

            return

        if description_json and description_json["_data"] and description_json["_data"]["denseCaptionsResult"] and description_json["_data"]["denseCaptionsResult"]["values"]:

            objectid = 0

            for item in description_json["_data"]["denseCaptionsResult"]["values"]:

                objectid += 1

                if objectid == 1:
                    # the first dense caption describes the whole image; skip it
                    continue

                destination_file=source_file+f"-{objectid:04d}"

                destination_sas_url = destination_url_template.replace("{destination_file}", destination_file)

                box = item.get("boundingBox", None)

                print(f"{destination_file}: {box}")

                if box:

                    bounding_box = (box["x"], box["y"], box["w"], box["h"])

                    # Read image from Azure Blob

                    image = read_image_from_blob(source_sas_url)

                    if image is None:

                       print(f"{destination_file} not found.")

                       continue

                    # Clip image

                    clipped = clip_image(image, bounding_box)

                    # Upload clipped image to Azure Blob

                    upload_image_to_blob(clipped, destination_sas_url)

                    vector = vectorize_image(destination_sas_url, vision_api_key, vision_region)

                    vector = np.pad(vector, (0, 1536 - len(vector)), mode='constant')

                    print("checking existing")

                    if vector.any() and not is_existing(deduplicator, vector):

                        print(f"Match does not exist for {destination_file}.")

                    else:

                        print(f"Match exists for {destination_file}")

                else:

                    print("no objects detected")

for number in range(5412, 5413):

    entry_id = f"{number:06d}"

    shred(entry_id)

With the deduplicator.is_existing() method defined as:

import cv2

import imagehash

import numpy as np

from PIL import Image

from collections import deque

from azure.search.documents.models import (

    VectorizedQuery,

    VectorizableTextQuery

)

class ImageDeduplicator:

    def __init__(self, buffer_size=100):

        """Initialize a ring buffer for tracking image hashes."""

        self.buffer_size = buffer_size

        self.hash_buffer = deque(maxlen=buffer_size)

        self.vector_buffer = deque(maxlen=buffer_size)

    def compute_hash(self, image):

        """Compute perceptual hash of an image."""

        return imagehash.phash(Image.fromarray(image))

    def is_existing(self, external_vector_client, vector):
        vector_query = VectorizedQuery(vector=vector,
                                       k_nearest_neighbors=3,
                                       exhaustive=False,
                                       fields="vector")
        results = external_vector_client.search(
            search_text=None,
            vector_queries=[vector_query],
            select=["id", "description", "vector"],
            include_total_count=True,
            top=4
        )

        if results is not None and results.get_count() > 0:
            best = 0
            best_id = None
            match_ids = []
            # collect match ids during the single pass; the paged results cannot be iterated twice
            for match in results:
                match_ids.append(match["id"])
                match_vector = match["vector"]
                score = self.cosine_similarity(vector, match_vector)
                if score > best:
                    best_id = match["id"]
                    best = score
            print(f"matches: {','.join(match_ids)}")
            if best > 0.8:
                print(f"match found with score {best} for {best_id}.")
                return True
        else:
            print("no match found.")
        return False

    def get_hash_buffer_len(self):

        return len(self.hash_buffer)

    def get_vector_buffer_len(self):

        return len(self.vector_buffer)

    def cosine_similarity(self, vec1, vec2):

        """Computes cosine similarity between two vectors."""

        dot_product = np.dot(vec1, vec2)

        norm_vec1 = np.linalg.norm(vec1)

        norm_vec2 = np.linalg.norm(vec2)

        return dot_product / (norm_vec1 * norm_vec2)
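The is_duplicate() and is_visited() helpers called from the pipeline are not shown in the post; a minimal sketch consistent with the ring buffers in __init__, assuming a Hamming-distance threshold on the perceptual hash and the same cosine measure for vectors, could be:

    # Hypothetical completions for ImageDeduplicator (illustrative, not from the original):
    def is_duplicate(self, image, threshold=5):
        """True when a perceptually similar image was seen recently; remembers this one."""
        h = self.compute_hash(image)
        seen = any((h - prior) <= threshold for prior in self.hash_buffer)  # imagehash subtraction is Hamming distance
        self.hash_buffer.append(h)
        return seen

    def is_visited(self, vector, threshold=0.8):
        """True when a near-identical embedding was seen recently; remembers this one."""
        seen = any(self.cosine_similarity(vector, prior) > threshold for prior in self.vector_buffer)
        self.vector_buffer.append(vector)
        return seen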

And results as follows:

005412

005412-0002: {'x': 986, 'y': 49, 'w': 563, 'h': 526}

checking existing

000370-0002

001225-0002

002703-0002

match found with score 0.9856458102556909 for 000370-0002.

Elapsed time for is_existing: 0.607 seconds

Match exists for 005412-0002

005412-0003: {'x': 1363, 'y': 400, 'w': 422, 'h': 373}

checking existing

001784-0006

004981-0004

014676-0003

match found with score 0.9866765401858795 for 001784-0006.

Elapsed time for is_existing: 0.291 seconds

Match exists for 005412-0003

005412-0004: {'x': 0, 'y': 0, 'w': 1896, 'h': 1050}

checking existing

005412-0004

003169-0006

012227-0006

match found with score 0.9999997660907427 for 005412-0004.

Elapsed time for is_existing: 0.239 seconds

Match exists for 005412-0004

005412-0005: {'x': 1110, 'y': 705, 'w': 403, 'h': 363}

checking existing

005412-0005

004463-0007

004980-0008

match found with score 1.0000000000000002 for 005412-0005.

Elapsed time for is_existing: 0.310 seconds

Match exists for 005412-0005

005412-0006: {'x': 1279, 'y': 213, 'w': 77, 'h': 76}

checking existing

005412-0006

014698-0009

013267-0008

match found with score 1.0000000000000002 for 005412-0006.

Elapsed time for is_existing: 0.288 seconds

Match exists for 005412-0006

005412-0007: {'x': 266, 'y': 717, 'w': 69, 'h': 59}

checking existing

005412-0007

012227-0004

015072-0007

match found with score 1.0 for 005412-0007.

Elapsed time for is_existing: 0.314 seconds

Match exists for 005412-0007

005412-0008: {'x': 612, 'y': 441, 'w': 160, 'h': 184}

checking existing

005412-0008

004989-0009

001226-0003

match found with score 1.0 for 005412-0008.

Elapsed time for is_existing: 0.289 seconds

Match exists for 005412-0008

005412-0009: {'x': 775, 'y': 381, 'w': 68, 'h': 66}

checking existing

005412-0009

013213-0005

005416-0004

match found with score 0.9999997252673284 for 005412-0009.

Elapsed time for is_existing: 0.319 seconds

Match exists for 005412-0009

005412-0010: {'x': 4, 'y': 330, 'w': 76, 'h': 66}

checking existing

005412-0010

004464-0007

015072-0005

match found with score 1.0 for 005412-0010.

Elapsed time for is_existing: 0.269 seconds

Match exists for 005412-0010

At nearly 0.3 seconds per object existence check in the drone world catalog, with about ten objects per image and 17,533 images in a single drone tour, a full pass comes to 17533 * 10 * 0.3 / (60 * 60) ≈ 14.6 hours. Workload reduction is therefore called for: an image whose detected objects already match existing catalog objects at a rate of 20% or more can be discarded, unless a thresholded time-span since those objects were catalogued has been exceeded. A sketch of that gate follows.
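A minimal sketch of the early-discard gate described above, assuming hypothetical per-image match_count/object_count tallies and a last_seen timestamp for the newest matching catalog entry (the names are illustrative, not part of the pipeline above):

from datetime import datetime, timedelta

MATCH_RATIO_THRESHOLD = 0.2          # discard when >= 20% of objects are already catalogued
REFRESH_SPAN = timedelta(hours=24)   # unless the matching entries are older than this

def should_discard(match_count, object_count, last_seen, now=None):
    """True when an image adds too little new information to be worth processing."""
    now = now or datetime.utcnow()
    if object_count == 0:
        return False
    stale = last_seen is not None and (now - last_seen) > REFRESH_SPAN
    return (match_count / object_count) >= MATCH_RATIO_THRESHOLD and not stale

In the shred() loop above, match_count would be incremented each time is_existing() returns True, and the remaining objects of the image skipped once should_discard() fires.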

This same vector-match approach also works for comparisons across captures.

To generate a preview video, we could use something like:

import requests

# LOCATION and ACCOUNT_ID are the Video Indexer region and account id for the call below.
def get_preview_url(video_id, access_token):

    insights_url = f"https://api.videoindexer.ai/{LOCATION}/Accounts/{ACCOUNT_ID}/Videos/{video_id}/Index?accessToken={access_token}"

    response = requests.get(insights_url)

    insights = response.json()

    preview_url = insights.get('summarizedInsights', {}).get('previewUrl')

    return preview_url


Tuesday, June 17, 2025

 In the previous few articles, we talked about increasing the performance of a drone video sensing platform. Specifically, we called out two factors: 1. leverage the characteristics of aerial drone video to reduce the working set size from any capture when building a drone world catalog, and 2. defer much of the processing from video/image processing to analytics for any workload. These assertions are grounded in facts: consecutive drone images have a great deal of overlap, and the direction and pattern of flight have no bearing on how quickly and comprehensively the drone world catalog is populated, or on retrieving specific detected objects with high precision and recall. Also, the better the drone world catalog or knowledge base, the more inclusive the platform becomes for drone sensing applications in both spatial and temporal dimensions, with some of the work shifted away from repeated video processing in favor of specific analytical queries.

This approach makes the drone video sensing platform more flexible, open, and available to a diverse set of drone sensing applications. With performance-oriented design decisions following a trend that has traditionally worked for data management platforms of any kind, the platform can host and serve many applications from interested parties, reducing their overhead and allowing them to focus on their business cases. There will always be competition from mature, deep-pocketed companies seeking to own the vertical from video processing through analytics to end-user experience, but the industry is relatively new and growing, and many hardware vendors try to write their own software rather than letting a common software denominator add value while they focus on upgrades to device capabilities. So while LLMs continue to be fine-tuned and upgraded to handle much of the upfront work in classification, labeling, and tracking, our bet is that the data, rather than the re/processing, will prove more valuable and will demand best practices sooner or later, so it pays to plan for that upfront across devices, vendors, and LLMs. In fact, even to reduce workload with, say, video indexing, or to perform more analytics after the initial video processing, we embrace AI models and allow drone sensing application developers to be more expressive in their queries than they could be otherwise.

The requirements of drone sensing applications will differ from those of our platform by virtue of the specific business cases they target. The platform must pursue performance along each of these cases in a way that raises the bar for the platform as a common denominator across applications. This calls for a brief review of the various players in the industry today:

AeroVironment - top supplier to Defense - Arlington, VA

AmericanRobotics - fully automated - drone-in-a-box - Waltham, MA

AgEagle aerial systems - drone software for image analysis - Wichita, Kansas

Ascent Aerosystems - all weather UAVs - Wilmington, Massachusetts

Brinc drones - fly indoors beyond GPS range - Seattle, WA

Freefly Systems - high payload, filmography - Woodinville, WA

Harris Aerial - Endurance, long-range and payload - Orlando, FL

Hylio - autonomous swarm spraying for agriculture - Richmond, TX

Inspired Flight - modular, open-architecture drones for map/survey - San Luis Obispo, CA

RedCat+Teal+FlightWave - fast-deploy in dark or GPS-denied - Salt Lake City, Utah

Skydio - leader in autonomous flight, obstacle avoidance and hands-free operation - San Mateo, CA

SkyFish - 3D modeling of cell towers, bridges and power lines - Stevensville, Montana

Teledyne Flir - thermal imaging, cutting edge IR - Wilsonville, Oregon

Vantage Robotics - safest flights near crowds or stealth mission - San Leandro, California

As this list shows, companies are targeting differentiated use cases to provide viable commercial solutions and are subject to NDAA compliance and supply-chain constraints in their businesses. But as software, the drone video sensing platform has the unique opportunity to serve them all while offering the best of purview, audit, aging, and other data best practices.


Monday, June 16, 2025

 This is a continuation of the previous article on reducing the number of objects detected and catalogued from aerial drone images for optimum performance. One technique is to look up the vector store for a similar image and skip the new one unless its timestamp exceeds the time range of the current flight; a sketch of that time gate follows the sample below.

Sample:

import requests

import json

import sys

import os

import numpy as np

# Add the parent folder to the module search path

sys.path.insert(0, os.path.abspath(".."))

from visionprocessor.vectorizer import vectorize_image

# Azure AI Search configurations

search_endpoint = os.getenv("AZURE_SEARCH_SERVICE_ENDPOINT")

index_name = os.getenv("AZURE_SEARCH_INDEX_NAME")

search_api_key = os.getenv("AZURE_SEARCH_ADMIN_KEY")

vision_api_key = os.getenv("AZURE_AI_VISION_API_KEY")

# Query string for red cars

query_text = "Find red cars in drone images"

blob_url = "<BLOB_SAS_URL>"

vector = vectorize_image(blob_url, vision_api_key, "eastus")

vector = np.pad(vector, (0, 1536 - len(vector)), mode='constant')

# print(f"len={len(vector)}")

# Vector search payload

body = {

        "count": True,

        "select": "id,description,vector",

        "vectorQueries": [

            {

                "vector": vector.tolist(),

                "k": 5,

                "fields": "vector",

                "kind": "vector",

                "exhaustive": True

            }

        ]

    }

# Headers for Azure Search API

headers = {

    "Content-Type": "application/json",

    "api-key": search_api_key

}

# Send search request to Azure AI Search

response = requests.post(

    f"{search_endpoint}/indexes/{index_name}/docs/search?api-version=2024-07-01",

    headers=headers,

    data=json.dumps(body)

)

# Parse response

response.raise_for_status()

search_results = response.json()

print(len(search_results))

print(search_results)

ids = ",".join([item["id"] for item in search_results.get("value", [])]).strip(",")

print(ids)

# output:

# RedCar3: 015644,015643,012669,008812,011600

# RedCar4: 014076,014075,014077,014074,014543

# Count occurrences of "red car" in descriptions

red_car_count = sum(1 for item in search_results.get("value", []) if "red car" in item["description"].lower())

print(f"Total red cars found in drone images: {red_car_count}")

Reference: previous article: https://1drv.ms/w/c/d609fb70e39b65c8/EVdJ7oJaqFFAvkx9udkFX1UBC0KcZkrPJU6k5yTdwcZlNg?e=LR1SYf


Sunday, June 15, 2025

 This is a summary of the book titled “Beyond No: Harnessing the Power of Resistance for Positive Organizational Growth”, written by Erik Nagel and published by Wiley in 2025. The word “No” is as much workplace parlance for resistance as any other jargon, and perhaps the most unambiguous; yet one must understand the message, address the problems, and seek solutions, contends the author. While resistance can be collective, as when a workforce pushes back to get management's attention or lodge a protest, it behooves the receiver, such as management, to cope with the recalcitrance, learn from it, manage it, and respond effectively. The author cites common forms of resistance and offers a path forward to both parties. 

Resistance in Everyday Management 

Managers regularly encounter resistance, whether passive (employees ignoring changes) or aggressive (labor strikes). Resistance isn’t confined to lower-ranking employees; managers themselves may push back against corporate decisions. 

Seeking Compromises in Resistance 

The book discusses real-world examples of handling resistance, such as a food sector manager negotiating incremental raises for an underpaid employee. However, not all conflicts resolve positively—stubborn resistance can sometimes lead to termination. 

The Challenge of Covert Resistance 

Silent pushback—like employees subtly undermining company culture—can be trickier to detect than vocal complaints. Historical examples, like slow-down strikes or discreet sabotage, illustrate how covert resistance manifests in different industries. 

Hierarchical Structures and Resistance Suppression 

Organizations with rigid structures, like consulting firms, often mitigate resistance by setting clear expectations. Employees at Magnum Consulting, for example, knowingly accept grueling work conditions due to high pay and prestige. 

Common Myths About Resistance 

Nagel debunks managerial misconceptions about resistance, such as assuming employees are inherently lazy or afraid of change. These myths absolve leaders of responsibility for engaging with employees and addressing legitimate concerns. 

Leadership and Resistance 

Executives who dictate change without employee input often spark resistance. The book warns against a “top-down thinking” mentality, advocating for collaborative leadership where managers acknowledge front-line insights. 

Strategies for Addressing Resistance 

Nagel outlines four key steps for leaders: 

  1. Take accountability – Don't scapegoat employees; engage with their concerns. 

  2. Loosen control – Employees resist more when they feel powerless. 

  3. Encourage debate – Welcoming dissent leads to more effective change. 

  4. Stay open to being wrong – Leaders must listen and adapt rather than impose rigid strategies. 

This book offers a nuanced approach to resistance, urging leaders to harness workplace pushback as a tool for growth rather than viewing it as a disruption.