Wednesday, April 1, 2026

This is a summary of the book “Between You and AI: Unlock the Power of Human Skills to Thrive in an AI-Driven World” by Andrea Iorio, published by Wiley in 2025. The book argues that the most durable advantage in an AI-saturated workplace comes from combining machine efficiency with distinctly human judgment.

Iorio frames AI as a powerful accelerator of structured work—searching, summarizing, classifying, drafting, and pattern-finding—but he cautions that automation alone rarely differentiates a person or an organization. He points to forecasts that a substantial share of work may be automated and notes that the winners will be those who use AI to amplify what machines do not supply on their own: meaning, context, relationships, ethical reflection, and creative reframing.

Iorio argues for a “hybrid” skill set: delegating well-bounded tasks to AI while strengthening emotional intelligence, critical thinking, and creativity. “The way forward is not about choosing between AI and human expertise — it is about integrating both into a new hybrid set of skills that leverages the best of each.” To illustrate, he revisits IBM Deep Blue’s 1997 defeat of world chess champion Garry Kasparov and the subsequent rise of “advanced chess,” in which human players use AI analysis but still shape strategy and decide when to depart from the model’s suggestions.

He extends that example to everyday business decisions. At Nubank, for instance, customer service representatives work with an AI co-pilot that offers real-time suggestions. The system improves speed and consistency, while the human agent contributes empathy and situational awareness—qualities that matter when someone is frustrated, confused, or dealing with a sensitive issue.

Because AI can surface information that once required years of specialized study, Iorio argues that advantage increasingly comes from knowing how to work with these systems, not from memorizing what they can retrieve. Lawyers, for example, command high fees because they spend years mastering the law, yet models such as GPT-4 can now pass bar exams and explain legal matters, such as data privacy policies, to laypeople. That does not mean human lawyers or other experts are going away. But in a survey by Iorio and his team, nearly 60% of leaders said they would rather collaborate with someone who uses AI well to find and synthesize answers than with a strong expert who does not use AI. In that sense, prompting becomes a practical craft: “The more thought you put into your prompt from the start, the more time and productivity you will save later.”

When Iorio turns to prompt design, his guidance is straightforward: be deliberate about the role you want the system to play, the specificity of the question, the context that shapes what “good” looks like, and the format you need back. Instead of asking for a generic report, you might ask the model to respond as a consultant, define the industry and constraints, describe the audience, and request an output structure that you can review and refine.
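
To make that concrete, here is a minimal Python sketch of how those four elements, the role, the specific task, the decision-shaping context, and the output format, might be assembled into one prompt. The helper function and the example values are illustrative inventions, not taken from the book.

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a prompt from the four elements Iorio highlights:
    role, a specific task, decision-shaping context, and the
    format the answer should come back in."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond in this format: {output_format}"
    )

# Hypothetical example: a generic "write me a report" request,
# upgraded with role, constraints, audience, and structure.
prompt = build_prompt(
    role="a retail strategy consultant",
    task="assess whether we should expand curbside pickup to 50 more stores",
    context=("mid-size US grocery chain, 200 stores, thin margins; "
             "the audience is a non-technical executive team"),
    output_format="a one-paragraph recommendation followed by 3 bullet-point risks",
)
print(prompt)
```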

From there, the book emphasizes what Iorio calls “data sensemaking.” AI can process huge volumes of information, detect patterns, and generate predictions, but it cannot decide what matters most in a particular environment. Sensemaking means choosing the questions worth asking, defining indicators that connect to real decisions, and interpreting outputs in light of goals, constraints, and lived experience. It also includes actively looking for surprising relationships in the data, distinguishing vanity metrics from signals that should change priorities, and connecting past performance to leading indicators that hint at where the market is moving.
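
The vanity-metric distinction is easy to illustrate. In the hypothetical sketch below (metric names and numbers are invented, not from the book), cumulative signups can only rise, while weekly active usage, a leading indicator tied to retention decisions, can fall and flag a change in priorities:

```python
# Hypothetical weekly data: cumulative signups vs. weekly active users.
weeks = ["W1", "W2", "W3", "W4"]
cumulative_signups = [1000, 1400, 1700, 1900]   # vanity metric: can only go up
weekly_active_users = [600, 640, 590, 510]      # leading indicator: tied to retention

# Week-over-week change in the leading indicator is what should drive
# decisions; the ever-rising cumulative curve hides the decline.
for prev, curr, week in zip(weekly_active_users, weekly_active_users[1:], weeks[1:]):
    change = (curr - prev) / prev
    flag = "  <- investigate" if change < -0.05 else ""
    print(f"{week}: active users {curr} ({change:+.1%}){flag}")
```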

Sensemaking also requires skepticism about where outputs come from and how far they generalize. Iorio notes that models can overreach when information is thin or when training data reflects historical bias. The remedy he recommends is continuous review: checking that data is current, confirming it represents the populations affected by the decision, documenting known limitations, and building human review into workflows, especially where the stakes are high.
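
Read as a checklist, that review loop is straightforward to encode. The following is a minimal sketch under assumptions of my own, with hypothetical field names and thresholds, rather than anything prescribed in the book:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetAudit:
    """Checklist in the spirit of Iorio's continuous review: data
    freshness, representation of affected populations, and
    documented limitations."""
    last_refreshed: date
    group_shares: dict                      # share of each affected group in the data
    known_limitations: list = field(default_factory=list)

    def issues(self, today: date, min_share: float = 0.05) -> list:
        found = []
        if (today - self.last_refreshed).days > 180:
            found.append("data may be stale (not refreshed in ~6 months)")
        for group, share in self.group_shares.items():
            if share < min_share:
                found.append(f"group '{group}' underrepresented ({share:.0%})")
        if not self.known_limitations:
            found.append("no documented limitations; document them before use")
        return found

audit = DatasetAudit(
    last_refreshed=date(2025, 1, 15),
    group_shares={"region_a": 0.70, "region_b": 0.28, "region_c": 0.02},
    known_limitations=["sparse coverage before 2020"],
)
print(audit.issues(today=date(2026, 4, 1)))
# ['data may be stale (not refreshed in ~6 months)',
#  "group 'region_c' underrepresented (2%)"]
```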

Another theme is “reperception,” Iorio’s term for deliberately letting go of inherited assumptions to make room for new possibilities. He describes common cognitive traps—such as seeking only confirming evidence, getting overwhelmed by abundant information, defaulting to familiar “safe” strategies, and mistaking slow early progress for a sign that change will never accelerate. In practice, reperception can look like intentionally exposing yourself to viewpoints outside your usual feed, using frameworks to narrow attention to what is truly decision-relevant, and regularly posing questions that challenge what you take for granted.

To show how a single “impossible” question can reopen a problem, Iorio retells the story of Edwin Land being asked by his young daughter why she could not see a photo immediately, a moment that helped spur the invention of instant photography. He pairs that mindset shift with adaptability: spotting emerging trend curves early and acting on what you learn. John Deere, for example, moved beyond selling equipment toward using sensors and AI to give farmers guidance on planting and yield, expanding into software and services rather than relying only on its historical product line.

Iorio then draws on the concept of “antifragility”: not merely withstanding shocks, but improving because of them. Citing research on decades of innovative projects, he argues that failure is a common feature of eventual success when teams extract lessons quickly and apply them to the next iteration. AI, in his view, can lower the cost of learning by helping prevent routine errors through automation, manage unavoidable risks through prediction and monitoring, and accelerate experimentation by analyzing patterns across large sets of past failures.

He highlights how simulation and pattern analysis can compress feedback loops. Automotive firms that once relied on a limited number of expensive physical crash tests can now run many virtual scenarios, learn faster, and refine designs earlier.

In a different domain, plant-based food developer NotCo built a proprietary AI, “Giuseppe,” that analyzes the texture, structure, and flavor properties of some 300,000 potential ingredients and suggests recipes for vegan products, which human teams then test and adjust. Some of Giuseppe’s ideas seem unusual, such as using pineapple and cauliflower in a plant-based milk, but the system has let NotCo generate and test new products, including a vegan custard for Shake Shack, in far less time than traditional approaches required.

A later section focuses on trust. Iorio notes that people are often wary of AI in sensitive settings such as healthcare: a 2023 Pew Research Center study found that 60% of Americans say they would be uncomfortable with their medical providers relying on AI. Yet he argues the technology can improve detection and treatment, and that the value of such tools depends on making their use understandable and accountable to the people affected.

He illustrates with multiple sclerosis (MS), a complex condition with no single definitive test. People with MS may experience a variety of symptoms, including blurry vision and difficulty walking, and often visit a different specialist for each problem. Research published in a 2023 issue of the International Journal of MS Care found that AI can search for patterns across health records and identify signs of MS that individual doctors might overlook. Researchers at University College London, meanwhile, used an AI tool called MindGlide to detect patterns in MS patients’ MRI scans and, in a matter of seconds, point to the treatment plans most likely to be effective.

He returns repeatedly to the “black box” problem: when a model produces an output that neither users nor even developers can readily explain, organizations may not be able to justify decisions or detect errors. For regulated decisions—such as credit and lending—he points to the importance of transparency and “explainable AI,” meaning systems and processes that allow humans to trace the logic, challenge results, and correct them when necessary.
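
One common way to meet that bar in credit decisions is “reason codes” from an interpretable scoring model; the book does not prescribe a specific mechanism, so the sketch below is a generic illustration with invented features and weights. Each feature’s contribution to a linear score is explicit, and the largest negative contributions are surfaced as reasons a reviewer can trace, challenge, and correct.

```python
# Hypothetical interpretable credit score: features and weights are invented.
weights = {
    "payment_history": 2.0,      # positive values help the applicant
    "credit_utilization": -1.5,  # higher utilization lowers the score
    "account_age_years": 0.3,
    "recent_inquiries": -0.8,
}

def score_with_reasons(applicant: dict, threshold: float = 1.0):
    """Return (approved, score, reasons): each feature's contribution is
    weight * value, so a reviewer can trace exactly why a score moved."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    total = sum(contributions.values())
    # Reason codes: the features that pulled the score down the most.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return total >= threshold, total, reasons

applicant = {
    "payment_history": 0.9,      # 90% on-time payments
    "credit_utilization": 0.8,   # 80% of available credit used
    "account_age_years": 2.0,
    "recent_inquiries": 3.0,
}
approved, total, reasons = score_with_reasons(applicant)
print(f"approved={approved}, score={total:.2f}, top adverse factors={reasons}")
# approved=False, score=-1.20, top adverse factors=['recent_inquiries', 'credit_utilization']
```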

Finally, Iorio argues that responsibility cannot be delegated to a tool. Using the 2018 fatal crash involving an Uber self-driving vehicle as an example, he shows how accountability tends to fall back on humans and organizations even when automated systems are involved. “AI can execute, but it cannot care… it cannot be held responsible.” For that reason, he recommends defining who owns AI-assisted decisions, building checkpoints for human review, and testing outputs against organizational values so that efficiency does not override fairness, safety, or long-term trust.
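
Those recommendations map naturally onto a workflow gate. As a rough sketch (the names and stake levels here are invented, not the book’s), an AI-assisted decision might require a named owner and explicit human sign-off before any high-stakes action executes:

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    description: str
    owner: str              # the named human accountable for the outcome
    stakes: str             # "low" or "high"
    ai_recommendation: str
    human_approved: bool = False

def execute(decision: AIDecision) -> str:
    # Accountability stays with a person: no owner, no execution.
    if not decision.owner:
        raise ValueError("every AI-assisted decision needs a named owner")
    # High-stakes paths require an explicit human checkpoint.
    if decision.stakes == "high" and not decision.human_approved:
        return f"held for review by {decision.owner}"
    return f"executed: {decision.ai_recommendation}"

loan = AIDecision(
    description="deny loan application #1234",
    owner="credit-review lead",
    stakes="high",
    ai_recommendation="deny",
)
print(execute(loan))        # held for review by credit-review lead
loan.human_approved = True
print(execute(loan))        # executed: deny
```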

Andrea Iorio hosts the Metanoia Lab podcast and NVIDIA’s Vem AI podcast in Brazil. He is an MBA professor at Fundação Dom Cabral, a columnist for MIT Technology Review Brazil, and a frequent speaker on AI and leadership.