This is a summary of the book titled “Applying AI in Learning and Development: From Platforms to Performance” written by Josh Cavalier and published by ATD (Association for Talent Development) in 2025. This book examines how learning and development (L&D) professionals can use artificial intelligence thoughtfully to improve both learning efficiency and organizational performance. Rather than presenting AI as a replacement for human expertise, the book positions it as a partner that can handle routine, data-intensive tasks while allowing L&D professionals to focus on strategy, analysis, and design.
Cavalier begins by showing how AI can streamline common instructional design activities. Tasks such as transcribing interviews, summarizing discussions, or generating draft materials—once time-consuming—can be completed quickly with AI support. As these efficiencies increase, the role of the L&D professional evolves. The book introduces the idea of the human–machine performance analyst (HMPA), a role in which practitioners use judgment, contextual knowledge, and empathy to interpret data and shape learning interventions, while AI supports content creation and analysis. An example illustrates this shift: when compliance incidents continued despite high course completion rates, an L&D professional used AI-generated data as a starting point but relied on interviews and observation to identify the real issue—irrelevant training. Redesigning the program into role-specific scenarios led to a measurable reduction in incidents.
Throughout the book, he emphasizes that the core skills of L&D—understanding how people learn, connecting learning to performance, and aligning learning with business outcomes—remain unchanged. What has changed is the set of tools available and the scope of influence L&D can have across an organization. He encourages teams to begin experimenting with AI in small, low-risk ways, such as using meeting assistants to capture action items or deploying chatbots to answer frequently asked learner questions. Progress should be tracked, lessons documented, and experimentation treated as part of normal professional growth rather than a one-time initiative.
A significant portion of the book focuses on assessing an organization’s current relationship with AI. He outlines several common patterns, ranging from informal individual experimentation to full organizational integration. In some organizations, employees use external AI tools without guidance, increasing the risk of data exposure. Others hesitate to act at all, stalled by concerns about privacy, bias, or regulation. Still others implement AI unevenly, creating silos where some departments benefit while others are left behind. The most mature organizations, by contrast, provide approved tools, clear policies, and role-specific training that allow AI to be used consistently and responsibly. Understanding where an organization falls along this spectrum helps L&D leaders determine realistic next steps.
From there, the book argues that successful AI adoption depends less on choosing a particular tool and more on establishing a strong foundation. AI initiatives should be explicitly tied to business goals such as faster onboarding, improved compliance, or better customer service, with clear explanations of how time or costs will be saved. Small pilots and case studies can demonstrate value and reduce resistance, especially when results are communicated through concrete comparisons rather than abstract claims.
He places strong emphasis on governance. While many L\&D professionals already experiment with AI, far fewer feel confident about using it ethically. An effective AI policy, he argues, must address data privacy, security, regulatory compliance, and bias. Policies should specify which tools are approved, what information can be shared with them, and where human review is required. The book uses the well-known example of Amazon’s abandoned résumé-screening system to illustrate how biased training data can produce discriminatory outcomes. To mitigate these risks, he recommends close collaboration with legal, HR, and cybersecurity teams, as well as processes that allow learners and employees to flag AI-generated content for review.
When it comes to technology selection, the book encourages L\&D leaders to advocate for platforms that support both learning and broader business needs. Many organizations are moving away from standalone learning management systems toward integrated human capital management platforms that combine learning, skills tracking, performance management, and workforce planning. He suggests defining what the organization wants AI to accomplish over the next six to twelve months and evaluating vendors against practical criteria such as transparency, system integration, usability, analytics, scalability, support, security, and return on investment. The goal is not to adopt the most advanced system available, but to choose the one that fits the organization’s context and constraints.
The book also provides detailed guidance on working effectively with generative AI. Cavalier stresses that output quality depends heavily on prompt quality. Clear instructions, explicit constraints, and well-defined criteria produce more useful results than vague requests. He recommends treating prompts as reusable assets by developing templates and maintaining a shared prompt library that documents use cases, tested models, and variations. Chaining prompts within a single session—moving from objectives to outlines to scripts, for example—can also improve coherence. Despite these efficiencies, the book repeatedly underscores the importance of human oversight to ensure accuracy, relevance, and alignment with learning goals.
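To make the prompt-library and chaining ideas concrete, here is a minimal sketch of what such a workflow might look like in code. This is an illustration, not the book's own implementation: the `generate()` function is a placeholder for whatever model API a team actually uses, and the template names and fields are assumptions made up for this example.

```python
# A minimal sketch of a shared prompt library plus a chained workflow
# (objectives -> outline -> script). generate() is a stub standing in for
# a real LLM call; swap in your provider's client. All template names and
# fields here are illustrative assumptions, not taken from the book.

PROMPT_LIBRARY = {
    "objectives": (
        "You are an instructional designer. Write three measurable learning "
        "objectives for a course on {topic} aimed at {audience}."
    ),
    "outline": (
        "Using these objectives:\n{objectives}\n"
        "Draft a module outline that addresses each objective."
    ),
    "script": (
        "Expand this outline into a narration script:\n{outline}"
    ),
}

def generate(prompt: str) -> str:
    """Placeholder for a real model call; returns a stub string here."""
    return f"[model output for: {prompt[:40]}...]"

def run_chain(topic: str, audience: str) -> dict:
    """Chain prompts so each step's output becomes the next step's input."""
    objectives = generate(
        PROMPT_LIBRARY["objectives"].format(topic=topic, audience=audience)
    )
    outline = generate(PROMPT_LIBRARY["outline"].format(objectives=objectives))
    script = generate(PROMPT_LIBRARY["script"].format(outline=outline))
    return {"objectives": objectives, "outline": outline, "script": script}

drafts = run_chain("data privacy", "new hires")
```

The point of the structure is the one Cavalier makes: prompts live in a documented, shared library rather than in individual chat histories, and each chained step carries forward the prior output so the final script stays coherent with the original objectives. Human review of each draft would still sit between the steps in practice.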
In its final section, the book explores the use of AI agents to personalize learning at scale. Unlike traditional automated systems, these agents can reason, adapt, and make recommendations based on learner data, such as skill gaps, goals, and performance trends. Examples show how personalized recommendations can increase engagement and motivation. However, he is careful to frame AI agents as collaborators rather than autonomous decision-makers. He advocates for models in which AI proposes learning paths or resources, while human coaches or managers remain involved in reflection and decision-making. Implementing these systems requires careful attention to data quality, accessibility, integration with existing platforms, and iterative testing with pilot groups.
Overall, Applying AI in Learning and Development presents AI not as a disruptive force to be feared or a shortcut to be exploited, but as a tool that amplifies the strategic role of L&D. By combining experimentation with governance, efficiency with human judgment, and technology with organizational context, he argues that L&D professionals can use AI to deliver learning that is both more personalized and more closely tied to real performance outcomes.