Research

Building Consensus for Responsible AI in Healthcare

Article / The American Journal of Bioethics, 2025

Authors: Matthew Elmore, Michelle M. Mello, Lisa Lehmann, Michael Pencina, Danton Char, Merage Ghane, Lucy Orr-Ewing, Brian Anderson, and Nicoleta J. Economou-Zavlanos.

Abstract: Although AI in healthcare depends on collaboration across disciplines, those involved in its development and implementation often operate in silos, with limited insight into one another’s practices. A shared framework is therefore important for harmonizing best practices across a growing range of specialties, interests, and concerns. Responsible AI entails a common understanding of both ethics and quality management principles, and it requires a shared translation of those principles into practical, transparent approaches to evaluating AI systems. This article examines the goals, challenges, and strategies for building consensus around AI guidelines in healthcare, drawing from the early experience of the Coalition for Health AI (CHAI). From 2023 to 2024, CHAI led a yearlong consensus-building initiative to develop the Responsible AI Guide—a framework translating high-level principles into concrete recommendations (Elmore et al., forthcoming). The effort drew survey insights from people across more than 100 organizations and convened over 60 experts for deliberation—including clinicians, patient advocates, regulators, data scientists, and healthcare administrators. These activities not only served to construct the Guide; they underscored the importance of inclusive, iterative consensus-building as the foundation of trustworthy AI in healthcare. In the absence of clear regulation, institution-level coalitions like CHAI play a vital role. By turning collective expertise into actionable recommendations, consensus-based frameworks can support accountability and continuous adaptation as AI and its care contexts evolve.

Read the full article in The American Journal of Bioethics, Volume 25, Issue 10.