Healthcare AI
Proactively Addressing Ethical Challenges
Healthcare organizations are increasingly interested in how artificial intelligence (AI) tools can be used to improve care, ease clinicians’ workloads, and increase efficiency. However, existing uses of AI in medicine show that significant ethical problems can arise. These include bias (uneven model performance across patient groups), unintended use, human users’ over-reliance on model output, and questions about informed consent and privacy. Three-quarters of adults in the U.S. worry that AI tools will be deployed too quickly in healthcare, before risks are adequately assessed, and 60% feel uncomfortable about AI being used in their own care.
To address these challenges, the Healthcare Ethical Assessment Lab for Artificial Intelligence (HEAL-AI) is developing and testing a rigorous yet rapid and practicable process that healthcare organizations can use to proactively address the ethical challenges posed by AI tools. We seek to empower other organizations, including smaller and lower-resourced institutions, to strengthen their AI governance and avert harm to patients and clinicians.
Co-funded by a Stage 1 investment from Stanford Impact Labs and by the Patient-Centered Outcomes Research Institute (PCORI), HEAL-AI is a partnership between Stanford University researchers and Stanford Health Care (SHC), a multi-hospital system. For AI tools proposed for use at SHC facilities, HEAL-AI conducts ethical assessments and contributes its findings to a broader SHC evaluation that also examines a given tool’s usefulness, fairness, and financial implications.
Our team explores ethical issues in collaboration with a co-learning community of patient representatives, through consultation with AI ethics experts, and through qualitative interviews with tool developers, hospital teams planning how tools will be integrated into care processes, and clinicians who will use the tools. In addition to making recommendations to SHC, our team is working to identify categories of tools that present similar ethical issues and to develop a “playbook” that guides others through ethical assessment of each type.
Because there is strong demand among healthcare organizations and policymakers for robust, practicable, evidence-based AI governance mechanisms, our project has the potential to meaningfully change the arc of AI adoption in healthcare.
Team
- Clinical Data Scientist, Stanford Health Care; Research Scientist, Stanford School of Medicine
- Associate Professor, Stanford School of Medicine
- Clinical Assistant Professor, Stanford School of Medicine
- Research Scientist, Stanford School of Medicine
- Postdoctoral Fellow, Stanford School of Medicine
- Professor, Stanford Law School and Stanford School of Medicine
- Chief Data Scientist, Stanford Health Care
- Postdoctoral Fellow, Stanford School of Medicine
Related links
- Standing on FURM Ground: A Framework for Evaluating Fair, Useful, and Reliable AI Models in Health Care Systems [NEJM Catalyst, September 18, 2024]
- Denial—Artificial Intelligence Tools and Health Insurance Coverage Decisions [JAMA Network, March 7, 2024]
- Michelle Mello Testifies Before U.S. Senate on AI in Health Care [Stanford Health Policy News, February 8, 2024]
- President Biden’s Executive Order on Artificial Intelligence—Implications for Health Care Organizations [Stanford Health Policy News, November 30, 2023]
- Webinar, “Ethics of AI in Healthcare,” hosted by The American Journal of Bioethics