Healthcare AI

Investment / Stage 1: Seed Partnerships

Proactively Addressing Ethical Challenges

Illustration: Eric Nyquist

    Healthcare organizations are increasingly interested in how artificial intelligence (AI) tools can be used to improve care, ease clinicians’ workloads, and increase efficiency. However, existing uses of AI in medicine show that significant ethical problems can arise, including bias (uneven model performance across patient groups), unintended use, human users’ over-reliance on model output, and questions about informed consent and privacy. Three-quarters of adults in the U.S. worry that AI tools will be deployed too quickly in healthcare, before risks are adequately assessed, and 60% feel uncomfortable about AI being used in their own care.

    To address these challenges, the Healthcare Ethical Assessment Lab for Artificial Intelligence (HEAL-AI) is developing and testing a rigorous, yet rapid and practicable, process that healthcare organizations can use to proactively address ethical challenges with AI tools. We seek to empower other organizations, including smaller and lower-resourced institutions, to strengthen their governance of AI and avert harm to patients and clinicians.

    Co-funded by a Stage 1 investment from Stanford Impact Labs and support from the Patient-Centered Outcomes Research Institute (PCORI), HEAL-AI is a partnership between Stanford University researchers and Stanford Health Care (SHC), a multi-hospital system. For AI tools proposed for use at SHC facilities, HEAL-AI conducts ethical assessments and contributes findings to a broader SHC evaluation that also examines a given tool’s usefulness, fairness, and financial implications. 

    Our team explores ethical issues in collaboration with a co-learning community of patient representatives, through consultation with AI ethics experts, and through qualitative interviews with tool developers, the hospital teams planning how tools will be integrated into care processes, and the clinicians who will use them. In addition to making recommendations to SHC, our team is working to identify categories of tools that present similar ethical issues and to develop a “playbook” to guide others through ethical assessment of these different types.

    Because there is strong demand among healthcare organizations and policymakers for robust, practicable, evidence-based AI governance mechanisms, our project has the potential to meaningfully change the arc of AI adoption in healthcare.

    Alison Callahan

    Clinical Data Scientist, Stanford Health Care; Research Scientist, Stanford School of Medicine

    Danton Char

    Associate Professor, Stanford School of Medicine

    N. Lance Downing

    Clinical Assistant Professor, Stanford School of Medicine

    Michelle Mello

    Professor, Stanford Law School and School of Medicine

    Nigam Shah

    Chief Data Scientist, Stanford Health Care