PhD Proposal: Towards Safety and Trust in Large Language Models in Medicine

Talk
Yifan Yang
Time: 
12.02.2024 11:30 to 13:00
Location: 
IRB-4145

Abstract:

Large Language Models (LLMs) have recently gained significant attention in the medical field for their human-level capabilities, sparking considerable interest in their potential applications across healthcare. Along with proposing guidelines and conducting reviews for these applications, we also devote effort to applying LLMs in medicine, including matching patients to clinical trials, augmenting LLMs with domain-specific tools for improved access to biomedical information, and empowering language agents for risk prediction through large-scale clinical tool learning.

Despite their promise, real-world adoption faces critical challenges, with risks in practical settings that have not been systematically characterized. In this proposal, we identify and quantify biases in LLM-generated medical reports, specifically uncovering disparities affecting patients of different racial backgrounds. Using real-world patient data, we further show that both open-source and proprietary LLMs can be manipulated across multiple tasks, underscoring the need for rigorous evaluation. To address these challenges, we propose five core principles for safe and trustworthy medical AI (Truthfulness, Resilience, Fairness, Robustness, and Privacy), along with ten specific evaluative criteria. Under this framework, we introduce a comprehensive benchmark featuring 1,000 expert-verified questions to rigorously assess LLM performance in sensitive clinical contexts.

Through these efforts, we present existing results and propose future research directions aimed at ensuring that LLMs in healthcare are both safe and trustworthy.