This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Can large language models’ clinical decision-making match human consensus on when to perform a kidney biopsy? (Preprint)
Citations: 0
Authors: 5
Year: 2025
Abstract
BACKGROUND: Artificial intelligence (AI) and large language models (LLMs) are increasing in sophistication and have become integrated into many industries. The potential for LLMs to augment clinical decision-making is an evolving area of research.

OBJECTIVE: This study compared the responses of over 1,000 kidney specialist physicians (nephrologists) with the outputs of commonly used LLMs on a questionnaire determining when a kidney biopsy should be performed.

METHODS: The research group fielded a large online questionnaire for nephrologists on when a kidney biopsy should be performed. The same questions were put to both human doctors and LLMs in identical order. Eight LLMs were interrogated: ChatGPT 3.5, Mistral (Hugging Face), Perplexity, Microsoft Copilot, Llama 2, GPT-4.0, MedLM, and Claude 3. The most common response given by clinicians (the human mode) to each question was taken as the baseline for comparison. Questionnaire responses generated a score reflecting biopsy propensity.

RESULTS: ChatGPT 3.5 and GPT-4.0 had the highest levels of agreement, selecting the human mode in 6 of 11 questions and producing propensity scores similar to the human mode's. Llama 2 and Microsoft Copilot produced similar propensity scores but agreed with the human mode less often.

CONCLUSIONS: LLM outputs were able to replicate human clinical decision-making in this study; however, performance varied widely between models. Questions with more uniform human responses produced LLM outputs with greater alignment, whereas questions with low human consensus showed poor output alignment. This may limit the practical use of LLMs in real-world clinical practice.
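The comparison described in the methods (taking the modal clinician answer per question as the baseline, then scoring an LLM by its agreement with that mode) can be sketched in a few lines. This is a minimal illustration with invented toy data; the actual questions, response scales, and propensity-score formula are not specified on this page, so the sum-based score below is an assumption for demonstration only.

```python
from statistics import mode

# Toy data: per-question clinician answers (1 = biopsy, 0 = no biopsy).
# All values here are hypothetical, not taken from the study.
human_responses = {
    "Q1": [1, 1, 0, 1],
    "Q2": [0, 0, 1, 0],
    "Q3": [1, 1, 1, 0],
}
# One LLM's answers to the same questions, in the same order.
llm_responses = {"Q1": 1, "Q2": 0, "Q3": 0}

# Baseline: the most common clinician response per question ("human mode").
human_mode = {q: mode(answers) for q, answers in human_responses.items()}

# Agreement: how many questions the LLM answered the same as the human mode.
agreement = sum(llm_responses[q] == human_mode[q] for q in human_mode)

# Assumed propensity score: total count of "biopsy" answers.
propensity_llm = sum(llm_responses.values())
propensity_mode = sum(human_mode.values())

print(f"agreement: {agreement}/{len(human_mode)}")
print(f"propensity (LLM vs human mode): {propensity_llm} vs {propensity_mode}")
```

With the toy data above, the LLM matches the human mode on two of three questions while showing a lower biopsy propensity, mirroring the kind of comparison the abstract reports for each model.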
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations