This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Reasoning-driven large language models in medicine: opportunities, challenges, and the road ahead
Citations: 0
Authors: 18
Year: 2026
Abstract
Developments in large language models (LLMs) in the past 2 years have shifted the focus from text, image, and audio generation to LLMs capable of multistep reasoning (thinking). The development of LLMs is particularly important for medicine and health care, but the translation of these models has been limited by the black-box nature of previous LLMs. New reasoning-driven LLMs incorporate chain-of-thought prompting and reveal intermediate reasoning steps, offering transparency and traceability, potentially improving the clinical adoption and utility of LLMs. In this Viewpoint, we examine four emerging reasoning-driven LLMs, namely OpenAI's o1 and o3-mini, Google's Gemini 2.0 Flash Thinking, and DeepSeek R1. We compare their methodological approaches, benchmark their performance on medical question-answering tasks, and assess their potential for clinical integration. We highlight both opportunities and challenges associated with deploying reasoning-driven LLMs. Key future considerations include real-world validation, rigorous benchmarking with ethical safeguards, and advancements in improving the efficiency and sustainability of reasoning-driven LLMs. Addressing these challenges will enable the fine-tuning of these LLMs for specific medical applications, enhancing their potential for clinical decision support, patient education, medical training, and evidence synthesis.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
Institutions
- Beihang University (CN)
- National University of Singapore (SG)
- Singapore National Eye Center (SG)
- Singapore Eye Research Institute (SG)
- Beijing Tsinghua Chang Gung Hospital (CN)
- Tsinghua University (CN)
- Duke-NUS Medical School (SG)
- Centre Hospitalier de l’Université de Montréal (CA)
- Ludwig Boltzmann Institute Applied Diagnostics (AT)
- Wellcome Centre for Ethics and Humanities (GB)
- Department of Health and Social Care (GB)
- Mills Peninsula Health Services (US)
- Shanghai Jiao Tong University (CN)
- Yale University (US)