This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Beyond overconfidence: Embedding curiosity and humility for ethical medical AI
Citations: 1
Authors: 14
Year: 2026
Abstract
Contemporary medical AI systems exhibit a critical vulnerability: they deliver confident predictions without mechanisms to express uncertainty or acknowledge limitations, leading to dangerous overreliance in clinical settings. This paper introduces the BODHI (Bridging, Open, Discerning, Humble, Inquiring) framework, a dual-reflective architecture grounded in two essential epistemic virtues, curiosity and humility, as foundational design principles for healthcare AI. Curiosity drives systems to actively explore diagnostic uncertainty, seek additional information when faced with ambiguous presentations, and recognize when training distributions fail to match clinical reality. Humility provides complementary restraint, enabling uncertainty quantification, boundary recognition, and appropriate deference to human expertise. We demonstrate how these virtues function synergistically in a dynamic feedback loop, preventing both reckless exploration and excessive caution while supporting collaborative clinical decision-making. Drawing from psychological theories of curiosity and cross-species evidence of epistemic humility, we argue that these capacities represent fundamental biological design principles essential for systems operating in high-stakes, uncertain environments. The BODHI framework addresses systemic failures in medical AI deployment, from biased training data to institutional workflow pressures, by embedding uncertainty awareness and collaborative restraint into foundational system architecture. Key implementation features include calibrated confidence measures, out-of-distribution detection, curiosity-driven escalation protocols, and transparency mechanisms that adapt to clinical context. Rather than pursuing algorithmic perfection through pure optimization, we advocate for human-AI partnerships that enhance clinical reasoning through mutual accountability and calibrated trust. This approach represents a paradigm shift from overconfident automation toward collaborative systems that embody the wisdom to pause, reflect, and defer when appropriate.
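The abstract's escalation logic (calibrated confidence plus out-of-distribution detection gating deference to a clinician) can be illustrated with a minimal sketch. Everything here is hypothetical, not the paper's implementation: the `Assessment` fields, the `triage` function, and the threshold values are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    label: str
    confidence: float   # calibrated probability in [0, 1]
    ood_score: float    # higher = further from the training distribution

def triage(a: Assessment,
           confidence_floor: float = 0.85,
           ood_ceiling: float = 0.30) -> str:
    """Decide whether to report a prediction or defer to a human.

    Humility: a high out-of-distribution score means the model's
    training data may not match this case, so it escalates rather
    than asserts. Curiosity: low calibrated confidence triggers a
    request for more information instead of a silent guess.
    """
    if a.ood_score > ood_ceiling:
        return "escalate: input outside training distribution"
    if a.confidence < confidence_floor:
        return "inquire: request additional clinical information"
    return f"report: {a.label} (confidence {a.confidence:.2f})"

# Three illustrative cases: confident, uncertain, and out-of-distribution.
print(triage(Assessment("pneumonia", 0.93, 0.05)))
print(triage(Assessment("pneumonia", 0.55, 0.05)))
print(triage(Assessment("pneumonia", 0.93, 0.60)))
```

The point of the sketch is only the ordering of the checks: distribution mismatch is tested before confidence, so a confidently wrong answer on unfamiliar data still defers to human expertise.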
Related work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations
Authors
Institutions
- Massachusetts Institute of Technology (US)
- Beth Israel Deaconess Medical Center (US)
- Harvard University (US)
- University of Melbourne (AU)
- University College London (GB)
- Cambridge University Hospitals NHS Foundation Trust (GB)
- Mbarara University of Science and Technology (UG)
- King's College London (GB)
- Florida Institute of Technology (US)
- ETH Zurich (CH)
- University of Zurich (CH)