This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Human-Centered Pathways to Trustworthy AI in Healthcare: A Comparative Analysis of Explainable AI, Human-in-the-Loop, Hybrid AI, and Uncertainty Quantification Techniques
Citations: 0
Authors: 9
Year: 2026
Abstract
Despite its transformative potential in healthcare, the adoption of artificial intelligence (AI) in clinical practice remains constrained by a persistent trust deficit among clinicians and patients. To address this, we conducted a systematic comparative review of 112 peer-reviewed studies published between 2015 and 2025, following the PRISMA guidelines for study selection. Articles were sourced from major scientific databases, focusing on methodological innovations and clinical evaluations to enhance AI trustworthiness. Using a novel Composite Human-Centered Trustworthiness Score (HCTS), we systematically evaluated and compared the contributions of relevant studies. Our analysis identified four human-centered pathways: explainable AI (XAI), comprising intrinsically interpretable models and post-hoc techniques (e.g., SHAP, LIME) that support error analysis and stakeholder communication; human-in-the-loop (HITL) frameworks that leverage clinician expertise via active learning and interactive visualization to improve model reliability and usability; hybrid neuro-symbolic architectures that integrate symbolic reasoning with deep learning to achieve robustness in complex or data-sparse settings; and uncertainty quantification (UQ) methods (e.g., Bayesian inference, Monte Carlo dropout, and ensemble techniques) that provide the confidence estimates critical for high-stakes clinical decisions. We found that integrated strategies, including XAI-driven HITL loops and combined XAI + UQ frameworks, yield the greatest gains in transparency, human oversight, and computational capability. Addressing technical challenges (data heterogeneity, system interoperability) and ethical and regulatory imperatives (fairness, accountability), and advancing multimodal and continual-learning paradigms, are essential for the safe, transparent, and sustainable deployment of AI in clinical practice.
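To make the UQ pathway concrete, the sketch below illustrates Monte Carlo dropout, one of the uncertainty quantification techniques named in the abstract. It is a minimal illustration under stated assumptions, not a method taken from the reviewed studies: the network architecture, the number of stochastic passes, and the review threshold are all hypothetical choices.

```python
# Minimal sketch of Monte Carlo dropout for uncertainty quantification.
# The model, layer sizes, pass count, and threshold are illustrative
# assumptions, not drawn from the studies surveyed in the paper.
import torch
import torch.nn as nn


class MCDropoutClassifier(nn.Module):
    """A small classifier whose dropout can stay active at inference time."""

    def __init__(self, n_features: int, n_classes: int, p: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Dropout(p),  # kept stochastic during prediction
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run n_samples stochastic forward passes; return mean class
    probabilities and their standard deviation as a confidence signal."""
    model.train()  # enables dropout; production code would flip only Dropout layers
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.std(dim=0)


# Example: flag low-confidence predictions for clinician review (HITL handoff).
model = MCDropoutClassifier(n_features=30, n_classes=2)
x = torch.randn(8, 30)  # a batch of 8 hypothetical patient records
mean_p, std_p = predict_with_uncertainty(model, x)
needs_review = std_p.max(dim=-1).values > 0.1  # hypothetical review threshold
print(mean_p.argmax(dim=-1), needs_review)
```

Keeping dropout active at inference approximates Bayesian posterior sampling, so the spread across passes can serve as the kind of confidence estimate the abstract describes as critical for high-stakes decisions, and high-variance cases can trigger the HITL handoff to a clinician.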
Related Work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations