This is an overview page with metadata for this scientific work. The full article is available from the publisher.
EXPLAINABLE ARTIFICIAL INTELLIGENCE IN HEALTHCARE: FROM ALGORITHMIC TRANSPARENCY TO TRUST AND SOCIAL ACCEPTANCE IN CLINICAL PRACTICE
0
Citations
11
Authors
2026
Year
Abstract
Background and objective: The rapid expansion of artificial intelligence (AI) in healthcare has resulted in substantial advances in diagnostics, prognostics, clinical decision support systems, and patient monitoring. Despite promising performance, many AI-based systems remain insufficiently understood by clinicians and patients due to their “black-box” nature. This lack of transparency may undermine trust, hinder acceptance, and limit safe integration into routine clinical practice. Explainable artificial intelligence (XAI) has emerged as a response to these challenges by enabling human-interpretable explanations of algorithmic decisions. The objective of this narrative review is to synthesize current evidence on XAI in healthcare, with particular emphasis on its technical foundations, clinical applications, influence on trust and decision-making, and broader social and ethical implications.

Scope of review: This review synthesizes literature published between 2019 and 2025 addressing explainability in medical AI. The analysis includes methodological studies, clinical evaluations, human–computer interaction research, and social science investigations related to transparency, trust, accountability, and acceptance of AI systems. Relevant publications were identified through structured searches of PubMed, MEDLINE, Scopus, and Google Scholar using keywords related to explainable AI, interpretability, ethics, trust, and clinical decision support.

Findings: XAI methods—including feature attribution, model simplification, counterfactual explanations, and visualization techniques—demonstrate potential to enhance clinician understanding of AI outputs and increase confidence in algorithm-assisted decisions. Evidence suggests that explainability may support diagnostic accuracy, reduce automation bias, and facilitate error detection. However, explainability alone does not ensure trust. Clinical context, user expertise, organizational culture, and regulatory frameworks play critical roles in shaping the adoption and appropriate use of explainable systems. Empirical research addressing patient perspectives remains limited.

Conclusions: Explainable AI constitutes an important step toward the responsible and socially acceptable integration of intelligent systems in healthcare. While XAI can enhance transparency and trust, its effectiveness depends on thoughtful design, contextual adaptation to clinical workflows, and alignment with user needs. Further interdisciplinary research is required to standardize explainability approaches, evaluate their real-world impact on clinical outcomes, and address the ethical, legal, and societal challenges associated with medical AI.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations