This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI in Health Care: Trust and Transparency in AI-Powered Medical Diagnosis
Citations: 2
Authors: 1
Year: 2025
Abstract
The integration of artificial intelligence (AI) into medical diagnostics has the potential to revolutionize health care by improving accuracy, efficiency, and decision-making. However, the adoption of AI-powered diagnostic systems is challenged by their inherent black-box nature, which makes it difficult to understand how they generate predictions. This chapter explores the role of explainable AI (XAI) in enhancing trust and transparency in AI-driven medical diagnosis. It examines key challenges, including the lack of interpretability in complex models and the risks of bias, which can undermine clinical reliability and patient confidence. To address these concerns, the chapter discusses various XAI techniques, including model-agnostic approaches such as local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), as well as model-specific methods for deep learning systems. These techniques provide insights into AI-generated diagnoses, fostering greater clinician trust and improving communication between health care providers and patients. Additionally, the chapter highlights the ethical and regulatory considerations necessary for the responsible deployment of AI in medical settings. To support practical understanding, the chapter includes detailed pseudocode illustrating the implementation of XAI methods in a clinical diagnostic context, offering a step-by-step view of how interpretability can be operationalized. By promoting transparency and accountability, XAI not only enhances the safety and effectiveness of AI-assisted medical care but also ensures compliance with ethical standards and legal frameworks. As AI continues to evolve, integrating explainability into diagnostic systems will be essential for ensuring their widespread acceptance and responsible use in health care.
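The chapter's own pseudocode is not reproduced on this overview page. As a rough illustration of the LIME-style idea the abstract describes, the sketch below perturbs a single patient record, queries a black-box classifier on the perturbed neighbourhood, and fits a locally weighted linear surrogate whose coefficients rank feature importance. This is a minimal sketch under stated assumptions, not the chapter's implementation: the dataset, function name, and kernel choice are illustrative only.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Stand-in for a clinical diagnostic model: a black-box classifier
# trained on a public tabular medical dataset (illustrative only).
X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def lime_style_explanation(model, x, feature_scale,
                           n_samples=1000, noise=0.5, seed=0):
    """LIME-style local explanation: perturb the instance, query the
    black box, and fit a proximity-weighted linear surrogate whose
    coefficients serve as local feature importances."""
    rng = np.random.default_rng(seed)
    # 1. Sample a neighbourhood around the instance (per-feature noise).
    delta = rng.normal(0.0, noise, size=(n_samples, x.size)) * feature_scale
    Z = x + delta
    # 2. Query the black-box model on the perturbed samples.
    p = model.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to the original instance (RBF kernel).
    d = np.linalg.norm(delta, axis=1)
    w = np.exp(-(d ** 2) / (2.0 * d.std() ** 2))
    # 4. Fit an interpretable surrogate: weighted ridge regression.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

# Explain the model's prediction for one (hypothetical) patient record.
coefs = lime_style_explanation(model, X[0], X.std(axis=0))
top = np.argsort(np.abs(coefs))[::-1][:3]
names = load_breast_cancer().feature_names
print("Top local features:", [names[i] for i in top])
```

A production workflow would typically use the `lime` or `shap` libraries directly; the hand-rolled surrogate above only shows the mechanism that makes a black-box prediction locally inspectable.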
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,676 citations
Generative Adversarial Nets
2023 · 19,895 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,318 citations
"Why Should I Trust You?"
2016 · 14,522 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,191 citations