This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI in Healthcare
Citations: 5
Authors: 2
Year: 2025
Abstract
Explainable artificial intelligence (XAI) improves trust and transparency in AI systems' decision making, a critical function in the healthcare industry. Understanding and interpreting the insights offered by artificial intelligence (AI) is crucial in the healthcare sector, as decisions can have significant effects on patients' health. XAI makes AI models easier to understand, so patients and medical practitioners can follow the rationale behind particular diagnoses, treatment suggestions, and forecasts. By offering interpretable outcomes, XAI encourages accountability and helps resolve ethical issues related to AI in healthcare. This transparency promotes cooperation between human experts and AI systems by enabling healthcare practitioners to verify the accuracy of AI-driven outputs. Furthermore, the capacity to explain AI predictions improves the overall acceptance of these technologies in clinical settings, particularly where important decisions must be made, such as illness diagnosis or treatment planning. In summary, the application of explainable AI in healthcare not only enhances the interpretability of AI models but also promotes collaboration, accountability, and trust between healthcare professionals and AI systems, ultimately leading to more reliable and informed healthcare decision-making. This chapter provides an introduction to XAI and examines its applications in healthcare, along with its current drawbacks and future possibilities. It concludes that XAI in healthcare is still a relatively new field that requires further investigation.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations