This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI in Healthcare Application
26
Citations
9
Authors
2024
Year
Abstract
Given the inherent risks in medical decision-making, medical professionals carefully evaluate a patient's symptoms before arriving at a plausible diagnosis. For AI to become a widely accepted and useful technology, it must replicate these human judgment and interpretation abilities. Explainable AI (XAI) attempts to describe the reasoning underlying the black-box approaches of deep learning (DL), machine learning (ML), and natural language processing (NLP), explaining how their judgments are made. This chapter surveys the most recent XAI methods employed in medical imaging and related fields, categorizes and lists the types of XAI, and highlights the methods used to make medical imaging models more interpretable. Additionally, it focuses on the challenging XAI issues in medical applications and guides the development of better explanations for deep-learning systems by applying XAI principles to the analysis of medical images and text.
Related Works
"Why Should I Trust You?"
2016 · 14,179 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,561 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Analysis of Survival Data.
1985 · 4,379 citations