This is an overview page with metadata for this scientific work. The full article is available from the publisher.
GNN as Explainable Tool with Heterogeneous and Homogeneous Data for Medical Claim Validation
Citations: 0
Authors: 8
Year: 2024
Abstract
This paper investigates the explainability of Graph Neural Networks (GNNs) in detecting fraudulent medical insurance claims, a critical challenge in the healthcare industry. Given the complexity of healthcare data and the high stakes involved in fraud detection, understanding model decisions is essential. We apply two explainability techniques, GNNExplainer and PGExplainer, to two GNN architectures: HINormer, a heterogeneous GNN, and RE-GraphSAGE, a modified homogeneous GNN adapted for heterogeneous data. Both models achieved high classification accuracy (84% and 83%, respectively) and served as a basis for evaluating the reliability and practicality of explainability techniques in healthcare fraud detection, marking a pioneering effort in applying these methods to heterogeneous GNNs in medical claims. Using real-world data from the MENA region, we assess the ability of these explainers to provide meaningful interpretations of model decisions. Real-case scenarios reviewed by medical experts highlight that while these techniques can sometimes offer valid justifications, further development is required to ensure consistent reliability in practical settings. This work underscores the critical need for advanced explainability tools to foster trust and transparency in high-stakes medical decision-making.
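The abstract names GNNExplainer as one of the two explainers applied. As an illustrative sketch only (not the paper's code, and deliberately simplified to a toy one-layer aggregation instead of a trained GNN), the core idea behind GNNExplainer is to learn a soft mask over edges so that the masked graph still reproduces the model's prediction for a target node, while a sparsity penalty drives irrelevant edges toward zero:

```python
import numpy as np

# Toy symmetric adjacency over 4 nodes and scalar node features.
A = np.array([[0., 1., 1., 0.],
              [1., 0., 0., 1.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.]])
x = np.array([1.0, 2.0, -1.0, 0.5])

def model_score(adj):
    """Toy one-layer "GNN": node 0's score is the weighted sum of its
    neighbours' features, i.e. row 0 of adj @ x."""
    return (adj @ x)[0]

target = model_score(A)                  # the prediction to be explained

theta = np.zeros_like(A)                 # edge-mask logits
lr, lam = 1.0, 0.05                      # step size, sparsity weight
for _ in range(2000):
    m = 1.0 / (1.0 + np.exp(-theta))     # sigmoid -> soft mask in (0, 1)
    score = model_score(A * m)
    # Only row 0 of the mask influences node 0's score in this toy model.
    dscore = np.zeros_like(A)
    dscore[0, :] = A[0, :] * x
    # Gradient of (score - target)^2 + lam * sum(mask over real edges)
    # with respect to the logits, via the sigmoid derivative m * (1 - m).
    grad = (2.0 * (score - target) * dscore + lam * A) * m * (1.0 - m)
    theta -= lr * grad

mask = 1.0 / (1.0 + np.exp(-theta))
# Edges that matter for node 0's prediction keep a high mask value;
# edges elsewhere in the graph are pushed toward zero by the sparsity term.
```

In the actual method the mask is optimized against a trained GNN's output distribution (cross-entropy rather than a squared difference) and autograd computes the gradients; this sketch only shows the mask-learning objective in miniature.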
Related Works
A survey on deep learning in medical image analysis
2017 · 13,536 citations
nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation
2020 · 7,660 citations
Calculation of average PSNR differences between RD-curves
2001 · 4,088 citations
Magnetic Resonance Classification of Lumbar Intervertebral Disc Degeneration
2001 · 3,886 citations
Vertebral fracture assessment using a semiquantitative technique
1993 · 3,604 citations