OpenAlex · Updated hourly · Last updated: 18.03.2026, 06:35

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

GNN as Explainable Tool with Heterogeneous and Homogeneous Data for Medical Claim Validation

2024 · 0 citations
Open full text at the publisher

Citations: 0
Authors: 8
Year: 2024

Abstract

This paper investigates the explainability of Graph Neural Networks (GNNs) in detecting fraudulent medical insurance claims, a critical challenge in the healthcare industry. Given the complexity of healthcare data and the high stakes involved in fraud detection, understanding model decisions is essential. We apply two explainability techniques, GNNExplainer and PGExplainer, to two GNN architectures: HINormer, a heterogeneous GNN, and RE-GraphSAGE, a modified homogeneous GNN adapted for heterogeneous data. Both models achieved high classification accuracy (84% and 83%, respectively) and served as a basis for evaluating the reliability and practicality of explainability techniques in healthcare fraud detection, marking a pioneering effort in applying these methods to heterogeneous GNNs in medical claims. Using real-world data from the MENA region, we assess the ability of these explainers to provide meaningful interpretations of model decisions. Real-case scenarios reviewed by medical experts highlight that, while these techniques can sometimes offer valid justifications, further development is required to ensure consistent reliability in practical settings. This work underscores the critical need for advanced explainability tools to foster trust and transparency in high-stakes medical decision-making.
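The explainers named in the abstract, GNNExplainer and PGExplainer, both aim to identify the small set of edges (and features) that most influence a GNN's prediction for a given node. The following minimal sketch illustrates that core idea with a toy perturbation-based edge-importance measure; it is not the paper's code, and the claim graph, feature values, and one-layer averaging "model" are all hypothetical stand-ins.

```python
# Illustrative sketch (not the paper's method): perturbation-based edge
# importance, the basic idea behind GNNExplainer-style explanations.
# Graph, features, and the toy "model" are hypothetical.

def model_score(adj, feats, node):
    """Toy 1-layer GNN: score = mean of the node's own and neighbors' features."""
    vals = [feats[node]] + [feats[j] for j in adj.get(node, [])]
    return sum(vals) / len(vals)

def edge_importance(adj, feats, node):
    """Importance of each incident edge = |score change| when that edge is removed."""
    base = model_score(adj, feats, node)
    importance = {}
    for j in adj.get(node, []):
        pruned = {k: [n for n in v if not (k == node and n == j)]
                  for k, v in adj.items()}
        importance[(node, j)] = abs(base - model_score(pruned, feats, node))
    return importance

# Hypothetical claim graph: node 0 is a claim; 1-2 are providers, 3 a diagnosis.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
feats = {0: 0.9, 1: 0.1, 2: 0.8, 3: 0.2}  # e.g. per-node suspiciousness features

imp = edge_importance(adj, feats, 0)
top_edge = max(imp, key=imp.get)  # edge whose removal most changes the score
```

Real explainers learn a soft edge mask by gradient descent (GNNExplainer) or train a parameterized mask predictor (PGExplainer) rather than deleting edges one by one, but the output is the same kind of artifact the paper's medical experts reviewed: a ranking of which connections drove the claim's classification.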

Topics

Medical Imaging and Analysis · Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging