This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Translating Explainable Generative AI into Practice: Use Cases and Challenges in Modern Healthcare Systems
0
Citations
6
Authors
2025
Year
Abstract
The main goal of this research project is to use Explainable Generative Artificial Intelligence (X-GenAI) to improve multimodal data integration, interpretability, and clinical reliability in healthcare ecosystems through an integrated framework. The proposed method maps heterogeneous datasets (genetic, textual, and imaging) into a unified semantic space via attention-normalized latent fusion, enabling explicable inference. To align accuracy with interpretability, the approach combines confidence-weighted ensemble refinement, causal attribution mapping, and gradient-based interpretive visualization. In practical evaluations it outperformed previous hybrid VAE and causal GAN benchmarks in diagnostic efficacy, achieving 95.1% accuracy, an AUROC of 0.963, and an interpretability score above 93%. A Brier score of 0.040 indicates that the model is both reliable and well calibrated, and fairness also improved considerably: the demographic parity difference was reduced to 2.6%. The results suggest that, by combining causal explainability with multimodal synthesis, healthcare AI systems can be designed to operate across a variety of clinical settings while providing clear, dependable, and regulatory-compliant decision support.
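The abstract reports calibration and fairness via a Brier score and a demographic parity difference. As a minimal sketch of how these two metrics are computed in general (using small hypothetical values, not the paper's data or model), one could write:

```python
import numpy as np

# Hypothetical predicted probabilities, true binary outcomes, and
# demographic group labels -- illustrative values only.
probs = np.array([0.9, 0.2, 0.8, 0.1, 0.7, 0.4])
labels = np.array([1, 0, 1, 0, 1, 0])
groups = np.array([0, 0, 1, 1, 0, 1])

# Brier score: mean squared error between predicted probabilities and
# observed outcomes; lower values indicate better calibration.
brier = np.mean((probs - labels) ** 2)

# Demographic parity difference: gap between the groups'
# positive-prediction rates, here at a 0.5 decision threshold.
preds = (probs >= 0.5).astype(int)
rate_g0 = preds[groups == 0].mean()
rate_g1 = preds[groups == 1].mean()
dp_gap = abs(rate_g0 - rate_g1)

print(f"Brier score: {brier:.3f}")
print(f"Demographic parity difference: {dp_gap:.3f}")
```

The paper's reported values (Brier 0.040, parity gap 2.6%) would come from its own predictions and cohort; the functions above only illustrate the standard definitions of the two metrics.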
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations