This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
SHAP-Driven Interpretability of Autism Risk in Pregnancy Using Explainable AI
Citations: 1 · Authors: 3 · Year: 2024
Abstract
Explainable artificial intelligence (XAI) has gained growing popularity for its ability to explain how deep learning and machine learning models make decisions. The SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME) frameworks have become standard interpretive tools for ML models. This research closely examines the application of LIME and SHAP to the interpretation of autism spectrum disorder (ASD) detection, stressing XAI's important role in making AI-based ASD predictions more trustworthy and accurate. The researchers also identified risk factors related to ASD by analyzing the impact of individual features on the prediction. SHAP shows that TG, AGE, and LDL emerge as the primary features contributing to the prediction of ASD, and LIME predicts each patient as having ASD with 55% confidence. The study's findings suggest that machine learning approaches can offer accurate forecasts of ASD status, with the proposed model capable of diagnosing the disorder in its early phases.
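The SHAP attributions referenced in the abstract are based on Shapley values: each feature's contribution is its average marginal effect on the model output across all feature orderings. As a minimal sketch of that idea, the snippet below computes exact Shapley values for a toy three-feature risk model. The feature names (TG, AGE, LDL) follow the abstract, but the linear model, weights, and patient values are illustrative assumptions, not the paper's actual model or data; a real analysis would use the `shap` library on a trained classifier.

```python
import itertools
import math

# Hypothetical feature set, named after the abstract's top predictors.
FEATURES = ["TG", "AGE", "LDL"]

def model(x):
    # Toy risk score: an assumed weighted sum of standardized features.
    weights = {"TG": 0.5, "AGE": 0.3, "LDL": 0.2}
    return sum(weights[f] * x[f] for f in FEATURES)

def shapley_values(x, baseline):
    """Exact Shapley values: for each feature, average its marginal
    contribution over all subsets of the remaining features, with
    absent features replaced by their baseline values."""
    n = len(FEATURES)
    phi = {f: 0.0 for f in FEATURES}
    for f in FEATURES:
        others = [g for g in FEATURES if g != f]
        for k in range(n):
            for subset in itertools.combinations(others, k):
                # Shapley kernel weight |S|! (n-|S|-1)! / n!
                weight = (math.factorial(k) * math.factorial(n - k - 1)
                          / math.factorial(n))
                with_f = {g: (x[g] if g in subset or g == f else baseline[g])
                          for g in FEATURES}
                without_f = {g: (x[g] if g in subset else baseline[g])
                             for g in FEATURES}
                phi[f] += weight * (model(with_f) - model(without_f))
    return phi

# Hypothetical standardized patient values and population baseline.
patient = {"TG": 1.8, "AGE": 0.9, "LDL": 1.2}
baseline = {"TG": 0.0, "AGE": 0.0, "LDL": 0.0}

phi = shapley_values(patient, baseline)
# Efficiency property: the attributions sum to f(x) - f(baseline).
total = model(patient) - model(baseline)
```

For a linear model the Shapley value of each feature reduces to weight × (value − baseline), which makes the efficiency property easy to verify by hand; for the tree and kernel explainers used in practice, the `shap` package approximates the same quantity efficiently.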
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations