This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI for Mental Health Diagnosis: Enhancing Transparency, Trust, and Clinical Decision-Making
0
Citations
6
Authors
2025
Year
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a groundbreaking approach to bringing transparency, trust, and interpretability to AI-based mental health diagnosis. The present review surveys recent advances in applying XAI to the detection, diagnosis, and management of mental health conditions through diverse methodologies, including linguistic analysis, social media mining, wearable biosensors, and deep learning models. The studies reviewed underline the importance of XAI for interpreting opaque AI judgments, building clinician trust, and ensuring ethical application in sensitive mental health settings. The most striking applications are found in depression and psychotic disorder prediction, autism spectrum disorder assessment, and suicide risk assessment. Emerging trends include multimodal data fusion, logic-based neural networks, personalization, and human-centered interfaces for clinical usability. Challenges of data quality, model generalizability, and users' comprehension of models remain to be solved through interdisciplinary efforts. The review emphasizes the role of XAI in improving both the diagnostic accuracy of AI and its responsible use in psychiatry. As mental health grows in importance as a public health concern, there is an opportunity to explore what explainability can bring to accessible and effective mental healthcare.
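To make the idea of interpreting an opaque model's judgment concrete, the following is a minimal sketch of a perturbation-based (occlusion) explanation, one common XAI technique: each input token is removed in turn and the resulting drop in the model's score is recorded as that token's importance. The `risk_score` function is a hypothetical toy stand-in for a real classifier, and its keyword weights are purely illustrative, not clinical.

```python
def risk_score(tokens):
    # Hypothetical toy "model": a weighted keyword lookup standing in for a
    # real trained classifier. Weights are illustrative only, not clinical.
    weights = {"hopeless": 0.9, "tired": 0.4, "alone": 0.6}
    return sum(weights.get(t, 0.0) for t in tokens)

def occlusion_attributions(tokens):
    # Occlusion-style explanation: drop each token and measure how much the
    # model's score falls. A larger drop means the token mattered more.
    base = risk_score(tokens)
    return {t: base - risk_score([u for u in tokens if u != t])
            for t in tokens}

text = "i feel hopeless and alone".split()
attr = occlusion_attributions(text)
# Tokens sorted by attribution, most influential first
ranked = sorted(attr, key=attr.get, reverse=True)
```

A clinician-facing interface would surface `ranked` as a highlighted transcript, showing which phrases drove the prediction rather than presenting an unexplained score.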
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 cit.
Generative Adversarial Nets
2023 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,236 cit.
"Why Should I Trust You?"
2016 · 14,204 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 cit.