This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Heart2Mind: Human-Centered Contestable Psychiatric Disorder Prediction System Using Wearable ECG Monitors
Citations: 0
Authors: 7
Year: 2026
Abstract
Psychiatric disorders affect millions, yet diagnosis depends on subjective assessments and uneven access to care. To address these challenges, there is a growing need for Contestable AI (CAI), a framework that extends beyond Explainable AI (XAI) by allowing clinicians to inspect, question, and revise algorithmic outputs, thereby reducing automation bias and strengthening accountability. We present Heart2Mind, a human-centered CAI system for psychiatric disorder prediction that provides objective evidence while preserving clinical oversight. Heart2Mind collects R-R interval (RRI) time series from Polar H9/H10 wearable ECG sensors via a Cardiac Monitoring Interface and analyzes them using a Multi-Scale Temporal-Frequency Transformer (MSTFT) that combines time-domain and frequency-domain features. For contestability, the Contestable Diagnosis Interface integrates model explanations with dialogue: Self-Adversarial Explanations compare attention-based and gradient-based explanation maps to flag inconsistent predictions, and a collaboration chatbot helps users verify and challenge outputs. On the HRV-ACC dataset, MSTFT achieved 91.7% accuracy under leave-one-out cross-validation, outperforming benchmark methods. Human-centered evaluation with the Human-CAI Consensus Rate showed experts and CAI could confirm correct decisions and correct errors through readable, efficient dialogues (\(FKGL \approx 15\), median 8.3 minutes, 4 turns). These results support low-cost wearable CAI screening with objective biomarkers, safeguards, and an interactive path for clinicians to refine recommendations.
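The abstract states that MSTFT fuses time-domain and frequency-domain views of the RRI signal. As a rough illustration of what such a fusion starts from, the Python sketch below computes two standard time-domain HRV statistics (SDNN, RMSSD) and an LF/HF power ratio via a Lomb-Scargle periodogram, which suits the unevenly sampled RRI series. The feature choices and all names here are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch: time-domain plus frequency-domain features from an R-R
# interval (RRI) series, under the assumptions stated above.
import numpy as np
from scipy.signal import lombscargle

def rri_features(rri_ms: np.ndarray) -> dict:
    """Basic HRV features from an RRI series given in milliseconds."""
    diffs = np.diff(rri_ms)
    # Time domain: SDNN and RMSSD, two standard HRV statistics.
    sdnn = rri_ms.std(ddof=1)
    rmssd = np.sqrt(np.mean(diffs ** 2))
    # Frequency domain: Lomb-Scargle handles the uneven beat-to-beat sampling.
    t = np.cumsum(rri_ms) / 1000.0          # beat times in seconds
    freqs = np.linspace(0.04, 0.4, 200)     # LF/HF band, Hz
    power = lombscargle(t, rri_ms - rri_ms.mean(), 2 * np.pi * freqs)
    lf = power[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = power[(freqs >= 0.15) & (freqs <= 0.4)].sum()
    return {"sdnn": sdnn, "rmssd": rmssd, "lf_hf": lf / hf}
```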
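Similarly, the Self-Adversarial Explanations check can be pictured as a consistency test between two saliency maps over the same prediction. A minimal sketch follows, assuming the maps arrive as 1-D importance scores per time step and using a hypothetical rank-correlation threshold; the paper's actual criterion may differ.

```python
# Minimal sketch of a self-adversarial consistency check: flag a prediction
# when its attention-based and gradient-based explanation maps disagree.
# The Spearman correlation and the 0.5 threshold are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr

def flag_inconsistent(attn_map: np.ndarray,
                      grad_map: np.ndarray,
                      min_rank_corr: float = 0.5) -> bool:
    """Return True when the two explanation maps disagree in rank order."""
    rho, _ = spearmanr(attn_map.ravel(), grad_map.ravel())
    return bool(rho < min_rank_corr)
```

A flagged prediction would then be routed to the clinician-facing dialogue for verification rather than being presented as settled.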
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,246 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,228 citations
"Why Should I Trust You?"
2016 · 14,150 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,091 citations