This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Enhancing Clinician Trust in AI Diagnostics: A Dynamic Framework for Confidence Calibration and Transparency
Citations: 8
Authors: 11
Year: 2025
Abstract
<b>Background:</b> Artificial Intelligence (AI)-driven Decision Support Systems (DSSs) promise improvements in diagnostic accuracy and clinical workflow efficiency, but their adoption is hindered by inadequate confidence calibration, limited transparency, and poor alignment with real-world decision processes, which limit clinician trust and lead to high override rates. <b>Methods:</b> We developed and validated a dynamic scoring framework to enhance trust in AI-generated diagnoses by integrating AI confidence scores, semantic similarity measures, and transparency weighting into the override decision process using 6689 cardiovascular cases from the MIMIC-III dataset. Override thresholds were calibrated and validated across varying transparency and confidence levels, with override rate as the primary acceptance measure. <b>Results:</b> The implementation of this framework reduced the override rate to 33.29%, with high-confidence predictions (90-99%) overridden at a rate of only 1.7%, and low-confidence predictions (70-79%) at a rate of 99.3%. Minimal transparency diagnoses had a 73.9% override rate compared to 49.3% for moderate transparency. Statistical analyses confirmed significant associations between confidence, transparency, and override rates (<i>p</i> < 0.001). <b>Conclusions:</b> These findings suggest that enhanced transparency and confidence calibration can substantially reduce override rates and promote clinician acceptance of AI diagnostics. Future work should focus on clinical validation to optimize patient safety, diagnostic accuracy, and efficiency.
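The abstract describes a dynamic scoring framework that combines AI confidence, semantic similarity, and transparency weighting to decide when a clinician override is warranted. The following is a minimal illustrative sketch of such a rule; the function name, the linear weights, and the threshold are assumptions for demonstration, not the authors' published parameters.

```python
def override_recommended(confidence: float,
                         semantic_similarity: float,
                         transparency_weight: float,
                         threshold: float = 0.6) -> bool:
    """Return True if overriding the AI diagnosis is recommended.

    confidence          -- calibrated model confidence in [0, 1]
    semantic_similarity -- agreement of the diagnosis with the
                           clinical context, in [0, 1]
    transparency_weight -- explainability of the prediction, in [0, 1]
    threshold           -- calibrated acceptance cut-off (assumed value)
    """
    # Illustrative linear combination; the paper's actual weighting
    # scheme is not specified in this abstract.
    trust_score = (0.5 * confidence
                   + 0.3 * semantic_similarity
                   + 0.2 * transparency_weight)
    return trust_score < threshold

# A high-confidence, well-explained prediction is accepted (no override):
print(override_recommended(0.95, 0.8, 0.7))   # False
# A low-confidence, opaque prediction is flagged for override:
print(override_recommended(0.72, 0.4, 0.1))   # True
```

This mirrors the abstract's reported pattern, where high-confidence (90-99%) predictions were rarely overridden while low-confidence (70-79%) and low-transparency predictions were overridden at much higher rates.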
Related Work
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,303 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,155 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,555 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,453 cit.