OpenAlex · Updated hourly · Last updated: 11.03.2026, 22:21

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Understanding Clinicians’ Informational Needs for AI-driven Clinical Decision Support Systems: Qualitative Interview Study (Preprint)

2025 · 0 Citations · Open Access
Open full text at publisher

Citations: 0

Authors: 10

Year: 2025

Abstract

<sec> <title>BACKGROUND</title> Advancements in Artificial Intelligence (AI) are transforming healthcare, particularly through AI-driven Clinical Decision Support Systems (AI-CDSS) that aid in predicting disease progression and personalizing treatment. Despite their potential, adoption remains limited due to clinician concerns about algorithm misuse, misinterpretation, and lack of transparency. </sec> <sec> <title>OBJECTIVE</title> This qualitative study explores clinicians' informational needs and preferences for understanding and appropriately using AI-CDSS in decision-making. In parallel, it explores AI experts' perspectives on what information should be communicated to enable safe and appropriate use of AI-CDSS. </sec> <sec> <title>METHODS</title> A qualitative descriptive study was conducted using semi-structured interviews with 16 participants (8 clinicians and 8 AI experts). Discussions focused on experiences with AI, informational needs, and feedback on existing reporting standards, including Model Cards (Mitchell et al., 2019), Model Facts (Sendak et al., 2020), and the TRIPOD-AI checklist (Collins et al., 2015, 2024). Transcripts were analyzed through codebook thematic analysis. </sec> <sec> <title>RESULTS</title> Four key themes were identified: (i) Clinicians need clear information on the training data, including its origin, size, and inclusion/exclusion criteria, to judge model applicability; (ii) Performance metrics must go beyond AUC and be clinically relevant to support informed decisions; (iii) Limitations and warnings about inappropriate use should be specific and clearly communicated to prevent misuse; (iv) Information should be presented in layered, customizable formats within existing clinical software, avoiding unnecessary jargon and allowing optional deeper explanations. While each of the reviewed reporting standards offered strengths, none was considered sufficient on its own.
Participants recommended a combined, clinician-centered approach to information delivery. Aligning reporting standards with clinical workflows and decision thresholds was thought to be crucial to bridge the current usability gap. </sec> <sec> <title>CONCLUSIONS</title> To improve AI-CDSS adoption in clinical practice, reporting standards must be designed for better clinician comprehension and usability. Enhancing transparency, particularly regarding training data and performance, can likely help clinicians assess AI-CDSS more effectively. Information should be delivered in an accessible, layered format that fits clinical workflows. Co-creation with clinicians throughout AI-CDSS development was a cross-cutting theme, highlighting its importance in ensuring tools are not only technically sound but also practically usable. Future research should explore how to structure the reporting of performance and validation metrics for clinician understanding and assess the impact of information provision on AI-CDSS adoption. </sec>

Similar works

Authors

Topics

Artificial Intelligence in Healthcare and Education · Artificial Intelligence in Healthcare · Machine Learning in Healthcare