OpenAlex · Updated hourly · Last updated: 08.04.2026, 07:02

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Improving Explainability and Integrability of Medical AI to Promote Health Care Professional Acceptance and Use: Mixed Systematic Review (Preprint)

2025 · 0 citations
Open full text at the publisher

0 citations · 5 authors · Year: 2025

Abstract

<sec> <title>BACKGROUND</title> The integration of artificial intelligence (AI) in health care has significant potential, yet its acceptance by health care professionals (HCPs) is essential for successful implementation. Understanding HCPs’ perspectives on the explainability and integrability of medical AI is crucial, as these factors influence their willingness to adopt and effectively use such technologies. </sec> <sec> <title>OBJECTIVE</title> This study aims to improve the acceptance and use of medical AI. From a user perspective, it explores HCPs’ understanding of the explainability and integrability of medical AI. </sec> <sec> <title>METHODS</title> We performed a mixed systematic review by conducting a comprehensive search in the PubMed, Web of Science, Scopus, IEEE Xplore, ACM Digital Library, and arXiv databases for studies published between 2014 and 2024. Studies concerning the explainability or integrability of medical AI were included. Study quality was assessed using the Joanna Briggs Institute critical appraisal checklist and the Mixed Methods Appraisal Tool, with only medium- or high-quality studies included. Qualitative data were analyzed via thematic analysis, while quantitative findings were synthesized narratively. </sec> <sec> <title>RESULTS</title> Of 11,888 records initially retrieved, 22 (0.19%) studies met the inclusion criteria. All selected studies were published from 2020 onward, reflecting the recency and relevance of the topic. The majority (18/22, 82%) originated from high-income countries, and most (17/22, 77%) adopted qualitative methodologies, with the remainder (5/22, 23%) using quantitative or mixed methods approaches. From the included studies, a conceptual framework was developed that delineates HCPs’ perceptions of explainability and integrability.
Regarding explainability, HCPs predominantly emphasized postprocessing explanations, particularly aspects of local explainability such as feature relevance and case-specific outputs. Visual tools that enhance the explainability of AI decisions (eg, heat maps and feature attribution) were frequently mentioned as important enablers of trust and acceptance. For integrability, key concerns included workflow adaptation, system compatibility with electronic health records, and overall ease of use. These aspects were consistently identified as primary conditions for real-world adoption. </sec> <sec> <title>CONCLUSIONS</title> To foster wider adoption of AI in clinical settings, future system designs must center on the needs of HCPs. Enhancing post hoc explainability and ensuring seamless integration into existing workflows are critical to building trust and promoting sustained use. The proposed conceptual framework can serve as a practical guide for developers, researchers, and policy makers in aligning AI solutions with frontline user expectations. </sec> <sec> <title>CLINICALTRIAL</title> PROSPERO CRD420250652253; https://www.crd.york.ac.uk/PROSPERO/view/CRD420250652253 </sec>

Related works

Authors

Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills