This is an overview page with metadata for this scientific article. The full article is available from the publisher.
IA para diagnóstico asistido en salud: precisión, sesgos y adopción clínica
Citations: 0
Authors: 4
Year: 2026
Abstract
Objective: We investigated, through a systematic literature review, how artificial intelligence applied to assisted health diagnosis has been evaluated and discussed across three critical dimensions: diagnostic accuracy, algorithmic biases, and clinical adoption, since the available evidence tends to treat them in a fragmented manner, hindering safe and equitable implementation decisions. Methodology: A systematic literature review was conducted following the PRISMA 2020 guidelines (Page et al., 2021), with searches in Scopus and Dimensions.ai using search strings focused on clinical diagnosis, accuracy/performance, bias/fairness, and adoption/implementation; duplicates were removed, title/abstract screening was performed, and full-text evaluation was conducted using explicit inclusion/exclusion criteria (2015–2025, English/Spanish, peer-reviewed articles and reviews, full text available). Results: The bibliometric analysis of the 1,315 records revealed a concentration of production and international collaboration in high-income countries, with the United States as the dominant node in documents, citations, and total link strength. After the PRISMA selection, 18 studies were included in the qualitative synthesis, identifying consistent patterns: high performance reported in controlled settings, performance variability depending on data representativeness and external validation, recurrent risks of inequity due to biases affecting subpopulations, and clinical adoption mediated by trust, interpretability, and integration into the clinical workflow. Conclusions: The evidence suggests that sustainable clinical adoption depends both on accuracy and on stratified equity assessments and sociotechnical implementation conditions; it is recommended to strengthen integrated designs that simultaneously evaluate performance, biases, and adoption in real-world settings, along with governance frameworks and external validation, to reduce gaps and risks in clinical practice.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations