This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Clinical readiness and limitations of artificial intelligence in hematologic diagnostics: a critical analytical review
Citations: 0 · Authors: 2 · Year: 2026
Abstract
This review examines artificial intelligence (AI) in hematologic diagnostics through the lens of clinical readiness rather than technical performance, focusing on interpretability, generalizability, and governance as the primary determinants of safe adoption. A structured literature review (January 2020–June 2025) was conducted using PubMed, IEEE Xplore, Scopus, Web of Science, and Google Scholar, supplemented by citation tracking and manual screening of ASH and ISLH proceedings. Eligible studies were critically appraised based on clinical relevance, validation context, and integration feasibility rather than performance metrics alone. AI systems demonstrate strong discriminative capacity in image-based tasks (e.g., leukemia triage, APL screening) and numerical modeling (e.g., malignancy prediction, anemia classification). However, reported performance frequently reflects curated datasets and controlled conditions, limiting external validity. Real-world adoption is constrained by restricted interpretability, dataset bias, pre-analytical variability, and weaknesses in auditability and workflow integration. Commercial platforms illustrate feasibility at scale but remain dependent on expert oversight and robust governance structures. AI in hematology is best positioned as a clinically embedded decision-support and triage layer rather than as an autonomous diagnostic authority. Clinical readiness is governed less by accuracy than by transparency, robustness, and accountability. Sustainable adoption will therefore require alignment between technical validation, human trust calibration, and regulatory oversight to ensure operational safety and clinical legitimacy.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2019 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5.776 Zit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations