This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Views: 21
Citations: 0
Authors: 5
Year: 2024
Abstract
Development of sophisticated artificial intelligence (AI) solutions enables a new approach to augment clinical decision-making and improve efficiency in the practice of radiology. Effective use of AI solutions in this context requires defining appropriate clinical questions and distinguishing image-interpretive from non-interpretive use cases. Image-interpretive use cases provide an opportunity for AI to improve clinical decision-making through detection of pertinent findings, identification of imaging features undetectable to the human eye, or automation of tedious tasks associated with imaging findings. Examples include identification of urgent pathology, automated provision of descriptive characteristics (e.g., measurements, morphology, change over time), and determination of molecular phenotype based on imaging features. Non-interpretive use cases enhance the practice of radiology beyond tasks directly pertaining to medical images. Examples include worklist prioritization of urgent studies, automated study protocoling, resource optimization, enhancing image quality, and automating reporting of findings. While solutions for these tasks hold potential for significant improvement in the quality and efficiency of radiology practice, successful AI deployment will ultimately require an "AI–physician" interface that leverages AI-derived efficiencies in the context of human-derived insight. As a growing array of AI technology allows for enhanced prognostic, diagnostic, and therapeutic capabilities, paradigms in clinical workflow will continue to evolve. Accordingly, AI users must approach deployed AI solutions conscientiously to develop flexible frameworks for tool governance and ethical use of AI in practice.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations