This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Testing process for artificial intelligence applications in radiology practice
Citations: 7
Authors: 7
Year: 2024
Abstract
Artificial intelligence (AI) applications are becoming increasingly common in radiology. However, ensuring reliable operation and the expected clinical benefits remains a challenge. A systematic testing process aims to facilitate clinical deployment by confirming the software's applicability to local patient populations and practices, its adherence to regulatory and safety requirements, and its compatibility with existing systems. In this work, we present our testing process developed based on practical experience. First, a survey and pre-evaluation are conducted, where information requests are sent for potential products, and the specifications are evaluated against predetermined requirements. In the second phase, data collection, testing, and analysis are conducted. In the retrospective stage, the application undergoes testing with a preselected dataset and is evaluated against specified key performance indicators (KPIs). In the prospective stage, the application is integrated into the clinical workflow and evaluated with additional process-specific KPIs. In the final phase, the results are evaluated in terms of safety, effectiveness, productivity, and integration. The final report summarises the results and includes a procurement/deployment or rejection recommendation. The process allows termination at any phase if the application fails to meet essential criteria. In addition, we present practical remarks from our experiences in AI testing and provide forms to guide and document the testing process. The established AI testing process facilitates a systematic evaluation and documentation of new technologies, ensuring that each application undergoes equal and sufficient validation. Testing with local data is crucial for identifying biases and pitfalls of AI algorithms to improve quality and safety, ultimately benefiting patient care.
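The retrospective stage described above, where an application is tested on a preselected local dataset and measured against predetermined KPIs, could be sketched roughly as follows. This is a minimal illustrative example, not the paper's actual method; all function names, KPI choices, and threshold values are assumptions for demonstration only.

```python
# Hypothetical sketch of a retrospective KPI evaluation: compare an AI
# application's binary findings on a local dataset against ground truth,
# then check each KPI against a predetermined minimum threshold.
# All names and thresholds below are illustrative assumptions.

def evaluate_kpis(predictions, ground_truth, thresholds):
    """Return {kpi_name: (value, passed)} for each KPI in `thresholds`."""
    # Tally the confusion-matrix cells from paired prediction/truth labels.
    tp = sum(1 for p, g in zip(predictions, ground_truth) if p and g)
    tn = sum(1 for p, g in zip(predictions, ground_truth) if not p and not g)
    fp = sum(1 for p, g in zip(predictions, ground_truth) if p and not g)
    fn = sum(1 for p, g in zip(predictions, ground_truth) if not p and g)

    kpis = {
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
        "accuracy": (tp + tn) / len(predictions) if predictions else 0.0,
    }
    # Mark each KPI as passed only if it meets its required minimum.
    return {name: (value, value >= thresholds[name])
            for name, value in kpis.items()}


# Example: ten studies from a hypothetical local dataset (1 = finding present).
preds = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
truth = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1]
results = evaluate_kpis(preds, truth, {"sensitivity": 0.85,
                                       "specificity": 0.90,
                                       "accuracy": 0.85})
for name, (value, passed) in results.items():
    print(f"{name}: {value:.2f} -> {'PASS' if passed else 'FAIL'}")
```

A failing KPI at this stage would correspond to terminating the process early, as the abstract notes the process allows at any phase.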
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations