This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
STAGER checklist: Standardized Testing and Assessment Guidelines for Evaluating Generative AI Reliability
Citations: 4 · Authors: 8 · Year: 2023
Abstract
Generative Artificial Intelligence (AI) holds immense potential in medical applications. Numerous studies have explored the efficacy of various generative AI models in healthcare contexts, but a comprehensive, systematic evaluation framework is lacking. Because some studies evaluating the ability of generative AI in medical applications have deficiencies in their methodological design, and standardized guidelines for such evaluation do not yet exist, our objective is to devise standardized assessment guidelines tailored to evaluating the performance of generative AI systems in medical contexts. To this end, we conducted a thorough literature review using the PubMed and Google Scholar databases, focusing on research that tests generative AI capabilities in medicine. Our multidisciplinary team, comprising experts in life sciences, clinical medicine, and medical engineering as well as users of generative AI, held several discussion sessions and developed a checklist of 23 items. The checklist is designed to comprehensively cover the critical aspects of evaluating generative AI in medical applications. This checklist, and the broader assessment framework it anchors, address several key dimensions, including question collection, querying methodologies, and assessment techniques, with the aim of providing a holistic evaluation of AI systems. The checklist delineates a clear pathway from question gathering to result assessment, guiding researchers through potential challenges and pitfalls. Our framework furnishes a standardized, systematic approach for research testing the applicability of generative AI in medicine. It enhances the quality of research reporting and supports the evolution of generative AI in medicine and the life sciences.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations