This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Toward Transparency: Implications and Future Directions of Artificial Intelligence Prediction Model Reporting in Healthcare (Preprint)
Citations: 0 · Authors: 5 · Year: 2024
Abstract
The rapid integration of artificial intelligence (AI) in healthcare underscores its transformative potential for improving patient outcomes through data-driven decision-making. There is a drive toward implementing increasingly complex predictive algorithms for disease diagnosis and prognosis. However, AI carries unique implications that limit its clinical applicability and validity. To address these challenges, the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) group released an extension and update of its 2015 reporting guideline, TRIPOD+AI, to enhance transparency and methodological rigor in regression and AI prediction model studies. The TRIPOD+AI framework has an expanded scope, incorporating new domains such as fairness, open scientific practices, and patient and public engagement. It is anticipated that these augmented guidelines will facilitate the rigorous evaluation and subsequent adoption of artificial intelligence tools across diverse healthcare settings.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations