This is an overview page with metadata for this scientific work. The full article is available from the publisher.
AID-RT: Standardising Artificial Intelligence Documentation in RadioTherapy with a domain-specific model card
Citations: 0
Authors: 19
Year: 2026
Abstract

Background and Purpose
Insufficient documentation of artificial intelligence (AI) models remains a widespread issue, hampering reproducibility in research environments and safe integration into clinical departments. Our goal was to develop a standardised, structured, and domain-specific reporting framework tailored to AI models in radiotherapy (RT), enhancing transparency and accountability.

Methods
A working group comprising 16 experts from 13 institutions was formed after the ESTRO Physics Workshop 2023, "AI for the Fully Automated Radiotherapy Treatment Chain". We reviewed existing initiatives for AI model and data reporting and drafted an initial template, which was sent to all participants for review. Three popular RT applications were selected to define task-specific fields: synthetic CT, segmentation, and dose prediction. Five review rounds were performed, in which suggested changes were voted on in a live shared Google Doc. Unclear fields and conflicting votes were discussed at online meetings, and consensus was reached by majority voting.

Results
The final template comprises six sections: 0) Card metadata; 1) Model basic information; 2) Model technical specifications (i.e. architecture, software, and hardware); 3) Training data, methodology, and information; 4) Evaluation data, methodology, and results (a.k.a. commissioning for clinical models); and 5) Other considerations, including ethical use, risk analysis, and monitoring. It is publicly available on Zenodo as a Microsoft Word document and as a digital template on Streamlit.app to facilitate information entry.

Conclusions
We propose a practical, consensus-driven template tailored to the unique requirements of AI models in RT, applicable in both research and clinical environments and conveying the key information required for informed use.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 cit.
Authors
- Ana M. Barragán-Montero
- Margerie Huet-Dastarac
- Silvia M. Herranz-Hernández
- Benjamin Tengler
- Emma Riis Skarsø
- Arthur Galapon
- Carlos E. S. Cárdenas
- M. Fusella
- Geoffroy Herbin
- Y. de Hond
- Franziska Knuth
- Ciaran Malone
- Peter M. A. van Ooijen
- Charlotte Robert
- Michele Zeverino
- Coen Hurkmans
- Tomas Janssen
- Stine Korreman
- Charlotte L. Brouwer
Institutions
- European Society for Therapeutic Radiology and Oncology (BE)
- University of Tübingen (DE)
- Aarhus University (DK)
- Danish Pain Research Center (DK)
- University Medical Center Groningen (NL)
- University of Groningen (NL)
- University of Alabama at Birmingham (US)
- Policlinico Abano Terme (IT)
- Ion Beam Applications (BE)
- Radboud University Nijmegen (NL)
- Catharina Ziekenhuis (NL)
- Erasmus MC Cancer Institute (NL)
- St. Luke's Hospital (IE)
- Institut Gustave Roussy (FR)
- University of Lausanne (CH)
- Eindhoven University of Technology (NL)
- The Netherlands Cancer Institute (NL)