This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Legal and Ethical Framework for Artificial Intelligence in Gastrointestinal Endoscopy: A World Endoscopy Organization International Consensus Statement
Citations: 0
Authors: 20
Year: 2025
Abstract
The OperA (Optimising Colorectal Cancer Prevention through Personalized Treatment with Artificial Intelligence) project aims to transform colorectal cancer care through artificial intelligence (AI) innovations. Recognizing that legal and ethical challenges remain key obstacles to clinical integration, this Delphi study sought to identify and prioritize such concerns in the context of gastrointestinal (GI) endoscopy. Fourteen international experts participated in a 2-round Delphi process. In round 1, the steering committee, with feedback from participants, proposed legal and ethical issues pertaining to AI in endoscopy. Round 2 involved iterative rating and refinement of these issues to achieve consensus on their importance. Consensus was reached on 10 key statements spanning 3 thematic domains: data governance, medicolegal implications, and equity and bias. Experts emphasized the need for robust data protection, transparent algorithmic development, and institutional clarity on data ownership. Liability concerns related to AI-assisted diagnosis and automated reporting were highlighted, alongside calls for guidance from legal and professional bodies. Finally, participants underscored the importance of demographic diversity in training data sets and transparent reporting practices to mitigate bias and ensure equitable AI deployment. As AI tools become increasingly integrated into the clinical practice of gastroenterology, addressing legal, ethical, and equity-related challenges is essential. This expert consensus provides a foundation for developing guidelines and regulatory frameworks to support responsible AI adoption in GI endoscopy.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- University College Hospital (GB)
- University College London (GB)
- Oslo University Hospital (NO)
- IT&D Informação Tecnologia e Desenvolvimento (Brazil) (BR)
- Humanitas University (IT)
- IRCCS Humanitas Research Hospital (IT)
- KU Leuven (BE)
- Portsmouth Hospitals NHS Trust (GB)
- Vancouver Hospital and Health Sciences Centre (CA)
- Vancouver General Hospital (CA)
- Amsterdam University Medical Centers (NL)
- University of Amsterdam (NL)
- University of California, San Francisco (US)
- VA Greater Los Angeles Healthcare System (US)
- University Hospital Augsburg (DE)
- Showa University Northern Yokohama Hospital (JP)
- Tokyo Fuji University (JP)
- National Cancer Center Hospital East (JP)
- Icahn School of Medicine at Mount Sinai (US)
- University of Electronic Science and Technology of China (CN)
- Jichi Medical University (JP)
- Beth Israel Deaconess Medical Center (US)