This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Can surgeons trust AI? Perspectives on machine learning in surgery and the importance of eXplainable Artificial Intelligence (XAI)
Citations: 23
Authors: 4
Year: 2025
Abstract
PURPOSE: This brief report aims to summarize and discuss the methodologies of eXplainable Artificial Intelligence (XAI) and their potential applications in surgery.
METHODS: We briefly introduce explainability methods, including global and individual explanatory features, methods for imaging data and time series, as well as similarity classification, and unraveled rules and laws.
RESULTS: Given the increasing interest in artificial intelligence within the surgical field, we emphasize the critical importance of transparency and interpretability in the outputs of applied models.
CONCLUSION: Transparency and interpretability are essential for the effective integration of AI models into clinical practice.
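The distinction the abstract draws between global and individual explanatory features can be illustrated with a minimal sketch. The model, weights, and feature names below are entirely hypothetical (a toy logistic "risk" model, not anything from the paper): a global explanation ranks features by their overall influence on the model, while an individual explanation attributes one patient's prediction to each input.

```python
import math

# Hypothetical toy surgical-risk model: logistic regression with fixed
# weights. Illustrative only; the features and values are invented.
WEIGHTS = {"age": 0.04, "bmi": 0.10, "asa_score": 0.80}
BIAS = -3.0

def predict(features):
    """Predicted risk in (0, 1) via the logistic function."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def global_importance():
    """Global explanation: rank features by weight magnitude,
    i.e. their overall influence on the model's output."""
    return dict(sorted(WEIGHTS.items(), key=lambda kv: -abs(kv[1])))

def local_attribution(features, baseline):
    """Individual explanation: weight * (value - baseline) gives each
    feature's exact contribution for a linear model (one patient)."""
    return {k: WEIGHTS[k] * (features[k] - baseline[k]) for k in features}
```

For a linear model these attributions are exact; for the deep models discussed in the paper, methods such as Grad-CAM (for imaging) play the analogous per-prediction role.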
Similar works
- Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization (2017 · 20,811 citations)
- Generative Adversarial Nets (2023 · 19,896 citations)
- Visualizing and Understanding Convolutional Networks (2014 · 15,336 citations)
- "Why Should I Trust You?" (2016 · 14,615 citations)
- Generative adversarial networks (2020 · 13,228 citations)
Authors
Institutions
- Heidelberg University (DE)
- University Hospital Heidelberg (DE)
- National Center for Tumor Diseases (DE)
- University of Basel (CH)
- University Hospital of Basel (CH)
- University Hospital Carl Gustav Carus (DE)
- Technische Universität Dresden (DE)
- Turing Institute (GB)
- University of California, Los Angeles (US)
- University of Cambridge (GB)
- Bridge University (SS)
- The Alan Turing Institute (GB)
- Artificial Intelligence in Medicine (Canada) (CA)