This is an overview page with metadata for this scientific article. The full article is available from the publisher.
AI explainability in oculomics: How it works, its role in establishing trust, and what still needs to be addressed
Citations: 6
Authors: 6
Year: 2025
Abstract
Recent developments in artificial intelligence (AI) have seen a proliferation of algorithms that are now capable of predicting a range of systemic diseases from retinal images. Unlike traditional retinal disease detection AI models, which are trained on well-recognised retinal biomarkers, systemic disease detection or "oculomics" models use a range of often poorly characterised retinal biomarkers to arrive at their predictions. As the retinal phenotype that oculomics models use may not be intuitive, clinicians have to rely on the developers' explanations of how these algorithms work in order to understand them. The discipline of understanding how AI algorithms work employs two similar but distinct terms: Explainable AI and Interpretable AI (iAI). Explainable AI describes the holistic functioning of an AI system, including its impact and potential biases. Interpretable AI concentrates solely on examining and understanding the workings of the AI algorithm itself. iAI tools are therefore what the clinician must rely on if they are to understand how the algorithm works and whether its predictions are reliable. The iAI tools that developers use can be delineated into two broad categories: intrinsic methods, which improve transparency through architectural changes, and post-hoc methods, which explain trained models via external algorithms. Currently, post-hoc methods, class activation maps in particular, are far more widely used than other techniques, but they have their limitations, especially when applied to oculomics AI models. Aimed at clinicians, we examine how the key iAI methods work, what they are designed to do, and what their limitations are when applied to oculomics AI. We conclude by discussing how combining existing iAI techniques with novel approaches could allow AI developers to better explain how their oculomics models work and reassure clinicians that the results issued are reliable.
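The class activation maps mentioned in the abstract can be sketched minimally. In the original formulation, for a network ending in global average pooling followed by a linear classifier, the map for a target class is the class-weighted sum of the final convolutional feature maps. The sketch below uses random stand-in data; the shapes, the `fmaps` and `class_weights` arrays, and the function name are illustrative assumptions, not taken from the article.

```python
import numpy as np

# Illustrative stand-ins (not real model outputs):
# feature maps from the last conv layer, shape (channels, H, W)
rng = np.random.default_rng(0)
fmaps = rng.random((16, 32, 32))
# classifier weights mapping pooled features to class scores, shape (n_classes, channels)
class_weights = rng.standard_normal((2, 16))

def class_activation_map(fmaps, class_weights, target_class):
    # Weighted sum of feature maps, using the target class's classifier weights
    cam = np.tensordot(class_weights[target_class], fmaps, axes=1)
    cam = np.maximum(cam, 0)          # keep only positive evidence for the class
    return cam / (cam.max() + 1e-8)   # normalise to [0, 1] for display as a heatmap

cam = class_activation_map(fmaps, class_weights, target_class=0)
print(cam.shape)  # (32, 32)
```

In practice the normalised map is upsampled to the input image size and overlaid on the retinal image as a heatmap; the limitations discussed in the article concern how faithfully such heatmaps reflect the poorly characterised biomarkers that oculomics models actually use.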
Related works
Optical Coherence Tomography
1991 · 13,567 citations
Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs
2016 · 7,204 citations
Global Prevalence of Glaucoma and Projections of Glaucoma Burden through 2040
2014 · 6,677 citations
YOLOv3: An Incremental Improvement
2018 · 5,881 citations
Ranibizumab for Neovascular Age-Related Macular Degeneration
2006 · 5,793 citations