This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Evaluating trustworthiness in AI-Based diabetic retinopathy screening: addressing transparency, consent, and privacy challenges
0 citations · 6 authors · 2025
Abstract
Background: Artificial intelligence (AI) offers significant potential to drive advancements in healthcare; however, the development and implementation of AI models present complex ethical, legal, social, and technical challenges, as data practices often undermine regulatory frameworks in various regions worldwide. This study explores stakeholder perspectives on the development and deployment of AI algorithms for diabetic retinopathy (DR) screening, with a focus on ethical risks, data practices, governance, and emerging shortcomings in the Global South AI discourse.

Methods: Fifteen semi-structured interviews were conducted with ophthalmologists, program officers, AI developers, bioethics experts, and legal professionals. Thematic analysis was guided by OECD principles for responsible AI stewardship. Interviews were analyzed using MAXQDA software to identify themes related to AI trustworthiness and ethical governance.

Results: Six key themes emerged regarding the perceived trustworthiness of AI: algorithmic effectiveness, responsible data collection, ethical approval processes, explainability, implementation challenges, and accountability. Participants reported critical shortcomings in AI companies’ data collection practices, including a lack of transparency, inadequate consent processes, and limited patient awareness about data ownership. These findings highlight how unchecked data collection and curation practices may reinforce data colonialism in low- and middle-income healthcare systems.

Conclusion: Ensuring trustworthy AI requires transparent and accountable data practices, robust patient consent mechanisms, and regulatory frameworks aligned with ethical and privacy standards. Addressing these issues is vital to safeguarding patient rights, preventing data misuse, and fostering responsible AI ecosystems in the Global South.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12910-025-01265-7.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations