This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Awareness, trust, and expectations of AI for glaucoma care among Bulgarian ophthalmologists: Role of demographic factors
Citations: 1 · Authors: 4 · Year: 2026
Abstract
Artificial intelligence (AI) holds promise for enhancing glaucoma screening and management, yet its adoption depends on clinician perceptions, particularly in resource-limited regions like Eastern Europe. This study explores awareness, trust, and expectations of AI in glaucoma care among Bulgarian ophthalmologists, examining the influence of demographic factors such as age, gender, and professional experience. A cross-sectional survey was conducted from March to May 2024 among 156 ophthalmologists and residents recruited via Bulgarian professional societies. The 25-question survey, informed by the Technology Acceptance Model and validated (content validity index = 0.85; Cronbach's α = 0.78), assessed awareness, trust (5-point Likert scale), and expectations. Data were analyzed using non-parametric tests (chi-square, Spearman correlation) and thematic analysis for qualitative responses. The study was approved by the Ethics Committee of Medical University of Varna (No141/14.03.2024), with informed consent obtained and adherence to the Declaration of Helsinki. Participants (73.1% female; median age 35 years, IQR 10) showed varying awareness, with less experienced clinicians (<5 years) being better informed (χ² = 17.89, p < 0.001). Trust was low (7.5% fully trusted AI diagnosis; 5.7% for treatment), with gender differences (males more distrustful of AI diagnosis, p = 0.009). Younger respondents were more optimistic about AI's impact (ρ = 0.268, p < 0.001). Qualitative themes highlighted diagnostic utility (95% of mentions) and concerns such as training deficiencies (45%). Bulgarian ophthalmologists exhibit cautious optimism toward AI in glaucoma care, shaped by demographics, underscoring the need for targeted training to build trust. These findings inform regional AI implementation strategies, aligning with ethical priorities for equitable digital health adoption.
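The abstract reports a chi-square test of independence (awareness vs. experience group) and a Spearman rank correlation (age vs. optimism). A minimal sketch of how such non-parametric tests might be run with SciPy is shown below; the contingency table and survey vectors are entirely synthetic illustrations, not the study's actual data.

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 contingency table: awareness (aware / unaware)
# crossed with experience group (<5 years / >=5 years).
table = np.array([[40, 15],
                  [25, 35]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")

# Spearman rank correlation between respondent age and a 1-5
# Likert-scale optimism score (synthetic vectors, n = 156).
rng = np.random.default_rng(0)
age = rng.integers(25, 65, size=156)
optimism = np.clip(5 - (age - 25) // 10 + rng.integers(-1, 2, size=156), 1, 5)
rho, p_rho = stats.spearmanr(age, optimism)
print(f"rho = {rho:.3f}, p = {p_rho:.4f}")
```

Note that `chi2_contingency` applies Yates' continuity correction to 2x2 tables by default; whether the study used that correction is not stated in the abstract.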
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations