This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Artificial Intelligence in Radiology: Perceptions, Adoption Barriers, and Trust Among Iranian Radiologists in a Global Context
Citations: 3
Authors: 6
Year: 2025
Abstract
Artificial intelligence (AI) is transforming radiology globally, yet adoption varies significantly across regions due to cultural, educational, and infrastructural factors. This study examines Iranian radiologists’ perceptions, trust, and barriers to AI adoption through a cross-sectional survey of 128 professionals (radiologists, residents, and technologists) from diverse healthcare settings. Results revealed cautious optimism: 78.1% anticipated AI would significantly impact radiology within a decade, primarily as a workflow optimizer (69.5%) or second reader (73.4%). However, critical barriers emerged, including lack of formal AI training (77.3% had none), low confidence in AI tools (mean score: 2.35/5), and concerns about reliability (52.3%) and legal accountability (46.1%). Only 29.7% trusted AI-generated reports (90% accuracy), with 83.6% demanding mandatory human oversight. Demographic differences were notable; younger professionals (<35 years) were more optimistic about AI’s augmentative role (p < 0.05). These findings align with trends in low- and middle-income countries (LMICs), where limited training and infrastructure hinder adoption compared to high-income regions. The study highlights urgent needs: integrating AI into radiology curricula, pilot programs to build trust, and regulatory frameworks addressing transparency and liability. By addressing these challenges, Iran could leverage AI’s potential while navigating LMIC-specific constraints. This research contributes to global discourse on equitable AI adoption by contextualizing Iran’s position alongside international benchmarks.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations