This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
AI of the Beholder: How Surgical and Medical Specialties View Intelligent Technology
Citations: 0
Authors: 8
Year: 2026
Abstract
Background: How clinicians conceptualize artificial intelligence reveals underlying assumptions about professional authority and decision-making. This study examined whether surgical and medical specialties frame AI differently in research and whether such differences reflect divergent professional norms.

Methods: We analyzed 1561 AI-related research abstracts published between January 1, 2019, and March 27, 2025, in 30 high-impact journals. Abstracts were identified through a structured PubMed query and classified by a large language model (DeepSeek Reasoner) along three dimensions: the human-AI relationship, the impact on professional autonomy, and the locus of decision control. A stratified validation sample was independently coded by a human rater. Chi-square testing and logistic regression were used to assess differences by specialty and publication year.

Results: Compared with medical abstracts, surgical abstracts more frequently framed AI as assistive (69.8% vs 54.9%; P < .001), explicitly addressed professional autonomy (73.5% vs 61.3%; P < .001), and specified decision control (69.3% vs 58.6%; P < .001). These differences persisted across the 7-year period. In multivariable logistic regression, assistive framing (OR, 2.43; 95% CI, 1.82-3.23) and explicit autonomy discussion (OR, 1.46; 95% CI, 1.11-1.92) were independently associated with surgical specialty.

Conclusions: Surgical and medical specialties exhibit distinct patterns in how they conceptualize AI, reflecting established perspectives on authority, expertise, and the human-machine relationship. These framings have implications for AI tool design, clinical implementation, and healthcare governance. Recognizing these conceptual differences is critical as healthcare transitions toward algorithmically mediated decision-making, because they may shape the future culture of clinical care.
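The group comparisons in the Results are 2x2 chi-square tests of independence. A minimal sketch of that test, using the reported assistive-framing percentages (69.8% vs 54.9%) with a hypothetical 500/1061 surgical/medical split, since the abstract reports only the overall n of 1561:

```python
# Pearson chi-square test of independence on a 2x2 contingency table,
# illustrating the kind of comparison reported in the Results.
# NOTE: the counts below are hypothetical -- the abstract gives only
# percentages and the total n; a 500/1061 split is assumed.

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Rows: surgical, medical; columns: assistive framing, other framing.
surgical = [349, 151]   # 349/500  = 69.8% (assumed 500 surgical abstracts)
medical  = [583, 478]   # 583/1061 = 54.9% (assumed 1061 medical abstracts)
stat = chi_square_2x2([surgical, medical])
# With df = 1, the critical value at P = .001 is 10.83; the statistic
# under these assumed counts lies well above it, consistent with the
# reported P < .001.
```

The multivariable odds ratios (e.g., OR 2.43 for assistive framing) come from logistic regression on the same classifications, adjusting the group comparison for the other coded dimensions and publication year.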
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations