This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
A Pilot Study of Pulmonologists’ Receptivity to Artificial Intelligence Use
Citations: 0
Authors: 13
Year: 2025
Abstract
RATIONALE: Artificial intelligence (AI) has the potential to transform healthcare, including cancer care. Yet many are concerned about real or perceived risks associated with AI, a phenomenon known as the AI trust gap. Addressing all stakeholders’ concerns is essential to bridging the trust gap and leveraging the full potential of AI. This study explores pulmonologists’ perspectives on the use of AI in healthcare.

METHODS: The Department of Veterans Affairs (VA) developed a trustworthy AI framework with six guiding principles: AI should be purposeful, safe and effective, secure and private, fair and equitable, transparent and explainable, and accountable and monitored. We generated a survey instrument with questions mapped to the AI framework domains, adapting existing survey items identified through a literature search where possible. Five clinical vignettes related to AI use in lung cancer treatment were also developed with the guidance of clinical experts and included in the survey. Each vignette was followed by questions assessing respondents’ receptivity to using AI algorithms for clinical decision support. We then invited fellows and faculty of an academic pulmonary and critical care division to pilot test the online survey, providing feedback in advance of a national VA survey. Descriptive analyses of responses were calculated using SAS software.

RESULTS: Twenty-four of 66 invited pulmonologists participated (36.3% response rate), with fellows and attendings evenly represented. Nearly half (45.8%) reported managing 26-50 patients with lung cancer in the past year. Most used AI daily in their personal lives (66.7%) and showed more trust than distrust in AI for non-clinical decision-making (41.6% vs. 29.2%). However, 79.2% were unsure about or reported not using AI in clinical settings. Across all domains, the strongest concerns were about transparency and the ability to explain results, followed by equity and fairness, and safety and effectiveness. Providers showed slightly more concern than enthusiasm for AI in lung cancer decision-making (6.7 vs. 5.1). However, 95.8% of providers reported they would seek guidance from AI in lung cancer dilemmas related to imaging and histology. Providers were least likely to use AI in assessing the need for invasive staging (Figure 1), and 83.3% would recommend invasive staging contrary to an AI recommendation.

CONCLUSION: This pilot study revealed pulmonologists’ cautious receptivity to AI in lung cancer clinical care, echoing concerns from prior surveys of patients and non-pulmonologist providers. A national survey of VA physicians from multiple specialties who manage patients with lung cancer is underway.
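The descriptive analyses were run in SAS; as a minimal illustrative sketch, the same kind of proportion calculation in Python might look like the following. The response vector below is hypothetical, constructed so that 16 of 24 "yes" answers reproduce the 66.7% daily-personal-use figure reported in the abstract; only the sample size (n = 24) comes from the study.

```python
n = 24  # respondents (24 of 66 invited pulmonologists)

# Hypothetical yes/no item: "Do you use AI daily in your personal life?"
# 16 "yes" responses out of 24 reproduce the 66.7% figure in the abstract.
daily_ai_use = [True] * 16 + [False] * (n - 16)

share = sum(daily_ai_use) / n  # sum() counts True values
print(f"Daily personal AI use: {share:.1%}")  # prints: Daily personal AI use: 66.7%
```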
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- Boston University (US)
- VA Boston Healthcare System (US)
- VA New England Healthcare System (US)
- Richard L. Roudebush VA Medical Center (US)
- Indiana University – Purdue University Indianapolis (US)
- VA Greater Los Angeles Healthcare System (US)
- United States Department of Veterans Affairs (US)
- Veterans Health Administration (US)