This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Exploring the Ethical and Practical Considerations of Artificial Intelligence in Real-World Health Care Settings: Stakeholder Focus Group Study (Preprint)
Citations: 0
Authors: 6
Year: 2025
Abstract
<sec> <title>BACKGROUND</title> Artificial intelligence (AI) technologies continue to transform how we research human disease, diagnose and treat patients, and operate hospitals. However, emerging ethical dilemmas surrounding their design, use, and oversight demand both policy attention and empirical research. </sec> <sec> <title>OBJECTIVE</title> This study aims to explore current AI development, integration, and use activities across the Texas Medical Center (TMC), the largest medical center in the world, and identify emerging ethical priorities. </sec> <sec> <title>METHODS</title> We conducted 3 qualitative focus groups via Zoom (Zoom Video Communications, Inc) between May and June 2025 to gauge the perspectives of 19 clinicians, developers, administrators, and patient advocates on core aspects of clinical AI tools at the point of care. </sec> <sec> <title>RESULTS</title> Participants described current development and deployment of AI tools across the TMC, with areas of high potential focused on extending clinical expertise, reducing administrative burden, and improving cross-specialty collaboration. However, they also identified many challenges, including significant barriers to accessing quality datasets for training, insufficient systematic governance of the validation, auditing, and use of AI tools in the clinic, and limited patient involvement in AI development decisions. Discussion of model validation occurring primarily in well-resourced locations like the TMC raised concerns about a potential digital divide in health care. These concerns were heightened for practitioners working in safety-net hospitals and other under-resourced health care settings. Participants also highlighted that discussions of AI ethics at the development stage are currently lacking and suggested embedding bioethicists into development teams to address this gap.
Clinicians and patient advocates differed in their views on notifying patients about the use of AI at the point of care, warranting future research on this question. Accountability also remained an unresolved issue, with participants split on whether the provider should take full responsibility for any patient care errors resulting from AI. </sec> <sec> <title>CONCLUSIONS</title> These contributions identify the ethical tensions currently arising in the real-world daily work of professionals involved with health AI within a large regional academic medical center. Addressing these challenges will require AI-specific governance that ensures contextual validation, easy access to data, independent auditing, meaningful stakeholder involvement, and support and education for frontline users who must integrate these tools into their daily practice. </sec>
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,687 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,591 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,114 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,867 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations