This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Ethical Assessment of Generative AI Tools for Clinical Summarization Tasks
Citations: 0 · Authors: 4 · Year: 2026
Abstract
As healthcare organizations' use of generative AI moves from initial experimentation to scaled deployments, the need to build oversight systems that identify and address ethical challenges becomes more urgent. Among the AI applications attracting the strongest early interest are clinical summarization tools, which use large language models (LLMs). To assist healthcare organizations in weighing the adoption of LLMs, we describe an ethical assessment process employed at our healthcare system to identify problems that may affect patient care, so that they can be addressed prior to deployment or monitored over time to detect harms. The process uses stakeholder interviewing to explore risks and other concerns arising from the integration of AI tools into clinical workflows and to identify areas where the values and priorities of different stakeholder groups do not align. We describe ethical issues identified in assessments of tools that (1) draft end-of-shift nursing notes and (2) generate clinical notes from conversations between clinicians and patients.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,646 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,554 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,071 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,851 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations