This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
GPT-4 in a Cancer Center — Institute-Wide Deployment Challenges and Lessons Learned
Citations: 27
Authors: 9
Year: 2024
Abstract
The enormous potential for generative pretrained transformers (GPTs) and other artificial intelligence (AI) large language models (LLMs) to improve health care has become increasingly clear. Software tools based on LLMs have been shown to perform as well as or better than humans on many health care–related tasks, including generation of clinical documentation, extraction of structured data from medical records, performance on a growing number of medical board examination benchmarks, and writing accurate and empathetic responses to patients' medical questions. However, health care and cancer care settings pose unique ethical, legal, regulatory, and technical challenges for large-scale deployment and adoption of LLMs. Such challenges include the essentiality of patient data privacy and security, the direct negative consequences of errors and biases, the need for model interpretability and supporting evidence, the necessity of safeguarding intellectual property and proprietary data, and the difficulty of modifying clinical and operational workflows. Consequently, few LLMs are in use in hospitals outside of controlled research studies or small pilot programs, and none to our knowledge is yet broadly deployed in a dedicated cancer center. In this case study, we report the challenges and lessons learned in the evaluation and deployment of LLMs at the Dana-Farber Cancer Institute for use in all business areas, including basic research, clinical research, and operations, but not in direct clinical care. In early discussions about whether and how to proceed, we realized that although some risks could be mitigated by clear policy guardrails and a secure technical environment, others would remain, including those regarding compliance with rapidly evolving regulations. We also recognized that substantial, ongoing work would be required to ensure appropriate ethical consideration of each use case and to ensure patient- and human-centric decision-making.
After engaging in discussions over many months and employing a process framework for ethical implementation of AI in our cancer center, we believed it would be better to tackle these challenges as a community, rather than prohibit the use of LLMs altogether. Here, we detail aspects of sponsorship, governance, technical implementation, program launch, socialization, user feedback, and ongoing support and user training in preparation to make generative AI LLMs broadly available to our 12,500-member workforce in a compliant, auditable, and secure manner. We hope other institutions can benefit from our experience as they consider the deployment of these software tools to further their medical and research missions.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations