This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Responsible Use of Artificial Intelligence in Health Care: Evidence, Challenges, and Best Practices: An Opinion of the Drug Information Practice and Research Network of the American College of Clinical Pharmacy
Citations: 0
Authors: 24
Year: 2025
Abstract
As artificial intelligence (AI) continues to rapidly reshape health care, there is a critical need for clear frameworks for clinicians to ensure ethical, equitable, and effective integration and use of AI in patient care. Key integrations of AI include enhancing health communications, patient engagement, clinicians' training, pharmaceutical advertising, clinical decision‐making, and automation of clinical operations and workflow. However, there are growing concerns related to regulatory gaps, the spread of misinformation, security threats, patient and data privacy leaks, and widening health disparities. These concerns are exacerbated by limited institutional infrastructure and limited AI literacy of clinicians and patients. Recent policy developments reflect efforts to guide responsible AI development and use. While progress has been made, the lack of standardized human oversight remains a critical gap, particularly as policies may not fully consider the challenges and complexities at institutional, societal, technical, and individual levels. Thus, herein, the Drug Information Practice and Research Network (DI PRN) of the American College of Clinical Pharmacy (ACCP): (1) explores current and emerging multifaceted challenges, a call to action, and opportunities of AI integration in health care, including examination of the regulatory, ethical, operational, and health equity implications; and (2) provides practical recommendations for responsible use of AI through DI PRN‐developed example case‐based approaches and best practices infographics to enhance AI literacy for diverse learners, including clinicians, trainees, and patients. This DI PRN opinion paper highlights the importance of proactive governance frameworks and equips and empowers diverse learners with practical AI literacy tools to confidently engage with AI technologies in patient care.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations
Authors
- Maha Abdalla
- Maha Saad
- Daniel Abazia
- Ashish Advani
- Abdullah Alhammad
- Allison Bernknopf
- Veera Raghavulu Bitra
- Matthew L. Blommel
- Kelly M. Conn
- Micheline Andel Goldwire
- Rena Gosser
- Tomona Iso
- Steven T. Johnson
- Karen L. Kier
- Audrey B. Kostrzewa
- Daniel Majerczyk
- Faria Munir
- Jennifer Phillips
- Miguel Segovia
- Julie B. Sibbesen
- Marina Sehman
- Christine D. Sommer
- Jennifer E. Stark
- Krisy‐Ann Thornby
Institutions
- South College (US)
- St. John's University (US)
- Rutgers, The State University of New Jersey (US)
- KDH Research & Communication (United States) (US)
- King Saud University (SA)
- Ferris State University (US)
- University of Botswana (BW)
- West Virginia University (US)
- St. John Fisher College (US)
- Regis University (US)
- University of Washington (US)
- Loma Linda University (US)
- West Health (US)
- Ohio Northern University (US)
- University of Wisconsin System (US)
- Concordia University Wisconsin (US)
- Roosevelt University (US)
- University of Illinois Chicago (US)
- Mayo Clinic in Florida (US)
- Mayo Clinic in Arizona (US)
- Mayo Clinic (US)
- First Data (United States) (US)
- Veterans Health Care System of the Ozarks (US)
- Palm Beach Atlantic University (US)