This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
A Qualitative Interview Study Investigating Patient, Health Professional, and Developer Perspectives on Real-World Implementation of Patient-Centered AI Systems
0
Citations
10
Authors
2025
Year
Abstract
Our objective was to triangulate patient, health professional, and developer perspectives on implementing patient-centered artificial intelligence (AI) systems. We conducted semi-structured interviews with patients (N = 18), health professionals (N = 8), and AI developers (N = 8). We created interview guides informed by frameworks in bioethics and health informatics. We used a predictive algorithm for assessing risk of postpartum depression as a use case to ground our discussions. Our team analyzed transcripts of the interview recordings using thematic, directed content analysis and the constant comparative method. Participants considered it highly important to mitigate potential harms caused by AI (e.g., bias, stigma, or patient anxiety). They also believed that AI must provide clinical benefit by allowing health professionals and patients to easily act on AI output. To take safe action, end users needed transparency to understand the AI's accuracy and the predictors driving risk. Patient participants wanted health professionals to interpret AI output, but health professionals did not always feel they had the time or training to do so. Participants also raised concerns about how data quality may affect AI accuracy, who may be responsible for inappropriate actions taken based on AI, and issues of data security, privacy, and accessibility. Our results support real-world implementation of more patient-centered AI tools by: providing health professionals with competencies for discussing AI-based risks; engaging patients and health professionals throughout the development process; communicating AI output inclusively to health professionals and patients; and implementing multi-layer systems of AI governance.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations