This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
AI and shared decision-making: a systematic review
Citations: 0
Authors: 4
Year: 2026
Abstract
Shared decision-making (SDM) is a collaborative process involving patients, their support networks, and the healthcare team, in which patients are given a central role in making decisions about their health through effective communication. Since its introduction into the medical field, artificial intelligence (AI) has been increasingly used to support the SDM process. This systematic review aimed to assess the precise relationship between AI systems and the SDM process, highlighting both the potential for improvement and the flaws. The review followed the PRISMA methodology. Three databases were searched for relevant literature: PubMed, Scopus (limited to the Medical field), and Web of Science (all fields). Two filtering rounds were performed, one on titles and abstracts and one on the full text of the articles. Extracted data included year, medical specialty/field, article type, AI method, developed or analyzed AI, clinical setting, use of AI, biases or limitations, funding, competing interests, ethical concerns, and algorithmic transparency/fairness. The authors also summarized the objective and contribution of each paper. The database search retrieved 927 records after duplicate removal. After the filtering rounds, 66 studies met the inclusion criteria, mostly articles published in 2024 (26.9%). Medical specialties were reported in 61.5% of studies, most frequently oncology (15.3%), orthopedics (7.7%), and cardiology (6.4%). Conceptual papers predominated (38.4%), followed by observational studies (20 reports) and reviews (18 reports). Almost half of the included articles (48.7%) did not specify an AI method. Among those that did, machine learning (28 reports) was most common, followed by others such as deep learning (9 reports) and large language models (6 reports).
The objectives and contributions of the papers were distilled and discussed, covering topics such as decision aids and conversation support, value consideration in recommender systems, time efficiency and aid in non-clinical tasks, black-box AI, fairness, reliability, and trust, human empathy, training and education, and design. The main risks that AI poses to the SDM process were identified as communication failures, lack of consideration for patient preferences, and absence of co-design. Nevertheless, AI holds significant promise: it can increase clinical efficiency, contribute to chronic disease management, and enable patients to engage in self-monitoring and treatment adherence, thereby supporting sustained participation in their care. AI can also act as a decision aid and enrich patient–clinician dialog by clarifying options and aligning recommendations with individual values, ultimately fostering more collaborative decision-making. AI should not be framed as a "negative" force for SDM; rather, it should be harnessed for the opportunities it presents. Research and education should focus on overcoming the identified obstacles.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations