OpenAlex · Updated hourly · Last updated: 03.05.2026, 17:43

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Key Information Influencing Patient Decision-Making About AI in Health Care: Survey Experiment Study

2025 · 0 citations · Journal of Medical Internet Research · Open Access

0 citations · 8 authors · Year: 2025

Abstract

Background: Artificial intelligence (AI)-enabled devices are increasingly used in health care. However, there has been limited research on patients' informational preferences, including which elements of AI device labeling enhance patient understanding, trust, and acceptance. Clear and effective patient-facing communication is essential to address patient concerns and support informed decision-making regarding AI-enabled care.

Objective: We addressed 3 aims using simulated AI device labels in a cardiovascular context. First, we identified key information elements that influence patient trust and acceptance of an AI device. Second, we examined how these effects varied based on patient characteristics. Third, we explored how patients evaluated the informational content of AI labels and the labels' perceived effectiveness in informing decision-making about the use of the AI device, building trust in the device, and shaping their intention to use it in their health care.

Methods: We recruited 340 US patients from ResearchMatch.org to participate in a web-based survey that contained 2 experiments. In the discrete choice experiment, participants indicated preferences in terms of trust and acceptance regarding 16 pairs of simulated AI device labels that varied across 8 types of information needs identified in our previous qualitative work. In the single profile factorial experiment, participants evaluated 4 randomly assigned label prototypes regarding the label's legibility, comprehensibility, information overload, credibility, and perceived effectiveness in informing about the AI device, as well as participants' trust in the AI device and intention to use the device in their health care. Data were analyzed using mixed effects binary or ordinal logistic regression.
Results: The discrete choice experiment showed that information about regulatory approval, high device performance, provider oversight, and AI's value added to usual care significantly increased the likelihood of patient trust by 14.1%-19.3% and acceptance by 13.3%-17.9%. Subgroup analyses revealed variations based on patient characteristics such as familiarity with AI, health literacy, and recency of last medical checkup. The single profile factorial experiment showed that patients reported good label comprehension, and that information about provider oversight, regulatory approval, device performance, and AI's added value improved perceived credibility and effectiveness of the AI label (odds ratio [OR] range: 1.35-2.05), reduced doubts in the AI device (OR range: 0.61-0.77), and increased trust and intention to use the AI device (OR range: 1.47-1.73). However, information about data privacy and safety management protocols was less influential.

Conclusions: Patients value information about an AI device's performance, provider oversight, regulatory status, and added value during decision-making. Providing transparent, easily understandable information about these aspects is critical to support patient determinations of trust and acceptance of AI-enabled health care. Information elements' impact on patient trust and acceptance varies by patient characteristics, highlighting the need for a tailored approach to address the concerns of diverse patient groups about AI in health care.
