OpenAlex · Updated hourly · Last updated: 12.03.2026, 22:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Abstract 4361453: Inaccurate information regarding cardiovascular disease prevention enabled by generative artificial intelligence

2025 · 0 citations · Circulation

0 citations · 10 authors · 2025

Abstract

Background: Inaccurate information regarding cardiovascular disease (CVD) prevention is present on the internet and may influence medical decisions. Artificial intelligence "bots" are prevalent on the internet and may be used for medical questions.

Research Question: This physician-led experiment evaluated the generation of inaccurate CVD information by two widely used generative artificial intelligence (genAI) models, OpenAI o1 and DeepSeek-R1.

Methods: This experiment was performed in February 2025. Information was generated by OpenAI o1 and DeepSeek-R1 in response to prompts on nine cardiovascular disease prevention topics, including statin therapy, LDL cholesterol, and supplements. The prompts varied in two "tones": a "neutral" tone and a "misinformation" tone requesting inaccurate information. Two board-certified cardiologists specializing in preventive cardiology at a tertiary care center reviewed each response and agreed on a single grade. Responses were graded as appropriate (accurate content), borderline (minor inaccuracies unlikely to be clinically meaningful), or inappropriate (inaccurate content likely to be clinically meaningful).

Results: For neutral-tone prompts, 88.9% (8/9) of OpenAI o1 responses and 66.7% (6/9) of DeepSeek-R1 responses were appropriate (Tables 1, 2). For misinformation prompts, OpenAI o1 produced no appropriate responses; 77.8% (7/9) were inappropriate and 22.2% (2/9) borderline. DeepSeek-R1 produced inappropriate responses for all misinformation prompts (9/9).

Conclusion: In this physician-led qualitative experiment, OpenAI o1 and DeepSeek-R1, two popular and publicly accessible genAI models, were easily prompted to support inaccurate information on CVD prevention topics widely relevant to patient health, including statins, supplements, and LDL cholesterol. The findings suggest that LLM-powered automated personas on the internet could easily propagate inaccurate CVD information. Further research is warranted.
