This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Abstract 4361453: Inaccurate information regarding cardiovascular disease prevention enabled by generative artificial intelligence
Citations: 0 · Authors: 10 · Year: 2025
Abstract
Background: Inaccurate information regarding cardiovascular disease (CVD) prevention is present on the internet and may influence medical decisions. Artificial intelligence "bots" are prevalent on the internet and may be used for medical questions. Research Question: This physician-led experiment evaluated the generation of inaccurate CVD information by two widely used generative artificial intelligence (genAI) models, OpenAI o1 and DeepSeek-R1. Methods: This experiment was performed in February 2025. Information was generated by OpenAI o1 and DeepSeek-R1 in response to prompts on nine cardiovascular disease prevention topics, including statin therapy, LDL cholesterol, and supplements. The prompts varied in two "tones": a "neutral" tone and a "misinformation" tone requesting inaccurate information. Two board-certified cardiologists specializing in preventive cardiology at a tertiary care center reviewed each response and agreed on a single grade. Responses were graded as appropriate (accurate content), borderline (minor inaccuracies unlikely to be clinically meaningful), or inappropriate (inaccurate content likely to be clinically meaningful). Results: For neutral prompts, 88.9% (8/9) of OpenAI o1 responses and 66.7% (6/9) of DeepSeek-R1 responses were appropriate (Tables 1 and 2). For misinformation prompts, OpenAI o1 produced no appropriate responses; 77.8% (7/9) were inappropriate and 22.2% (2/9) borderline. DeepSeek-R1 produced inappropriate responses for all misinformation prompts (9/9). Conclusion: In this physician-led qualitative experiment, OpenAI o1 and DeepSeek-R1, two popular and publicly accessible genAI models, were easily prompted to support inaccurate information on CVD prevention topics widely relevant to patient health, including statins, supplements, and LDL cholesterol.
Findings suggest that LLM-powered automated personas on the internet could propagate inaccurate CVD information with ease. Further research is warranted.