OpenAlex · Updated hourly · Last updated: May 3, 2026, 14:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Can LLMs serve in identifying fake Health Information: it depends on how and who you ask. (Preprint)

2024 · 0 citations · Open Access
Open full text at publisher

0 Citations

8 Authors

2024 (Year)

Abstract

Misleading information has significant implications for society but can have a disastrous impact on health matters. Transformative artificial intelligence (AI) tools such as large language models (LLMs) have the potential for limitless content generation (including fake content), soon making internet information impossible to assess using traditional human approaches. We asked whether the same LLMs (GPT-4 and Gemini 1.5 Pro) could be part of a more scalable solution. We tested two publicly available LLMs for their ability to identify misinformation in HealthReleases previously labeled by human experts. We found that simple prompts lead to overall low accuracy (F1 Macro 0.45 for GPT-4 and 0.49 for Gemini 1.5 Pro), but very different profiles for each LLM. Adding the specific criteria used by experts to critically assess the Releases enhanced Gemini's performance (0.66) but surprisingly reduced GPT-4's (0.37). We therefore developed a novel approach incorporating summaries of expert feedback into prompts and observed major improvements in performance for both LLMs (GPT-4: 0.63 and Gemini 1.5 Pro: 0.96). Our study provides the first use case of LLMs for high-throughput proofing of medical text, but more importantly provides insights into LLMs' "truth biases". We provide a novel paradigm for integrating knowledge into the prompts, which may reduce the need for LLM training and the requirement for ever larger datasets and compute power. Importantly, we show how experts could and need to be involved in the use of LLMs to enhance their performance and potentially minimize the data wall issue.
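The strategy sketched in the abstract (classifying a health release with a simple prompt versus a prompt enriched with expert criteria or summarized expert feedback, then scoring with macro F1) can be illustrated with a minimal, hypothetical sketch. The prompt wording, the example criteria, and the classify_release stub below are illustrative assumptions, not the authors' implementation; only the macro-F1 metric corresponds to what the abstract reports.

```python
# Minimal sketch (assumptions throughout): build a classification prompt,
# optionally enriched with expert guidance, and score predictions with
# macro F1 as in the abstract. The LLM call is stubbed out so the sketch
# runs offline.
from sklearn.metrics import f1_score

# Hypothetical expert criteria; the real criteria are those used by the
# human reviewers who labeled the health releases.
EXPERT_CRITERIA = (
    "- Does the release quantify benefits and harms?\n"
    "- Does it disclose costs and conflicts of interest?\n"
    "- Does it avoid unwarranted causal claims?\n"
)

def build_prompt(release_text: str, expert_context: str | None = None) -> str:
    """Assemble a classification prompt, optionally adding expert knowledge."""
    prompt = (
        "You are reviewing a health news release for misinformation.\n"
        "Answer with exactly one word: 'reliable' or 'misleading'.\n\n"
    )
    if expert_context:
        prompt += f"Apply the following expert guidance:\n{expert_context}\n\n"
    return prompt + f"Release:\n{release_text}\n"

def classify_release(release_text: str, expert_context: str | None = None) -> str:
    """Placeholder for an LLM call (e.g. GPT-4 or Gemini 1.5 Pro);
    replace the body with a real API request and answer parsing."""
    _prompt = build_prompt(release_text, expert_context)
    return "misleading"  # dummy return so the sketch is runnable

if __name__ == "__main__":
    releases = [
        "New supplement cures diabetes in days.",
        "Trial shows modest blood-pressure reduction with drug X.",
    ]
    gold = ["misleading", "reliable"]  # illustrative expert labels
    preds = [classify_release(r, EXPERT_CRITERIA) for r in releases]
    # Macro F1, the metric reported in the abstract.
    print("F1 macro:", f1_score(gold, preds, average="macro", zero_division=0))
```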

Topics

Misinformation and Its Impacts · Artificial Intelligence in Healthcare and Education · Social Media in Health Education