
This is an overview page with metadata for this scientific paper. The full article is available from the publisher.

Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross sectional analysis

2024 · 85 citations · 14 authors · BMJ · Open Access

Abstract

This study found that although effective safeguards are feasible to prevent LLMs from being misused to generate health disinformation, they were inconsistently implemented. Furthermore, effective processes for reporting safeguard problems were lacking. Enhanced regulation, transparency, and routine auditing are required to help prevent LLMs from contributing to the mass generation of health disinformation.

Similar works