OpenAlex · Updated hourly · Last updated: 16.03.2026, 19:44

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Linguistic features of AI mis/disinformation and the detection limits of LLMs

2025 · 0 citations · Nature Communications · Open Access

Citations: 0 · Authors: 6 · Year: 2025

Abstract

The persuasive capability of large language models (LLMs) in generating mis/disinformation is widely recognized, but the linguistic ambiguity of such content and inconsistent findings on LLM-based detection reveal unresolved risks in information governance. To address the lack of Chinese datasets, this study compiles two datasets of Chinese AI mis/disinformation generated by multilingual models, covering both deepfakes and cheapfakes. Psycholinguistic and computational linguistic analyses reveal quality-modulation effects across eight language features (including sentiment, cognition, and personal concerns), as well as differences in toxicity scores and syntactic dependency distance. Furthermore, the study examines key factors influencing how zero-shot LLMs comprehend and detect AI mis/disinformation. The results show that although implicit linguistic distinctions exist, the intrinsic detection capability of LLMs remains limited. Moreover, the quality-modulation effects of these linguistic features may cause AI mis/disinformation detectors to fail. These findings highlight the major challenges of applying LLMs to information governance.
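One of the linguistic features the abstract mentions, syntactic dependency distance, can be illustrated with a minimal sketch. This is not the authors' implementation: the function below simply computes mean dependency distance (MDD) from a pre-parsed sentence, represented as a hypothetical list where each entry gives the head index of the corresponding token. A real analysis would obtain these heads from a dependency parser.

```python
# Illustrative sketch (not the paper's code): mean syntactic
# dependency distance (MDD) over a single parsed sentence.
# heads[i] is the index of token i's head; -1 marks the root,
# which is excluded from the average by convention.

def mean_dependency_distance(heads):
    """Average linear distance between each token and its head."""
    distances = [abs(i - h) for i, h in enumerate(heads) if h != -1]
    return sum(distances) / len(distances) if distances else 0.0

# Toy parse of "the cat sat": "the" -> "cat", "cat" -> "sat", "sat" = root
print(mean_dependency_distance([1, 2, -1]))  # -> 1.0
```

Larger MDD values indicate that words sit farther from the words they depend on, a property the study reports as differing between human-written and AI-generated text.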
