This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Deepfakes and Large Language Models: Risks, Defenses, and the Future of Generative AI
Citations: 0
Authors: 3
Year: 2026
Abstract
Generative Artificial Intelligence (GenAI) is rapidly changing how digital content is created and consumed. Two widely used GenAI technologies are deepfakes and large language models (LLMs). Deepfakes can generate realistic images, videos, audio, and text that imitate real people, while LLMs provide robust language understanding, reasoning, and multimodal coordination. When combined, these technologies significantly increase the realism, speed, and accessibility of synthetic media, raising concerns about misinformation, impersonation, and loss of digital trust. At the same time, the same reasoning capabilities that enable deepfake generation can also be leveraged for detection, verification, and mitigation. This article explores how LLMs strengthen deepfake generation by enabling realistic scripts, coordinated multimodal outputs, and scalable automation. Furthermore, it highlights how LLMs can also be used to fight deepfakes through semantic analysis, cross-modal verification, and provenance-based safeguards. By examining this dual role in an agentic AI setting, the article emphasizes why LLMs are central to both the deepfake problem and its defense.