This is an overview page with metadata for this scholarly article. The full text is available from the publisher.
Hallucination-Aware Optimization for Large Language Model-Empowered Communications
Citations: 1
Authors: 8
Year: 2026
Abstract
Large Language Models (LLMs) have significantly advanced communications fields such as Telecom Q&A, mathematical modeling, and optimization solving. However, LLMs suffer from an inherent issue known as hallucination, i.e., generating fact-conflicting or irrelevant content. This problem critically undermines the applicability of LLMs in communication systems, yet it has not been systematically explored. Hence, this article provides a comprehensive review of LLM applications in communications, with a particular emphasis on hallucination mitigation. Specifically, we analyze the causes of hallucination and summarize mitigation strategies from both model- and system-based perspectives. Afterward, we review representative LLM-empowered communication schemes, detailing their hallucination issues and comparing their mitigation strategies. Finally, we present a case study of a Telecom-oriented LLM that uses a novel hybrid approach to reduce hallucination and improve the service experience. On the model side, we publish a Telecom hallucination dataset and apply direct preference optimization to fine-tune LLMs, yielding a 20.6% improvement in the correct rate. On the system side, we construct a mobile-edge mixture-of-experts architecture for optimal LLM expert activation. Our research aims to propel the field of LLM-empowered communications forward by detecting and minimizing the impact of hallucinations.
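Only the abstract is reproduced on this page, so the sketches below illustrate the two techniques it names rather than the authors' implementation. First, direct preference optimization: this minimal PyTorch sketch follows the standard DPO objective (Rafailov et al., 2023). The batch size, the beta value, and the framing of hallucinated answers as the "rejected" responses are assumptions; the paper's Telecom hallucination dataset is not available here.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Each input is a batch of per-sequence log-probabilities (summed over
    # tokens) under either the trainable policy or the frozen reference model.
    # "chosen" = the preferred (non-hallucinated) answer in a preference pair,
    # "rejected" = the dispreferred (hallucinated) one -- an assumption about
    # how a hallucination dataset would be paired for DPO.
    chosen_margin = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_margin = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the implicit-reward gap between chosen and rejected answers.
    return -F.logsigmoid(chosen_margin - rejected_margin).mean()

# Toy check with random log-probabilities for a batch of 4 preference pairs.
pol_c = torch.randn(4, requires_grad=True)
pol_r = torch.randn(4, requires_grad=True)
ref_c, ref_r = torch.randn(4), torch.randn(4)
loss = dpo_loss(pol_c, pol_r, ref_c, ref_r)
loss.backward()  # in real training, gradients flow into the policy LLM

Second, the mobile-edge mixture-of-experts architecture presumably relies on some form of sparse expert routing; the generic top-k gate below illustrates the expert-activation mechanism only and is not the paper's design. All dimensions are placeholder values.

class TopKGate(torch.nn.Module):
    # Routes each input to its k highest-scoring experts and returns
    # renormalized mixing weights for the activated experts only.
    def __init__(self, hidden_dim: int, num_experts: int, k: int = 2):
        super().__init__()
        self.router = torch.nn.Linear(hidden_dim, num_experts)
        self.k = k

    def forward(self, x: torch.Tensor):
        scores = self.router(x)                      # (batch, num_experts)
        top_vals, top_idx = scores.topk(self.k, -1)  # keep only k experts
        weights = F.softmax(top_vals, dim=-1)        # renormalize over the k
        return weights, top_idx

gate = TopKGate(hidden_dim=16, num_experts=8, k=2)
w, idx = gate(torch.randn(4, 16))  # which experts each input activates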
Related Works
Federated Learning: Challenges, Methods, and Future Directions
2020 · 4,398 citations
Deep Learning: Methods and Applications
2014 · 3,306 citations
Mobile Edge Computing: A Survey on Architecture and Computation Offloading
2017 · 2,900 citations
Machine Learning: An Artificial Intelligence Approach
2013 · 2,639 citations
Machine learning and deep learning
2021 · 2,335 citations