This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluating Hallucinations in Large Language Models (LLM): Metrics and Mitigation
Citations: 0 · Authors: 3 · Year: 2025
Abstract
The rapid development of Large Language Models (LLMs) has transformed natural language processing and made it possible to produce human-like content across a variety of applications. One major unresolved issue, however, is hallucination, in which LLMs generate information that is not supported by the input data or by empirical knowledge. These hallucinations compromise the quality and dependability of AI-generated content and pose hazards in critical applications, including the legal, medical, and educational fields. Resolving this issue is essential for enhancing the reliability and practical applicability of LLMs. Through an approach that combines quantitative metrics and visualization techniques, this study seeks to assess hallucinations in LLMs. The overall objective is to establish robust methods for identifying, quantifying, and comparing hallucination patterns across different models and applications. By scrutinizing the performance of several LLMs, the research aims to determine patterns and causative factors of hallucinations, which in turn should facilitate the development of more accurate and dependable models. The findings should enable a better understanding of the mechanisms of hallucination and the development of effective mitigation strategies.
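The abstract refers to quantitative measures for hallucination without specifying them on this page. As a loose illustration only (not the paper's actual method), one of the crudest possible measures is a lexical-support rate: the fraction of generated tokens that never appear in the source text. The function name and example strings below are invented for illustration.

```python
import re

def hallucination_rate(source: str, generated: str) -> float:
    """Illustrative sketch, not the paper's metric: fraction of generated
    tokens with no lexical support in the source (0.0 = fully supported)."""
    tokenize = lambda text: re.findall(r"[a-z0-9]+", text.lower())
    source_tokens = set(tokenize(source))
    generated_tokens = tokenize(generated)
    if not generated_tokens:
        return 0.0
    unsupported = [t for t in generated_tokens if t not in source_tokens]
    return len(unsupported) / len(generated_tokens)

# The generated sentence introduces "renovated in 2019", which the source never states.
src = "The Eiffel Tower is located in Paris."
gen = "The Eiffel Tower in Paris was renovated in 2019."
print(round(hallucination_rate(src, gen), 2))  # → 0.33
```

Real evaluation metrics discussed in the hallucination literature (e.g. entailment-based or QA-based faithfulness scores) are far more robust than token overlap; this sketch only conveys the idea of scoring generations against a source.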
Related Works
The Psychological Meaning of Words: LIWC and Computerized Text Analysis Methods
2009 · 5,645 citations
The Stress Process
1981 · 4,430 citations
Mental health problems and social media exposure during COVID-19 outbreak
2020 · 2,783 citations
Psychological Aspects of Natural Language Use: Our Words, Our Selves
2002 · 2,533 citations
Emotion: A Psychoevolutionary Synthesis
1980 · 2,523 citations