OpenAlex · Updated hourly · Last updated: 19.03.2026, 11:01

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating Hallucinations in Large Language Models (LLM): Metrics and Mitigation

2025 · 0 citations
Open full text at the publisher

0 citations · 3 authors · Year: 2025

Abstract

The rapid development of Large Language Models (LLMs) has transformed natural language processing, making it possible to produce human-like content across a wide range of applications. A major unresolved issue, however, is hallucination, in which LLMs generate information that is not supported by the input data or by empirical knowledge. Such hallucinations compromise the quality and dependability of AI-generated content and pose hazards in critical applications, including the legal, medical, and educational fields. Resolving this issue is essential for improving the reliability and practical applicability of LLMs. This study assesses hallucinations in LLMs through an approach that combines quantitative metrics with visualization techniques. The overall objective is to establish robust methods for identifying, quantifying, and comparing hallucination patterns across different models and applications. By scrutinizing the performance of different LLMs, the research seeks to determine patterns and causative factors of hallucinations, which in turn supports the development of more accurate and dependable models. The findings should enable a better understanding of the mechanisms behind hallucinations and inform effective mitigation strategies.
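The abstract does not specify which quantitative measures the authors use. As a purely illustrative sketch, the Python snippet below shows one naive way a per-model hallucination metric could be set up: a hypothetical "unsupported claim rate" that flags output sentences sharing few content words with the source text. The function names, threshold, and word-overlap heuristic are assumptions for illustration only, not the paper's method.

# Illustrative sketch only; the paper's actual metrics are not described on this page.
# Computes a naive "unsupported claim rate" per model: the fraction of generated
# sentences whose content words are mostly absent from the source text.

import re
from typing import Dict

STOPWORDS = {"the", "a", "an", "is", "are", "was", "were", "of", "in", "on", "by",
             "and", "or", "to", "that", "this", "it", "as", "for", "with"}

def content_words(text: str) -> set:
    """Lowercased alphabetic tokens with common stopwords removed."""
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS}

def unsupported_claim_rate(source: str, output: str, threshold: float = 0.5) -> float:
    """Fraction of output sentences whose content-word overlap with the source
    falls below the threshold (a crude proxy for intrinsic hallucination)."""
    source_vocab = content_words(source)
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", output.strip()) if s]
    if not sentences:
        return 0.0
    unsupported = 0
    for sent in sentences:
        words = content_words(sent)
        if not words:
            continue
        overlap = len(words & source_vocab) / len(words)
        if overlap < threshold:
            unsupported += 1
    return unsupported / len(sentences)

def compare_models(source: str, outputs: Dict[str, str]) -> Dict[str, float]:
    """Hypothetical cross-model comparison: hallucination-proxy score per model."""
    return {model: unsupported_claim_rate(source, text) for model, text in outputs.items()}

if __name__ == "__main__":
    source = "The Eiffel Tower is in Paris and was completed in 1889."
    outputs = {
        "model_a": "The Eiffel Tower is in Paris. It was completed in 1889.",
        "model_b": "The Eiffel Tower is in Rome. It was built by ancient engineers.",
    }
    print(compare_models(source, outputs))  # e.g. {'model_a': 0.0, 'model_b': 0.5}

In practice, published hallucination metrics typically rely on stronger signals (entailment models, fact verification against knowledge bases, or LLM-based judges) rather than word overlap; this sketch only illustrates the general shape of scoring and comparing models.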

Topics

Mental Health via Writing · Artificial Intelligence in Healthcare and Education · Topic Modeling