This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Survey and analysis of hallucinations in large language models: attribution to prompting strategies or model behavior
30 citations · 3 authors · 2025
Abstract
… for prompt engineers, model developers, and AI practitioners. We further propose best practices and future directions to reduce hallucinations in both prompt design and model development pipelines.
Similar works
"Why Should I Trust You?"
2016 · 14,732 citations
Coding Algorithms for Defining Comorbidities in ICD-9-CM and ICD-10 Administrative Data
2005 · 10,547 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,949 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,550 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,061 citations