This is an overview page with metadata for this scientific work. The full article is available from the publisher.
How to moderate LLM based chats from hallucinations?
Citations: 0
Authors: 5
Year: 2025
Abstract
Chatbots powered by large language models (LLMs) are increasingly prevalent in various domains. Nonetheless, they face challenges such as hallucinations and losing context during extended conversations. This study tackles these issues by proposing a multi-agent strategy for chat architecture where multiple LLMs focus on distinct tasks to enhance the quality of their output. The suggested solution involves a supervisor agent working in conjunction with a document search and review module. We assess the performance of information systems with chatbots designed to respond to sustainability questions in English and handle technical documentation for plant equipment in Polish. A comprehensive analysis of commercial and open-source models revealed that Qwen2.5 v14b’s performance is comparable to that of the Gemini family models.
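The abstract describes a supervisor agent coordinating a document-search and review module to curb hallucinations. A minimal sketch of such a supervisor loop is given below; all class and method names are hypothetical stand-ins, and the retrieval, drafting, and review steps would each be backed by an LLM in the actual system rather than the toy logic used here.

```python
# Hypothetical sketch of a supervisor-coordinated multi-agent pipeline:
# search for supporting documents, draft an answer, and have a review
# module reject drafts not grounded in the retrieved material.
from dataclasses import dataclass, field


@dataclass
class DocumentSearch:
    """Stand-in retriever: returns documents sharing a term with the query."""
    corpus: list = field(default_factory=list)

    def search(self, query):
        terms = set(query.lower().split())
        return [d for d in self.corpus if terms & set(d.lower().split())]


class Reviewer:
    """Stand-in review module: accepts only answers contained in a document."""
    def grounded(self, answer, docs):
        return any(answer.lower() in d.lower() for d in docs)


class Supervisor:
    """Coordinates search, drafting, and review before replying."""
    def __init__(self, search, reviewer):
        self.search = search
        self.reviewer = reviewer

    def answer(self, query):
        docs = self.search.search(query)
        if not docs:
            return "No supporting documents found."
        draft = docs[0]  # a real system would call an LLM to draft an answer
        if self.reviewer.grounded(draft, docs):
            return draft
        return "Draft rejected by reviewer."


supervisor = Supervisor(DocumentSearch(["solar panels reduce emissions"]),
                        Reviewer())
print(supervisor.answer("solar emissions"))  # grounded answer returned
print(supervisor.answer("quantum gravity"))  # no documents, answer refused
```

The design point illustrated is that the supervisor never answers directly: it only relays drafts that pass the review step, which is how the architecture constrains hallucinated output.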
Similar works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,633 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,584 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,551 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,443 citations