This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Trust-Aware Generative Conversational AI: Mitigating Hallucinations In LLM-Powered Chatbots
Citations: 0
Authors: 1
Year: 2026
Abstract
Large language models (LLMs) generate remarkably human-like conversational responses, but they frequently produce incorrect or fabricated information, so-called hallucinations. This undermines user trust and limits the deployment of LLM-based chatbots in high-stakes domains such as healthcare, finance, and customer service. This paper presents a Trust-Aware Generative Conversational AI framework for mitigating hallucinations in LLM-powered chatbots. The proposed architecture combines knowledge-infused language modeling (KILM), contextual validation mechanisms, and a trust-scoring system that assesses the accuracy of generated answers. Specifically, the system integrates structured knowledge from curated knowledge bases into the LLM, cross-checks outputs against multiple sources, and assigns a trust score to each answer, guiding the chatbot toward factually and contextually accurate responses. Evaluation was conducted on benchmark datasets, including ConvAI2 and a corpus of domain-specific factual knowledge, using quantitative metrics such as factual accuracy, hallucination rate, and user trust scores. In the experimental study, the proposed trust-aware system reduced hallucinations by 42 percent compared with baseline LLM chatbots and improved user-perceived reliability by 37 percent. Qualitative analysis further shows consistent context handling and factual correctness across diverse conversational scenarios. The study indicates that knowledge infusion and verification in generative conversational AI substantially increase trustworthiness without compromising dialogue naturalness. The results provide a basis for building credible, high-stakes chatbot applications and underscore the importance of trust-aware design in next-generation AI communication systems.
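The abstract describes a pipeline that cross-checks a generated answer against multiple knowledge sources and only surfaces it when a trust score is high enough. The paper's actual scoring method is not given here, so the following is a minimal illustrative sketch: all names (`trust_score`, `KNOWLEDGE_BASE`, `THRESHOLD`, the token-overlap heuristic) are assumptions for demonstration, not the authors' implementation.

```python
def tokenize(text):
    """Lowercase word tokens with surrounding punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split() if w.strip(".,!?")}

def support(answer, source):
    """Fraction of the answer's tokens that one source corroborates."""
    a, s = tokenize(answer), tokenize(source)
    return len(a & s) / len(a) if a else 0.0

def trust_score(answer, sources):
    """Average corroboration of the answer across all sources."""
    if not sources:
        return 0.0
    return sum(support(answer, src) for src in sources) / len(sources)

# Illustrative stand-in for a curated knowledge base.
KNOWLEDGE_BASE = [
    "Paris is the capital of France.",
    "France is a country in Western Europe.",
]
THRESHOLD = 0.5  # assumed cutoff; a real system would tune this

def answer_or_abstain(candidate):
    """Return the candidate answer only if its trust score clears the threshold."""
    if trust_score(candidate, KNOWLEDGE_BASE) >= THRESHOLD:
        return candidate
    return "I am not confident enough to answer."
```

A real trust-aware system would replace the token-overlap heuristic with semantic entailment or retrieval-based verification, but the control flow, score each answer against external knowledge and abstain below a threshold, is the same.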
Similar works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,632 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,552 citations
A FRAMEWORK FOR REPRESENTING KNOWLEDGE
1988 · 4,548 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,317 citations