This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Love the Way You Lie: Unmasking the Deceptions of LLMs
Citations: 4
Authors: 5
Year: 2023
Abstract
Within the dynamic realm of Artificial Intelligence (AI), models like ChatGPT, Bard, and Bing are renowned for replicating human language. However, their emergence sparks debate over biases and trustworthiness. This research delves into the predominant inaccuracies in chatbots that tend to mislead novices and explores the possibility of establishing an AI Reliability (AIR) framework to fortify trust in these systems. Errors are categorized as factual inaccuracies, misinformation, fabricated data, and deviations from topic, among others. The in-progress AIR Framework offers a meticulous approach to assessing chatbot accuracy, drawing on the experiences of nearly 100 CS/IT students, primarily with ChatGPT. Recognizing the limitations and hallucinations of these models is essential as they become integral to our lives, underscoring the imperative for responsible and reliable AI.