This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Big claims, low outcomes: fact checking ChatGPT’s efficacy in handling linguistic creativity and ambiguity
Citations: 14
Authors: 6
Year: 2024
Abstract
Ambiguity has always been a pain in the neck for Natural Language Processing (NLP). Despite the abundance of AI tools for human language processing, it remains a key concern for language technology researchers to develop a linguistically intelligent tool that can effectively handle the ambiguity and creativity of human language. In this regard, the newly designed AI tool ChatGPT has attracted considerable attention due to its remarkable ability to answer human questions from a wide range of domains, an ability that needs a reality check. This article scrutinises ChatGPT’s ability to answer and interpret neologisms, code-mixing, and linguistically ambiguous sentences. To this end, we tested lexically, syntactically, and semantically ambiguous expressions, code-mixed words, as well as a few instances of language games. The findings show that ChatGPT still fails to understand linguistically complex sentences, specifically those common in everyday discourse or not found in any standard textbook. In particular, semantically ambiguous sentences and language games remain an uphill task for ChatGPT. These findings have implications for further improving ChatGPT’s output.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,197 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,047 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,410 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations