This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
ChatGPT and library users: AI risks of hallucinations and misinformation
Citations: 0
Authors: 4
Year: 2025
Abstract
Purpose: The paper explores the implications of library users' adoption of ChatGPT, focusing on the potential risks of AI hallucinations, the reliability of AI-generated content for research, and strategies to mitigate these risks. The aim is to provide a comprehensive overview of ChatGPT's impact on research and content generation, highlighting the critical role of libraries in guiding users toward responsible AI use.

Methodology/Design: A systematic review was employed to harvest relevant literature from Google Scholar; sources published between 2020 and 2024 were included. This approach involved identifying, evaluating, and synthesizing research articles, reports, and studies related to ChatGPT, AI hallucinations, and their implications for library services.

Findings: The findings indicate that while ChatGPT offers significant advantages in accessibility and efficiency, reliance on it for research and content generation poses considerable risks. These include the dissemination of misinformation, erosion of critical thinking skills, and ethical concerns related to bias. The study highlights the need for improved training data, human oversight, and user education to mitigate these risks effectively.

Implications: The implications of this study are critical for libraries and their users. Libraries must implement comprehensive strategies to ensure the responsible use of AI tools like ChatGPT, including educating users about the limitations of AI, encouraging critical evaluation of AI-generated content, and promoting verification through trusted sources.

Originality: This essay provides a thorough examination of the challenges and opportunities that ChatGPT presents in the context of library services. It combines insights from reviews with practical recommendations and offers a balanced perspective on how to leverage AI technology while addressing its inherent risks.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations