This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Exploring the trustworthiness of ChatGPT: how students perceive AI reliability in a business school context
Citations: 2
Authors: 2
Year: 2025
Abstract
Purpose
This study aims to investigate how students in an International Business Education (IBE) context perceive the reliability and accuracy of artificial intelligence (AI)-generated content, specifically focusing on ChatGPT. It explores the factors that influence these perceptions, including students’ familiarity with the tool, their understanding of its primary functions and their strategies for verifying information.

Design/methodology/approach
A qualitative analysis was conducted involving approximately 70 third-year students enrolled in a Geopolitics course at a French Business School. This study used surveys to collect data on students’ experiences and perceptions of ChatGPT. The analysis followed the Gioia methodology, allowing for an in-depth exploration of students’ perspectives on the use of generative AI tools in their academic work.

Findings
This study reveals significant variation in students’ confidence regarding the reliability of ChatGPT, influenced by their prior experience with AI, their understanding of its limitations and their strategies for cross-verifying information. Key concerns include ethical considerations, such as fears of academic dishonesty and potential biases in AI-generated content. The findings suggest a need for clear guidelines on the appropriate use of generative AI in academic settings.

Originality/value
This research contributes to the underexplored area of generative AI in IBE by highlighting the socio-ethical implications of using AI tools like ChatGPT in educational settings. It provides practical recommendations for educators and institutions on developing AI literacy programs, ethical guidelines and inclusive policies that consider diverse learner needs.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations