This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
ChatGPT Unveiled: Understanding Perceptions of Academic Integrity in Higher Education - A Qualitative Approach
22 citations
3 authors
2024
Abstract
The purpose of this research is to gain a complete understanding of how students and faculty in higher education perceive the role of AI tools, their impact on academic integrity, and their potential benefits and threats in the educational milieu, while considering ways to curb their disadvantages. Drawing upon a qualitative approach, this study conducted in-depth interviews with a diverse sample of faculty members and students at universities across Lebanon. These interviews were analyzed and coded using NVivo software, allowing for the identification of recurring themes and the extraction of rich qualitative data. The findings of this study illuminated a spectrum of perceptions. While ChatGPT and AI tools are recognized for their potential to enhance productivity, promote interactive learning experiences, and provide tailored support, they also raise significant concerns regarding academic integrity. This research underscores the need for higher education institutions to carefully navigate the integration of AI tools like ChatGPT. It calls for the formulation of clear policies and guidelines for their ethical and responsible use, along with comprehensive support and training. This study contributes to the existing literature by presenting a comprehensive exploration of the perceptions of both students and faculty regarding AI tools in higher education through a rich qualitative approach. By delving into the intricate dynamics of ChatGPT and academic integrity, this study offers fresh insights into the evolving educational landscape and the ongoing dialogue between technology and ethics.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations