This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT and Academic Integrity: What Drives Management Students to Be Honest or Dishonest?
Citations: 0
Authors: 2
Year: 2026
Abstract
The surge in artificial intelligence (AI) adoption in higher education, exemplified by Large Language Models such as ChatGPT, offers unprecedented opportunities for learning and collaboration while posing potential threats to academic integrity. This study investigates the complex relationship between personal factors, external factors, technological advancements (especially AI adoption), and the academic integrity of post-graduate management students. The central element of this study is a mini-project assigned to 60 PGDM students in a Business Research Methods course. Preliminary evaluations of the project reports submitted by students suggested extensive use of AI-generated content. The submitted reports were classified by level of AI-generated content, and this classification was then integrated with student responses to a structured questionnaire grounded in eight theoretical constructs related to academic integrity. Employing an explanatory sequential mixed-method design, the study first used discriminant analysis to identify the factors that influence students’ ethical decision-making within AI-integrated educational environments, followed by thematic analysis of students’ qualitative responses to triangulate the findings. This research finds that peer pressure, academic stress, and perceived institutional unfairness drive management students to unethical use of AI tools. The study highlights the need for transparent evaluation practices, a mechanism for peer accountability, institutional support systems, and clear guidelines on ethical AI use. If institutions fail to provide such support, students are more likely to engage in behaviours that may put their academic future at risk. The study has implications for teaching interventions, policy formulation, and future research that may help create a culture of integrity within AI-enhanced learning environments.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations