This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Impact of AI Guilt on Students’ Use of ChatGPT for Academic Tasks: Examining Disciplinary Differences
Citations: 13
Authors: 2
Year: 2025
Abstract
Generative artificial intelligence (GenAI) tools like ChatGPT are reshaping higher education, raising concerns about academic integrity alongside potential benefits. The psychological tension accompanying the decision of whether or not to use GenAI for a particular task may lead to “AI guilt”—students’ moral discomfort when using GenAI for traditionally human tasks. This study examines the impact of AI guilt on students’ use of ChatGPT for academic tasks, focusing on disciplinary differences between pure and applied fields. We conducted a survey at a Singaporean university, measuring AI guilt and ChatGPT usage. Using logistic regression analysis, we found that AI guilt significantly reduces ChatGPT use for creativity-based tasks but not for routine-based tasks. The relationship between AI guilt and ChatGPT use is linear in pure fields, while applied fields show a nonlinear pattern. Specifically, the perceived risk of detection and potential academic penalties decreases ChatGPT use in creativity-based tasks across all samples, while rationalization tendencies increase it. Interestingly, the interaction between rationalization tendencies and perceived social norms reduces usage, reflecting the tension between internal justification and external pressures. Heterogeneity analysis shows that rationalization tendencies and perceived risk exert a stronger influence in pure fields, whereas social norms are the stronger driver in applied fields. These results show how AI guilt varies across disciplines and affects task-level GenAI tool use, offering insights for developing tailored ethical guidelines and integration strategies in higher education.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,496 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,386 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,848 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,562 citations