OpenAlex · Updated hourly · Last updated: 21.04.2026, 05:52

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Generative artificial intelligence in higher education: Emotional tensions and ethical declaration

2025 · 1 citation · British Journal of Educational Technology · Open Access
Open full text at the publisher

Citations: 1 · Authors: 3 · Year: 2025

Abstract

The increasing use of Generative Artificial Intelligence (GenAI) tools such as ChatGPT in higher education has raised questions about authorship, ethical responsibility, and academic transparency. While institutional guidelines exist, many remain vague and ineffective, leaving students to interpret disclosure obligations on their own. This mixed-methods study investigates how undergraduates at a large Singaporean university decide whether to disclose GenAI use, introducing the concepts of strategic and conceptual non-disclosure. We examine two psychological mechanisms: the placebo effect, where perceived cognitive enhancement encourages risk-taking behaviours, and the AI ghostwriter effect, where responsibility for authorship is externalized. Quantitative results show that the AI ghostwriter effect significantly predicts lower disclosure, particularly in theoretical disciplines (e.g., Humanities, Science, and Arts). The placebo effect is not statistically significant but shows a consistent negative trend in applied fields. Qualitative data support and explain these patterns: students in theoretical disciplines express more ethical discomfort, whereas those in applied fields view GenAI as a practical tool. Ambiguity in institutional policies and concerns about the fairness of detection also emerge as key factors driving non-disclosure. This study advances understanding of ethical behaviour in the GenAI context and offers practical recommendations for discipline-sensitive disclosure policies.

Practitioner notes

What is already known about this topic

Students often use GenAI in academic work but do not always disclose it.
Institutional guidelines are often vague, leading to confusion about disclosure.
Disciplinary norms affect how students interpret authorship and integrity.
What this paper adds

This study provides empirical evidence for the AI ghostwriter effect, showing that students who frame GenAI as a personal thinking tool are significantly less likely to disclose its use.
It reveals a key disciplinary divergence: students in theoretical disciplines (e.g., Humanities and Sciences) more frequently report ethical discomfort, while their peers in applied fields (e.g., Engineering and Business) more readily normalize GenAI as a standard tool.
It introduces and validates a distinction between two motivations for non-disclosure: conceptual non-disclosure (not perceiving a need to declare) is a stronger driver of non-disclosure than strategic non-disclosure (intentional concealment), particularly in theoretical disciplines.

Implications for practice and/or policy

Policies can target the ghostwriter effect in theoretical disciplines through guidelines that explicitly distinguish between tool use and undisclosed co-authorship.
To address students' fears, institutions are encouraged to build trust through transparent and fair disclosure procedures, including clear, tiered protocols and a robust appeals process.

Topics

Artificial Intelligence in Healthcare and Education
Ethics and Social Impacts of AI
Neuroethics, Human Enhancement, Biomedical Innovations