This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Addressing student non-compliance in AI use declarations: implications for academic integrity and assessment in higher education
58 citations · 1 author · 2024
Abstract
This study examines the factors driving student non-compliance with AI use declarations in academic assessments at King’s Business School, where 74% of students failed to declare AI usage despite declaration being a requirement of a mandatory coursework coversheet. Utilising the Theory of Planned Behaviour (TPB) as a framework, the research combines service evaluation survey data and semi-structured interviews to explore how attitudes, subjective norms, and perceived behavioural control influence compliance. Findings reveal that fear of academic repercussions, ambiguous guidelines, inconsistent enforcement, and peer influence are key barriers to AI use declaration. These factors complicate the declaration process, undermine transparency, and challenge academic integrity. The study extends the TPB model by highlighting the ethical and practical dilemmas posed by generative AI, which blur traditional norms of academic integrity. This research offers critical insights for policymakers, suggesting that clear, consistent, and trust-based policies are crucial in fostering ethical AI use. The findings underscore the importance of transparent communication and supportive institutional cultures to improve compliance. Ultimately, this study informs policy development by evaluating the effectiveness of declaration mechanisms and providing actionable recommendations to promote a culture of academic integrity in the evolving landscape of AI technologies.
Related works
International Journal of Scientific and Research Publications
2022 · 2,691 citations
Student writing in higher education: An academic literacies approach
1998 · 2,518 citations
Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling
2012 · 2,321 citations
Comparison of Two Methods to Detect Publication Bias in Meta-analysis
2006 · 2,216 citations
How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment
2023 · 1,979 citations