This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Generative Artificial Intelligence and the Editing of Academic Essays: Necessary and Sufficient Ethical Judgments in Its Use by Higher Education Students
Citations: 0
Authors: 4
Year: 2025
Abstract
The emergence of generative artificial intelligence (GAI) has significantly transformed higher education. As a linguistic assistant, GAI can promote equity and reduce barriers in academic writing. However, its widespread availability also raises ethical dilemmas about integrity, fairness, and skill development. Despite the growing debate, empirical evidence on how students’ ethical evaluations influence their predicted use of GAI in academic tasks remains scarce. This study analyzes the ethical determinants of students’ intention to use GAI as a linguistic assistant in essay writing. Based on the Multidimensional Ethics Scale (MES), the model incorporates four ethical criteria: moral equity, moral relativism, consequentialism, and deontology. Data were collected from a sample of 151 university students. For the analysis, we combined partial least squares structural equation modeling (PLS-SEM), used to test sufficiency relationships, with necessary condition analysis (NCA), used to identify minimum acceptance thresholds, or necessary conditions. The PLS-SEM results show that only consequentialism is statistically significant in explaining predicted use. Moreover, the NCA reveals that reaching a minimum level in the evaluations of all ethical constructs is necessary for use to occur. While the necessary condition effect sizes of moral equity and consequentialism are high, those of relativism and deontology are moderate. Thus, although acceptance of GAI use in the analyzed context increases only when its consequences are perceived as more favorable, for such use to occur it must be considered acceptable, which requires surpassing certain thresholds in all the ethical factors proposed as explanatory.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations