This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Same AI, Different Papers: Measuring Variance in AI-assisted Student Writing
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Background: Debates about generative AI in higher education claim that allowing unlimited AI access compresses academic writing quality to the point that the student's own contribution becomes marginal. The output-compression hypothesis predicts that manuscripts should converge toward a narrow quality band when students share the same capable AI tool.

Objectives: This study examines whether unlimited generative AI access in an undergraduate qualitative methods course produces compression or maintains meaningful variance in student writing quality, and traces plausible pathways that may account for divergent outcomes.

Methods: The study analyzes variance in final research papers using rubric-proximal proxy indicators (claim-evidence coupling, theory-interpretation linkage, methods specificity, scholarly sourcing) and examines six contrasting cases through longitudinal artifacts and AI interaction logs.

Results: Results show substantial dispersion in claim-evidence coupling, theory-interpretation linkage, and scholarly sourcing. One dimension shows compression: methods specification language, where AI assistance combined with scaffolding narrows gaps. This boundary condition suggests that compression occurs where demands are structural rather than judgmental. Case analyses reveal that stronger outcomes correlate with distinct orchestration practices, including precise task specification, provision of irreducible inputs, iterative revision, and critical evaluation. These patterns align with Extended Executive Cognition.

Conclusions: Findings establish that AI-enhanced pedagogy remains viable where variance persists on judgment-requiring dimensions. Student orchestration of AI-supported workflows determines quality when structural demands are held constant.