OpenAlex · Updated hourly · Last updated: 24.03.2026, 14:49

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

<b>Generative AI and Academic Integrity in Online and Distance Learning: A Policy Index and Evidence for Assessment Redesign in the Global South</b>

2025 · 0 citations · Open Journal of AI Ethics & Society (ISSN 3105-3076) · Open Access
Open full text at publisher

Citations: 0
Authors: 2
Year: 2025

Abstract

Generative AI has unsettled assessment and integrity practice, especially for large-scale ODL institutions in the Global South. Using a scoping meta-synthesis of policy texts (n = 50) and public indicators for 40 ODL providers, we propose and apply the AI-aware Assessment Policy Index (AAPI), positioning institutions along redesign-versus-surveillance and transparency-versus-opacity axes. Higher AAPI scores correlate with lower AI-detector flag rates and reduced proctoring reliance (Spearman ρ = −0.42, p < .01), without evidence of increased misconduct. We contextualize these findings within a sociotechnical, value-sensitive model, noting persistent risks from biased AI-text detectors and error-prone facial analytics that overburden multilingual and low-bandwidth learners.

Who benefits. The AAPI is intended for ODL leaders, program heads, assessment designers, quality-assurance officers, and national regulators seeking evidence-informed alternatives to blanket detection and high-surveillance regimes.

Methods transparency. We scored six policy dimensions (assessment redesign, disclosure norms, AI literacy, process safeguards, equity safeguards, and surveillance intensity), normalized each to [0,1], and computed the AAPI as the simple average of the six dimensions; inter-coder reliability exceeded the .80 benchmark across all dimensions. We also report internal consistency of the six-dimension scale (Cronbach’s α) and provide a per-institution score table in the Supplement.

The contribution is a concise, adaptable blueprint: redesign assessments, codify disclosure, invest in AI literacy, and embed due-process and equity safeguards. We conclude that credible integrity in the AI era is a design outcome, not a surveillance outcome, particularly in resource-constrained ODL systems.
Given the deliberately modest sample (n = 40 institutions), the study should be read as a pilot that seeds a new research paradigm rather than offering definitive causal claims. Its value lies in furnishing a transferable blueprint whose propositions invite testing across larger, multi-regional datasets.
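The scoring procedure the abstract describes (six dimensions, each normalized to [0,1], then averaged without weights) can be sketched in a few lines of Python. The raw scores, the assumed 0–5 rating scale, and the reverse-coding of surveillance intensity are illustrative assumptions for one hypothetical institution, not values from the paper's Supplement.

```python
from statistics import mean

# Hypothetical raw ratings (assumed 0-5 scale) for one institution on the
# six AAPI dimensions named in the abstract. Values are illustrative only.
raw = {
    "assessment_redesign": 4,
    "disclosure_norms": 3,
    "ai_literacy": 5,
    "process_safeguards": 2,
    "equity_safeguards": 4,
    # Assumption: surveillance intensity is reverse-coded before averaging,
    # so a higher value means *less* surveillance reliance.
    "surveillance_intensity": 1,
}

def normalize(score, lo=0, hi=5):
    """Min-max normalize a raw dimension score to the [0, 1] interval."""
    return (score - lo) / (hi - lo)

# AAPI = simple (unweighted) average of the six normalized dimensions.
aapi = mean(normalize(s) for s in raw.values())
print(round(aapi, 3))  # 0.633
```

Under these assumed inputs, the normalized scores are 0.8, 0.6, 1.0, 0.4, 0.8, and 0.2, giving an AAPI of about 0.633; the unweighted mean matches the "simple average" specification in the abstract.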

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Online Learning and Analytics