This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Aligning Generative AI with Higher Education Workflows: Indonesian Lecturers’ Anxiety–Satisfaction Profiles and Adoption Patterns
Citations: 0
Authors: 6
Year: 2026
Abstract
Generative AI (GenAI) is increasingly embedded in higher education workflows for teaching preparation and academic work, yet lecturers’ affective readiness and perceived alignment between AI use and professional values remain underexplored. This mixed-methods study investigated 191 Indonesian university English lecturers’ GenAI-related anxiety and satisfaction, mapped adoption patterns through profile analysis, and identified key integration challenges. Quantitative data were collected using a reliable 10-item AI Anxiety Scale (α = 0.89) and a global satisfaction item and analyzed using descriptive statistics, Spearman’s correlations, and K-means clustering. The strongest anxieties concerned over-reliance (M = 4.20, SD = 0.80, d = −1.12) and content accuracy (M = 3.70, SD = 1.10, d = −0.76). Anxiety was negatively associated with satisfaction, most notably for perceived complexity (r = −0.197, p = 0.006) and dependency concerns (r = −0.184, p = 0.012). Three profiles emerged: high-anxiety lecturers reported distrust and pedagogical discomfort; moderate-anxiety lecturers adopted GenAI conditionally with verification; and low-anxiety lecturers used GenAI confidently and proactively. Qualitative reflections and interviews revealed five dominant use cases, involving writing support, material development, assessment design, translation, and lesson planning, while stressing persistent barriers related to ethical uncertainty, mistrust in AI-generated outputs, and concerns about diminished educator agency. The findings suggest that aligning GenAI with higher education workflows requires human-centered support, including context-sensitive AI literacy, clear ethical guidance, and institutional governance that strengthens responsible adoption.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations