This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
User Experience of AI-Assisted Academic Writing Tools: Perceptions of Graduate Students and Faculty amid Arabic Language Detection Limitations
0
Citations
1
Author
2026
Year
Abstract
Generative artificial intelligence (GenAI) tools are gradually being integrated into higher education, changing academic writing practices and raising pressing questions about user experience (UX), academic integrity, and institutional readiness. While past studies have focused mostly on English-language settings, empirical evidence from Arabic-language academic contexts remains scarce, particularly with regard to the limitations of AI detection. To close this gap, this study explores graduate students' and faculty members' perceptions of AI-assisted academic writing tools, with specific attention to UX, ethical concerns, and the effectiveness of Arabic-language AI detection systems. The study adopts a descriptive-exploratory quantitative design. Data were gathered from 27 faculty members and 66 graduate students at different higher education institutions in Saudi Arabia. The findings reveal a generally positive user experience, alongside concerns about originality, academic integrity, and student overdependence on AI. Both faculty and students identified notable limitations in AI detection systems for the Arabic language; these limitations appear to affect assessment practices, user behaviour, and perceptions of institutional oversight. The study's main contribution is context-sensitive evidence showing that the limitations of Arabic AI detection extend beyond technical challenges to affect trust, ethical perceptions, and institutional responses. These findings underscore the need for pedagogically grounded approaches, Arabic-specific detection technologies, and more explicit institutional policies to encourage responsible AI integration in higher education.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations