This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
AI Usage in Academic Writing: Perspectives of Stakeholders
Citations: 1
Authors: 3
Year: 2025
Abstract
This qualitative study examines the complex attitudes, ethical considerations, and practical implications of integrating artificial intelligence (AI) in academic writing across key stakeholder groups, including university professors and students. Using semi-structured interviews with 40 participants (20 students and 20 faculty members) from diverse disciplines and institutional contexts, the research reveals divergent perspectives on AI’s role in academia. Faculty respondents expressed significant concerns about academic integrity, erosion of critical thinking, and the limitations of AI detection tools, which frequently misidentify human-written text as AI-generated. Conversely, students viewed AI as an essential productivity tool for overcoming writer’s block, refining ideas, and managing workload, though they acknowledged ethical ambiguities in its deployment. A critical tension emerged between AI’s perceived benefits—enhanced efficiency, personalized feedback, and accessibility—and its risks, including algorithmic bias, surveillance culture, and threats to student agency. Stakeholders agreed that institutional policies lag behind technological adoption, with current frameworks inadequately addressing transparency, data privacy, or equitable implementation. The study also identifies disciplinary variances: STEM educators favored AI for technical drafting, while humanities faculty emphasized its threat to authentic voice development. The findings advocate for a collaborative, multi-stakeholder approach to AI governance, emphasizing pedagogical redesign, ethical guidelines for explainable AI, and professional development to bridge digital literacy gaps. This research underscores the urgency of reimagining academic writing in the AI era, balancing innovation with the preservation of core educational values.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations