This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Trust in ChatGPT and Perceived Academic Writing Improvement: A TAM-Based Quantitative Study in a Pakistani ESL Context
Citations: 0 · Authors: 3 · Year: 2025
Abstract
This study examines the impact of undergraduate students’ trust in ChatGPT on their perceived improvement in academic writing and their intention to use the tool in future writing tasks. Grounded in the Technology Acceptance Model (TAM) and the Trust in Technology Framework, this research employs a quantitative approach to examine student perceptions within a Functional English course at a public-sector university in Pakistan. A total of 225 students from the Telecommunication Engineering, Computer Science, and Chemistry departments completed a structured survey. Using descriptive statistics, Pearson’s correlation, and multiple regression analyses, results demonstrated strong positive correlations between trust in ChatGPT and students’ perceived improvements in clarity, vocabulary, and organisation (r = 0.75, 0.78, and 0.82, respectively; p < 0.001). Furthermore, multiple regression results revealed that trust and technology acceptance were significant predictors of students’ future intention to use ChatGPT (β = 0.41, p < 0.001), collectively explaining 63% of the variance in students’ adoption intentions. The study offers a novel contribution by providing localised insights from a non-Western English as a Second Language (ESL) context, where research on generative AI in academic writing remains scarce. Findings suggest that students’ confidence in AI-generated feedback fosters writing development, and they highlight the importance of institutional support, teacher training, and the ethical integration of AI into academic practices. Limitations include the study’s reliance on self-reported perceptions, the focus on a single institution, and the exclusion of longitudinal performance measures. Future research should incorporate mixed-method approaches and diverse ESL contexts to enhance generalisability.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations