This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Influence of Students’ Perceptions of ChatGPT Use on Problem-Solving Ability: Integration of the Technology Acceptance Model and Self-Determination Theory
Citations: 0
Authors: 5
Year: 2025
Abstract
The emergence of ChatGPT has fundamentally reshaped programming education while raising concerns about student over-reliance and weakened independent problem-solving. This research examines how students' perceptions of ChatGPT, comprising perceived usefulness (PU), perceived ease of use (PEOU), perceived competence (PC), and perceived autonomy (PA), influence their problem-solving ability (PSA). The study integrates the Technology Acceptance Model (TAM) with Self-Determination Theory (SDT). Using a quantitative methodology, data from 165 Informatics Education students at Universitas Negeri Makassar were analyzed with PLS-SEM. Measurement reliability and validity were confirmed (factor loadings above 0.733, AVE above 0.50, CR above 0.896, HTMT below 0.90). The structural model explained 70.2% of the variance in problem-solving ability (R² = 0.702). Three hypotheses were supported: perceived ease of use positively influenced problem-solving ability (β = 0.377, p < 0.001, f² = 0.138), perceived competence had a positive effect (β = 0.334, p = 0.001, f² = 0.121), and perceived autonomy contributed positively (β = 0.218, p = 0.030, f² = 0.052). Perceived usefulness, however, showed no significant association (β = -0.017, p = 0.443). The results indicate that problem-solving ability depends more on ease of use and the satisfaction of psychological needs than on perceived usefulness, contradicting traditional TAM assumptions. Pedagogically, instructors should position ChatGPT as an intellectual companion that fosters critical thinking and independence rather than dependency. Limitations include the cross-sectional design, self-report measures, and single-institution sampling. Future research should employ longitudinal designs with objective assessments while controlling for confounding variables.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,611 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,504 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,025 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations