This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Smarter tools, weaker minds? Introducing the CTRL theory of Youth–AI dependency
0
Citations
3
Authors
2026
Year
Abstract
Purpose — This study aims to examine how trust in generative AI chatbots shapes reliance-related behaviors among young adults, focusing on verification and ethical concerns. It introduces the cognitive trust, reliance, learning erosion (CTRL) theory to explain how fluent AI interactions can normalize reduced verification and, over time, weaken learning autonomy.

Design/methodology/approach — A survey of participants aged 18–36 across regions was analyzed using exploratory and confirmatory factor analysis and structural equation modeling. Open-ended responses were used only for illustrative context and triangulation of quantitative patterns; the core theoretical development and hypothesis testing are based on quantitative analysis.

Findings — Results show 77% of young adults frequently use chatbots for academic and personal tasks. While 69% associate them with academic dishonesty and 59% with privacy concerns, higher trust predicted reduced verification (p < 0.001), showing a trade-off between convenience and reasoning. Privacy concern was prevalent in the sample; however, higher perceived model quality predicted lower privacy concern (H4), consistent with a convenience–vigilance trade-off. These patterns support the CTRL theory: cognitive trust → reliance → learning erosion.

Research limitations/implications — The youth-focused sample limits generalizability across ages and cultures. Future work could explore longitudinal and cross-cultural impacts.

Practical implications — Findings highlight the need for AI literacy programs, ethics-focused chatbot design and guidelines for responsible use in education.

Social implications — Generative AI influences youth cognition, raising questions about digital dependency, critical thinking and academic ethics.

Originality/value — This paper introduces CTRL theory as a novel framework linking AI trust, cognitive offloading and learning erosion.
Similar works
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
An Experiment in Linguistic Synthesis with a Fuzzy Logic Controller
1999 · 5,632 citations
An experiment in linguistic synthesis with a fuzzy logic controller
1975 · 5,550 citations
A Framework for Representing Knowledge
1988 · 4,548 citations
Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy
2023 · 3,310 citations