This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Problematic Dependency on Large Language Models vs. Attitudes Towards Them: The Moderating Role of Perceived Trustworthiness
Citations: 0
Authors: 5
Year: 2025
Abstract
The rapid integration of Large Language Models (LLMs) into personal and professional life has led numerous users to depend on these systems to problematic levels, raising concerns about the factors that drive such dependency. This study is among the first to examine factors contributing to the development of dependency on LLMs. It focuses on the relationship between attitudes towards LLMs (acceptance and fear) and LLM dependency (instrumental and relational), and on the moderating role of trust in LLMs in shaping this relationship, across two cultural contexts: Arab and British. Data were collected from 526 participants in the UK and 250 participants in Arab countries. Canonical correlation analysis was employed to explore the multivariate association between attitudes and dependency, and multiple linear regression analysis was conducted to test the moderating effect of trust in LLMs. Our results indicated that in both cultural contexts, higher acceptance of LLMs was strongly linked to greater dependency, while fear played a minimal role. Additionally, trust amplified the positive link between acceptance and both types of LLM dependency in the UK sample, whereas in the Arab sample trust strengthened the negative association between fear and both types of LLM dependency. These findings highlight the importance of culturally sensitive LLM adoption strategies and of measures to calibrate trust and attitudes in order to alleviate overdependence on LLMs.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,422 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,300 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,734 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,519 citations