This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Disaffordances or affordances: Perceptions of ChatGPT in the workplace
Citations: 3
Authors: 2
Year: 2025
Abstract
Since ChatGPT's launch in late 2022, people have increasingly used generative AI tools like ChatGPT in the workplace. Its use may also carry negative consequences, such as threats to employees' jobs and misuse arising from outdated knowledge. Limitations in knowledge currency, accuracy, and authenticity may further hinder ChatGPT use in the workplace. This study examines how employees' positive or negative perceptions of ChatGPT affect their attitudes towards using it, and fills a research gap by identifying negative perceptions of ChatGPT use. Self-Determination Theory (SDT) is adopted as the theoretical foundation for explaining generative AI adoption in general. A survey was conducted in 2024 to collect the views of working adults in Hong Kong towards ChatGPT. The findings showed that the automatability, personalization, and availability of ChatGPT are positively associated with ChatGPT effectiveness, while limited understanding and null decision-making are positively related to discomfort with using ChatGPT. The hypothesized association between lack of emotion and discomfort with using ChatGPT, however, is not supported. ChatGPT effectiveness is positively associated with attitudes towards ChatGPT. SDT was further validated, demonstrating the importance of fulfilling employees' psychological needs to foster more motivated use of ChatGPT. Corporations should leverage the three features affecting ChatGPT effectiveness: automatability, personalization, and availability. Future research might explore the influence of employees' demographic factors, such as age, work position, educational level, and income, as well as contextual factors such as industry type.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,490 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,376 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,832 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,553 citations