This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Acceptance of AI in Semi-Structured Decision-Making Situations Applying the Four-Sides Model of Communication—An Empirical Analysis Focused on Higher Education
Citations: 22
Authors: 4
Year: 2023
Abstract
This study investigates the impact of generative AI systems like ChatGPT on semi-structured decision-making, specifically in evaluating undergraduate dissertations. We propose using Davis’ technology acceptance model (TAM) and Schulz von Thun’s four-sides communication model to understand human–AI interaction and the adaptations necessary for acceptance in dissertation grading. Using an inductive research design, we conducted ten interviews with respondents having varying levels of AI and management expertise, employing four escalating-consequence scenarios mirroring higher education dissertation grading. In all scenarios, the AI functioned as a sender, based on the four-sides model. Findings reveal that technology acceptance for human–AI interaction is adaptive but requires modifications, particularly regarding AI’s transparency. Testing the four-sides model showed support for three sides, with the appeal side receiving negative feedback for AI acceptance as a sender. Respondents struggled to accept the idea of AI suggesting a grading decision through an appeal. Consequently, transparency about AI’s role emerged as vital: when AI supports instructors transparently, acceptance levels are higher. These results encourage further research on AI as a receiver and on the impartiality of AI decision-making without instructor influence. This study emphasizes communication modes in learning ecosystems, especially in semi-structured decision-making situations with AI as a sender, while highlighting the potential to enhance the acceptance of AI-based decision-making.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations