This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Lecturers’ pathways to integrating artificial intelligence in business and economics curricula
Citations: 0 · Authors: 3 · Year: 2025
Abstract
Objective: This article aims to identify the factors affecting business and economics lecturers’ inclusion of artificial intelligence (AI) in the curricula.

Research Design & Methods: We applied a quantitative approach to test a research model based on the theory of planned behaviour. We used partial least squares structural equation modelling to verify hypotheses using a sample of 133 university lecturers from business and economics.

Findings: The study reveals that key background factors, including prior AI education and prior AI use, indirectly contribute to the inclusion of AI in the curricula. AI education contributed by enhancing lecturers’ cognitive attitudes and self-efficacy, whereas AI use contributed only through self-efficacy. Contrary to expectations, previous instances of AI integration in teaching had an insignificant influence on the inclusion of AI in the curriculum.

Implications & Recommendations: The inclusion of AI in business and economics university teaching is a precondition for equipping graduates with the skills expected in the job market. Based on the findings of this study, two paths seem particularly helpful in achieving this objective: improving lecturers’ attitudes via AI education and improving their self-efficacy through personal AI use.

Contribution & Value Added: The contribution of this study consists of identifying the factors that influence lecturers’ intentions to incorporate AI into their curricula. Shedding light on these determinants can guide higher education policies and support the development of strategies to promote the effective incorporation of AI into current teaching programmes.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations