This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Prioritizing benefits, costs, and contextual and individual factors in researchers’ adoption of generative artificial intelligence: a multi-criteria decision-making analysis
0
Citations
4
Authors
2026
Year
Abstract
Background
Generative artificial intelligence (GAI) tools are transforming how researchers access information, write academic papers, and collaborate. However, many scholars remain cautious, questioning whether the potential productivity gains outweigh concerns about increased effort, possible inaccuracies, and ethical implications. This study investigates the factors influencing researchers' adoption of GAI and aims to offer actionable guidance for institutions seeking to promote its responsible and effective use.

Methods
We first conducted a literature review and used thematic analysis to identify 18 key factors affecting GAI adoption. Next, 81 researchers rated the importance of each factor on a five-point scale. We analyzed these ratings using both the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the Multi-Criteria Optimization and Compromise Solution (VIKOR) methods in a multi-attribute group decision-making (MAGDM) setting.

Results
The analysis produced largely consistent results: ethical concerns and performance risk were the most significant cost-related factors, while task efficiency and knowledge acquisition were the top benefit-related drivers. Among personal and contextual variables, facilitating conditions and AI literacy emerged as the most influential.

Conclusions
These findings suggest that institutions should establish clear guidelines to address ethical and accuracy-related concerns surrounding GAI use. The identified factors also offer a foundation for future research and can inform the refinement of existing adoption frameworks.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations