This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Mapping the evolution of epistemic trust in artificial intelligence: a bibliometric analysis of key themes, influences, and global trends
Citations: 0
Authors: 5
Year: 2025
Abstract
Purpose
This paper aims to provide a comprehensive bibliometric analysis of AI research focusing on trust, credibility, and related issues in automated systems across diverse fields. The study also offers a brief systematic literature review that identifies key themes and trends within the literature, emphasizing the critical role of trust in AI systems such as autonomous robotics, software engineering, and human-agent interaction.

Design/methodology/approach
Using a bibliometric approach, data were collected from Scopus spanning the years 1987–2024. The study systematically analyzes publication types, collaboration patterns, subject areas, and citation impact. It also identifies key thematic areas related to trust and credibility in AI applications, such as mobile ad hoc networks (MANETs), peer-to-peer networks, and decision-making under uncertainty.

Findings
A total of 111 papers were published between 1987 and 2024, an average of 2.92 publications annually, 60.36% of which appeared in journals. Collaboration has increased significantly, with an average of 3.48 authors per paper. The period from 2020 to 2024 saw a surge in both publications (41 in 2024) and authorship (139 authors). Key contributors, such as Yang from the University of Michigan and high-impact authors from Purdue University, highlight the global scope of AI research. The systematic review identifies central themes such as "trust dynamics," "credibility assessment," and "human-robot interaction" as crucial areas within the literature.

Research limitations/implications
The study emphasizes the need for better visibility of conference proceedings and of emerging research topics, such as reinforcement learning. The analysis also reveals the growing significance of trust and credibility in AI-driven systems, especially as AI becomes more integrated into decision-making processes, providing a roadmap for future research.

Practical implications
Publishing in high-impact journals such as the Journal of Management Information Systems significantly enhances research visibility, while other journals may require strategies to improve their citation potential. As AI applications continue to expand, themes such as trust and credibility assessment are essential for fostering effective human-AI collaboration across interdisciplinary fields.

Originality/value
This study combines bibliometric analysis with a systematic literature review, shedding light on key research trends in AI, particularly in the context of trust and credibility. It provides valuable insights into collaborative research patterns, institutional contributions, and the evolution of trust-related themes, positioning it as a reference point for future exploration of AI and trust dynamics.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,418 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,288 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,726 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,516 citations