This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Bot Delusion. Large language models and anticipated consequences for academics’ publication and citation behavior
Citations: 1
Authors: 5
Year: 2023
Abstract
The reproduction of social inequalities through artificial intelligence and large language models (LLMs) has been demonstrated empirically in various areas of society, for example in policing and personnel hiring decisions. Yet a broader discussion of the extent to which LLMs may affect the scientific enterprise, reinforce or mitigate existing structural inequalities, and introduce a “bot delusion” into academia is still missing. Focusing on publication and citation behavior, we devise a thought experiment on the impact of LLMs, differentiating between the reproduction of preexisting, structurally conditioned inequalities in science (socio-cognitive stasis) and a catharsis that may counteract structural inequalities and Matthew effects. We develop three scenarios of the consequences of using LLMs for citations: LLMs reproduce content and the status quo (scenario 1), enable content-coherence evaluation (scenario 2), or enable content evaluation (scenario 3). Given the fast-paced evolution of LLMs, we discuss the normative significance of LLM use for selecting citations in order to attribute meaning to the anticipated consequences from a sociological perspective. Considered as ideal types, Merton’s CUDOS norms of communalism, universalism, disinterestedness, and organized skepticism capture the catharsis opportunity offered by LLMs, while stasis is reflected in Mitroff’s SPIOD counter-norms of secrecy, particularism, self-interestedness, and organized dogmatism. As SPIOD captures only individual counter-norms, we introduce communal counter-norms to capture academics’ loyal citation behavior. The latter insinuates a status-quo future of science (scenario 1), while the mixed-access (scenario 2) and open-science (scenario 3) futures suggest a more cognitively and less socially structured scientific endeavor.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations