This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
ChatGPT-assisted evaluation of scholarly monographs: A feasibility analysis based on the Dimensions database
Citations: 0
Authors: 5
Year: 2025
Abstract
This study introduces a novel approach to assessing the impact of scholarly monographs by using large language models as automated scoring tools. Based on the abstract texts of 2248 sociology monographs from the Dimensions database (2014–2023), we employ the ChatGPT-4o model to generate scores across 40 evaluation variables (e.g., Concise, Subjective, and Coherent). We then examine the correlation between ChatGPT-generated scores and academic impact indicators (e.g., citation counts), as well as social impact indicators (e.g., altmetrics scores). The findings indicate that, within this sociology sample, ChatGPT-4o scores correlate more strongly with citation counts and altmetrics scores than a conventional evaluation metric, the readability index, does. While large language models are not yet capable of independently conducting comprehensive evaluations of scholarly monographs, this sociology-based study suggests they hold considerable potential as auxiliary tools for improving the efficiency of academic assessment. The study highlights the transformative role of large language models in supplementing monograph evaluation methods, particularly in addressing the limitations of traditional assessment approaches in the humanities and social sciences.
Similar works
Techniques to Identify Themes
2003 · 5,364 citations
Answering the Call for a Standard Reliability Measure for Coding Data
2007 · 4,052 citations
Basic Content Analysis
1990 · 4,044 citations
Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts
2013 · 3,024 citations