OpenAlex · Updated hourly · Last updated: 19.03.2026, 13:46

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

ChatGPT-assisted evaluation of scholarly monographs: A feasibility analysis based on the Dimensions database

2025 · 0 citations · Data Science and Informetrics · Open Access

Citations: 0 · Authors: 5 · Year: 2025

Abstract

This study introduces a novel approach to assessing the impact of scholarly monographs by utilizing large language models as automated scoring tools. Based on the abstract texts of 2248 sociology monographs from the Dimensions database (2014–2023), we employ the ChatGPT-4o model to generate scores across 40 evaluation variables (e.g., Concise, Subjective, and Coherent). We then examine the correlation between ChatGPT-generated scores and academic impact indicators (e.g., citation counts), as well as social impact indicators (e.g., altmetrics scores). The findings indicate that, within this sociology sample, ChatGPT-4o scores demonstrate stronger correlations with citation counts and altmetrics scores than the conventional evaluation metric, the readability index, does. While large language models are not yet capable of independently conducting comprehensive evaluations of scholarly monographs, this sociology-based study demonstrates their considerable potential as auxiliary tools for enhancing the efficiency of academic assessment. This study highlights the transformative role of large language models in supplementing monograph evaluation methods, particularly in addressing the limitations of traditional assessment approaches in the humanities and social sciences.
