This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Lessons from an AI-Sprint: a proposal for measuring human-AI cooperation in research
Citations: 0
Authors: 5
Year: 2025
Abstract
Generative artificial intelligence is transforming the way scholars draft, revise, and publish. Yet academia lacks a systematic way to measure these shifts and risks relying on anecdotal evidence when evaluating whether AI elevates or erodes scholarly standards. This Comment draws on a pre-registered, three-day field experiment that addressed this measurement gap by pairing twenty-two early-career researchers, with and without AI tools, to improve scholarly manuscripts for journal submission. However, the AI models used in the field experiment are already outdated and outperformed by more powerful reasoning models, situating the results as a snapshot in time. This Comment therefore calls for recurring events with a shared set of evaluation criteria, so that their results can be combined in a publicly available dataset. Monitoring the quality of researcher-AI collaboration is necessary if academia wants to keep track of AI's rapid impact on research practice.
Related works
2019 · 31,418 citations
Techniques to Identify Themes
2003 · 5,364 citations
Answering the Call for a Standard Reliability Measure for Coding Data
2007 · 4,051 citations
Basic Content Analysis
1990 · 4,044 citations
Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts
2013 · 3,022 citations