This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
AI-Produced Humanities Research: On the Dangers of Technical Incrementalism
Citations: 1
Authors: 1
Year: 2026
Abstract
Academic researchers are already submitting AI-generated manuscripts for publication, sometimes without acknowledging the role of the AI. This raises a host of moral and practical questions, including questions about the overall moral status of using AI for any purpose and about whether such submissions constitute research misconduct. However, relatively little attention has been devoted to analyzing and predicting the large-scale effects, on the academic literature, of a scholarly paradigm in which many or most manuscripts are AI-generated. In this paper, I argue that, at least in the case of the humanities, pervasive AI-generated articles will likely produce a kind of systemic epistemic degradation, that is, an erosion of the knowledge base of an institution or system. Briefly put, the resulting academic literature will become even more dominated by articles that are safe, technical, and insular, thereby crowding out a valuable form of radically innovative scholarship. These problems will occur because of the current incentives in the academic-publishing system in the humanities and the near-future capabilities of LLMs. I conclude by suggesting a few ways to mitigate these impending problems.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,557 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,447 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,944 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,797 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations