This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Textual analysis of artificial intelligence manuscripts reveals features associated with peer review outcome
Citations: 3 · Authors: 2 · Year: 2021
Abstract
We analyzed a data set of scientific manuscripts that were submitted to various conferences in artificial intelligence. We performed a combination of semantic, lexical, and psycholinguistic analyses of the full text of the manuscripts and compared them with the outcome of the peer review process. We found that accepted manuscripts scored lower than rejected manuscripts on two indicators of readability, and that they also used more scientific and artificial intelligence jargon. We also found that accepted manuscripts were written with words that are less frequent, that are acquired at an older age, and that are more abstract than those in rejected manuscripts. The analysis of references included in the manuscripts revealed that the subset of accepted submissions was more likely to cite the same publications. This finding was echoed by pairwise comparisons of the word content of the manuscripts (i.e., an indicator of semantic similarity), which was higher in the subset of accepted manuscripts. Finally, we predicted the peer review outcome of manuscripts from their word content, with words related to machine learning and neural networks positively related to acceptance, whereas words related to logic, symbolic processing, and knowledge-based systems were negatively related to acceptance.
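The pairwise word-content comparison mentioned in the abstract can be illustrated with a minimal sketch: represent each manuscript as a bag-of-words count vector and score pairs by cosine similarity. This is a hypothetical illustration of the general technique, not the authors' actual pipeline (which may use different preprocessing and weighting).

```python
# Hypothetical sketch (not the authors' method): bag-of-words
# cosine similarity between two manuscripts' word content.
import math
from collections import Counter


def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between the word-count vectors of two texts."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    # Dot product over the words the two texts share.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0
```

Identical texts score 1.0 and texts sharing no words score 0.0; higher average pairwise scores within a set of manuscripts indicate greater semantic similarity in this simple sense.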
Similar works
BLEU
2001 · 21,028 citations
Enriching Word Vectors with Subword Information
2017 · 9,625 citations
A unified architecture for natural language processing
2008 · 5,179 citations
A new readability yardstick.
1948 · 5,092 citations