This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Black Box or Open Science? A study on reproducibility in AI Development Papers
1 citation · 5 authors · 2023
Abstract
The surge in Artificial Intelligence (AI) research has spurred significant breakthroughs across various fields. However, AI is known for its Black Box character and reproducing AI outcomes is a challenge. Open Science, emphasizing transparency, reproducibility, and accessibility, is crucial in this context, ensuring research validity and facilitating practical AI adoption. We propose a framework to assess the quality of AI documentation and assess 51 papers. We conclude that despite guidelines, many AI papers fall short on reproducibility due to insufficient documentation. It is crucial to provide comprehensive details on training data, source code, and AI models, and for reviewers and editors to strictly enforce reproducibility guidelines. A dearth of detailed methods or inaccessible source code and models can raise questions about the authenticity of certain AI innovations, potentially impeding their scientific value and their adoption. Although our sample size inhibits broad generalization, this study nonetheless offers key insights on enhancing AI research reproducibility.
Related Works
UCSF Chimera—A visualization system for exploratory research and analysis
2004 · 47,030 citations
SciPy 1.0: fundamental algorithms for scientific computing in Python
2020 · 35,671 citations
Clustal W and Clustal X version 2.0
2007 · 28,873 citations
The REDCap consortium: Building an international community of software platform partners
2019 · 22,717 citations
Array programming with NumPy
2020 · 20,699 citations