This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Forging the Way Forward to Inclusive and Responsible Artificial Intelligence in Scholarly Publishing
Citations: 1
Authors: 3
Year: 2025
Abstract
With the recent Executive Order calling for “removing barriers to American leadership” in artificial intelligence (AI), development of AI and AI-enabled tools in the United States is expected to accelerate. However, in the absence of mandatory checks and balances, the governance and quality of output synthesized by generative AI tools are likely to be compromised significantly, and the output may even lead to unintended consequences in the long run. As in all other domains, the role of AI in scientific publishing is advancing rapidly, such that it is hard to imagine the processes for writing, reviewing, and editing articles in 25 years, let alone the ways in which processes will change by the end of 2025. Regardless of AI's implications for scholarly publishing now or in the distant future, we must ensure that AI is applied in a way that is safe and ethical and helps maintain the rigor and integrity of scholarship.1-3 Of particular importance is navigating the influence of AI on diversity, equity, inclusion, antiracism, and accessibility (DEIA). Clinical studies have already reported severe (even detrimental) impacts on patient populations when AI is widely adopted without validation.4 This problem is further magnified when AI is trained on limited datasets that are inherently exclusionary and then applied to marginalized groups.5,6 Fast forward to the year 2050, when hopefully the publishing landscape includes affordable AI tools developed on robust datasets—empowering efficient editorial workflows, improved searchability, automated accurate language translation, possible alternative formats for both writers and readers, […]
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations