This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Report prepared by the Montreal AI Ethics Institute (MAIEI) on Publication Norms for Responsible AI
Citations: 1
Authors: 3
Year: 2020
Abstract
The history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. In order to ensure that the science and technology of AI is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of AI's potential threats and use cases. Unfortunately, it is difficult to create a set of publication norms for responsible AI because the field of AI is currently fragmented in terms of how this technology is researched, developed, funded, etc. To examine this challenge and find solutions, the Montreal AI Ethics Institute (MAIEI) co-hosted two public consultations with the Partnership on AI in May 2020. These meetups examined potential publication norms for responsible AI, with the goal of creating a clear set of recommendations and ways forward for publishers.

In its submission, MAIEI provides six initial recommendations: 1) create tools to navigate publication decisions, 2) offer a page number extension, 3) develop a network of peers, 4) require broad impact statements, 5) require the publication of expected results, and 6) revamp the peer-review process. After considering potential concerns regarding these recommendations, including constraining innovation and creating a "black market" for AI research, MAIEI outlines three ways forward for publishers: 1) state clearly and consistently the need for established norms, 2) coordinate and build trust as a community, and 3) change the approach.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations