This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Reducing malicious use of synthetic media research: Considerations and potential release practices for machine learning
Citations: 8
Authors: 2
Year: 2019
Abstract
The aim of this paper is to facilitate nuanced discussion around research norms and practices to mitigate the harmful impacts of advances in machine learning (ML). We focus particularly on the use of ML to create "synthetic media" (e.g. to generate or manipulate audio, video, images, and text), and the question of what publication and release processes around such research might look like, though many of the considerations discussed will apply to ML research more broadly. We are not arguing for any specific approach on when or how research should be distributed, but instead try to lay out some useful tools, analogies, and options for thinking about these issues.

We begin with some background on the idea that ML research might be misused in harmful ways, and why advances in synthetic media, in particular, are raising concerns. We then outline in more detail some of the different paths to harm from ML research, before reviewing research risk mitigation strategies in other fields and identifying components that seem most worth emulating in the ML and synthetic media research communities. Next, we outline some important dimensions of disagreement on these issues which risk polarizing conversations.

Finally, we conclude with recommendations, suggesting that the machine learning community might benefit from: working with subject matter experts to increase understanding of the risk landscape and possible mitigation strategies; building a community and norms around understanding the impacts of ML research, e.g. through regular workshops at major conferences; and establishing institutions and systems to support release practices that would otherwise be onerous and error-prone.
Related works
The global landscape of AI ethics guidelines
2019 · 4,566 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,865 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,411 citations
Fairness through awareness
2012 · 3,276 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations