This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
From theory to practice: Harmonizing taxonomies of trustworthy AI
Citations: 5
Authors: 6
Year: 2024
Abstract
The increasing capabilities of AI pose new risks and vulnerabilities for organizations and decision makers. Several trustworthy AI frameworks have been created by U.S. federal agencies and international organizations to outline the principles to which AI systems must adhere for their use to be considered responsible. Different trustworthy AI frameworks reflect the priorities and perspectives of different stakeholders, and there is no consensus on a single framework yet. We evaluate the leading frameworks and provide a holistic perspective on trustworthy AI values, allowing federal agencies to create agency-specific trustworthy AI strategies that account for unique institutional needs and priorities. We apply this approach to the Department of Veterans Affairs, which operates the largest health care system in the United States. Further, we contextualize our framework from the perspective of the federal government, showing how existing trustworthy AI frameworks can be leveraged to develop a set of guiding principles that provide the foundation for an agency to design, develop, acquire, and use AI systems in a manner that simultaneously fosters trust and confidence and meets the requirements of established laws and regulations.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 cit.