This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
GRAIMATTER Public Summary: Recommendations for disclosure control of trained Machine Learning (ML) models from Trusted Research Environments (TREs)
Citations: 0
Authors: 19
Year: 2022
Abstract
GRAIMATTER has developed a draft set of usable recommendations for TREs to guard against the additional risks when disclosing trained AI models from TREs. This report provides a summary of our recommendations for a general public audience. The detailed Green Paper on recommendations can be found at DOI: 10.5281/zenodo.7089491. If you would like to provide feedback or would like to learn more, please contact Smarti Reel (sreel@dundee.ac.uk) and Emily Jefferson (erjefferson@dundee.ac.uk).
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations