This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
PARMA: a Platform Architecture to enable Automated, Reproducible, and Multi-party Assessments of AI Trustworthiness
Citations: 3
Authors: 3
Year: 2024
Abstract
As AI applications emerge in diverse fields - e.g., industry, healthcare, or finance - weaknesses and failures of such applications may bear unacceptable risks, which need to be rigorously assessed, quantified, and, if necessary, mitigated. One crucial component of effective AI trustworthiness assessment and risk management is the systematic evaluation of the AI application based on properly chosen and executed tests. In addition to the known requirements of providing facilities for automated and reproducible tests, an assessment platform for Trustworthy AI must support the integration of different AI models and data sets, must be extensible with AI-risk-specific metrics and test tools, and should facilitate collaboration between model providers, assessment tool developers, and auditors. In this paper, we develop an architecture for a platform for automated, reproducible, and collaborative assessments of AI applications, based on an in-depth requirements analysis that maps use cases and collaboration scenarios to technical requirements.
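The abstract's extensibility requirement - that the platform accept AI-risk-specific metrics and test tools contributed by different parties - can be illustrated with a minimal plugin-registry sketch. All names below (`MetricRegistry`, `register`, `evaluate`) are hypothetical illustrations, not APIs from the paper:

```python
from typing import Any, Callable, Dict, Sequence

class MetricRegistry:
    """Hypothetical registry: assessment-tool developers register
    metrics by name, so the platform can be extended with new
    AI-risk-specific tests without changing the platform core."""

    def __init__(self) -> None:
        self._metrics: Dict[str, Callable[[Sequence[Any], Sequence[Any]], float]] = {}

    def register(self, name: str, fn: Callable[[Sequence[Any], Sequence[Any]], float]) -> None:
        # Each plugin supplies a callable mapping (predictions, labels) -> score.
        self._metrics[name] = fn

    def evaluate(self, name: str, predictions: Sequence[Any], labels: Sequence[Any]) -> float:
        # The auditor invokes a registered metric by name.
        return self._metrics[name](predictions, labels)

registry = MetricRegistry()
registry.register(
    "accuracy",
    lambda preds, labels: sum(p == y for p, y in zip(preds, labels)) / len(labels),
)
score = registry.evaluate("accuracy", [1, 0, 1], [1, 1, 1])  # 2 of 3 correct
```

A real platform would add versioning, sandboxed execution, and provenance tracking for reproducibility; this sketch only shows the extension point itself.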
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,326 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,218 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,111 citations