OpenAlex · Updated hourly · Last updated: 18.03.2026, 12:05

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

PARMA: a Platform Architecture to enable Automated, Reproducible, and Multi-party Assessments of AI Trustworthiness

2024 · 3 citations · Open Access
Open full text at the publisher

Citations: 3

Authors: 3

Year: 2024

Abstract

As AI applications emerge in diverse fields - e.g., industry, healthcare, or finance - weaknesses and failures of such applications may pose unacceptable risks that need to be rigorously assessed, quantified, and, if necessary, mitigated. A crucial component of effective AI trustworthiness assessment and risk management is the systematic evaluation of the AI application based on properly chosen and executed tests. In addition to the known requirements of providing facilities for automated and reproducible tests, an assessment platform for Trustworthy AI must support the integration of different AI models and data sets, must be extensible with AI-risk-specific metrics and test tools, and should facilitate collaboration between model providers, assessment tool developers, and auditors. In this paper, we develop an architecture for a platform enabling automated, reproducible, and collaborative assessments of AI applications, based on an in-depth requirements analysis that maps use cases and collaboration scenarios to technical requirements.
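The extensibility requirement described in the abstract - adding AI-risk-specific metrics and test tools without modifying the platform core - can be illustrated with a plugin-style registry. This is a minimal, hypothetical sketch, not the paper's actual architecture; all names (`register_metric`, `run_assessment`) are illustrative assumptions.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical registry mapping metric names to evaluation functions.
# Third-party test-tool developers would add entries via the decorator
# below, without touching the assessment core.
METRICS: Dict[str, Callable] = {}

def register_metric(name: str):
    """Register an evaluation function (model, dataset) -> score under a name."""
    def wrap(fn: Callable) -> Callable:
        METRICS[name] = fn
        return fn
    return wrap

@register_metric("accuracy")
def accuracy(model: Callable, dataset: List[Tuple]) -> float:
    # Fraction of samples where the model's prediction matches the label.
    correct = sum(1 for x, y in dataset if model(x) == y)
    return correct / len(dataset)

def run_assessment(model: Callable, dataset: List[Tuple],
                   requested: List[str]) -> Dict[str, float]:
    """Run all requested registered metrics and return a report."""
    return {name: METRICS[name](model, dataset) for name in requested}

# Toy usage: a trivial "model" and a small labeled dataset.
model = lambda x: x > 0
dataset = [(1, True), (-1, False), (2, True), (-3, True)]
print(run_assessment(model, dataset, ["accuracy"]))  # {'accuracy': 0.75}
```

Because each metric is looked up by name at run time, reproducibility can be supported by recording the metric names and versions used in a given assessment run.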

Topics

Explainable Artificial Intelligence (XAI) · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education