This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Self-Certification of High-Risk AI Systems: The Example of AI-based Facial Emotion Recognition
Citations: 0
Authors: 3
Year: 2026
Abstract
The European Union's Artificial Intelligence Act establishes comprehensive requirements for high-risk AI systems, yet the harmonized standards necessary for demonstrating compliance are not yet fully developed. In this paper, we investigate the practical application of the Fraunhofer AI assessment catalogue as a certification framework through a complete self-certification cycle of an AI-based facial emotion recognition system. Beginning with a baseline model exhibiting deficiencies, including inadequate demographic representation and high prediction uncertainty, we document an enhancement process guided by AI certification requirements. The enhanced system achieves higher accuracy with improved reliability metrics and comprehensive fairness across demographic groups. We focused our assessment on two of the six Fraunhofer catalogue dimensions, reliability and fairness; the enhanced system satisfies the certification criteria for both examined dimensions. We find that the certification framework provides value as a proactive development tool, driving concrete technical improvements and generating documentation naturally through integration into the development process. However, fundamental gaps separate structured self-certification from legal compliance: harmonized European standards are not yet fully available, and AI assessment frameworks and catalogues cannot substitute for them on their own. These findings establish the Fraunhofer AI assessment catalogue as a valuable preparatory tool that, at present, complements rather than replaces formal compliance requirements.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?"
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations