This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The Trustworthy AI Maturity Model (TAIMM): Integrating ethics and regulation across the AI lifecycle
Citations: 0
Authors: 2
Year: 2026
Abstract
This paper introduces the Trustworthy AI Maturity Model (TAIMM), a lifecycle-based evaluation framework designed to support the development of ethically aligned and legally compliant Artificial Intelligence (AI) systems. TAIMM responds to the implementation gap in current AI governance approaches, which often endorse high-level ethical principles but lack tools for operationalising them in practice. Focusing on high-risk systems under the European Union’s AI Act, the model maps the Act’s recitals to the seven principles of Trustworthy AI and integrates them into a structured governance framework grounded in established System Development Lifecycle models. Unlike general AI management standards such as ISO 42001, TAIMM provides a maturity-oriented diagnostic tool tailored to the AI lifecycle. The framework includes three stage-specific questionnaires covering design, development, and operation, with each item explicitly labelled with its corresponding ethical principle. This approach enables both actionable self-assessment and quantitative analysis of ethical coverage, revealing disparities in emphasis and highlighting underrepresented principles. By translating abstract regulatory and ethical expectations into practical, auditable instruments, TAIMM advances responsible AI governance through a scalable, transparent, and context-aware evaluation model.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,660 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,879 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,485 citations
Fairness through awareness
2012 · 3,296 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations