This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Can Risks to Fundamental Rights Arising from AI Systems Be ‘Managed’ Alongside Health and Safety Risks? Implementing Article 9 of the EU AI Act
Citations: 0 · Authors: 1 · Year: 2026
Abstract
The EU’s AI Act introduces various ‘essential requirements’ with which providers of ‘high-risk AI systems’ must comply before their AI system can be placed on the market or put into service, including an obligation under article 9 to establish and maintain a risk management system that reduces risks to ‘health, safety and fundamental rights’ to a level ‘judged acceptable’. Although safety risk management systems are well established under the EU’s ‘New Approach’ to product safety (to which the AI Act also belongs), extending those systems to encompass ‘fundamental rights’ risks introduces novel challenges. This chapter demonstrates that it is theoretically possible to devise and maintain a single integrated risk management system, modelled on risk management systems for safety-critical products, that encompasses risks to health, safety, and fundamental rights in order to comply with article 9. However, this entails complex theoretical, conceptual, interdisciplinary, and practical challenges, particularly given uncertainty surrounding the meaning of key terms and unresolved questions concerning the extent to which fundamental rights can be asserted ‘horizontally’ against private parties. Implementation of article 9 offers rich opportunities to develop integrated cross-disciplinary methods for managing risks and threats to health, safety, and fundamental rights in collaboration with affected stakeholder groups. However, there is a serious danger that article 9 risk management systems will, in practice, be ‘theatres of compliance’, appearing to take health, safety, and fundamental rights seriously even as the deployment of AI technologies further undermines respect for the individual dignity and freedom upon which democracy depends.
Related works

The global landscape of AI ethics guidelines
2019 · 4,640 citations

The Limitations of Deep Learning in Adversarial Settings
2016 · 3,878 citations

Trust in Automation: Designing for Appropriate Reliance
2004 · 3,465 citations

Fairness through awareness
2012 · 3,295 citations

Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations