This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
A Model for Regulating Artificial Intelligence Liability through Comparative Jurisprudence between the U.S. and EU
0
Citations
1
Author
2024
Year
Abstract
The rapid proliferation of Artificial Intelligence (AI) systems across sectors has intensified debates on liability, accountability, and governance in the event of harm caused by autonomous or semi-autonomous systems. This paper develops a model for regulating AI liability through a comparative jurisprudential analysis between the United States (U.S.) and the European Union (EU). It argues that divergent legal philosophies, namely common law pragmatism in the U.S. and regulatory formalism in the EU, have produced distinct yet complementary frameworks for attributing liability to AI developers, operators, and users. The model synthesizes doctrinal insights from U.S. tort law, product liability precedents, and emerging jurisprudence on algorithmic accountability with the EU's risk-based and precautionary approaches, as reflected in the proposed AI Act, revisions to the Product Liability Directive, and General Data Protection Regulation (GDPR) enforcement mechanisms. Through comparative examination, the study identifies structural asymmetries: while U.S. courts emphasize causation, foreseeability, and negligence standards, the EU's legislative model favors ex ante regulation, mandatory conformity assessments, and strict liability for high-risk AI systems. These differences underscore varying conceptions of fairness, innovation incentives, and consumer protection. The proposed regulatory model integrates best practices from both jurisdictions, advocating hybrid liability standards that combine the U.S. doctrines of proximate causation and reasonableness with the EU's tiered risk classification and harmonized accountability mechanisms. It further introduces an adaptive governance framework featuring algorithmic impact assessments, explainability audits, and mandatory insurance pools to ensure compensation where attribution is indeterminate.
The study concludes that an effective transatlantic AI liability regime must balance innovation with ethical restraint by embedding dynamic feedback loops between regulators, courts, and industry. Such a model encourages interoperability, global standards convergence, and equitable redress while fostering public trust in AI systems. Future research should explore the integration of causal inference models, document AI, and privacy-preserving compliance mechanisms to operationalize this hybrid framework. This work thus contributes to the emerging global discourse on AI law by offering a comparative, empirically grounded model for aligning accountability with technological complexity.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,502 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,855 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,376 citations
Fairness through awareness
2012 · 3,266 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations