This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
ChatGPT in the Dock: Reflections on the Future of Criminal Liability
Citations: 0
Authors: 1
Year: 2025
Abstract
Background: This study examines the legal challenges posed by generative AI. It highlights the limitations of traditional criminal liability frameworks in addressing harm caused by AI outputs and explores new models of liability designed to ensure accountability while protecting individual rights in the age of intelligent machines. Generative AI, exemplified by ChatGPT, has evolved from a mere computational tool into a cognitive agent capable of content creation, problem-solving, and decision-making. This evolution challenges traditional criminal law frameworks, raising complex questions about the attribution of liability when AI-generated outputs result in harm or criminal conduct. The study examines these dilemmas, focusing on the shortcomings of conventional concepts of criminal liability and the need for new legal paradigms.

Methods: The research employs a descriptive-analytical and comparative methodology. It analyses national and international legislation, legal principles, and contemporary jurisprudence, with a focus on the European Artificial Intelligence Act (2024) as a model. The study examines AI's autonomous capabilities, the opacity of algorithmic decision-making, and the challenges of establishing causal links between AI actions and resulting harms. Case studies are used to explore potential liability models, including preventive liability and the concept of an "artificial actor."

Results and Conclusions: The study finds that traditional frameworks of criminal accountability are inadequate for AI systems like ChatGPT, given their partial autonomy and algorithmic complexity. It highlights the potential for expanding liability to developers, operators, and users, and the necessity of flexible legal models that combine preventive, administrative, and criminal measures. The research underscores the importance of integrating legal innovation with technological oversight to safeguard individual rights while maintaining the deterrent and protective functions of criminal law.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations