This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
AI “Black Box” decisions & legal accountability to humanities
0
Citations
1
Author
2026
Year
Abstract
The AI “Black Box” Decisions and Legal Accountability focuses on a growing problem: many modern artificial intelligence systems, especially those based on complex machine-learning models, make decisions in ways that are not easily understandable to humans, even to their developers, which creates serious challenges for law, ethics, and governance. These “black box” systems are increasingly used in high-stakes areas such as criminal justice, healthcare, hiring, credit scoring, surveillance, and social services, where their decisions can significantly affect people’s rights, freedoms, and opportunities. The article explains that traditional legal frameworks rely on transparency, explainability, and the ability to assign responsibility when harm occurs, but black box artificial intelligence disrupts these principles because it is often impossible to clearly explain how a specific outcome was produced or to identify who should be held accountable: the programmer, the company deploying the system, the data providers, or the artificial intelligence itself. This lack of explainability undermines due process, as affected individuals may be unable to challenge or appeal automated decisions, and it weakens regulatory oversight because authorities cannot easily audit or verify whether systems comply with legal standards such as non-discrimination or proportionality. The abstract also highlights the tension between innovation and accountability: while highly complex models often deliver better performance, their opacity conflicts with legal demands for justification and traceability. To address this, the abstract discusses emerging responses, including explainable artificial intelligence (XAI) techniques, algorithmic impact assessments, documentation requirements, and shifts toward outcome-based accountability rather than full technical transparency.
It also considers whether existing liability regimes, such as negligence, product liability, or administrative law, are sufficient, or whether new legal categories and obligations are needed to govern artificial intelligence decision-making. Ultimately, the abstract argues that resolving the black box problem is not purely a technical task but a socio-legal one requiring collaboration between technologists, lawmakers, and institutions, balancing accuracy, fairness, transparency, and accountability. It concludes that without meaningful mechanisms to explain, contest, and assign responsibility for artificial intelligence decisions, public trust in automated systems will erode, and the use of artificial intelligence in critical domains may conflict with fundamental legal principles such as justice, equality before the law, and human oversight [1].
Related Works
The global landscape of AI ethics guidelines
2019 · 4,502 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,855 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,376 citations
Fairness through awareness
2012 · 3,266 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations