OpenAlex · Updated hourly · Last updated: 15.03.2026, 07:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI “Black Box” decisions & legal accountability to humanities

2026 · 0 citations · International Journal of Law Justice and Jurisprudence · Open Access
Open full text at publisher

0 citations · 1 author · Year: 2026

Abstract

AI “Black Box” Decisions and Legal Accountability addresses the growing problem that many modern artificial intelligence systems, especially those based on complex machine-learning models, make decisions in ways that are not easily understandable to humans, even to their developers, which creates serious challenges for law, ethics, and governance. These “black box” systems are increasingly used in high-stakes areas such as criminal justice, healthcare, hiring, credit scoring, surveillance, and social services, where their decisions can significantly affect people’s rights, freedoms, and opportunities. The article explains that traditional legal frameworks rely on transparency, explainability, and the ability to assign responsibility when harm occurs, but black-box artificial intelligence disrupts these principles: it is often impossible to explain clearly how a specific outcome was produced, or to identify who should be held accountable, whether the programmer, the company deploying the system, the data providers, or the artificial intelligence itself. This lack of explainability undermines due process, since affected individuals may be unable to challenge or appeal automated decisions, and it weakens regulatory oversight, because authorities cannot easily audit or verify whether systems comply with legal standards such as non-discrimination or proportionality. The abstract also highlights the tension between innovation and accountability: while highly complex models often deliver better performance, their opacity conflicts with legal demands for justification and traceability. To address this, the abstract discusses emerging responses, including explainable artificial intelligence (XAI) techniques, algorithmic impact assessments, documentation requirements, and shifts toward outcome-based accountability rather than full technical transparency.
It also considers whether existing liability regimes, such as negligence, product liability, or administrative law, are sufficient, or whether new legal categories and obligations are needed to govern artificial intelligence decision-making. Ultimately, the abstract argues that resolving the black box problem is not a purely technical task but a socio-legal one, requiring collaboration among technologists, lawmakers, and institutions to balance accuracy, fairness, transparency, and accountability. It concludes that without meaningful mechanisms to explain, contest, and assign responsibility for artificial intelligence decisions, public trust in automated systems will erode, and the use of artificial intelligence in critical domains may conflict with fundamental legal principles such as justice, equality before the law, and human oversight [1].


Topics

Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education · Law, AI, and Intellectual Property