OpenAlex · Updated hourly · Last updated: May 16, 2026, 07:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Algorithmic Denials as Ungoverned Agency: A Substrate-Layer Regulatory Architecture for Punitive-Damages-Grade Liability in AI-Driven Insurance Denial Systems

2026 · 0 citations · Zenodo (CERN European Organization for Nuclear Research) · Open Access

Citations: 0
Authors: 1
Year: 2026

Abstract

AI-driven insurance denial systems are making consequential decisions about healthcare coverage and provider reimbursement at machine tempo, with algorithmic opacity, and with systematic audit resistance built into their design. They are agentic decision actors. They are not governed as agentic decision actors. This paper argues that the absence of substrate governance in algorithmic denial systems is not merely a regulatory compliance gap. It is the mechanism by which systematic harm is produced at scale while remaining legally deniable. The paper develops three claims. First, algorithmic denial systems satisfy every governance-relevant definition of an agentic system and should be classified as high-risk AI systems under emerging accountability frameworks. Second, the denial, downcoding, and payment delay patterns documented in healthcare insolvency litigation match the behavioral signatures of AI-driven denial systems whose deployment across major US insurers is independently documented—and those patterns are the predictable output of ungoverned agentic substrates. Third, the absence of substrate governance creates the conditions for punitive damages liability through a three-layer regulatory architecture that converts what is currently characterized as administrative error into recklessness per se. The paper proposes a complete regulatory architecture: three liability layers establishing substrate-level duties, operational conduct duties, and harm-proximity duties; a new statutory cause of action called Algorithmic Bad Faith; a four-component audit architecture providing mandatory model registries, real-time denial pattern monitoring, independent algorithmic auditors, and harm attribution protocols; and a punitive damages formula tied to financial benefit gained, scale of harm, foreseeability, governance failures, and statutory multipliers. Delayed payment is not a clerical issue. It is a financial weapon. Algorithmic denial systems are not tools. 
They are decision actors. The harm they produce is not a business failure. It is caused harm—caused by the governance choice not to build the substrate that would make the system’s behavior visible, auditable, and correctable.
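The abstract describes a punitive damages formula tied to financial benefit gained, scale of harm, foreseeability, governance failures, and statutory multipliers, but does not state its form. The sketch below is purely illustrative: the record fields, the linear scaling weights, and the multiplicative combination are all assumptions made for this example, not the paper's actual formula.

```python
from dataclasses import dataclass


@dataclass
class DenialHarmRecord:
    """Hypothetical inputs to a punitive damages calculation (illustrative)."""
    financial_benefit_gained: float  # insurer gain from denials/delays (USD)
    claimants_harmed: int            # scale of harm
    foreseeability: float            # 0.0 (unforeseeable) to 1.0 (certain)
    governance_failures: int         # count of substrate-level duty breaches
    statutory_multiplier: float      # e.g. a treble-damages-style multiplier


def punitive_damages(r: DenialHarmRecord) -> float:
    """Base benefit scaled by aggravating factors; weights are assumed."""
    scale_factor = 1.0 + 0.1 * r.claimants_harmed          # assumed linear
    governance_factor = 1.0 + 0.25 * r.governance_failures  # assumed linear
    return (r.financial_benefit_gained
            * scale_factor
            * r.foreseeability
            * governance_factor
            * r.statutory_multiplier)


example = DenialHarmRecord(
    financial_benefit_gained=1_000_000.0,
    claimants_harmed=50,
    foreseeability=0.9,
    governance_failures=3,
    statutory_multiplier=3.0,
)
print(f"${punitive_damages(example):,.0f}")
```

A multiplicative form is one plausible design choice here: it makes each aggravating factor compound the award rather than merely add to it, which matches the abstract's framing of governance failure as amplifying, not just accompanying, the harm.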

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Medical Malpractice and Liability Issues