OpenAlex · Updated hourly · Last updated: 11.03.2026, 06:42

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Risk–Utility Optimization Framework for Governing Large Language Model Responses

2026 · 0 citations · Knowledge Commons (Lakehead University) · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2026

Abstract

Large language models (LLMs) are increasingly deployed in enterprise, public-sector, and consumer-facing settings where organizations must simultaneously pursue utility and constrain multiple categories of risk. In practice, governance choices rarely reduce to a binary distinction between "ship" and "do not ship." Instead, operators decide whether a response should be delivered automatically, escalated to human review, routed through layered review, or refused and transferred to a non-LLM channel. This paper develops a theory-first optimization framework for that governance problem. We model response governance as the selection of an action from a finite menu under joint constraints on hallucination risk, severe-output risk, latency, token expenditure, and human-review cost. The framework yields a constrained optimization problem whose Lagrangian interpretation provides a practical policy calculus: governance becomes a query-level action rule that maximizes expected net value after pricing residual harms and operational burdens. Under mild monotone single-crossing assumptions, the optimal policy admits a simple threshold structure in an estimated task-risk score. This recovers, as special cases of the same model, four governance regimes that matter in practice: fully automatic service, threshold-triggered human review, layered review, and refusal/transfer in extreme-risk regions. We derive comparative-statics results showing when review thresholds fall, when layered review strictly dominates one-stage review, and when refusal is preferable to further automated assistance. The contribution is not a new prediction algorithm but a formal decision framework that connects responsible AI governance, epistemic risk, and operational optimization in a tractable way that is suitable for organizational design, auditability, and regulatory interpretation.
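The threshold structure described in the abstract can be illustrated with a minimal sketch. This is not code from the paper: the function name, the risk-score scale, and all threshold values are hypothetical placeholders, chosen only to show how a single estimated task-risk score could partition queries into the four governance regimes under the monotone single-crossing assumption.

```python
# Hedged sketch (not from the paper): a threshold rule over an estimated
# task-risk score r in [0, 1]. Threshold values are illustrative only;
# in the paper's framework they would emerge from the Lagrangian pricing
# of residual harms and review costs.

def governance_action(risk_score: float,
                      t_review: float = 0.3,
                      t_layered: float = 0.6,
                      t_refuse: float = 0.85) -> str:
    """Map an estimated task-risk score to one of four governance regimes."""
    if not 0.0 <= risk_score <= 1.0:
        raise ValueError("risk score must lie in [0, 1]")
    if risk_score < t_review:
        return "automatic"        # deliver the response without review
    if risk_score < t_layered:
        return "human_review"     # threshold-triggered single-stage review
    if risk_score < t_refuse:
        return "layered_review"   # multi-stage review
    return "refuse_transfer"      # refuse and route to a non-LLM channel

print(governance_action(0.1))   # automatic
print(governance_action(0.9))   # refuse_transfer
```

Lowering `t_review` corresponds to the comparative-statics case in which review thresholds fall, e.g. when the priced cost of residual harm rises relative to the cost of human review.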

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · Explainable Artificial Intelligence (XAI)