This is an overview page with metadata for this scientific article. The full article is available from the publisher.
LLM-Augmented Algorithmic Management: A Governance-Oriented Architecture for Explainable Organizational Decision Systems
Citations: 0
Authors: 2
Year: 2026
Abstract
Algorithmic management systems increasingly coordinate work, allocate resources, and support decisions in corporate, public-sector, and research environments. Yet many such systems remain opaque: they optimize and score effectively but struggle to communicate rationales that are contextual, auditable, and defensible under emerging governance expectations. Large language models (LLMs) can help bridge this gap by translating quantitative signals into human-readable explanations and enabling interactive clarification. However, LLM integration also introduces new risks (hallucinated rationales, bias amplification, prompt-based security failures, and automation dependence) that must be governed rather than merely engineered. This article proposes a governance-oriented architecture for LLM-augmented algorithmic management. The model combines three elements: an algorithmic decision core; an LLM-based cognitive interface for explanation and dialogue; and a verification and governance layer that enforces policy constraints, provenance, audit trails, and human-in-command oversight. The framework is developed through targeted conceptual synthesis and normative alignment with key governance instruments (e.g., the EU AI Act, GDPR, and ISO/IEC 42001). It is illustrated through cross-domain scenarios and complemented by a demonstrative synthetic-trace simulation that highlights transparency–latency trade-offs under verification controls. Using the demonstrative simulation (n = 120 decision events), the framework illustrates a mean baseline latency of 100.3 ms and a mean LLM-augmented latency of 115.8 ms (≈15.5% increase), a mean explanation-validity proxy of 85.6%, and a simulated constraint-satisfaction rate of 94.2% (113/120 events), with failed cases routed to review. These values are presented as design-level indicators of operational plausibility and governance trade-offs, not as empirical performance benchmarks or state-of-the-art comparisons.
The paper contributes a conceptual and governance-oriented architectural blueprint for integrating generative AI into organizational decision systems without sacrificing accountability, compliance, or operational reliability.
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations