This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Rule-Based Moral Principles for Explaining Uncertainty in Natural Language Generation
Citations: 1
Authors: 2
Year: 2025
Abstract
As large language models (LLMs) are increasingly used in high-stakes applications, the challenge of explaining uncertainty in natural language generation has become both a technical and moral imperative. Traditional approaches rely on probabilistic methods that are often opaque, difficult to interpret, and misaligned with human expectations of transparency and accountability. In response to these limitations, this paper introduces a novel framework based on rule-based moral principles (simple, human-inspired ethical guidelines) for responding to uncertainty in LLM-generated text. Drawing on insights from experimental moral psychology and virtue ethics, we define a set of symbolic behavioral rules such as precaution, deference, and responsibility to guide system responses under conditions of epistemic or aleatoric uncertainty. These rules are implemented declaratively and are designed to generate adaptive, context-sensitive explanations even in the absence of precise confidence metrics. The moral principles are encoded as symbolic rules within a lightweight Prolog-based engine, where each uncertainty tag (low, medium, high) activates an ethically aligned system action along with an automatically generated, plain-language rationale. We evaluate the framework through scenario-based simulations that benchmark rule coverage, assess fairness implications, and analyze trust calibration. An interpretive explanation module is integrated to reveal both the assigned uncertainty level and its underlying justification in a transparent and accessible way. We illustrate the framework through hypothetical yet plausible use cases in clinical and legal domains, demonstrating how rule-based moral reasoning can enhance user trust, promote fairness, and improve the interpretability of AI-generated language.
By offering a lightweight, philosophically grounded alternative to probabilistic uncertainty modeling, our approach paves the way for more ethical, human-aligned, and socially responsible natural language generation.
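The abstract describes a mapping in which each uncertainty tag (low, medium, high) activates an ethically aligned action together with a plain-language rationale. The paper implements this as symbolic rules in a Prolog engine; as a rough illustration only, the same idea can be sketched in Python. All rule contents below (action names, principle assignments, rationale wording) are hypothetical placeholders, not the authors' actual rule set.

```python
# Hypothetical sketch of the tag -> (action, principle, rationale) mapping
# described in the abstract. Names and wording are assumptions, not taken
# from the paper, which encodes these rules declaratively in Prolog.

RULES = {
    "low": ("answer", "responsibility",
            "Uncertainty is low; the answer is given with a brief confidence note."),
    "medium": ("hedge", "precaution",
               "Uncertainty is moderate; the answer is hedged and caveats are stated."),
    "high": ("defer", "deference",
             "Uncertainty is high; the system defers to a human expert."),
}

def explain(tag: str) -> dict:
    """Return the activated action, its governing moral principle,
    and an automatically attached plain-language rationale."""
    action, principle, rationale = RULES[tag]
    return {"tag": tag, "action": action,
            "principle": principle, "rationale": rationale}

print(explain("high")["action"])  # defer
```

In a declarative Prolog implementation, each entry would instead be a fact or rule matched by the engine, which is what makes the rule base easy to inspect and extend without touching procedural code.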
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations