This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
From β-Aware to β-Optimizing AI for Preference-Sensitive Clinical Decisions: Achieving Beneficence
Citations: 0
Authors: 2
Year: 2026
Abstract
AI decision-support systems are increasingly deployed in clinical settings where biological treatment effects are small and outcomes depend materially on patient choice. In prior work, we showed that such preference-sensitive decisions reveal a correctness failure of prediction-centric AI: systems trained to rank options by average biological effects act illegitimately under near equipoise unless they detect preference-sensitive regimes, defer premature ranking, and elicit patient preferences neutrally. That β-aware framework secures non-maleficence by enforcing restraint where biological superiority cannot decide. This paper addresses the downstream question: once safety is assured, what forms of optimization are causally, technically, and ethically admissible? We introduce β-optimization, a constrained framework for AI decision support in preference-sensitive care. Rather than inferring or shaping preferences, β-optimization treats explicitly elicited patient values and feasibility constraints as measured inputs and seeks improvement by maximizing concordance between evidence and what patients value and can sustain. We formalize concordance-based objectives appropriate to observer-dependent decision regimes, specify architectural constraints that preserve neutrality, deferral, and causal separation, and show how large language model-based systems can condition recommendations on elicited preferences without exercising illegitimate authority. We further propose concordance-first evaluation metrics (epistemic, practical, and decisional) for settings where prediction accuracy is ill-posed. Together, β-awareness and β-optimization define a regime-aware theory of AI decision support for preference-sensitive domains: first, do not decide when biology cannot decide; then, once preferences are measured, help decisions succeed without shaping them.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,479 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,364 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,543 citations