This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
A Roadmap for Alignable Algorithmic Decision-Makers in the Medical Triage Domain
Citations: 0 · Authors: 8 · Year: 2025
Abstract
Artificial intelligence (AI) is increasingly being used in low- and high-stakes decision-making. However, safe and responsible use of AI decision-making systems must also consider human values and characteristics. A promising research direction is to develop novel methods and techniques to align AI systems with human values and intentions, potentially reducing undesirable or harmful behaviors while promoting greater human trust. In this paper, we highlight several promising approaches to this AI alignment problem, focusing on the use of large language models (LLMs) as alignable decision-makers. Specifically, these alignment approaches include several novel prompt-based techniques (using zero- or few-shot learning, persona narratives, or training on a large dataset of pluralistic values) and a technique based on transforming output word embeddings. We demonstrate the feasibility of these approaches for difficult decision-making in the medical triage domain, while also providing several promising future research directions to pursue.
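The prompt-based alignment approaches mentioned in the abstract (zero-shot prompting with persona narratives) can be sketched as simple prompt construction. The following is a minimal, hypothetical illustration; the function name, persona text, and patient fields are assumptions for illustration, not taken from the paper:

```python
# Illustrative sketch (not the paper's implementation): composing a
# persona-conditioned zero-shot prompt for an LLM triage decision-maker.

def build_persona_prompt(persona: str, patient_summary: str, options: list[str]) -> str:
    """Compose a prompt that conditions the model on a value persona."""
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    return (
        f"You are a medical triage decision-maker. {persona}\n\n"
        f"Patient: {patient_summary}\n"
        f"Options:\n{numbered}\n"
        "Answer with the number of the option that best reflects your values."
    )

# Hypothetical usage: the persona narrative encodes the value to align with.
prompt = build_persona_prompt(
    persona="You place the highest priority on maximizing lives saved.",
    patient_summary="Two casualties: one critical, one stable.",
    options=["Treat the critical casualty first", "Treat the stable casualty first"],
)
```

Few-shot variants would prepend worked triage examples before the patient description; the persona narrative is what carries the alignment signal in either case.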
Related Works
"Why Should I Trust You?"
2016 · 14,204 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,582 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,382 citations