This is an overview page with metadata for this scientific work. The full article is available from the publisher.
No for Some, Yes for Others: Persona Prompts and Other Sources of False Refusal in Language Models
Citations: 0
Authors: 6
Year: 2025
Abstract
Large language models (LLMs) are increasingly integrated into our daily lives and personalized. However, LLM personalization might also increase unintended side effects. Recent work suggests that persona prompting can lead models to falsely refuse user requests. However, no work has fully quantified the extent of this issue. To address this gap, we measure the impact of 15 sociodemographic personas (based on gender, race, religion, and disability) on false refusal. To control for other factors, we also test 16 different models, 3 tasks (Natural Language Inference, politeness, and offensiveness classification), and nine prompt paraphrases. We propose a Monte Carlo-based method to quantify this issue in a sample-efficient manner. Our results show that as models become more capable, personas impact the refusal rate less and less. Certain sociodemographic personas increase false refusal in some models, which suggests underlying biases in the alignment strategies or safety mechanisms. However, we find that the model choice and task significantly influence false refusals, especially in sensitive content tasks. Our findings suggest that persona effects have been overestimated, and might be due to other factors.
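The abstract mentions a Monte Carlo-based method for quantifying false refusal in a sample-efficient manner, but does not spell out the procedure. As a rough illustration only (not the paper's actual method), the core idea of Monte Carlo estimation here would be to sample (persona, prompt) combinations uniformly at random instead of evaluating the full grid, and to report the refusal rate with its standard error. All names below (`monte_carlo_refusal_rate`, `toy_is_refusal`) are hypothetical:

```python
import math
import random

def monte_carlo_refusal_rate(personas, prompts, is_refusal, n_samples=1000, seed=0):
    """Estimate the refusal rate by uniformly sampling (persona, prompt)
    pairs rather than exhaustively evaluating every combination."""
    rng = random.Random(seed)
    refusals = 0
    for _ in range(n_samples):
        persona = rng.choice(personas)
        prompt = rng.choice(prompts)
        refusals += int(is_refusal(persona, prompt))
    rate = refusals / n_samples
    # Standard error of a Bernoulli proportion estimate.
    se = math.sqrt(rate * (1 - rate) / n_samples)
    return rate, se

# Toy stand-in for a model call: pretend exactly one persona always refuses.
personas = ["neutral", "persona_a", "persona_b"]
prompts = [f"prompt_{i}" for i in range(50)]

def toy_is_refusal(persona, prompt):
    return persona == "persona_a"  # deterministic toy behavior, not a real model

rate, se = monte_carlo_refusal_rate(personas, prompts, toy_is_refusal, n_samples=3000)
```

With the toy setup, `rate` should land near 1/3 (the fraction of sampled pairs using the refusing persona), and `se` shrinks as `n_samples` grows, which is what makes subsampling viable when model queries are expensive.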