This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The perils of politeness: how large language models may amplify medical misinformation
Citations: 2
Authors: 5
Year: 2025
Abstract
Chen et al. demonstrate that large language models (LLMs) frequently prioritize agreement over accuracy when responding to illogical medical prompts, a behavior known as sycophancy. By reinforcing user assumptions, this tendency may amplify misinformation and bias in clinical contexts. The authors find that simple prompting strategies and LLM fine-tuning can markedly reduce sycophancy without impairing performance, highlighting a path toward safer, more trustworthy applications of LLMs in medicine.
Related works
The spread of true and false news online
2018 · 8,017 citations
What is Twitter, a social network or a news media?
2010 · 6,638 citations
Social Media and Fake News in the 2016 Election
2017 · 6,405 citations
Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children's understanding of deception
1983 · 6,254 citations
The Matthew Effect in Science
1968 · 6,138 citations