This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Algorithmic sycophancy: A new source of systematic distortion in AI-driven biomedical research
Citations: 0
Authors: 2
Year: 2026
Abstract
Artificial intelligence (AI) systems based on large language models (LLMs) are becoming an increasingly important part of biomedical research, assisting with tasks ranging from research design to data analysis and publication. Although these systems increase productivity by reducing the time required for individual tasks, they also expose their users to a serious risk: systematic distortion of outputs caused by algorithmic sycophancy. The honesty of such AI systems is questionable and can readily break down when user prompts are incorrect or when the system is placed under pressure. This viewpoint outlines what algorithmic sycophancy is and the mechanisms thought to underlie it, and explains how it can lead to systematic distortion of biomedical research. Raising awareness of this issue is essential so that LLM-based AI systems are used with appropriate caution and systematic distortion of biomedical research is prevented. Understanding this threat can also help to limit the propagation of unreliable findings through the literature, which poses a significant safety risk to biomedical research as a whole.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 cit.