Dan Jurafsky
Stanford University · US
Relevant Works
Most-cited publications in Health & MedTech
Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study
2023 · 403 citations · The Lancet Digital Health
Dialect prejudice predicts AI decisions about people's character, employability, and criminality
2024 · 30 citations · arXiv (Cornell University)
Coding Inequity: Assessing GPT-4’s Potential for Perpetuating Racial and Gender Biases in Healthcare
2023 · 30 citations
Using Large Language Models to Promote Health Equity
2025 · 14 citations · NEJM AI
Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models
2023 · 11 citations
Advancing science- and evidence-based AI policy
2025 · 4 citations · Science
Language models cannot reliably distinguish belief from knowledge and fact
2025 · 4 citations · Nature Machine Intelligence
Labeling messages as AI-generated does not reduce their persuasive effects
2026 · 1 citation · PNAS Nexus
Beyond Tokens: Concept-Level Training Objectives for LLMs
2026 · 0 citations · arXiv (Cornell University)
Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models
2022 · 0 citations · arXiv (Cornell University)
Accommodation and Epistemic Vigilance: A Pragmatic Account of Why LLMs Fail to Challenge Harmful Beliefs
2026 · 0 citations · arXiv (Cornell University)
The Roots of Performance Disparity in Multilingual Language Models: Intrinsic Modeling Difficulty or Design Choices?
2026 · 0 Zit. · arXiv (Cornell University)