Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.
Differences in Safety Risks across Languages for Health Large Language Models: A Cross-Language Vulnerability Study (Preprint)
0
Zitationen
4
Autoren
2025
Jahr
Abstract
<sec> <title>BACKGROUND</title> Large language models (LLMs) such as ChatGPT are increasingly used to support health-related queries and decision-making. However, these models can be “jailbroken” through adversarial prompts that bypass safety filters and elicit harmful or medically inappropriate responses. In healthcare contexts, such vulnerabilities pose serious risks. Understanding how jailbreak susceptibility varies across languages is essential for developing robust safeguards and promoting equitable access to safe health information. </sec> <sec> <title>OBJECTIVE</title> This study aims to systematically compare and contrast the vulnerability of a health LLM for jailbreaking across three languages: English, Spanish, and Hindi (transliterated using the Latin alphabet) based on emoji and permutation cipher attacks. </sec> <sec> <title>METHODS</title> We analyzed 1,000 input prompts per language, drawn from the BeaverTails dataset, across three harm categories: self-harm, violence, and drug abuse. Each prompt was modified using emoji and permutation cipher techniques, resulting in 6,000 input-output pairs. Model responses were evaluated by human coders to determine the success rate of jailbreak attempts across languages and cipher types. </sec> <sec> <title>RESULTS</title> Hindi prompts showed the highest vulnerability, with 787 successful jailbreaks using emoji ciphers and 873 using permutation ciphers. Spanish and English followed, with lower success rates across both cipher types. Differences in jailbreak success across languages and cipher strategies were statistically significant. Additionally, attacks targeting violence-related prompts were more successful overall than those targeting drug-related or self-harm content, indicating variation in vulnerability by harm type. </sec> <sec> <title>CONCLUSIONS</title> The findings of this formative study reveal that LLM safety performance varies substantially across languages and harm categories, raising concerns about equitable protection in multilingual health communication. Disparities in access to harmful content may contribute to downstream health risks. Strengthening multilingual content moderation and developing language-aware safety mechanisms are critical steps toward safer and more inclusive health AI systems. </sec>
Ähnliche Arbeiten
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8.439 Zit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8.315 Zit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7.756 Zit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5.781 Zit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5.526 Zit.