Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.
Vulnerability of Large Language Models to Prompt Injection When Providing Medical Advice
4
Zitationen
6
Autoren
2025
Jahr
Abstract
In this quality improvement study using a controlled simulation, commercial LLMs demonstrated substantial vulnerability to prompt-injection attacks that could generate clinically dangerous recommendations; even flagship models with advanced safety mechanisms showed high susceptibility. These findings underscore the need for adversarial robustness testing, system-level safeguards, and regulatory oversight before clinical deployment.
Ähnliche Arbeiten
Rethinking the Inception Architecture for Computer Vision
2016 · 30.310 Zit.
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24.369 Zit.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21.289 Zit.
CBAM: Convolutional Block Attention Module
2018 · 21.234 Zit.
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18.483 Zit.