This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Drug or Pokémon? An analysis of the ability of large language models to discern fabricated medications
Citations: 0
Authors: 15
Year: 2026
Abstract
LLMs are susceptible to adversarial attacks, especially with medications. Further model improvement is imperative before LLMs are considered safe and reliable for routine use in the medical field.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations
Authors
Institutions
- University of Colorado Denver (US)
- Augusta University Health (US)
- Augusta University (US)
- WellStar Health System (US)
- University of Montana (US)
- University of Colorado Anschutz Medical Campus (US)
- University of Georgia (US)
- Mayo Clinic in Arizona (US)
- Cleveland Clinic (US)
- Brigham and Women's Hospital (US)
- Harvard University (US)