This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Security Threats in the Inference Phase of Large Language Models
Citations: 0
Authors: 5
Year: 2025
Abstract
In recent years, Large Language Models (LLMs) have achieved remarkable advancements in natural language understanding, contextual modeling, and human-like reasoning. However, these capabilities have also unveiled a spectrum of critical security vulnerabilities, including logical inconsistencies during inference, linguistic and semantic disruptions, and entrenched cognitive biases. Such deficiencies can cause LLMs to produce inaccurate, misleading, or even harmful outputs, posing significant risks to high-stakes applications such as decision support systems, legal consultation, and medical diagnostics. This paper presents a systematic analysis of security threats arising during the inference phase of LLMs. By examining the inference pipeline across its distinct stages and the varying levels of abstraction at which threats manifest, we propose a comprehensive taxonomy of potential security risk factors. Furthermore, we conduct targeted testing and validation to assess the impact of these threats. Our study aims to provide both theoretical insights and practical guidance for the development and deployment of safer and more reliable LLM systems.
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,326 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,398 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,297 citations
CBAM: Convolutional Block Attention Module
2018 · 21,274 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,491 citations