OpenAlex · Updated hourly · Last updated: 16.03.2026, 04:09

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Security Threats in the Inference Phase of Large Language Models

2025 · 0 citations
Open full text at publisher

Citations: 0
Authors: 5
Year: 2025

Abstract

In recent years, Large Language Models (LLMs) have achieved remarkable advancements in natural language understanding, contextual modeling, and human-like reasoning. However, these capabilities have also unveiled a spectrum of critical security vulnerabilities, including logical inconsistencies during inference, linguistic and semantic disruptions, and entrenched cognitive biases. Such deficiencies can cause LLMs to produce inaccurate, misleading, or even harmful outputs, posing significant risks to high-stakes applications such as decision support systems, legal consultation, and medical diagnostics. This paper presents a systematic analysis of security threats arising during the inference phase of LLMs. By examining the inference pipeline across its distinct stages and the varying levels of abstraction at which threats manifest, we propose a comprehensive taxonomy of potential security risk factors. Furthermore, we conduct targeted testing and validation to assess the impact of these threats. Our study aims to provide both theoretical insights and practical guidance for the development and deployment of safer and more reliable LLM systems.

Topics

Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI)