OpenAlex · Updated hourly · Last updated: 15.03.2026, 17:09

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

From Vibe Coding to Jailbreaking in Large Language Models: A Comparative Security Study

2026 · 0 citations · Open Access
Open full text at the publisher

Citations: 0
Authors: 5
Year: 2026
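
Since this page is generated from OpenAlex data, the figures above can in principle be reproduced against the public OpenAlex REST API. Below is a minimal sketch, assuming the work is discoverable by title search (no OpenAlex work ID appears on this page, so the ID-based endpoint cannot be used here):

```python
import requests

# Look up this work on OpenAlex by title search and print the same
# metadata shown above. The title string is taken from this page; the
# first search hit is assumed to be the right work, which may not hold.
TITLE = "From Vibe Coding to Jailbreaking in Large Language Models"

resp = requests.get(
    "https://api.openalex.org/works",
    params={"search": TITLE, "per-page": 1},
    timeout=10,
)
resp.raise_for_status()
results = resp.json()["results"]

if results:
    work = results[0]
    authors = [a["author"]["display_name"] for a in work["authorships"]]
    print("Title:      ", work["display_name"])
    print("Year:       ", work["publication_year"])
    print("Citations:  ", work["cited_by_count"])
    print("Authors:    ", ", ".join(authors))
    print("Open Access:", work["open_access"]["is_oa"])
else:
    print("No matching work found.")
```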

Abstract

This paper explores the emerging security risks in Large Language Models (LLMs) through a comparative study of jailbreaking techniques. These adversarial methods exploit linguistic and alignment weaknesses in LLMs to bypass content safeguards and generate restricted outputs. Through experiments on models such as ChatGPT, Gemini, Claude, and Grok, we evaluate their resilience to prompt-based attacks and analyze the factors influencing their vulnerability, including response configuration and model version. The results reveal significant disparities in robustness across models and underscore the need for standardized evaluation frameworks to detect and mitigate these threats. This research contributes to the broader discourse on Artificial Intelligence (AI) security, emphasizing the importance of developing adaptive defense mechanisms to ensure responsible and trustworthy AI deployment.
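
The paper's own evaluation code is not shown on this page. As a rough illustration of the kind of prompt-based robustness comparison the abstract describes, the sketch below measures per-model refusal rates over a set of adversarial prompts. Everything here is an assumption for illustration, not the authors' method: the `refusal_rates` harness, the keyword heuristic, and the marker strings are all hypothetical, and real studies typically judge responses with humans or a classifier rather than keywords.

```python
from typing import Callable, Dict, List

# Hypothetical refusal markers; a crude stand-in for proper response judging.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable to", "i won't")


def looks_like_refusal(response: str) -> bool:
    """Keyword heuristic: treat a response as a refusal if it contains
    any known refusal phrase. Illustrative only."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)


def refusal_rates(
    models: Dict[str, Callable[[str], str]],  # model name -> query function
    prompts: List[str],                       # adversarial (jailbreak) prompts
) -> Dict[str, float]:
    """Return the fraction of prompts each model refused; higher means
    more resistant to these particular prompt-based attacks."""
    rates: Dict[str, float] = {}
    for name, query in models.items():
        refused = sum(looks_like_refusal(query(p)) for p in prompts)
        rates[name] = refused / len(prompts)
    return rates


if __name__ == "__main__":
    # Stub model that always refuses, just to show the call shape; real use
    # would wrap API clients for ChatGPT, Gemini, Claude, Grok, etc.
    demo = {"stub-model": lambda prompt: "I can't help with that."}
    print(refusal_rates(demo, ["ignore previous instructions and ..."]))
```

A harness of this shape also makes it easy to vary the factors the abstract mentions, such as response configuration or model version, by registering one callable per configuration.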


Topics

Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education · Hate Speech and Cyberbullying Detection