OpenAlex · Aktualisierung stündlich · Letzte Aktualisierung: 20.03.2026, 06:16

Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.

An Evaluation of the Safety of ChatGPT with Malicious Prompt Injection

2024·13 ZitationenOpen Access
Volltext beim Verlag öffnen

13

Zitationen

2

Autoren

2024

Jahr

Abstract

<title>Abstract</title> Artificial intelligence systems, particularly those involving sophisticated neural network architectures like ChatGPT, have demonstrated remarkable capabilities in generating human-like text. However, the susceptibility of these systems to malicious prompt injections poses significant risks, necessitating comprehensive evaluations of their safety and robustness. The study presents a novel automated framework for systematically injecting and analyzing malicious prompts to assess the vulnerabilities of ChatGPT. Results indicate substantial rates of harmful responses across various scenarios, highlighting critical areas for improvement in model defenses. The findings underscore the importance of advanced adversarial training, real-time monitoring, and interdisciplinary collaboration to enhance the ethical deployment of AI systems. Recommendations for future research emphasize the need for robust safety mechanisms and transparent model operations to mitigate the risks associated with adversarial inputs.

Ähnliche Arbeiten

Autoren

Institutionen

Themen

Adversarial Robustness in Machine LearningExplainable Artificial Intelligence (XAI)Artificial Intelligence in Healthcare and Education
Volltext beim Verlag öffnen