This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Patching LLM Like Software: A Lightweight Method for Improving Safety Policy in Large Language Models
Citations: 0
Authors: 6
Year: 2025
Abstract
We propose patching large language models (LLMs) like software versions: a lightweight and modular approach to addressing safety vulnerabilities. While vendors release improved LLM versions, major releases are costly, infrequent, and difficult to tailor to customer needs, leaving released models with known safety gaps. Unlike full-model fine-tuning or major version updates, our method enables rapid remediation by prepending a compact, learnable prefix to an existing model. This "patch" introduces only 0.003% additional parameters, yet reliably steers model behavior toward that of a safer reference model. Across three critical domains (toxicity mitigation, bias reduction, and harmfulness refusal), policy patches achieve safety improvements comparable to next-generation safety-aligned models while preserving fluency. Our results demonstrate that LLMs can be "patched" much like software, offering vendors and practitioners a practical mechanism for distributing scalable, efficient, and composable safety updates between major model releases.
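The abstract gives no implementation details, but the mechanism it describes (a compact, learnable prefix prepended to a frozen model) resembles standard prefix/soft-prompt tuning. Below is a minimal PyTorch sketch under that assumption; the class name, parameter sizes, and shapes are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class SafetyPatch(nn.Module):
    """Hypothetical 'policy patch': a small bank of learnable prefix
    embeddings prepended to a frozen base model's input embeddings."""

    def __init__(self, prefix_len: int, hidden_dim: int):
        super().__init__()
        # The only trainable parameters introduced by the patch.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_dim) produced by the frozen
        # model's embedding layer; the prefix is broadcast across the batch.
        batch_size = input_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch_size, -1, -1)
        return torch.cat([prefix, input_embeds], dim=1)


# Illustrative sizes only: a 10-vector prefix on a 4096-dim model is ~4e4
# parameters, on the order of 0.003% of a billion-parameter model.
patch = SafetyPatch(prefix_len=10, hidden_dim=4096)
dummy_embeds = torch.randn(2, 16, 4096)
print(patch(dummy_embeds).shape)  # torch.Size([2, 26, 4096])
```

In such a setup only the prefix parameters would be trained (for example, against outputs of a safer reference model), while the base model stays frozen, which is what makes the patch cheap to distribute and compose.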
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,378 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,475 citations
CBAM: Convolutional Block Attention Module
2018 · 21,373 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,322 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,514 citations