This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Mitigating Knowledge Degradation Caused by Knowledge Editing on Identical Subjects through Two-Step Editing
Citations: 0
Authors: 5
Year: 2025
Abstract
Large Language Models (LLMs) acquire extensive factual knowledge from large-scale datasets and demonstrate remarkable performance across various tasks. However, because real-world knowledge is constantly changing, a model's knowledge must be modified or expanded over time. Knowledge editing techniques address this by correcting inaccurate or outdated information and injecting new facts, keeping the model current. Yet in existing subject-centered editing approaches, repeatedly editing the same subject can lead to knowledge degradation, where previously edited knowledge is forgotten. In this paper, we analyze the causes of this knowledge degradation and propose a two-step editing method that edits subjects and relations independently to mitigate the issue. Our method alleviates knowledge degradation more effectively than existing knowledge editing techniques, achieving average performance improvements of 22.9% in multi-edit scenarios and 7.2% in sequential editing.
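The sketch below is a minimal toy analogy of the problem the abstract describes, not the authors' method: facts are modeled as (subject, relation, object) triples, and an edit store keyed only on the subject loses earlier edits when the same subject is edited repeatedly, whereas keying on subject and relation separately keeps them apart. All names and data here are illustrative assumptions.

```python
# Toy analogy (assumption, not the paper's implementation): contrasting a
# subject-only edit store with one that separates subject and relation.
from typing import Dict, Tuple

Fact = Tuple[str, str, str]  # (subject, relation, object)


def subject_centered_edit(store: Dict[str, str], fact: Fact) -> None:
    """Edit keyed on the subject alone; a later edit on the same subject
    overwrites the earlier one, mimicking knowledge degradation."""
    subject, relation, obj = fact
    store[subject] = f"{relation} -> {obj}"


def two_step_edit(store: Dict[Tuple[str, str], str], fact: Fact) -> None:
    """Edit keyed on (subject, relation); edits on the same subject but
    different relations coexist."""
    subject, relation, obj = fact
    store[(subject, relation)] = obj


if __name__ == "__main__":
    edits = [
        ("Eiffel Tower", "located_in", "Paris"),
        ("Eiffel Tower", "height_m", "330"),  # same subject, different relation
    ]

    flat: Dict[str, str] = {}
    keyed: Dict[Tuple[str, str], str] = {}
    for fact in edits:
        subject_centered_edit(flat, fact)
        two_step_edit(keyed, fact)

    print(flat)   # only the second edit survives for "Eiffel Tower"
    print(keyed)  # both edits are retained
```

In the actual paper the stored "knowledge" lives in model parameters rather than a dictionary; the example only illustrates why collapsing all edits onto the subject can cause earlier edits to be forgotten.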