This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Leveraging In-Context Adversarial Augmentation for Improved Natural Language Inference Performance
Citations: 0
Authors: 7
Year: 2025
Abstract
Large Language Models (LLMs) applied to Natural Language Inference (NLI) tasks often suffer from shortcut learning, weak generalization, and high sensitivity to adversarial inputs. To address these issues, we propose a three-phase framework that integrates dataset preprocessing, in-context adversarial augmentation, and supervised fine-tuning with parameter-efficient techniques (LoRA). The approach leverages cosine similarity to generate semantically consistent adversarial samples, enriching the training data with more challenging premise-hypothesis-reason pairs. We evaluate the methodology on the ANLI dataset across multiple training regimes (R1, R2, R3, R1+R2, and Full ANLI). Experimental results demonstrate that the proposed approach substantially improves robustness and reasoning capability. Qwen3 4B achieves up to 87.2% accuracy and 87.05% Macro F1, nearly 8% higher than its baseline training results, while consistently outperforming LLaMA 3.2 3B under all settings. Hybrid strategies that combine adversarial augmentation with mixed training also show strong gains, reaching 85.4% accuracy and 85.27% Macro F1. These findings confirm that adversarial augmentation combined with efficient fine-tuning is an effective way to improve inference performance and can be scaled to broader NLI benchmarks.
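The abstract's cosine-similarity filter for keeping semantically consistent adversarial samples can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding vectors, the `filter_adversarial_candidates` helper, and the 0.8 threshold are all assumptions for the sake of the example; in practice the embeddings would come from a sentence encoder.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_adversarial_candidates(original_emb, candidates, threshold=0.8):
    """Keep only candidate adversarial hypotheses whose embedding stays
    semantically close to the original (similarity >= threshold).
    `candidates` is a list of (text, embedding) pairs.
    Hypothetical helper; threshold is an illustrative choice."""
    return [
        (text, emb)
        for text, emb in candidates
        if cosine_similarity(original_emb, emb) >= threshold
    ]

# Toy example with hand-made 3-d "embeddings":
original = [1.0, 0.0, 0.0]
candidates = [
    ("paraphrased hypothesis", [0.9, 0.1, 0.0]),   # near-duplicate meaning
    ("unrelated hypothesis", [0.0, 1.0, 0.0]),     # semantically distant
]
kept = filter_adversarial_candidates(original, candidates, threshold=0.8)
```

Only the paraphrased candidate survives the filter, mirroring the paper's goal of enriching training data with challenging yet label-consistent samples.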