This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Decoupling Reasoning and Reward: A Modular Approach for Stable Alignment of Small Clinical Language Models
Citations: 0 · Authors: 2 · Year: 2026
Abstract
Deploying language models (LMs) in clinical settings requires navigating competing demands among accuracy, auditability, and on-device efficiency for privacy. While smaller LMs are desirable for this purpose, aligning them with methods like Group Relative Policy Optimization (GRPO) is often hindered by training instability and objective conflicts. Prior work has shown that Chain-of-Thought (CoT) supervision during supervised fine-tuning (SFT) can stabilize GRPO, but existing approaches typically entangle these objectives within a single, monolithic model. In this work, we introduce a modular, adapter-based alignment framework that decouples reasoning supervision and reward tuning into separate, composable parameter-efficient fine-tuning (PEFT) stages using LoRA adapters. We evaluate five alignment configurations on a medically grounded question-answering dataset, using Qwen2.5 models from 0.5B to 7B parameters to analyze how alignment stability, factual accuracy, and structural auditability scale with model size. Our findings demonstrate that this modular approach resolves key training instabilities, especially in smaller models, and produces structurally consistent, auditable reasoning without sacrificing accuracy. To support further research, we release, upon publication, (1) our dataset comprising over 100K clinically relevant QA pairs with CoT traces and (2) our multi-stage alignment codebase. We conclude that decoupling reasoning and reward offers a flexible and robust foundation for building privacy-preserving, verifiably aligned clinical LLMs that successfully address the competing demands of clinical AI.
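To make the two-stage design concrete, the following is a minimal sketch of the decoupled pipeline the abstract describes, assuming Hugging Face `trl` and `peft`. The dataset file, column names, reward function, `<think>` tag format, and LoRA hyperparameters are illustrative assumptions, not the authors' configuration; their dataset and multi-stage codebase are released only upon publication.

```python
# Minimal sketch of a decoupled, adapter-based alignment pipeline: stage 1
# trains a LoRA adapter with CoT-supervised SFT, stage 2 trains a separate
# LoRA adapter with GRPO. All file names, columns, and rewards are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, PeftModel
from trl import GRPOConfig, GRPOTrainer, SFTConfig, SFTTrainer

BASE = "Qwen/Qwen2.5-0.5B-Instruct"  # smallest of the model sizes studied
cot_data = load_dataset("json", data_files="clinical_qa_cot.jsonl")["train"]  # hypothetical CoT QA file

lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")

# Stage 1 -- reasoning supervision: SFT on QA pairs with CoT traces, captured
# entirely in a dedicated "reasoning" LoRA adapter.
sft = SFTTrainer(
    model=BASE,
    train_dataset=cot_data,  # assumes a "text" column: question + CoT trace + answer
    args=SFTConfig(output_dir="adapters/reasoning"),
    peft_config=lora,
)
sft.train()
sft.save_model("adapters/reasoning")

# Hold stage 1 fixed before reward tuning. Merging is one simple way to do
# this; the paper's exact adapter-composition strategy may differ.
policy = PeftModel.from_pretrained(
    AutoModelForCausalLM.from_pretrained(BASE), "adapters/reasoning"
).merge_and_unload()

def structured_reward(completions, **kwargs):
    # Placeholder reward favoring an auditable reasoning block; the paper's
    # actual reward design is not specified in the abstract.
    return [1.0 if "<think>" in c and "</think>" in c else 0.0 for c in completions]

# Stage 2 -- reward tuning: GRPO trains a fresh LoRA adapter on top of the
# frozen reasoning-tuned model, keeping the two objectives in separate modules.
grpo = GRPOTrainer(
    model=policy,
    reward_funcs=structured_reward,
    train_dataset=cot_data.map(lambda ex: {"prompt": ex["question"]}),  # GRPO expects a "prompt" column
    args=GRPOConfig(output_dir="adapters/reward"),
    peft_config=lora,
    processing_class=AutoTokenizer.from_pretrained(BASE),
)
grpo.train()
```

Under this reading, keeping the reward-stage adapter separate from the reasoning-stage adapter is what makes the stages composable: either module can be swapped, retrained, or audited without touching the other.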
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations