OpenAlex · Updated hourly · Last updated: March 28, 2026, 10:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

POPI: Personalizing LLMs via Optimized Preference Inference

2025 · 0 citations · arXiv (Cornell University) · Open Access
Open full text at the publisher

Citations: 0 · Authors: 9 · Year: 2025

Abstract

Large language models (LLMs) are typically aligned with population-level preferences, despite substantial variation across individual users. While many LLM personalization methods exist, the underlying structure of user-level personalization is often left implicit. We formalize user-level, prompt-independent personalization as a decomposition into two components: preference inference and conditioned generation. We advocate for a modular design that decouples these components; identify natural language as a generator-agnostic interface between them; and characterize generator-transferability as a key implication of modular personalization. Guided by this abstraction, we introduce POPI, a novel instantiation of modular personalization that parameterizes both preference inference and conditioned generation as shared LLMs. POPI jointly optimizes the two components under a unified preference optimization objective, using reinforcement learning as an optimization tool. Across multiple benchmarks, POPI consistently improves personalization performance while reducing context overhead. We further demonstrate that the learned natural-language preference summaries transfer effectively to frozen, off-the-shelf LLMs, including black-box APIs, providing empirical evidence of modularity and generator-transferability.
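
The two-component decomposition the abstract describes is easy to picture in code. The following is a minimal, hypothetical sketch under stated assumptions, not POPI's actual implementation or training objective: the function names, prompts, and the generic `llm_call` interface are all illustrative.

```python
from typing import Callable

# Hypothetical sketch of modular personalization: preference inference
# produces a natural-language summary of a user, and conditioned
# generation consumes that summary as plain text. All names and prompts
# here are assumptions for illustration, not POPI's API.

def infer_preferences(llm_call: Callable[[str], str], history: list[str]) -> str:
    """Preference inference: distill a user's interaction history into a
    short natural-language preference summary."""
    prompt = (
        "Summarize this user's stable preferences (tone, style, format, "
        "topics) in 2-3 sentences:\n" + "\n".join(history)
    )
    return llm_call(prompt)

def generate_conditioned(llm_call: Callable[[str], str],
                         summary: str, user_prompt: str) -> str:
    """Conditioned generation: any generator, including a frozen black-box
    API, can be steered by prepending the text summary to the prompt."""
    return llm_call(f"User preferences: {summary}\n\nRequest: {user_prompt}")
```

Because the interface between the two components is plain text, `generate_conditioned` works with any `llm_call`, which is the generator-transferability property the abstract reports: a summary learned with one generator can be handed to a different, frozen, off-the-shelf model.
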

Topics

Topic Modeling · Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education