OpenAlex · Updated hourly · Last updated: 13.03.2026, 20:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

ZeroSeedRec: Enhancing Zero-Shot Recommendation with Domain-Specific Instruction-Tuned LLMs: A Pipeline for Synthetic Seed and Self-Instruct Data Generation

2025 · 0 citations
Open full text at publisher

Citations: 0 · Authors: 3 · Year: 2025

Abstract

Large language models (LLMs) have recently shown remarkable promise as zero-shot recommenders, generating meaningful suggestions without task-specific training. Yet much of the current research underplays contemporary enhancement techniques that could significantly improve this performance. In response, we present a refined three-step prompting approach that combines strategic data augmentation, efficient fine-tuning, and targeted instruction design. We demonstrate that training on a single, consistent instruction is sufficient, challenging the notion that instruction variety is a prerequisite for effective tuning. While prior studies have highlighted the fragility of LLMs under different prompt formats, our experiments show that a JSON structure enables more reliable data interpretation and improved outcomes. Together, these results form a practical blueprint for zero-shot recommendation with LLMs.
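The abstract's finding that a single fixed instruction plus a JSON-structured prompt works well can be sketched as a prompt-construction helper. This is a minimal illustration, not the paper's implementation: the function name, field names, and example items are assumptions, since the page gives no code.

```python
import json

def build_recommendation_prompt(user_history, candidates, top_k=3):
    """Build a JSON-structured zero-shot recommendation prompt.

    Hypothetical sketch: one consistent instruction (per the paper's
    single-instruction finding) plus JSON-encoded context, which the
    abstract reports LLMs interpret more reliably than free text.
    """
    payload = {
        "instruction": (
            f"From the candidate items, recommend the {top_k} items "
            "the user is most likely to enjoy, given their history."
        ),
        "user_history": user_history,
        "candidates": candidates,
    }
    # Serialize the whole prompt as JSON; the model would be asked to
    # answer in JSON as well, keeping input and output formats aligned.
    return json.dumps(payload, indent=2)

prompt = build_recommendation_prompt(
    ["The Matrix", "Blade Runner"],
    ["Inception", "Notting Hill", "Dune"],
)
print(prompt)
```

Because the instruction string never varies across training examples, only the `user_history` and `candidates` fields change, matching the paper's claim that instruction variety is not required for effective tuning.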

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Topic Modeling · Explainable Artificial Intelligence (XAI)