OpenAlex · Updated hourly · Last updated: 16.04.2026, 15:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Pre-trained Language Models Do Not Help Auto-regressive Text-to-Image Generation

2024 · 1 citation · Open Access
Open full text at publisher

Citations: 1
Authors: 5
Year: 2024

Abstract

Recent advances in image tokenizers, such as VQ-VAE, have enabled text-to-image generation using auto-regressive methods, similar to language modeling. However, these methods have yet to leverage pre-trained language models, despite their adaptability to various downstream tasks. In this work, we explore this gap by adapting a pre-trained language model for auto-regressive text-to-image generation, and find that pre-trained language models offer limited help. We provide a two-fold explanation by analyzing tokens from each modality. First, we demonstrate that image tokens possess significantly different semantics compared to text tokens, rendering pre-trained language models no more effective in modeling them than randomly initialized ones. Second, the text tokens in the image-text datasets are too simple compared to normal language model pre-training data, which causes the catastrophic degradation of language models' capability.
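The setup the abstract describes can be sketched briefly: a VQ-VAE-style tokenizer quantizes image patch features to nearest codebook entries, yielding discrete ids that are concatenated after the text tokens into one sequence for left-to-right (auto-regressive) modeling. This is a minimal illustration with toy, hypothetical values (vocabulary sizes, random features, caption ids are all made up here), not the paper's actual implementation.

```python
import numpy as np

TEXT_VOCAB = 1000        # hypothetical text vocabulary size
CODEBOOK_SIZE = 512      # hypothetical VQ-VAE codebook size

def quantize_patches(patch_features, codebook):
    """Nearest-codebook-entry lookup: the 'VQ' step of a VQ-VAE-style tokenizer."""
    # patch_features: (num_patches, dim); codebook: (codebook_size, dim)
    dists = ((patch_features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)  # one discrete token id per patch

rng = np.random.default_rng(0)
codebook = rng.normal(size=(CODEBOOK_SIZE, 8))
patches = rng.normal(size=(16, 8))    # e.g. a 4x4 grid of patch features

image_tokens = quantize_patches(patches, codebook)
text_tokens = np.array([12, 7, 431])  # hypothetical caption token ids

# Image ids are offset past the text vocabulary so both modalities share one
# id space; a single auto-regressive model then predicts the sequence token
# by token, exactly as in language modeling.
sequence = np.concatenate([text_tokens, image_tokens + TEXT_VOCAB])
print(sequence.shape)  # (19,) — 3 text tokens + 16 image tokens
```

The paper's finding is about this shared sequence: the image-token half behaves so differently from text that language-model pre-training confers no advantage when modeling it.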

Topics

Topic Modeling · Artificial Intelligence in Healthcare and Education