OpenAlex · Updated hourly · Last updated: 16.03.2026, 05:04

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Understanding and Mitigating the Soft Error of Contrastive Language-Image Pre-training Models

2024 · 1 citation · 6 authors

Open full text at the publisher

Abstract

In recent years, MultiModal Large Language Models (MM-LLMs) built on Contrastive Language-Image Pre-training (CLIP) have achieved state-of-the-art results in many fields. CLIP bridges the gap between language models and image models, enables zero-shot image classification, and performs strongly in tasks such as text-to-image generation, image style transfer, and long-video generation. However, there are few studies on the fault tolerance of CLIP under soft errors, which hinders the deployment of multimodal large models in safety-critical settings. Based on an analysis of the fault tolerance of common multimodal large models, we propose a soft error mitigation framework. According to the experiments in this paper, the framework can effectively detect soft errors and mitigate their effects.
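The soft errors studied here are transient hardware faults, typically single bit flips in memory or registers. The abstract does not describe the paper's detection mechanism, but the basic phenomenon can be illustrated with a minimal, self-contained sketch: flipping one bit in the IEEE 754 encoding of a model weight. The `flip_bit` helper below is a hypothetical name for illustration, not part of the paper's framework.

```python
import struct

def flip_bit(value: float, bit: int) -> float:
    """Simulate a soft error: flip one bit (0-31) in the IEEE 754
    single-precision encoding of `value` and decode the result."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

# A flip in a high-order exponent bit changes a weight by many orders
# of magnitude, while a flip in the mantissa's least significant bit
# barely perturbs it -- which is why exponent-bit faults are the ones
# most likely to corrupt a model's output.
w = 0.5
print(flip_bit(w, 30))  # exponent bit: huge deviation (~1.7e38)
print(flip_bit(w, 0))   # mantissa LSB: tiny deviation (~0.50000006)
```

This asymmetry between bit positions is a common motivation for soft-error detectors that flag activations or weights falling outside an expected numeric range.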

Topics

Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education · Topic Modeling