OpenAlex · Updated hourly · Last updated: 20.03.2026, 07:11

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Mitigating Bias Catastrophic Inheritance in Medical Large Vision-Language Models with Logit Fairness Adjustment

2025 · 0 citations
Open full text at publisher

0 citations · 5 authors · year 2025

Abstract

Medical Large Vision-Language Models (MLVLMs) show encouraging results in medical diagnostics but easily inherit biases from pretraining data, leading to bias catastrophic inheritance, where data biases persist and distort predictions. In this work, we present the first systematic study of this issue in MLVLMs, revealing how inherited biases affect classification and free-text reasoning tasks. We propose Logit Fairness Adjustment (LFA), a training-free debiasing method that operates at the logits level to recalibrate biased predictions. LFA quantifies bias by computing logit margins between valid and invalid medical images, applying logit smoothing when the margin is small to reduce overconfidence and bias compensation when the margin is large to reinforce valid features. We introduce the Medical Multimodal Bias Benchmark to assess bias severity across binary classification, multi-class classification, and free-text reasoning. Experiments on LLaVA-Med, SkinGPT-4, and Qwen-VL-7B show that LFA effectively mitigates bias for MLVLMs.
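The margin-based adjustment described in the abstract can be sketched in code. This is a minimal illustration, not the paper's implementation: the abstract does not give the exact formulas, so the margin threshold, smoothing temperature, and compensation weight below are all assumed for demonstration.

```python
import numpy as np


def logit_fairness_adjustment(logits_valid, logits_invalid,
                              margin_threshold=1.0,
                              smooth_temp=2.0,
                              comp_weight=0.5):
    """Hypothetical sketch of Logit Fairness Adjustment (LFA).

    The threshold, temperature, and weight are illustrative
    placeholders, not values from the paper.
    """
    logits_valid = np.asarray(logits_valid, dtype=float)
    logits_invalid = np.asarray(logits_invalid, dtype=float)

    # Bias signal: margin between the top logit for a valid medical
    # image and the top logit for an invalid (e.g. corrupted or
    # non-medical) image.
    margin = logits_valid.max() - logits_invalid.max()

    if margin < margin_threshold:
        # Small margin: the model responds almost as strongly to an
        # invalid input, suggesting overconfidence driven by inherited
        # bias. Smooth the logits (temperature scaling) to reduce it.
        adjusted = logits_valid / smooth_temp
    else:
        # Large margin: the valid image carries genuinely informative
        # features. Reinforce them relative to the invalid baseline.
        adjusted = logits_valid + comp_weight * (logits_valid - logits_invalid)

    return adjusted, margin
```

A prediction whose margin over the invalid baseline is small gets flattened toward uniform confidence, while a well-separated prediction is sharpened, matching the smoothing-versus-compensation split the abstract describes.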

Related works

Authors

Institutions

Topics

Multimodal Machine Learning Applications · Domain Adaptation and Few-Shot Learning · Artificial Intelligence in Healthcare and Education