This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Mitigating Bias Catastrophic Inheritance in Medical Large Vision-Language Models with Logit Fairness Adjustment
Citations: 0
Authors: 5
Year: 2025
Abstract
Medical Large Vision-Language Models (MLVLMs) show encouraging results in medical diagnostics but easily inherit biases from pretraining data, leading to bias catastrophic inheritance, where data biases persist and distort predictions. In this work, we present the first systematic study of this issue in MLVLMs, revealing how inherited biases affect classification and free-text reasoning tasks. We propose Logit Fairness Adjustment (LFA), a training-free debiasing method that operates at the logit level to recalibrate biased predictions. LFA quantifies bias by computing logit margins between valid and invalid medical images, applying logit smoothing when the margin is small to reduce overconfidence, and bias compensation when the margin is large to reinforce valid features. We introduce the Medical Multimodal Bias Benchmark to assess bias severity across binary classification, multi-class classification, and free-text reasoning. Experiments on LLaVA-Med, SkinGPT-4, and Qwen-VL-7B show that LFA effectively mitigates bias for MLVLMs.
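The abstract's description of LFA can be illustrated with a minimal sketch. This is not the authors' implementation: the margin definition (difference of maximum logits), the threshold, the smoothing temperature, and the compensation scale are all illustrative assumptions, since the full paper is only available from the publisher.

```python
import numpy as np

def logit_fairness_adjustment(logits_valid, logits_invalid,
                              margin_threshold=1.0,
                              smooth_temp=2.0,
                              comp_scale=0.5):
    """Hypothetical sketch of Logit Fairness Adjustment (LFA).

    logits_valid:   model logits for the valid medical image
    logits_invalid: model logits for an invalid image (e.g. a corrupted
                    or content-free counterpart used as a bias probe)
    All hyperparameters are illustrative assumptions, not the paper's values.
    """
    # Assumed margin: gap between the top logits of the two inputs.
    margin = np.max(logits_valid) - np.max(logits_invalid)

    if margin < margin_threshold:
        # Small margin: the prediction may rest on inherited (spurious)
        # biases rather than image evidence -> smooth logits to reduce
        # overconfidence (temperature scaling).
        return logits_valid / smooth_temp
    # Large margin: the prediction is driven by valid image features ->
    # bias compensation reinforces the valid-image contribution.
    return logits_valid + comp_scale * (logits_valid - logits_invalid)
```

Because the adjustment operates purely on logits, it is training-free and can wrap any MLVLM's output head before the softmax.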
Similar Works
MizAR 60 for Mizar 50
2023 · 74,187 citations
ImageNet: A large-scale hierarchical image database
2009 · 60,502 citations
Microsoft COCO: Common Objects in Context
2014 · 41,138 citations
Fully convolutional networks for semantic segmentation
2015 · 36,302 citations
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations