OpenAlex · Updated hourly · Last updated: 18.03.2026, 20:20

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Frequency-Based Predictive Entropy for Uncertainty Quantification in Black-Box Multiple-Choice Question Answering

2025 · 0 citations
Open full text at the publisher

Citations: 0
Authors: 4
Year: 2025

Abstract

Large Language Models (LLMs) have demonstrated outstanding performance in various tasks. However, the inability to access internal logits of LLMs under black-box settings poses challenges for uncertainty quantification, thereby limiting their applications in high-risk domains. To address this, we propose a frequency-based uncertainty quantification method under black-box settings, leveraging conformal prediction (CP) to ensure provable coverage guarantees. Our approach involves multiple independent samplings of the model’s output distribution for each input, with the most frequent sample serving as a reference to calculate predictive entropy (PE). Experimental evaluations across six LLMs and four datasets (MedMCQA, MedQA, MMLU, MMLU-Pro) demonstrate that frequency-based PE outperforms logit-based PE in distinguishing between correct and incorrect predictions, as measured by AUROC. Furthermore, the method effectively controls the empirical miscoverage rate under user-specified risk levels. The study confirms that sampling frequency can serve as a viable alternative to logit-based probabilities under black-box settings, providing a reliable means of uncertainty quantification for LLMs where internal parameters are inaccessible.
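The core sampling step described in the abstract (repeatedly query the model, treat answer frequencies as probabilities, and take the most frequent answer as the reference prediction) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function name, the use of the natural logarithm, and the string-valued answer choices are assumptions, since the abstract does not fix these details.

```python
from collections import Counter
import math

def frequency_based_pe(samples):
    """Estimate predictive entropy (PE) from repeated black-box samples.

    `samples` is a list of answer choices (e.g. ["A", "A", "B", ...])
    obtained by querying the model multiple times on the same question.
    Returns the most frequent answer and the entropy of the empirical
    answer distribution.
    """
    counts = Counter(samples)
    n = len(samples)
    # Empirical probability of each answer, estimated from sampling frequency
    # instead of inaccessible logits.
    probs = [c / n for c in counts.values()]
    # Predictive entropy over the empirical distribution: low entropy means
    # the model answers consistently (high confidence), high entropy the opposite.
    pe = -sum(p * math.log(p) for p in probs)
    # The most frequent sample serves as the reference prediction.
    prediction = counts.most_common(1)[0][0]
    return prediction, pe

# Example: 8 of 10 samples agree on "A" -> low-to-moderate entropy.
pred, pe = frequency_based_pe(["A"] * 8 + ["B"] * 2)
```

In the conformal-prediction step, such PE scores would be computed on a calibration set and thresholded at a user-specified risk level to control the miscoverage rate; that calibration machinery is omitted here.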

Topics

Topic Modeling · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education