OpenAlex · Updated hourly · Last updated: 12.03.2026, 10:16

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

MedMobile: a mobile-sized language model with clinical capabilities

2025 · 1 citation · BMJ Digital Health & AI · Open Access

Citations: 1 · Authors: 5 · Year: 2025

Abstract

Objective: Language models (LMs) have demonstrated expert-level reasoning and recall abilities in medicine. However, computational costs and privacy concerns are mounting barriers to wide-scale implementation. To address these significant limitations, we introduce a parsimonious adaptation of phi-3-mini, MedMobile, a 3.8 billion parameter LM capable of running on a mobile device, for medical applications.

Methods and analysis: We perform a careful set of pipeline additions and demonstrate that chain of thought, ensembling and fine-tuning lead to the greatest performance gains, while, unexpectedly, retrieval-augmented generation fails to demonstrate significant improvements. We evaluate the efficiency of our pipeline on MultiMedQA and Medbullets.

Results: We demonstrate that MedMobile scores 75.7% on the MedQA (United States Medical Licensing Examination-like), surpassing the passing mark for licensed physicians (~60%) and rivalling scores of models 100 times its size. Across the entirety of the MultiMedQA, MedMobile achieves state-of-the-art performance for models with fewer than 5B parameters and represents the smallest model to pass the MedQA.

Conclusions: MedMobile holds promise to democratise access to LMs in medicine, offering lower compute needs and fast inference speeds. With the ability to combat the biggest barriers to entry for LMs in medicine, we hope that MedMobile is a critical step forward in developing clinically relevant LMs.
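The ensembling step the abstract credits with performance gains is commonly implemented as self-consistency: sample several chain-of-thought generations for the same multiple-choice question and take a majority vote over the final answer letters. The paper does not specify its exact procedure, so the sketch below is an assumption of that standard approach; `sample_fn` is a hypothetical stand-in for one chain-of-thought generation from the model, already parsed down to its answer letter.

```python
from collections import Counter

def majority_vote(sampled_answers):
    """Return the most frequent answer letter across sampled runs."""
    counts = Counter(sampled_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

def ensemble_answer(sample_fn, n_samples=5):
    # Hypothetical: each call to sample_fn() would be one chain-of-thought
    # generation from MedMobile, reduced to its final answer letter.
    return majority_vote([sample_fn() for _ in range(n_samples)])

# Example: five sampled answers for one MedQA-style question.
samples = ["B", "B", "C", "B", "A"]
print(majority_vote(samples))  # "B"
```

Majority voting only changes the outcome when sampling is stochastic (temperature > 0); with greedy decoding all samples agree and the ensemble reduces to a single run.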
