OpenAlex · Updated hourly · Last updated: 27 Apr 2026, 11:20

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Adaptive Loss-based Curricula for Neural Team Recommendation

2025 · 1 citation · Open Access
Open full text at the publisher

Citations: 1

Authors: 3

Year: 2025

Abstract

Neural team recommendation models have brought state-of-the-art efficacy while enhancing efficiency at recommending collaborative teams of experts who, more likely than not, can solve complex tasks. Yet, they suffer from popularity bias and overfit to a few dominant popular experts, resulting in discrimination and reduced visibility for already disadvantaged non-popular experts. Such models are trained on randomly shuffled datasets with a disproportionate distribution of a few popular experts over many teams and a sparse long-tailed distribution of non-popular ones, overlooking the difficulty of recommending hard non-popular vs. easy popular experts. To bridge the gap, we propose three curriculum-based learning strategies to empower neural team recommenders to sift through easy popular and hard non-popular experts, mitigate popularity bias, and improve upon existing neural models. We propose (1) a parametric curriculum that assigns a learnable parameter to each expert, enabling the model to learn an expert's level of difficulty (or conversely, level of popularity) during training, (2) a parameter-free (non-parametric) curriculum that presumes the worst-case difficulty for each expert based on the model's loss, and (3) a static curriculum to provide a minimum baseline for comparison among curriculum-based learning strategies and the lack thereof. Our experiments on two benchmark datasets with distinct distributions of teams over skills showed that our parameter-free curriculum improved the performance of non-variational models across different domains, outperforming its parametric counterpart, while the static curriculum was the poorest. Moreover, among neural models, variational models obtain little to no gain from our proposed curricula, urging further research on more effective curricula for them. The code to reproduce our experiments is publicly available at https://github.com/fani-lab/OpeNTF/tree/cl-wsdm25
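The non-parametric curriculum described above weights experts by their current training loss, pacing the model from easy (popular, low-loss) toward hard (non-popular, high-loss) experts. The following is a minimal sketch of that idea, not the authors' implementation; the helper `curriculum_weights` and its blending schedule are hypothetical illustrations of loss-based curriculum pacing.

```python
import numpy as np

def curriculum_weights(losses, epoch, total_epochs):
    """Loss-based (non-parametric) curriculum sketch.

    Early in training, emphasize easy (low-loss) experts; as training
    progresses, shift emphasis toward hard (high-loss) experts.
    Returns normalized per-expert weights summing to 1.
    """
    losses = np.asarray(losses, dtype=float)
    # Rank-normalize losses to [0, 1]: 0 = easiest, 1 = hardest.
    ranks = losses.argsort().argsort()
    hardness = ranks / max(len(losses) - 1, 1)
    # Pacing parameter t goes from 0 (start) to 1 (end of training).
    t = epoch / max(total_epochs - 1, 1)
    # Blend easiness and hardness according to the pacing parameter.
    weights = (1 - t) * (1 - hardness) + t * hardness
    return weights / weights.sum()

# Per-expert losses from a hypothetical forward pass.
losses = [0.1, 0.5, 0.9]
early = curriculum_weights(losses, epoch=0, total_epochs=10)
late = curriculum_weights(losses, epoch=9, total_epochs=10)
```

Early in training the lowest-loss expert receives the largest weight; by the final epoch the highest-loss expert does, so gradient updates gradually concentrate on the hard non-popular experts.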

Related Works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Topic Modeling