OpenAlex · Updated hourly · Last updated: 16 March 2026, 07:49

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

When Personalization Harms: Reconsidering the Use of Group Attributes in Prediction

2022 · 3 citations · arXiv (Cornell University) · Open Access
Open full text at the publisher

Citations: 3
Authors: 3
Year: 2022

Abstract

Machine learning models are often personalized with categorical attributes that are protected, sensitive, self-reported, or costly to acquire. In this work, we show that models personalized with group attributes can reduce performance at the group level. We propose formal conditions to ensure the "fair use" of group attributes in prediction tasks -- i.e., collective preference guarantees that each group that provides personal data receives a tailored gain in performance in return -- which can be checked by training one additional model. We present sufficient conditions to ensure fair use in empirical risk minimization and characterize failure modes that lead to fair use violations due to standard practices in model development and deployment. We present a comprehensive empirical study of fair use in clinical prediction tasks. Our results demonstrate the prevalence of fair use violations in practice and illustrate simple interventions to mitigate their harm.
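
The fair-use check described in the abstract can be carried out by training one additional model: fit a generic model that omits the group attribute and a personalized model that includes it, then compare per-group test performance. The sketch below illustrates this on synthetic data; the dataset, features, and two-group setup are illustrative assumptions, not the paper's actual experiments.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic data (hypothetical setup): X holds generic features,
    # g is a categorical group attribute, and the label depends on g.
    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 5))
    g = rng.integers(0, 2, size=n)
    logits = X[:, 0] + 0.5 * g * X[:, 1]
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

    X_tr, X_te, g_tr, g_te, y_tr, y_te = train_test_split(
        X, g, y, test_size=0.5, random_state=0)

    # Generic model: trained without the group attribute.
    generic = LogisticRegression().fit(X_tr, y_tr)

    # Personalized model: the group attribute is an extra input feature.
    personalized = LogisticRegression().fit(
        np.column_stack([X_tr, g_tr]), y_tr)

    # Fair-use check: each group that reports its attribute should gain
    # performance; a negative gain flags a potential fair-use violation.
    for grp in np.unique(g_te):
        m = g_te == grp
        auc_gen = roc_auc_score(
            y_te[m], generic.predict_proba(X_te[m])[:, 1])
        auc_per = roc_auc_score(
            y_te[m],
            personalized.predict_proba(np.column_stack([X_te[m], g_te[m]]))[:, 1])
        print(f"group {grp}: AUC gain from personalization = {auc_per - auc_gen:+.3f}")

In this toy construction the group attribute carries genuine signal, so both groups should typically gain; the paper's point is that on real tasks, such as the clinical prediction tasks it studies, this gain is not guaranteed for every group.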


Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Advanced Causal Inference Techniques