OpenAlex · Updated hourly · Last updated: 10 Apr 2026, 02:08

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Participatory-informed preference optimization (PiPrO): A reinforcement learning simulation study

2026 · 0 citations · UNC Libraries · Open Access
Open full text at the publisher

0

Citations

4

Authors

2026

Year

Abstract

Artificial intelligence (AI) has transformative potential in public health, but its impact is limited by models that implicitly prioritize a single stakeholder perspective and do not make explicit, tunable trade-offs between community and clinician endorsement. To address this gap, we introduce Participatory-informed Preference Optimization (PiPrO), a large language model embedding-based calibration framework that generates a single clinical outcome prediction while explicitly accounting for differences between community and physician interpretations of the same scenario. PiPrO takes as input two embeddings derived from a large language model, one representing a community-facing context and one a physician-facing context. It then applies a shared lightweight feedforward predictor to produce per-stakeholder scores, which are mixed using a single global mixing weight (alpha). Alpha controls how strongly the final prediction reflects community versus physician responses and is learned through a policy-gradient update driven by an abundant but noisy community text signal and a sparse but biased physician text signal. PiPrO reliably learned stable alpha values and a consistent reward signal. Alpha shifts systematically toward physician weighting as community feedback becomes noisier, and toward community weighting as physician feedback becomes more biased. Our results suggest PiPrO's potential to produce more transparent and context-sensitive AI-driven healthcare recommendations. Future research should validate this approach using real-world community inputs to ensure generalizability and practical impact.

Author summary

Artificial intelligence tools are increasingly adopted in medicine and public health, but they are often trained to reflect only one viewpoint. In practice, community members and physicians can interpret the same clinical situation differently, and those differences can matter for recommendations that affect care.
In this study, we developed a method called Participatory-informed Preference Optimization to help a prediction model account for both perspectives while still producing one final prediction. We tested the method in a simulation study using community-facing and physician-facing versions of the same scenario, and we varied how reliable each source of feedback was. We found that the model learned a stable balance between the two perspectives. It shifted toward physician input when community feedback became less reliable, and toward community input when physician feedback became more biased. These results suggest that health-related artificial intelligence can be designed to make trade-offs between stakeholder perspectives more transparent.
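The mixing-and-update idea described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the embeddings, predictor weights, feedback signals, learning rate, and noise levels are all invented stand-ins, and a simple REINFORCE-style Gaussian policy over alpha stands in for the paper's policy-gradient update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (not the paper's data): two fixed "LLM
# embeddings" of the same scenario, community- and physician-facing,
# plus an assumed true outcome in (0, 1).
d = 16
e_community = rng.normal(size=d)
e_physician = rng.normal(size=d)
true_outcome = 0.7

# Shared lightweight feedforward predictor (one hidden layer; its
# weights are frozen here for simplicity).
W1, b1 = 0.1 * rng.normal(size=(8, d)), np.zeros(8)
w2, b2 = 0.1 * rng.normal(size=8), 0.0

def predict(e):
    h = np.tanh(W1 @ e + b1)
    return 1 / (1 + np.exp(-(w2 @ h + b2)))  # per-stakeholder score in (0, 1)

s_c, s_p = predict(e_community), predict(e_physician)

# Global mixing weight alpha = sigmoid(theta), learned with a
# REINFORCE-style update. Community feedback is abundant but noisy;
# physician feedback is biased (a constant offset here).
theta, lr, sigma = 0.0, 0.05, 0.1
for step in range(2000):
    alpha = 1 / (1 + np.exp(-theta))
    a = np.clip(alpha + sigma * rng.normal(), 0.0, 1.0)  # exploration
    y = a * s_c + (1 - a) * s_p                          # mixed prediction
    fb_c = true_outcome + 0.3 * rng.normal()             # noisy community signal
    fb_p = true_outcome + 0.1                            # biased physician signal
    reward = -((y - fb_c) ** 2 + (y - fb_p) ** 2)        # agreement with both
    # Gaussian-policy REINFORCE gradient, chained through sigmoid(theta)
    theta += lr * reward * (a - alpha) / sigma**2 * alpha * (1 - alpha)

print(1 / (1 + np.exp(-theta)))  # learned global mixing weight alpha
```

Under this toy setup, making `fb_c` noisier pushes alpha toward the physician score, and making `fb_p` more biased pushes it toward the community score, mirroring the qualitative behavior reported in the abstract.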

Related works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Digital Mental Health Interventions