This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Participatory-informed preference optimization (PiPrO): A reinforcement learning simulation study
Citations: 0 · Authors: 4 · Year: 2026
Abstract
Artificial intelligence (AI) has transformative potential in public health, but its impact is limited by models that implicitly prioritize a single stakeholder perspective and do not make explicit, tunable trade-offs between community and clinician endorsement. To address this gap, we introduce Participatory-informed Preference Optimization (PiPrO), a large language model embedding-based calibration framework that generates a single clinical outcome prediction while explicitly accounting for differences between community and physician interpretations of the same scenario. PiPrO takes as input two embeddings derived from a large language model, representing a community-facing context and a physician-facing context. It then applies a shared lightweight feedforward predictor to produce per-stakeholder scores, which are mixed using a single global mixing weight (alpha). Alpha controls how strongly the final prediction reflects the community versus physician responses and is learned via a policy-gradient update driven by abundant but noisy community text and sparse, biased physician text. PiPrO reliably learned stable alpha values and a consistent reward signal. Alpha shifts systematically toward physician weighting as community feedback becomes noisier, and toward community weighting as physician feedback becomes more biased. Our results suggest PiPrO's potential to produce more transparent and context-sensitive AI-driven healthcare recommendations. Future research should validate this approach using real-world community inputs to ensure generalizability and practical impact.

Author summary

Artificial intelligence tools are increasingly adopted in medicine and public health, but they are often trained to reflect only one viewpoint. In practice, community members and physicians can interpret the same clinical situation differently, and those differences can matter for recommendations that affect care.
In this study, we developed a method called Participatory-informed Preference Optimization to help a prediction model account for both perspectives while still producing one final prediction. We tested the method in a simulation study using community-facing and physician-facing versions of the same scenario, and we varied how reliable each source of feedback was. We found that the model learned a stable balance between the two perspectives. It shifted toward physician input when community feedback became less reliable, and toward community input when physician feedback became more biased. These results suggest that health-related artificial intelligence can be designed to make trade-offs between stakeholder perspectives more transparent.
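The abstract's description of PiPrO's mixing scheme (two LLM embeddings scored by a shared feedforward predictor, blended with a global weight alpha learned by policy gradient) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the random vectors standing in for LLM embeddings, the toy feedback signals, the squared-error reward, and all hyperparameters are invented assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def shared_predictor(emb, W1, b1, W2, b2):
    """Lightweight feedforward scorer shared across both stakeholder views."""
    h = np.tanh(emb @ W1 + b1)
    return sigmoid(h @ W2 + b2)

# Hypothetical stand-ins for the LLM embeddings of the community-facing
# and physician-facing versions of the same clinical scenario.
dim, hidden = 8, 16
W1 = rng.normal(0.0, 0.5, (dim, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, hidden);        b2 = 0.0
community_emb = rng.normal(size=dim)
physician_emb = rng.normal(size=dim)

theta, sigma, lr = 0.0, 0.1, 0.2   # logit of alpha, exploration noise, step size
baseline = 0.0                     # running reward baseline (variance reduction)

for _ in range(500):
    s_c = shared_predictor(community_emb, W1, b1, W2, b2)
    s_p = shared_predictor(physician_emb, W1, b1, W2, b2)

    mu = sigmoid(theta)                               # current mixing weight alpha
    a = np.clip(mu + sigma * rng.normal(), 0.0, 1.0)  # sampled mixing action
    pred = a * s_c + (1.0 - a) * s_p                  # single blended prediction

    # Toy feedback: abundant-but-noisy community signal vs. a sparse, biased
    # physician signal (both invented here purely for illustration).
    community_fb = 0.7 + rng.normal(0.0, 0.3)
    physician_fb = 0.7 - 0.2
    reward = -(pred - community_fb) ** 2 - (pred - physician_fb) ** 2

    # REINFORCE-style score-function update on alpha's logit.
    baseline = 0.9 * baseline + 0.1 * reward
    grad_theta = (reward - baseline) * (a - mu) / sigma**2 * mu * (1.0 - mu)
    theta += lr * grad_theta

alpha = sigmoid(theta)
print(f"learned mixing weight alpha = {alpha:.3f}")
```

Because alpha is parameterized through a sigmoid, it stays in (0, 1) throughout training; making the community signal noisier or the physician signal more biased in this sketch is the analogue of the sensitivity experiments described in the abstract.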
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,418 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,288 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,726 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,516 citations