This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Mitigating Age-Related Bias in Large Language Models: Strategies for Responsible Artificial Intelligence Development
Citations: 5 · Authors: 4 · Year: 2025
Abstract
The increasing popularity of large language models (LLMs) on digital platforms elevates the urgency of addressing inherent biases, particularly age-related biases, which can significantly skew a model's fairness and performance. This paper introduces a novel two-stage bias mitigation approach utilizing LLMs' empathy capabilities, reinforcement learning, and human-in-the-loop mechanisms to identify and correct age-related biases without altering model parameters. Our bias mitigation strategy offers two modes. Self-bias mitigation in the loop allows LLMs to self-assess and adjust their outputs autonomously, promoting inherent bias awareness and correction. Alternatively, cooperative bias mitigation in the loop leverages collaborative filtering among multiple LLMs to debate and mitigate biases through consensus. Furthermore, we introduce the empathetic perspective exchange strategy, which can further refine answers by changing the perspective in the context information given to the LLM. In this way, responses better suited to different ages are generated. Our comprehensive evaluation across several data sets demonstrates that our trained model, FairLLM, significantly reduces age bias, outperforming existing techniques on fairness metrics. These findings underscore the effectiveness of our proposed framework in fostering the development of more equitable artificial intelligence systems, potentially benefiting a broader demographic spectrum by reducing digital ageism.

History: This paper has been accepted by Kaushik Dutta for the Special Issue on Responsible AI and Data Science for Social Good.

Funding: This work was supported by the National Natural Science Foundation of China [Grants 71971046, 72172029, 72403033, 72272028, and 72442025].
Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information ( https://pubsonline.informs.org/doi/suppl/10.1287/ijoc.2024.0645 ) as well as from the IJOC GitHub software repository ( https://github.com/INFORMSJoC/2024.0645 ). The complete IJOC Software and Data Repository is available at https://informsjoc.github.io/ .
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations