This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Bias Detection and Mitigation in Large Language Models for Code Generation
0 citations · 3 authors · 2025
Abstract
Large Language Models (LLMs) are essential tools in modern software development, significantly accelerating coding, streamlining debugging, and enabling broader access to advanced algorithmic features. Their impact extends into emerging domains like the Internet of Things (IoT), where AI-generated code increasingly drives edge device behavior and smart system integration. However, LLMs often inherit and propagate biases present in their training data. These biases can surface in AI-generated code, leading to ethically problematic, algorithmically skewed, or even unsafe outcomes, a particular concern in safety-critical and socially sensitive IoT environments. This research investigates bias in LLM-generated code and proposes a multi-faceted approach combining Contextual Code Analysis, Counterfactual Prompt Engineering, and Reinforcement Learning with Human Feedback (RLHF) to detect and mitigate such biases. We show how these techniques reveal hidden biases in naming conventions, decision logic, and coding patterns, and demonstrate how RLHF can reduce bias at scale while preserving functionality, achieving a 49% reduction in bias across evaluated benchmarks. Our findings emphasize the need for rigorous bias evaluation frameworks, careful data curation, and transparent development workflows. The tradeoff between bias reduction and code precision is explored, with implications for the ethical use of AI in IoT systems and other high-stakes software contexts. Further research is encouraged in hybrid debiasing techniques, intersectional bias identification, energy-efficient modeling, and the development of standardized ethical coding practices.
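The counterfactual prompt engineering mentioned in the abstract can be illustrated with a minimal sketch: generate variants of one code-generation prompt that differ only in a sensitive attribute, query the model with each, and flag attribute pairs whose outputs diverge. The helper names, the `{GENDER}` slot, and the `model` callable below are illustrative assumptions, not the paper's actual implementation.

```python
from itertools import combinations

def counterfactual_prompts(template: str, slot: str, values: list[str]) -> list[str]:
    """Fill one sensitive-attribute slot with each value, keeping the rest fixed."""
    return [template.replace(slot, v) for v in values]

def flag_divergence(model, template: str, slot: str, values: list[str]):
    """Query the model once per counterfactual prompt; return attribute pairs
    whose generated outputs differ (candidate bias signals for manual review)."""
    outputs = {v: model(template.replace(slot, v)) for v in values}
    return [(a, b) for a, b in combinations(values, 2) if outputs[a] != outputs[b]]

# Toy stand-in for an LLM call, deliberately biased to show a detection hit.
stub_model = lambda prompt: "deny" if "female" in prompt else "approve"

template = "Write a loan approval function for a {GENDER} applicant."
print(flag_divergence(stub_model, template, "{GENDER}", ["male", "female"]))
```

In practice the equality check would be replaced by a semantic comparison of the generated code (e.g. diffing decision logic or identifier choices), since harmless surface variation between generations is expected.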
Related Works
The global landscape of AI ethics guidelines
2019 · 4,514 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,859 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,386 citations
Fairness through awareness
2012 · 3,269 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,183 citations