This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Mapping the Trust Terrain: LLMs in Software Engineering - Insights and Perspectives
Citations: 2
Authors: 5
Year: 2025
Abstract
The application of Large Language Models (LLMs) in Software Engineering (SE) continues to grow rapidly across both industry and academia. As these models become integral to critical SE processes, ensuring their reliability and trustworthiness becomes essential. Achieving this requires a balanced approach to trust: excessive trust can introduce security vulnerabilities, while insufficient trust may hinder innovation. However, the conceptual landscape of trust in LLMs for SE (LLM4SE) remains unclear. Key concepts such as trust, distrust, and trustworthiness lack precise definitions, the factors that shape trust formation remain underexplored, and metrics for trust in LLMs remain undeveloped. To clarify the current research landscape and identify future directions, we conducted a comprehensive review of 88 articles: a systematic review of 18 studies on LLMs in SE, supplemented by an analysis of 70 articles from the broader trust literature. Furthermore, we surveyed 25 domain experts to gather practitioners' perspectives on trust and identify gaps between their experiences and the existing literature. Our findings provide a structured overview of trust-related concepts in LLM4SE, outlining key areas for future research. This study contributes to building more trustworthy LLM-assisted software engineering processes, ultimately supporting safer and more effective adoption of LLMs in SE.