This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Using AI Assistants in Software Development: A Qualitative Study on Security Practices and Concerns
13
Citations
13
Authors
2024
Year
Abstract
Following the recent release of AI assistants, such as OpenAI's ChatGPT and GitHub Copilot, the software industry quickly utilized these tools for software development tasks, e.g., generating code or consulting AI for advice. While recent research has demonstrated that AI-generated code can contain security issues, how software professionals balance AI assistant usage and security remains unclear. This paper investigates how software professionals use AI assistants in secure software development, what security implications and considerations arise, and what impact they foresee on secure software development. We conducted 27 semi-structured interviews with software professionals, including software engineers, team leads, and security testers. We also reviewed 190 relevant Reddit posts and comments to gain insights into the current discourse surrounding AI assistants for software development. Our analysis of the interviews and Reddit posts finds that despite many security and quality concerns, participants widely use AI assistants for security-critical tasks, e.g., code generation, threat modeling, and vulnerability detection. Their overall mistrust leads to checking AI suggestions in similar ways to human code, although they expect improvements and, therefore, a heavier use for security tasks in the future. We conclude with recommendations for software professionals to critically check AI suggestions, AI creators to improve suggestion security and capabilities for ethical security tasks, and academic researchers to consider general-purpose AI in software development.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations