This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Insecure by design? A human-centric security perspective on AI-assisted software development
Citations: 0
Authors: 4
Year: 2026
Abstract
Generative artificial intelligence (AI) tools are increasingly used in software development, improving the efficiency of software developers. However, this adoption introduces notable security challenges. AI-generated code is not secure by default, as it is often based on large-scale training data that includes open-source code of varying quality and trustworthiness. Developers using these tools may be unaware of the associated risks or may place excessive trust in the security of the output. This briefing paper outlines the key security risks associated with generative AI and offers human-centered strategies for mitigation. Since these risks arise not only from how generative AI models are built but also from how humans interact with them, we adopt a human-centric perspective. To this end, we provide recommendations for individuals, organizations, and educators to help harness the potential of generative AI in software development while effectively managing the associated security risks.
Related works
The global landscape of AI ethics guidelines
2019 · 4,504 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,856 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,378 citations
Fairness through awareness
2012 · 3,267 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations