OpenAlex · Updated hourly · Last updated: 13.03.2026, 08:17

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

PwnPilot: Reflections on Trusting Trust in the Age of Large Language Models and AI Code Assistants

2023 · 7 citations
Open full text at the publisher

Citations: 7
Authors: 1
Year: 2023

Abstract

At the dawn of a new era in software engineering, one defined by large language models (LLMs) and AI code assistants like GitHub Copilot, new meaning can be found in a historic Turing Award Lecture that concluded one cannot trust source code they "did not totally create" themselves. In this paper, a targeted, systematic survey of the latest research results from 2019 to early 2023 highlights the possible risks of using AI code assistants that produce substantial source code contributions, and the potential for an AI Copilot to unknowingly become PwnPilot, a malevolent digital actor that introduces vulnerabilities and compromises trust. During a period of explosive growth for generative AI, renewed reflections on trusting trust point to conclusions similar to the original assertions of Ken Thompson in 1984. But despite a recent theoretical roadblock, a proof that undetectable backdoors can be planted in machine learning models, the potential for enhanced productivity from AI code assistants may still be realizable with an acceptable level of risk, perhaps even in safety-critical and security-relevant contexts. In support of that goal, a number of near-term risk management options and longer-term research paths are identified as enablers for practitioners and inputs to potential research roadmaps toward more secure and trusted AI code generation.
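The trusting-trust argument the abstract refers to can be illustrated with a minimal sketch. This is not code from the paper: it is a toy Python model of Thompson's 1984 attack, in which a compromised "compiler" injects a backdoor into a target program and re-inserts its own trojan whenever it compiles itself, so that even a clean compiler source yields a compromised result. All names (`trojaned_compile`, `BACKDOOR`) are hypothetical.

```python
# Toy model of Thompson's "trusting trust" attack (1984).
# A compromised "compiler" that (1) injects a backdoor when it sees
# the login program, and (2) re-inserts its own trojan when it
# compiles itself, so the compromise survives recompilation.

BACKDOOR = "if user == 'attacker': grant_access()"  # hypothetical payload

def trojaned_compile(source: str) -> str:
    """Toy 'compiler': returns the code it would emit for `source`."""
    if "def login(" in source:
        # Case 1: compiling the login program -> silently add a backdoor.
        return source + "\n" + BACKDOOR
    if "def trojaned_compile(" in source:
        # Case 2: compiling the compiler itself -> re-insert the trojan,
        # so even pristine compiler source produces a compromised binary.
        return source + "\n# [trojan re-inserted]"
    return source  # everything else compiles cleanly

clean_login = "def login(user):\n    check_password(user)"
print(BACKDOOR in trojaned_compile(clean_login))  # True: backdoor injected
```

The point of the sketch is that inspecting `clean_login` reveals nothing: the compromise lives in the tool chain, which is the analogy the paper draws to AI code assistants whose training or weights may carry undetectable backdoors.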

Topics

Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI