This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Whose Code Is It? How AI Autonomy Reshapes Ownership, Responsibility, and Disclosure in AI-Assisted Programming
1 Citation · 3 Authors · Year: 2026
Abstract
AI coding assistants are generating substantial portions of code, fundamentally challenging traditional notions of authorship and ownership in software development. We conducted a within-subjects experiment examining three AI coding assistant autonomy conditions: High (the AI generates complete code), Medium (the AI provides substantial suggestions), and Low (the AI offers minimal assistance). We found that AI autonomy systematically reshaped developers' psychological relationships with code through distinct patterns across ownership dimensions. Possession decreased continuously with each increase in AI contribution. Identity remained similar under Low and Medium autonomy but decreased substantially under High autonomy. Responsibility decreased from Low to Medium and High autonomy, though developers maintained some sense of responsibility across all conditions. Attribution patterns revealed symmetric bidirectional shifts: ownership and responsibility attribution moved from predominantly human-centered under Low autonomy, through balanced uncertainty at Medium autonomy, to predominantly AI-centered under High autonomy. Despite these internal psychological shifts, professional disclosure practices showed striking stability. While developers became less comfortable claiming ownership to technical reviewers as AI contribution increased, their willingness to describe creation processes transparently and to accept accountability for production systems remained consistent across all conditions. These findings illuminate how AI autonomy fundamentally restructures the psychological landscape of human-AI co-creation, even as developers preserve core professional obligations for transparency and accountability.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,784 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,893 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,541 citations
Fairness through awareness
2012 · 3,313 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,257 citations