This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Evolving Enactions of Expertise: Software Engineers’ Evaluation and Demonstration of Coding Expertise with AI Coding Assistants
Citations: 1
Authors: 4
Year: 2026
Abstract
AI coding assistants are changing how software engineers engage in coding work. This shift raises a key question: does this change in coding work also alter how software engineers evaluate and demonstrate coding expertise? We explore this question through simulated live coding interviews, each involving two software engineers, one as evaluator and the other as candidate, with AI tools allowed. Participants continued to rely on familiar criteria but adjusted the evidence they sought, as AI assistants both introduced new forms of demonstrating expertise and obscured some established workflows. The importance of these evolving enactions varied with evaluators' emphasis on implementation versus planning. Lacking a clear link to expertise, heightened productivity expectations created additional tensions around these evolving enactions. We conclude by discussing how extended enactions can be supported through AI-focused tools and training, and how tensions between diminished enactions and productivity call for collaborative attention.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,756 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,890 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,532 citations
Fairness through awareness
2012 · 3,304 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,229 citations