This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Integrating AI Literacy, Secure Coding, and Critical Evaluation
Citations: 0
Authors: 7
Year: 2026
Abstract
Let's start with something honest: most software teams didn't sit down and decide to adopt AI coding tools. The tools just showed up—in IDEs, in search results, in what colleagues started pasting into Slack—and adoption happened organically, often before anyone thought much about the implications. That's not a criticism. It's just how useful tools tend to spread. But now that GitHub Copilot, ChatGPT, and half a dozen competitors are embedded in real development workflows, the question of what to actually do about them—how to teach engineers to use them well, how to keep security from quietly eroding, how to prevent the gradual atrophy of skills that no one noticed they were losing—has become genuinely urgent. This chapter tries to address that question with something more than general advice. What follows is a framework built around three interconnected competencies: understanding how AI code generation tools actually work (what we're calling AI literacy), maintaining rigorous security habits when using them, and developing the judgment to critically evaluate what they produce. None of these stands alone. Together, they describe what it means to use AI tools as a skilled professional rather than a passive recipient of output. The framework is grounded in recent empirical research—and there's quite a bit of it now, which is useful. It's designed for educators building curricula and for organizations trying to upskill engineering teams. The core argument is simple even if the details aren't: AI assistance is only as good as the human judgment wrapped around it.