This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The missing discipline in AI: a call for behavioural science
Citations: 0
Authors: 9
Year: 2026
Abstract
Artificial intelligence systems increasingly shape how people think, feel, and act, particularly in high-impact domains such as healthcare, education, and social support. While significant attention has been paid to technical performance, bias, privacy, and security, the behavioural impacts of AI systems remain under-evaluated and under-governed. This Open Letter outlines why behavioural science should be treated as a core component of responsible AI practice. Even when technically accurate, AI systems can influence behaviour in ways that are harmful, misleading, or misaligned with people’s interests. These effects are often predictable consequences of repeated human–AI interaction, yet they are rarely assessed systematically. We describe where behavioural risks arise in practice and outline what good practice looks like, drawing on established behavioural science methods. We then propose practical steps for funders, researchers, and developers to embed behavioural expertise and behavioural evaluation across the AI lifecycle. Addressing behavioural impacts is essential to ensuring that AI systems are not only effective, but safe, trustworthy, and appropriate for real-world use.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations