OpenAlex · Updated hourly · Last updated: 13.03.2026, 15:37

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The competence paradox: when psychologists overestimate their understanding of Artificial Intelligence

2026 · 1 citation · AI & Society · Open Access
Open full text at the publisher

Citations: 1

Authors: 1

Year: 2026

Abstract

Artificial intelligence is rapidly transforming psychological practice. Psychologists now use AI to transcribe sessions, analyse client data, and generate treatment plans, yet few fully understand how these systems work. This commentary argues that the greatest risk AI poses to psychology is not its technical superiority to human capability, but a competence paradox: the tendency of psychologists to mistake the effective use of AI tools for a genuine understanding of how they work. This illusion of competence distorts judgment, weakens accountability, and undermines the foundations of professional expertise. We discuss how this gap forms, why it matters for both clients and clinicians, and what psychologists must know to engage with AI responsibly. Drawing on recent work in adjacent fields, we show how cognitive bias, identity protection, and anthropomorphism create a false sense of mastery. We then trace consequences across five domains. Cognitive and diagnostic skills decline through automation bias, cognitive offloading, and reduced reflective reasoning. Professional identity is strained as roles shift from clinician to editor of machine output. Ethical accountability blurs through hidden AI use, weak informed consent, and diffused liability. Collegial consultation diminishes as practitioners consult tools rather than peers, and wellbeing suffers through technostress, rising demands, financial strain, and growing reliance on AI. Finally, we argue that limited explainability within AI systems creates an explanatory dependence that constrains transparent justification of clinical decisions, shifts the burden of reasoning from clinician to tool, and makes embedded value choices harder to detect. We conclude with a call to action and a research agenda.

Similar works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Digital Mental Health Interventions · Ethics and Social Impacts of AI