OpenAlex · Updated hourly · Last updated: 11.04.2026, 23:58

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

eXplainable AI for routine outcome monitoring and clinical feedback

2024 · 3 citations · Counselling and Psychotherapy Research · Open Access

Citations: 3 · Authors: 3 · Year: 2024

Abstract

Artificial intelligence (AI), specifically machine learning (ML), is adept at identifying patterns and insights in the vast amounts of data generated by routine outcome monitoring (ROM) and clinical feedback during treatment. When applied to patient feedback data, AI/ML models can assist clinicians in predicting treatment outcomes. Common reasons for clinician resistance to integrating data‐driven decision‐support tools into clinical practice include concerns about the reliability, relevance and usefulness of the technology, coupled with perceived conflicts between data‐driven recommendations and clinical judgement. While AI/ML‐based tools might be precise in guiding treatment decisions, it might not be possible to realise their potential at present, due to implementation, acceptability and ethical concerns. In this article, we outline the concept of eXplainable AI (XAI), a potential solution to these concerns. XAI refers to a form of AI designed to articulate its purpose, rationale and decision‐making process in a manner that is comprehensible to humans. The key to this approach is that end‐users see a clear and understandable pathway from input data to recommendations. We use real Norse Feedback data to present an AI/ML example demonstrating one use case for XAI. Furthermore, we discuss key learning points that we will employ in future XAI implementations.
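The "clear and understandable pathway from input data to recommendations" can be illustrated with a minimal sketch: a transparent linear score in which each feature's contribution to the recommendation is visible to the end user. This is not the authors' model; the feature names, weights and threshold are hypothetical and are not derived from Norse Feedback data.

```python
def explain_risk_score(features, weights, threshold=0.5):
    """Return a recommendation flag plus the per-feature contributions behind it.

    Each contribution is simply weight * value, so the pathway from
    input data to the recommendation is fully inspectable.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "flag_for_review": score >= threshold,
        "score": round(score, 3),
        # The explanation that would be surfaced to the clinician:
        "contributions": contributions,
    }

# Hypothetical patient-reported measures, scaled to the range 0..1
patient = {"symptom_distress": 0.8, "alliance": 0.2, "motivation": 0.4}
weights = {"symptom_distress": 0.6, "alliance": -0.3, "motivation": -0.2}

result = explain_risk_score(patient, weights)
```

Here a clinician can see not just whether the case is flagged, but which inputs drove the score and by how much, which is the kind of transparency the article argues can reduce resistance to data-driven decision support.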


Topics

Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education