This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
A Comparison of HIPAA-Compliant Transcription Services for Virtual Psychiatric Interviews
4 citations · 12 authors · 2023
Abstract
Background: Automatic speech recognition (ASR) technology is increasingly being used for transcription in clinical contexts. Although there are numerous transcription services using ASR, few studies have compared the word error rate (WER) of different transcription services across diagnostic groups in a mental health setting. There has also been little research into the types of words that ASR transcriptions mistakenly insert or omit.

Objective: This study compared the WER of three ASR transcription services (Amazon Transcribe, Zoom/Otter.ai, and Whisper/OpenAI) in interviews across two clinical categories: controls and participants experiencing a variety of mental health conditions. These ASR transcription services were also compared with a commercial human transcription service, REV. Words that the transcripts mistakenly inserted or omitted were systematically analyzed by their Linguistic Inquiry and Word Count (LIWC) categories.

Methods: Participants completed a one-time research psychiatric interview, which was recorded on a secure server. Transcriptions created by the research team served as the gold standard from which WER was calculated. Using the Mini-International Neuropsychiatric Interview, interviewees were categorized into either the control group (N = 18) or the mental health condition group (N = 47), for a total sample of 65 participants. Brunner-Munzel tests were used to compare independent sets, such as the diagnostic groupings, and Wilcoxon signed-rank tests were used for correlated samples when comparing the total sample across different transcription services.

Results: There were significant differences in WER between the ASR transcription services (P < .001). Amazon Transcribe's output exhibited a significantly lower WER than the Zoom/Otter.ai and Whisper/OpenAI outputs. Within each service, ASR performance did not differ significantly between the two clinical categories (P > .05). A comparison between the human transcription output from REV and the best-performing ASR (Amazon Transcribe) showed a significant difference, with REV having a slightly lower median WER (7.6% versus 8.9%). Heatmaps and spider plots were used to visualize the most common errors by LIWC category; these fell within three overarching categories: Conversation, Cognition, and Function.

Conclusions: Overall, our results suggest that the gap in WER between manual and automated transcription services may be narrowing as ASR services advance. This is consistent with the trend in the literature in which, depending on the context, WER has dropped from around 30% in the early 2000s [1] to 10%-15% in the 2010s [2] to under 10% in recent years [3]. These advances, coupled with reduced cost and turnaround time, may make ASR transcription a more viable option in healthcare settings. However, more research is required to determine whether errors in specific types of words affect the analysis and utility of these transcriptions, particularly for specific applications and across populations that vary in clinical diagnosis, literacy level, accent, and cultural origin.
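The abstract names the core quantities and tests: WER computed against a gold-standard transcript, Wilcoxon signed-rank tests for paired comparisons of services on the same participants, and Brunner-Munzel tests for independent diagnostic groups. The following is a minimal Python sketch of those computations, not the authors' analysis code; the per-participant WER values are invented for illustration.

```python
# Sketch only: WER against a gold-standard transcript, plus the two
# statistical tests named in the abstract. All sample data are hypothetical.
from scipy.stats import brunnermunzel, wilcoxon

def wer(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("patient reports feeling anxious", "patient report feeling anxious"))

# Paired comparison: two services transcribing the same participants
# (hypothetical per-participant WERs).
amazon_wers  = [0.08, 0.10, 0.07, 0.12, 0.09]
whisper_wers = [0.11, 0.14, 0.10, 0.15, 0.12]
print(wilcoxon(amazon_wers, whisper_wers))

# Independent groups: control vs. mental health condition within one service.
control_wers   = [0.08, 0.09, 0.10]
condition_wers = [0.09, 0.11, 0.08, 0.12]
print(brunnermunzel(control_wers, condition_wers))
```

Note that in practice WER is typically computed after text normalization (case folding, punctuation removal, and similar), so reported figures depend on the normalization choices as well as on the recognizer.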
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations