This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Analysis: Serving Individuals with Language Impairments using Automatic Speech Recognition Models and Large Language Models: Challenges and Opportunities
0
Citations
13
Authors
2025
Year
Abstract
Large language models (LLMs) have attracted much attention for healthcare applications, demonstrating strong potential in automating conversational interactions. However, cloud-hosted LLMs pose major data privacy concerns when processing Protected Health Information (PHI). Moreover, current LLM-based systems rely on text input/output, creating substantial barriers for users, such as children and older adults, who may have difficulty typing. To mitigate these challenges, there has been growing interest in developing edge device-based, voice-enabled LLM systems. Running LLMs on edge devices minimizes the risk of PHI leaking to the cloud, while automatic speech recognition (ASR) eliminates the need for text-based input. Despite these advantages, existing ASR systems convert speech into word-by-word text, which often contains disfluencies, fillers (e.g., "um", "hum"), and grammatical errors, especially for individuals with language impairments. This noisy input can significantly degrade the performance of LLMs, yet this chained issue remains under-explored in healthcare applications. To address this critical gap, we conducted a systematic analysis through comparison studies and ablation experiments to identify key factors affecting the performance of edge-based ASR-LLM systems when used by individuals with language impairments. Furthermore, we proposed an evaluation framework for speech-enabled AI healthcare that emphasizes both interpretability and robustness, paving the way for more inclusive and secure conversational healthcare solutions.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations