This is an overview page with metadata for this scientific article. The full article is available from the publisher.
LLM-based ambiguity detection in natural language instructions for collaborative surgical robots
Citations: 2
Authors: 3
Year: 2025
Abstract
Ambiguity in natural language instructions poses significant risks in safety-critical human-robot interaction, particularly in domains such as surgery. To address this, we propose an LLM-based ambiguity detection framework designed specifically for collaborative surgical scenarios. Our method employs an ensemble of Large Language Model (LLM) evaluators, each configured with distinct prompting techniques to identify linguistic, contextual, procedural, and critical ambiguities. A chain-of-thought evaluator is included to systematically analyze instruction structure for potential issues. Individual evaluator assessments are synthesized through conformal prediction, which yields non-conformity scores based on comparison to a labeled calibration dataset. Evaluating Llama 3.2 11B and Gemma 3 12B, we observed classification accuracy exceeding 60% in differentiating ambiguous from unambiguous surgical instructions. Our approach improves the safety and reliability of human-robot collaboration in surgery by offering a mechanism to identify potentially ambiguous instructions before robot action.
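The conformal prediction step described in the abstract can be illustrated with a minimal sketch of split conformal prediction: an ensemble non-conformity score for a new instruction is compared against scores from a labeled calibration set to obtain a finite-sample-valid p-value. This is a generic sketch, not the paper's implementation; the score aggregation (here a simple mean of hypothetical evaluator scores) and the threshold `alpha` are assumptions for illustration.

```python
def conformal_p_value(score, calib_scores):
    """Split conformal p-value for a new non-conformity score.

    score        : non-conformity score of a new instruction
                   (higher = less like the calibration examples).
    calib_scores : non-conformity scores computed on a labeled
                   calibration dataset.
    The +1 terms in numerator and denominator give the standard
    finite-sample validity guarantee of split conformal prediction.
    """
    n = len(calib_scores)
    ge = sum(1 for s in calib_scores if s >= score)
    return (ge + 1) / (n + 1)

def flag_ambiguous(evaluator_scores, calib_scores, alpha=0.1):
    """Aggregate ensemble evaluator scores and flag the instruction.

    evaluator_scores : per-evaluator ambiguity scores in [0, 1]
                       (hypothetical; the paper uses distinct
                       prompting techniques per evaluator).
    Returns (is_ambiguous, p_value): the instruction is flagged when
    its p-value falls at or below the miscoverage level alpha.
    """
    score = sum(evaluator_scores) / len(evaluator_scores)
    p = conformal_p_value(score, calib_scores)
    return p <= alpha, p
```

With a calibration set drawn from unambiguous instructions, a small p-value means the new instruction's ensemble score is unusually high relative to that set, so it is flagged for clarification before any robot action.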
Related Works
The SCARE 2020 Guideline: Updating Consensus Surgical CAse REport (SCARE) Guidelines
2020 · 5,580 citations
The SCARE 2023 guideline: updating consensus Surgical CAse REport (SCARE) guidelines
2023 · 3,001 citations
Virtual Reality Training Improves Operating Room Performance
2002 · 2,805 citations
Objective structured assessment of technical skill (OSATS) for surgical residents
1997 · 2,260 citations
Does Simulation-Based Medical Education With Deliberate Practice Yield Better Results Than Traditional Clinical Education? A Meta-Analytic Comparative Review of the Evidence
2011 · 1,744 citations