This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
PitVis-2023 challenge: Workflow recognition in videos of endoscopic pituitary surgery
Citations: 3
Authors: 33
Year: 2025
Abstract
The field of computer vision applied to videos of minimally invasive surgery is ever-growing. Workflow recognition pertains to the automated recognition of various aspects of a surgery, including which surgical steps are performed and which surgical instruments are used. This information can later be used to assist clinicians when learning the surgery or during live surgery. The Pituitary Vision (PitVis) 2023 Challenge tasks the community with step and instrument recognition in videos of endoscopic pituitary surgery. This is a particularly challenging task when compared to other minimally invasive surgeries due to the smaller working space, which limits and distorts vision, and the higher frequency of instrument and step switching, which requires more precise model predictions. Participants were provided with 25 videos, and results were presented at the MICCAI-2023 conference as part of the Endoscopic Vision 2023 Challenge in Vancouver, Canada, on 08-Oct-2023. There were 18 submissions from 9 teams across 6 countries, using a variety of deep learning models. The top-performing model for step recognition utilised a transformer-based architecture, uniquely using an autoregressive decoder with a positional encoding input. The top-performing model for instrument recognition utilised a spatial encoder followed by a temporal encoder, uniquely using a 2-layer temporal architecture. In both cases, these models outperformed purely spatial models, illustrating the importance of sequential and temporal information. The PitVis-2023 Challenge therefore demonstrates that state-of-the-art computer vision models in minimally invasive surgery are transferable to a new dataset. Benchmark results are provided in the paper, and the dataset is publicly available at: https://doi.org/10.5522/04/26531686.
Related works
The SCARE 2020 Guideline: Updating Consensus Surgical CAse REport (SCARE) Guidelines
2020 · 5,571 citations
Virtual Reality Training Improves Operating Room Performance
2002 · 2,782 citations
An estimation of the global volume of surgery: a modelling strategy based on available data
2008 · 2,503 citations
Objective structured assessment of technical skill (OSATS) for surgical residents
1997 · 2,256 citations
Does Simulation-Based Medical Education With Deliberate Practice Yield Better Results Than Traditional Clinical Education? A Meta-Analytic Comparative Review of the Evidence
2011 · 1,701 citations
Authors
- Adrito Das
- Danyal Z. Khan
- Dimitrios Psychogyios
- Yitong Zhang
- John Hanrahan
- Francisco Vasconcelos
- You Pang
- Zhen Chen
- Jinlin Wu
- Xiaoyang Zou
- Guoyan Zheng
- Abdul Qayyum
- Moona Mazher
- Imran Razzak
- Tianbin Li
- Ye Jin
- Junjun He
- Szymon Płotka
- Joanna Kaleta
- Amine Yamlahi
- Antoine Jund
- Patrick Godau
- Satoshi Kondo
- Satoshi Kasai
- Kousuke Hirasawa
- Dominik Rivoir
- Stefanie Speidel
- Alejandra Pérez
- Santiago Rodríguez
- Pablo Arbeláez
- Danail Stoyanov
- Hani J. Marcus
- Sophia Bano
Institutions
- University College London (GB)
- University College of Osteopathy (GB)
- National Hospital for Neurology and Neurosurgery (GB)
- The London College (GB)
- Centre for Artificial Intelligence and Robotics (IN)
- Shanghai Jiao Tong University (CN)
- Imperial College London (GB)
- UNSW Sydney (AU)
- Shanghai Artificial Intelligence Laboratory
- University of Amsterdam (NL)
- Jagiellonian University (PL)
- Computational Physics (United States) (US)
- Heidelberg University (DE)
- German Cancer Research Center (DE)
- National Center for Tumor Diseases (DE)
- University Hospital Heidelberg (DE)
- Muroran Institute of Technology (JP)
- Niigata University of Health and Welfare (JP)
- Konica Minolta (Japan) (JP)
- Universidad de Los Andes (CO)