This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
The adoption paradox for veterinary professionals in China: high use of artificial intelligence despite low familiarity
Citations: 0
Authors: 2
Year: 2026
Abstract
Introduction: The global integration of artificial intelligence (AI) into veterinary medicine is advancing, yet its adoption in major markets like China remains uncharacterized. This study aimed to provide the first exploratory analysis of AI perception and adoption among veterinary professionals in China.

Methods: A cross-sectional survey was administered to 455 veterinary professionals in China from May to July 2025. Data on AI familiarity, adoption rates, application priorities, and perceived drivers and barriers were analyzed using descriptive statistics and thematic analysis.

Results: We identified a distinct adoption paradox: 71.0% of respondents incorporated AI into their workflow, yet 44.6% of these active users reported low familiarity with the technology. Adoption was primarily practitioner-driven and focused on core clinical tasks, including AI-assisted disease diagnosis (50.1%) and prescription calculation (44.8%). The primary barrier to use was concern about AI reliability and accuracy (54.3%). A strong consensus (93.8%) emerged supporting regulatory oversight of AI by veterinary authorities.

Discussion: The adoption paradox is driven by a practitioner-led, “inside-out” integration model where AI is used to augment clinical capabilities, countered by an “interpretability gap” that limits trust and familiarity. This contrasts with the more administrative, “outside-in” pattern seen in North America. The findings underscore a need for specialized veterinary AI tools, enhanced training focused on critical appraisal, and robust regulatory frameworks to safely harness AI’s potential in one of the world’s largest veterinary markets.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations