This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Assessment of Accuracy and Safety of LabTest Checker (LTC-AI)
Citations: 0
Authors: 9
Year: 2023
Abstract
Background: In recent years, the implementation of artificial intelligence (AI) in healthcare has been progressively transforming medical fields, with Clinical Decision Support Systems (CDSS) as a notable application. Laboratory tests are vital for accurate diagnoses, but the increasing reliance on them presents challenges. The need for effective strategies for managing laboratory test interpretation is evident from the millions of monthly searches on the significance of test results. The potential role of CDSS in laboratory diagnostics is gaining significance; however, more research is needed to explore this area.

Objective: The primary objective of our study was to assess the accuracy and safety of LabTest Checker (LTC), a CDSS designed to support medical diagnoses by analyzing both laboratory test results and patients' medical histories.

Methods: This cohort study used a prospective data collection approach. A total of 101 patients were enrolled, aged 18 and above, in stable condition, and requiring comprehensive diagnosis. A panel of blood laboratory tests was conducted for each participant, and participants used LabTest Checker to interpret their test results. The accuracy and safety of the tool were assessed by comparing its AI-generated suggestions to the recommendations of an experienced doctor (consultant), considered the gold standard.

Results: The system achieved 74.3% overall accuracy, 100% sensitivity for emergency cases, and 92.3% sensitivity for urgent cases. It potentially reduced unnecessary medical visits by 41.6% and achieved 82.9% accuracy in identifying underlying pathologies.

Conclusion: This study underscores the transformative potential of AI-based CDSS in laboratory diagnostics, contributing to enhanced patient care, more efficient healthcare systems, and improved medical outcomes. The performance evaluation of LabTest Checker highlights the advancing role of AI in laboratory medicine.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations