This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evidence for AI in healthcare
1 citation · 6 authors · 2024
Abstract
In healthcare, novel AI solutions are being created to address some of the biggest challenges in the prognosis, diagnosis, and treatment of disease, as well as clinician workflows and service improvement. The emergence of generative AI presents a compelling opportunity to revolutionize diagnostics, treatment planning, medical research, and patient engagement. The hypothesized uses of generative AI are broad, ranging from medical education and providing information to patients, to generating synthetic patient data for the validation of AI tools, to analyzing continuous data from wearables to detect early signs of disease. Adoption of digital solutions and AI in healthcare is slower than in other industries. The majority of clinicians do not have direct experience with AI technologies; only a quarter have recommended a digital therapeutic, and less than a fifth have prescribed one. Safety, quality, and confidence can be built through appropriate governance, testing, careful implementation, and appropriate clinical use. AI developers, health systems and providers, and regulators share a responsibility to create clear expectations for evidence and solutions. AI solutions require unique model evidence (evidence for the underlying algorithm) and solution evidence (evidence for the product in which the algorithm is embedded). Models need validation first on internal and then on external datasets (internal and external validation). This indicates how accurate and reliable the model is, but is limited to the dataset a client has access to. By examining ways to evaluate evidence for AI in healthcare, we can develop a better understanding of how best to use AI to its most appropriate advantages.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations