This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Responsible artificial intelligence in clinical decision support systems requires good science: Lessons learned from an international roundtable discussion (Preprint)
Citations: 0
Authors: 16
Year: 2023
Abstract
<sec> <title>UNSTRUCTURED</title> In healthcare, where increasing efficiency is essential to meet the demands of scale, there is immense opportunity to incorporate advances in artificial intelligence (AI). In healthcare especially, however, these technologies must be designed to be both effective and ethical. Our objective in a multidisciplinary international roundtable discussion (Canada, United States, United Kingdom) was to identify concepts, perspectives, and considerations for AI systems in healthcare settings that are designed, developed, and deployed with good intention to empower patients and healthcare providers in a safe, trustworthy, and ethical way. We refer to this notion as responsible AI (RAI). First, we discuss the role and opportunity of AI to support collaborative healthcare (clinicians and patients working together) and to increase specialist capacity. Second, we outline the risks and ramifications of poorly implemented AI, including bias, the implications of predictors used to support diagnosis, and privacy and security considerations. Third, we discuss how these risks can be mitigated by conducting “good science”: addressing bias through representative data, probing annotation bias, and engaging biostatisticians. We also outline the need to evaluate fitness for purpose through transdisciplinary collaboration, addressing explainability, fairness, interpretability, and transparency, as well as the role of standards, auditing, and regulatory considerations. Finally, we detail four criteria outlining the determinants, considerations, and rationale for developing RAI. These determinants and considerations are meant to position new AI-powered healthcare technologies for responsible design that supports acceptability, appropriateness, feasibility, and adoption. Future work should expand on additional factors and monitor the success of responsible AI implementations to validate these criteria. </sec>
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,380 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,243 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,671 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,496 citations