This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Mitigated deployment strategy for ethical AI in clinical settings
0 citations · 2 authors · 2025
Abstract
Clinical diagnostic tools can disadvantage subgroups due to poor model generalisability, which can be caused by unrepresentative training data. Practical deployment solutions to mitigate harm for subgroups from models with differential performance have yet to be established. This paper builds on existing work that considers a selective deployment approach, in which poorly performing subgroups are excluded from deployments. As an alternative, the proposed 'mitigated deployment' strategy requires safety nets to be built into clinical workflows to safeguard under-represented groups in a universal deployment. This approach relies on human-artificial intelligence collaboration and postmarket evaluation to continually improve model performance across subgroups with real-world data. Using a real-world case study, the benefits and limitations of mitigated deployment are explored. This adds to the tools available to healthcare organisations when considering how to safely deploy models with differential performance across subgroups.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations