This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Measures of socioeconomic advantage are not independent predictors of support for healthcare AI: subgroup analysis of a national Australian survey
Citations: 0
Authors: 6
Year: 2023
Abstract
Applications of artificial intelligence (AI) have the potential to improve aspects of healthcare. However, studies have shown that healthcare AI algorithms can also perpetuate existing inequities in healthcare, performing less effectively for marginalised populations. Studies of public attitudes toward AI outside the healthcare field have tended to show higher levels of support for AI amongst socioeconomically advantaged groups, which are less likely to suffer algorithmic harms. We aimed to examine the sociodemographic predictors of support for scenarios related to healthcare AI. The AVA-AI survey was conducted in March 2020 to assess Australians’ attitudes toward artificial intelligence in healthcare. An innovative weighting methodology involved weighting a non-probability web-based panel against results from a shorter omnibus survey distributed to a representative sample of Australians. We used multinomial logistic regression to examine the relationship between support for AI and a suite of sociodemographic variables across various healthcare scenarios. While general support for AI was predicted by measures of socioeconomic advantage such as education, household income, and SEIFA index, the same variables were not predictors of support for the healthcare scenarios presented. Variables associated with support for healthcare AI across all three scenarios included being male, having computer science or programming experience, and being aged between 18 and 34 years. Other Australian studies suggest that this group has a higher level of perceived familiarity with AI. Our findings suggest that while support for AI in general is predicted by indicators of social advantage, these same indicators do not predict support for healthcare AI.

WHAT IS ALREADY KNOWN ON THIS TOPIC
Artificial intelligence has the potential to perpetuate existing biases in healthcare datasets, which may be more harmful for marginalised populations. Support for the development of artificial intelligence tends to be higher amongst more socioeconomically privileged groups.

WHAT THIS STUDY ADDS
Whilst general support for the development of artificial intelligence was higher amongst socioeconomically privileged groups, support for the development of healthcare artificial intelligence was not. Groups that were more likely to support healthcare artificial intelligence were males, those with computer science experience, and younger people.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE, OR POLICY
Healthcare artificial intelligence is becoming more relevant for the public as new applications are developed and implemented. Understanding how public attitudes differ amongst sociodemographic subgroups is important for future governance of healthcare AI.
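The abstract describes a multinomial logistic regression of a support outcome on sociodemographic predictors in a weighted survey sample. The sketch below is a loose illustration of that style of analysis, not the authors' code: the predictors (male, computer science experience, aged 18 to 34), the three-level outcome, and the survey weights are all synthetic assumptions.

```python
# A minimal sketch (not the authors' analysis) of a weighted multinomial
# logistic regression, as described in the abstract. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 500

# Hypothetical binary predictors: male, CS/programming experience, aged 18-34.
X = rng.integers(0, 2, size=(n, 3)).astype(float)

# Synthetic outcome with three levels: 0 = oppose, 1 = neutral, 2 = support,
# where the support probability rises with each predictor (by construction).
p_support = 1.0 / (1.0 + np.exp(-(X @ np.array([1.0, 0.5, 1.5]) - 1.0)))
y = np.where(rng.random(n) < p_support, 2, rng.integers(0, 2, size=n))

# Hypothetical post-stratification survey weights.
w = rng.uniform(0.5, 2.0, size=n)

# scikit-learn fits a multinomial model for >2 classes with the lbfgs solver;
# sample_weight carries the survey weights into the likelihood.
model = LogisticRegression(max_iter=1000)
model.fit(X, y, sample_weight=w)

print(model.classes_)     # the three outcome levels
print(model.coef_.shape)  # one coefficient row per outcome level
```

In practice, survey analyses like the one described would typically also report odds ratios and confidence intervals adjusted for the weighting design; this sketch only shows the basic model fit.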
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations