This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Unavoidable futures? How governments articulate sociotechnical imaginaries of AI and healthcare services
Citations: 34
Authors: 1
Year: 2023
Abstract
Sociotechnical imaginaries of artificial intelligence (AI) play a key role in shaping the future of healthcare services. Academics have analysed the (largely utopian) corporate visions concerning AI, but how governments construe sociotechnical imaginaries of AI has remained understudied. In this paper, we critically analyse how governments articulate sociotechnical imaginaries of AI in Dutch healthcare. We build a theoretical framework for studying these sociotechnical imaginaries, in which we emphasize a) imaginaries of AI and healthcare services, b) their performativity, and c) the socio-political context in which they are articulated. Backed by a critical multimodal discourse analysis of the Dutch policy programme ‘Valuable AI’, we identify three tactics for imagining the future of AI and healthcare services, i.e., tactics of 1) legitimation, 2) promotion, and 3) reassurance. Each of these tactics ‘visualizes’ algorithmization and aims to evoke affective meanings concerning ‘the future’ of AI in healthcare. Our analysis highlights several underlying frictions and tensions between the different governmental tactics, which give rise to democratic, political, and professional risks. We conclude that explicit attention to these risks, which are now smoothed out in attractive sociotechnical imaginaries, is needed to open up debate and eventually realize democratic, balanced, and fair algorithmic futures.
Related works
The global landscape of AI ethics guidelines
2019 · 4,711 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,884 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,506 citations
Fairness through awareness
2012 · 3,301 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,193 citations