OpenAlex · Updated hourly · Last updated: 19.04.2026, 09:52

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Trust Me, I Am an Intelligent and Autonomous System: Trustworthy AI in Africa as Distributed Concern

2025 · 0 Citations · Open Access
Open full text at publisher

Citations: 0

Authors: 2

Year: 2025

Abstract

Over the last decade, we have witnessed the re-convergence of Human–Computer Interaction (HCI) with emerging spaces such as artificial intelligence (AI), big data, and edge computing. Specific to the agentistic turn in HCI, researchers and practitioners have grappled with the central issues around AI as a research programme or a methodological instrument—from cognitive science's emphasis on technical and computational cognitive systems to philosophy and ethics' focus on agency, perception, interpretation, action, meaning, and understanding. Even with the global proliferation of AI discourses, researchers have recognised how the discourse of AI from Africa is undermined. Consequently, researchers interested in HCI and AI in Africa have identified the growing need to explore the potential and challenges associated with the design and adoption of AI-mediated technologies in critical sectors of the economy as a matter of socio-technical interest or concern. In this chapter, we consider how the normative framings of AI in Africa—as ethical, responsible, and trustworthy—can be better understood when their subject matters are conceived as a Latourian "Distributed Concern". Building on Bruno Latour's analytical reframing of "matters of fact" as "matters of concern", we argue that operationalising trustworthy AI as a distributed concern—ethical, socio-cultural, geo-political, economic, pedagogical, technical, and so on—entails a continual process of reconciling value(s). To highlight the scalable dimension of trustworthiness in AI research and design, we engage in sustained discursive argumentation to show how a procedural analysis of trust as a spectrum might explicate the modalities that sustain the normalisation of trustworthy AI as ethical, lawful, or robust.

Related Works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Ethics and Social Impacts of AI · AI in Service Interactions