This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Whose Truth? Pluralistic Geo-Alignment for (Agentic) AI
Citations: 0 · Authors: 8 · Year: 2025
Abstract
AI alignment describes the challenge of ensuring that (future) AI systems behave in accordance with societal norms, values, and goals. Alignment is now central to research on foundation models and AI agents. Most recent work focuses on methods to prevent potentially harmful biases, account for social inequalities, improve AI safety, and enhance explainability. Notably, the debiasing 'corrections' applied at various stages of AI/ML workflows may lead to outcomes that diverge strongly from current statistical realities on the ground. For instance, text-to-image models may depict a balanced gender ratio in company leadership despite existing imbalances. However, an often overlooked dimension is the geographic variability of alignment. What is considered appropriate, truthful, or legal can vary greatly between regions due to cultural differences, political realities, or legislation. Hence, some model outputs align without further knowledge of the user's geospatial context, while others are highly sensitive to it. Put differently, whether these outputs align varies geographically. For example, statements about Kashmir cannot be generated without understanding the user's origin and current location.

From a common-sense perspective, this problem is hardly new; Google Maps, for instance, renders different administrative borders depending on the user's location. Interestingly, spatiotemporal context remains a major challenge in both knowledge representation and representation learning, e.g., due to the monotonic nature of reasoning. Until very recently, these were largely theoretical problems. What is truly novel is the scale and level of automation at which AI systems now mediate knowledge, express opinions, and represent reality to millions of users across borders, often with little transparency or oversight regarding how context is handled. With agentic AI on the horizon, the urgency for pluralistic, geographically aware alignment, rather than one-size-fits-all solutions, is growing.

Here, we motivate and formalize the vision of geo-alignment, outline how it goes beyond pluralistic alignment by offering learnable, spatially explicit patterns, and suggest concrete avenues for future research.
Similar Works
The global landscape of AI ethics guidelines
2019 · 4,480 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,361 citations
Fairness through awareness
2012 · 3,258 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations