This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The 4th Workshop on Ethical Artificial Intelligence: Methods and Applications (EAI)
0
Citations
4
Authors
2025
Year
Abstract
As computers increasingly make decisions about who gets a loan, a job, or even bail, the expansion of AI algorithms has provoked public concern about ethical issues, and the need to understand how AI algorithms are constructed and how they make decisions becomes ever more pressing. For example, a growing number of high-profile news reports have documented that widely used algorithms unfairly discriminated against some groups of people (e.g., by gender and race) in parole decisions and other major life events. Focusing attention on ethical bias in learning algorithms is key to unlocking the potential of automated decision systems while ensuring fairness and accountability, so that everyone can advance equally in society. Ethical AI has attracted increasing attention from academia and industry due to its growing role in real-world applications with fairness concerns. It also places fundamental importance on ethical considerations in determining legitimate and illegitimate uses of AI, and organizations that apply ethical AI maintain clearly stated, well-defined review processes to ensure adherence to legal guidelines. The wave of research at the intersection of ethical AI, data mining, and machine learning has therefore also influenced other fields of science, including computer vision, natural language processing, reinforcement learning, and social science.
Similar Works
The global landscape of AI ethics guidelines
2019 · 4,672 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,879 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,490 citations
Fairness through awareness
2012 · 3,298 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,184 citations