OpenAlex · Updated hourly · Last updated: 01.05.2026, 11:37

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

The EU AI Act, Lethal Autonomous Weapons, and the Imperative for Human-Centric AI

2026 · 0 citations · European Scientific Journal ESJ · Open Access

Citations: 0 · Authors: 1 · Year: 2026

Abstract

The rapid advancement of artificial intelligence (AI) and robotics introduces profound challenges to the military sector, particularly regarding the development of Lethal Autonomous Weapon Systems (LAWS). These systems, capable of identifying and engaging targets without direct human intervention, raise critical ethical and legal questions concerning accountability and human oversight. The integration of LAWS into modern arsenals necessitates a rigorous examination of the prevailing international legal and ethical landscape, particularly as these technologies challenge the foundational tenets of International Humanitarian Law (IHL). Central to this discourse is the difficulty autonomous systems may face in complying with the principle of distinction: the technical and moral challenge of reliably differentiating between active combatants and civilians, or between able-bodied soldiers and those who are hors de combat due to injury. This study investigates whether the explicit exclusion of military and defense applications from the European Union AI Act (Regulation (EU) 2024/1689) (Artificial Intelligence Act, 2024) creates a potential regulatory gap. Adopting a doctrinal legal methodology combined with policy analysis, it examines the regulatory framework established by the EU AI Act and evaluates its implications for the governance of LAWS in light of the IHL principles of distinction, proportionality, and accountability, and it analyzes how the transition from automation to full algorithmic autonomy challenges those principles.
Furthermore, the article examines the strategic implications of automation bias and the potential erosion of human judgment in high-stakes decision-making, noting that no commonly agreed definition of LAWS currently exists. Ultimately, the fragmentation of the regulatory landscape, exemplified by the exclusion of military AI from the EU AI Act, underscores the urgent need for a unified international governance body to ensure that the rapid evolution of autonomous force does not outpace the ethical and legal frameworks it is intended to serve.

Topics

Ethics and Social Impacts of AI · Neuroethics, Human Enhancement, Biomedical Innovations · Artificial Intelligence in Healthcare and Education