OpenAlex · Updated hourly · Last updated: 04.05.2026, 01:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Reasonable Oversight of the Practical, Legal, and Ethical Boundaries Impacting Human Factors Trust in Automated Decision-Making

2026 · 0 citations · Public Governance, Regulation and Law · Open Access
Open full text at publisher

Citations: 0
Authors: 2
Year: 2026

Abstract

Reasonable oversight is considered the standard of care when traditional legal and ethical guardrails are unexpectedly challenged by the rapid advancement and adoption of automation technology such as AI-enabled systems. Understanding how humans interact with and trust automated decision-making (ADM) technology is critical for sustaining operational performance, organizational safety, ethical integrity, regulatory compliance, legal defensibility, and fiduciary duty. As AI-enabled systems assume greater influence over workplace decisions, failures to establish appropriate levels of trust can introduce legal vulnerabilities, organizational liability, and a heightened risk of negligent reliance. This study, conducted in the United States in 2025, examined the practical, legal, ethical, and governance boundaries of ADM through the lived experiences of 12 subject matter experts (SMEs) from three technology-integrated fields: human factors, technology management, and human–computer interaction (HCI). Automated decision-making refers to the capability of automation to generate data-driven recommendations and actions, often leveraging analytics and business intelligence, thereby shaping organizational operations and resource allocation. A qualitative narrative inquiry design was used to explore how participants construct meaning around trust calibration in human–automation collaboration within workplace settings. The findings reveal nuanced insights into trust formation and erosion, the centrality of transparency and explainability, stakeholder communication needs, and the ethical implications of automated failures. Participants emphasized that inadequate oversight, poor data provenance practices, and ambiguous accountability boundaries could amplify legal exposure and undermine stakeholder confidence. The study also identified organizational training, governance structures, and auditability mechanisms as critical safeguards against negligent misuse of AI.
Additionally, the results demonstrate that trust in ADM technologies evolves dynamically over time and must be continuously monitored as systems are updated or gain autonomy. Stakeholder engagement, clear accountability lines, and proactive communication strengthen organizational legitimacy and reduce resistance. Together, these findings offer multidimensional insights into the sociotechnical, legal, ethical, and governance conditions under which trust in automation is strengthened or diminished, supporting the more responsible integration of automated decision-making into contemporary organizational operations.


Topics

Ethics and Social Impacts of AI · Human-Automation Interaction and Safety · Artificial Intelligence in Healthcare and Education