OpenAlex · Updated hourly · Last updated: 15.03.2026, 03:23

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

AI-powered LCNC implementations and gender: a comparative study of role attribution bias

2025 · 0 citations · AI and Ethics · Open Access

0 citations · 7 authors · Year: 2025

Abstract

This study investigates whether AI-powered Low-Code/No-Code (LCNC) solutions may unintentionally generate gender-biased responses. We developed four AI-powered LCNC implementations (i.e., Spreadsheet-based, Workflow-based, Web-Application-based, and Mobile-Application-based) using different generative AI models, including those from OpenAI, DeepSeek, Claude, and Google DeepMind, and evaluated their outputs in response to prompts designed to highlight potential gendered associations in roles, traits, and personal preferences. Our analysis consists of two parts. First, we applied a mixed-methods structured content analysis to systematically identify potential stereotypical patterns in the responses of the AI models. Second, we compared the outputs across the different AI models for each prompt to explore variations in gender bias-related behavior. Our findings raise an ethical concern: without appropriate policies and guidelines in place, AI-powered LCNC solutions may replicate or even amplify existing societal biases. This work contributes to ongoing discussions on responsible AI integration and bias-aware design, especially within the evolving LCNC ecosystem.
