OpenAlex · Updated hourly · Last updated: 07.04.2026, 10:52

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Legal and Regulatory Frameworks Governing Generative AI for Enterprises

2026 · 0 citations · Open Access
Open full text at the publisher

0 citations · 1 author · 2026 (publication year)

Abstract

This chapter provides a comprehensive analysis of the legal and regulatory frameworks shaping the deployment of generative AI (GenAI) in enterprise contexts. As GenAI rapidly transforms business operations by automating tasks, enhancing decision-making, and driving innovation, enterprises face mounting pressure to comply with a fragmented and evolving global regulatory landscape. The chapter opens by establishing the significance of GenAI adoption, emphasizing that legal and regulatory compliance is foundational not only for mitigating risks but also for sustaining trust, fairness, and innovation. It then outlines the global landscape, beginning with the European Union’s AI Act, a pioneering, risk-based regulation that classifies AI systems into tiers from unacceptable to minimal risk and imposes transparency, human oversight, and conformity obligations. The GDPR’s provisions on privacy by design, lawful processing, and data subject rights are detailed, highlighting how they affect generative AI systems. In the United States, the absence of a federal AI law has led to a patchwork of state-level laws (e.g., in California and Colorado) and federal agency guidance from the FTC and FCC, with new executive orders driving infrastructure localization and AI safety standards. China enforces a dual-track regulatory model emphasizing sovereignty and accountability, requiring AI-generated content labeling and domestic data storage. India’s Digital Personal Data Protection (DPDP) Act and anticipated AI Governance Act focus on consent, bias prevention, and local data storage, and India’s courts and policymakers are actively shaping IP law interpretations, with landmark cases like ANI v. OpenAI challenging unauthorized content use for training.
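The EU AI Act's risk-based tiering described above can be sketched as a simple lookup structure. This is a minimal illustrative sketch: the tier names follow the Act, but the obligation strings are compressed summaries written for this example, not legal text, and the function is hypothetical.

```python
from enum import Enum

# Illustrative sketch of the EU AI Act's four risk tiers. Tier names
# follow the Act; the obligation summaries are simplified paraphrases
# for illustration only, not legal text.

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring)"
    HIGH = "conformity assessment, human oversight, logging"
    LIMITED = "transparency duties (e.g. disclose AI interaction)"
    MINIMAL = "no mandatory obligations; voluntary codes"

def obligations(tier: RiskTier) -> str:
    """Return the (simplified) obligation summary for a tier."""
    return tier.value

print(obligations(RiskTier.HIGH))
```

An enterprise compliance workflow would, of course, map concrete systems onto such tiers before deciding which controls to apply.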
Countries like the UAE and Saudi Arabia balance innovation and control through national AI strategies, AI regulatory sandboxes, and novel constructs like “data embassies.” The Asia-Pacific region presents diverse governance models: prescriptive in China and South Korea, principle-based in Japan and Australia, and co-regulatory in Singapore, where the Model AI Governance Framework for GenAI and the AI Verify toolkit offer adaptive oversight. These approaches reflect varying national priorities—data sovereignty, economic competitiveness, and human-centric AI values. Key global trends include increased emphasis on risk management, transparency, explainability, and data provenance. Enterprises are adopting grounding techniques, human-in-the-loop reviews, and automated reasoning to mitigate hallucinations and ensure factual integrity. Enforcement of policy-based guardrails—such as Amazon Bedrock Guardrails—at the training, inference, and deployment stages ensures ethical AI use across industries. On intellectual property (IP), the chapter explores the challenges enterprises face regarding AI-generated content. Most jurisdictions do not yet recognize machine-generated works under copyright or patent laws. Notable legal actions, such as Getty Images v. Stability AI, highlight the risks of unlicensed data use in training models. The chapter recommends hybrid IP strategies involving human authorship claims, watermarked outputs, and enhanced licensing practices. Liability issues are explored in depth, clarifying the responsibilities of developers, deployers, and users of AI systems. Recent legal developments, including the Garcia v. Character.AI case, suggest courts may increasingly treat AI tools as “products” subject to liability laws. Section 230 protections in the U.S. are also narrowing as courts distinguish between hosting and content generation. 
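The inference-stage guardrails mentioned above can be sketched as a prompt filter applied before a request reaches the model. This is a deliberately minimal sketch: the topic names, keyword lists, and function are illustrative assumptions, not the Amazon Bedrock Guardrails API, which operates on configurable policies rather than hard-coded keywords.

```python
# Minimal sketch of an inference-stage, policy-based guardrail: a
# deny-topic keyword filter applied to prompts before model invocation.
# Policy names and keywords are illustrative, not any vendor's API.

DENIED_TOPICS = {
    "legal_advice": ["draft a contract", "legal advice"],
    "pii_harvest": ["social security number", "credit card number"],
}

def check_prompt(prompt):
    """Return (allowed, violated_policy) via case-insensitive matching."""
    lowered = prompt.lower()
    for policy, keywords in DENIED_TOPICS.items():
        if any(kw in lowered for kw in keywords):
            return False, policy
    return True, None

print(check_prompt("Please give me legal advice on terminating staff."))
```

Production guardrails add analogous checks at training time (data filtering) and at the output stage (response moderation), as the chapter notes.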
To mitigate legal exposure, enterprises must implement rigorous governance frameworks, transparency mechanisms, and ethical use policies. The chapter also presents open-source compliance tools for multi-jurisdictional data governance, including ARX and Microsoft Presidio for data anonymization and privacy-preserving AI. These tools help enterprises comply with regulations like GDPR, CCPA, and DPDP while maintaining auditability and scalability. In conclusion, the chapter underscores that enterprises deploying generative AI must proactively align with a dynamic legal environment. This involves not only understanding varied jurisdictional requirements but also embedding responsible, ethical practices across the AI lifecycle. Through adaptive governance, technical safeguards, and legal foresight, organizations can leverage GenAI’s transformative potential while upholding societal values and minimizing regulatory and reputational risks.
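The data anonymization that tools such as Microsoft Presidio and ARX perform can be illustrated, in a very reduced form, as rule-based redaction. The patterns and placeholders below are simplified assumptions for this sketch; the real tools use NER models, context scoring, and formal privacy models such as k-anonymity.

```python
import re

# Simplified sketch of rule-based PII redaction, illustrating (only in
# spirit) what anonymization tools like Microsoft Presidio or ARX do
# with far greater sophistication. Patterns cover only simple cases.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def anonymize(text):
    """Replace each matched PII span with a <TYPE> placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(anonymize("Contact jane.doe@example.com or +1 (555) 123-4567."))
```

Replacing identifiers with typed placeholders, rather than deleting them, preserves auditability: reviewers can still see what category of data was removed and where.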

Topics

Ethics and Social Impacts of AI · Law, AI, and Intellectual Property · Artificial Intelligence in Healthcare and Education