Regulation in Motion
As AI capabilities expand, governments worldwide are grappling with how to regulate these systems. The legal landscape is evolving rapidly, with different jurisdictions taking varied approaches that shape how AI is developed and deployed.
European Union
The AI Act
The EU AI Act (Regulation 2024/1689), which entered into force on August 1, 2024, represents the world's first comprehensive AI regulation. Key provisions include:
- Risk-based classification — Four tiers: unacceptable, high, limited, and minimal risk
- Prohibited applications (effective February 2025) — Bans on social scoring, untargeted facial recognition scraping, emotion recognition in workplaces/schools
- Transparency requirements — Users must know when interacting with AI chatbots or deepfakes
- High-risk oversight (effective August 2026-2027) — Strict requirements for AI in healthcare, employment, law enforcement, biometrics
Real-time biometric identification is banned except for narrow law enforcement exceptions (missing persons, imminent threats, terrorist attacks).
United States
Sectoral Approach
The US has eschewed comprehensive AI legislation in favor of sector-specific rules and executive guidance. Key developments include:
- Executive Order 14110 (October 2023) — Biden's comprehensive AI safety order was revoked by Trump on January 20, 2025, leaving federal AI policy in flux
- NIST AI Risk Management Framework — Voluntary guidance for identifying and managing AI risks (remains in effect)
- FTC Operation AI Comply (September 2024) — Enforcement initiative against deceptive AI claims under existing consumer protection law
- State-level action — California enacted 24 AI-related laws; Illinois and Colorado passed AI employment discrimination laws
The lack of comprehensive federal legislation means states are leading on AI regulation, creating a patchwork of requirements across jurisdictions.
China
Centralized Control
China has implemented AI regulations focused on content control and social stability, including:
- Requirements that AI-generated content align with "core socialist values"
- Regulations on recommendation algorithms
- Rules requiring AI systems to maintain "positive" discourse
These regulations prioritize state interests over individual rights, reflecting China's broader approach to technology governance.
Key Legal Battles
Copyright and Training Data
Over 50 lawsuits challenge whether AI training on copyrighted material constitutes infringement. Key cases include:
- Authors Guild v. OpenAI — Class action alleging mass ingestion of literary works to train ChatGPT
- The New York Times v. OpenAI & Microsoft — Alleges millions of articles were used without consent; seeks billions in damages
- Andersen v. Stability AI — Visual artists challenging Stable Diffusion and Midjourney
- Getty Images v. Stability AI — Claims infringement of 12+ million photographs
- UMG Recordings v. Suno and Udio (June 2024) — RIAA-coordinated suits by the major labels; the first significant litigation over AI music generation
In August 2025, Anthropic reached the first major settlement in these cases, agreeing to pay a reported $1.5 billion to authors whose books were used for training. The first summary judgment rulings on fair use arrived in mid-2025 with mixed results, and appellate guidance is not expected before 2026.
Employment Discrimination
Landmark cases are establishing liability for AI-based hiring discrimination:
- EEOC v. iTutorGroup (2023) — First AI discrimination settlement ($365,000); AI automatically rejected applicants based on age
- Mobley v. Workday — Alleges Workday's AI screening tools discriminate by race, age, and disability; certified as a nationwide age-discrimination collective action in May 2025
- July 2024 ruling — In Mobley, a federal judge held that AI vendors acting as agents of employers, not just the employers themselves, can be held liable for discrimination
The EEOC has made AI bias a top enforcement priority, and a 2024 study of LLM-based resume screening found that white-associated names were favored in 85% of comparisons.
Liability and Accountability
When AI systems cause harm, who is responsible? Courts are addressing:
- Autonomous vehicle accidents
- Medical AI misdiagnoses
- Financial AI causing market disruption
Current legal frameworks struggle to assign responsibility for decisions made by AI systems.
Workplace Protections
Union Responses
Labor organizations are negotiating AI-related protections:
- Requirements for advance notice of AI implementation
- Worker input on AI tool selection
- Protections against AI-based surveillance and management
- Retraining guarantees for affected workers
Emerging Legislation
Some jurisdictions are considering or implementing:
- Right to human review of AI decisions
- Algorithmic transparency requirements
- AI impact assessments before deployment
- Worker consent for AI monitoring
Looking Ahead
The legal landscape for AI remains unsettled. Key questions include:
- Will international standards emerge, or will fragmentation continue?
- How will courts adapt existing legal concepts to AI?
- Can regulation keep pace with technological change?
- Will worker protections be strengthened or weakened?
The answers will significantly shape how AI impacts society, making legal developments essential to understanding technology resistance.