California’s AB 1018 vs the EU AI Act — same goal, different routes
Plain-English guide to how California’s new Automated Decision Systems bill compares to Europe’s AI law — and what OpenAI, Anthropic and other builders should actually do.
Why it matters
AI already helps decide who gets a job interview, a loan, a flat, or faster triage in hospital. California’s AB 1018 and the EU AI Act both aim to stop unfair or unsafe outcomes — but they use very different playbooks. If you build or sell AI, understanding both lets you design one sensible compliance approach and ship with fewer legal surprises.
In one minute: the big picture
EU AI Act = a risk-based law covering almost all AI. If your use case is on the “high-risk” list (e.g. hiring), you must pass a formal “conformity assessment,” keep detailed technical files and add human controls. Top fines can hit €35m or 7% of global turnover. Timetable: law in force since 1 Aug 2024; most high-risk rules apply from 2 Aug 2026 (some bits earlier/later).
California AB 1018 = a narrower law aimed at Automated Decision Systems (ADS) that make “consequential decisions” (jobs, housing, education, healthcare, finance, legal services). It splits duties between developers and deployers, requires plain-English notices, opt-out/appeal, and — if you’re using an ADS at scale — a third-party audit. Civil fines go up to $25,000 per violation. As of mid-September 2025, AB 1018 has cleared key committees and floor votes and is still being worked on in Sacramento.
What each law actually covers
EU AI Act: “Is your use high-risk?”
Europe decides duties by risk level. Hiring and other HR tools are listed as high-risk (Annex III), so you need a quality-management system, risk controls, technical documentation, human oversight and a conformity route before you place the system on the EU market. Think of it as a safety check + paperwork before you deploy, plus monitoring after.
AB 1018: “Is your decision consequential — and how many people are affected?”
California focuses on what the system does to people. If your ADS helps make important decisions (e.g. invite to interview, approve a loan), you must:
Run pre-deployment performance evaluations (a developer duty) and share the results and usage guidance with customers.
Give people clear notices before and after decisions, explain what data was used and how the system influenced the outcome, say whether a human will review it, and offer opt-out (with some exceptions) and an appeal route.
If your ADS directly impacts more than 6,000 people in any three-year period, get an independent third-party audit, publish a high-level summary, and be ready to hand unredacted reports to the Attorney General within 30 days if asked.
Plain take: Europe sorts you by risk category; California sorts you by real-world impact and scale.
Transparency and user rights
EU: you must be transparent about AI’s role, add human oversight, and (because most cases involve personal data) expect to run a Data Protection Impact Assessment under GDPR.
California: before finalising a decision, tell the person an ADS is being used and what it’s doing; after the decision, send a short written summary within five days. People can opt out in many situations, appeal, and correct data. This is very practical for HR teams and lenders because it maps to real steps in a workflow.
Audits and assurance
EU: your main hurdle is the conformity assessment for high-risk uses, plus ongoing monitoring once live. Think audit-style, but anchored in product safety law.
California: audits are triggered by scale (more than 6,000 people in any three-year period). That’s unusual — it targets systems that affect lots of people, regardless of model size — and it means you’ll need deployment-level metrics, not just model cards.
Penalties and timelines
EU: top penalties up to €35m or 7% of worldwide turnover (whichever is higher). Key dates: in force 1 Aug 2024; some duties already live in Feb/Aug 2025; high-risk rules mostly from 2 Aug 2026; some embedded product cases run to 2027. The Commission says there will be no delay.
California: civil penalties up to $25,000 per violation, plus injunctions and fees. Business groups warn that “per violation” could stack up fast; committee papers also note daily offences for missing audit submissions. Lawmakers are still tuning details and timing.
So what should OpenAI, Anthropic and similar builders do?
OpenAI
Stance: OpenAI has publicly asked California to harmonise state rules with national and global norms to avoid a patchwork. That signals they’ll support a clear baseline but dislike multiple, slightly different state laws.
Practical exposure: OpenAI’s models are general-purpose. AB 1018 bites when customers use them to make consequential decisions (e.g. CV screening). In the EU, those HR uses are high-risk, so buyers will expect documentation and help with conformity steps. Net: invest in customer-facing compliance kits and human-in-the-loop patterns now.
Anthropic
Stance: Anthropic endorsed SB 53, California’s “frontier model” bill — which helps with political capital in Sacramento and suggests the company is comfortable with thoughtful state-level rules.
Practical exposure: Same as OpenAI: AB 1018 applies when customers use Claude for hiring, lending, etc. In the EU, enterprise buyers will ask for high-risk documentation and controls. Make it easy to deploy Claude safely by default in those flows.
Bottom line: Build one evidence spine (docs, tests, oversight, logging) that covers EU high-risk and California consequential decisions, then add local wrappers (EU conformity admin; CA notices/appeals + audit readiness).
Where the two laws line up
Jobs and HR are regulated in both places. If your tool helps decide who to interview, promote or pay, assume you’re in scope.
Humans must be able to step in. Both require meaningful human oversight.
You need evidence. Expect to show bias and performance testing and keep records.
Be ready to share documents with regulators (EU authorities / the California Attorney General).
Where they differ (and how to design around it)
Trigger: EU = risk list (Annex III). CA = impact + scale.
Assurance model: EU = conformity assessment + CE-style paperwork. CA = independent audits when you pass the 6,000-people threshold.
Penalties: EU = turnover-linked (can be massive). CA = per-violation fixed fines (can stack).
Geography: EU is harmonised across 27 countries. The US will likely be state-by-state unless Congress acts — hence OpenAI’s harmonisation push.
Risks and trade-offs to watch
Audit bottlenecks (CA): Lots of firms hitting the 6,000-people trigger could overwhelm auditors and raise costs; you’ll also worry about IP leakage — so insist on standard scopes and NDAs.
Mis-classification (EU): If you wrongly treat a high-risk system as low-risk, the penalties and “stop-ship” pain are severe. Get legal sign-off.
Policy volatility (CA): Sacramento is still tuning language and timelines; some coverage notes delays and cost debates. Track end-of-session outcomes.
Four practical steps (do these next)
Map your use cases. Tag anything touching jobs, pay, promotion, credit, housing, healthcare, education, legal services. Mark EU (Annex III high-risk) and CA (consequential).
Ship an “evidence kit.”
EU pack: risk file, data governance notes, intended purpose, human oversight design, test results, incident playbooks, conformity route summary.
CA pack: developer performance evaluations, easy-to-read notices, opt-out + appeal steps, AG-response pack, and audit-ready logs for the 6,000/3-year trigger.
Bake in human control. Add review checkpoints, reversal windows, and a simple path to fix wrong data. This satisfies both laws and calms buyers.
Plan the timeline. EU high-risk obligations land Aug 2026 (with other bits earlier); AB 1018 details are being finalised but audits and penalties are clearly on the table — budget time and money now.
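The mapping step above can be sketched in code. This is a minimal, hypothetical illustration — the domain lists and the audit-trigger check mirror the categories and the 6,000-people/3-year threshold described in this guide, but the set contents, names, and logic are simplifying assumptions, not statutory text:

```python
# Hypothetical sketch: tag each use case against the EU AI Act's Annex III
# high-risk categories and AB 1018's "consequential decision" scope, and flag
# California's audit trigger. Domain lists below are illustrative assumptions.
from dataclasses import dataclass

EU_ANNEX_III = {"hiring", "promotion", "credit", "education", "essential_services"}
CA_CONSEQUENTIAL = {"hiring", "promotion", "pay", "credit", "housing",
                    "healthcare", "education", "legal_services"}
CA_AUDIT_THRESHOLD = 6000  # people directly impacted in any three-year window

@dataclass
class UseCase:
    name: str
    domain: str
    people_impacted_3yr: int

def classify(uc: UseCase) -> dict:
    """Return scope flags for one use case under both regimes."""
    ca_in_scope = uc.domain in CA_CONSEQUENTIAL
    return {
        "use_case": uc.name,
        "eu_high_risk": uc.domain in EU_ANNEX_III,
        "ca_consequential": ca_in_scope,
        # Audit duty only attaches if the system is in scope AND crosses scale
        "ca_audit_trigger": ca_in_scope
            and uc.people_impacted_3yr > CA_AUDIT_THRESHOLD,
    }

if __name__ == "__main__":
    print(classify(UseCase("CV screening assistant", "hiring", 12_000)))
```

A tagged inventory like this also doubles as the index for the "evidence kit" in step 2: each flagged use case points at its EU pack or CA pack.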
“If your model helps hire or lend, assume you’re regulated — then prove it’s fair.”
UK/EU alignment note (for British readers)
If you operate in the UK, keep doing DPIAs under UK GDPR and follow NCSC good practice. For the EU, meet the AI Act high-risk regime. In the US, layer on AB 1018 notices/appeals and audit readiness where your deployments affect Californians at scale.
Disclaimer
This is general information, not legal advice. Check final statutory text and official guidance before making decisions.