The Control Layer

When Rivals Test Each Other’s AI: What OpenAI and Anthropic Just Did

In a stunning twist for an industry defined by secrecy, OpenAI and Anthropic just finished testing each other’s models for safety risks — and they’ve gone public with the results.

Amer Altaf
Aug 29, 2025

This marks the first time two leading AI labs, normally locked in an arms race, have effectively “peer‑reviewed” each other’s systems. On 27 August 2025, the companies announced that OpenAI had run safety checks on Anthropic’s Claude Opus 4 and Claude Sonnet 4, while Anthropic reviewed OpenAI’s GPT‑4o, GPT‑4.1, o3, and o4‑mini. Both sets of findings were…
