Anthropic revoked OpenAI's API access to its Claude family of AI models on 1 August 2025, citing a violation of its terms of service. According to Anthropic spokesperson Christopher Nulty, OpenAI's technical staff had been using Claude Code, Anthropic's coding tool, ahead of the GPT-5 launch. Anthropic's commercial terms explicitly prohibit customers from using the service to build competing products or services, including training competing AI models.
OpenAI had connected Claude to its internal tools via special developer access (the API) to evaluate Claude's capabilities in areas including coding and creative writing, and to test its responses to safety-related prompts involving categories such as CSAM, self-harm, and defamation. OpenAI's chief communications officer, Hannah Wong, responded that evaluating other AI systems is industry standard practice for benchmarking progress and improving safety. Anthropic took a similar step in July 2025, when it restricted the AI coding startup Windsurf's direct access to its models after rumours emerged that OpenAI was set to acquire it.
The incident highlights tensions between competing companies in the AI market, where revoking API access has long been a tactic. Anthropic's chief science officer, Jared Kaplan, previously remarked that it would be odd for Anthropic to be selling Claude to OpenAI. Even so, Anthropic says it will continue to ensure OpenAI has API access for benchmarking and safety evaluations, in line with standard industry practice, whilst OpenAI's API remains available to Anthropic.