After dissing Anthropic for limiting Mythos, OpenAI restricts access to Cyber, too

Key Takeaways

  • OpenAI restricted access to its GPT-5.5-Cyber model just nine days after CEO Sam Altman criticized Anthropic for limiting its Claude Mythos cybersecurity AI
  • Both companies cite unprecedented cyber attack capabilities as the reason for restricted access, confirmed by the UK AI Safety Institute
  • OpenAI’s Trusted Access for Cyber (TAC) program launched May 7, 2026, requiring applications and tiered verification
  • An unauthorized group accessed Anthropic’s Mythos on release day, highlighting the leak risk despite safeguards
  • The “trusted access” model is becoming the industry standard for dual-use AI capabilities

OpenAI CEO Sam Altman publicly slammed Anthropic for restricting its Claude Mythos cybersecurity model in April 2026, calling it “fear-based marketing” — but within nine days, OpenAI announced similar restrictions on its own GPT-5.5-Cyber model, launching a formal access program on May 7.

A Rapid Reversal on Access Controls

The timeline of events reveals a stark contradiction in OpenAI’s public positioning. On April 5, 2026, during an appearance on the “Core Memory” podcast, Altman criticized Anthropic’s decision to limit access to its Claude Mythos cybersecurity model. “It is clearly incredible marketing to say, ‘We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million,'” Altman said, according to the podcast transcript.

Two days later, on April 7, Anthropic launched Project Glasswing, a restricted access program for Mythos with 12 founding partners, including AWS, Apple, Microsoft, and Google. The program offered Mythos at $25 per million input tokens — approximately five times the rate of Anthropic’s Opus 4.7 model. The founding partners collectively committed $100 million in compute credits.
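The pricing multiple above implies a baseline rate for Opus 4.7 of roughly $5 per million input tokens. A minimal sketch of that arithmetic, where the $25 rate and the 5x multiple come from the article but the implied baseline and the sample workload size are illustrative assumptions:

```python
# Illustrative cost math based on the stated Project Glasswing pricing.
# The $25/M input-token rate and the ~5x multiple over Opus 4.7 are from
# the article; the implied baseline rate and workload size are assumptions.

MYTHOS_INPUT_RATE = 25.00  # USD per million input tokens (stated)
PRICE_MULTIPLE = 5         # Mythos vs. Opus 4.7 (stated, approximate)

implied_opus_rate = MYTHOS_INPUT_RATE / PRICE_MULTIPLE  # ~$5.00/M implied

def input_cost(tokens: int, rate_per_million: float) -> float:
    """Input-token cost in USD for a given workload and per-million rate."""
    return tokens / 1_000_000 * rate_per_million

# Hypothetical workload: a 2-million-token code-audit run
tokens = 2_000_000
print(f"Mythos:   ${input_cost(tokens, MYTHOS_INPUT_RATE):.2f}")
print(f"Opus 4.7: ${input_cost(tokens, implied_opus_rate):.2f}")
```

At these rates, a single large audit run costs five times as much on Mythos as on the implied Opus 4.7 baseline, which gives a sense of the premium the founding partners' $100 million in compute credits would absorb.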

On April 14, just nine days after Altman’s criticism, OpenAI announced its own Trusted Access for Cyber (TAC) program. The company formally launched GPT-5.5-Cyber access on May 7 via an application portal at chatgpt.com/cyber.

The two access programs differ in structure. Anthropic’s Project Glasswing is a curated partnership model with 12 verified organizations, while OpenAI’s TAC program targets “thousands of verified defenders” through a tiered verification system. OpenAI’s pricing structure for GPT-5.5-Cyber has not been publicly disclosed, but industry analysts estimate it will be comparable to Anthropic’s rates.

“Altman’s ‘bomb shelter’ analogy now applies to his own company,” said Dr. Emily Chen, a research fellow at the Center for AI Safety in San Francisco. “The industry whiplash is remarkable — one week you’re mocking your competitor for safety theater, the next you’re implementing the exact same measures.”

The Dangerous Capabilities Driving Convergence

The rapid convergence on restricted-access models is driven by genuine safety concerns, according to findings from the UK AI Safety Institute (AISI). The institute confirmed that both GPT-5.5-Cyber and Claude Mythos can complete end-to-end multi-step cyber attack simulations autonomously.

OpenAI’s system card rates GPT-5.5 as “High” in cybersecurity capability — one level below “Critical” on its five-tier scale. The system card notes that the model can identify vulnerabilities, craft exploit code, and execute attack chains without human intervention.

Anthropic’s Mythos demonstrated even more alarming capabilities. The model autonomously identified “thousands of high-severity vulnerabilities” across all major operating systems, including Windows, macOS, Linux, and Android. According to Anthropic’s technical report, less than 1% of these vulnerabilities were reported and patched before the model’s release.

The danger was underscored by an unauthorized access incident on Mythos’s release day. Bloomberg reported that an unknown group accessed the model via a third-party contractor’s account, raising concerns about the effectiveness of access controls. The group’s identity and the extent of their usage remain unknown.

“This is not a special model,” said Jack Clark, Anthropic’s co-founder, during a press briefing. “There will be other systems just like this in a few months.” Clark predicted that open-weight models with similar capabilities would emerge from China within 12 to 18 months, making the current defensive advantage temporary.

Industry Reaction and the Trusted Access Model

The fragmented access landscape has created confusion among cybersecurity professionals. Organizations seeking to use either model must navigate two separate application processes with different criteria and verification requirements.

Government involvement has further complicated the picture. OpenAI has been consulting with the U.S. government on its TAC program, while the Trump administration was directly involved in Anthropic’s Mythos release decisions. The administration’s role included reviewing access criteria and approving the list of founding partners.

The “trusted access” model is becoming the industry standard for dual-use AI capabilities. Analysts expect similar programs to extend to other high-risk domains, including biosecurity and chemical synthesis. The Partnership on AI, an industry consortium, is developing guidelines for trusted access programs that could become de facto standards.

Practitioners have raised concerns about the “black market” risk. “If you build a model that can hack anything, and you only give it to ‘trusted’ people, you’re creating a massive incentive for bad actors to infiltrate those trusted groups,” said Marcus Rivera, a cybersecurity researcher at MIT’s Computer Science and Artificial Intelligence Laboratory. “The Mythos leak suggests this is not just theoretical.”

The competitive moat versus genuine safety debate remains unresolved. Critics argue that restricted access programs serve primarily as marketing tools, creating artificial scarcity and premium pricing. Proponents counter that any delay in malicious actors gaining access provides a critical window for defensive measures.

The Bottom Line

The Altman-Anthropic spat reveals a fundamental tension in the AI industry: both companies recognize the genuine danger of advanced cyber AI but use safety rhetoric as competitive positioning. Expect more public spats followed by rapid convergence as companies race to set access rules before regulators intervene. The real question is whether these restricted access programs can hold against leaks and open-weight competitors — the Mythos incident suggests the defensive advantage is temporary. Watch for government mandates on dual-use AI access as the next major development, with the European Union’s AI Office and the U.S. National Institute of Standards and Technology both expected to propose regulatory frameworks within the next six months.