
Anthropic Mythos Accessed by Unauthorized Discord Group Since Launch Day 

The group of unauthorized users gained access to Anthropic's Mythos Preview through a third-party vendor environment on the same day Anthropic publicly announced Project Glasswing.

Key Takeaways

  • They accessed Claude Mythos Preview on April 7 through a third-party contractor’s environment.
  • The group made an educated guess about the model’s online location based on Anthropic’s URL formatting conventions for other models.
  • Members operate through a private Discord channel focused on gathering intelligence on unreleased AI models.
  • Anthropic confirmed it is investigating the report but said there is no evidence its core systems were impacted.

A small group of unauthorized users gained access to Claude Mythos Preview, Anthropic’s most powerful AI model, which is restricted to roughly 40 vetted organizations under Project Glasswing, Bloomberg reported on April 21, 2026, citing documentation and a person familiar with the matter.

The company said it is investigating a report claiming unauthorized access to Claude Mythos through one of its third-party vendor environments.

The timing is striking, as the group gained access on April 7, the same day Anthropic publicly announced Project Glasswing and the restricted deployment of Mythos to select companies, including Apple, Microsoft, Google, and CrowdStrike. 

How the Group Got In and Who They Are

The access method, as Bloomberg reported, was not a sophisticated cyberattack against Anthropic’s core infrastructure. 

Instead, the group made an educated guess about the model’s online location based on knowledge of the URL and endpoint formatting patterns Anthropic used for other AI models.

According to TechCrunch, the group also tried additional strategies to gain entry, including leveraging credentials belonging to the person interviewed by Bloomberg for the story. 

That individual is employed at a third-party contractor that works with Anthropic and holds authorized access to the model through that relationship.

The group communicated through a private Discord channel dedicated to tracking information about unreleased AI models. As the outlet reported, members used bots to monitor GitHub repositories and other public-facing infrastructure for signals about models in development. 

The group substantiated its claims to Bloomberg with evidence of access, including screenshots and a live demonstration of Mythos running.

As TechCrunch confirmed, the group described its intent as curiosity rather than harm, with one source saying members are interested in playing around with new models, not wreaking havoc with them. The group has been using Mythos regularly since gaining access on April 7.

Why This Is a Problem Regardless of Intent

The stated motivation of the group does not reduce the severity of the security failure. 

Anthropic has publicly described Mythos as too dangerous for general release. As Reuters confirmed in its report, Project Glasswing was built on the premise that Mythos capabilities were too advanced and dual-use for access outside a tightly controlled network. 

These capabilities include autonomous discovery of zero-day vulnerabilities, chaining exploits across operating systems, and writing working cyberattack code in hours.

Anthropic briefed the Cybersecurity and Infrastructure Security Agency (CISA), the Commerce Department, and senior government officials about Mythos capabilities due to the risk of offensive use.

Internal documentation described Mythos as capable of exploiting vulnerabilities in ways that outpace defenders. Unauthorized access and continued use by a group, regardless of intent, represent a breach of the access architecture Project Glasswing was designed to enforce.

The Third-Party Vendor Problem

As Bloomberg notes, the access pathway ran through a third-party contractor’s environment, not through any of the 12 named Project Glasswing launch partners. 

This distinction matters technically. Anthropic’s restricted deployment system was designed to control which organizations could access Mythos through its API, but the breach did not involve compromising those organizations directly.

Instead, it exploited the access available through a contractor who sat at the boundary of that controlled environment, combined with technical knowledge of how Anthropic structures its other model endpoints.

A Pattern of Security Gaps 

The incident follows a month of operational security issues at Anthropic. In late March 2026, the company reportedly exposed 512,000 lines of Claude Code source code through a misconfigured npm package.

Days earlier, internal documents describing Mythos, including draft materials on its cybersecurity capabilities, were left in an unsecured public-facing cache, leading to early public awareness of the model.

Each incident points to a consistent weakness: the gap between intended access controls and actual enforcement across contractors, build pipelines, and storage systems.

Anthropic has confirmed its investigation is ongoing and said there is no evidence so far that core systems were impacted. No further details have been released.

Source: Anthropic’s Mythos Model Is Being Accessed by Unauthorized Users

Fawad Malik

Fawad Malik is a digital marketing professional with over 15 years of industry experience, specializing in SEO, SaaS, AI, content strategy, and online branding. He is the Founder and CEO of WebTech Solutions, a leading digital marketing agency committed to helping businesses grow through innovative digital strategies. Fawad shares insights on the latest trends, tools, guides, and best practices in digital marketing to help marketers and online entrepreneurs worldwide, and shares tech news, trends, and updates with the community built around NogenTech.
