Anthropic is getting closer to a federal rollout after the U.S. government began preparing to grant major agencies access to a version of its Mythos model.
Bloomberg reported Thursday that the plan is now underway, even as officials inside and outside Washington worry that the tool could raise cybersecurity risks if it is not tightly controlled.
Cryptopolitan previously reported that Mythos is limited to select groups and intended for defensive cyber work, not broad commercial use. Gregory Barbaccia, federal chief information officer at the White House Office of Management and Budget, told Cabinet department officials in a Tuesday email that OMB was setting up protections so agencies could begin using Mythos.
Reportedly, the subject line of the message was “Mythos Model Access,” and it read:
“We’re working closely with model providers, other industry partners, and the intelligence community to ensure the appropriate guardrails and safeguards are in place before potentially releasing a modified version of the model to agencies.”
White House opens the door as Anthropic Mythos heads toward agency use
That message landed while finance ministers, central bankers, and regulators were in Washington for the IMF and World Bank spring meetings. There, senior financial officials warned that advanced AI from U.S. tech firms could expose weak spots in lenders’ cyber defenses and put the wider banking system under pressure.
Andrew Bailey, governor of the Bank of England and chair of the Financial Stability Board, said: “It is a very serious challenge for all of us. It reminds us how fast the AI world moves.”
Bailey then said regulators around the world would need to quickly assess the cyber risk that Anthropic’s Claude Mythos Preview could pose to the financial system.
Dan Katz, first deputy managing director of the IMF, said: “The evolution of digital technology is posing immense risks from a cybersecurity perspective. This is really going to be absolutely essential on the international agenda for the next few months.”
Christine Lagarde, president of the European Central Bank, also pointed to Anthropic and Mythos as a case where a useful tool can become dangerous in the wrong hands.
“The development we’ve seen with Anthropic and Mythos is a good example of a responsible company that is suddenly thinking, ‘ah, that could be really good,’ but if it falls in the wrong hands, it could be really bad.”
Global officials press for rules while Anthropic limits Opus 4.7
Some officials called for a coordinated international response after Anthropic said earlier this month that Mythos had found “thousands of high-severity vulnerabilities, including some in every major operating system and web browser.”
Anthropic also warned that capabilities like these may spread quickly and not remain in safe hands. The company said it would “not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely.” It added: “The fallout for economies, public safety, and national security could be severe.”
Lagarde later told reporters that officials want a framework they can work within, but no real governance system is ready yet. She said: “Everybody is keen to have a framework within which to operate. I don’t think there is a governance framework that’s actually meant to mind those things. We need to work on that.”
Pip White, Anthropic’s head for the UK, Ireland, and northern Europe, said interest from executives picked up quickly after the news around the model. In an interview, White said: “We are putting our own safeguards and our own limitations around this product because we know how powerful it can be.”
On Thursday, Anthropic also released Opus 4.7, a new model designed to perform better on software engineering tasks. The company said Opus 4.7 can handle some coding work that used to require closer supervision, follow instructions better than older models, and inspect higher-resolution images to catch details in dense charts and complex pictures.
Even so, Anthropic said Opus 4.7 is less capable than Mythos, including in cyber use cases. During training, the company said it tested ways to “differentially reduce” the model’s cyber ability.