‘AI Godfather’ Yoshua Bengio calls for urgent global collaboration on Anthropic’s Mythos cybersecurity tools

(SeaPRwire) – Yoshua Bengio, a renowned computer scientist often called one of the “godfathers of AI” for his pioneering work in deep learning, has spent years highlighting the dangers of the technology he helped build. He now argues that the emergence of models like Anthropic’s Mythos underscores the critical need for international institutions to collaborate on managing AI’s risks.

Claude Mythos, the latest model from Anthropic, is described as a major breakthrough in cybersecurity, capable of identifying thousands of “zero-day” vulnerabilities. These are software flaws unknown to the original developers that could allow hackers to circumvent security and steal sensitive information.

However, Anthropic has noted that these capabilities are dual-use, potentially facilitating complex cyberattacks on global infrastructure. Consequently, the company is limiting the initial release to a select group of firms to help them strengthen essential systems first.

The first organizations chosen by Anthropic to use Mythos are all U.S.-based technology companies that maintain much of the world’s digital infrastructure. Additionally, Anthropic has briefed the U.S. government and is beginning to provide access to the model for several federal departments and agencies.

While some have commended Anthropic’s cautious approach, the restricted rollout has sparked concerns about the concentration of power within a single American corporation. Because Anthropic alone decided who received access, many excluded governments and businesses are now seeking access to the model to protect their own infrastructure. This situation has intensified calls for AI governance to be managed on a broader, international scale.

“It is problematic for private individuals to be the sole arbiters of infrastructure security for the rest of the world,” Bengio stated in an interview. “We must consider the implications for all the nations and companies that were left out.”

Bengio, a recipient of the Turing Award—often referred to as the Nobel Prize of computing—is not the only one raising these concerns. The Bank of England, for instance, has requested access to Mythos for U.K. financial institutions, noting that Anthropic promised access would begin shortly. Furthermore, discussions at the IMF and World Bank spring meetings in Washington were sidetracked by the implications of Mythos. Policymakers expressed concern that such systems could reveal widespread vulnerabilities in the global financial network, while European regulators and executives noted they have yet to fully grasp the extent of the flaws the model has found.

For those outside the United States, the Mythos situation is likely to increase the demand for “AI sovereignty”—the pursuit of domestic AI infrastructure independent of foreign entities. Many nations are becoming increasingly hesitant to rely on American technology, particularly as the U.S. government is viewed as a less predictable partner that may use supply-chain control to meet policy goals. There is also a growing unease about being dependent on a small number of U.S. tech executives.

At the same time, the U.S. government is moving to guarantee its own access to the model. According to a White House Office of Management and Budget memo, federal agencies—including the Departments of Defense, Treasury, Commerce, and State—are preparing to implement a version of Mythos, with further instructions expected soon.

This initiative proceeds despite a legal conflict between Anthropic and the Pentagon, which previously labeled the company a supply-chain risk due to a dispute over AI safety protocols. Anthropic is currently fighting that label in court. Reports indicate that Anthropic CEO Dario Amodei is set to meet with White House Chief of Staff Susie Wiles this Friday to attempt to settle the matter.

Bengio is calling for significantly more international cooperation to manage these new cybersecurity threats, including the establishment of a regulatory body similar to the FDA to monitor advanced AI development. He suggested that governments, especially the U.S., should mandate that AI developers ensure their products do not harm the infrastructure of other nations, arguing that such high-stakes oversight should not be left to the private sector.

“There must be an agency specifically tasked with overseeing these types of decisions,” he said. “As AI grows more powerful, the need for international commitment becomes urgent. These risks will not be confined to U.S. citizens or infrastructure; therefore, this must be a global effort.”

The open-source question

Bengio also noted that any effective global response must involve an agreement with China, despite the intense AI competition between the U.S. and China.

While Bengio suggested that top Chinese AI models currently trail U.S. versions by about six months, he emphasized that this gap does not lessen the inherent risks.

Furthermore, China is making significant strides in open-source AI—where a model’s code and weights are publicly available—which Bengio warned could pose a greater threat than proprietary systems like Mythos.

Unlike closed models, open-source systems can be downloaded and altered by anyone. Bengio pointed out that this allows users to remove safety guardrails, such as filters meant to prevent malicious activity, leaving the technology open to misuse.

As AI becomes more adept at finding and exploiting software bugs, he warned that open-source releases could provide malicious actors with powerful cyber-offensive tools.

This concern extends beyond AI to the tradition of open-source software itself, which has long been a cornerstone of internet security.

For years, open-source software was considered more secure because its public nature allowed many developers to find and fix flaws. However, advanced AI can now scan that public code at a massive scale to find weaknesses much faster than people can, potentially turning open infrastructure into a target. While Bengio remains a supporter of open-source for its transparency and democratic value, he warned that in an age of AI-driven cyberattacks, it could also become a major liability.

This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.


SeaPRwire provides global press release distribution services for companies and organizations, covering more than 6,500 media outlets, 86,000 editors and journalists, and over 3.5 million end-user desktop and mobile apps. SeaPRwire supports multilingual press release distribution in English, Japanese, German, Korean, French, Russian, Indonesian, Malay, Vietnamese, Chinese, and more.