Australia’s financial regulator issues urgent cybersecurity warning over Mythos AI

Australia’s financial services industry faces an urgent cybersecurity threat from advanced AI systems like Anthropic’s Mythos, prompting the country’s corporate regulator to issue a stark warning to banks and financial firms.

The Australian Securities and Investments Commission (ASIC) published a letter on Friday telling the financial sector that “cyber risk has entered a new era” and that companies must act now to strengthen their defenses. The warning comes as Mythos, an AI model with advanced coding capabilities, shows the potential to identify cybersecurity vulnerabilities at alarming speed.

“The clock is at a minute to midnight – if you aren’t on top of your cyber resilience already, the time to act and prepare is right now,” said Simone Constant, ASIC commissioner. She warned that frontier AI models create opportunities but also “materially increase risk, with the ability to expose vulnerabilities faster than many realize.”

The timing of the warning reflects growing concern about the cybersecurity implications of rapidly advancing AI. Mythos represents a new class of AI system with sophisticated coding abilities that bad actors could use to find and exploit security weaknesses in financial systems. That is a particularly dangerous prospect for Australia’s financial sector, which handles sensitive customer data and manages critical economic infrastructure.

Anthropic has released Claude Mythos Preview under Project Glasswing, a tightly restricted access program whose participants include major technology companies such as Amazon, Microsoft, Nvidia, and Apple. The limited rollout suggests that even Anthropic recognizes the technology’s potential risks. The company did not immediately respond to requests for comment on ASIC’s warning.

The regulatory alert highlights a growing gap between financial institutions and their overseers when it comes to AI adoption. Research published in April by the Cambridge Centre for Alternative Finance found that:

  • Financial institutions adopt AI at more than twice the rate of their regulators
  • Only two in 10 regulators report “advanced AI adoption”
  • Authorities significantly lag behind financial firms and lack data on emerging AI-related risks

This disparity creates a blind spot for regulators trying to monitor and combat AI-related threats. If supervisory bodies understand the technology less well than the institutions they oversee, they may struggle to identify emerging risks or respond to them effectively.

ASIC’s warning follows similar concerns from Australia’s banking regulator, which noted last month that the domestic financial services industry’s information security practices were struggling to keep pace with rapid advances in AI. The consistent messaging from multiple regulatory bodies signals that Australian authorities view AI-powered cybersecurity threats as a clear and present danger.

The financial sector is particularly vulnerable because it operates critical infrastructure while handling vast amounts of sensitive personal and financial data. A successful cyberattack enabled by advanced AI could cause widespread economic disruption and compromise the personal information of millions of customers.

Constant emphasized that financial firms should not delay action while the picture of AI threats remains incomplete. “Do not wait for perfect clarity to address the threat posed by new AI models. Instead, act now, and act with discipline, to strengthen the cyber resilience fundamentals that underpin your business,” she said.

The warning is part of a broader global conversation about how to manage the risks of increasingly powerful AI systems while still allowing innovation to proceed. As AI capabilities advance rapidly, regulators worldwide are grappling with how to protect critical infrastructure without stifling technological progress.