By Phoebe Seers
LONDON, April 28 (Reuters) – The ability of central banks and financial regulators to monitor and combat the risks posed by powerful artificial intelligence models such as Anthropic’s Mythos has been called into question after a survey found authorities significantly lag financial firms in AI adoption and lack data on emerging harms.
Financial institutions are adopting AI at more than twice the rate of their supervisors, with just two in 10 regulators reporting “advanced AI adoption”, research published on Tuesday by the Cambridge Centre for Alternative Finance showed. Only 24% of authorities surveyed collect data on industry AI adoption, while 43% have no plans to start within the next two years, the report found.
“This empirical blind spot may undermine the prevailing optimism [on AI]. Authorities cannot successfully harness or oversee AI if they are navigating its adoption and risks without hard data,” the report said.
The research, prepared alongside the Bank for International Settlements, the International Monetary Fund and other multilateral institutions, involved surveying 350 traditional financial institutions and fintechs, more than 140 AI vendors, and 130 central banks and financial authorities spanning 151 countries.
Regulators and global standard‑setting bodies have stepped up warnings about the risks posed by the rollout of AI across the financial sector. Earlier in April, Anthropic released Mythos, viewed by cybersecurity experts as posing significant challenges to the banking industry and its legacy technology systems.
Regulators across the globe have engaged with banks over how prepared their legacy systems are for emerging frontier AI models.
The report highlights Mythos as an example of next‑generation systems that could soon be capable of exploiting software vulnerabilities at scale, potentially limiting the effectiveness of existing human governance and oversight mechanisms.
“Regulators generally maintain the principle that financial firms should remain accountable for harms, including cyberattacks, whether AI is built in-house or supplied by third parties, but that position becomes harder to apply in the context of more autonomous systems that are provided and managed by third-party vendors,” the authors wrote.
Moreover, traditional approaches to oversight may no longer be sufficient. The report says regulators must themselves adopt agentic AI, systems capable of taking actions without human oversight, to match the technology deployed by the firms they supervise.
(Reporting by Phoebe Seers; Editing by Tommy Reggiori Wilkes/Keith Weir)