Claude Mythos And The Future Of Cybersecurity Workforce Policy

Rahul Kumar
Intern - Policy & Advocacy, CyberPeace
PUBLISHED ON
Apr 29, 2026

Introduction

In April 2026, Anthropic revealed Claude Mythos, an artificial intelligence application capable of finding security flaws in computer networks more effectively than human beings. The company claimed to have found hundreds of thousands of serious vulnerabilities in established desktop operating systems and web browsers, some of which had gone undetected for at least 20 years. This news has greatly alarmed the leaders of financial organisations, banks, and governments throughout the world. It also points to a much larger problem: we do not have enough cybersecurity professionals trained to do this kind of work. Current estimates put the global shortfall of cybersecurity professionals at 4.8 million. As new AI technologies continue to emerge, we need new kinds of workforce training programs to help prepare these professionals.

What Is Claude Mythos?

Anthropic created Claude Mythos as part of its Claude AI system, which competes with ChatGPT and Google Gemini. In April 2026, expert testing revealed that Mythos excelled at identifying problems in legacy code and suggesting exploitation methods; it found one vulnerability that had existed for 27 years. Because of these advanced capabilities, Anthropic restricted access through “Project Glasswing,” granting it only to 12 major tech companies and 40 organizations managing critical software. Canadian Finance Minister François-Philippe Champagne called it an “unknown unknown.” Andrew Bailey of the Bank of England said regulators needed to examine what Mythos could mean for financial attacks, and the European Union raised similar concerns. India’s Finance Minister Nirmala Sitharaman warned at SEBI’s Foundation Day on April 25, 2026, that cybersecurity is the single most pressing challenge facing markets today: a single successful cyberattack on a major exchange or large broker could disrupt markets nationally and shake public confidence for years. She emphasized that AI tools make attacks faster, more adaptive, and autonomous, capable of discovering system vulnerabilities and manipulating code.

The Real Problem: Discovery Versus Fixing

Mythos highlights a fundamental mismatch in cybersecurity: finding a vulnerability does not guarantee it will be fixed. Organizations face real obstacles to patching. Many run obsolete technology, and updates can break dependent components. Organizations in developing nations often lack the financial resources for repairs or downtime, and critical systems like hospitals, banks, and power grids cannot go offline. Before Mythos, human hackers found vulnerabilities slowly; now AI tools find weaknesses faster than they can be fixed, creating a dangerous gap. Ciaran Martin, former head of the UK’s National Cyber Security Centre, explained that Mythos is “a really good hacker” against unprotected systems. Organizations following basic security practices (regular updates, strong passwords, network protection, trained staff) can likely defend against it. The UK AI Safety Institute concluded that Mythos poses the biggest threat to poorly defended systems, noting: “We cannot say for sure whether Mythos Preview would be able to attack well-defended systems.”
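The dynamic described above, where discovery outpaces remediation, can be sketched with a toy model. The rates below are hypothetical, chosen only to illustrate how a patching backlog behaves once AI-accelerated discovery exceeds an organization's fixing capacity:

```python
# Illustrative model (not from the article): how the backlog of
# known-but-unpatched vulnerabilities evolves month by month.
# All rates are hypothetical.

def backlog_over_time(found_per_month: int, fixed_per_month: int, months: int) -> list[int]:
    """Return the unpatched-vulnerability backlog at the end of each month."""
    backlog, history = 0, []
    for _ in range(months):
        backlog = max(0, backlog + found_per_month - fixed_per_month)
        history.append(backlog)
    return history

# Before AI tools: discovery roughly matches fixing capacity, backlog stays flat.
print(backlog_over_time(found_per_month=20, fixed_per_month=20, months=6))
# → [0, 0, 0, 0, 0, 0]

# With AI-accelerated discovery: the backlog grows every single month.
print(backlog_over_time(found_per_month=200, fixed_per_month=20, months=6))
# → [180, 360, 540, 720, 900, 1080]
```

The point of the sketch is that when the inflow rate exceeds the repair rate, the exposure window grows linearly without bound, regardless of how diligent the defenders are.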

The Workforce Challenge

The Mythos announcement exposes the real problem: we lack enough trained cybersecurity workers. There is a global shortage of 4.8 million workers against a current workforce of 5.5 million, and in AI security specifically, 34 percent of needed skills are missing. The harder problem is that AI is changing which skills are needed. Entry-level jobs monitoring security alerts, traditionally the starting point where young people learned basic skills before moving into advanced roles, are being automated. These positions are disappearing just as new AI security jobs emerge for which almost nobody has training, and organizations cannot hire fast enough because few people have these skills. This creates a vicious cycle: fewer entry-level positions mean fewer young people entering the field, which leaves even fewer workers with the needed skill set, deepens the shortage of qualified applicants, and increases organizations’ vulnerability. Without immediate action, this problem will only worsen.
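The scale of the shortfall is worth making explicit. A back-of-envelope check, assuming (our assumption, not the article's) that the 4.8 million shortage sits on top of the 5.5 million current workforce:

```python
# Rough arithmetic from the figures cited in the text.
current_workforce = 5_500_000   # people working in cybersecurity today
shortage = 4_800_000            # unfilled roles (global estimate)

required = current_workforce + shortage  # total roles the field needs
gap_share = shortage / required          # fraction of needed roles unfilled

print(f"Total roles needed: {required:,}")   # → Total roles needed: 10,300,000
print(f"Share unfilled: {gap_share:.0%}")    # → Share unfilled: 47%
```

On this reading, nearly half of the roles the field needs are currently unfilled, which is why automating away the entry-level pipeline is so consequential.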

Way Forward

  1. Clarify What Skills We Need

Governments and industry must work together to define what cybersecurity workers need in an AI world. Currently, aspiring professionals study networking, software, and vulnerability finding, but AI security training barely exists. Governments should work with universities and companies to define the needed skills: understanding what AI tools can and cannot do in security, and finding and fixing problems in AI systems themselves.

  2. Support Workers Who Lose Jobs To Automation

Workers who lose their jobs to automation will require government support. Too often, without an alternative, these skilled and trained workers leave the profession forever. Governments will need to fund retraining for displaced employees and support those changing careers to become cybersecurity professionals.

  3. Create Clear Rules For AI Security Tools

When companies create powerful security tools, governments must understand their capabilities and risks. Companies should be required to thoroughly test tools before release, clearly explain what tools can do and their limitations, and explain safety and misuse prevention plans. Governments should monitor actual tool usage, not simply trust voluntary compliance.

  4. Focus On Basic Security First

Most attacks do not need advanced AI tools. They succeed because organizations have not implemented basic security. Some never update software, train employees, use strong passwords, protect data properly, or test defenses. Governments should require organizations, especially those managing critical systems, to implement these basics.

Conclusion

Claude Mythos matters not because it is a weapon of destruction, but because it forces hard questions: Do we have enough skilled workers? Are our systems well protected? The answer is no. We face a shortage of 4.8 million cybersecurity workers and lack AI security training. Yet this is also an opportunity. Governments can invest in training, strengthen defenses, and create clear rules for AI security tools. Governments, organizations, and educational institutions must collaborate to create viable cybersecurity career pathways. We can respond with panic, or by building a trained and prepared workforce to meet today’s challenges. The time to act is now.

References 

  1. https://www.bbc.com/news/articles/crk1py1jgzko
  2. https://red.anthropic.com/2026/mythos-preview/ 
  3. https://www.anthropic.com/project/glasswing 
  4. https://www.aisi.gov.uk/blog/our-evaluation-of-claude-mythos-previews-cyber-capabilities 
  5. https://www.bsg.ox.ac.uk/people/ciaran-martin 
  6. https://www.isc2.org/Insights/2024/10/Cybersecurity-Workforce-INSIGHTS-October-2024 
  7. https://decrypt.co/364141/anthropic-claude-mythos-serious-threat-overhyped-ai-security-institute 
  8. https://www.businesstoday.in/latest/economy/story/fm-nirmala-sitharaman-wants-sebi-regulated-entities-to-remain-exceptionally-vigilant-heres-why-527437-2026-04-25 
  9. https://www.theweek.in/news/biz-tech/2026/04/25/sebi-38th-anniversary-cybersecurity-concerns.html 
