Cybercrime More Accessible Thanks to AI
Cybercrime is now more accessible to the less tech-savvy, thanks to AI. Researchers at Palo Alto Networks' Unit 42 found a pair of new Large Language Models (LLMs) gaining traction with cybercriminals.
The Unit 42 researchers wrote that these malicious LLMs — models built or adapted specifically for offensive purposes — distinguish themselves from their mainstream counterparts by intentionally removing ethical constraints and safety filters during foundational training or fine-tuning.
How Cybercrime Became Easier
Researchers found that WORMGPT and KawaiiGPT can generate ransomware code to deploy against unsuspecting victims. PC Gamer's Jeremy Laird described WORMGPT as "ChatGPT's evil twin." He wrote that WORMGPT,
"was capable of generating a PowerShell script that could hunt down specific file types and encrypt data using the AES-256 algorithm. It could, for instance, encrypt all PDF files on a target Windows machine."
Laird also wrote,
"WORMGPT in its effort to be as helpful as possible even added an option to extract user data via the anonymising Tor network." The LLM was also capable of writing scripts that provide "credible linguistic manipulation for BEC and phishing attacks." BEC is an acronym for Business Email Compromise, a type of scam in which attackers impersonate a trusted colleague or vendor over email.
KawaiiGPT is capable of generating spear-phishing messages with domain spoofing. The malicious LLM also produces Python scripts for lateral movement that use the paramiko SSH library to connect to a host and execute commands, search for and extract target files, generate ransom notes with customisable payment instructions, and more. SSH is an acronym for Secure Shell, a protocol that allows authorized users to open remote shells on other computers.
How Cybercrime Can Become a National Security Risk
The researchers also reported,
“Traditional North Korean IT worker operations relied on highly skilled individuals recruited and trained from a young age within North Korea. Our investigation reveals a fundamental shift: AI has become the primary enabler allowing operators with limited technical skills to successfully infiltrate and maintain positions at Western technology companies.”
North Koreans unable to write their own code or communicate in English use AI to pass interviews and work at tech companies, earning millions of dollars that fund North Korea's weapons programs, Unit 42 reported.
Jeremy Laird wrote,
“Apparently, each LLM has a dedicated Telegram channel where tips and tricks are shared among the cybercriminal community, leading Unit 42 to conclude, ‘Analysis of these two models confirms that attackers are actively using malicious LLMs in the threat landscape.’”
The Unit 42 report concluded,
“The future of cybersecurity and AI is not about blocking specific tools, but about building systems that are resilient to the scale and speed of AI-generated malice. The ability to quickly generate a full attack chain, from a highly persuasive ransom note to working exfiltration code, is the threat we now face.”
