AI tools and chatbots are rocket fuel for massive increase in cyber crime

An AI-powered chatbot called “EscapeGPT” being offered for sale on a dark web forum.

Cyber criminals are weaponising advanced chatbots and AI technologies to amplify their attacks, marking a significant escalation in the digital arms race. Experts warn that these powerful AI tools are being deployed at a record scale and speed, creating a challenge for cyber security defenders worldwide.

Recent findings by Intel741, a cyber security company, highlight a marked increase in the use of AI among threat actors compared with previous years. From document fraud and business email compromise (BEC) to malware code and deepfakes, their new report shows how criminals are using AI to make their operations more efficient.

Intel741 identified a threat actor who allegedly developed an AI tool that can automatically manipulate invoices for BEC attacks, where the scammer uses email to trick someone into sending money or sharing confidential company information. This tool is said to save attackers time by automating the process of finding and changing bank account information in PDFs.
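
Defenders can counter this technique with equally simple automation. The Python sketch below shows how a finance team might cross-check the bank details extracted from an incoming PDF invoice against accounts verified out-of-band; the pypdf calls are real library functions, but the IBAN pattern, supplier allowlist, and file names are illustrative assumptions rather than any vendor's actual tooling.

```python
# Minimal sketch: flag invoices whose bank details do not match a vetted
# supplier record. Uses pypdf for text extraction; the IBAN regex and the
# allowlist below are illustrative, not production-grade validation.
import re
from pypdf import PdfReader

# Hypothetical allowlist of accounts previously verified out-of-band
# (e.g. by phoning the supplier on a known-good number).
VERIFIED_ACCOUNTS = {
    "Acme Supplies Ltd": {"GB29NWBK60161331926819"},
}

IBAN_PATTERN = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def extract_ibans(pdf_path: str) -> set[str]:
    """Pull every IBAN-shaped string out of the invoice text.
    (Real invoices often print IBANs with spaces; a production check
    would normalise those before matching.)"""
    reader = PdfReader(pdf_path)
    text = " ".join(page.extract_text() or "" for page in reader.pages)
    return set(IBAN_PATTERN.findall(text))

def check_invoice(pdf_path: str, supplier: str) -> list[str]:
    """Return any account numbers not on the verified list for this supplier."""
    known = VERIFIED_ACCOUNTS.get(supplier, set())
    return [iban for iban in extract_ibans(pdf_path) if iban not in known]

if __name__ == "__main__":
    mismatches = check_invoice("invoice.pdf", "Acme Supplies Ltd")
    if mismatches:
        print("WARNING: unverified bank details found:", mismatches)
```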

The company predicts a surge in phishing and BEC attacks due to AI’s ability to effortlessly craft phishing pages, social media content, and email copy.

Intel741 also said: “Perhaps the most observed impact AI has had on cybercrime has been an increase in scams, particularly those leveraging deepfake technology.”

Notorious actors like Nigeria’s “Yahoo Boys” exploit trust built with AI-generated personas for romance scams. Intel741 reports a significant drop in deepfake service prices compared to 2023, making them more accessible. Additionally, some threat actors claim to bypass security using AI, raising concerns about compromised know-your-customer (KYC) verification.

Image of alleged romance scammers in Nigeria from ScamDigger Forum.

AI is also being used to search through stolen and leaked data. Intel741 identified a threat actor that used Meta’s Llama AI to search through breached data. The cyber firm said this could be helpful for extortionists who want to quickly find the most valuable information to threaten their targets with.
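
Incident responders perform a similar triage, in reverse, on their own organisation's leaked data to gauge what an extortionist would find most valuable. The sketch below uses simple keyword weighting rather than an LLM; the keywords, weights, and directory layout are illustrative assumptions.

```python
# Minimal sketch: rank files from a confirmed leak of your own data by how
# sensitive their contents look, so responders know what to protect first.
# The keyword list and weights are illustrative assumptions.
from pathlib import Path

KEYWORD_WEIGHTS = {
    "password": 5,
    "passport": 4,
    "salary": 4,
    "confidential": 3,
    "invoice": 2,
}

def sensitivity_score(text: str) -> int:
    """Crude score: weighted count of sensitive keywords."""
    lower = text.lower()
    return sum(w * lower.count(k) for k, w in KEYWORD_WEIGHTS.items())

def triage(leak_dir: str, top_n: int = 10) -> list[tuple[int, str]]:
    """Return the top_n most sensitive-looking text files in the leaked set."""
    scored = []
    for path in Path(leak_dir).rglob("*.txt"):
        score = sensitivity_score(path.read_text(errors="ignore"))
        scored.append((score, str(path)))
    return sorted(scored, reverse=True)[:top_n]

if __name__ == "__main__":
    for score, path in triage("./leak_dump"):
        print(f"{score:5d}  {path}")
```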

Analysts at Intel741 also identified reports of a malicious AI service that can communicate in multiple languages, generate illegal information and malicious code, plan social-engineering attacks and other criminal activity, draft phishing emails, and create fake news articles. The service is sold on a subscription basis, from $90 per month to $900 for lifetime access.

Other companies have also seen a dramatic rise in AI-powered attacks. Research by SlashNext, a computer and network security company, shows that email phishing has skyrocketed since the launch of ChatGPT, “signalling a new era of cybercrime fuelled by generative AI.” SlashNext recorded a staggering 4,000 percent increase in such attacks, underlining the magnitude of the threat.

Gone are the days of clumsy, error-filled phishing emails. Today’s hackers use advanced AI to craft sophisticated, persuasive scams that mimic legitimate communications. These AI-driven attacks are more convincing and efficient, allowing cyber criminals to target thousands of victims simultaneously.
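
Because AI-written lures are now fluent, defenders lean on signals other than spelling and grammar, such as whether a sender's domain merely resembles a trusted one. The sketch below illustrates that check with a classic Levenshtein edit distance; the trusted-domain list and the distance threshold are illustrative assumptions.

```python
# Minimal sketch: flag sender domains that look deceptively close to a
# trusted domain (e.g. "examp1e.com" vs "example.com"). The trusted list
# and the threshold of 2 are illustrative assumptions.

TRUSTED_DOMAINS = {"example.com", "payroll.example.com"}

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, max_distance: int = 2) -> bool:
    """True if the domain is near, but not identical to, a trusted domain."""
    if sender_domain in TRUSTED_DOMAINS:
        return False
    return any(edit_distance(sender_domain, t) <= max_distance
               for t in TRUSTED_DOMAINS)

if __name__ == "__main__":
    print(is_lookalike("examp1e.com"))   # True: one character swapped
    print(is_lookalike("example.com"))   # False: exact trusted match
```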

Cyber criminals are also exploiting AI chatbots to write malicious code and scripts. They are “jailbreaking” popular chatbots, such as OpenAI’s ChatGPT and Microsoft’s Copilot, to bypass safety features.

Post on a Reddit forum relating to jailbreaking ChatGPT.

Some criminals have created their own specialised chatbots, such as WormGPT, trained specifically on malware data to assist in crafting malicious software and phishing attacks. Other “rogue” chatbots include FraudGPT, HackerGPT, and DarkBARD.

Example jailbreak prompt in ChatGPT. (Source: StationX)

Darktrace, a British cyber security company, state that email and social engineering attacks are just the “tip of the iceberg”.

The company highlighted that AI can be used across all steps of a cyber attack, stating: “It’s likely that AI will be applied to amplify attack effectiveness across every stage of the kill chain.”

Darktrace show how AI can be used at every stage of a cyber attack.

Darktrace explained: “Identifying exactly when and where AI is being applied may not ever be possible. Instead, defenders will need to focus on preparing for a world where threats are unique and are coming faster than ever before.”

In the coming months and years, Darktrace expect to see an increase in AI that can automatically build viruses and mount complex attacks, in deepfakes designed to elicit trust, in AI that bypasses security checks, and in AI used to gather information about potential victims online to sharpen targeting.

Val Dockrell is a London-based Senior Investigator and Open Source Intelligence (“OSINT”) specialist who has led in-depth investigations in multiple jurisdictions around the world. She also speaks several languages and is a member of the Fraud Women’s Network. Her X (formerly Twitter) handle is @ValDockrell.