AI is already making online crime easier. It could get much worse.

Cherepanov and Strichik were confident that their discovery, which they called PromptLock, represented a turning point in generative AI, demonstrating how the technology could be exploited to create highly resilient malware. When they published a blog post announcing that they had discovered the first example of AI-powered ransomware, they quickly drew widespread media attention around the world.

But the threat was not quite as dramatic as it first seemed. The day after the post was published, a team of researchers from New York University claimed responsibility, explaining that the malware was not, in fact, a full-fledged attack released into the wild but a research project, designed to prove that it was possible to automate every step of a ransomware campaign, which they said they had done.

PromptLock may have turned out to be an academic project, but real criminals are indeed using the latest AI tools. Just as software engineers use AI to help write code and check it for errors, hackers use these tools to cut the time and effort needed to mount an attack, lowering the barrier for less experienced attackers to try their hand.

Lorenzo Cavallaro, a professor of computer science at University College London, says that cyberattacks becoming more common and more effective over time is not a remote possibility but “a pure fact.”

Some in Silicon Valley warn that artificial intelligence is on the cusp of being able to carry out fully automated attacks. But most security researchers say this claim is exaggerated. “For some reason, everyone is just focused on the idea of malware, like superhuman AI hackers, which is completely ridiculous,” says Marcus Hutchins, principal threat researcher at the security firm Expel, who is famous in the security world for bringing down WannaCry, a massive global ransomware attack, in 2017.

Instead, experts say we should pay more attention to the more pressing risks posed by artificial intelligence, which is already accelerating fraud and increasing its scale. Criminals are increasingly exploiting the latest deepfake technology to impersonate people, deceive victims, and steal huge sums of money. These AI-enhanced cyberattacks are expected to become more frequent and more destructive, and we need to be prepared.

Spam and beyond

Attackers began adopting generative AI tools almost immediately after ChatGPT came on the scene at the end of 2022. These efforts began, as you might imagine, with spam generation, and lots of it. Last year, a report from Microsoft said that in the year leading up to April 2025, the company prevented $4 billion worth of fraud and fraudulent transactions, “many of which were likely assisted by AI content.”

At least half of spam emails are now generated using LLMs, according to estimates by researchers at Columbia University, the University of Chicago, and Barracuda Networks, who analyzed nearly 500,000 malicious messages collected before and after ChatGPT was released. They also found evidence that AI is increasingly being deployed in more complex schemes. They looked into targeted email attacks, in which an attacker impersonates a trusted person to trick someone inside an organization into handing over money or sensitive information. They found that by April 2025, at least 14% of these targeted email attacks were created using LLMs, up from 7.6% in April 2024.
