AI programs can help developers automate certain tasks, reducing the time needed to piece together software. But those same benefits also apply to hackers looking for cheap and quick ways to break into enterprise, government, and private computer networks to steal data, spy on rival nations, or extract ransom payments from everyday people.
“I think we are entering into a new era of security,” Tom Goldstein, a professor of computer science and director of the Center for Machine Learning at the University of Maryland, told Yahoo Finance.
“We’re in a really turbulent time where things are changing really rapidly,” he added. “And in the short term, it certainly seems to be the case that these new AI-based tools are a bigger advantage to the attackers than they are to the defenders.”
In November, Anthropic (ANTH.PVT) reported that a Chinese state-sponsored group used its Claude coding tool to orchestrate a large-scale cyber espionage campaign.
According to the company, the attackers “manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases.” Those targets included “large tech companies, financial institutions, chemical manufacturing companies, and government agencies,” the company said.
Anthropic CEO Dario Amodei at the Code with Claude developer conference on May 22, 2025, in San Francisco. (Don Feria/AP Content Services for Anthropic)
Earlier this month, OpenAI (OPAI.PVT) released its own report indicating that it believes its coding platform poses a “high” level of cybersecurity risk.
Cybersecurity professionals are using the same technology to find potential vulnerabilities in software and patch them to protect organizations against attacks.
Still, just as hackers held an edge over defenders before the arrival of generative AI, for now at least, the bad guys continue to hold a lead over the good guys.
Cybersecurity is part cat-and-mouse game, part numbers game. More sophisticated attackers are constantly probing software for vulnerabilities — a bad line of code here, a small error there — to crack into a company's or government's network.
Others launch massive phishing attacks — emails or text messages sent to thousands of people, in the hope that one person will click a link or download a file so the hacker can take over a victim’s computer and demand payment to unlock it or steal their banking information.
The problem for cybersecurity workers and everyday people, though, is that generative AI is supercharging hackers’ abilities.
“[Enterprises are] struggling to deal with the scale and level of attacks that they have, regardless of AI, and now they’re about to struggle a lot more, because you’re going to see a lot more vulnerabilities discovered that can lead to initial compromise of a network,” explained Chris Thompson, distinguished engineer at IBM (IBM).
IBM’s Chris Thompson says hackers are leveraging generative AI to launch faster attacks. (AP Photo/Richard Drew, File)
Thompson, who founded IBM’s X-Force Adversary Services team and co-founded Offensive AI Con, said the speed of attacks will begin to increase significantly as hackers automate certain tasks, just as defenders automate their own programs to defend against attacks.
“You’re now seeing a mature LLM [large language model] being able to conduct those attacks at speed against … 1,000 organizations at the same time,” Thompson said. “You’re only limited by your budget … and your … risk tolerance of getting caught.”
Cybercriminals who aim to steal cash from the average person through phishing campaigns are also getting AI upgrades. LLMs help them write more convincing emails without the telltale spelling or grammatical errors found in older phishing emails and texts.
“Every email has a 10% to 12% clickthrough rate, which actually is quite high when you really think about it,” CrowdStrike (CRWD) president Michael Sentonas said.
“A lot of the reports today say when attackers use LLMs to write the same email to create the same phish, the clickthrough rate jumps to at least 54%,” he added. “I’ve seen people say as high as 60%, so you’re kind of talking about four times higher.”
Sentonas said North Korean hackers have taken advantage of generative AI to create realistic résumés for job postings overseas with the hopes of both securing jobs that can funnel money to the regime and expanding the country’s industrial espionage capabilities.
LLMs can also improve the efficiency of cybersecurity workers, helping them find more vulnerabilities in less time, whether to patch the affected software or to use those flaws in offensive hacking operations.
Generative AI can also give junior cybersecurity professionals a boost on the job.
“Somebody who’s a junior today has capabilities like a very experienced cyber defender,” Sentonas said. “We can get that same level of benefit to the defender who suddenly has the ability to triage, to respond, [and] has a playbook on how to do incident response.”
And so the game continues.
Email Daniel Howley at dhowley@yahoofinance.com. Follow him on Twitter at @DanielHowley.