Mystery Hacker Used AI To Automate 'Unprecedented’ Cybercrime Rampage



A hacker allegedly exploited Claude, the popular AI chatbot made by the fast-growing startup Anthropic, to orchestrate what has been described as an “unprecedented” cybercrime campaign targeting nearly 20 companies, according to a report released this week.

The report, published by Anthropic and obtained by NBC News, details how the hacker manipulated Claude to pinpoint companies vulnerable to cyberattacks. Claude then generated malicious code to pilfer sensitive data and cataloged information that could be used for extortion, even drafting the threatening communications sent to the targeted firms.

NBC News reports:

The stolen data included Social Security numbers, bank details and patients’ sensitive medical information. The hacker also took files related to sensitive defense information regulated by the U.S. State Department, known as International Traffic in Arms Regulations.

It’s not clear how many of the companies paid or how much money the hacker made, but the extortion demands ranged from around $75,000 to more than $500,000, the report said.

Jacob Klein, head of threat intelligence for Anthropic, said the campaign appeared to be the work of a hacker operating outside the U.S., but did not provide any additional details about the culprit.

“We have robust safeguards and multiple layers of defense for detecting this kind of misuse, but determined actors sometimes attempt to evade our systems through sophisticated techniques,” Klein said.

Anthropic’s findings come as an increasing number of malicious actors are leveraging AI to craft fraud that is more persuasive, scalable, and elusive than ever. A SoSafe Cybercrime Trends report reveals that 87% of global organizations encountered an AI-driven cyberattack over the past year, with the threat gaining momentum.

“AI is dramatically scaling the sophistication and personalization of cyberattacks,” said Andrew Rose, Chief Security Officer at SoSafe. “While organizations seem to be aware of the threat, our data shows businesses are not confident in their ability to detect and react to these attacks.”

Artificial intelligence is not only a tool for cybercriminals; it is also expanding the attack surface within organizations. As companies rush to adopt AI-driven tools, they may inadvertently expose themselves to new risks.

“Even the benevolent AI that organisations adopt for their own benefit can be abused by attackers to locate valuable information, key assets or bypass other controls,” Rose continued.

“Many firms create AI chatbots to provide their staff with assistance, but few have thought through the scenario of their chatbot becoming an accomplice in an attack by aiding the attacker to collect sensitive data, identify key individuals and gather useful corporate insights,” he added.

Tyler Durden
Wed, 08/27/2025 – 18:50
