Anthropic said a Chinese state-linked group it tracks as GTG-1002 ran a largely automated espionage campaign using the company’s Claude Code model. The firm says it disrupted the operation in mid‑September 2025, after the model had been used against roughly 30 targets, including major technology companies, financial institutions, chemical manufacturers, and government agencies.
Anthropic framed the case as the first public example of large‑scale, AI‑driven intrusions. Security researchers and AI experts, however, responded with strong skepticism and called for more evidence.
Some critics said the account looked exaggerated, while others noted that Anthropic provided almost no technical details or indicators of compromise (IOCs) to support the claim. Anthropic did not respond to BleepingComputer’s requests for more information.
Some of the public responses were blunt. One expert suggested the report was “made up,” or at least that Anthropic had overstated the incident. Cybersecurity specialist Kevin Beaumont also weighed in: “I agree with Jeremy Kirk’s assessment of the Anthropic’s GenAI report. It’s odd. Their prior one was, too,” pointing to gaps in the report and a lack of hard evidence. Another researcher was equally critical: “This Anthropic thing is marketing guff. AI is a super boost but it’s not skynet, it doesn’t think, it’s not actually artificial intelligence (that’s a marketing thing people came up with).”
Observers also noted that existing defenses and detections likely already catch much automated tooling, so the real-world impact may be limited unless Anthropic shares concrete data. The rapid spread of skepticism shows how much the security community expects clear IOCs and technical detail before accepting a major claim that AI carried out large‑scale intrusions.
Anthropic stands by its account and argues the event shows new ways attackers can use AI tools. The debate highlights a wider need for more transparency and shared evidence when firms make serious public claims about cyberattacks.