Google’s Threat Intelligence Group says it identified what it believes is the first-ever case of hackers using artificial intelligence to develop a zero-day exploit.
The group said in a Tuesday blog post that it had “observed prominent cyber crime threat actors partnering to plan a mass vulnerability exploitation operation,” using a zero-day vulnerability allowing them to bypass the two-factor authentication of an unnamed “popular open-source, web-based system administration tool.”
The exploit required valid user credentials, but bypassed the second authentication factor, a safeguard that is often also used to secure crypto accounts and wallets.
AI is increasingly used both in cybersecurity defense and by crypto hackers seeking to carry out exploits or scams. AI company Anthropic claimed last month that a recent version of its Claude model found thousands of software vulnerabilities across major systems.
Google said it had “high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” as the script for the exploit included a hallucination and a format “highly characteristic” of an AI model’s training data.
The report did not specify the threat actor, but Google said that China and North Korea have “demonstrated significant interest in capitalizing on AI for vulnerability discovery.”
LLMs excel at high-level flaw identification
Google said the vulnerability did not stem from “common implementation errors” like memory corruption, but a “high-level semantic logic flaw” where the developer hardcoded a trust assumption.
This implies the attackers used a frontier large language model (LLM), as such models excel at identifying high-level flaws and “hardcoded static anomalies,” Google added.
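Google did not name the affected tool or publish the flawed code, but the class of bug it describes can be sketched as follows. Every identifier and value below is hypothetical, invented purely to illustrate what a hardcoded trust assumption in a two-factor check looks like:

```python
# Hypothetical sketch of a "high-level semantic logic flaw": the password
# check is always enforced, but a hardcoded trust assumption lets one
# request source skip the second factor. Names and data are illustrative.

# Minimal in-memory user store so the sketch is self-contained.
_USERS = {"alice": {"password": "hunter2", "otp": "123456"}}

def check_password(username, password):
    user = _USERS.get(username)
    return user is not None and user["password"] == password

def check_otp(username, otp_code):
    user = _USERS.get(username)
    return user is not None and otp_code == user["otp"]

def verify_login(username, password, otp_code, request_source):
    # First factor: valid credentials are always required.
    if not check_password(username, password):
        return False

    # BUG: hardcoded trust assumption. Requests claiming to come from the
    # "internal-console" skip the one-time-password check entirely, so an
    # attacker who can spoof request_source bypasses 2FA with stolen
    # credentials alone.
    if request_source == "internal-console":
        return True

    # Second factor for everyone else.
    return check_otp(username, otp_code)
```

Note how the bug matches the reported behavior: valid credentials are still required, but the second factor is never verified on the trusted path. Because nothing is memory-unsafe here, fuzzers and memory sanitizers would not flag it, whereas a model reading the source can spot the anomalous hardcoded branch.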
Related: AI agents like OpenClaw could drain crypto wallets via ‘malicious skills’: CertiK
Several malware families, such as PROMPTFLUX, HONESTCUE and CANFAIL also use LLMs for defense evasion, generating decoy or filler code to camouflage malicious logic, Google said.
LLM vulnerability discovery capabilities compared with other discovery mechanisms. Source: Google
Industrialized LLM abuse is increasing
LLM access abuse is becoming industrialized as threat actors have built automated pipelines to cycle through premium AI accounts, pool API keys, and bypass safety guardrails at scale — effectively running adversarial operations subsidized by trial account abuse.
“By leveraging anti-detect browsers and account-pooling services, actors are attempting to maintain high-volume, anonymized access to premium LLM tiers, effectively industrializing their adversarial workflows,” the report stated.
Google concluded that as organizations continue integrating LLMs into production environments, the AI software ecosystem has emerged as a primary target for exploitation.
It observed adversaries increasingly targeting the integrated components that grant AI systems their utility, such as autonomous skills and “third-party data connectors.” However, threat actors have yet to achieve breakthrough capabilities for bypassing the core security logic of frontier models, Google stated.
Magazine: How AI just dramatically sped up the quantum risk for Bitcoin