Researchers have identified a number of routers capable of injecting harmful code and stealing cryptocurrency
Researchers from the University of California published a paper revealing that some third-party AI large language model routers pose serious security vulnerabilities capable of resulting in crypto theft. The paper identified four distinct attack vectors, including malicious code injection and credential extraction, with co-author Chaofan Shou warning on X that 26 LLM routers are actively injecting malicious tool calls and stealing user credentials.
LLM agents increasingly route requests through third-party API intermediaries that aggregate access to providers such as OpenAI, Anthropic, and Google. However, these routers terminate TLS connections and retain full plaintext access to every message passing through them.
The researchers tested 28 paid routers and 400 free routers gathered from public communities, and their findings were alarming. Nine routers actively injected malicious code, two deployed adaptive evasion triggers, 17 accessed researcher-owned Amazon Web Services credentials, and one drained Ether from a researcher-controlled wallet. Prefunded decoy wallets were used in the experiment, with the total value lost reported as below $50.
The researchers also identified a feature present in many AI agent frameworks called “YOLO mode,” in which the agent executes commands automatically without seeking user confirmation. They warned that previously legitimate routers can be silently weaponized without the operator’s knowledge.
Their recommended short-term mitigation is that developers never allow private keys or seed phrases to pass through an AI agent session, while the long-term solution requires AI companies to cryptographically sign their responses so that the instructions an agent executes can be mathematically verified as originating from the authentic model.