ChainStreet
WHERE CODE MEETS CAPITAL
Loading prices…
Powered by CoinGecko
AI

Grok Wallet Drained of $174K in Sophisticated Prompt Injection Attack via Gifted NFT

Threat actors executed a prompt injection attack on the Base blockchain, draining $174,000 from an AI-controlled wallet by exploiting automated permission keys.

Grok Wallet Drained of $174K in Sophisticated Prompt Injection Attack via Gifted NFT

An automated AI wallet associated with Grok, the artificial intelligence model developed by xAI, suffered a security compromise on the Base blockchain. The incident, which occurred on May 4, resulted in the loss of approximately $174,000 in DRB tokens. The attack relied on a combination of a gifted non-fungible token (NFT) and a prompt injection technique that bypassed the agent’s behavioral safeguards.

Key Takeaways
  • Grok AI wallet on Base loses $174,000 in DRB tokens after a sophisticated prompt injection attack bypasses internal behavioral safeguards.
  • Attackers exploit automated permission keys via a gifted NFT to liquidate 3 billion tokens before returning 80 percent of stolen value.
  • The incident exposes critical vulnerabilities in autonomous agent finance where Morse code obfuscation triggers unauthorized on-chain transactions without human-in-the-loop oversight.
Listen to this article
READY

Exploit Mechanics and Permission Escalation

The Grok wallet previously interacted with Bankrbot, a decentralized finance agent hosted on the Base network. This history left the wallet with a positive token balance and an active connection to the finance platform. An attacker gifted a “Bankr Club Membership” NFT to the wallet address to initiate the compromise.

The NFT acted as a permission key, unlocking advanced tool capabilities for the agent within the Bankrbot ecosystem. These capabilities included the authority to sign and execute financial transfers autonomously. The attacker subsequently sent a message to the agent containing instructions encoded in Morse code. The AI decoded the message, publicly engaged with the attacker’s account, and interpreted the encoded output as a legitimate command to execute a transfer.

Transaction Execution and Asset Recovery

The Bankrbot system processed the request and transferred 3 billion DRB tokens to an address controlled by the attacker. Perpetrators quickly bridged the assets, converted the tokens into USDC, and liquidated a portion of the stolen funds.

On-chain analysis confirmed that the attacker returned roughly 80 to 88 of the stolen value to the original Grok wallet in ETH and USDC shortly after the transaction. The threat actor deleted the social media account linked to the exploit immediately following the partial return of funds.

Advertisement · Press Release

Genuine News Deserves Honest Attention.

High-conviction projects require an intelligent audience. Connect with readers who value sharp reporting.

👉 Submit Your PR

Systemic Risks in Agentic Finance

Security researchers identified the incident as a critical failure in the automated permission model. The attack succeeded due to three specific architectural weaknesses:

  • The wallet maintained public visibility while holding active financial integration.
  • The NFT utility expanded the agent’s capabilities without sufficient human-in-the-loop oversight.
  • The prompt injection obfuscation successfully bypassed the safety filters governing agent behavior.

Security audits noted that the attacker deleted the original prompt before forensic teams captured the full execution path, complicating the long-term attribution of the threat.

Chain Street’s Take

The exploit forces a hard look at the blurry line between “helpful AI” and “autonomous agent with liquid assets.” The Grok wallet acted as a visible target because it accumulated tokens through previous, legitimate interactions. The NFT gift effectively escalated permissions, while the Morse code injection weaponized the agent’s own helpfulness.

The partial return of funds and the immediate account deletion suggest the attacker prioritized testing systemic boundaries over permanent theft. However, the event exposes the core risk of agentic finance: once an AI gains direct on-chain execution rights, even minor errors in intent parsing result in real financial loss. Until developers implement stricter sandboxing, multi-signature controls, or rigid tool-calling limits, high-profile wallets remain vulnerable targets.

0views

CHAIN STREET INTELLIGENCE

Activate Intelligence Layer

Institutional-grade structural analysis for this article.

FAQ

Frequently Asked Questions

01

What is the Grok prompt injection exploit?

The Grok prompt injection exploit is a security breach where attackers use hidden instructions to manipulate an AI agent's financial actions. This specific attack on Base used Morse code to trick the agent into authorizing a 3 billion token transfer. Obfuscated commands bypass the natural language filters designed to prevent unauthorized asset movement.
02

Why does this matter for the AI agent economy?

This breach demonstrates that autonomous agents with liquid assets are high-priority targets for sophisticated social engineering. Grok's vulnerability proves that active financial integrations on blockchains like Base require stricter permission models. Developers must implement multi-signature controls to prevent agents from executing large transfers without human verification.
03

How did Bankrbot facilitate the token drain?

The Bankrbot platform processed the transfer after a gifted NFT escalated the Grok agent’s administrative tool permissions. On May 4, the attacker sent a Morse code message that the AI decoded as a legitimate transaction command. The system executed the 3 billion DRB token transfer immediately before the attacker liquidated funds into USDC.
04

What are the risks of autonomous on-chain wallets?

The primary risk involves the lack of human-in-the-loop oversight for high-value transactions initiated by AI models. Grok's failure to parse intent correctly allowed an external entity to weaponize the agent's decoded output. Security researchers argue that public visibility combined with active tool-calling capabilities creates an unmanaged attack surface.
05

What is the long-term outlook for agentic security?

The industry is shifting toward more rigid sandboxing and limited tool-calling permissions for AI models holding on-chain assets. Future agentic finance protocols will likely mandate hardware-certified execution environments to mitigate prompt injection risks. This event forces xAI and other developers to reconsider the safety of giving AI direct execution rights.

You Might Also Like

CHAINSTREET
🛡
Alex Reeve

Alex Reeve is a contributing writer for ChainStreet.io. Her articles provide timely insights and analysis across these interconnected industries, including regulatory updates, market trends, token economics, institutional developments, platform innovations, stablecoins, meme coins, policy shifts, and the latest advancements in AI, applications, tools, models, and their broader implications for technology and markets.

The views and opinions expressed by Alex in this article are her own and do not necessarily reflect the official position of ChainStreet.io, its management, editors, or affiliates. This content is provided for informational and educational purposes only and does not constitute financial, investment, legal, or tax advice. Readers should conduct their own research and consult qualified professionals before making any decisions related to digital assets, cryptocurrencies, or financial matters. ChainStreet.io and its contributors are not responsible for any losses incurred from reliance on this information.