Company didn't notice its chatbot was being abused for (at least) 4 months.
Dan Goodin – 9 Apr 2025
Spammers used OpenAI to generate messages that were unique to each recipient, allowing them to bypass spam-detection filters and blast unwanted messages to more than 80,000 websites in four months, researchers said Wednesday.
The finding, documented in a post published by security firm SentinelOne’s SentinelLabs research arm, underscores the double-edged sword wielded by large language models. The same qualities that make them useful for benign tasks, namely the breadth of data available to them and their ability to generate content at scale, can be harnessed for malicious activity just as easily. OpenAI revoked the spammers’ account after receiving SentinelLabs’ disclosure, but the four months the activity went undetected shows how enforcement is often reactive rather than proactive.