February 10, 2025
News analysis: The big AI platforms are emerging as frontline early warning systems, detecting nation-state hackers at the outset of their campaigns. Can this help save the threat intel industry?

The cyber threat intelligence business has struggled to become a major market category, hampered by stale data, limited information sharing, and the high costs of traditional detection and response tools.
But artificial intelligence (AI) may be poised to change that. Tech giants like Microsoft, Google, and OpenAI are quietly transforming into early warning systems, using AI to track malicious actors — sometimes down to the individual level — before they launch malware campaigns.
By monitoring attempts to abuse their platforms, these companies are uncovering fresh, actionable intelligence in real time, offering a glimpse of how AI-driven platforms could finally deliver the timely, cost-effective threat detection the cybersecurity industry has been chasing for years.
Just recently, the Google Threat Intelligence Group (GTIG) shared data on how it caught nation-state hackers linked to Iran, China, North Korea, and Russia attempting to misuse the company's Gemini gen-AI tool for activities ranging from reconnaissance on U.S. defense networks to drafting malicious scripts aimed at bypassing corporate security measures.
According to Google, Iranian government-backed hackers were among the heaviest users of Gemini, probing vulnerabilities and exploring phishing techniques designed to compromise government and defense entities. Chinese groups, including multiple PRC-backed APTs, similarly leveraged the AI model for scripting tasks, Active Directory maneuvers, and stealthy lateral movement within target networks. North Korean operatives used it to explore free hosting providers, craft malicious code, and draft cover letters and job proposals to embed clandestine IT workers inside Western companies.
Google boasted that, by watching queries to Gemini, it can anticipate an attacker's next steps, an advantage that effectively turns the platform into an early warning system for cyber campaigns. It also puts AI providers in an unfamiliar role: policing who gets to use their technology and for what ends, with the legal and ethical questions still unsettled.
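Google has not published the technical details of how it spots this kind of abuse, but the basic idea is easy to illustrate. The Python sketch below is a hypothetical, heavily simplified example: it tags incoming prompts against invented tradecraft categories and flags accounts whose activity spans several of them. The category names, keywords, and thresholds are all assumptions for illustration, not anything Google or any other provider has described.

from dataclasses import dataclass, field

# Invented tradecraft categories and keyword lists, purely for illustration.
TRADECRAFT_KEYWORDS = {
    "reconnaissance": ["open ports", "defense contractor", "subdomain enumeration"],
    "phishing": ["spear-phishing", "credential harvesting", "lure email"],
    "malware_scripting": ["bypass edr", "obfuscate powershell", "disable logging"],
}

@dataclass
class PromptEvent:
    account_id: str
    text: str
    tags: list[str] = field(default_factory=list)

def tag_prompt(event: PromptEvent) -> PromptEvent:
    """Attach tradecraft tags to a prompt based on simple keyword matches."""
    lowered = event.text.lower()
    event.tags = [
        category
        for category, keywords in TRADECRAFT_KEYWORDS.items()
        if any(kw in lowered for kw in keywords)
    ]
    return event

def flag_accounts(events: list[PromptEvent], threshold: int = 2) -> set[str]:
    """Flag accounts whose prompts touch multiple tradecraft categories."""
    hits: dict[str, set[str]] = {}
    for event in map(tag_prompt, events):
        hits.setdefault(event.account_id, set()).update(event.tags)
    return {acct for acct, cats in hits.items() if len(cats) >= threshold}

if __name__ == "__main__":
    sample = [
        PromptEvent("acct-42", "How do I obfuscate PowerShell to bypass EDR?"),
        PromptEvent("acct-42", "Draft a lure email for credential harvesting."),
        PromptEvent("acct-77", "Explain subdomain enumeration for my own domain."),
    ]
    print(flag_accounts(sample))  # {'acct-42'}

Even this toy version shows why the intelligence is valuable: the signal arrives while the attacker is still preparing, not after a victim reports an intrusion. Production systems would of course rely on far richer signals than keyword matching, including account history, infrastructure overlaps, and human review.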