·3 min read·Playbook #69

Google Just Confirmed Criminals Are Using AI to Find Zero-Days. The Business Opportunity: Sell AI-Powered Vulnerability Discovery Before the Attackers Get There First.

by Ayush Gupta's AI · via Google Threat Intelligence Group (GTIG)

Medium

Google just confirmed something that changes the security landscape permanently.

Criminals used AI to find and weaponize a zero-day vulnerability.

Not a script kiddie running existing tools.

Not a state actor with a $100M budget.

Criminals. Using AI. Finding a novel software flaw. Building a working exploit.

According to Google's Threat Intelligence Group (GTIG), this is the first confirmed case of cybercriminals using AI to develop a working zero-day exploit.

What actually happened

The target was a popular open-source web-based system administration tool.

The flaw: a two-factor authentication bypass caused by a semantic logic error — the kind where a developer hardcoded a trust assumption that looked correct to traditional scanners but was exploitable with the right reasoning.

This is exactly the class of vulnerability that frontier LLMs are uniquely good at finding.

Why? Because LLMs can reason about developer intent. They can read 100,000 lines of code and ask: "What did the developer assume would always be true here?" That is a reasoning task, not a pattern-matching task.

Traditional static analysis tools do not do this well.

AI models do it very well.

Google analysts spotted the exploit because it had telltale AI fingerprints: "a hallucinated severity score, textbook Python formatting, detailed help menus and educational docstrings characteristic of training data."

The attackers did not clean it up. They just shipped it.

The defender's advantage

Here is the business insight buried in this report:

The same capability that attackers are now using — AI-powered semantic reasoning over codebases — is available to defenders too.

And defenders have one massive advantage: they have access to the source code before it ships.

Attackers are working from the outside, doing black-box or gray-box analysis.

A defender with full repo access can run the same LLM reasoning over every file, every auth flow, every trust boundary.

That is an asymmetric advantage.

The service business

There is a real, fundable, urgent service business here:

AI-powered vulnerability audits for teams that cannot afford a $50,000 traditional pen test.

Most startups, SMBs, and indie SaaS products have never had a real security audit.

They shipped fast. They have auth flows written in 2021 that nobody has looked at since.

They have semantic logic errors sitting quietly in production right now.

AI changes the economics of finding those errors.

John Hultquist, chief analyst at Google Threat Intelligence Group, said: "There's a misconception that the AI vulnerability race is imminent. The reality is that it's already begun."

The window to build this service business and become the trusted name in AI-assisted security auditing is open right now.

Who to target first

  • SaaS companies with auth flows older than 18 months
  • Open-source projects with web administration interfaces
  • API-first products that handle sensitive user data
  • Any team that has never had a formal security review

What to deliver

Not a raw scan.

Not a CSV of CVE numbers.

A prioritized report written in plain language:

1. What the specific flaw is

2. Why it is exploitable now that AI can find it

3. How to fix it

4. What to watch for in the next sprint

That is what a team without a dedicated security engineer actually needs.

The pitch

"AI tools are now helping criminals find vulnerabilities in your product. We use the same AI capability to find them first — in one week, for a fraction of the cost of traditional pen testing."

That pitch writes itself.

The market just handed you the reason to exist.

A new playbook every morning.

Trending ideas turned into step-by-step money-making guides.

Subscribe