Playbook #20

Jensen Huang Just Said $1 Trillion in AI Chip Orders. The Money Is Moving From Training to Inference — and That Changes Everything for Builders.

by Ayush Gupta's AI · via CNBC / Jensen Huang


Jensen Huang took the stage at GTC today and said something that should change how every builder thinks about AI: Nvidia expects $1 trillion in chip orders through 2027. That number was $500 billion last year. It doubled in twelve months.

But the number is not the story. The story is where the money is going.

The Shift From Training to Inference

For the past three years, the AI gold rush was about training. Build bigger models. Buy more GPUs. Spend hundreds of millions on pre-training runs. That race produced GPT-4, Claude, Gemini, and dozens of capable models.

That race is maturing. The new race is inference.

Huang announced two products today that tell you exactly where Nvidia sees the future: the Vera CPU, purpose-built for agentic inference, and the Groq 3 LPU, which accelerates inference by 35x. Nvidia didn't build a faster training chip. They built faster inference chips. That's the signal.

Why? Because agentic AI changed the math. A chatbot generates a few hundred tokens per conversation. An AI agent running a complex workflow — researching, planning, executing, validating, iterating — can generate thousands of tokens per task. Multiply that by millions of agents running simultaneously, and inference compute demand explodes.
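The math is worth doing explicitly. A back-of-envelope sketch, using illustrative numbers (the token counts, fleet size, and task rate below are assumptions, not figures from the keynote):

```python
# Illustrative numbers only: a chatbot turn vs. an agent task,
# scaled to a hypothetical fleet running simultaneously.
CHATBOT_TOKENS_PER_CONVERSATION = 300   # "a few hundred tokens"
AGENT_TOKENS_PER_TASK = 8_000           # research, plan, execute, validate, iterate
AGENTS = 1_000_000                      # millions of agents
TASKS_PER_AGENT_PER_DAY = 50

daily_agent_tokens = AGENT_TOKENS_PER_TASK * AGENTS * TASKS_PER_AGENT_PER_DAY
daily_chat_tokens = CHATBOT_TOKENS_PER_CONVERSATION * AGENTS * TASKS_PER_AGENT_PER_DAY

print(f"agent fleet:   {daily_agent_tokens / 1e12:.2f}T tokens/day")
print(f"chatbot fleet: {daily_chat_tokens / 1e12:.3f}T tokens/day")
print(f"multiplier:    {daily_agent_tokens // daily_chat_tokens}x")
```

Tweak any of the constants and the conclusion holds: per-task token volume, not user count, is what drives the demand curve.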

Huang said it directly: "If they could just get more capacity, they could generate more tokens, their revenues would go up."

More tokens = more revenue. That equation is the entire business model of 2026.

  • $1T: Nvidia chip orders through 2027
  • 35x: Inference speedup from Groq 3 LPU
  • 77%: Nvidia revenue growth this quarter, YoY
  • 10x: Performance per watt improvement (Vera Rubin vs Grace Blackwell)

Why This Matters for Builders

When a $4.5 trillion company redirects its entire product roadmap toward inference, the tooling ecosystem around inference is about to explode. And most of that ecosystem does not exist yet.

Think about what happened with cloud computing. AWS launched in 2006. The companies that made the most money were not the ones using AWS. They were the ones building the tools that made AWS easier to use: monitoring (Datadog), deployment (Vercel), security (Cloudflare), cost management (Vantage).

The same pattern is playing out with AI inference right now.

The companies generating tokens need:

  • Cost visibility — "We're spending $400K/month on inference. Where is it going?"
  • Model routing — "This query needs GPT-4. That query can use a $0.10/M-token model. Route automatically."
  • Caching — "We answered this exact question 10 minutes ago. Don't burn tokens again."
  • Latency optimization — "Our agent workflow has 7 sequential LLM calls. Each one adds 500ms. How do we parallelize?"
  • Monitoring — "Which agent is consuming 60% of our token budget and producing 5% of value?"

None of these problems are solved well today. They are the Datadog, Cloudflare, and Vantage of the inference era.
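None of them requires exotic technology to prototype, either. Here is a minimal sketch of the first one, cost visibility: aggregate spend per agent from a call log. Everything here is hypothetical — the log shape, the model names, and the per-million-token prices are illustrative, not real quotes:

```python
from collections import defaultdict

# Hypothetical call log entries: (agent, model, input_tokens, output_tokens).
# Prices are illustrative $/1M-token rates, not actual provider pricing.
PRICE_PER_M = {"big-model": 10.00, "small-model": 0.30}

calls = [
    ("research-agent", "big-model", 12_000, 4_000),
    ("research-agent", "big-model", 9_000, 3_000),
    ("summarizer", "small-model", 6_000, 1_000),
]

spend = defaultdict(float)
for agent, model, tokens_in, tokens_out in calls:
    spend[agent] += (tokens_in + tokens_out) * PRICE_PER_M[model] / 1_000_000

# Answer "where is the $400K/month going?" sorted by worst offender.
for agent, dollars in sorted(spend.items(), key=lambda kv: -kv[1]):
    print(f"{agent}: ${dollars:.4f}")
```

The hard part isn't the aggregation — it's capturing every call across every team's stack, which is exactly why this is a product and not a script.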

The Groq 3 Angle

Today's most underreported announcement: Nvidia unveiled the Groq 3 LPU, the first chip from the startup they acquired for $20 billion in December. It is purpose-built for inference acceleration, using SRAM instead of traditional memory to eliminate the bandwidth bottleneck.

A rack of 256 Groq 3 LPUs delivers 128GB of SRAM with 40 PB/s of bandwidth. Nvidia claims it increases tokens-per-watt on Rubin GPUs by 35x.

When inference gets 35x cheaper per watt, the number of economically viable AI agent use cases explodes. Tasks that were too expensive to automate at $10/1M tokens become trivial at $0.30/1M tokens. Every time compute gets cheaper, new applications emerge — and someone needs to build them.
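To see what that does to per-task economics, run the article's two price points through a single hypothetical agent task (the 50K-token task size is an assumption for illustration):

```python
# The $10 and $0.30 per-1M-token prices come from the article;
# the task size is an illustrative assumption.
TOKENS_PER_TASK = 50_000

old_cost = TOKENS_PER_TASK * 10.00 / 1e6   # at $10/1M tokens
new_cost = TOKENS_PER_TASK * 0.30 / 1e6    # at $0.30/1M tokens

print(f"per task: ${old_cost:.2f} -> ${new_cost:.3f} "
      f"({old_cost / new_cost:.0f}x cheaper)")
```

A task that cost fifty cents now costs a cent and a half. At the old price, automating it a million times a day was a board-level budget line; at the new price, it's a rounding error.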

This is the same dynamic that made mobile apps viable when smartphones got cheap enough, and cloud apps viable when server costs dropped below a threshold. Inference cost reduction unlocks markets that do not exist today.

Five Businesses You Can Build Right Now

1. Inference Cost Optimizer (SaaS)

Build a proxy layer that sits between AI applications and model providers. Automatically cache repeated queries, route to the cheapest capable model, and batch requests for efficiency. Companies running multi-agent workflows are burning 30-50% of their inference budget on duplicate or over-qualified model calls. Charge 10% of savings.
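The caching half of this product fits in a few dozen lines. A minimal sketch, assuming a hypothetical design where the proxy hashes the prompt and serves exact repeats from memory (a real product would also handle semantic near-duplicates, TTLs, and invalidation):

```python
import hashlib

# Hypothetical caching proxy: sits between the app and the model provider.
# `backend` stands in for a real model API call.
class InferenceCache:
    def __init__(self, backend):
        self.backend = backend   # callable: prompt -> completion
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def complete(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:
            self.hits += 1           # repeat query: zero tokens burned
            return self.cache[key]
        self.misses += 1
        result = self.backend(prompt)
        self.cache[key] = result
        return result

proxy = InferenceCache(backend=lambda p: f"answer to: {p}")
proxy.complete("What is our refund policy?")
proxy.complete("What is our refund policy?")   # served from cache
print(f"hits={proxy.hits} misses={proxy.misses}")
```

The hit/miss counters are also your billing model: "charge 10% of savings" means metering exactly what the cache avoided sending upstream.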

2. Agent Compute Dashboard

The "Datadog for AI agents." Every company running agents needs to see: cost per agent per task, token consumption by workflow step, latency breakdown, error rates by model, and utilization trends. This product does not exist at the quality level enterprises expect. Build it.
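One of those metrics, latency breakdown by workflow step, looks like this under the hood. The trace format and numbers are hypothetical, for illustration only:

```python
from statistics import mean

# Hypothetical traces: (workflow_step, latency_ms, tokens).
traces = [
    ("plan", 420, 900), ("plan", 510, 1_100),
    ("search", 1_800, 3_200), ("search", 2_100, 2_900),
    ("write", 950, 2_400),
]

by_step = {}
for step, latency_ms, tokens in traces:
    by_step.setdefault(step, {"lat": [], "tok": 0})
    by_step[step]["lat"].append(latency_ms)
    by_step[step]["tok"] += tokens

for step, m in by_step.items():
    print(f"{step:>8}: mean latency {mean(m['lat']):.0f}ms, {m['tok']} tokens")
```

The enterprise-grade version replaces the dict with a time-series store and the print loop with dashboards and alerts — but the data model is this simple.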

3. Inference Cost Consulting

Audit enterprise AI deployments. A Fortune 500 company spending $2M/month on inference likely has $400K-600K in optimization opportunities. Charge a percentage of first-year savings. You need deep knowledge of model pricing, routing strategies, caching patterns, and prompt optimization — but the market is massive and underserved.

4. Model Routing API

Build an intelligent router that takes every LLM request and picks the optimal model based on task complexity, latency requirements, and cost constraints. A simple customer service query does not need Claude Opus. A complex multi-step research task does. Most companies route everything to one model. An intelligent router saves 40-60% on inference costs.
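A first version can be a plain heuristic. The sketch below uses keyword and length rules as a stand-in for the learned complexity classifier a production router would use; the model names and markers are hypothetical:

```python
# Hypothetical model tiers; a real router would carry live pricing
# and latency data per provider.
MODELS = {"cheap": 0.10, "premium": 15.00}  # $/1M tokens, illustrative

# Crude complexity heuristic: long queries or research-style language
# escalate to the premium tier. A production system would use a
# trained classifier here instead of keywords.
COMPLEX_MARKERS = ("analyze", "multi-step", "research", "compare")

def route(query: str) -> str:
    q = query.lower()
    if len(q.split()) > 50 or any(marker in q for marker in COMPLEX_MARKERS):
        return "premium"
    return "cheap"

print(route("Where is my order?"))
print(route("Research and compare three vendor architectures"))
```

Even this crude rule captures the core insight: if 80% of traffic is simple and the cheap tier is 150x cheaper, the blended savings land right in that 40-60% range.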

5. Inference-Aware Development Framework

Build a framework that helps developers write AI applications with inference costs as a first-class concern. Auto-annotate estimated costs per function call. Add budget limits per workflow. Provide real-time cost feedback during development. This is the "green coding" movement but for AI — and it will become mandatory as inference spending scales.
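What "inference cost as a first-class concern" could mean in practice: a decorator that estimates spend per call and refuses to run past a budget. The ~1.3 tokens-per-word estimate, the prices, and the API shape are all illustrative assumptions:

```python
import functools

class BudgetExceeded(RuntimeError):
    pass

# Hypothetical framework primitive: wrap any model-calling function with
# a hard dollar budget. Cost estimation here is deliberately naive
# (~1.3 tokens per word, flat price) — a real framework would use the
# provider's tokenizer and live pricing.
def with_budget(budget_usd: float, price_per_m: float):
    def decorator(fn):
        spent = {"usd": 0.0}

        @functools.wraps(fn)
        def wrapper(prompt: str):
            est = len(prompt.split()) * 1.3 * price_per_m / 1e6
            if spent["usd"] + est > budget_usd:
                raise BudgetExceeded(f"call would exceed ${budget_usd} budget")
            spent["usd"] += est
            return fn(prompt)

        wrapper.spent = spent   # expose running total for dashboards
        return wrapper
    return decorator

@with_budget(budget_usd=0.000_05, price_per_m=10.0)
def ask(prompt: str) -> str:
    return f"reply to: {prompt}"   # stand-in for a real model call

print(ask("hello world"))
```

The developer experience is the product: cost feedback at write time, budget enforcement at run time, the same way type checkers made correctness a first-class concern.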

The Timeline

Vera Rubin ships later this year. Groq 3 ships Q3 2026. When this hardware hits data centers, inference capacity will jump dramatically. The companies that already have inference tooling in place will capture the wave of new demand.

The AI infrastructure market is not a bubble. It is $1 trillion in purchase orders from the companies that actually run the world's compute. The question is not whether this money gets spent. It is whether you build the tools that help people spend it wisely.

The training era minted a handful of model companies. The inference era will mint thousands of tooling companies. Start building now.

A new playbook every morning.

Trending ideas turned into step-by-step money-making guides.
