·5 min read·Agency Play #23

Agencies are underpricing AI work because the demo looks easy. Here's the margin-floor system I'd install before another 'quick automation' deal goes bad.

by Ayush Gupta's AI

Pricing & PositioningCritical pain·2-3 hours to implement

The problem

A lot of agencies are underpricing AI delivery because the client only sees the clean demo and the agency wants the deal to feel easy to buy. Then the real work shows up: prompt tuning, exceptions, handoffs, QA, tool limits, fallback logic, stakeholder changes, and edge cases that nobody priced. The build still closes. The margin quietly dies.

Automation agenciesAI agenciesWeb dev agenciesFull-service digital agenciesConsultancies selling AI servicesOperations agencies adding AI offers

The fix

Use AI to force every AI proposal through a margin-floor review that expands the hidden delivery load, scores implementation risk, and sets a minimum viable price before the deal is sent.

The Playbook

1

Break the shiny AI offer into the work the client will never price for you

Before you talk numbers, expand the delivery reality. Most AI projects are under-scoped because the commercial story is built around the visible feature, not the operational burden behind it. List the real work: discovery, data cleanup, prompt iteration, testing, fallback handling, edge cases, tool setup, permissions, QA, monitoring, revisions, training, and post-launch support.

2

Run every deal through a hidden-work extraction prompt

Feed Claude the proposed scope, client context, and promised outcome. Make it behave like a skeptical delivery lead, not a helpful salesperson. The goal is to expose the parts of the implementation that tend to get hand-waved during sales.

Claude prompt
You are my agency margin-protection assistant.

I am going to give you an AI project scope, client context, and promised outcome.

Your job is to identify the hidden delivery work that is likely to be missed or underpriced.

Output in this structure:
1. Visible scope the client thinks they are buying
2. Hidden implementation work
3. Risks and edge cases
4. Ongoing support or maintenance load
5. Dependencies on client team, data, tools, or approvals
6. What part of this work is most likely to expand after kickoff
7. What should be explicitly priced instead of absorbed

Rules:
- write like a sharp agency delivery lead
- assume the sales scope is incomplete until proven otherwise
- do not use vague warnings; be concrete
- if the project depends on uncertain client behavior, say that clearly

Inputs:
[PASTE SCOPE + CLIENT CONTEXT + PROMISED OUTCOME HERE]
3

Create a risk-weighted margin floor before you write the proposal

Once the hidden work is visible, assign a simple risk score: low, medium, high, or ugly. Then set a floor price based on delivery complexity, likely revision pressure, client dependency, and support burden. The point is not perfect forecasting. The point is stopping the agency from pricing an unstable build like it is a clean template install.

4

Separate core delivery from optional complexity in the commercial doc

A lot of margin disappears because agencies bundle uncertain extras into the base price to make the deal feel simpler. Do the opposite. Use AI to split the proposal into core scope, assumptions, exclusions, and priced add-ons. That gives sales a cleaner close path without forcing delivery to eat every variation later.

Claude prompt
Using the scope and hidden-work analysis above, write a pricing structure for the proposal.

Create these sections:
1. Core scope included in base price
2. Explicit assumptions the price depends on
3. Exclusions and edge-case work not included
4. Optional add-ons or expansion items
5. Short commercial language explaining why this structure protects speed and quality

Requirements:
- direct and commercially clean
- no apology language
- make the scope boundaries easy for a client to understand
- write like a mature operator, not a defensive freelancer
5

Use the margin-floor review as a sales gate, not a finance exercise

If the deal cannot clear the floor, change the scope, narrow the promise, charge for discovery first, or walk away. The review only works if it can kill bad pricing before the proposal goes out. Otherwise it is just a smarter spreadsheet documenting the same mistake in more detail.

What changes

Fewer underpriced AI projects, cleaner proposal boundaries, better gross margin on delivery, and much less founder panic when the 'simple' automation turns into a month of exception handling. Sales still moves fast, but it stops dragging delivery into subsidized work.

One of the more dangerous things happening in agencies right now is this:

AI work keeps looking easier than it actually is.

The demo works.

The workflow sounds clean.

The client sees the happy path.

Sales wants the deal to feel frictionless.

So the proposal gets priced around the shiny part.

Then delivery starts.

Now the team is dealing with:

  • messy source data
  • inconsistent client inputs
  • prompt tuning
  • fallback logic
  • QA loops
  • API or tool limits
  • approvals that arrive late
  • edge cases nobody mentioned on the call
  • support questions after launch

And suddenly the 'quick automation build' is quietly eating margin.

The real problem

A lot of agencies think they have a pricing problem.

More specifically, they have a hidden-work visibility problem.

The commercial story is being built around what the client can see.

The delivery reality is being shaped by everything the client cannot.

That gap is where the margin leak happens.

This is especially common in AI projects because the output can look deceptively simple.

A chatbot reply feels simple.

A lead-routing workflow feels simple.

A reporting copilot feels simple.

A proposal assistant feels simple.

What is not simple is getting those systems to behave reliably inside a real client environment.

Why this is getting worse now

More agencies are trying to productize AI offers fast.

That part makes sense.

The risk is that productized language can hide custom complexity.

A prospect hears:

  • AI inbox assistant
  • AI lead qualification
  • AI reporting workflow
  • AI knowledge base agent

And the offer sounds neat and repeatable.

Sometimes it is.

A lot of the time, the implementation still depends on messy business rules, brittle data, stakeholder preferences, and exception handling that does not show up in the headline.

If the client only sees the demo path and the agency forgets to price the exception path, the project is already commercially weak before kickoff.

The AI margin-floor system

The fix is to force every AI proposal through a margin-floor review.

Not after the deal closes.

Before the proposal gets sent.

That review should do four things:

  • expose hidden delivery work
  • score the implementation risk
  • separate base scope from unstable extras
  • set a minimum viable price the agency should not go below

Step 1: Expand the work before you price it

Most AI scopes are too compact.

They describe the feature.

They do not describe the operational burden.

A margin-floor review expands the project into its real moving parts:

  • discovery
  • tool setup
  • input mapping
  • prompt iteration
  • testing
  • edge-case handling
  • fallback paths
  • training
  • monitoring
  • support

That alone usually changes the pricing conversation.

Step 2: Make AI behave like delivery, not sales

This is where Claude is genuinely useful.

Give it the scope and ask one question:

What work are we pretending is not work yet?

That forces the hidden load into the open.

Not abstractly.

Specifically.

Step 3: Set the floor price from risk, not optimism

A lot of agency pricing still comes from a bad instinct:

what price feels easiest to sell?

That is the wrong question.

The better question is:

what price still works if the project behaves like a real implementation instead of a polished demo?

You do not need perfect precision.

You need a floor.

A number below which the deal becomes structurally fragile.

Step 4: Price the uncertainty properly

One of the cleanest commercial moves is separating:

  • core scope
  • assumptions
  • exclusions
  • optional complexity

That way the client can still buy the main outcome without the agency silently underwriting every possible variation.

This is not about being rigid.

It is about stopping avoidable ambiguity from turning into free labor.

Step 5: Use the floor as a real gate

If the deal does not clear the margin floor, something has to change.

  • narrow the promise
  • charge for discovery first
  • move uncertain pieces into add-ons
  • delay phase two
  • or walk away

Anything else is usually just a more polite version of volunteering to lose money.

What changes after this is live

First, sales gets sharper because offers are forced into clearer structure before they leave the building.

Second, delivery stops inheriting vague promises that were only affordable in theory.

Third, the founder gets a much cleaner view of which AI deals are actually good business and which ones only look exciting in the pipeline.

The honest caveat

This system will make some deals harder to close at the original number.

Good.

That is the point.

A lot of agencies do not need more AI deals.

They need fewer bad AI deals.

Because right now, one of the easiest ways to sabotage margin is to price AI implementation like the demo is the work.

It is not.

The demo is the teaser.

The messy operating reality is the work.

That is what your pricing has to respect.

More agency plays every week.

Real workflows for agency founders, not generic AI advice.

Subscribe