Agency Play #16 · 4 min read

Prospects are starting to ask how your agency uses AI before they sign. Here's the one-page policy that stops deals from stalling.

by Ayush Gupta's AI

Proposal & Sales · High pain · 2 hours to implement

The problem

A lot of agencies are still treating AI usage like an internal workflow detail, while prospects increasingly treat it like a buying-risk question. Mid-deal, someone asks whether you use AI, what gets human-reviewed, where client data goes, and whether deliverables are actually original. If the answer is vague, the deal slows down fast.

SEO agencies · Content agencies · Web dev agencies · Branding studios · Full-service digital agencies · Automation agencies

The fix

Create a one-page AI usage policy that explains where AI helps, where humans stay accountable, how client data is handled, and what quality controls exist so prospects stop filling the gaps with worst-case assumptions.

The Playbook

1

List exactly where AI already touches your workflow

Do not start with branding language. Start with operational honesty. Where does AI actually assist right now: research, outlines, drafts, QA, reporting summaries, internal briefs, code support, or automation? If you cannot describe your real workflow clearly, the policy will sound slippery because it is.

2

Draw a hard line between AI assistance and human accountability

Prospects do not just want to know whether you use AI. They want to know who is still responsible for accuracy, judgment, originality, and client fit. Your policy should explicitly say what always gets human review before it reaches the client.

Claude prompt
You are helping me write an AI usage policy page for my agency.

I will give you my services, workflow, and how my team currently uses AI.

Write a one-page client-facing policy with these sections:
1. Why we use AI
2. Where AI helps in our workflow
3. What is always reviewed or finalized by humans
4. How we handle client data and sensitive information
5. What clients should expect in terms of quality, originality, and accountability
6. Short FAQ for procurement or cautious buyers

Requirements:
- plain English
- confident, not defensive
- no hype, no vague ethics theater
- sound like a practical operator, not a legal team

Agency details:
[PASTE SERVICES + WORKFLOW + AI USAGE HERE]

3

Add the two answers prospects really want

Most AI policy pages stay generic and miss the actual commercial questions. Answer these directly: does client data get put into third-party AI tools, and who checks the final work before delivery? If those answers are muddy, you will keep getting follow-up emails and procurement drag.

4

Turn the policy into a reusable sales asset

Put it on your site, link it in proposals, and keep a short version ready for procurement questionnaires. The goal is not to wait until someone gets nervous. The goal is to remove uncertainty before it becomes a trust problem. Good agencies will increasingly need an answer here the same way they already need a security or process answer.

Claude prompt
Using the policy above, create two more assets:

1. A short proposal section called "How we use AI in delivery"
2. A direct email reply for a prospect who asks: "Do you use AI in your process?"

Requirements:
- both should match the same commercial message
- keep the proposal section under 120 words
- keep the email under 150 words
- make both feel calm, transparent, and premium

5

Review it quarterly so sales is never making up the answer live

AI usage inside agencies changes fast. New tools get added, review steps change, and new client sensitivities show up. Revisit the page every quarter so your sales team is not improvising policy in the middle of a deal. Improvised answers are how small trust gaps turn into avoidable procurement friction.

What changes

Deals stop stalling on vague AI questions. Prospects get a clean answer before they worry, sales sounds more credible, and the agency looks more controlled and mature instead of hand-wavy about how the work is actually produced.

A quiet shift is happening in agency sales right now.

Prospects are starting to ask how the work gets made.

Not just what you deliver.

How you deliver it.

Specifically:

Do you use AI?

What for?

Who checks the work?

Does client data go into those tools?

Are we paying premium fees for something a machine drafted?

A year ago, a lot of agencies could dodge that conversation.

Now it is showing up earlier in deals, especially once there is a procurement person, an ops lead, or a cautious founder in the loop.

And this is where a surprising number of agencies still look sloppy.

Not because they use AI.

Because they do not have a clean answer.

The problem is not AI usage. It is ambiguity.

Most agencies already use AI somewhere in the stack.

That part is normal.

What buyers dislike is vagueness.

If they ask how your team uses AI and the answer sounds improvised, defensive, or suspiciously broad, they fill in the blanks themselves.

Usually with the least flattering version.

  • maybe the agency is automating too much
  • maybe junior staff are just prompting and passing it off as expertise
  • maybe sensitive data is being thrown into random tools
  • maybe originality and QA are weaker than the proposal suggests

That is how deals slow down.

Not because the prospect is anti-AI.

Because the risk feels undefined.

The fix is a one-page AI usage policy

This does not need to be a giant legal document.

It should be a practical commercial asset.

One page.

Plain English.

Answers the obvious questions before they become objections.

A good policy should cover four things clearly:

  • where AI helps in your workflow
  • where humans stay accountable
  • how client data is handled
  • what quality controls exist before delivery

That is enough to calm most reasonable buyers.

If a prospect has to guess how AI fits into your delivery process, they will usually guess a version that makes your agency look less premium.

What to say

The best version is not apology language.

It is operator language.

Something like:

We use AI where it improves speed, structure, research support, and internal efficiency. We do not outsource accountability to it. Final strategy, judgment, QA, and client-facing decisions remain owned by humans on our team.

That framing matters.

You are not hiding AI.

You are putting it in its place.

The two questions you should answer directly

Most policy pages get too fluffy and avoid the parts that actually matter.

Answer these directly:

1. Does client data go into AI tools?

If yes, under what conditions?

If no, say so.

If only sanitized or approved material is used, say that.

If certain data categories are excluded, say that too.

Prospects want to know you have thought about this before they asked.

2. Who reviews the final work?

This is the trust hinge.

A buyer can live with AI assistance much more easily than they can live with unreviewed AI output.

So say exactly what gets reviewed by a strategist, editor, operator, developer, or account lead before delivery.

That one sentence does a lot of work.

Where this helps commercially

The obvious benefit is fewer awkward email threads.

But the bigger benefit is positioning.

An agency with a clear AI usage policy looks more mature than one that answers the question ad hoc every time.

It signals control.

It signals thoughtfulness.

It signals that AI is part of a process, not a shortcut being smuggled into the engagement.

That matters more now because buyers are increasingly curious about margin structure, quality control, and whether they are paying for judgment or just output.

A clean policy lets you answer all three without sounding weird about it.

Make it part of the sales system

Do not bury this in a footer and hope nobody asks.

Use it actively.

  • link it in proposals
  • include a short version in your sales deck
  • keep a canned response for procurement questionnaires
  • give account leads the same talking points so the answer stays consistent

That way the first AI conversation feels calm and normal.

Not like the buyer uncovered something you hoped would stay fuzzy.

The honest caveat

If your current AI usage is messy, this process will expose that.

Good.

Better to clean it up now than improvise through bigger deals later.

Because this is becoming one of those small operational signals buyers use to judge whether an agency is actually buttoned up.

Not every prospect will ask.

Enough of the good ones will.

And if your answer is still being invented live on the call, you are making the sale harder than it needs to be.

More agency plays every week.

Real workflows for agency founders, not generic AI advice.

Subscribe