Every AI project that goes well starts with the same conversation. The same five questions, asked early, before anyone writes a line of code or signs a statement of work. And every project we've seen go sideways skipped at least one of them.
We don't ask these to qualify you. We ask them because the answers determine whether the project will actually work — and if the answers aren't there yet, we'd rather help you find them than ship a build that won't land.
Who owns the outcome?
Not the AI team. Not the product committee. A specific person whose week gets better — or worse — depending on whether this works. If we can't name them by the third call, we know the project will drift.
Vague: "Our company wants to use AI for support." Clear: "Maria, our head of customer experience, wants the average response time on tier-1 tickets to drop from 4 hours to 1." When Maria is in the room, the project moves. When she's not, every decision becomes a debate.
What does success look like in one sentence?
We ask you to finish this sentence: "We'll know this works when ______." If the blank takes more than one sentence to fill, the goal isn't sharp enough yet.
Vague: "Help us be more efficient with AI." Clear: "We'll know this works when 60% of incoming order-status emails get a correct, automatic reply without a human touching them." That sentence becomes the north star for every prompt, every eval, every iteration. Without it, we'd be guessing.
Show us the data — or admit there isn't any yet
Almost every interesting AI project depends on data the buyer already has but hasn't looked at lately: support tickets, transcripts, internal docs, historical orders, contract PDFs. We ask you to send a representative sample early.
If you can, we'll know within a week whether the project is feasible. If you can't — for legal, technical, or organizational reasons — that's a real obstacle, and we'd rather solve it before we start than discover it midway. "We'll figure out the data later" is the most expensive sentence in any AI engagement.
What happens when the model is wrong?
Not if. When. Models are wrong. The question is what the wrong answer does. If the wrong answer means a customer sees a slightly off product recommendation, that's recoverable. If it means a wrong dosage, a wrong amount on an invoice, or a wrong contract clause, that's not.
We ask early: what's the impact of a wrong answer, and who sees it before it takes effect? Some projects need full automation. Most don't. The fastest, safest version of "AI" for your problem might be a draft sent to a human — and that's often the right answer.
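In practice that's often a routing rule, not a model change. A rough sketch of the draft-to-a-human pattern, where every name is a hypothetical stand-in:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # however the system scores its own output

HIGH_STAKES = {"billing", "contracts", "dosage"}

def route(category: str, draft: Draft, send, queue_for_review):
    """Send low-risk drafts automatically; queue everything else for a human."""
    if category in HIGH_STAKES or draft.confidence < 0.9:
        queue_for_review(draft)  # a wrong answer here is not recoverable
    else:
        send(draft)              # a wrong answer here is merely embarrassing
```

The threshold and the high-stakes list come from your answer to this question, not from the model.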
Who's the domain expert we can call?
We will not become experts in your business in three weeks. We need someone who is — and who has 30 minutes a week to talk to us. The expert is the person who, when shown a model output, can immediately say "this is right" or "this is subtly wrong, here's why."
Without them, we're building in the dark, and you end up paying us to learn things your own team already knows.
If you can answer these, you're already halfway there
If you can answer these five questions cleanly, the project is already half-built. We know what to build, who it's for, what success looks like, and how to test it.
If you can't, we'd rather spend a week helping you figure them out before any code gets written. That week is the highest-leverage part of any AI project, and it's the part most teams skip.
If any of these feel hard to answer right now, that's information. Send us a note — sometimes the most useful thing we do for a client is help them realize they're not ready to build yet, and what to do first.