
Customer Discovery Playbook: 12 Interview Scripts (2026)

12 customer discovery scripts tested across 47 founder interviews. Copy-paste ready. The exact questions that surface real demand vs polite lies.

Oleg Ivanov · Co-founder & CEO, Fluenta · Updated Apr 9 · 14 min read

TL;DR

Trend reports surface 'hot' startup ideas every week. We re-scored 130 of them — every one sourced from public reports by Forbes, McKinsey, a16z, Sequoia, First Round, and YC essays — against live demand signals. 129 of 130 collapsed under the demand test. The 12 survivors had one thing in common: the source report described a workflow problem so specific that customer discovery was almost trivial. This playbook gives you 12 scripts that get you to that level of specificity, anchored in the only three questions that predict purchase: what have you tried, what did it cost, and what broke?

[Figure: Bar chart of the LRS scores of the top 5 outlier ideas in the Fluenta dataset, from highest validated pain to lowest. Scores are computed deterministically, with no LLM in the loop.]
Most startup ideas die between week 2 and week 6 of validation — the 72-hour sprint is built to catch them in week 1.

The numbers

| Metric | Value | Source |
| --- | --- | --- |
| Average customer pain score across 130 scored ideas | 13.7 / 100 | Fluenta proprietary dataset |
| Average first-dollar score across 130 scored ideas | 21.3 / 100 | Fluenta proprietary dataset |
| Potential revenue impact of AI search by 2028 | $750 billion | McKinsey |
| B2B buyer research behavior | Shifting to AI-driven discovery | Adobe |
| The first step in customer discovery | Understanding customers' pain points | Harvard Business School |

Fluenta proprietary data · 2026-04-08

Of 130 ideas surfaced by Forbes, McKinsey, a16z, Sequoia, and YC essays in 2025–2026, 129 scored below pain 40/100 when re-tested against live demand signals. Only the top 12 cleared a combined pain + first-dollar + paying signal in the 144–186 range, and the common feature was source-report specificity — not the founder. The playbook is calibrated against those 12.

Lens: Rob Fitzpatrick (Mom Test) + the 130-idea cs_pain corpus from Fluenta's social pain harvester (step 07_social_pain.py)

| Metric | Value | n | As of |
| --- | --- | --- | --- |
| Ideas scored end-to-end in Fluenta dataset | 130 | 130 | 2026-04-08 |
| Average cs_pain (0-100) across all scored ideas | 13.7 | 130 | 2026-04-08 |
| Ideas with cs_pain >= 70 (high social pain) | 0 | 130 | 2026-04-08 |
| Ideas with cs_pain < 40 (low pain, the danger zone) | 129 | 130 | 2026-04-08 |
| Average cs_first_dollar score (0-100) | 21.3 | 130 | 2026-04-08 |
| Average cs_paying score (0-100) | 19.6 | 130 | 2026-04-08 |
| Top 12 ideas: composite signal (pain + first_dollar + paying) | 144–186 | 12 | 2026-04-08 |
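The composite filter behind the last table row can be sketched in a few lines of Python. This is illustrative only: the field names follow the `cs_pain`, `cs_first_dollar`, and `cs_paying` metrics above, and the sample rows are invented.

```python
# Sketch of the composite demand filter described in the table above.
# Field names mirror the cs_* metrics; the sample ideas are invented.

def composite_signal(idea: dict) -> float:
    """Sum the three demand scores (each 0-100) into one composite (0-300)."""
    return idea["cs_pain"] + idea["cs_first_dollar"] + idea["cs_paying"]

def survivors(ideas: list[dict], floor: float = 144.0) -> list[dict]:
    """Keep ideas whose composite clears the floor observed for the
    top-12 cohort (144-186 in the 2026-04-08 batch), strongest first."""
    return sorted(
        (i for i in ideas if composite_signal(i) >= floor),
        key=composite_signal,
        reverse=True,
    )

ideas = [
    {"name": "AI for X (vague)", "cs_pain": 12.0, "cs_first_dollar": 20.0, "cs_paying": 18.0},
    {"name": "Named clinical workflow", "cs_pain": 43.1, "cs_first_dollar": 70.0, "cs_paying": 60.0},
]
print([i["name"] for i in survivors(ideas)])  # only the specific workflow clears 144
```

Note that a composite floor rewards breadth: an idea must show pain, past spending attempts, and willingness to pay, not just one strong axis.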

What would change this finding: If a re-run of the same scoring 90 days from now shows the median cs_pain rise above 30 across all 130 ideas, the playbook's premise inverts: founders would be finding pain on their own and the interview scripts become redundant. We'll re-publish with the Q3 batch.

Cite this article

Researchers and journalists: this article is freely citable. Copy the academic-format reference below for your bibliography or footnote.

Ivanov, O. (2026). Customer Discovery Playbook: 12 Interview Scripts (2026). Fluenta. Retrieved from https://fluenta.space/resources/playbooks/customer-discovery-playbook-12-interview-scripts. Sample size: n=130 as of 2026-04-08.

Key Takeaways

Of 130 ideas surfaced by Forbes, McKinsey, a16z, Sequoia, and YC essays in 2025–2026, 129 had a measured pain score below 40 out of 100. The trend press is good at spotting novelty and bad at distinguishing pain from hype.
The 'Mom Test' is a useful filter against compliments but stops one question short. The three questions that actually predict purchase are: What have you tried? What did it cost? What broke?
The 12 ideas that did clear the demand test scored a composite signal between 144 and 186. Their common feature wasn't the founder — it was that the source report described a specific, expensive, recurring workflow.
Half of consumers now use AI-powered search for early discovery (McKinsey). Your first impression is made to an LLM, which means the words your customers use to describe their pain are the ones you need to capture verbatim.
Customer discovery's job is not to confirm an idea. It's to find the language the market already uses, so you can match it. The 12 scripts in this playbook are organized by that goal.

The 72-Hour Proof Sprint · 6 Stages

1. **Step 1: Define Your Target Customer Segment.** Define your target customer with our [30-minute ICP guide](/resources/guides/ideal-customer-profile-icp-30-minute-guide). This focuses your recruiting and questions.

2. **Step 2: Recruit 10-20 Interview Candidates.** Use warm introductions, LinkedIn, and niche communities to find customers matching your persona. Offer a small incentive for participation.

3. **Step 3: Choose Your Script from the Playbook.** Select one of the 12 scripts below based on your goal: pain discovery, budget qualification, or competitive analysis. Prepare your questions.

4. **Step 4: Conduct the Interview and Listen 70% of the Time.** Let the customer talk. Record the call with permission and take notes. Ask follow-up questions like 'Tell me more about that' to get specifics.

5. **Step 5: Transcribe and Analyze for Patterns.** After 5-10 interviews, review notes to find recurring pain points, objections, and desired solutions. Look for consensus.

6. **Step 6: Validate Your Findings with an X-Ray.** Score your insights against 25 live data feeds to see if the pain you found exists at market scale.

Your first customer is an LLM, so your discovery must be flawless

B2B buyers now find solutions with AI. McKinsey reports half of consumers use AI-powered search. Adobe sees B2B reorganizing around this shift. Your prospects ask an LLM about their problems before they see your page. If you don't understand their exact pain, you won't be considered.

Customer discovery is no longer optional. It's how you find the specific language people use to describe their problems. This is the language they type into an AI. You need to know their workflow, the tools they hate, and the budget they've wasted. Without this ground truth, you're guessing.

We re-scored 130 ideas surfaced by Forbes, McKinsey, a16z, Sequoia, First Round, and YC essays in 2025–2026. The average customer pain score (cs_pain) across the set was just 13.7 out of 100. The trend press is good at spotting novelty, less good at distinguishing pain from hype. This playbook is calibrated against the 12 ideas that did clear the demand test — the ones whose source reports described workflows specific enough to interview against.

Before you schedule a single call, define who you're talking to. Spend 30 minutes on your Ideal Customer Profile (ICP). Do this now.

Most 'discovery' interviews generate false positives

Standard advice is to use 'The Mom Test,' which teaches you to avoid asking for opinions about your idea. It's a good start, but it's incomplete. It helps you avoid compliments but doesn't guarantee you'll find purchase intent. The result is a collection of 'nice-to-haves' that feel like validation but aren't.

The method focuses on past behavior but stops short of asking about past spending to solve the problem. A problem someone complains about but has never spent a dollar or significant time to fix is not a business. It's a complaint. The startup graveyard is full of companies built on interesting complaints.

The scripts in this guide are different. They uncover evidence of past purchase attempts. They are direct, specific, and force the conversation toward money and time. They are less comfortable to ask but yield answers you can build a business on. It's not about if they like your idea. It's about if they have a budget for the problem it solves.

By the end of this week, your goal is to have 3-5 completed interviews using these scripts. Book them now.

With AI and LLMs now shaping early customer research and comparisons, organizations need to be ready to influence those first moments of discovery, or risk being left behind.
Adobe

These 3 questions predict purchase better than a pitch deck

Every script in this playbook is built to get answers to three core questions. These questions are non-negotiable. They separate real pain from noise and are the foundation of a fundable idea.

1. 'Walk me through the last time this problem happened. What was the trigger?' This question forces specificity. It moves from abstract complaints ('Our reporting is slow') to a concrete event ('Last Tuesday, I spent four hours pulling the Q3 report, and the system crashed twice.'). This is where you find the emotional and financial cost.

2. 'What have you tried to fix this before?' This uncovers their current solutions and workarounds, which are your real competitors. If the answer is 'nothing,' the pain isn't severe enough. If they've tried multiple tools, spreadsheets, or hired interns, you've found a signal. Ask about each one: 'What did you like? What broke?'

3. 'How much did those solutions cost? Not just in dollars, but in time and resources?' This is the budget question. It tells you what they value a solution at. If they paid $5,000 for a tool that failed, you know there's a willingness to pay. If they only tried free solutions, you have a much harder sale. This question informs your pricing and proves the problem is worth solving.

This week, practice asking these three questions in every conversation, even outside of formal interviews. Get comfortable being direct.

What 130 trend-report ideas told us about real pain

Every idea in the Fluenta dataset is sourced from a public report — Forbes, McKinsey, a16z, Sequoia, First Round, YC essays, and similar. We don't ingest founder pitch decks or private workspaces. When a publication tags an idea as trending, we ingest the report, score the idea against 25 live demand feeds, and track how it holds up over time.

Across 130 ideas re-scored end-to-end, 129 landed below cs_pain 40/100 — the threshold below which buyers don't complain enough to find on Reddit, X, HN, or in support tickets. The novelty signal in the source reports was strong; the pain signal was almost always weak.

The 12 ideas that broke through shared one feature, and it wasn't the founder. It was that the original report described a specific, expensive, recurring workflow — not a category. The highest pain score in the set (43.1) came from an idea whose source report named the exact clinical workflow, the exact tools being substituted, and the exact reimbursement step that was breaking. That level of specificity is what makes customer discovery interviews productive instead of theatrical: you already know which workflow to ask about.

The implication for your own discovery work: if you're starting from a vague trend ('AI for X') and trying to find pain in interviews, you'll find compliments. If you start from a specific named workflow already documented in a credible report, your interviews become a verification exercise. The 12 scripts below are organized to do that verification fast.

**Phase-by-phase script template:**
1. Start with the workflow named in the source report — quote it back to the interviewee verbatim.
2. Ask them to walk you through the last time that workflow happened, step by step.
3. Ask which tools, spreadsheets, or workarounds were on screen during that workflow.
4. Ask what budget — dollar or hours — has already been spent trying to fix the broken step.
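The four-step template above is mechanical enough to generate ahead of each call. A minimal sketch, assuming only that you have the workflow phrase quoted from the source report (the function name and example workflow string are hypothetical):

```python
def verification_script(workflow: str) -> list[str]:
    """Generate the four verification questions for a named workflow,
    quoting the source report's wording back verbatim (step 1 above)."""
    return [
        f'The report I read describes "{workflow}". Is that how you\'d put it?',
        f'Walk me through the last time "{workflow}" happened, step by step.',
        "Which tools, spreadsheets, or workarounds were on screen during it?",
        "What budget, in dollars or hours, has already gone into fixing the broken step?",
    ]

for question in verification_script("prior-authorization resubmission"):
    print("-", question)
```

The point of generating rather than improvising is discipline: the same four questions, in the same order, make answers comparable across interviews.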

How Fluenta uses data

Every idea we score comes from a public report — Forbes, McKinsey, a16z, Sequoia, First Round, YC essays, and similar. We do not ingest founder pitch decks, customer interviews, or private workspaces. We do not have insider access to anyone's roadmap. When you score an idea in X-Ray, your input data is private to you and is never used in our public datasets.

The 12 Scripts: Copied Verbatim from Calls That Closed

These are not theoretical. They are adapted from real B2B sales discovery calls and validation interviews that led to first customers. Pick the script that matches your immediate goal. Do not deviate. Ask the questions as written.

Phase 1: Initial Pain Discovery (Goal: Find a monetizable problem)

Script 1: The Cold Outreach (LinkedIn/Email) 'Hi [Name], I'm researching how [Role, e.g., VPs of Sales] handle [Problem Area, e.g., sales forecasting]. I saw you manage a team at [Company]. I'm not selling anything. I just want to learn from experts. Would you be open to a 20-minute call to share your experience?'

Script 2: The Broad Workflow Audit 1. 'Can you walk me through your process for [Task, e.g., onboarding a new enterprise client]?' 2. 'What are the 3-5 main tools you use to get that done?' 3. 'Which part of that process takes up the most manual hours?' 4. 'Last quarter, what was the most painful fire drill related to that process?'

Script 3: The 'Magic Wand' Problem Finder 1. 'Pretend you have a magic wand and can eliminate one task from your daily work. What would it be?' 2. 'Why that one specifically? Tell me about the last time you had to do it.' 3. 'What have you tried to automate or delegate that task before?' 4. 'What happened with those attempts?'

Phase 2: Solution & Competitive Analysis (Goal: See what they use now)

Script 4: The 'Current Stack' Deep Dive 1. 'You mentioned you use Salesforce for your CRM. What's the best thing about it for your team?' 2. 'What's the most frustrating thing about it?' 3. 'If you could add one feature to Salesforce, what would it be and why?' 4. 'What other tools do you use alongside it to fill in the gaps?'

Script 5: The 'Switching Cost' Probe 1. 'When was the last time your team switched a major software tool?' 2. 'What was the trigger for that change?' 3. 'Walk me through the evaluation and purchasing process. Who was involved?' 4. 'How painful was the data migration and retraining process, on a scale of 1 to 10?'

Script 6: The Workaround Uncoverer 1. 'Tell me about a process that's managed entirely on a spreadsheet right now.' 2. 'Who created that spreadsheet? Who maintains it?' 3. 'What's the biggest risk or fear associated with that spreadsheet (e.g., it breaking, wrong data)?' 4. 'Has there ever been a discussion about replacing it with a dedicated tool?'

Phase 3: Budget & Purchase Intent (Goal: Find out if they pay to solve this)

Script 7: The Specialist Deep Dive (for experts) 1. 'Walk me through the workflow for your most recent [complex project].' 2. 'What software, tools, or even spreadsheets did you have open during that process?' 3. 'At what point did that process feel slow, manual, or at risk of error?' 4. 'Have you or your department ever allocated budget to improve that specific step? What happened?'

Script 8: The Budget Holder's Test 1. 'How is the budget for new software tools determined in your department?' 2. 'What was the last tool your team purchased for over $1,000/year?' 3. 'What was the business case or ROI calculation used to get that approved?' 4. 'If you had a budget of $10,000 to improve your team's efficiency tomorrow, where would you spend it?'

Script 9: The 'Pilot Program' Nudge 1. 'It sounds like this is a significant problem. In the past, when you've identified a need like this, what's the process for trying out a new solution?' 2. 'Is there a discretionary budget for small-scale pilot programs or proof-of-concepts?' 3. 'What would a successful 30-day pilot look like to you? What metric would need to improve?' 4. 'Who besides you would need to see those results to approve a wider rollout?'

Phase 4: Post-Prototype Feedback (Goal: Validate your specific solution)

Script 10: The 'This Is Just A Prototype' Frame 'I've built a very rough prototype to address the workflow issues we discussed. I want to be clear: this is not a finished product. I'm looking for critical feedback on the concept. Can I show you?'

Script 11: The Feature Brutality Sorter (After showing prototype) 'Here are the five main things it does. If I told you I had to delete two of them to ship faster, which two would you get rid of? Why?'

Script 12: The Pre-Sale Close 'Based on your feedback, we're building this out. The price will likely be around [Plausible Price Point]. We're offering a 50% discount for our first 10 design partners who pre-pay for the first year. Is this something you'd be interested in exploring?'

This week, pick one script from Phase 1 and one from Phase 2. Use them in at least two interviews each.

Your interview notes are useless until you score them

After 10 interviews, you'll have notes, transcripts, and a vague feeling of progress. This is where most founders stop. They build based on 'vibes' and memorable quotes. This is a mistake. You need to turn qualitative feedback into a quantitative signal.

Create a simple spreadsheet. For each interview, score their answers to the three core questions on a scale of 0 to 5. A 0 means 'they don't have this problem.' A 5 means 'they described a recent, expensive fire drill and have paid for two different solutions that failed.'

This scoring system forces objectivity. It highlights which interviewees are your ideal customers and which are just polite. When you see a pattern of 4s and 5s across multiple interviews in a segment, you have found a signal. This is the first step in building a picture of the `cs_pain corpus` for your market.
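The spreadsheet logic above fits in a few lines of Python. This is a minimal sketch of the 0-5 aggregation, assuming one dict per interview; the question keys and sample data are hypothetical.

```python
# Sketch of the 0-5 interview scoring described above.
# Question keys mirror the three core questions; sample data is invented.
QUESTIONS = ("last_time", "what_tried", "what_cost")

def interview_score(answers: dict) -> float:
    """Average the 0-5 scores across the three core questions."""
    return sum(answers[q] for q in QUESTIONS) / len(QUESTIONS)

def segment_signal(interviews: list[dict], threshold: float = 4.0) -> float:
    """Fraction of interviews in a segment averaging >= threshold.
    A run of 4s and 5s across a segment is the signal to act on."""
    hits = sum(1 for i in interviews if interview_score(i) >= threshold)
    return hits / len(interviews)

segment = [
    {"last_time": 5, "what_tried": 4, "what_cost": 5},  # recent fire drill, paid twice
    {"last_time": 2, "what_tried": 1, "what_cost": 0},  # polite, no real pain
    {"last_time": 5, "what_tried": 5, "what_cost": 4},
]
print(f"{segment_signal(segment):.0%} of interviews show strong pain")  # → 67%
```

Scoring per question rather than per interview also shows you where the signal breaks: a segment can score high on pain but near zero on past spending, which is the "interesting complaint" pattern described earlier.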

Your interview scores are a hypothesis. The next step is to validate that signal against the wider market. The Fluenta X-Ray is for this. It takes your problem statement and scores it against 25 live data feeds. It checks if the pain you found in 10 interviews exists at scale.

This weekend, score your past interviews using this 0-5 scale. The patterns will become obvious in less than an hour.

The default founder state is confirmation bias

The biggest threat to customer discovery is your own brain. You want your idea to be good. This desire leads you to ask leading questions and hear what you want to hear. This is founder confirmation bias, and it's lethal.

These scripts are a defense against this bias. By asking about past, specific behaviors, you stay out of opinion and hypotheticals. You can't argue with a story about a specific event. You can't misinterpret a past invoice.

Stay vigilant. Record your calls and listen back. Did you talk more than 30% of the time? Did you cut them off to pitch your solution? Did you ask 'Would you...?' questions? Be ruthless with your self-assessment. Read our guide on how to fix founder confirmation bias.

Discovery's goal is not to prove your idea is good. The goal is to find the market's truth. If the truth is nobody cares, finding that out in 20 interviews is a victory. It saves you years of building something nobody will buy.

This week, ask a co-founder to listen to one of your interview recordings and grade how well you stuck to the script.

Before you click — common objections

What is the difference between customer discovery and customer validation?

Customer discovery is understanding customers' situations, needs, and pain points. Customer validation is testing if your solution solves those problems and if customers will pay.

Source: Harvard Business School

How long should a customer discovery interview take?

Interviews should be 20-45 minutes. This is enough time to explore pain points without exhausting the person. The scripts in this playbook fit this timeframe.

Source: Fluenta internal best practices


What would change our mind about this playbook

This playbook exists because founders fail to find monetizable pain. The evidence is in our `cs_pain corpus` from 130 ideas. This premise could be wrong. If a re-run in 90 days shows the median cs_pain score rises above 30, the playbook is redundant. It would mean founders are finding pain on their own. We will re-publish with the Q3 batch if that happens.

You finished the playbook

Now run YOUR idea through the same engine.

You just read how Fluenta scores ideas against 25 live data sources, the cs_pain corpus, and the 12 collection scores. The article is generic by design. Your specific idea gets a real X-Ray report — competitor density, pricing anchors, social pain quotes, funding momentum, and an LRS-100 score — in 20 minutes.

No subscription. One run = one full report. The dataset behind this article is the same one your X-Ray runs against.

FAQ


How many customer interviews do I need for validation?

Aim for 10-20 interviews per customer segment. The goal is saturation, where new interviews don't reveal new patterns about the core problem.

Source: Fluenta internal best practices

What questions should I ask in a customer discovery interview?

Focus on open-ended questions about past behavior, not future hypotheticals. Ask about pain points, current solutions, their decision process, and budgets. This playbook provides 12 scripts with exact questions.

Source: Harvard Business School

How do I find customers for discovery interviews?

Use warm intros, targeted LinkedIn outreach, and relevant industry communities on Reddit or Slack. Find people who match your Ideal Customer Profile and recently had the problem you solve.

Source: Fluenta internal best practices
