Mastering the User Interview: Mistakes to Avoid When Talking to Customers
Great product decisions rarely spring from a single “Eureka!” moment. They come from systematic learning, especially from conversations with the people you want to serve. But interviews can easily generate misleading “insights” if you prompt, steer, or misinterpret what you hear. This guide explains the most common mistakes in customer interviews and replaces them with concrete, research‑backed techniques to elicit clear, unbiased, and useful evidence.
Mistake 1: Asking Leading Questions
A leading question contains the answer you want to hear (“Wouldn’t it be easier if we added one‑click checkout?”). Such questions push participants toward agreement and away from describing their own reality. Nielsen Norman Group puts it bluntly: leading questions “interject the answer we want to hear in the question itself,” making it hard for participants to say anything else. (Nielsen Norman Group)
Fix it:
Replace “Wouldn’t it be easier if…?” with “Walk me through what made checkout hard last time.”
Strip out justifications and causal words that hint at a “correct” response. NN/g even recommends being wary of “because” in questions; it nudges people to rationalize. (Nielsen Norman Group)
Mistake 2: Fishing for Opinions and Hypotheticals
Customers are famously generous with opinions about hypothetical products, but opinions and promises about the future are fragile. Rob Fitzpatrick’s The Mom Test is a helpful antidote: “Talk about their life instead of your idea.” In short, ask about specific past behavior, not imagined futures. (momtestbook.com)
Fix it:
Swap “Would you use a budgeting app that…?” for “Tell me about the last time you tried to control spending. What did you do?”
Ask for artifacts: “Can you show me your spreadsheet / screenshots / emails from that time?”
Mistake 3: Over‑relying on What People Say (and Ignoring the Say‑Do Gap)
Psychologists Richard Nisbett and Timothy Wilson showed decades ago that people often can’t accurately report the true causes of their behavior; verbal reports can be confident but wrong. (Deep Blue Repositories) This doesn’t mean interviews are useless, but it does mean you should treat claims about motives and future behavior as hypotheses.
Fix it:
Anchor on observable history: “What happened the last time…?” “How did you decide between X and Y?”
Pair interviews with behavioral data (analytics, logs, usability tests) to triangulate the truth. (Nielsen Norman Group)
Mistake 4: Ignoring Social Desirability and Interviewer Effects
Participants want to look competent and agreeable. That’s social desirability bias: people “present themselves in a socially acceptable way,” especially on sensitive topics. (PMC) They also behave differently when observed (the Hawthorne effect). In UX research, that can surface as over‑polite praise or atypical, “performative” behaviors. (Nielsen Norman Group)
Fix it:
Normalize imperfection: “There are no right answers; broken workflows are especially helpful for us.”
Prefer neutral, nonjudgmental phrasing and let silence do some work; people often add detail to fill a pause.
Whenever possible, ask them to show you how they do something in their natural tool or environment.
Mistake 5: Double‑Barreled, Vague, or Jargon‑Filled Questions
“Tell us about your onboarding and training experience” asks about two different things; answers will be mushy. Survey methodologists warn that wording and even answer order can systematically skew responses; Pew Research Center documents large effects from phrasing and order, and recommends asking one clear question at a time with neutral wording. (Pew Research Center)
Fix it:
Split compound prompts: “First, walk me through onboarding. Next, tell me about training.”
Avoid jargon and inside baseball; use the participant’s own words, not yours.
Pilot your discussion guide with a teammate; mark any question someone has to ask you to clarify.
Mistake 6: Talking Too Much (or Multitasking)
Moderators who over‑explain, defend the design, or fill every silence shape what they hear. Multitasking (e.g., typing while moderating) harms rapport and depth. NN/g lists poor rapport, leading, insufficient probing, and multitasking among top facilitation pitfalls. (Nielsen Norman Group)
Fix it:
Separate roles: one moderator, one note‑taker.
Aim for short prompts; let participants narrate.
Use laddering prompts (“What else?” “Can you say more?” “What made that difficult?”), and then zip it.
Mistake 7: Recruiting the Wrong People
Insights from non‑representative users can nudge roadmaps off course. Rule #1, says NN/g: study participants who represent your target audience; tighten your screener to avoid bias. (Nielsen Norman Group)
Fix it:
Define who is in and who is out before recruiting.
Use disqualifiers that filter out professional testers and friends of the team.
Use quotas to capture meaningful subgroups (e.g., new vs. experienced admins).
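To make quota tracking concrete, here is a minimal sketch of a screener tally; the subgroup names, targets, and the try_recruit helper are illustrative assumptions, not part of any cited guidance.

```python
from collections import Counter

# Illustrative quotas for a 12-person study: subgroup -> target count.
# These subgroups and numbers are assumptions for the example.
QUOTAS = {"new_admin": 6, "experienced_admin": 6}

recruited = Counter()

def try_recruit(subgroup: str) -> bool:
    """Accept a candidate only if their subgroup still has open slots."""
    if recruited[subgroup] >= QUOTAS.get(subgroup, 0):
        return False  # quota full, or subgroup out of scope entirely
    recruited[subgroup] += 1
    return True

# Candidates arriving from the screener, in order.
for candidate in ["new_admin", "new_admin", "experienced_admin", "designer"]:
    print(candidate, "->", "recruit" if try_recruit(candidate) else "pass")
```

Even a tally this simple makes over‑recruiting one subgroup visible early, before it skews your sessions.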
Mistake 8: Skipping Consent and Ground Rules
Ethical basics matter. The UK government’s user‑research guidance is straightforward: open each session by obtaining informed consent, explain what will happen, and help participants settle in with easy opening questions. (GOV.UK)
Fix it:
Before recording, get explicit permission; explain who will see/hear it and how it will be used.
Reiterate that criticism helps; your job is to learn, not to sell.
Mistake 9: Asking “Why?” Too Soon (or Like a Prosecutor)
“Why” can sound accusatory, prompting rationalizations or defensiveness. NN/g recommends avoiding judgmental questions and starting with easy, open ones to build comfort. (Nielsen Norman Group)
Fix it:
Prefer “What was going through your mind?” or “What led you to do X?”
Use time anchors: “When was the last time…?” “What happened right before/after?”
Mistake 10: Treating One Interview as Truth
Qualitative research is about patterns, not anecdotes. In a widely cited study, Guest, Bunce, and Johnson found that many common themes appear by ~12 interviews, with “basic elements” visible as early as six. (University of Warwick) In multi‑site or cross‑cultural work, Hagaman & Wutich found you may need 20–40 interviews to saturate themes across sites. (ERIC)
Fix it:
Plan multiple small rounds; stop when new sessions aren’t adding meaningfully new themes for your specific question and population (a simple way to track this is sketched after this list).
Document what changed in your guide between rounds.
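To make the stopping rule tangible, here is a minimal sketch of a saturation tracker, assuming you code each session’s findings as a set of theme labels; the theme names and the two‑sessions‑without‑novelty threshold are illustrative assumptions.

```python
def new_theme_counts(sessions: list[set[str]]) -> list[int]:
    """For each session, count themes not seen in any earlier session."""
    seen: set[str] = set()
    counts = []
    for themes in sessions:
        counts.append(len(themes - seen))
        seen |= themes
    return counts

# Coded sessions (theme labels are made up for the example).
sessions = [
    {"manual-export", "pricing-confusion"},
    {"manual-export", "slow-sync"},
    {"pricing-confusion", "slow-sync"},
    {"slow-sync"},
]

counts = new_theme_counts(sessions)
print(counts)  # [2, 1, 0, 0]

# One possible stopping rule: two consecutive sessions with no new themes.
saturated = len(counts) >= 2 and counts[-1] == 0 and counts[-2] == 0
print("Consider stopping:", saturated)  # True
```

The shape of the output echoes the research above: novelty drops fast, and tracking it per round tells you when another session is unlikely to pay for itself.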
Mistake 11: Treating Interviews Like Surveys (or Vice Versa)
Surveys and interviews both suffer from response biases (acquiescence, recency, social desirability), but you mitigate them differently. Pew highlights how question format, order, and “agree/disagree” designs can distort results, and why forced‑choice is often better than “select all that apply” for sensitive topics. (Pew Research Center)
Fix it:
When you need breadth or estimates, use a well‑designed survey.
When you need depth (workflows, pain stories, mental models), use interviews.
For confidence, triangulate: combine qualitative interviews with quantitative data to see the same question from multiple angles. (Nielsen Norman Group)
Mistake 12: Failing to Plan (and Then Over‑Scripting)
A good interview looks conversational but is meticulously planned. The GOV.UK Service Manual advises crafting a discussion guide with open, neutral prompts like “How do you…?” and “What are the different ways you…?”, while leaving space to follow the participant. (GOV.UK)
Fix it:
Write a one‑page plan: objective, who/where, roles, warm‑ups, 6–8 core prompts, 10 follow‑ups.
Pilot the guide with a colleague; mark any leading words or double‑barreled prompts to fix.
A Research‑Backed Toolkit for Unbiased Questions
Use these patterns to reduce bias and collect specific, checkable evidence.
Start neutral and easy (rapport):
“Tell me about your role and a typical week.” (GOV.UK recommends beginning with easy, nonjudgmental questions.) (GOV.UK)
Elicit recent, specific behavior (The Mom Test in action):
“Talk about their life, not your idea.” Ask: “When was the last time this happened? Walk me through it step‑by‑step.” (momtestbook.com)
Time‑bound prompts:
“In the past month, how many times did you [action]? What triggered it the last time?” Anchoring to concrete timeframes counters fuzzy recall. (Deep Blue Repositories)
Artifact and evidence prompts:
“Can you show me the spreadsheet/message/template you used?”
“What did you try first? What did you try second?”
Nonjudgmental follow‑ups:
“What made that harder than you expected?”
“What else did you consider before choosing X?”
“If you could wave a magic wand, what would be different about that step?” (Frame it around their process and constraints, not your feature list.)
Avoid or rephrase these:
Leading: “Don’t you think it would be easier if…?” → “How did you try to make it easier last time?” (Nielsen Norman Group)
Hypothetical: “Would you use/pay for…?” → “When did you last pay for something similar? How did you decide the price was worth it?” (momtestbook.com)
Double‑barreled: “How was onboarding and training?” → Split into two questions. (Pew Research Center)
Judgmental “why”: “Why didn’t you do X?” → “What got in the way of doing X?” (Nielsen Norman Group)
Running a High‑Quality Interview (Before / During / After)
Before the interview
Clarify the decision. What product decision will this interview inform? If the answer isn’t clear, postpone the session.
Recruit the right mix. Use a screener to ensure participants match your target audience; avoid “professional testers.” (Nielsen Norman Group)
Draft a lean discussion guide. 6–8 core prompts, each with 2–3 probes. Mark any leading words for removal. (GOV.UK offers practical examples of neutral prompts.) (GOV.UK)
Ethics and setup. Prepare an intro script that sets expectations, explains recording, and seeks informed consent. (GOV.UK)
During the interview
Warm up and normalize. “There are no right answers. We’re learning, and critical feedback is genuinely helpful.” (Helps reduce social desirability.) (PMC)
Observe, don’t perform. Be mindful of the Hawthorne effect; keep tasks natural and your presence low‑key. (Nielsen Norman Group)
Talk less, listen more. Short prompts; long answers. Use silence strategically.
Probe for specifics and sequence. “Then what?” “How did you know to do that?” “What happened next?”
Capture evidence. Screenshots or artifacts (with permission) beat recollections.
After the interview
Debrief immediately. Moderator and note‑taker compare takeaways while memory is fresh.
Code for themes, not quotes. Pull behaviors, triggers, constraints, and workarounds into an affinity map.
Triangulate. Compare interview themes to analytics funnels, support tickets, or usability findings; resolve contradictions explicitly rather than cherry‑picking. (Nielsen Norman Group)
Stop at saturation. When new sessions aren’t adding meaningfully new insights for your decision and population, you’re close to done: often around a dozen participants for a relatively homogeneous group, more for cross‑site work. (University of Warwick)
A 10‑Minute Discussion‑Guide Makeover
Paste your current questions into a doc and apply this checklist line by line (a small script for the mechanical checks follows the list):
One idea per question? If you see “and,” split it. (Double‑barreled questions confuse respondents.) (Pew Research Center)
Neutral words only? Remove “Why didn’t you…,” “Wouldn’t it be better if…,” and “because” prompts that imply causality or blame. (Nielsen Norman Group)
Past behavior over future intent? Swap hypotheticals for “last time” prompts. (momtestbook.com)
Plain language? Replace jargon with the participant’s own terms; if you must use a term of art, define it first. (Nielsen Norman Group)
Probes ready? Add two neutral probes per question (“What else?” “Can you show me?”).
Opening and closing scripted? Start easy; end by asking, “Is there anything we didn’t ask that we should have?” (GOV.UK recommends easing in.) (GOV.UK)
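For the mechanical items above (leading words, hypotheticals, judgmental “why,” possible double‑barreling), a small linter can pre‑flag lines before your human pass. A minimal sketch follows; the pattern lists are illustrative and deliberately crude, so treat flags as prompts for review, not verdicts.

```python
import re

# Illustrative red-flag patterns drawn from the checklist above;
# extend these lists to fit your own guide.
CHECKS = {
    "leading": [r"\bwouldn't it\b", r"\bdon't you think\b", r"\bbecause\b"],
    "hypothetical": [r"\bwould you (use|pay|buy)\b"],
    "judgmental why": [r"\bwhy didn't you\b", r"\bwhy don't you\b"],
    "possibly double-barreled": [r"\band\b"],  # crude: "and" often joins two questions
}

def lint_question(question: str) -> list[str]:
    """Return the names of the checks this question trips."""
    q = question.lower()
    return [name for name, patterns in CHECKS.items()
            if any(re.search(p, q) for p in patterns)]

guide = [
    "Wouldn't it be easier if we added one-click checkout?",
    "How was your onboarding and training experience?",
    "Walk me through the last time you tried to control spending.",
]

for q in guide:
    flags = lint_question(q)
    print(("FLAG: " + ", ".join(flags)) if flags else "ok", "|", q)
```

A script can’t judge vagueness, jargon, or missing probes, so the human checklist pass still does the real work.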
When Interviews Aren’t the Right Tool
Interviews excel at why and how (workflows, needs, and decision criteria), not at measuring frequencies or predicting exact behavior. For sensitive topics or where wording may skew responses, surveys need careful craftsmanship (forced choice often beats “select all”). Use the right tool and combine methods to build confidence. (Pew Research Center)
Pulling It Together
If you remember only three rules, make them these:
Stay neutral. No leading, no selling. (Remove “because,” ditch “Wouldn’t it be better…”, and split double‑barreled prompts.) (Nielsen Norman Group)
Ask about the past, not the future. Talk about their life, not your idea. (momtestbook.com)
Triangulate. Interviews reveal context and motivation; pairing them with behavioral data guards against the limits of self‑report. (Deep Blue Repositories)
Do these consistently and your user interviews will stop producing polite fiction and start generating the kind of evidence that earns trust with engineers, convinces stakeholders, and, most importantly, leads to products people are delighted to use.