33% less bad data, 3x faster responses: Why survey design matters more than you think.

Reading time: 1 min
Words: Adrien Vermeirsch
Published: March 25, 2025

From market sizing to due diligence, your questionnaire could be the silent culprit behind fieldwork delays, runaway sampling costs, and flawed insights. Our new study shows how poor survey design isn’t just a pain for respondents — it’s a liability for your firm.

Your survey is only as good as the people taking it.

The format, content, and length of your survey impact whether a respondent will put in the attention and effort your survey requires.

Yet, many surveys are designed as if respondents have unlimited patience for endless questions and unnecessary complexity. The consequence? People you actually want in your survey rush through, disengage, or abandon the survey altogether. Meanwhile, fraudsters and bad actors push through for the payout.

For consulting and private equity (PE) firms that rely on survey data to make critical decisions, poor survey design poses real risks: 

  • Data distortion: Inattentive, bored, or confused responses; more drop-outs.
  • Higher costs: Longer field times, more time spent on data cleaning, and costly do-overs.
  • Shallow insights: When respondents don’t give their all, the depth and accuracy of your findings suffer.

Potloc’s latest Research on Research (RoR) study reveals how small survey tweaks can drive massive improvements in data quality while reducing field time and costs. Our analysis provides actionable insights for consulting and PE firms seeking to optimize their primary research investments.

Survey experience directly impacts data quality.

Quality data is reliable, authentic data, collected when people are honest, attentive, and engaged while taking a survey. We believe that data quality emerges from a combination of three factors:

  • Survey experience: What kind of experience are you putting survey-takers through? Is it quick and smooth, or repetitive and tedious, without sufficient incentives? The more engaged your respondents are, the more honesty and effort they’ll put into their responses.
  • Sample source: Where are the respondents coming from? As we’ll see, not all supply sources provide equally honest, attentive, and engaged respondents, or involve the same acquisition costs and field times.
  • Quality controls: What measures are in place to monitor responses and filter out inadequate respondents? While this step is essential, it’s equally crucial to recognize that no amount of data cleaning will remove all fraudulent or suboptimal respondents from a sample (the sketch below shows why).
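
To make that third factor concrete, here is a minimal sketch of the kind of post-survey checks a cleaning step might run. The thresholds and field names are illustrative assumptions, not Potloc’s methodology; note how each check only catches the crudest failures, which is why cleaning alone can’t rescue a bad sample.

```python
# A minimal sketch of common post-survey quality checks. All thresholds
# and field names are illustrative assumptions, not Potloc's pipeline.

def quality_flags(resp: dict, median_duration_s: float) -> list[str]:
    """Return quality flags for one completed response."""
    flags = []
    # Speeders: finishing far faster than the median suggests inattention.
    if resp["duration_s"] < 0.4 * median_duration_s:
        flags.append("speeder")
    # Straightlining: identical answers across a long rating grid.
    grid = resp["grid_answers"]
    if len(grid) >= 5 and len(set(grid)) == 1:
        flags.append("straightliner")
    # Low-effort open ends: near-empty free-text answers.
    if len(resp["open_end"].strip()) < 10:
        flags.append("weak_open_end")
    return flags

example = {
    "duration_s": 150,                   # finished a ~7 min survey in 2.5 min
    "grid_answers": [4, 4, 4, 4, 4, 4],  # same rating on every grid row
    "open_end": "good",
}
print(quality_flags(example, median_duration_s=420))
# -> ['speeder', 'straightliner', 'weak_open_end']
```

Checks like these catch careless respondents after the fact, but a sophisticated fraudster passes all three. That gap is exactly what screening and survey experience must close upstream.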

Our study: Testing the impact of survey design.

Building on Potloc’s first Research on Research study about the importance of sample sources, we ran a second investigation focused on survey design. We surveyed 3,000 U.S. adults across five different sample sources, testing the impact of key survey optimizations on respondent engagement, drop-out rates, and their cascading effects on sampling cost and field time:

  • Screening questions (the questions that qualify or disqualify respondents) serve as the first filter in a survey, ensuring that only relevant respondents proceed to the core questionnaire. We tested a loose vs. a robust screener.
  • Question formats have a direct influence on how respondents engage with the survey, the depth of their responses, and their likelihood of completing the questionnaire. We tested the relative impact of different formats (ranking, autosum, multiple-select, open-ended, etc.).
  • Survey length, the time required for respondents to complete the survey, plays a significant role in dropout rates and respondent fatigue. We tested a shorter vs. a longer survey.

Our study: Key findings.

At a glance.

33% less bad data with the right screener questions.

3x faster completion with the right question formats.

30% fewer dropouts by optimizing survey length.

1. Tough screeners filtered out 33% more bad respondents.

Your screening questions (the first few questions of your survey) give you an opportunity to provide well-intentioned respondents with the best possible experience — while also terminating bad respondents (i.e. those who don’t fit the target criteria or fraudsters who may lie to maximize their chances of qualifying for the survey).

We tested two types of screeners:

  • Loose screener: A low-friction experience, increasing feasibility but allowing some fraudulent, ineligible, or inattentive participants to pass through more easily.
  • Robust screener: A higher-friction experience, with stricter criteria to help filter out fraudulent, ineligible, and inattentive respondents (the sketch below illustrates the contrast).
[Figure: Robust vs. loose screener results]
The robust screener reduced the proportion of bad respondents by 33%, improving data quality.
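
As a purely hypothetical illustration (the study’s actual screener criteria aren’t published here), the contrast between the two approaches can be expressed as qualification logic. Every check below is an assumption:

```python
# Hypothetical screener logic illustrating the loose vs. robust contrast.
# The specific checks are assumptions, not the study's actual screeners.

def passes_loose(resp: dict) -> bool:
    # Loose: a single self-reported qualifier.
    return resp["role"] == "decision_maker"

def passes_robust(resp: dict) -> bool:
    # Robust: the same qualifier plus trap and consistency checks.
    if resp["role"] != "decision_maker":
        return False
    # Trap option: claiming to use a fictitious tool signals over-claiming.
    if "AcmeFlow" in resp["tools_used"]:  # "AcmeFlow" is made up
        return False
    # Consistency: a team cannot be larger than the whole company.
    if resp["team_size"] > resp["company_size"]:
        return False
    return True

overclaimer = {"role": "decision_maker", "tools_used": ["AcmeFlow"],
               "team_size": 5, "company_size": 200}
print(passes_loose(overclaimer), passes_robust(overclaimer))  # -> True False
```

The same respondent sails through the loose screener and is caught by the robust one; multiplied across thousands of entrants, that difference drives the results below.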

Key results

  • A robust screener led to greater data quality, reducing the proportion of bad respondents in the sample by 33%.
  • However, the robust screener improved data quality at a cost: disqualifying bad-fit participants cut the incidence rate (the share of entrants who qualify) by 28%, so more people must be screened to reach the same number of completes. A worked example follows.
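
To see how those two percentages interact, here is a back-of-the-envelope calculation. The baseline incidence rate (50%) and bad-respondent rate (20%) are assumed for illustration; only the 33% and 28% deltas are the study’s findings:

```python
# Back-of-the-envelope screener trade-off. Baseline incidence (50%) and
# bad-respondent rate (20%) are assumed; the 33% / 28% deltas are measured.

TARGET_CLEAN = 1_000  # clean completes needed

def entrants_needed(incidence: float, bad_rate: float) -> int:
    completes = TARGET_CLEAN / (1 - bad_rate)  # completes, incl. bad ones
    return round(completes / incidence)        # people entering the screener

loose  = entrants_needed(incidence=0.50,              bad_rate=0.20)
robust = entrants_needed(incidence=0.50 * (1 - 0.28), bad_rate=0.20 * (1 - 0.33))

print(loose, robust)  # -> 2500 3208
```

Under these assumptions, the robust screener needs roughly 28% more entrants to hit the same target, but every batch of completes arrives with a third less bad data to clean out. That is the calibration decision the takeaway below describes.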

Takeaway

Stricter data quality standards require slightly more time and investment. Consulting and PE firms should calibrate their screening criteria based on the level of rigor required for a given project. 

  • Robust screeners reduce the level of fluff and fraud in your end data, but they also strain feasibility and sampling costs. 
  • When conducting research for high-stakes projects where accuracy is essential, a robust screener is a valuable investment. A looser screener may be appropriate when crunched for time and budget, but additional quality controls will be necessary later in the process to mitigate data quality risks.

2. Question formats can be tweaked for engagement and speed.

You want your core survey to collect as much data as possible — while ensuring a smooth respondent experience to enhance engagement.

Ranking vs. Multi-Select Questions

    • Ranking questions: Require respondents to rank multiple options in order of preference, which can provide richer insights but also increase cognitive effort.
    • Multi-select questions: Allow respondents to select all applicable answers, offering a simpler and faster alternative.
[Figure: Ranking vs. multi-select question results]

Autosum vs. Multi-Select Questions

    • Autosum questions: Require respondents to distribute values (e.g., allocating 100 points among different choices), increasing engagement but also cognitive load (see the validation sketch below).
    • Multi-select questions: Provide a quicker and easier response method.
[Figure: Autosum vs. multi-select question results]
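
Part of the autosum format’s cognitive load comes from its built-in constraint: answers must hit an exact total before the respondent can move on. A generic sketch of that validation, not any particular platform’s implementation:

```python
# Generic constant-sum (autosum) validation: allocations must total exactly
# 100 before the respondent can proceed. A sketch, not a real platform's code.

def validate_autosum(allocations: dict[str, int], total: int = 100) -> str | None:
    """Return an error message, or None if the allocation is valid."""
    if any(v < 0 for v in allocations.values()):
        return "Allocations cannot be negative."
    diff = total - sum(allocations.values())
    if diff != 0:
        return f"Your points must add up to {total} (you are off by {diff})."
    return None  # valid: the respondent may proceed

print(validate_autosum({"Brand A": 50, "Brand B": 30, "Brand C": 10}))
# -> Your points must add up to 100 (you are off by 10).
```

Every failed validation means another round of mental arithmetic for the respondent, which is why autosum buys engagement at the price of effort.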

Projective vs. Straightforward Questions

    • Projective questions: Require respondents to engage in imaginative exercises (e.g., "If this product were a person, how would you describe it?"), encouraging deeper responses.
    • Straightforward questions: Directly ask for opinions or preferences, reducing complexity.
[Figure: Projective vs. straightforward question results]

Takeaway

Firms should select question formats based on research goals. When depth is critical, some complexity may be justified. However, when engagement and speed are top priorities, minimizing cognitive load is a better approach.

  • Simpler questions often yield the same insights with fewer dropouts. While complex question formats can generate deeper insights, they also increase dropout risk and response bias as respondents lose patience.
  • For projects requiring fast turnaround and broad market insights, prioritize smooth, familiar question formats (e.g. multi-select): they improve efficiency, especially when speed and feasibility are top priorities.

3. Cutting survey length by 40% reduced dropouts by 30%.

While each individual design choice can significantly impact the dropout rate, overall survey length also affects respondent fatigue and the likelihood of dropping out.

We tested two survey lengths: 

  • A shorter survey: 7 minutes long.
  • A longer survey: 12 minutes long.

Longer surveys increase the risk of respondent disengagement, rushed answers, and satisficing (selecting the easiest answer rather than the most accurate one).
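
Dropout rates also compound into recruiting volume and field time. A quick calculation shows how; the 25% baseline dropout rate is an assumption, while the 30% reduction is the study’s finding:

```python
# How dropout rates compound into recruiting needs. The 25% baseline dropout
# rate is assumed for illustration; the 30% reduction is the study's finding.

TARGET_COMPLETES = 1_000

def starters_needed(dropout_rate: float) -> int:
    return round(TARGET_COMPLETES / (1 - dropout_rate))

long_survey  = starters_needed(0.25)         # 12-minute survey
short_survey = starters_needed(0.25 * 0.70)  # 7-minute survey, 30% fewer dropouts

print(long_survey, short_survey)  # -> 1333 1212
```

Under these assumptions, the shorter survey needs about 9% fewer starters, and each complete takes roughly 40% less respondent time, so field time shrinks on both ends.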

[Figure: Shorter vs. longer survey dropout results]

Takeaway

Be strategic about survey length – shorter surveys improve data quality and completion rates. 

  • When high response quality is critical, firms should aim for the shortest survey length possible without sacrificing necessary insights.
  • If a longer survey is required, engagement strategies such as interactive formats, clear incentives, and thoughtful question pacing should be used to maintain attention.

Survey design tips for consulting & PE firms.

  • Survey design is a hidden competitive advantage. In survey research, quality isn’t just about filtering bad data — it’s about attracting good data in the first place. No amount of data cleaning will make up for a survey that bores, confuses, or frustrates respondents. A poorly designed survey isn’t just a pain for respondents — it’s a liability impacting your time, budget, and the quality of your insights.

  • Reality check: We need respondents more than they need us. While a well-structured questionnaire can improve engagement, securing a steady, high-quality respondent pool requires something more fundamental: fair compensation. The next time a provider promises to deliver 100 CFOs in a week on $2 incentives, consider what that really means for the credibility of your data. 

From feasibility to survey design, every choice impacts the quality, speed, and cost of insights. Getting it right requires experience, expertise, and the right tools.

Enlist experts to help put your best survey forward.

Get faster, more reliable results by scripting, hosting, translating, and launching your survey with Potloc. Our experts help you perfect the content, flow, and structure of your questionnaire to meet market research gold standards — and your tightest timelines.

VIEW OUR GUIDE