Research Defender
Less Fraud = Quality Data and Better Insights
Whether you use our full-service Research Services or our self-serve Research Desk, we incorporate our Research Defender platform to verify that your data isn’t fraudulent. It is the most sophisticated tool available to create a comprehensive view of your data quality.
When you use our services powered by Research Defender, you can expect:
- Improvement in data quality and reduced time spent manually cleaning data
- Protection from click farms and bots, and elimination of professional survey takers
- Prevention of duplication
- Real-time scoring of the veracity of open-ended responses
- Validation of active email and domain at the time of panel signup
DEMYSTIFYING THE FRAUD MIRAGE
How much of your “good data” is bad?
Three Key Questions:
- How much fraud is in a typical study?
- How easy is it to detect fraud?
- What are the implications of fraudulent data in your study?
Most quantitative research draws on online panels of respondents, but panels are increasingly plagued by fraud from bad actors who seek to hide who they are or where they are. While not all bad data is fraud, all fraudulent data is bad. We asked three key questions about fraud.
To answer these questions, we fielded a 10-minute survey with 1,928 American consumers, leveraging Census-based quotas to ensure balance. We applied our award-winning Research Defender fraud prevention software but did not terminate any responses based on its fraud flags. We also employed a reputable data-cleaning program after fielding to provide a benchmark for comparison.
There is a “fraud mirage” of data that passes data cleaning but is actually fraudulent
We found that 31% of respondents – nearly 1 in 3 – failed at least one Research Defender check. By contrast, just 10% of respondents were flagged by post-field data cleaning. The gap between the 31% flagged by Research Defender and the 10% flagged by cleaning is a sizable “fraud mirage”: the data contains far more fraud than cleaning alone can detect.
Most fraud is undetectable by traditional methods
The fraud mirage persists across all of Research Defender’s flags. In all, only 94 of the 603 cases flagged by Research Defender – about 16% of all detected fraud – were also flagged by data cleaning. Data cleaning caught about 20% of known duplicates and a much lower percentage of fraud detected through advanced digital fingerprinting, suspicious open-ended responses, and respondents attempting dozens of surveys in the past 24 hours. We similarly found that most fraudulent respondents passed standard attention checks.
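The overlap arithmetic above can be reproduced directly from the reported counts. The sketch below is purely illustrative; the variable names are our own, and the only inputs are the figures stated in the text (1,928 respondents, 603 flagged by Research Defender, 94 of those also flagged by cleaning).

```python
# Illustrative recomputation of the "fraud mirage" figures reported
# in the study. Counts come from the text; names are our own.
total_respondents = 1928
defender_flagged = 603          # failed at least one Research Defender check
also_caught_by_cleaning = 94    # flagged by both Defender and data cleaning

defender_rate = defender_flagged / total_respondents       # share failing >= 1 check
overlap_rate = also_caught_by_cleaning / defender_flagged  # share of fraud cleaning catches

print(f"Flagged by Research Defender: {defender_rate:.0%}")   # ~31%
print(f"Of those, caught by cleaning: {overlap_rate:.0%}")    # ~16%
```

The remaining ~84% of Defender-flagged cases that cleaning misses is the mirage: fraudulent responses that would survive into the final dataset.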
FRAUD LEADS TO BROAD BIAS
Comparing respondents who passed all Research Defender checks with those who failed one or more, we found sizable response differences across key outcomes in all measured domains, including health outcomes, policy and political preferences, and brand awareness and consideration.
Areas where we observe bias
- Demographics (including sex, race, income, and education)
- Benchmark measures of physical health & wellness
- Policy preferences & vote choice
- Brand funnel metrics across categories