Research Defender

Less Fraud = Quality Data and Better Insights

Whether you use our full-service Research Services or our self-serve Research Desk, we incorporate our Research Defender platform to verify that your data isn’t fraudulent. It is the most sophisticated tool available for creating a comprehensive view of your data quality.

How much of your “good data” is bad?

Most quantitative research draws on online panels of respondents, but panels are increasingly plagued by fraud from bad actors who seek to hide who they are or where they are. While not all bad data is fraud, all fraudulent data is bad. We asked three key questions about fraud.

To answer these questions, we fielded a 10-minute survey with 1,928 American consumers, using Census-based quotas to ensure a balanced sample. We applied our award-winning Research Defender fraud-prevention software but did not terminate any responses based on its fraud flags. We also ran the data through a reputable post-fielding cleaning program to provide a benchmark for comparison.

There is a “fraud mirage” of data that passes data cleaning but is actually fraudulent

We found that 31% of respondents (nearly 1 in 3) failed at least one Research Defender check. By contrast, just 10% of respondents were flagged by data cleaning. The roughly 21-percentage-point gap between the 31% flagged by Research Defender and the 10% flagged by cleaning represents a sizable “fraud mirage”: the data contains far more fraud than cleaning alone can detect.

Most fraud is undetectable by traditional methods

The fraud mirage persists across all of Research Defender’s flags. In all, only 94 of the 603 cases flagged by Research Defender – about 16% of all detected fraud – were also flagged by data cleaning. Data cleaning caught about 20% of known duplicates and a much smaller share of the fraud detected through advanced digital fingerprinting, suspicious open-ended responses, and respondents attempting dozens of surveys within the past 24 hours. We similarly found that most fraudulent respondents passed standard attention checks.
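
As a rough illustration of this overlap arithmetic, here is a minimal sketch in Python. The respondent IDs and set construction are hypothetical; only the headline counts come from the study (1,928 respondents, 603 Research Defender flags, 94 overlapping flags, and about 193 cleaning flags, i.e. roughly 10% of 1,928).

    # Hypothetical sketch: only the set sizes mirror the figures reported above.
    total_respondents = 1928

    # 603 respondents failed at least one Research Defender check.
    defender_flagged = set(range(603))

    # Data cleaning flagged about 10% of respondents (~193), of whom
    # only 94 were also flagged by Research Defender.
    cleaning_flagged = set(range(94)) | {9000 + i for i in range(99)}

    overlap = defender_flagged & cleaning_flagged
    print(f"Flagged by Research Defender: {len(defender_flagged) / total_respondents:.0%}")  # ~31%
    print(f"Flagged by data cleaning: {len(cleaning_flagged) / total_respondents:.0%}")      # ~10%
    print(f"Flagged fraud also caught by cleaning: {len(overlap) / len(defender_flagged):.0%}")  # ~16%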

Fraud leads to broad bias

Comparing respondents who passed all Research Defender checks with those who failed one or more, we found sizable response differences across key outcomes in all measured domains, including health outcomes, policy and political preferences, and brand awareness and consideration.
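
This kind of group comparison is straightforward to reproduce on any respondent-level dataset. Below is a minimal sketch assuming a pandas DataFrame with a hypothetical failed_defender_check flag and illustrative outcome columns; none of these names or values come from the study itself.

    import pandas as pd

    # Hypothetical respondent-level data; column names and values are illustrative only.
    df = pd.DataFrame({
        "failed_defender_check": [False, True, False, True, False, True],
        "brand_awareness":       [1, 1, 0, 1, 0, 1],
        "supports_policy_x":     [0, 1, 1, 1, 0, 1],
    })

    # Mean outcomes for respondents who passed vs. failed any check;
    # large gaps between the two rows indicate fraud-driven bias.
    print(df.groupby("failed_defender_check")[["brand_awareness", "supports_policy_x"]].mean())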