Highlights from our “The State of Fraud” webinar Q&A
In our latest webinar, Steve Snell, our EVP and Head of Research, walked through what more than 4 billion Research Defender scans so far in 2025 reveal about today’s fraud landscape. The session covered things like month-by-month spikes tied to seasonality and events, regional and source differences, the outsized impact of hyperactive survey-takers, and why fraud and inattention are distinct problems that require different defenses. Attendees also learned about practical guardrails and how teams can integrate Defender via API or Research Desk to scan, score, and block in milliseconds.
You can view the entire recording here.
Let’s take a look at some of the topics we covered during the Q&A session after the webinar.
Steve noted that audience type shifts risk: B2B’s higher CPIs attract more misrepresentation, while B2C sees seasonal hyperactivity. He also stressed that these labels apply only to online work and reflect each client’s targeting.
Question: What do we mean by B2B vs. B2C in our data?
Answer: B2B refers to online surveys where respondents answer on behalf of their company (e.g., IT decision makers, small-business owners). B2C refers to consumer surveys where people share their own attitudes and behaviors. Research Defender doesn’t see offline work, so these definitions apply to online projects and are set with each client’s targeting.
During the webinar, we looked at how survey fraud rises during the summer months.
Question: Is fraud seasonality driven by more bad actors or the same ones getting busier?
Answer: Mostly the same actors ramping up. We see hyperactive activity spike when legitimate traffic dips or demand surges, like holiday weeks or heavy polling cycles. The share of hyperactive respondents rises when there are many open surveys and fewer real people online.
Question: Are “foreign bots” stealing ideas being tested?
Answer: We don’t see strong evidence that idea theft is a primary motive. Higher-CPI studies attract more misrepresentation and location spoofing because payouts are larger. The dominant incentive is still financial gain, not espionage.
In the session, Steve explained how permissioning and decimal precision limit practical geolocation accuracy.
Question: Why do IP duplicates drop while lat-long duplicates rise?
Answer: Lat-long data is only precise when users opt in to share more decimal places. Approximate sharing clusters many nearby people at the same coordinates, so duplicates happen even among legitimate respondents. It’s useful as one signal, but imperfect on its own.
Question: What happens to feasibility if we throttle hyperactive survey takers?
Answer: Setting a sensible cap removes a small slice of traffic but a disproportionately harmful slice of responses. Hyperactive attempts sit in a long right tail of unusual behavior, so cutting them improves data quality with limited impact on speed. Internally we often use thresholds lower than 100 attempts in 24 hours without major feasibility issues.
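To make the idea concrete, here is a minimal sketch of a rolling 24-hour activity cap. The function name and the data shape are illustrative assumptions, not Defender's production logic; only the "fewer than 100 attempts in 24 hours" threshold comes from the answer above.

```python
from datetime import datetime, timedelta

# Illustrative cap; the webinar cites internal thresholds below 100 attempts per 24h.
MAX_ATTEMPTS_PER_DAY = 100

def is_hyperactive(attempt_times, now, cap=MAX_ATTEMPTS_PER_DAY):
    """Return True if a respondent exceeds `cap` survey attempts
    in the 24 hours ending at `now` (a rolling window, not a calendar day)."""
    window_start = now - timedelta(hours=24)
    recent = [t for t in attempt_times if t > window_start]
    return len(recent) > cap
```

A rolling window is used rather than a calendar day so a burst of attempts straddling midnight is still caught.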
Question: Can I route third-party consumer or B2B sample through Research Defender?
Answer: Yes. Most customers use a lightweight API or JavaScript call that scans in milliseconds to score and block before entry. You can also use Research Desk to buy DIY sample and blend external sources, then run them through Defender’s modules: digital fingerprinting, suspicious-tech checks, prior-behavior and duplicate checks, activity thresholds, an attentiveness review item, and a hidden honeypot.
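The pre-entry flow described above — scan, score, then allow or block before the respondent enters the survey — can be sketched as follows. The endpoint URL, payload fields, response shape, and block threshold are all placeholder assumptions for illustration; they are not Research Defender's actual API.

```python
import json
from urllib import request

SCAN_URL = "https://api.example.com/v1/scan"  # placeholder, not the real endpoint
BLOCK_THRESHOLD = 80  # assumed score cutoff for this sketch

def gate_respondent(payload, scan=None):
    """Score a respondent before survey entry and return 'allow' or 'block'.
    `scan` is injectable so the gating logic can be tested without a network call."""
    if scan is None:
        def scan(p):
            # POST the respondent payload and read back a JSON score.
            req = request.Request(
                SCAN_URL,
                data=json.dumps(p).encode(),
                headers={"Content-Type": "application/json"},
            )
            with request.urlopen(req) as resp:
                return json.load(resp)
    result = scan(payload)
    return "block" if result["score"] >= BLOCK_THRESHOLD else "allow"
```

Because the check runs before entry, a blocked attempt never reaches the questionnaire, which is what keeps the scan's millisecond latency from affecting legitimate respondents.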
Question: How do you distinguish bots from click farms?
Answer: We do not label traffic as bot vs. click farm in production. We block based on observed markers like proxies, VPNs, WebRTC manipulation, and batch-style entrances. Today we believe more fraud still comes from human operators than true bots, though that balance may shift with generative AI.
Question: What actually motivates fraud?
Answer: Money. Fraud communities share tactics openly, and modern tools make it easy to hide identity or location. Signals like VPNs are probabilistic. We score behavior cumulatively. A VPN plus other odd markers pushes the score toward block territory, while a VPN alone is not an automatic fail.
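The cumulative scoring described above — a VPN alone is not a fail, but a VPN plus other odd markers pushes the score toward block territory — can be sketched as a toy weighted sum. The signal names, weights, and block line are assumptions made for this example, not Defender's production model.

```python
# Illustrative weights only; real models weigh many more signals probabilistically.
SIGNAL_WEIGHTS = {
    "vpn": 25,
    "proxy": 25,
    "webrtc_manipulation": 30,
    "duplicate_fingerprint": 35,
    "batch_entrance": 30,
}
BLOCK_AT = 60  # assumed cumulative threshold

def risk_score(signals):
    """Sum the weights of observed signals; block once the total crosses BLOCK_AT."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)
    return score, score >= BLOCK_AT
```

With these weights, `risk_score(["vpn"])` stays under the block line, while `risk_score(["vpn", "webrtc_manipulation", "batch_entrance"])` crosses it — mirroring how one probabilistic signal accumulates with others rather than triggering an automatic fail.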
Question: After scanning, how much reconciliation do clients still do?
Answer: Fraud and inattention are different. Blocking fraud does not remove all inattention. Reconciliation rates vary by audience, survey length, difficulty, and relevance. Typical consumer studies reconcile about 5–15 percent. B2B can run 10–20 percent because misprofiling and inattentiveness risks are higher even without outright fraud.
Question: Where can I learn more about sources and exchanges?
Answer: We covered exchanges, devices, and source patterns in depth in our June webinar; that recording has more information.
Want the full context? Watch the recording, or email us at reps@repdata.com.
###
About Research Defender
With the goal of helping the sample and market research industry create a clean, healthy, and efficient ecosystem, Research Defender has created a secure platform that helps clients take control of their traffic and the quality of their product. Research Defender facilitates high-quality and efficient transactions across the online research ecosystem for both buyers and sellers of sample.
About Rep Data
Rep Data provides full-service data collection solutions for primary researchers, helping expedite data collection for primary quantitative research studies, with a hyper-focus on data quality and consistent execution. The company’s mission is to be a reliable, repeatable data collection partner for approximately 500 clients, including market research agencies, management consultancies, Fortune 500 corporations, advertising agencies, brand strategy consultancies, universities, communications agencies, public relations firms, and more.
Media Contact:
media@repdata.com