Worried about Agentic Survey Respondents? We’ve Got You Covered

Fraudsters are getting better at looking good. They have long been able to pass attention checks and provide passable responses, but as they turn to advances in AI, especially large language models (LLMs) that can be refined to mimic human survey responses, they are camouflaging themselves as quality respondents. This is a major threat to survey researchers: even the best-looking fraud is still bad data that can undermine your research.

How do we fight agentic survey respondents when they so closely mimic human behavior? Detecting this kind of fraud is less a question of what they say and more a question of how they say it. Research Defender deploys several defenses explicitly targeting LLMs, automated scripts, and technology-based deception, identifying these patterns through the behaviors fraudsters exhibit when they interact with survey platforms and data collection tools.

Explicit LLM and Crawler Detection

  • Blocking LLM Trackers: The system tracks and blocks automated LLM trackers, Operators, and Agent workflow systems.
  • Device Signaling Checks: It specifically looks for the device types that LLMs 'signal' and has explicitly blocked the device type signaled by the OpenAI Operator.
  • Automated Script Detection: The system looks for automated drivers and scripts, such as Selenium (a simplified sketch of this kind of check follows this list).
  • LLM Checks: Research Defender implements LLM Checks – using models to analyze models – as part of its review of open-ended text responses.
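
To make the script-detection idea concrete, here is a minimal browser-side sketch in TypeScript. It is illustrative only, not Research Defender's implementation: it checks a few well-known automation signals (the navigator.webdriver flag set by WebDriver/Selenium, headless or bot tokens in the user agent, and an empty plugin list), and the signal names and any decision logic here are assumptions chosen for readability.

```typescript
// Illustrative browser-side automation checks (not Research Defender's code).
// Flags common signals left behind by Selenium, headless browsers, and
// agent frameworks that drive a real browser programmatically.

interface AutomationSignals {
  webdriverFlag: boolean;     // navigator.webdriver is set by WebDriver/Selenium
  headlessUserAgent: boolean; // "HeadlessChrome" or bot tokens in the user agent
  missingPlugins: boolean;    // headless sessions often expose an empty plugin list
}

function collectAutomationSignals(): AutomationSignals {
  const ua = navigator.userAgent;
  return {
    webdriverFlag: navigator.webdriver === true,
    headlessUserAgent: /HeadlessChrome|bot|crawler|spider/i.test(ua),
    missingPlugins: navigator.plugins.length === 0,
  };
}

// A single hard signal is enough to flag the session for review; a real
// system would weight far more signals than the three shown here.
function isLikelyAutomated(s: AutomationSignals): boolean {
  return s.webdriverFlag || s.headlessUserAgent || s.missingPlugins;
}
```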

Sophisticated Tech Profiling

  • Advanced Digital Fingerprinting: This capability identifies duplicates, known fraudsters, and respondents who use technology to misrepresent who they are or where they are located.
  • Statistical Inference: It uses statistical inference to identify user agents and machines. The tool also tracks web drivers, crawlers, and other agents.
  • Virtual Machine Check: A check executed via JavaScript inspects the respondent's machine to compare its "advertised" values against its "real" ones (see the sketch after this list).
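
As a rough illustration of the "advertised versus real" idea, the sketch below (TypeScript, browser context) is an assumption-based example rather than Research Defender's actual check: it reads the WebGL renderer string, which often betrays virtualized or software-rendered graphics even when the user agent advertises ordinary consumer hardware.

```typescript
// Illustrative "advertised vs. real" check (not Research Defender's code).
// The user agent advertises a device and platform; the WebGL renderer
// string often reveals the machine actually doing the rendering.

function getWebGLRenderer(): string | null {
  const gl = document.createElement("canvas").getContext("webgl");
  if (!gl) return null;
  const info = gl.getExtension("WEBGL_debug_renderer_info");
  return info ? (gl.getParameter(info.UNMASKED_RENDERER_WEBGL) as string) : null;
}

function looksVirtualized(): boolean {
  const renderer = getWebGLRenderer() ?? "";
  // Software or hypervisor renderers suggest the "real" machine is a VM
  // or emulator, whatever the advertised user agent claims.
  return /swiftshader|llvmpipe|virtualbox|vmware|parallels/i.test(renderer);
}
```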

Behavioral Tracking

  • Open-Ended Response Scoring: This tool flags open-ended responses that are nonsensical or irrelevant, but also looks for patterns associated with agentic responses, including responses that are copy-pasted, too long, or typed at an impossibly fast speed.
  • Behavioral Trend Analysis: Research Defender examines behavioral trends like programmatic browsing, mouse tracking, teleporting mouse (implausible cursor movement), programmatic typing, and copy/paste mechanisms (a simplified sketch of these signals follows this list).
  • Hyper-Activity Detection: Research Defender has visibility into the survey ecosystem and can identify respondents attempting many surveys within a 24-hour period, so if an LLM or Operator starts crawling across the ecosystem, that activity is surfaced and tracked.
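
The behavioral signals above can be approximated with ordinary browser events. The following TypeScript sketch is a hypothetical illustration, not Research Defender's code: the event listeners, thresholds (such as the ~30 ms inter-key gap), and the "teleport" distance are assumptions chosen for readability.

```typescript
// Illustrative behavioral-signal collector (names and thresholds are
// assumptions). Watches for pasted answers, implausibly fast typing,
// and cursor jumps that skip the intermediate movement a hand produces.

const signals = { pasted: false, fastTyping: false, mouseTeleport: false };

let lastKeyTime = 0;
const keyIntervals: number[] = [];

document.addEventListener("paste", () => {
  signals.pasted = true; // the answer arrived via copy/paste rather than typing
});

document.addEventListener("keydown", () => {
  const now = performance.now();
  if (lastKeyTime > 0) keyIntervals.push(now - lastKeyTime);
  lastKeyTime = now;
  // Sustained inter-key gaps under ~30 ms across many keystrokes are
  // faster than humans type; treat as programmatic typing.
  if (keyIntervals.length >= 20) {
    const avg = keyIntervals.reduce((a, b) => a + b, 0) / keyIntervals.length;
    signals.fastTyping = avg < 30;
  }
});

let lastMouse: { x: number; y: number; t: number } | null = null;

document.addEventListener("mousemove", (e) => {
  const now = performance.now();
  if (lastMouse) {
    const dist = Math.hypot(e.clientX - lastMouse.x, e.clientY - lastMouse.y);
    // A large jump between consecutive movement events with almost no
    // elapsed time looks like a cursor driven by a script, not a hand.
    if (dist > 500 && now - lastMouse.t < 20) signals.mouseTeleport = true;
  }
  lastMouse = { x: e.clientX, y: e.clientY, t: now };
});
```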

Fraud is smart and evolving quickly. If your only line of defense is evaluating survey responses, you are likely missing most of the fraud present in your research. Tech-enabled fraud requires tech-forward solutions.