While some workers may shun AI, for others the temptation to use it is very real. The field can be “dog-eat-dog,” says Bob, making labor-saving tools attractive. To find the highest-paying gigs, crowdworkers often use scripts that flag lucrative tasks, scour reviews of task requesters, or join better-paying platforms that vet workers and requesters.
CloudResearch began developing an internal ChatGPT detector last year after its founders saw the technology’s potential to undermine their business. Co-founder and CTO Jonathan Robinson says the tool involves capturing keystrokes, asking questions that ChatGPT answers differently than humans do, and looping humans in to review free-text responses.
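One of those signals can be illustrated with a short sketch. If a long free-text answer arrives with almost no recorded keystrokes, the text was probably pasted in from elsewhere, such as a chatbot window. The function name, the ratio heuristic, and the threshold below are illustrative assumptions, not CloudResearch’s actual implementation.

```python
# Hypothetical sketch of a paste-detection signal: compare the number of
# keystrokes recorded while answering against the length of the answer.
# A hand-typed answer registers roughly one keystroke per character;
# a pasted answer registers almost none. Threshold is an assumption.

def likely_pasted(answer: str, keystroke_count: int, min_ratio: float = 0.5) -> bool:
    """Flag answers whose keystroke count is far below the answer length."""
    if len(answer) == 0:
        return False
    return keystroke_count / len(answer) < min_ratio

print(likely_pasted("Typed slowly by hand", keystroke_count=22))             # typed
print(likely_pasted("A long paragraph pasted from a chatbot window...",
                    keystroke_count=2))                                      # pasted
```

A real detector would combine several such signals (timing, revision patterns, probe questions) rather than rely on any single heuristic.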
Others believe that researchers should take it upon themselves to build trust. Justin Sulik, a cognitive science researcher at the University of Munich who uses CloudResearch to recruit participants, says basic decency — fair pay and honest communication — goes a long way. If workers trust that they will still get paid, requesters could simply ask at the end of a survey whether the participant used ChatGPT. “I think online workers are being unfairly accused of doing things that office workers and academics might be doing all the time, which just makes our own workflows more efficient,” says Sulik.
Ali Alkhatib, a social computing researcher, suggests it might be more productive to consider how underpaying crowdworkers could encourage the use of tools like ChatGPT. “Researchers need to create an environment where workers can take their time and really think,” he says. Alkhatib cites work by Stanford researchers who developed a line of code that tracks how long a microtask takes, so that requesters can calculate how to pay a minimum wage.
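The idea behind that snippet is simple enough to sketch: time the task, then convert the target hourly wage into a per-task payment. The function name and the $15/hour figure below are illustrative assumptions, not the Stanford researchers’ actual code.

```python
# A minimal sketch of timing a microtask and computing the per-task
# payment needed to meet a target hourly wage. Wage figure is an
# illustrative assumption.

import time

def fair_payment(task_seconds: float, hourly_wage: float = 15.0) -> float:
    """Payment per task needed to hit the target hourly wage."""
    return round(hourly_wage * task_seconds / 3600, 2)

start = time.monotonic()
# ... worker completes the microtask here ...
elapsed = time.monotonic() - start

# A two-minute task at $15/hour should pay $0.50.
print(fair_payment(120))
```

`time.monotonic` is used rather than wall-clock time so that system clock adjustments can’t distort the measurement.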
Creative study design can also help. When Sulik and his colleagues wanted to measure the contingency illusion — the belief in a causal relationship between unrelated events — they asked participants to move a cartoon mouse around a grid and then guess which rules won them the cheese. Those prone to the illusion chose more hypothetical rules. Part of the design intent was to keep things interesting, Sulik says, so the world’s Bobs wouldn’t go extinct. “And no one is going to train an AI model just to play your specific little game.”
ChatGPT-inspired suspicion could make things harder for crowdworkers, who already have to watch out for phishing scams that harvest personal data through bogus tasks, and who spend unpaid time taking qualification tests. After an increase in low-quality data in 2018 set off a bot panic on Mechanical Turk, demand rose for monitoring tools to verify that workers were who they claimed to be.
Phelim Bradley, the CEO of Prolific, a UK-based crowdwork platform that vets workers and requesters, says his company has begun work on a product to identify ChatGPT users and either educate or remove them. But he has to stay within the bounds of the EU’s General Data Protection Regulation privacy law. Some detection tools “could be pretty invasive if they’re not done with the consent of the participants,” he says.