Online labor markets dramatically lower the cost and hassle of conducting experiments. On Amazon’s Mechanical Turk, it is easy to run multiple experiments per week. Figuring out how to run experiments isn’t that hard, as there are already some nice tutorials available.
However, what I felt was missing from the field was a discussion of why, precisely, we can trust results from online experiments. This was the motivation for a new paper, jointly written with Dave Rand (who wrote up part of this study here on the Dolores Labs blog) and Richard Zeckhauser.
While we make the practical and theoretical case for online experimentation, we believe that acceptance of online results as “valid” will come after people start seeing how easily and reliably one can replicate previous studies. This is why blogs like Experimental Turk and Deneme—both of which report results from AMT experiments—are so helpful. In our paper, we continue this process by replicating three results that are fairly well established.
In one experiment for the economists, we show—contra the usual intuition—that at least some Turkers are financially motivated, despite the very low stakes. After performing an initial text transcription task, workers were offered a randomly chosen amount of money to do an additional transcription. The results show, broken out by amount offered, the counts of workers who agreed (“Yes”) and who declined (“No”).
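For readers curious how one might tabulate that kind of data, here is a minimal sketch. The offer amounts and responses below are entirely hypothetical, made up for illustration—they are not the data from our paper:

```python
from collections import Counter

# Hypothetical (offer in cents, accepted?) pairs -- illustrative only,
# NOT the actual experimental data.
responses = [
    (1, False), (1, True), (5, True), (5, False), (5, True),
    (10, True), (10, True), (10, False), (25, True), (25, True),
]

# Count "Yes" and "No" responses by offer amount.
counts = Counter(responses)

for offer in sorted({o for o, _ in responses}):
    yes, no = counts[(offer, True)], counts[(offer, False)]
    print(f"{offer}c: Yes={yes}  No={no}  acceptance rate={yes / (yes + no):.2f}")
```

With real data one would go further—say, a logistic regression of acceptance on offer amount—but a simple cross-tabulation like this already shows whether acceptance rises with the offer.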
Nothing too surprising—offer to pay more and more workers will accept—but at this stage in the development of online experiments as a methodology, “surprising” would probably be bad news.
Anyway, the full paper is here. We’d love to get comments and feedback—it’s not too late to earn a place in our coveted “thanks” footnote!