Dave Rand is giving a lunchtime Berkman talk titled “The Online Laboratory: Taking Experimental Social Science onto the Internet.” It is based on a paper with John Horton and Richard Zeckhauser called “The Online Laboratory: Conducting Experiments in a Real Labor Market,” which is available online (go to Dave’s homepage).
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
He begins by warning against mistaking correlation for causation. But, he says, you of course need more than that. For one thing, you need subjects, time and money. “In social psych, they use smart tricks.” In experimental economics (Dave’s field), they use monetary incentives. In field studies, they use surprise. The Internet can help in all these fields: easy recruitment, many subjects, little effort. But few economists were using online experiments because it was hard to pay people to participate … until online labor markets came along that make it easy to recruit and pay subjects.
Dave is going to focus on Amazon’s Mechanical Turk as an online labor market, although there are many others out there. The name comes from an 18th-century chess-playing robot that actually had a person hidden inside. Amazon’s MT farms out tasks to lots of people who are paid relatively little. They pay small amounts of money for short tasks that are easy for humans but hard for computers: labeling photos, completing surveys, etc. “Human Intelligence Tasks” = HITs. (Dave notes that he has no affiliation with Amazon.) The minimum reservation (= min amt you’re willing to work for) is $1.38. Dave usually pays a flat rate of $0.20–$0.40 per task, and a bonus of up to a dollar for each 5-minute task. Last time he did this, he got 1,500 people in 2.5 days.
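For a sense of the mechanics, here is a minimal sketch of how an experimenter might post a HIT like the ones Dave describes. This is my illustration, not from the talk: it assumes the modern boto3 MTurk client, and the survey URL, title, and helper names are invented placeholders.

```python
# Hypothetical sketch of posting a short survey HIT of the kind Dave
# describes (a $0.20-$0.40 flat rate for a few minutes of work).
# The URL and text below are placeholders, not from the talk.

# Real MTurk "ExternalQuestion" XML schema; the survey URL is invented.
QUESTION_XML = """<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

def build_hit_params(reward_usd, minutes, title, description):
    """Assemble the keyword arguments for mturk.create_hit()."""
    return {
        "Title": title,
        "Description": description,
        "Reward": f"{reward_usd:.2f}",            # MTurk wants a string, in USD
        "MaxAssignments": 1500,                    # ~1,500 subjects, per the talk
        "AssignmentDurationInSeconds": minutes * 60,
        "LifetimeInSeconds": 3 * 24 * 60 * 60,     # keep the HIT up ~3 days
    }

def post_hit(params, question_xml):
    """Actually post the HIT. Needs AWS credentials, so it is not run here."""
    import boto3
    mturk = boto3.client("mturk")
    return mturk.create_hit(Question=question_xml, **params)

params = build_hit_params(0.30, 5, "Short decision-making survey",
                          "Answer a few questions (about 5 minutes).")
```

In practice the experimenter would then pay per-subject bonuses through a separate `send_bonus` call, which is how the incentive-compatible payoffs get delivered.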
Q: What about selection bias?
A: Most of this talk is about why using the Net is a reasonable approach to economics research.
Who are the Turkers? Modal age is around 30, with a tail going out through the 40s and 60s. 35% from the US, 45% from India, and the rest from everywhere else. (Locations are self-reported.) You can limit respondents by country, age (over 18), and satisfactory previous work at MT. It’s completely anonymous to the experimenter. Amazon takes 10%. The motivation overwhelmingly is to make money, which Dave likes because you have a way of controlling the size of the incentive and because that’s how experimental economics works. Of course, you don’t know if they’re watching TV at the same time (or whatever).
Q: How much data do you have to throw away?
A: A fair bit. At the end we often ask a survey question that depends upon people having read the instructions. About 30% of the people fail.
Q: Is there going to be legislation or IRB regs to prevent paying people under the minimum wage?
A: I think most of what I’m running pays around minimum wage, but it’s a good question.
Q: Miriam Cherry is saying we ought to pay min wage.
Dave points out it’s really hard to use MT to do experiments that require feedback.
Education level of the Turkers: Most in the US and elsewhere have a bachelor’s degree. Most people not from the US are making less than $15,000/yr.
What’s great about it: Fast, cheap, easy, incentive-compatible, cross-cultural (with some strange bias — US Turkers may have diff motives than Indian Turkers, etc.) and great potential for field studies.
Isn’t this a biased sample? Yes, but most experimental economic studies are done on college undergrads. You get much more age, SES, geographic variation. You do lose control over what else people are doing while doing the experiment. So, you need replication studies.
He looks at an aggregation of results about the dictator game and the trust game, for dollar or for nothing, and it turns out that stakes don’t matter as much as economists thought.
Dave’s group has replicated various experiments. E.g., they found that priming works (read a religious passage before playing the prisoner’s dilemma), although it depends on the person (cooperation goes down for those who don’t believe in God and goes up for those who do).
Dave’s IRB does not require consent from the subjects since what he’s doing is so consistent with what goes on at MT.
Overall, Dave thinks it’s great. It opens the door to anyone with a research idea. OTOH, it may result in the “file drawer” problem: Only positive results get published. Journals ought to require scientists to say how many times they’ve run the experiment.
Dave goes through a few cool studies, which I will not get right. But I see Ethan Zuckerman liveblogging. Go there now.
Pitfalls: Turkers don’t pay attention. So, put in attention-checking questions. Likewise for understanding instructions. Also, non-random attrition, i.e., people who leave after the first couple of questions because they don’t like them. So, check for it, and give them an initial “hook task,” such as transcribing text; they don’t get paid until they’ve done the second part.
People ask how it feels to run a sweatshop, but Dave hasn’t lost sleep about it. For one thing, he pays generously compared to MT wages.
Q: Does this crowd out people willing to donate work for free, e.g., at Wikipedia?
Q: Is it easier to believe you’re anonymous on Turk than if you’re doing real world experiments?
A: Don’t know. Could be an empirical study of that.