
[berkman] Jérôme Hergeux on the motives of Wikipedians

Jérôme Hergeux is giving a Berkman lunch talk on “Cooperation in a peer production economy: experimental evidence from Wikipedia.” He lists as co-authors: Yann Algan, Yochai Benkler, and Mayo Fuster-Morell.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jérôme explains the broader research agenda behind the paper. People are collaborating on the Web, sometimes on projects that compete with or replace major products from proprietary businesses and institutions. Standard economic theory doesn’t have a good way of making sense of this, given its usual assumptions of behavior guided by perfect rationality and self-interest. Instead, Jérôme will look at Wikipedia, where people are not paid and their contributions have no signaling value on the labor market. (Jérôme quotes Kizor: “The problem with Wikipedia is that it only works in practice. In theory it can never work.”)

Instead we should think of contributing to Wikipedia as a Public Goods dilemma: contributing has a personal cost and not enough countervailing personal benefit, but it has a social benefit higher than the individual cost. The literature has mainly focused on the “prosocial preferences” that lead people to take the actions and interests of others into account, which lets them overcome the Public Goods dilemma.

There are three classes of models commonly used by economists to explain prosocial behavior:

First, the altruism motive. Second, reciprocity: you respond in kind to kind actions of others. Third, “social image”: contributing to the public good signals something that brings you other utility. (He cites Napoleon: “Give me enough medals and I will win you any war.”)

His research’s method: elicit the social preferences of a representative sample of Wikipedia contributors via an online experiment, and use those preferences to predict the subjects’ field contributions to the Wikipedia project.

To check the reciprocity motive, they ran a simple public goods game. Four people in a group. Each has $10. Each has to decide how much to invest in a public project. Each dollar you invest returns less than a dollar to you, but more than a dollar to the group as a whole. You can condition your contribution on the contributions of the other group members, which enables the researchers to measure how much the reciprocity motive matters to you. [I know I’m not getting this right. Hard to keep up. Sorry.] They also used a standard online trust game: a partner sends you some money (which is multiplied in transit), and you can respond in kind.
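To make the incentive structure concrete, here’s a minimal sketch of a linear public goods game of the sort described. The $10 endowment and four-person group are from the talk; the multiplier (1.6) is an illustrative assumption, not the paper’s actual parameter.

    # Linear public goods game: each dollar kept is worth $1 to you; each
    # dollar contributed is multiplied and split equally among the group.
    # ENDOWMENT and GROUP_SIZE are from the talk; MULTIPLIER is assumed.
    ENDOWMENT = 10
    GROUP_SIZE = 4
    MULTIPLIER = 1.6  # the dilemma requires 1 < MULTIPLIER < GROUP_SIZE

    def payoff(own_contribution, all_contributions):
        shared_pot = MULTIPLIER * sum(all_contributions)
        return ENDOWMENT - own_contribution + shared_pot / GROUP_SIZE

    print(payoff(0, [0, 0, 0, 0]))       # everyone free-rides: 10.0 each
    print(payoff(10, [10, 10, 10, 10]))  # everyone contributes: 16.0 each
    print(payoff(0, [0, 10, 10, 10]))    # you free-ride on the others: 22.0

Free-riding dominates individually even though full contribution is better for everyone; the conditional-contribution design measures how willing each subject is to overcome that.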

Q: Do these tests correlate with real world behavior?

A: That’s the point of this paper. This is the first comprehensive test of all three motives.

For studying altruism, the dictator game is the standard. The dictator can give as much as s/he wants to the other person, and has no reason to transfer any money, so the amount transferred measures altruism. But people might contribute to Wikipedia out of altruism directed only at their own Wikipedia in-group, not general altruism (“directed altruism”). So they ran another game to measure in-group altruism.
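The dictator game’s payoff structure is even simpler. A quick sketch, with the $10 endowment assumed (the talk doesn’t state the stakes):

    # Dictator game: the dictator unilaterally splits an endowment with a
    # passive partner. Any positive transfer reads as altruism, since
    # nothing comes back. The $10 endowment is an assumption.
    def dictator_payoffs(endowment, transfer):
        assert 0 <= transfer <= endowment
        return endowment - transfer, transfer  # (dictator, recipient)

    print(dictator_payoffs(10, 0))  # pure self-interest: (10, 0)
    print(dictator_payoffs(10, 5))  # even split: (5, 5)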

Social image is hard to measure experimentally, so they relied on observational data. First: “Consider as ‘social signalers’ subjects who have a Wikipedia user page whose size is bigger than the median in the sample.” You can be a quite engaged contributor to Wikipedia and not have a personal user page, but a bigger page suggests more concern with social image. Second, they looked at Barnstars data. Barnstars are a “social rewarding practice” that’s mainly restricted to heavy contributors: contribute well to a Wikipedia article and you might be given a barnstar. These show up on Talk pages. About half of the recipients move them to their user page, where they are more visible. If you move one of those awards manually to your user page, Jérôme counts you as a social signaler, i.e., someone who cares about his/her image.
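As a rough sketch of how those two observational proxies could be coded (the column names are hypothetical; the paper’s actual data pipeline isn’t shown in the talk):

    import pandas as pd

    # Hypothetical toy data; the real dataset and field names are assumptions.
    users = pd.DataFrame({
        "user_page_bytes": [0, 250, 4200, 900],
        "moved_barnstar":  [False, False, True, False],
    })

    # Proxy 1: user page bigger than the sample median.
    median_size = users["user_page_bytes"].median()
    users["signaler_by_page"] = users["user_page_bytes"] > median_size

    # Proxy 2: manually moved at least one barnstar from the Talk page
    # to the more visible user page.
    users["signaler_by_barnstar"] = users["moved_barnstar"]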

He talks about some of the practical issues they faced in doing this experiment online. They illustrated the workings of each game with simple Flash animations, and they provided calculators so that participants could see the effect of their decisions before making them.

The subject pool was drawn from registered Wikipedia users, segmented by the number of edits each had made. (The number of contributions at Wikipedia follows a strong power law distribution.) 200,000 people register a Wikipedia account each month (as of 2011), but only 2% make ten contributions in their first month, and only 10% make one contribution or more within the next year. So they recruited the cohort of new Wikipedia contributors (190,000 subjects), the group of engaged Wikipedia contributors, i.e., those with at least 300 edits (18,989 subjects), and Wikipedia administrators (1,388 subjects). To recruit people, they teamed up with the Wikimedia Foundation to put a banner on a Wikipedia page if the user met the criteria as a subject. The banner asked the reader to help with research. Readers who clicked through went to the experiment page, where they were paid in real money if they completed the 25-minute experiment within eight hours.
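A toy version of that pool assignment, with the edit threshold and subject counts from the talk; treating “new contributor” as anyone who registered in the current month is my assumption about how that cohort was defined:

    # Assign a user to one of the three subject pools described above.
    def subject_pool(is_admin, edit_count, registered_this_month):
        if is_admin:
            return "administrator"        # 1,388 subjects
        if edit_count >= 300:
            return "engaged contributor"  # 18,989 subjects
        if registered_this_month:         # assumption about "new"
            return "new contributor"      # ~190,000 subjects
        return "not in the subject pool"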

The demographics of the experiment’s subjects (1,099) matched quite closely the overall demographics of those subject pools. (The pool had 9% women, and the experiment had 8%).

Jérôme shows the regression tables and explains them. Holding the demographics constant, what is the relation between the three motives and the number of contributions? The altruistic motive has no predictive power. Reciprocity in both games (public goods and trust) is a highly significant predictor. This tells us that a reciprocal preference can take you from being a non-contributor to being an engaged contributor; once you’re an engaged contributor, it doesn’t predict how far you’re going to go. Social image is correlated with the number of contributions: 81% of the people who have received barnstars are super-contributors, and being a social signaler is associated with a 130% rise in the number of contributions you make. By both measures, user-page length and barnstars, social image motivates more contributions even among super-contributors.
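For readers who want the shape of that analysis, here’s a hedged sketch of the kind of regression being described, using hypothetical toy data and variable names (the paper may well use a different estimator than OLS on log edits):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical toy data standing in for the experiment's subjects;
    # the real dataset and variable coding are not shown in this post.
    subjects = pd.DataFrame({
        "log_edits":       [2.1, 5.3, 0.7, 4.4, 3.0, 6.2, 1.5, 5.8],
        "altruism":        [4, 7, 2, 5, 3, 6, 4, 5],
        "reciprocity":     [3, 8, 1, 7, 4, 9, 2, 8],
        "social_signaler": [0, 1, 0, 1, 0, 1, 0, 1],
        "age":             [24, 35, 19, 41, 28, 52, 22, 37],
    })

    # Holding demographics constant, how do the elicited motives relate
    # to field contributions?
    model = smf.ols(
        "log_edits ~ altruism + reciprocity + social_signaler + age",
        data=subjects,
    ).fit()
    print(model.summary())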

Reciprocity incentivizes contributions only for those who are not concerned about their social image. So, reciprocity and social image are both at play among the contributors, but among separate groups. I.e., if you’re motivated by reciprocity, you are likely not motivated by social image, and vice versa.

Now Jérôme focuses on Wikipedia administrators. Altruism has no predictive value. But Wikipedia participation is negatively associated with reciprocity; perhaps this is because admins have to have thick skins to deal with disruptive users. For social image, user-page length has significant relevance for admins, but barnstars do not. Social image is a weaker motive among admins than among other contributors.

Jérôme now explores his “thick skin hypothesis” to explain the admin results. In the trust game, look at how much the trustor decides to give to the stranger/partner. Jérôme’s hypothesis: among admins, those who perform more of their policing role will be less trusting of strangers. There’s a negative correlation among admins between the results of the trust game and their contributions: the more time they say they spend on admin edits, the less trusting of strangers they are in the tests. That sort of makes sense, says Jérôme. These admins are doing a valuable job for which they have self-selected, but it requires dealing with irritating people.

Q&A

Q: Maybe an admin is above others and is thus not being reciprocated by the group.

A: Perfectly reasonable explanation, and it is not ruled out by the data.

Q: Did you come into this with an idea of what might motivate the Wikipedians?

A: These are the three theories that are prevalent. We wanted to see how well they map onto actual field behavior.

Q: Maybe the causation goes the other way: working in Wikipedia is making people more concerned about social image or reciprocity?

A: The correlations could go in either direction. But we want to know if those explanations actually match what people do in the field.

Q: Heather Ford looks at why articles are deleted for non-Western topics. She found the notability criteria change for people not close to the topics. Maybe the motives change depending on how close you are to the event.

A: Sounds fascinating.

Q: Admins have an inherent bias in that they focus on the small percentage of contributors who are annoying jerks. If you spend your time working with jerks, it affects your sense of trust.

A: Good point. I don’t have the data to answer it.

Q: [me] If I’m a journalist I’m likely to take away the wrong conclusions from this talk, so I want to make sure I’m understanding. For example, I might conclude that Wikipedia admins are not motivated by altruism, whereas the right conclusion is (isn’t it?) that the standard altruism test doesn’t really measure altruism. Why not ask for self-reports to see?

A: Economists are skeptical about self-reports. If the reciprocity game predicts a correlation, that’s significant.

Yochai Benkler: Altruism has a special meaning among economists. It refers to any motivation other than “What’s in it for me?” [Because I asked the question, I didn’t do a good job recording the answers. Sorry.]

Q: Aren’t admins control freaks?

A: I wouldn’t say that. But control is not a pro-social motive, and I wanted to start with the theories that are current.

Q: You use the number of words someone writes on a user page as a sign of caring about social image, but this is in a context where people are there to write, and you’re correlating it with how much they write as editors and contributors. Maybe people at Wikipedia like to write. And maybe they write in those two different places for different reasons. Also, what do you do with these findings? Economists like to figure out which levers to pull if we’re not getting enough contributors.

Q: This sort of data seems to work well for large platforms with lots of users. What’s the scope of the methods you’re using? Only the top 100 web sites in the world?

A: I’d like to run this on all the peer production platforms in the world. Wikipedia is unusual if only because it’s been so successful. We’re already working on another project with 1,000 contributors at SourceForge, specifically to look at the effects of money, since about half of Open Source contributions are for money.


Fascinating talk. But it makes me want to be very dumb about it, because, well, I have no choice. So, here goes.

We can take this research as telling us something about Wikipedians’ motivations, about whether economists have picked the right three prosocial motivations, or about whether the standard tests of those motivations actually correlate to real-world motivations. I thought the point had to do with the last two alternatives and not so much the first. But I may have gotten it wrong.

So, suppose instead of talking about altruism, reciprocity, and social image we instead talk about the correlation between the six tests the researchers used and Wikipedia contributions. We would then have learned that Test #1 is a good predictor of the contribution levels of beginner Wikipedians, Test #2 predicts contributions by admins, Test #3 has a negative correlation with contributions by engaged Wikipedians, etc. But that would be of no interest, since we have (ex hypothesi) made no assumptions about what the tests are testing for. Rather, the correlation would be a provocation to more research: why the heck does playing one of these odd little games correlate with Wikipedian productivity? It’d be like finding out that Wikipedian productivity is correlated with being a middle child or with wearing rings on both hands. How fascinating!… precisely because these correlations have no implied explanatory power.

Now let’s plug back in the English terms that indicate some form of motivation. So now we can say that Test #3 shows that scoring high in altruism (in the game) does not correlate with being a Wikipedia admin. From this we can either conclude that Wikipedia admins are not motivated by altruism, or that the game fails to detect the altruism that exists among Wikipedia admins. Is there anything else we can conclude without doing some independent study of what motivates Wikipedia admins? The first conclusion flies in the face of both common sense and my own experience of Wikipedia admins; I’m pretty convinced one reason they work so hard is so everyone can have a free, reliable, neutral encyclopedia. So my strong inclination – admittedly based on anecdote and “common sense” (= “I believe what I believe!”) – is to conclude that if a behavioral test misses altruism as a component of the motivation of someone who spends thousands of hours working for free on an open encyclopedia…well, there’s something hinky about that behavioral test.

Even if the altruism tests correlate well with people engaged in activities we unproblematically associate with altruism – volunteering in a soup kitchen, giving away much of one’s income – I’d still not conclude from the lack of correlation with Wikipedia admins that those admins are not motivated by altruism, among other motivations. It just doesn’t correlate with the sort of altruism the game tests for. Just ask those admins if they’d put in the same amount of time creating a commercial encyclopedia.

So, I come out of Jérôme’s truly fascinating talk feeling like I’ve learned more about the reliability of the tests than about the motivations of Wikipedians. Based on Jérôme’s and Yochai’s responses, I think that’s what I’m supposed to have learned, but the paper also seems to be putting forward interesting conclusions (e.g., admins are not trusting types) that rely upon the tests not just correlating with the quantity of edits, but also being reliable measures of altruism, self-image, and reciprocity as motives. I assume (and thus may be wrong) that’s why Jérôme offered an hypothesis to explain the lack-of-trust result, rather than discounting the finding that admins lack trust (to oversimplify it).

(Two concluding comments: 1. Yochai’s The Penguin and the Leviathan uses behavioral tests like these, as well as case studies and observation, to make the case that we are a cooperative species. Excellent, enjoyable book. (Here’s a podcast interview I did with him about it.) 2. I’m truly sorry to be this ignorant.)
