Joho the Blog: berkman Archives

December 6, 2011

[berkman] Jeff Jarvis on Publicness

Jeff Jarvis is giving a lunch time talk about his new book, Public Parts. He says he’s interested in preserving the Net as an open space. Privacy and publicness depend on each other. Privacy needs protection, he says, but we are becoming so over-protective that we are in danger of losing the benefits of publicness. (He apologizes for the term “publicness” but did not want to use the marketing term “publicity.”)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He begins with a history of privacy. In 1890, Brandeis wrote an article about privacy, in response to the rise of Kodak cameras. The NYT wrote about “fiendish Kodakers lying in wait.” Teddy Roosevelt banned photo-taking in public parks. Technology seems often to raise privacy concerns. After Gutenberg some authors did not want their name associated with works. Some say that privacy arose in Britain as a result of the creation of the back stairs. As tech advances, we need to find new norms. Instead, we tend to legislate to try to maintain the status quo.

Now for publicness, he begins by referring to Habermas: the public sphere arose in the 18th C in coffee houses and salons as a counterweight to the power of governments. But Canadian researchers began The Making Publics Project, which concluded that people had the tools for making publics before the 18th C. E.g., printed music, art, etc. all enabled the creation of publics. When a portrait of a Dutch gentleman was shown in Venice, if a Dutch man showed up, he looked like “them,” which helped define the Venetians as “us” (for example).

Mass media made us into a mass. It pretended to speak for us. Online, though, we can each make a public. E.g., Occupy Wall Street, and before that Arab Spring. He recounts tweeting angrily, and after a few glasses of wine, “Fuck you Washington! It’s our money.” Someone suggested to him that there were these new things called “hashtags,” and that this one should be #FUwashington. 110,000 tweets later, the hashtag had become a platform. “People viewed in this empty vessel what they wanted to.” Indeed, the first recorded use of #occupywallstreet was in a tweet that consisted of: “#fuwashington #occupywallstreet.” [Note: It might be #OWS.] Now the public is a network.

We’re going through a huge transition, he says. He refers to the Gutenberg Parenthesis. Before Gutenberg, knowledge was passed around, person to person. It was meant to honor and preserve ancient knowledge. After Gutenberg, knowledge became linear. There are beginnings and ends and boxes around things. It’s about product. There’s a clear sense of ownership. It honors current knowledge and its authors. Then you get to the other side of the parenthesis, and there are similarities. More passing it around, more remixing, less sense of ownership. The knowledge we revere starts to become the network itself. Our cognition of the world changes. The CTO of the Veterans Admin calls the Internet the Eighth Continent. “I used to think of the Internet as a medium,” but now he thinks of it more as a place, although there are problems with the place metaphor. (“All metaphors are wrong,” interjects Doc Searls. “That’s why they work.”) It was a hard transition into the parenthesis, and it’ll be hard coming out of it. It took 50 years after Gutenberg for books to come into their own, and 100 years to recognize the impact of books. We’re still looking at the Net using the past as our analog.

To talk about publicness, Jeff had to go through “the gauntlet of privacy.” He looked for a good definition of privacy. Control is part of it, but “privacy” is an empty vessel itself. “I came to believe that privacy should be seen as an ethic.” It’s about the responsibility for making ethical decisions about sharing. People and companies have different responsibilities here, of course. “There should be an ethic that people should be able to know who has access to their information. And it should be portable.” He gives a shout-out to Doc Searls’ ProjectVRM.

If privacy is an ethic of knowing, publicness is an ethic of sharing. Not everything should be shared, of course, but there’s a generosity of sharing that should have us thinking about how sharing can benefit us. “I shared info about my prostate cancer on line, which means I was sharing information about my non-functioning penis. Why would I do that?” He has friends who learned of this because he was public, and some who shared with him great information about what he was about to go through. One guy started out under a pseudonym but then started using his real name. A woman told her story about how her husband died needlessly. Jeff refers to Xeni Jardin‘s posting of her mammogram and how this will likely save some lives. [Xeni, we are all thinking about you! And love you!]

“I am not utopian,” Jeff says, “because I’m not predicting a better world.” But we should be imagining the best that can happen, as well as the worst. There are many benefits to publicness. Bringing trust. Improving relationships. It enables collaboration. It disarms the notion of the stranger. It disarms stigmas: coming out of the closet disarms the old stigma (although, Jeff adds, no one should be forced out of a closet). Gov’t is too often secret by default, and that should be switched; the same is not true for individuals, where the default should always be a choice. We should make it clear that the Internet is a shitty place to put secrets. Facebook has made mistakes about privacy, but 800M have joined because they want to share. Zuckerberg believes he is not changing but enabling human nature. By nature we want to share.

Jeff got accused by someone of “over-sharing” which he finds an odd phrase. It means “shut up.” The guy does not have to follow Jeff or read his blog. “I wasn’t over-sharing. He was over-listening.”

Companies should share more because it opens up the ability to collaborate. In What Would Google Do? Jeff speculated about a company that might design cars collaboratively. Many scoffed. But Local Motors is now doing it.

When Google pulled out of China, they did the right thing, he says. But can we expect companies to protect the Internet? Nah. Google did a devil’s deal with Verizon. Gov’t also can’t protect the Net. Jeff went to the E-G8, where he asked Sarkozy to take a Hippocratic Oath “to first do no harm.” Sarkozy replied that it’s not harm to protect your children. There are unintended consequences, e.g., danah boyd’s study of the consequences of COPPA. More than half of the 12-year-olds had Facebook pages, most of which had been created with the help of parents, violating the terms of use. Thus, COPPA is requiring families to lie. COPPA has resulted in young people being the worst-served segment on the Net because it’s too risky to build a kid site. We need to protect our children, but we also have to protect the Net.

So, who has to protect the Net? We do. The people of the Net. Jeff went back to the Sullivan Principles (while noting that he’s not equating YouTube censorship with Apartheid) about corporate responsibility when dealing with South Africa. We need a discussion of such principles for doing business on the Net. The discussion will never end, he says, but it gives us something to point at. His own principles, he says, are wrong, but they are: 1. You have a right to connect. (Not that you have a right to demand a connection, but you can’t be disconnected.) 2. Privacy as an ethic of knowing and publicness as an ethic of sharing. 3. What’s public is a public good. The Germans allow citizens to demand Google pixelate Street View, resulting in a degradation of a useful tool. Google is taking pictures of public places in public views. Illinois and MA do not allow you to audio-record police officers. Reducing what’s public reduces the value of the public. What are the principles at work here? 4. Institutions’ info should become public by default. 5. Net neutrality. 6. The Net must remain open and distributed. “The fact that no one has sovereignty is what makes the Net the Net.”

“I am not a technodeterminist,” he says. “We are at a point of choice. We need to maintain our choices. If we don’t protect them, well-meaning and ill-meaning companies will take away those choices.” He points to Berkman as a leading institute for this. “I don’t blame Sarkozy for holding the event. I blame us for not holding our own event, the WE-G8, because it is our Internet.”

Jeff now does The Oprah.

Q: How about Google Plus requiring real names?
A: Anonymity has its place on line. So do pseudonyms. They protect the vulnerable. But I understand that real names improve the discourse. I get the motivation, but they screwed it up. They were far too literal in what someone’s identity is. I think Google knows this now. They’re struggling with a principle and a system. I do understand trying to avoid having the place overrun by fake identities and spam.

Q: German Street View is really about scale. It’s one thing for someone to take a picture of your house. It’s another for Google to send a car to drive down every street and post the pictures for the world. For some people it crosses the ethics of privacy. Why isn’t that a valid choice?
A: But it’s a public view. If you own the building, do you own the view of it? But you’re right about scale. But we need to protect the principle that what is public is the public good.

Q: We have a vacation rental. Any bad guy can use Street View to see if it’s worth robbing.
A: Riverhead LI used Google Earth to look for pools in backyards that had no permits. People were in an uproar. But it could also save children’s lives.

Q: [me] Norms are not the same as ethics. Can you talk about the difference? To what extent should privacy as an ethic of knowing be a norm? Etc.
A: Privacy as an ethic should inform the norms. I’ve been talking about my desire for a return of the busy signal… [missed a bit of this.]

Q: What about the ethics of having info shared for you? As people post photos of each other, enormous amounts of info will be shared…
A: We’re trying to adjust to this as a society. Currently, FB tells me if I’m tagged in a photo and lets me say no. It’s wrong if someone tricks you out of info, or violates a presumed confidence. Tyler Clementi, who committed suicide after a picture of him was posted… the failure was human, not the technology’s.
A: Why don’t we share all of our health information? We’d get more support. We’d have more data that might help. But health insurance would misuse it. Job applicants being disqualified? We could regulate against this. The real reason is stigma. “In this day and age, for anyone to be ashamed of sickness is pathetic.” The fact that we can use illness against people says more about our society.
A: Part of your message is that publicness is our best weapon against stigma.

Q: [espen andersen, who also blogged this talk] In Norway the gov’t publishes how much money people make. That arose when you had to go down to City Hall to get the info. Now there are FB mashups. So what about info that’s used for unintended purposes? And how about the Data Storage Directive that in Europe requires the storage of data “just in case.”
A: Helen Nissenbaum says the key to privacy is context. But it’s hard to know what the context is in many cases. Apparently Norway is rethinking its policy. But there was a cultural benefit that it’d be a shame to lose. Google threatened to pull Gmail out of Germany because of the data storage requirements. Why in the US does email have less protection than mail?
Q: I’m a member of the group suing the Norwegian govt on the grounds that that law is unconstitutional. But no one ever sets targets.

Q: Public by default, private by necessity: Yes. Where’s the low-hanging fruit for universities?
A: Lessig reminds us that if we only use govt data to get the bastards, govt will see openness as an enemy. We need also to be showing the positive benefit of open data. Universities will be in the next wave of disruption of the Net. Around the world, how many instructors write a lecture about capillary action, and how many of them are crap? The fact that you have Open Course lets you find the best lectures in the world. You can find and reward the best. Local education becomes more like tutoring. Why should students and teachers be stuck with one another? I’m reading DIY U and it’s wonderful. It’ll change because of the economics of education.

A: [I had trouble hearing this long question. He recommended going back to Erving Goffman, and pointed out that Net publicness is different if you’re famous.]
A: You’re talking about what a public is. We have thought that the public means everyone. But now we can create limited publics around things. (Jeff points to a problem with circles in G+: People think they create private spheres, but they don’t.) FB confused a public with the public; when it changed the defaults, people thought they were talking to a public but were in fact talking to the public.

Q: [me] Norms of privacy help define publics. Are you arguing for a single norm? Why not? [this was my question and I actually asked it much worse than this.]
A: I’m arguing for choice.
Q: Are Americans wrong for being modest in saunas?
A: Nope. [I’ve done a terrible job of capturing this.]


December 3, 2011

Berkman Buzz

This week’s Berkman Buzz:

  • John Palfrey and Jonathan Zittrain advocate in Science for better data for a better Internet:

  • Mayo Fuster Morell discusses the Spanish Revolution and the Internet: link

  • Jonathan Zittrain warns that the personal computer is dead: link

  • Zeynep Tufekci explores the pack mentality in journalism: link

  • The Citizen Media Law Project writes about undercover police monitoring of the Occupy protests in Nashville: link

  • Weekly Global Voices: “Global Voices Podcast: Technology that Empowers!”


November 20, 2011

Berkman Buzz

This week’s Berkman Buzz:

  • VIDEO: Justin Reich discusses technology and educational equality:

  • Juan Carlos de Martin publishes an op-ed in La Stampa on Italy’s digital agenda [in Italian] link

  • The Citizen Media Law Project reviews an ACLU/NAACP lawsuit revolving around an ad at the Philly International Airport link

  • Dan Gillmor argues against SOPA link

  • Herdict attends the first EU Hackathon link

  • Harry Lewis reviews this week’s government attacks on freedom of speech and thought link

  • Weekly Global Voices: “Zambia: Porn Video Sparks Debate on Gender, Culture and Morality” link


November 5, 2011

Berkman Buzz

This week’s Berkman Buzz:

  • Wendy Seltzer reports on last week’s ICANN public meeting:

  • The OpenNet Initiative makes its global filtering data available for download and reuse: link

  • Ethan Zuckerman explores the phenomenon of the “rebuttal tweet”: link

  • Jeffrey Schnapp discusses physicality in the libraries of the future:

  • The Citizen Media Law Project keeps track of ACTA: link

  • Weekly Global Voices: “Global Voices Podcast: Bridging the Language Gaps”


October 29, 2011

Berkman Buzz

This week’s Berkman Buzz:

  • Ethan Zuckerman explores mapping and storytelling at Occupy Wall Street: link

  • Dan Gillmor critiques the WikiLeaks payments blockade: link

  • The Citizen Media Law Project spots Bigfoot fighting for free speech: link

  • Herdict covers China’s censorship of the ‘Occupy’ movement: link

  • Weekly Global Voices: “United Kingdom: At Age 77, a Life of Inspiration”


October 25, 2011

[berkman] [2b2k] Michael Nielsen on the networking of science

Michael Nielsen is giving a Berkman talk on the networking of science. (It’s his first talk after his book Reinventing Discovery was published.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He begins by telling the story of Tim Gowers, a Fields Medal winner and blogger. (Four of the 42 living Fields winners have started blogs; two of them are still blogging.) In January 2009, Gowers started posting difficult problems on his blog and working on them in the open. Plus he invited the public to post ideas in the comments. He called this the Polymath Project. 170,000 words in the comments later, ideas had been proposed and rapidly improved or discarded. A few weeks later, the problem had been solved at an even higher level of generalization.

Michael asks: Why isn’t this more common? He gives an example of the failure of an interesting idea. It was proposed by a grad student in 2005. Qwiki was supposed to be a super-textbook about Quantum Mechanics. The site was well built and well marketed. “But science is littered with examples of wikis like this… They are not attracting regular contributors.” Likewise many scientific social networks are ghost towns. “The fundamental problem is one of opportunity costs. If you’re a young scientist, the way you build your career is through the publication of scientific papers… One mediocre crappy paper is going to do more for your career than a series of brilliant contributions to a wiki.”

Why then is the Polymath Project succeeding? It just used an unconventional means to a conventional end: they published two papers out of it. Sites like Qwiki that are ends in themselves go unrewarded. We need a “change in norms in scientific culture” so that when people are making decisions about grants and jobs, people who contribute to unconventional formats are rewarded.

How do you achieve a change in the culture? It’s hard. Take the Human Genome Project. In the 1990s, there wasn’t a lot of advantage to individual scientists in sharing their data. In 1996, the Wellcome Trust held a meeting in Bermuda and agreed on principles saying that if you sequenced more than a thousand base pairs, you needed to release them to a public database, where they’d be put into the public domain. The funding agencies baked those principles into policy. In April 2000, Clinton and Blair urged all countries to adopt similar principles.

For this to work, you need enthusiastic acceptance, not just a stick beating scientists into submission. You need scientists to internalize it. Why? Because you need all sorts of correlative data to make lab data useful. E.g., Sloan Digital Sky Survey: a huge part of the project was establishing the calibration lines for the data to have meaning to anyone else.

Many scientists are pessimistic about this change occurring. But there are some hopeful precedents. In 1610 Galileo pointed his telescope at Saturn. He was expecting to see a small disk. But he saw a disk with small knobs on either side — the rings, although he couldn’t resolve the image further. He sent letters to four colleagues, including Kepler, that scrambled his discovery into an anagram. This way, if someone else made the discovery, Galileo could unscramble the letters and prove that he had made it first. Leonardo, Newton, Hooke, and Huygens all did this. Scientific journals helped end this practice. The editors of the first journals had trouble convincing scientists to reveal their info because there was no link between publication and career. The editor of the first scientific journal (Philosophical Transactions of the Royal Society) goaded scientists into publishing by writing to them suggesting other scientists were about to disclose what the recipients of the letter were working on. As the economist Paul David says, the change to the modern system was due to “patron pressure.”

Michael points out that Galileo immediately announced the discovery of four moons of Jupiter in order to get patronage bucks from the Medicis for the right to name them. [Or, as we would do today, The Comcast Moon, the Staples Moon, and the Gosh Honey Your Hair Smells Great Moon.]

Some new ideas: The Journal of Visualized Experiments videotapes lab work, thus revealing tacit knowledge. GigaScience publishes data sets as first-class objects. Open Research Computation makes code into a first-class object. And blog posts are beginning to show up on Google Scholar (possibly because it’s paying attention to tags?). So, if your post is being cited by lots of articles, your post will show up at Scholar.

[in response to a question] A researcher claimed to have solved the P vs. NP problem. One of the serious mathematicians (Cook) said it was a serious attempt at a solution. Mathematicians and others tore it apart on the Web to see if it was right. About a week later, the consensus was that there was a serious obstruction, although they salvaged a small lemma. The process leveraged expertise in many different areas — statistical physics, logic, etc.

Q: [me] Science has been a type of publishing. How does scientific knowledge change when it becomes a type of networking?
A: You can see this beginning to happen in various fields. E.g., people at Google talk about their software as an ecology. [Afterwards, Michael explained that Google developers use a complex ecology of libraries and services with huge numbers of dependencies.] What will it mean when someone says that the Higgs Boson has been found at the LHC? There are millions of lines of code, huge data sets. It will be an example of using networked knowledge to draw a conclusion where no single person has more than a tiny understanding of the chain of inferences that led to this result. How do you do peer review of that paper? Peer review can’t mean that it’s been checked, because no one person can check it. No one has all the capability. How do you validate this knowledge? The methods used to validate are completely ad hoc. E.g., the Intergovernmental Panel on Climate Change has more data than any one person can evaluate. And they don’t have a method. It’s ad hoc. They do a good job, but it’s ad hoc.

Q: The classification of finite groups was the same. A series of papers.
A: Followed by a 1200 word appendix addressing errors.

Q: It varies by science, of course. For practical work, people need access to the data. For theoretical work, the person who makes the single step that solves it should get 98% of the credit. E.g., Newton v. Leibniz on calculus. E.g., Perelman’s approach to the Poincaré conjecture.
A: Yes. Perelman published three papers on a preprint server. Afterward, someone published a paper that filled in the gaps, but Perelman’s was the crucial contribution. This is the normal bickering in science. I would like to see many approaches and gradual consensus. You’ll never have perfect agreement. With transparency, you can go back and see how people came to those ideas.

Q: What is validation? There is a fundamental need for change in the statistical algorithms that many data sets are built on. You have to look at those limitations as well as at the data sets.
A: There’s lots of interesting things happening. But I think this is a transient problem. Best practices are still emerging. There are a lot of statisticians on the case. A move toward more reproducible research and more open sharing of code would help. E.g., many random generators are broken, as is well known. Having the random generator code in an open repository makes life much easier.

Q: The P vs. NP episode left a sense that it was a sprint in response to a crisis, but how can it be done in a more scalable way?
A: People go for the most interesting claims.

Q: You mentioned the Bermuda Principles, and NIH requires open-access publication one year after a paper is published. But you don’t see that elsewhere. What are the sociological reasons?
Peter Suber: There’s a more urgent need for medical research. The campaign for open access at NSF is not as large, and the counter-lobby (publishers of scientific journals) is bigger. But Pres. Obama has said he’s willing to do it by executive order if there’s sufficient public support. No sign of action yet.

Q: [peter suber] I want to see researchers enthusiastic about making their research public. How do we construct a link between OA and career?
A: It’s really interesting what’s going on. A lot of discussion about supporting gold OA (publishing in OA journals, as opposed to putting it into an OA repository). Fundamentally, it comes down to a question of values. Can you create a culture in science that views publishing in gold OA journals as better than publishing in prestigious toll journals? The best way perhaps is to make it a public issue. Make it embarrassing for scientists to lock their work away. The Aaron Swartz case has sparked a public discussion of the role of publishers, especially when they’re making 30% profits.
Q: Peter: Whenever you raise the idea of tweaking tenure criteria, you unleash a tsunami of academic conservatism, even if you make clear that this would still support the same rigorous standards. Can we change the reward system without waiting for it to evolve?
A: There was a proposal a few years ago that it be done purely algorithmically: produce a number based on the citation index. If it had been done, simple tweaks to the algorithm could have built in incentives: “You get a 10% premium for being in a gold OA journal,” etc.
Q: [peter] One idea was that your work wouldn’t be noticed by the tenure committee if it wasn’t in an OA repository.
A: SPIRES [the high-energy physics literature database] lets you measure the impact of your preprint articles, which has made it easier for people to assess the effect of OA publishing. You see people looking up the SPIRES number of a scientist they just met. You see scientists bragging about the number of times their slides have been downloaded via Mendeley.
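The purely algorithmic tenure metric described earlier in this exchange is easy to sketch. Here is a minimal, hypothetical version in Python; the 10% gold-OA premium comes from the talk, but the function name, its inputs, and the sample numbers are invented for illustration:

```python
def career_score(papers, oa_premium_percent=10):
    """Toy tenure metric: total citations, with a percentage premium for
    papers published in gold open-access journals.
    Hypothetical illustration only -- not a real metric."""
    total = 0
    for citations, is_gold_oa in papers:
        # Work in integer "percent" units to keep the arithmetic exact
        weight = 100 + (oa_premium_percent if is_gold_oa else 0)
        total += citations * weight
    return total / 100

# (citations, published_in_gold_OA_journal) for each paper -- invented data
papers = [(40, False), (10, True), (5, True)]
print(career_score(papers))  # 56.5
```

The point of the sketch is how small the tweak is: changing one default parameter changes what the whole reward system encourages.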

Q: How can we accelerate by an order of magnitude in the short term?
A: Any tool that becomes widely used to measure impact affects how science is done. E.g., the H Index. But I’d like to see a proliferation of measures because when you only have one, it reduces cognitive diversity.
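For reference, the h-index Michael mentions is defined as the largest h such that a researcher has h papers with at least h citations each. A minimal sketch, with invented citation counts:

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the rank-th most-cited paper still has >= rank citations
        else:
            break
    return h

# Invented citation counts for one researcher's papers
print(h_index([25, 8, 5, 4, 3, 1]))  # 4
```

A single number like this is easy to compute and easy to game, which is exactly why a proliferation of measures preserves more cognitive diversity than any one of them.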

Q: Before the Web, Erdos was the roving collaborator. He’d go from place to place and force collaboration. Let’s duplicate that on the Net!
A: He worked 18 hours a day, 365 days/year, high on amphetamines. Not sure that’s the model :) He did lots of small projects. When you have a large project, you bring in the expertise you need. Open collaboration has the unpredictable spread of expertise that participates, and that’s often crucial. E.g., Einstein never thought that understanding gravity required understanding non-standard geometries. He learned that from someone else [missed who]. That’s the sort of thing you get in open collaborations.

Q: You have to have a strong ego to put your out-there paper out there to let everyone pick it apart.
A: Yes. I once asked a friend of mine how he consistently writes edgy blog posts. He replied that it’s because there are some posts he genuinely regrets writing. That takes a particular personality type. But the same is true for publishing papers.
Q: But at least you can blame the editors or peer reviewers.
A: But that’s very modern; peer review only became standard in the 1960s. Of Einstein’s 300 papers, only one was peer reviewed … and that one was rejected. Newton was terribly anguished by the criticism of his papers. Networked science may exacerbate it, but it’s always been risky to put your ideas out there.

[Loved this talk.]


October 23, 2011

Berkman Buzz

This week’s Berkman Buzz:

  • The Digital Public Library of America announces $5 Million in Funding from the Sloan Foundation and Arcadia Fund:

  • Ethan Zuckerman recaps Beth Coleman’s presentation on “Tweeting the Revolution”

  • John Palfrey describes teaching at the Harvard Graduate School of Design on the history, present, and future of libraries

  • Rebecca MacKinnon examines why censorship is a central issue in Tunisian political discourse and debates

  • The Youth and Media Project launches a new website

  • The Citizen Media Law Project reports on how one doctor’s complaint turned a public database private

  • Weekly Global Voices: “Israel: Joy and Anger Continue Over Shalit Deal”


October 18, 2011

[berkman] Yochai Benkler on his new book

Yochai Benkler is giving a talk about his new and wonderful book, The Penguin and the Leviathan. (I interviewed him about it here.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Yochai begins by pointing to Occupy Wall Street as teaching us much about cooperation and collaboration.

On Oct. 23, 2008, Alan Greenspan acknowledged to Rep. Henry Waxman that his model of the world was wrong. “I made a mistake in presuming that the self interest of organizations… was such that they were best capable of protecting their own shareholders.” We live in a world built around a mistaken model of human motivation, Yochai says. The basic error is not that we are sometimes self-interested, for we are. The mistake is thinking we could build our systems assuming that we are more or less uniformly self-interested. We’ve built systems that try to get incentives right, or that try to get punishment right. But the scientific case for selfishness has now retreated, and we should model our systems on this new knowledge.

In 1968 Gary Becker said that we could model crime as a payoff model: the benefits of the crime vs. the cost of the penalty. So, we get Three Strikes laws. In another domain, the Jensen and Murphy paper on incentive pay for top management assumes that every level of the enterprise will try to shirk and put more in their pockets, so (the theory goes) you should increase the stock options at the top. But that hasn’t worked very well for companies in terms of return to stockholders; you get misalignment from this model. This model is like Becker’s: it’s about getting the incentives and penalties right. Yochai tells of a mother trying to get her three-year-old into a car by threatening to take five cents off the child’s allowance. “This model penetrates everywhere,” he says.

This intellectual arc is everywhere. Evolutionary biology has moved from group selection to selfish gene through kin altruism and direct reciprocity. Economics also: strong assumptions of self-interest. Political theory, from Downs, to Olson, to Hardin: all assume the inability to come together on a shared set of goals. Management science and organizational sociology: From Taylor to Weber to Schumpeter through Williamson. Although there are counter narratives in each of these fields, selfishness is the dominant model.

And yet on line we see how easily we cooperate. “Things that shouldn’t have worked, have worked.” He draws a 2×2: market based and non-market based vs. decentralized and centralized. In each, there have been huge successes of social production. This is in fact a new solution space.

In each of the aforementioned disciplines, there is now a development of more complex models that take account of cooperation. E.g., evolution: indirect reciprocity; cooperation emerges much more easily in the new models. Economics: a shift toward experimentation and modeling away from self-interest, and the development of neuroeconomics. Political theory: Elinor Ostrom on the commons. Management science: work on team production and networks; high-commitment, high-performance organizations.

The core insight of all of these fields is that the model of uniform self-interest is inadequate. Then there’s debate.

Yochai compares Dawkins in The Selfish Gene (1976) and Martin Nowak (2006). Dawkins says we are born selfish. Nowak says: “Perhaps the most remarkable aspect of evolution is its ability to generate cooperation in a competitive world.” It’s an old debate, Yochai says, citing Kropotkin vs. Spencer vs. Boas vs. Margaret Mead. The debate is now swinging toward Kropotkin, e.g., neural research that shows empathy via brain scans: a partner’s brain lights up in the same way when s/he sees the other person undergoing pain. He points to the effect of oxytocin on trust, and for the first time in Berkman history makes a reference to monogamous voles.

Why does this matter, Yochai asks. He refers to an experiment by Lee Ross et al. Take a standard Prisoner’s Dilemma. All predictions say that everyone should defect. Take the same game, give it to American students, Israeli fighter pilots, etc., and tell them either “You’re going to play the Community Game” or “…the Wall Street Game.” In the former, 70% opened cooperatively and kept cooperating through the seven rounds. The latter opened at 30% cooperative. The 30% who defect even in the Community Game represent a significant segment that has to be dealt with in a cooperative system. But there’s a big middle that will go one way or the other depending on what they understand their context to be. So, concludes Yochai, it’s important to design systems that let the middle understand the system as cooperative.
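The framing result can be summarized as a fixed cooperative core, a fixed defecting core, and a large swing middle that follows whatever the frame signals. This sketch only rounds the 70%/30% opening rates from the talk into illustrative population shares; the three-way split is my reading of Yochai’s point, not the paper’s reported breakdown.

```python
CORE_COOPERATORS_PCT = 30  # cooperate under either frame
CORE_DEFECTORS_PCT = 30    # defect under either frame
SWING_MIDDLE_PCT = 40      # follow the cue the frame provides

def opening_cooperation_pct(frame: str) -> int:
    """Percent of players opening cooperatively under a given frame."""
    if frame == "Community Game":
        # The swing middle reads the situation as cooperative and joins in.
        return CORE_COOPERATORS_PCT + SWING_MIDDLE_PCT
    if frame == "Wall Street Game":
        # The swing middle reads the situation as competitive and defects.
        return CORE_COOPERATORS_PCT
    raise ValueError(f"unknown frame: {frame}")

print(opening_cooperation_pct("Community Game"))    # 70
print(opening_cooperation_pct("Wall Street Game"))  # 30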

So, we move from tough-on-crime to community policing. That changes all sorts of systems, including technical, organizational, institutional, and social. Community policing has been widely adopted because it’s generally successful. We see that we have success with actual practices that depend not on reward, punishment, and monitoring, but on cooperation. We’re finding out about this online, but it’s not happening just online.

Yochai says that he’s just at the beginning of an investigation about this. There’s a limit to how much we can get out of evolution, he says. It’s hard to design systems on the basis of evolution. Instead, we see a lot of work across many different systems.

But we still want to know: Won’t money help? The answer is what’s called “crowding out.” We care about material interests, but we also care about fairness. We have emotional needs. We have social motivations. What if these interests don’t align? The Titmuss-Arrow debate of 1970–71 was about the motivations for donating blood. A 2008 study (Mellström and Johannesson) paid people money to give blood, and the payment reduced the number who gave; when you allow them to give the money away to charity, the numbers recover. Adding money can suppress an activity more than it increases it. That’s crowding out. It’s not uniform in the population. Designing systems is much harder than coming up with a material reward that appeals to people’s self-interest. We do not have full answers here.
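Crowding out is the claim that a material reward can displace intrinsic motivation, so net participation falls. Here is a toy model of that shape only; the function, coefficients, and numbers are all invented for illustration and are not fitted to the blood-donation study.

```python
def donation_rate(intrinsic: float, payment: float) -> float:
    """Toy crowding-out model: participation = intrinsic motivation
    plus a small material term, minus a fixed 'crowding' penalty
    that any positive payment triggers by reframing the act as a
    market transaction."""
    material = 0.02 * payment
    crowding_penalty = 0.15 if payment > 0 else 0.0
    return max(0.0, intrinsic + material - crowding_penalty)

# A small payment suppresses participation rather than boosting it:
print(round(donation_rate(intrinsic=0.5, payment=0), 2))  # 0.5
print(round(donation_rate(intrinsic=0.5, payment=2), 2))  # 0.39
```

The design point is that the penalty term is discontinuous: it is the existence of a price, not its size, that switches people from a social frame to a market frame. Only a payment large enough to outweigh the penalty would recover participation through self-interest alone.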

Think of cooperative human systems along three vectors. 1. Conceptual: from rationality as universal self-interest to a diversity of motivations. 2. Design: cooperative human systems built on behaviorally realistic, evidence-based design. 3. Politics: we cannot separate incentives from fairness, ethics, empathy, solidarity.

Yochai points to a number of factors, but focuses on fairness: of outcomes, of intentions, and of processes.

Outcomes: What counts as fair is different in different cultures, especially when you move outside of market economies. In market societies, 50:50 is the norm for fairness. Once it gets to 30:70, people will walk away. But you can change that if you change the framing, e.g., “You got lucky.” But there is no single theory of justice. Yochai looks at a study of the cement trucking industry. It turns out that there are large pay disparities. Companies also differ in what they say they pay for: performance, or time. They don’t always do what they say, though. But when you look at real performance measures, you have fewer accidents and out-of-service events if the company is accurate in what it says, no matter what it says.

We don’t have an agreed-upon theory of justice, he says. This explains the 99% vs. 53% debate around Occupy Wall Street. This is a debate over basic moral commitments without which a system cannot function. There is no way to resolve it either through neutral principles or by efficiency arguments.

Intentions also matter to fairness. Where bad intentions are excluded (e.g., the outcome was just a roll of the dice), there’s much less negative reciprocity.

Processes: Tyler (2003) showed that procedural justice correlates with internalized compliance. Yochai points to the militarization of the police as they deal with OWS. The image projected to the crowd is one of lack of regard for process. He compares this to a massive demonstration in Israel in which the police stood a good distance away, and a different relationship was fostered.

We can see a revival of the “sharing nicely” idea we teach our children. In science. In business. Science is beginning to push back against the assumption of selfishness. It turns out that we aren’t universally self-interested. Different people respond differently, and each person responds differently in different contexts.

We need a new field of cooperative human systems design that accounts for the diversity of motivation, and that takes seriously the issue of “crowding out”: adding incentives can result in worse outcomes.

And, Yochai concludes, we need a renewed view of our shared humanity.

Q: Fascinating. But: The passage from evolution to the social sciences has long been discredited. Also, it’s too simple to say that the solution to the banking problem is that we need more cooperation. The banks are supported by a set of interests bigger than that.
A: You say sociobiology has been discredited. That was true in the early-to-mid 1980s but is no longer a good description. The social sciences and anthro have been moving to evolutionary models. Economics too. What seemed resolved in the 1980s is now, especially in the social sciences, unresolved. Second, sure, bankers self-select and control the system. The real answer is that it’s a lot of work. When you have a system optimized for money, and money is the social signal, it self-selects for people driven by that. We need long-term interventions to increase cooperation. E.g., the person who can work with Open Source at, say, IBM, is different from the person who can work her/his way up a hierarchy; the company therefore has to train itself to value those who cooperate.

Q: I just went through MIT’s tutorial that instructed me how my ideas would be licensed. I said that maybe there should be information in your office about how to contribute more openly. How do you systematize open, collaborative forms across the entire educational system?
A: Lots of people in this room are working on this problem in different ways. We fight, we argue, we persuade. Look at university open access publication. We use our power within the hierarchy of universities to raise a flag and to say we can do it a new way. That allows the next person to use us as an example. After I released Wealth of Networks for free on the Web, I got emails from all sorts of people wanting to know how to negotiate that deal for themselves. Universities should be easy.

Q: What are the burning policy implications of this shift in the way we rule the world? What would you change first?
A: I should note that I don’t address that in the book. We need an assessment of community policing and the big board [?] approach. The basic question is whether we continue to build a society based on maximizing total growth, or one that trades off some growth for a more equitable distribution of outcomes. The point is much broader than open access, patent, copyright, etc. The deregulatory governance model is based on an erroneous model of interests. But all of my work is done on the micro level, not the level of organizations. But we know that the idea that musicians need the payoffs afforded by infinite copyright is false; we have empirical data about that. So there are places where the relation between the micro interests and institutional interventions is tight. But I don’t talk about that much in the book.

Q: I’ve looked at pay inequality in Japan and the US. The last thing that matters to the level of compliance with regulations is the gap between CEO and workers. The deterrents are very effective in the US, explaining [couldn’t hear it]. Compliance is much better in the US because the penalties are effective deterrents.
A: First, once you’re talking about the behavior of an organization, we don’t have the same kind of data on what happens within a corporate decision. When people see themselves as agents, there can be conflicts between the individual and the organization. For that you need external enforcement.
Q: Jail time makes a huge difference.
A: Then how do you explain the findings that the amount of stock options predicts the probability of tax fraud? Same baseline enforcement, but whether you had stock options predicts tax fraud. Adding money and punishment certainly has an effect on behavior. But it depends on whether that intervention has better effects than other interventions. And we only have a little bit of data.

Q: If a high school principal came to you who serves many interests and types of people, how could your ideas influence her or him?
A: My mother founded two schools and a volunteer organization. The lessons are relatively straightforward: Higher degrees of authority and trust, structure with clearly set goals, teamwork, less hierarchical distance between students and teachers, less high-stress testing.


October 13, 2011

Berkman Center applications

The Berkman Center is accepting applications for fellowships. Good luck!


October 9, 2011

Berkman Buzz

This week’s Berkman Buzz:

  • Dan Gillmor writes about Steve Jobs’ legacy:

  • CMLP posts a guide to citizen journalism from #OccupyWallStreet:

  • Ethan Zuckerman recaps Ramesh Srinivasan’s talk on Digital Diversity:

  • Herdict discovers an interview with an Internet censor:

  • Wendy Seltzer discusses how to keep Android open:

  • Weekly Global Voices: “Slovakia: New Draft Law Threatens Internet Freedom”

