Joho the Blog: March 2010 (Page 3 of 4)

March 15, 2010

Granny D’s eulogy

Granny D — Doris Haddock — died at age 100 on March 9, 2010. When she was 89, she walked 3,200 miles across the country on behalf of campaign finance reform; the law she marched for was undone a few months ago by an activist conservative Supreme Court. Dennis Burke was her friend and co-author. I’ve posted Dennis’ eulogy for her.


March 13, 2010

[2b2k] Distributed decision-making at Valve

The title of this post is the subtitle of an article in Game Developer (March 2010) by Matthew S. Burns about the production methods used by various leading game developers. (I have no idea why I’ve started receiving copies of this magazine for software engineers in the video game industry, which I’m enjoying despite — because — it’s over my head.) According to the article, Valve — the source of some of the greatest games ever, including Half-Life, Portal, and Left 4 Dead — “works in a cooperative, adaptable way that is difficult to explain to people who are used to the top-down, hierarchical management at most other large game developers.” Valve trusts its employees to make good decisions, but it is not a free-for-all. Decisions are made in consultation with others (“relevant parties”) because, as Erik Johnson says, “…we know you’re going to be wrong more often than if you made decisions together.” In addition, what Matthew calls “a kind of decision market” develops because people who design a system also build it, so you “‘vote’ by spending time on the features most important” to you. Vote with your code.

Valve also believes in making incremental decisions, week by week. But what does that do to long-term planning? Robin Walker says that one of the ways she (he?) judges how far they are from shipping is by “how many pages of notes I’m taking from each session.” That means Valve “can’t plan more than three months out,” but planning out further than that increases the chances of being wrong.

Interesting approach. Interesting article. Great games.


March 12, 2010

[2b2k] Harry Lewis on ways the Net is making us stupider

Harry Lewis has begun a series of posts on how the Net makes us know less, rather than more. Apparently, it’s not just Google that’s making us stupid. His first post is about the astounding French criminal libel prosecution in which the editor of a scholarly journal could conceivably go to jail for publishing a negative review of a book.

I’ve read the editor’s reasonable and reasoned response [pdf], and that the case has gotten as far as it has is bottomlessly awful.


Berkman Buzz

This week’s Berkman Buzz:

Doc Searls responds to Pew’s Future of the Internet IV survey: link

Ethan Zuckerman blogs John Wilbanks’ talk on generativity in science: link

Herdict is looking for a few good sheep: link

Harry Lewis on the “madness” of criminal libel in France: link

Internet & Democracy chews over changes from the Office of Foreign Assets Control: link

CMLP reviews the copyright confusions of Walsh against Walsh: link

Chilling Effects tallies up “innocent infringer” damages: link

Future of the Internet updates the TiVo / EchoStar saga: link

A year ago in the Buzz: “Introducing MediaCloud” link

Weekly Global Voices: “Haiti: Two Months Later” link


March 11, 2010

Now hiring: The future of privacy in this country

From an email being circulated:

From: “Privacy”
Date: March 11, 2010 11:32:22 AM EST

To: undisclosed-recipients:;

Subject: [VACANCY ANNOUNCEMENT] Director of Privacy Policy and Senior Advisor

For privacy professionals looking for a challenging and rewarding assignment, consider a position as a privacy leader at the U.S. Department of Homeland Security Privacy Office. As the Director of Privacy Policy and Senior Advisor at the Department’s headquarters Privacy Office, you will have direct policy responsibility for complex and cutting-edge privacy issues such as social media, cloud computing, information security and risk management. The DHS Privacy Office is looking for an expert recognized in the privacy community who possesses creative and analytical problem-solving skills and can build privacy solutions into the Homeland Security mission. Individuals should have the ability to lead change, lead people, build coalitions and resolve problems within the Department and at an inter-agency level. You will be working with one of the leading privacy offices in the Federal Government. If you are excited by what may be the challenge of a lifetime, we look forward to hearing from you.

The link for the position is provided below.

How about getting someone fantastic into that position? Maybe you…?


March 10, 2010

[2b2k] Authority as having the first word

Because of some talks I’m giving, I’ve been thinking about how to state concretely the effect that the change in expertise has on the authority of businesses. I want to say that in the old days, we took expertise and authority as the last word about a topic. Increasingly, the value of expertise and authority is as the first word — the word that frames and initiates the discussion.

I realize that this sounds better than it thinks, so to speak. But there are some aspects of it that I like. (1) I do think that we are moving away in some areas from thinking that we have to settle issues; we are finding much value in the unsettling of ideas, for that allows for more nuance, more complexity, and more recognition that our ability to know our world is quite limited. (2) And I do think that there is a type of expertise that has value as the first word — think about some of your favorite bloggers who throw an idea out into the world so the world can plumb it for meaning, veracity, and relevance. (3) Finally, I do think that insisting on having the last word — and thus closing the conversation — often will be seen as counter-productive and arrogant.

Unfortunately, that maps imperfectly to the snappy aphorism that expertise is moving from having the last word to being the first word.


March 9, 2010

[berkman] John Wilbanks on making science generative

John Wilbanks of Creative Commons (and head of Science Commons) is giving a Berkman lunchtime talk about the threats to science’s generativity. He takes Jonathan Zittrain‘s definition of generativity: “a system’s capacity to produce unanticipated change through unfiltered contributions from broad and varied audiences.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

[NOTE: Ethan Zuckerman has posted his far superior bloggage]

Science Commons tries to spark the sort of creativity and innovation in science that we find in the broader cultural Net. Scientists often resist the factors that have increased generativity in other realms: Science isn’t very accessible, it’s hard to master, and it’s not very transferable because the sciences exist as guild-disciplines. He says MIT had to build a $400M building to put scientists into the same room so they’d collaborate. There’s a tension, he says, between getting credit for your work and sharing your work. People think that it ought to be easy to build a science commons, but it’s not.

To build a common and increase generativity, John looks at three key elements: data, tools, and text. First, he looks at these from the standpoint of law. Text is copyrighted, but we can change the law and we can use Creative Commons. Tools include contracts and patents. Contracts govern the moving of ideas around, and they are between institutions, not between scientists. Data is mainly governed by secrecy.

The resistance turns out not to be from the law but from incentives, infrastructure, and institutions. E.g., the National Institutes of Health Public Access policy requires scientists to make their work available online within 12 months if the scientist has taken any NIH money. Before it was required, only 4% of scientists posted their work. Now it’s up over 70%, and it’s rising. Without this, scientists are incented to withhold info until the moment of maximum impact.

To open up data, you need incentives and infrastructure if you’re going to make it useful to others. People need incentives to label their data, put it into useful formats, to take care of the privacy issues, to carefully differentiate attribution and citation (copy vs. inspiration). So far, data doesn’t have the right set of incentives.

To open up tools, we’re talking about physical stuff, e.g., recombinant DNA. Scientists don’t get funded to make copies. “The resistance is almost fractal,” he says, at each level of opening up these materials.

We need a “domain name system for data” if we’re going to get Net effects. But there’s no accepted data infrastructure on the Web for doing this, unlike Google’s role for text pages.

Science is heading back to the garage, in the Eric Von Hippel sense. [He’s sitting next to me at the table!] You can buy a gene sequencer on eBay for under $1,000. People around the world are doing this. In SF, a group is doing DIY sequencing, creating toxin detectors, etc. The price of parts and materials is dropping the way memory and printer prices did. We need an open system, including a registry, in part because that’s the fastest way to respond to bad genes made by bad people.

“PC or TiVo for science?” John asks. PCs are ugly, but they give us more control over our tools and will let us innovate faster.

Q: [salil] You focus on experimental sciences. Are these obstacles present in mathematical and computer sciences? Data and tools are not a big part of math. Not making one’s work available right now in my field counts as a disadvantage. Specialization is an issue (what you call a guild)…
A: Math and physics are at the extreme of the gradient of openness, while chemistry probably sits at the other end. The lower the cost of publishing, the more disclosure there is. So, in math there isn’t as much institutional, systemic resistance because you don’t need a lot of support from an institution to be a great mathematician.
A: Guilds serve a purpose. But when you think about the competency of a system overall, it comes from the abstraction of expertise into tools. In the research sciences, microspecialization has come at the expense of abstraction. But it’s easier and easier to put knowledge into the tools because we can put lots into computers; that won’t revolutionize math, but it will have more of an effect on sciences with physical components. Science Commons stays away from math because it’s working.

Q: [Eric Von Hippel] State of patents?
A: Most of the time in science, patents are trading cards; they’re more about leverage and negotiation than about keeping people from using them. If we think about data as prior art, if we funnel it correctly, it becomes harder to get stupid patents. Biotech patents should be dealt with through a robust public domain strategy. “We tend to get wound up about IP, but then you go out in the field and people are just doing stuff.” Copyright is more stressful because patents time out after 20 years.

Q: [ethanz] Clearly, the legal response is a tiny part of a larger equation. If you were coming into it now, not trying to put forward this novel legal framework, where would you start?
A: Funders. Starting with the law lets us engage everyone in the conversation, because as the legal group we don’t create text, tools, or data. But we’re focusing on the funder-institution relation. We want funders to write clauses that reserve the right to put stuff into the commons. “If the funders mandate, the universities tend to accept.” Also, it gets easier to do high-quality research outside the big universities. Which means the small schools can do deals with the funders to make their faculty more attractive to the funder. The funder can also specify that the scientists will annotate their data. The funder has the biggest interest in making sure that science is generative.

Q: Then why aren’t funders requiring the data be open?
A: Making data legally open is easy. Making it useful to others is difficult. Curating it with enough metadata, publishing it on the Web, making it machine readable, making it persistent — none of those infrastructures exist for that, with some exceptions (e.g., the genome). So, the Web has to become capable of handling data.
Q: [ethanz] One reason that orgs like CC have been successful is that they put into law something that is a norm on the Web. Math and physics are so open because openness is the norm there. The institutional culture within these disciplines has a lot to do with it. How do you shape norms?
A: Carolina Rossini and I have been working on a paper about the university as a curator of norms. CC lets you waive all your rights. We’ve thought about writing a series of machine-readable norms like CC contracts but with no law in the middle. E.g., citation is a norm. E.g., non-endorsement is a norm that says that if you use my data, you can’t imply that I agree with you. But norms like marking my data clearly or giving it a persistent URL are things law can’t govern; they have to be norms. We use Eric’s ideas here. E.g., branding something with an open trademark.
A: [carolina] We need a bottom up approach based on norms and a top down approach based on law and policy. If you don’t work with both, they will clash.
A: Our lawyer Tim says that norms scale far better than the law. You can’t enforce the law all the time.

Q: [me] “Making the Web capable of handling data”? How? Semantic Web? What scale?
A: It’s a religious question. My sect says that ontologies are human. We should be using standard formats, e.g., OWL, RDF. Some ontologies will be used by communities, and if they are expressed in standard ways, they can be stitched together. From my view: (1) name things in clear and distinct ways; (2) put them into OWL or other languages in the correct way; (3) let smart people who need connected data do so, and let them publish. It’ll be a mix of top-down standards setting and bottom-up hacking. I’m a big SemWeb fan, but I get very scared of people saying that they have THE ontology. It’ll be messy. It won’t be beautiful. The main thing is to make it easy for people to wire data sets together. Standard URIs and standard formats are the only way to do this. We’ve seen this in the life sciences. Communities that need to write big data together treat it the way Linux packages get rolled together into a release. You’ll see data distributions emerge that represent different religions. If it works, people will use it. There’ll be flame wars, license wars, forking, and chaos, and 99% of the projects will die. You should be able to boot your databases into a single operating system that understands them.
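The “wire data sets together” idea above can be illustrated with a toy sketch (my example, not anything from Science Commons or the talk): if two labs publish facts as subject–predicate–object triples and both use the same standard URI for an entity, their data merges mechanically. The URIs and predicates below are made up for illustration.

```python
# Toy illustration: two hypothetical labs publish (subject, predicate, object)
# triples. Because both name the gene with the same URI, the sets can be
# merged by plain set union and queried together — the URI is the join point.

lab_a = {
    ("http://example.org/gene/BRCA1", "label", "BRCA1"),
    ("http://example.org/gene/BRCA1", "expressedIn", "breast tissue"),
}
lab_b = {
    ("http://example.org/gene/BRCA1", "associatedWith", "DNA repair"),
    ("http://example.org/gene/TP53", "label", "TP53"),
}

merged = lab_a | lab_b  # set union; shared URIs line facts up automatically

def facts_about(triples, subject_uri):
    """Return all (predicate, object) pairs recorded for one subject URI."""
    return sorted((p, o) for s, p, o in triples if s == subject_uri)

brca1 = facts_about(merged, "http://example.org/gene/BRCA1")
```

Had the labs each coined their own local name for the gene, the union would still succeed but the facts would never line up under one subject — which is the point of Wilbanks’s insistence on standard URIs and formats.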

Q: Researchers are incented to make their work available and open. Frequently, institutions get in the way of that. Are you looking at CC-style MTA’s [material transfer agreements]?
A: We published some last year. The first adopter was the Cure Huntington’s Disease project and then the Personal Genome Project. We’re going to foundations. We want to get the institutions out of the way, but only the funders can change the experience. NIH requires you to provide a breeding pair of genetically altered mice, kept in a storage facility in Maine [I think]. NIH is moving away from MTAs, going with a you-agree-by-opening agreement.

Q: Privacy?
A: Big issue. Sometimes used as an excuse for not sharing data, but privacy makes the issues we’ve been talking about look simple. It’s a long-term problem. Genomes are not considered personally identifying, although your license plate is. “There will be a reckoning.” JW’s advice: If you’re dealing with humans, be careful.

Q: Scientists are already overwhelmed by requests. More open, more tagged, means more requests.
A: Yes, we have to design with the negative impacts in mind. We need social filtering, etc. I worry about the scientist in eastern Tennessee or Botswana who’s a genius and can’t get access. If enough of the data is available, maybe you can get a community that answers many of the questions. People generally get into science because they like to talk with people. They’re more likely than most to share. But you have to make it part of the culture that it’s easy. One of the ideas behind the open source trademark concept is that you have to build up a certain amount of karma before I’ll read your email. People are the answer. Most of the time.

Q: Incentives to motivate institutions, but how incentives for individuals to move them in this direction?
A: PLOS was created because Michael Eisen was so pissed at closed journals that he created a business to compete with them. In anthropology, the Society is trying to go more closed, but groups of scientists are trying to go more open access. There’s a battle for the discipline’s soul. Individuals in these institutions are driving it. The key is to get the first big adopters going. Everyone wants to be in the top ten, especially when the first three are Harvard, Yale and MIT. The American Chemical Society is not going to go open any time soon because they make lots of money selling abstracts.

Q: [eric von hippel] I hope you realize how wonderful you all are.


March 8, 2010

Harold Feld explains spectrum

Harold Feld is one of my favoritist writers about the FCC, telecommunications, spectrum, and the whole enchilada. He has posted a piece that explains the intricacies of some recent spectrum policy announcements. It matters a whole lot.

This is not to say that Harold’s post is easy. It unpacks lots, but there’s lots packed in there. Skip to the first subheading for the Explainer part of the piece, and then keep going. And, bear in mind that Harold is partisan and admits it. Which I like. (I also find it refreshing when Brooke Gladstone in passing interjects where she stands on an issue.)


March 7, 2010

[moi] [2b2k] Interview on universities and open access

I was honored a few weeks ago to be the special guest and keynoter of Oklahoma State University’s Research Week. Here’s an interview with OSU Prof. Bill Handy. [LATER that morning: Here’s a page where OSU students are commenting on it.]

[NEXT DAY:] Several open access advocates are annoyed with me because I seem to imply, against my better knowledge, that open access journals are not peer reviewed. I do know better and almost always make that point when talking about open access. More important is the point itself: Many open access journals (e.g., are indeed peer-reviewed.

I do have to point out for the record, however, that (despite the title screen of this interview) I am not a professor at Harvard or anywhere. (I’m open to offers though.) I am a senior researcher at Harvard’s Berkman Center for Internet & Society. That is not a faculty position, and does not carry either the obligations or the prestige of one.

(Also, the overly-attentive reader will have noticed that I have switched from the [ahole] preface to [moi]. I introduced the former this year as part of my resolution to be a bigger ahole about blogging interviews I’ve done. But, I found myself blogging interviews I’ve done with other people under titles such as “[ahole] Interview of Mary Jones,” implying that Mary Jones is the ahole. So, from now on, it’s [moi].)


March 6, 2010

[bsw] Broadband Strategy Initiative: Phil Bellario on scenario planning

In the latest Broadband Strategy Week series of video interviews, I talk with Phil Bellario, Director of Scenario Planning for the FCC’s Omnibus Broadband Initiative. We talk about how you plan for an unplannable future, and how much of the future plans rest upon the present.

