
January 31, 2010

The Simpsons stole my plot!

The Simpsons episode tonight — “Million Dollar Maybe” — uses the same basic plot device as my young adult, self-published novel My $100 Million Secret: Homer wins the lottery but can’t tell Marge because he bought the ticket when he should have been at an event with her. In mine, Jake wins the lottery but can’t tell his parents because they’re morally opposed to it. In the Simpsons episode, hilarious events ensue. In mine, a young lad tries to do right and non-hilarious events ensue.

Speaking of ensuing, someone call a lawyer!


[2b2k] Clay Shirky, info overload, and when filters increase the size of what’s filtered

Clay Shirky’s masterful talk at the Web 2.0 Expo in NYC last September — “It’s not information overload. It’s filter failure” — makes crucial points and makes them beautifully. [Clay explains in greater detail in this two-part CJR interview: parts 1 and 2]

So I’ve been writing about information overload in the context of our traditional strategy for knowing. Clay traces information overload to the 15th century, but others have taken it back earlier than that, and there’s even a quotation from Seneca (born c. 4 BCE) that can be pressed into service: “What is the point of having countless books and libraries whose titles the owner could scarcely read through in his whole lifetime? That mass of books burdens the student without instructing…” I’m sure Clay would agree that if we take “information overload” as meaning the sense that there’s too much for any one individual to know, we can push the date back even further.

The little research I’ve done on the origins of the phrase “information overload” supports Clay’s closing point: Info overload isn’t a problem so much as the water we fishes swim in. When Alvin Toffler popularized the term in 1970’s Future Shock, he talked about it as a psychological syndrome that could lead to madness (on a par with sensory overload, which is where the term came from). By the time we hit the late 1980s and early 1990s, people were no longer writing about info overload as a psychological syndrome, but as a cultural fact that we have to deal with. The question became not how we can avoid over-stimulating our informational organs but how we can manage to find the right information in the torrent. So, I think Clay is absolutely spot on.

I do want to push on one of the edges of Clay’s idea, though. Knowledge traditionally has responded to the fact that what-is-to-be-known outstrips our puny brains with the strategy of reducing the size of what has to be known. We divide the world into manageable topics, or we skim the surface. We build canons of what needs to be known. We keep the circle of knowledge quite small, at least relative to all the pretenders to knowledge. All of this of course reflects the limitations of the paper medium we traditionally used for the preservation and communication of knowledge.

The hypothesis of “Too Big to Know” is that in the face of the new technology and the exponentially exponential amount of information it makes available to us, knowledge is adopting a new strategy. Rather than merely filtering — “merely” because we will of course continue to filter — we are also including as much as possible. The new sort of filtering that we do is not always and not merely reductive.

A traditional filter in its strongest sense removes materials: It filters out the penny dreadful novels so that they don’t make it onto the shelves of your local library, or it filters out the crazy letters written in crayon so they don’t make it into your local newspaper. Filtering now does not remove materials. Everything is still a few clicks away. The new filtering reduces the number of clicks for some pages, while leaving everything else the same number of clicks away. Granted, that is an overly optimistic way of putting it: Being the millionth result listed by a Google search makes it many millions of times harder to find that page than the ones that make it onto Google’s front page. Nevertheless, it’s still much, much easier to access that millionth-listed page than it is to access a book that didn’t make it through the publishing system’s editorial filters.

But there’s another crucial sense in which the new filtering technology is not purely reductive. Filters now are often simultaneously additive. For example, blogs act as filters, recommending other pages. But blogs don’t merely sift through the Web and present you with what they find, the way someone curating a collection of books puts the books on a shelf. Blogs contextualize the places they point to, sometimes at great length. That contextualization is a type of filter that adds a great deal of rich information. Further, in many instances, we can see why the filter was applied the way it was. For blogs and other human-written pieces, this is often explained in the contextualization. At Wikipedia, it takes place in the “About” pages where people explain why they have removed some information and added other material. And the point of the Top 100 lists and Top Ten lists that are so popular these days is to generate reams and reams of online controversy.

Thus, many of our new filters reflect the basic change in our knowledge strategy. We are moving from managing the perpetual overload Clay talks about by reducing the amount we have to deal with, to managing it in ways that simultaneously add to the overload. Merely filtering is not enough, and filtering is no longer a merely reductive activity. The filters themselves are information that is then discussed, shared, and argued about. When we swim through information overload, we’re not swimming in little buckets that result from filters; we are swimming in a sea made bigger by the loquacious filters that are guiding us.

[Note: Amanda Lynn at fatcow has provided a Belarusian translation. Thanks!]


January 30, 2010

[2b2k] Continuing the total re-org

I’m in a mixed-up state. I’ve continued to re-organize the two chapters I’d written. They are now 2.5 chapters. And I’ve done a little new writing in that third chapter, since I’ve decided that if I’m going to talk about the history of facts (a big subsection that I think is on an interesting topic, but probably doesn’t fit into the book), I should also talk about the differences between facts and information.

So, I’ve cut and pasted, added new material, worked on transitions, and created an outline of the whole mess, but I’m too close to it and can’t tell if it works at all. I have to find a full day when I can sit down and read it all through. Until then, I feel like I’m cooking in the dark.


January 29, 2010

Sunlight’s answer to the Supreme Court’s Naivety

The Sunlight Foundation has posted seven steps the government should take to help make campaign finance more transparent now that the Supreme Court has handed political discourse in the paid media to the highest bidders.


Lewis Hyde’s objection to the Google Books settlement

Here is a letter Lewis Hyde sent to Judge Denny Chin, who is considering the proposed Google Books settlement. I’ve also appended a supporting letter written by Eric Saltzman. The issue is that the newly proposed trustee overseeing the handling of “orphaned works” (i.e., works that are still in copyright but whose copyright holders cannot be found) still does not have the power to adequately represent the interests of the rights holders, especially when it comes to allowing companies that are not Google to license the works. Granting Google a monopoly on these works seems like too much of a reward for Google’s scanning of them (which, I’ve heard, costs about $30/book), and does not seem to serve the interests of the rights holders or — more important, from my point of view — the overall social good of increasing access to these works. (Note: I am not a lawyer.)

So, here are the letters, minus some addresses, etc.:

27 January 2010 

Dear Judge Chin:   

I write to amend the letter of objection that I wrote last August in regard to The Authors Guild, Inc., et al. v. Google Inc. (Case No. 1:05-cv-08136-DC).  My August letter is on file with your office as Document 480.   

I shall here limit my remarks to provisions of the amended settlement that are changed from the original settlement, specifically to the role of the newly proposed trustee for orphan works.   

I object to the fact that, despite the amended settlement’s creation of an Unclaimed Works Fiduciary (UWF), the monopoly powers that Google and the Books Rights Registry will acquire, should the Court approve the orphan works elements of the settlement, still stand.  The settling parties have limited the role of the UWF such that he may discharge some duties of the registry in some circumstances, but little else.  He cannot act fully on behalf of the rightsholders of unclaimed books; he cannot, for example, license their work to third parties.   

To put this another way, it is still the case that an approved settlement will in essence grant the settling parties unique compulsory licenses for the exploitation of orphan works.  But why make such licenses unique?  If the Court and the settling parties believe that they can authorize compulsory licenses of any sort, why not go the extra step and grant such licenses broadly so that competing providers can enter this market?   

To address the problem of monopoly in the market for digital books the UWF should be empowered to act as a true trustee.  As such, he should make every effort to locate lost owners, communicate to them their rights under the approved settlement, and pay them their due.  Absent their instructions to the contrary, he should deliver the works of lost owners to the public through the efficiencies of a fully competitive market.   

As Chief Justice Rehnquist has written in regard to the larger purposes of our copyright laws:  “We have often recognized the monopoly privileges that Congress has authorized … are limited in nature and must ultimately serve the public good…” (Fogerty v. Fantasy, Inc., 510 U.S. 517 (1994)).  In regard to both content owners and the public, then, the fiduciary needs to operate in an open economy of knowledge and, for that, he will need the freedom to license work to other actors.   

(Note:  I have asked my attorney, Eric Saltzman, to separately address the question of the UWF’s authority to license orphaned works to others; please see the attached addendum to this letter.)   

Yours sincerely, 

Lewis Hyde

Richard L. Thomas Professor of English

Kenyon College 

Addendum 

Eric F. Saltzman

Re: The Authors Guild, Inc., et al. v. Google Inc. (Case No. 1:05-cv-08136-DC). 

Dear Judge Chin: 

My client, Lewis Hyde, tells the Court in his letter of January 27th that the new proposed settlement cannot be fair to the owners of the copyrights in the orphan works and to the public unless it allows the Unclaimed Works Fiduciary to grant licenses to other providers to allow competition with the monopoly plan that Google and the Plaintiffs now propose to the Court.

I would like to offer the Court additional support for Professor Hyde’s objection and suggestion.   

If the named plaintiffs or others who “opt in” to the settlement wish to sign on to it with their own copyrights (and if it survives any antitrust process), then that shall be their prerogative.  However, the combination in this class action lawsuit of inadequate representation and significant actual conflicts among the so-called class should make the Court skeptical of granting a monopolistic license of the absent members’ copyrights.   

If the Court does decide to approve a settlement of the case, it should not approve one where Plaintiffs’ counsel have consented to deliver the licenses for the orphan works to just one licensee.

It would be a complete fiction to say that Plaintiffs’ attorneys have adequately represented the orphan works authors and their successors in interest in this case.  The original settlement proposal clearly demonstrated counsel’s willingness and ability to compromise or, at least, to ignore the orphan works owners’ interests in favor of the named plaintiffs who engaged them and whose assent they needed to cut the deal.  

The problem of plaintiffs’ counsel shaping a settlement attractive to the clients before them at the expense of absent class members is a well-discussed problem in class action jurisprudence. This Court may take notice of an incentive in that direction: the more than fifty million dollars in fees that Google has agreed to pay to Plaintiffs’ counsel if the settlement goes through.

Allow me to point out two methods whereby the proposed settlements seriously shortchanged the orphan works owners to enrich other class members at their expense.  

The proposed settlement provides that “Google will make a Cash Payment of at least $60 per Principal Work, $15 per Entire Insert and $5 per Partial Insert for which at least one Rightsholder has registered a valid claim by the opt-out deadline” (Emphasis supplied). According to the settlement, total payments will amount to $45 million.  

By definition, no orphan work Rightsholders could meet this registration condition.  Thus was the settlement engineered so that the rightsholders of orphan works and their successors-in-interest would not and could not get any share of the up-front payments total.  

Evidently, in dividing up the scores of millions of dollars that defendant Google was ultimately willing to pay up-front (i.e., unrelated to yet unproven forthcoming revenues) to settle the lawsuit, counsel felt no obligation to share any of it with the orphan works owners, even if the rightsholder should later appear and wish to register and claim that payment.  This very large slice of the pie would go only to the known rightsholders, their de facto clients. 

This economic discrimination against the orphan works rightsholders went beyond just up-front payments. It also took unclaimed (after five years) revenues from exploitation of the orphan works and assigned them to the known rightsholders of other books, thus promising still further enrichment of the client sub-class with actual control over the settlement.   

That particular feature drew such unpleasant attention to the bias in representation in favor of the known rightsholders (and disfavoring the orphan works rightsholders) that it was written out of the settlement proposal now before the Court.  Nevertheless, the Plaintiffs’ counsel who now urge the court to approve this revised settlement agreement are the same counsel who, in the first settlement go-around, assured the Court then (as they do now) that they had adequately represented the entire class, including the orphan works rightsholders. 

Commonality and adequacy of representation are two touchstones for class certification. “The adequacy inquiry under Rule 23(a)(4) serves to uncover conflicts of interest between named parties and the class they seek to represent.” Amchem Prods. v. Windsor, 521 U.S. 591, 625 (1997).

In Amchem, the Supreme Court upheld the Third Circuit’s decertification of the class because it found that “…the settling parties achieved a global compromise with no structural assurance of fair and adequate representation for the diverse groups and individuals affected. The Third Circuit found no assurance here that the named parties operated under a proper understanding of their representational responsibilities. That assessment is on the mark.” Id. at 595.

As demonstrated above, far from providing the “structural assurance of fair and adequate representation for the diverse groups and individuals affected,” the settlements that were and are proposed to this Court suggest that advantaging the named class members at the expense of the unrepresented orphan works rightsholders was a goal successfully achieved during the settlement negotiation.

Accordingly, if the Court will entertain a settlement, it should itself take on the burden of making sure that the orphan works rightsholders’ interests are well protected. At this point, the best way to do so is to free the orphan works from the monopoly straitjacket that the proposed settlement forces on them.

Let the parties live with the deal they made for the parties who were, in fact, adequately and aggressively represented. For the inadequately represented sub-class, the orphan works rightsholders, the Court should empower the UWF (or similar fiduciary) to license their works into the open market. With this authority going forward, the UWF will, as well, be able to adjust licensing of digital rights in these works to the market conditions in an area that is still very new and sure to develop in ways that are, today, impossible to predict.   

Professor Hyde’s objection addresses the two enormous flaws in the proposed settlement:  1. the actual conflicts within the class together with the failure of adequate representation of the orphan works rightsholders, and 2.  the anti-competitive effect of the full copyright term license it would grant to Google only.  The first undermines both the process by which the settlement was achieved and, correspondingly, the public confidence in the courts.  The second hurts both the orphan works rightsholders and the strong public interest in access to the knowledge and creativity these books offer.   

Short of initiating a new attempt at settlement — with new counsel for the orphan works rightsholders — the changes Professor Hyde proposes would achieve a result that would be fair for all the parties and for the public.

Very truly yours, 

Eric F. Saltzman, Attorney 


January 28, 2010

The iPad is the future of the past of books

The iPad definitely ups the Kindle’s ante. Unfortunately, it does so by making an e-book more like a television set.

Will it do well? I dunno. Probably. But is it the future of reading? Nope. It’s the high-def, full-color, animated version of the past of reading.

The future of reading is social. The future of reading blurs reading and writing. The future of reading is the networking of readers, writers, content, comments, and metadata, all in one continuous-on mash.

Tim Bray writes:

Compared to my laptop, the iPad lacks a keyboard, software development tools, writers’ tools, photographers’ tools, a Web server, a camera, a useful row of connectors for different sorts of wires, and the ability to run whatever software I choose. Compared to my Android phone, it lacks a phone, a camera, pocketability, and the ability to run whatever software I choose. Compared to the iPad, my phone lacks book-reading capability, performance, and screen real-estate. Compared to the iPad, my computer lacks a touch interface and suffers from excessive weight and bulk.

It’s probably a pretty sweet tool for consuming media, even given the unfortunate 4:3 aspect ratio. And consuming media is obviously a big deal for a whole lot of people.

For creative people, this device is nothing.


January 27, 2010

[2b2k] Total rewrite of Chapters 1 and 2

I’ve been working diligently on Chapter Two, titled “Knowledge as Network.” Today, I threw it out and threw out Chapter 1 while I was at it. I’m radically restructuring both.

I was 7,500 words into Chapter 2, so this counts as both a possible advance and a setback.

The Chapter 2 I had almost completed began with an anecdote about expertise, then talked about the evolutionary origins of knowledge as a way to know more than can fit into one human brain. Then, on to the development of systematic methods of knowing, starting with Abu Ali al-Hasan ibn al-Haytham. Then on to repeatability as a part of that method, and why repeatability really aims at our not having to repeat experiments; it’s too expensive. Then there was an acknowledgment of the change in our thinking brought about by Thomas Kuhn and then by what’s loosely called post modernism. (Very brief on the latter! It’d be longer if I understood it better.) The key point: Our system of knowledge is about putting in stopping points for inquiry, in part by providing a system that puts in stopping points for our investigation of credentials. The aim of the system is to get you an answer quickly so you can stop searching. The system works. One side effect: it creates experts.

The current draft of Chapter 2 (that is, the draft I’m replacing) then gives a brief history of experts by looking at the rise of think tanks, which correlates with Taylorism and the US progressive movement. I spend a couple of paragraphs on RAND since it gave us our modern picture of The Expert. This then leads to a section on the new networked expertise. That section begins by wondering why Isaiah Berlin’s distinction between hedgehogs and foxes has become so oft-referenced recently. It’s probably because hedgehogism hasn’t worked out so well for us, and because the Net looks like a fox’s dream. But my real point is that both of those approaches use the old strategy of knowledge: faced with a world too big to know, we limit the task either by limiting the field we cover (hedgehogs) or how deep we dig (foxes). At the level of the network, we now have a new type of expertise that transcends the hedgehog-fox distinction. (What’s happening is really quite Hegelian, although I don’t say that in the chapter.)

Then I look briefly at the Challenger Commission as a positive example of how expertise used to work, and contrast that with MITRE’s approach of giving clients access to a network of connected experts, some of whom may disagree. Finally, I ask the reader to hold a physical book in her hands (which she may well be doing while she’s reading this, of course) and consider the basic physical facts about it. I go through these one by one and show the correlation with the old idea of expertise. Change the medium from a book to a network and the properties of expertise should also change. This leads me to the final section of Chapter 2 in which I go through a typology of 6 forms of networked expertise, prefaced by a short discussion attempting to differentiate what I’m talking about from the Wisdom of the Crowd.

That’s what the chapter was. It had problems. It gives the reader another half chapter of history before getting anywhere close to the point. I can’t afford to postpone for yet another 4,000 words why the reader should care about this topic.

Yesterday, it was my turn to present to the little book writers group at the Berkman Center. I didn’t give them the chapters to read since I knew I’d be changing them drastically. After I described the outline of the two chapters, one of the participants — am I allowed to say that it was Ethan Zuckerman? — had the same reaction as my literary agent, who had read the first chapter: Interesting stuff to have gotten out of your system, but it needs to be moved further back. In the course of the conversation, it became clear to me that I need to hit the reader in the face early on with one of the most basic assertions of the book: The old strategy of knowledge has been to manage the overload by limiting what we know, but we are now developing a new strategy in response to the fact that the Net is hugely inclusive.

So, this morning I sat in front of a wide-screen monitor divided into three parts: Chapter 1, Chapter 2, Blank Chapter. I copied and pasted and quickly patched together a new Chapter 1. At the moment, it begins with a brief history of the data-information-knowledge-wisdom pyramid and uses that as an example of knowledge’s old strategy of managing overload by reducing it. This lets me get straight to the big, broad point. It proceeds from there, although I’m not sure I have it arranged right. (Maybe I should add back in all the stuff about information overload? Hard to know at the moment.)

I’ve only begun reconstructing Chapter 2. At the moment, I think it will be a history of knowledge with a focus on our culture’s idea that it’s like a building that needs a firm foundation. But that gets at an idea I might want to end the book with, so I’m really not sure at the moment what Chapter 2 should be.

But now it’s time for lunch. Nuking chapters sure builds an appetite in an author!


January 26, 2010

[berkman] Julie Cohen on networked selves

Julie Cohen is giving a Berkman lunch on “configuring the networked self.” She’s working on a book that “explores the effects of expanding copyright, pervasive surveillance, and the increasingly opaque design of network architectures in the emerging networked information society.” She’s going to talk about a chapter that “argues that ‘access to knowledge’ is a necessary but insufficient condition for human flourishing, and adds two additional conditions.” (Quotes are from the Berkman site.) [NOTE: Ethan Zuckerman's far superior livebloggage is here.]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The book is motivated by two observations of the discourse around the Net, law, and policy in the U.S.

1. We make grandiose announcements about designing infrastructures that enable free speech and free markets, but at the end of the day, many of the results are antithetical to the interests of the individuals in that space by limiting what they can do with the materials they encounter.

2. There’s a disconnect between the copyright debate and the privacy debate. The free culture debate is about openness, but that can make it hard to reconcile privacy claims. We discuss these issues within a political framework with assumptions about autonomous choice made by disembodied individuals…a worldview that doesn’t have much to do with reality, she says. It would be better to focus on the information flows among embodied, real people who experience the network as mediated by devices and interfaces. The liberal theory framework doesn’t give us good tools. E.g., it treats individuals as separate from culture.

Julie says lots of people are asking these questions. They just happen not to be in legal studies. One purpose of her book is to unpack post modern literature to see how situated, embodied users of networks experience technology, and to see how that affects information law and policy. Her normative framework is informed by Martha Nussbaum’s ideas about human flourishing: How can information law and policy help human flourishing by providing access to information and knowledge? Intellectual property laws should take this into account, she says. But, she says, this has been situated within the liberal tradition, which leads to indeterminate results. You lend it content by looking at the post modern literature that tells us important things about the relationship between self and culture, self and community, etc. By knowing how those relationships work, you can give content to human flourishing, which informs which laws and policies we need.

[I'm having trouble hearing her. She's given two "political reference points," but I couldn't hear either. :(]

[I think one of them is everyday practice.] Everyday practice is not linear, often not animated by overarching strategies.

The third political reference point is play. Play is an important concept, but the discussion of intentional play needs to be expanded to include “the play of circumstances.” Life puts random stuff in your way. That type of play is often the actual source of creativity. We should be seeking to foster play in our information policy; it is a structural condition of human flourishing.

Access to knowledge isn’t enough to supply a base for human flourishing because it doesn’t get you everything you need, e.g., the right to re-use works. We also need operational transparency: We need to know how these digital architectures work. We need to know how the collected data will be used. And we also need semantic discontinuity: Formal incompleteness in legal and technical infrastructures. E.g., wrt copyright, to reuse works you shouldn’t have to invoke a legal defense such as fair use; there should be space left over for play. E.g., in privacy, rigid, arbitrary rules against transacting and aggregating personal data so that there is space left over for people to play with identity. E.g., in architecture, question the norm that seamless interoperability makes life better, because it means that data about you moves around without your having the ability to stop it. E.g., interoperability among social networks changes the nature of social networks. We need some discontinuity for flourishing.

Q: People need the freedom to have multiple personas. We need more open territory.
A: Yes. The common pushback is that if you restrict the flow of info in any way, we’ll slide down the slippery slope of censorship. But that’s not true and it gets in the way of the conversation we need to have.

Q: [charlie nesson] How do you create this space of playfulness when it comes to copyright?
A: In part, look at the copyright law of 1909. It’s reviled by copyright holders, but there’s lots of good in it. It set up categories that determined if you could get the rights, and the rights were much more narrowly defined. We should define rights to reproduction and adaptation that gives certain significant rights to copyright holders, but that quite clearly and unambiguously reserves lots to users, with reference to the possible market effect that is used by courts to defend the owners’ rights.
Q: [charlie] But you run up against the pocketbooks of the copyright holders…
A: Yes, there’s a limit to what a scholar can do. Getting there is no mean feat, but it begins with a discourse about the value of play and the fact that everyone benefits from it: not just crazy YouTube posters but the content creators too.

JPalfrey asks CNesson what he thinks. Charlie says that having to assert fair use, to fend off lawsuits, is wrong. Fair use ought to be the presumption.

Q: [csandvig] Fascinating. The literature that lawyers denigrate as pomo makes me think of a book by an anthropologist and sociologist called “The Internet: An Ethnographic Approach.” It’s about embodied, local, enculturated understanding of the Net. Their book was about Trinidad, arguing that if you’re in Trinidad, the Net is one thing, and if you’re not, it’s another thing. And, they say, we need many of these cultural understandings. But it hasn’t happened. Can you say more about the lit you referred to?
A: Within mainstream US legal and policy scholarship, there’s no recognition of this. They’re focused on overcoming the digital divide. That’s fine, but it would be better not to have a broadband policy that thinks it’s the same in all cultures. [Note: I'm paraphrasing, as I am throughout this post. Just a reminder.]

A: [I missed salil's question; sorry] We could build a system of randomized incompatibilities, but there’s value in having them emerge otherwise than by design, and there’s value to not fixing some of the ones that exist in the world. The challenge is how to design gaps.
Q: The gaps you have in mind are not ones that can be designed the way a computer scientist might…
A: Yes. Open source forks, but that’s at war with the idea that everything should be able to speak to everything else. It’d

Q: [me] I used to be a technodeterminist, but I recognize the profound importance of cultural understandings/experience. So, the Internet is different in Trinidad than in Beijing or Cambridge. Nevertheless, I find myself thinking that some experiences of the Net are important and cross-cultural, e.g., that ideas are linked, there’s lots to see, people disagree, people like me can publish, etc.
A: You can say general things about the Net if you go to a high enough level of abstraction. You’re only a technodeterminist if you think there’s only way to get there, only one set of rules that get you there. Is that what you mean?
Q: Not quite. I’m asking if there’s a residue of important characteristics of the experience of the Net that cuts across all cultures. “Ideas are linked” or “I can contribute” may be abstractions, but they’re also important and can be culturally transformative, so the lessons we learn from the Net aren’t unactionably general.
A: Liberalism creeps back in. It’s a crappy descriptive tool, but a good aspirational one. The free spread of a corpus of existing knowledge…imagine a universal digital library with open access. That would be a universal good. I’m not saying I have a neutral prescription upon which any vision of human flourishing would work. I’m looking for critical subjectivity.

A: Network space changes based on what networks can do. 200 yrs ago, you wouldn’t have said Paris is closer to NY than Williamsburg, VA, but today you might because lots of people go NY – Paris.

Q: [doc] You use geographic metaphors. Much of the understanding of the Net is based on plumbing metaphors.
A: The privacy issues make it clear it’s a geography, not a plumbing system. [Except for leaks :) ]

[Missed a couple of questions]

A: Any good educator will have opinions about how certain things are best reserved for closed environments, e.g., in-class discussions, what sorts of drafts to share with which other people, etc. There’s a value to questioning the assumption that everything ought to be open and shared.

Q: [wseltzer] Why is it so clear that the Net isn’t plumbing? We make bulges in the pipe as spaces where we can be more private…
A: I suppose it depends on your POV. If you run a data aggregation biz, it will look like that. But if you ask someone who owns such a biz how s/he feels about privacy in her/his own life, that person will have opinions at odds with his/her professional existence.

Q: [jpalfrey] You’re saying that much of what we take as apple pie is in conflict, but that if we had the right toolset, we could make progress…
A: There isn’t a single unifying framework that can make it all make sense. You need the discontinuities to manage that. Disputes arise, but we have a way to muddle along. One of my favorite books: How We Became Posthuman. She writes about the Macy conferences, out of which came cybernetics, including the idea that info is info no matter how it’s embodied. I think that’s wrong. We’re analog in important ways.


Vote for your favorite “game changer”

WeMedia is letting us all vote for our favorite “game changers.” Congrats to the two Berkman projects for making the list: Global Voices and Herdict.


January 25, 2010

I’ve got a Franklin Fellowship to work with the State Dept.

I’m very happy to say that I’ve been granted a Franklin Fellowship to work with the US State Department for the next year. I’ll be working with the eDiplomacy group, which is providing Web 2.0 platforms for internal use, with the semi-secret aim of nudging State from a need-to-know to a need-to-share culture. (This is not exactly how eDiplomacy explains its charter, but it’s how I understand it.)

Franklin Fellowships were established by the State Department in 2006 in order to bring in people from the private and non-profit sectors. I’m working as a volunteer, with my travel expenses covered in part by a grant from Craig Newmark, founder of CraigsList. (Thank you, Craig!) Because I’ll be on-site in DC only a few times a month, I’ll be able to continue as a senior researcher at the Berkman Center. (I’ve also begun doing some work for Harvard Law Library’s digital lab.)

I’ve already spent time with the group. They’re, well, wonderful. They’ve already delivered tools for knowledge sharing (e.g., Diplopedia) and for connecting expertise across every boundary (e.g., The Sounding Board), and they’ve got some very interesting projects in the works. These are dedicated State Dept. employees, some with considerable experience under their belts, who are on fire about the possibilities for making State smarter, more innovative and creative, more responsive, more engaged, and more human, but always within the proper security constraints. Fascinating fascinating.

