The deadline for my book is looming, but I spoke today with Michael Edson, Director of Web and New Media Strategy at the Smithsonian, and I’d love to include his idea for a Smithsonian Commons.
The Smithsonian Commons would make publicly available digital content and information drawn from the magnificent Smithsonian collections, allowing visitors to interact with it, repost it, add to it, and mash it up. It begins with being able to find everything about, say, Theodore Roosevelt, that is currently dispersed across multiple collections and museums: photos, books, the original Teddy bear, recordings of the TR campaign song, a commemorative medal, a car named after him, contemporary paintings of his exploits, the chaps he wore on his ranch… But Michael is actually most enthusiastic about the “network effects” that can accrue to knowledge when you let lots of people add what they know, either on the Commons site itself or out across the whole linked Internet.
Smithsonian Commons goes way beyond putting online as much of our national museum as possible — which should be enough to justify its creation. It goes beyond bringing to bear everything curators, experts, and passionate visitors know to increase our understanding of what is there. By allowing us to discover connections, link in and out, and add ideas and knowledge, what used to be a “mere” collection will be an embedded part of countless webs of knowledge that in turn add value to one another. That is to say, we will be able to take up the objects of our heritage in ways that will make them more distinctly and uniquely ours than ever before.
Let’s hope Smithsonian Commons goes from idea to a national — global — center of ideas, creativity, knowledge, and learning.
The curator starts by presenting the engine with a basic set of keywords. CIThread scours the Web for relevant content, much like a search engine does. Then the curator combs through the results to make decisions about what to publish, what to promote and what to throw away.
As those decisions are made, the engine analyzes the content to identify patterns. It then applies that learning to delivering a better quality of source content. Connections to popular content management systems make it possible to automatically publish content to a website and even syndicate it to Twitter and Facebook without leaving the CIThread dashboard.
There’s intelligence on the front end, too. CIThread can also tie in to Web analytics engines to fold audience behavior into its decision-making. For example, it can analyze content that generates a lot of views or clicks and deliver more source material just like it to the curator. All of these factors can be weighted and varied via a dashboard.
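The curate-and-learn loop described above can be sketched in a few lines. This is a toy model, not CIThread’s actual API: the class name, the keyword-weighting scheme, and the decision deltas are all my assumptions, meant only to show how curator decisions could feed back into how source content gets ranked.

```python
from collections import defaultdict

class CurationEngine:
    """Toy model of a curate-and-learn loop (hypothetical, not CIThread's API)."""

    def __init__(self, seed_keywords):
        # The curator seeds the engine with a basic set of keywords,
        # each starting with equal weight.
        self.weights = defaultdict(float)
        for kw in seed_keywords:
            self.weights[kw] = 1.0

    def score(self, item_text):
        # Rank candidate content by the summed weights of matching keywords.
        words = set(item_text.lower().split())
        return sum(w for kw, w in self.weights.items() if kw in words)

    def record_decision(self, item_text, action):
        # Curator feedback: publishing or promoting an item raises the
        # weights of its keywords; throwing it away lowers them. The
        # deltas are arbitrary illustration values.
        delta = {"publish": 0.5, "promote": 1.0, "discard": -0.5}[action]
        for kw in set(item_text.lower().split()):
            if kw in self.weights:
                self.weights[kw] += delta

# The curator promotes one item; future source material is re-ranked.
engine = CurationEngine(["curation", "museums"])
engine.record_decision("digital curation tools", "promote")
ranked = sorted(["curation news", "sports scores"], key=engine.score, reverse=True)
```

Analytics feedback (views, clicks) could be folded in the same way, by calling `record_decision` with audience behavior instead of curator choices.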
I like the idea of providing automated assistance to human curators…
Eszter Hargittai and her team have done research that shows that digital youngsters are not as savvy as we would like them to be, over-relying on Google’s rank ordering of results, etc.
It’s important to have actual data to look at — thanks, Eszter! — even though it confirms what we should all probably know by now: When it comes to information, we’re a lazy, sloppy species that vastly over-estimates its own wisdom.
“…. the statistic that we have been using is between the dawn of civilisation and 2003, five exabytes of information were created. In the last two days, five exabytes of information have been created, and that rate is accelerating. And virtually all of that is what we call user-generated what-have-you. So this is a very, very big new phenomenon.”
He concludes — and I certainly agree — that we need digital curation. He says that digital curation consists of “Authenticity, Veracity, Access, Relevance, Consume-ability, and Produce-ability.” “Consume-ability” means, roughly, that you can play it on any device you want, and “produce-ability” means something like how easy it is to hack it (in the good O’Reilly sense).
JP seems to be thinking primarily of knowledge objects, since authenticity and veracity are high on his list of needs, and for that I think it’s a good list. But suppose we were to think about this not in terms of curation — which implies (against JP’s meaning, I think) a binary acceptance-rejection that builds a persistent collection — and instead view it as digital recommendations? In that case, for non-knowledge-objects, other terms will come to the fore, including amusement value, re-playability, and wiseacre-itude. In fact, people recommend things for every reason we humans may like something, not to mention the way we’re socially defined in part by what we recommend. (You are what you recommend.)
Beth Noveck is deputy chief technology officer for open government and leads President Obama’s Open Government Initiative. She is giving a talk at Harvard. She begins by pointing to the citizenry’s lack of faith in government. Without participation, citizens become increasingly alienated, she says. For example: the rise of Tea Parties. A new study says that a civic spirit reduces crime. Another article, in Social Science and Medicine, correlates civic structures and health. She wants to create more opportunities for citizens to engage and for government to engage in civic structures — a “DoSomething.gov,” as she lightly calls it. [NOTE: Liveblogging. Getting things wrong. Missing things. Substituting inelegant partial phrases for Beth's well-formed complete sentences. This is not a reliable report.]
Beth points to the peer to patent project she initiated before she joined the government. It enlists volunteer scientists and engineers to research patent applications, to help a system that is seriously backlogged, and that uses examiners who are not necessarily expert in the areas they’re examining. This crowd-sources patent applications. The Patent Office is studying how to adopt peer to patent. Beth wants to see more of this, to connect scientists and others to the people who make policy decisions. How do we adapt peer to patent more broadly, she asks. How do we do this in a culture that prizes consistency of procedures?
This is not about increasing direct democracy or deliberative democracy, she says. The admin hasn’t used more polls, etc., because the admin is trying to focus on action, not talk. The aim is to figure out ways to increase collaborative work. Next week there’s a White House conference on gov’t innovation, focusing on open grant making and prize-based innovation.
The President’s first executive action was to issue a memorandum on transparency and open gov’t. This was very important, Beth says, because it let the open gov folks in the administration say, “The President says…” President Obama is very committed to this agenda, she says; after all, he is a community organizer in his roots. Simple things like setting up a blog with comments were big steps. It’s about changing the culture. Now, there’s a culture of “leaning forward,” i.e., making commitments to being innovative about how they work. In Dec., every agency was told to come up with its own open govt plan. A directive set a road map: How and when you’re going to inventory all the data in your agency and put it online in raw, machine-readable form? How are you going to engage people in meaningful policy work? How are you going to engage in collaboration within govt and with citizens? On Tuesday, the White House collected self-evaluations, which are then evaluated by Beth’s office and by citizen groups.
How to get there. First, through people. Every agency has someone responsible for open govt. The DoT has 200+ on their open govt committee. Second, through platforms (which, as she says, is Tim O’Reilly’s mantra). E.g., data.gov is a platform.
Transparency is going well, she thinks: White House visitor logs, streaming the health care summit, publishing White House employee salaries. More important is data.gov. 64M hits in under a year. Pew says 40% of respondents have been there. 89M hits on the IT dashboard that puts a user-friendlier interface to govt spending. Agencies are required to put up “high value” data that helps them achieve their core mission. E.g., Dept. of Labor has released 15 yrs of data about workplace exposure to toxic chemicals, advancing its goal of saving workers’ lives. Medicare data helps us understand health care. USDA nutrition data + a campaign to create video games to change the eating habits of the young. Agencies are supposed to ask the public which data they want to see first, in part as a way of spurring participation.
To spur participation, the GSA has been procuring govt-friendly terms of service for social media platforms; they’re available at apps.gov. It’s now trying to acquire innovation prize platforms, etc.
Participation and collaboration are different things, she says. Participation is a known term that has to do with citizens talking with govt. But the exciting new frontier, she says, is about putting problems out to the public for collaborative solving. E.g., Veterans Benefits Admin asked its 19,000 employees how to shorten wait times; within the first week of a brainstorming competition, 7,000 employees signed up and generated 3,000 ideas, the top ten of which are being implemented. E.g., the Army wikified the Army operations manual.
It’s also about connecting the public and private. E.g., the National Archives is making the Federal Register available for free (instead of for $17K/yr), and the Princeton Internet center has made an annotatable version. Carl Malamud also. The private sector has announced National Lab Day, to get scientists out into the schools. Two million people signed up.
She says they know they have a lot to do. E.g., agencies are sitting on exabytes of info, some of which is on paper. Expert networking: We have got to learn how to improve upon the model of federal advisory commissions, which draw on the same group of 20 people. It’s not as effective as a peer to patent model, volunteers pooled from millions of people. And we don’t have much experience using collaboration tools in govt. There is a recognition spreading throughout the govt that we are not the only experts, that there are networks of experts across the country and outside of govt. But ultimately, she says, this is about restoring trust in govt.
Q: Any strategies for developing tools for collaborative development of policy?
A: Brainstorming techniques have been taken up quickly. Thirty agencies are involved in thinking about this. It’s not about the tools, but thinking about the practices. On the other hand, we used this tool with the public to develop open govt plans, but it wasn’t promoted enough; it’s not the tools but the processes. Beth’s office acts as an internal consultancy, but people are learning from one another. This started with the President making a statement, modeling it in the White House, making the tools available…It’s a process of creating a culture and then the vehicles for sharing.
Q: Who winnowed the Veterans agency’s 3,000 suggestions?
A: The VA ideas were generated in local offices and got passed up. In more open processes, they require registration. They’ve used public thumbs up and down, with a flag for “off topic” that would shrink the posting just to one link; the White House lawyers decided that that was acceptable so long as the public was doing the rating. So the UFO and “birther” comments got rated down. They used a wiki tool (MixedInk) so the public could write policy drafts; that wiki let users vote on changes. When there are projects with millions of responses, it will be very hard; it makes more sense to proliferate opportunities for smaller levels of participation.
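The rating scheme described above — public thumbs up and down, with enough “off topic” flags collapsing a posting down to a single link — can be sketched simply. The field names and the flag threshold here are hypothetical; the point is only that the public, not moderators, does the rating, and flagged posts shrink rather than disappear.

```python
def display_state(post, flag_threshold=5):
    """Toy model of the public-rating scheme (hypothetical field names):
    votes rank posts, and enough 'off topic' flags collapse a post
    down to just its link rather than deleting it."""
    if post["off_topic_flags"] >= flag_threshold:
        # Collapsed: shown only as a link, still reachable.
        return {"collapsed": True, "link": post["link"]}
    return {
        "collapsed": False,
        "score": post["ups"] - post["downs"],  # public thumbs up/down
        "text": post["text"],
    }

# A heavily flagged comment shrinks to a bare link.
post = {"text": "Release the UFO files", "link": "/posts/42",
        "ups": 3, "downs": 40, "off_topic_flags": 7}
state = display_state(post)
```

The design choice matters legally as well as socially: as reported above, the White House lawyers accepted the scheme precisely because the public, not the government, was doing the rating.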
A: We’re crowd-sourcing expertise. In peer to patent, we’re not asking people if they like the patent or think it should be patented; we’re asking if they have info that is relevant. We are looking for factual info, recognizing that even that info is value-laden. We’re not asking about what people feel, at least initially. It’s not about fostering contentious debate, but about informed conversation.
Q: What do you learn from countries that are ahead of the curve on e-democ, e.g., Estonia? Estonia learned 8 yrs ago that you have to ask people to register in online conversations…
A: Great point. We’re now getting up from our desks for the first time. We’re meeting with the Dutch, Norway, Estonia, etc. And a lot of what we do is based on Al Gore’s reinventing govt work. There’s a movement spreading particularly on transparency and data.gov.
Q: Is transparency always a good approach? Are there fields where you want to keep the public out so you can talk without being criticized?
A: Yes. We have to be careful of personal privacy and national security. Data sets are reviewed for both before they go up on data.gov. I’d rather err on the side of transparency and openness to get us over the hump of sharing what they should be sharing. There’s value in closed-door brainstorming so you can float dumb ideas. We’re trying to foster a culture of experimentation and fearlessness.
[I think it's incredible that we have people like Beth in the White House working on open government. Amazing.]
Here’s a post from last July — ok, so I’m a little behind in my reading — that describes the Tuttle Club’s first consulting engagement. An open, self-selected group of people converge for an open session with the potential client. They talk, sketch, and do some improv, out of which emerges a set of topics and people for more focused discussion.
This is semi-emergent expertise. I add the “semi” because the initial starting conditions are quite focused, so the potential areas of collaboration and outcomes are thus fairly constrained. But compared to traditional Calf Sock Expertise (i.e., highly paid and trained men in blue suits who believe that focus is the only efficient way to proceed), this is wildly emergent.
As part of my Be A Bigger A-Hole resolution, let me note that the Harvard Business Review blog has just run a post of mine that looks at the history of the DIKW pyramid and why it doesn’t make that much sense.
…The Republicans are better at questioning the President than you are.
I learned more about both sides of the issues than I have by listening to official press conferences. Getting neutrality out of the way seems to help when the issues are by nature contentious. Having the media mediate puts into the middle a force that (a) fears that taking up the opposition’s side too strongly will look like partisanship, and (b) is looking for “news,” i.e., headlines. It turns out that getting to hear the back-and-forth of the groups that have skin in the game can be better than inviting in a skinless third party.
Of course, it helps that not only is our President articulate and informed, he tries to engage substantively and accords his opponents appropriate dignity. And it’s to the Republicans’ credit that they invited him in, gave the session enough time, and treated him civilly.
The iPad definitely ups the Kindle’s ante. Unfortunately, it ups the Kindle ante by making an e-book more like a television set.
Will it do well? I dunno. Probably. But is it the future of reading? Nope. It’s the high-def, full-color, animated version of the past of reading.
The future of reading is social. The future of reading blurs reading and writing. The future of reading is the networking of readers, writers, content, comments, and metadata, all in one continuous-on mash.
Compared to my laptop, the iPad lacks a keyboard, software development tools, writers’ tools, photographers’ tools, a Web server, a camera, a useful row of connectors for different sorts of wires, and the ability to run whatever software I choose. Compared to my Android phone, it lacks a phone, a camera, pocketability, and the ability to run whatever software I choose. Compared to the iPad, my phone lacks book-reading capability, performance, and screen real-estate. Compared to the iPad, my computer lacks a touch interface and suffers from excessive weight and bulk.
It’s probably a pretty sweet tool for consuming media, even given the unfortunate 4:3 aspect ratio. And consuming media is obviously a big deal for a whole lot of people.
Want to see one way to use the Web to teach? Berkman’s Jonathan Zittrain and Stanford Law’s Elizabeth Stark are teaching a course called Difficult Problems in Cyberlaw. It looks like they have students creating wiki pages for the various topics being discussed. The one on “The Future of Wikipedia” is a terrific resource for exploring the issues Wikipedia is facing.
Among the many things I like about this approach: It implicitly makes the process of learning — which we have traditionally taken as an inward process — a social, outbound process. By learning this way, we are not only enriching ourselves, but enriching our world.
My only criticism: I wish the pages had prominent pointers to a main page that explains that the pages are part of a course.