
June 4, 2012

Remixing the President

Aaron Shaw has a very interesting post on what sure looks like contradictory instructions from the White House about whether we’re free to remix photos that have been released under a maximally permissive U.S. Government license. Aaron checked in with a Berkman mailing list, where two theories were floated: it’s due to a PR reflex, or it’s an attempt to impose a contractual limitation on the work. There have been lots of other attempts to impose such limitations on reuse, so that “august” works don’t end up being repurposed by hate groups and pornographers; I don’t know if such limitations have any legal bite.

Dan Jones places himself clearly on the side of remixing. Here’s the White House original:

And here’s Dan’s gentle remix:

Bring it, Holder! :)


May 30, 2012

Interop: The Book

John Palfrey and Urs Gasser are giving a book talk at Harvard about their new book, Interop. (It’s really good. Broad, thoughtful, engaging. Not at all focused on geeky tech issues.) NOTE: Posted without re-reading

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

[The next day: Nathan Matias has posted a far better live-blog post of this event.]

JP says the topic of interop seems on the face of it like it should be “very geeky and very dull.” He says the book started out fairly confined, about the effect of interop on innovation. But as they worked on it, it got broader. E.g., coverage of the Facebook IPO has been spun around the stock price’s ups and downs. But from an interop perspective, the story is about why FB was worth $100B or more when its revenues don’t indicate any such thing. It’s because FB’s interop with our lives makes it hard to extract. But from this also come problems, which is why the subtitle of Interop talks about its perils.

Likewise, the Flame virus shows that viral outbreaks cannot be isolated easily. We need firebreaks to prevent malware from spreading.

In the book, JP and Urs look at how railroad systems became interoperable. Currency is like that, too: currencies vary, but we are able to trade across borders. This has been great for the global economy, but it makes problems as well. E.g., the Greek economic meltdown shows the interdependencies of economies.

The book gives a concise def of interop: “The ability to transfer and render useful data and other information across systems (including organizations), applications or components.” But that is insufficient. The book sees interop more broadly as “The art and science of working together.” The book talks about interop in terms of four levels: data, tech, humans, and institutions.
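
[At the data layer, that definition is easy to make concrete. Here's a minimal sketch of mine, not the book's, of "transfer and render useful," using JSON as a stand-in for whatever common format two systems agree on:]

    import json

    # System A keeps a record in its own internal structure.
    record_in_a = {"patient_id": 42, "name": "Jane Doe", "temp_c": 38.2}

    # "Transfer": serialize into a format both systems can parse.
    wire = json.dumps(record_in_a)

    # "Render useful": System B parses the same bytes and can act on them.
    record_in_b = json.loads(wire)
    print(record_in_b["temp_c"])  # 38.2 -- the data survived the hop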

They view the book as an inquiry, some of which is expressed in a series of case studies and papers.

Urs takes the floor. He’s going to talk about a few case studies.

First, how can we make our cities smarter using tech? (Urs shows an IBM video that illustrates how dependent we are on sharing information.) He draws some observations:

  • Solutions to big societal problems increasingly depend on interoperability — from health care to climate change.

  • Interop is not black or white. Many degrees. E.g., power plugs are not interoperable around the world, but there are converters. Or, international air travel requires a lot of interop among the airlines.

  • Interop is a design challenge. In fact, once you’ve messed up interop, it’s hard to make it right. E.g., it took a long time to fix air traffic control systems because there was a strongly embedded legacy system.

  • There are important benefits, including systems efficiency, user choice, and economic growth.

Urs points to their four-layer model. To make a smart city, the tech the firefighters and police use needs to interop, as does their data. But at the human layer, the language can vary among branches; e.g., “333” might code one thing for EMTs and another for the police. At the institutional layer, the laws for privacy might not be interoperable, making it hard for businesses to work globally.
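
[The “333” example suggests that human-layer interop is partly a translation problem. A toy sketch; the codes are invented, not from the talk:]

    # Invented codes: the same "333" means different things to different branches.
    EMT_CODES = {"333": "cardiac arrest"}
    POLICE_CODES = {"333": "armed robbery"}

    def to_common(agency_codes, code):
        """Map a local code to a shared vocabulary before sharing it."""
        return agency_codes[code]

    # Without the mapping step, the raw code is ambiguous across agencies:
    assert to_common(EMT_CODES, "333") != to_common(POLICE_CODES, "333")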

Second example: When Facebook opened its APIs so that other apps could communicate with FB, there was a spike in innovation; 4k apps were made by non-FB devs that plug into FB. FB’s decision to become more interoperable led to innovation. Likewise for Twitter. “Much of the story behind Twitter is an interop question.”

Likewise for Ushahidi; after the Haitian earthquake, it made a powerful platform that enabled people to share and accumulate info, mapping it, across apps and devices. This involved all layers of the interop stack, from data to institutions such as the UN pitching in. (Urs also points to safe2pee.org :)
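
[An illustration of why opening APIs, as FB and Twitter did above, invites third-party innovation: an outside developer needs only an HTTP endpoint and a documented response shape. The endpoint, token, and fields below are hypothetical, not Facebook's or Twitter's actual APIs:]

    import requests  # third-party: pip install requests

    # Hypothetical endpoint and response fields -- for illustration only.
    resp = requests.get(
        "https://api.example.com/v1/me/friends",
        params={"access_token": "YOUR_TOKEN", "limit": 10},
        timeout=10,
    )
    resp.raise_for_status()
    for friend in resp.json()["data"]:
        print(friend["name"])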

Observations:

  • There’s a cycle of interop, competition, and innovation.

  • There are theories of innovation, including generativity (Zittrain), user-driven innovation (von Hippel), and small-step innovations (Christensen).

  • Caveat: More interop isn’t always good. A highly interop business can take over the market, creating a de facto monopoly, and suppressing innovation.

  • Interop also can help diffuse adoption. E.g., the transition to high-def TV only took off once the TVs were able to interoperate between analog and digital signals.

Example 3: Credit cards are highly interoperable: wherever you happen to be buying, you can use any of a selection of cards that work with just about any bank. Very convenient.

Observations:

  • this level of interop comes with costs and risks: identity thefts, security problems, etc.

  • The benefits outweigh the risks

  • This is a design problem

  • More interop creates more problems because it means there are more connection points.

Example 4: Cell phone chargers. Traditionally, each phone had its own charger. Why? Europe addressed this with the “Sword of Damocles” approach: if the phone makers didn’t get their act together, the EC would regulate them into it. The micro-USB charger is now standard in Europe.

Observations:

  • It can take a long time, because of the many actors, legacy problems, and complexity.

  • It’s useful to think about these issues in terms of a 2×2 of regulation/non-regulation and collaborative/unilateral.

JP back up. He is going to talk about libraries and the preservation of knowledge as interop problems. Think about this as an issue of maintaining interop over time. E.g., try loading up one of your floppy disks. The printed version is much more useful over the long term. Libraries find themselves in a perverse situation: if you provide digital copies of books, you can provide much less than you can with physical books. Five of the 6 major publishers won’t let libraries lend e-versions. It’d make sense to have new books provided in an open standard format; even if libraries could lend the books, people might not have the interoperable tech required to play them. Yet libraries are spending more on e-books, and less on physical ones. If libraries have digital copies and not physical copies, they are vulnerable to tech changes. How do we ensure that we can continuously update? The book makes a fairly detailed suggestion. But as it stands, as we switch from one format to another over time, we’re in worse shape than if we had physical books. We need to address this. “When it comes to climate change, or electronic health records, or preservation of knowledge, interop matters, both as a theory and as a practice.” We need to do this by design up front, deciding what the optimal interop is in each case.
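
[The book's suggestion is more detailed than JP presents here, but the basic maintenance chore is easy to sketch: every format transition needs an explicit migration step, or the collection quietly becomes unreadable. A sketch under invented formats and field names, not the book's proposal:]

    import csv
    import json
    from pathlib import Path

    def migrate(legacy_csv: Path, archive_json: Path) -> None:
        """Re-encode a legacy catalog export into a self-describing open format."""
        with legacy_csv.open(newline="") as f:
            rows = list(csv.DictReader(f))
        # Tag the output with a version so the *next* migration knows what it has.
        archive = {"format_version": 2, "records": rows}
        archive_json.write_text(json.dumps(archive, indent=2))

    # Usage: migrate(Path("catalog_1998.csv"), Path("catalog_archive.json"))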

Q&A

Q: [doc searls] Are there any places where you think we should just give up?

A: [jp] I’m a cockeyed optimist. We thought that electronic health records in the US were the hardest case we came across.

Q: How does the govt conduct consultations with experts from across the US? What would it take to create a network of experts?

A: [urs] Lots of expert networks have emerged, enabled by tech that fosters human interoperability from the bottom up.
A: [jp] It’s not clear to me that we want that level of consultation. I don’t know that we could manage direct democracy enabled in that way.

Q: What are the limits you’d like to see emerge on interop? I.e., I’m thinking of problems of hyper-coherence in bio: a single species of rice or corn that may be more efficient can turn out, with one blight, to have been a big mistake. How do you build in systems of self-limit?

[urs] We try to address this somewhat in a chapter on diversity, which begins with biodiversity. When we talk about interop, we do not suggest merging or unifying systems. To the contrary, interop is a way to preserve diversity, and to prevent fragmentation within diversity. It’s extremely difficult to find the optimum, which varies from case to case, and to decide on which speed bumps to put in place.
[jp] You’ve gone to the core of what we’re thinking about.

Q: Human autonomy, efficiency, and economic growth are three of the benefits you mention, but they can be in conflict with one another. How important are decentralized systems?

[urs] We’re not arguing in favor of a single system, e.g., that we have only one type of cell phone. That’s exactly not what we’re arguing for. You want to work toward the sweet spot of interop.
[jp] They are in tension, but there are some highly complex systems where they coexist. E.g., the Web.

Q: Yes, having a single cell phone charger is convenient. But there may be a performance tradeoff, where you can’t choose the optimum voltage if you standardize on 5V. And an innovation deficit: you won’t get magnetic plugs, etc.

[urs] Yes. This is one of the potential downsides of interop. It may lock you in. When you get interop by choosing a standard, you freeze the standard for the future. So one of the additional challenges is: how can we incorporate mechanisms of learning into standards-setting?
Cell phone chargers don’t have a lot of layers on top of them, so standardizing them doesn’t ripple through generativity all that much. And that’s what the discussion should be about.


The virtue of self-restraint

Last night was the annual Berkman dinner. Lovely. It inevitably turned into a John Palfrey love fest, since he is leaving Harvard very soon, and he is much beloved.

People stood and spoke beautifully and insightfully about what John has meant to them. (Jonathan Zittrain and Ethan Zuckerman were, in addition, predictably hilarious.) But Terry Fisher gave us an insight so apt about John and so widely relevant that I’ll share it.

Terry listed some qualities of John’s. The last one he talked about started with John’s very occasional sarcasm: he can be bitingly perceptive about people, but always in private. So, said Terry, JP’s remarked-upon and remarkable niceness and kindness are not due to naivete. Rather, they arise in part from another of John’s virtues: self-restraint. And from that, said Terry, comes much of the kindness that generally characterizes the Berkman Center.

That struck me not only as true of John, but as such an important quality for civic discourse. Self-restraint is an enabling virtue: its exercise results in further qualities that make life on a crowded planet more fruitful and enjoyable.


May 29, 2012

[berkman] Dries Buytaert: Drupal and sustaining collaborative efforts

Dries Buytaert [twitter:Dries], the founder of Drupal and co-founder of Acquia, is giving a Berkman lunch talk about building and sustaining online collaborations.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Drupal is an open source content manager, Dries says. In the past twelve years, Drupal has “grown significantly”: 71 of the top 100 universities use it, 120 nations use it, the White House uses it, 2 of the 3 top music companies use it, the King of Belgium uses it. [Dries is Belgian :)] The NY Stock Exchange is converting from a proprietary Java solution to Drupal. Five of the 6 top media companies use it. One out of 50 websites runs on Drupal. Drupal has 10,000+ modules, 300,000 downloads a month, and 1.5M unique visitors a month at drupal.org. And it’s free as in beer.

Today he’s going to talk about: history, open source, community, the evolution of software, and how to grow and sustain it.

History

Dries began writing Drupal in his dorm room, more or less by accident. He wrote a message board for the Linux project, in part to learn PHP and MySQL. About a year later he released Drupal 1.0 as open source, as “a full-featured content management/discussion engine…suitable to setup a news-driven community or portal site similar to kuro5hin.org and slashdot.org” (as it said in the original announcement). “It took me about 30 seconds to come up with the name Drupal, a terrible name.”

Three years later (v.4.1), he says, it still looked “pretty crappy.” Two years later, in 2005, 30 developers showed up for the first DrupalCon, in Antwerp. There are now several a year. By 2011, it was looking quite good, and 3,200+ developers showed up at DrupalCon. There are now weekly meetings around the world.

There were growing pains, he says. He tells us about The Big Server Meltdown. In 2004, the servers failed. Dries put up a blank page with a PayPal button to raise $3,000 for a server. Within 24 hours, they’d raised $10,000. One of the CTOs of Sun shipped him an $8,000 machine. Then Open Source Labs in Portland, OR, offered to house the servers. “That’s just one anecdote. In the history of Drupal, it feels like we’ve had hundreds of these.” (There are currently 8 staff members. They organize conferences and keep the servers up.)

But, Dries says, this shows a weakness in open source: you suddenly have to raise $3,000 and may not be able to do so. That’s a reason he started Acquia, which provides support for Drupal.

Open Source

Drupal is open source: it’s gratis, anyone can look at the source code, they can modify the code, and they can share it. The fact that it’s free sometimes lets them win bids, but open source “is not just a software license. It’s a collaboration model.” “Open source leads to community.” And “ultimately, that leads to innovation.”

Dries shows photos of the community’s embrace of Drupal (and its logo). “Drupal is successful today because of the community.”

Q: How do we know there will be enthusiastic support a few years down the road? How do we know it won’t have a Y2K problem?

A: There isn’t an easy answer. Things can go wrong. We try to keep it relevant. We have a good track record of innovating and catching the right trends. And a lot of it comes down to keeping the community engaged. We have a large ecosystem. They volunteer their time, but they are all making money; they have an economic interest in keeping Drupal relevant.

Community

“Drupal doesn’t win just because it’s cheaper. It wins because it’s better.” It is technically superior because it has thousands of developers.

Evolution of software

Dries points to a common pattern: From innovation to bespoke systems to products to commoditization. In each step, the reach becomes wider. Proprietary software tends to stop at the products stage; it’s hard to become a commodity because proprietary software is too expensive. This is an important opportunity for open source.

Growing large projects

Is Drupal’s growth sustainable? That’s a reason Dries founded the Drupal Association, a non-profit, in 2006. It helps maintain drupal.org, organizes events, etc. But Drupal also needs companies like Acquia to get it into new areas. It needs support. It needs people who can talk to CIOs in large companies.

Open source Joomla recently hired some developers to work on its core software, which has led some of the contributors to back off. Why should they contribute their time if Joomla is paying some folks? [Joomla’s experience illustrates a point from The Wealth of Networks: putting money into a collaboration can harm the collaboration.] Drupal is not going to do that. (Acquia develops some non-open-source Drupal tools.)

IBM and RedHat are the top contributors to Linux. What companies might make that sort of strategic investment in Drupal? Instead of one or two, how about hundreds? So Dries created “Large Scale Drupal,” a membership org to jointly fund developments. It’s new. Members contribute money and get a say in where it’s spent. The members are users of Drupal, e.g., Warner Music. Module developers can get funded from LSD. Two people run it, paid by Acquia. There has not been any pushback from the dev community because there’s no special backdoor by which these projects get added to the Drupal core. In fact, the money is spent to fund developers. Dries sets the technical roadmap by listening to the community; neither the Drupal Association nor LSD influences that.

These collaborative projects often start as small, volunteer-driven efforts. But then they become institutionalized as they grow. Trade routes are like that: they were originally worn into the ground, but then became driven by commercial organizations, and finally were governed by governments. Many other efforts exhibit the same pattern. Can open source avoid it?

Q&A

If you’re thinking of starting an open source commercial company, you could do dual licensing, but Drupal has not made that choice.

Q: How much does Drupal contribute to the PHP community?
A: A little. There are tribes: some are active in the PHP tribe, others in the Drupal tribe. It’s unfortunate that there isn’t more interaction. Dries says he’d love to grow Acquia enough so that it can put a couple of people on PHP, because if PHP isn’t successful, neither is Drupal.

Q: Governance?
A: We don’t have a lot of decision-making structure. I’ve always been opposed to formal voting. We work through discussion. We debate what should be in the core. Whoever wants to can participate in the debate. Ultimately we’re structured like Linux: there are two people committing changes to a core version of Drupal. For every major version I pick someone to work alongside me. When we release the version, he or she becomes its maintainer. I move on to the next version and select someone to be my co-maintainer. The 15,000 modules are maintained by the community.

Q: Do your biggest contributors agree to programming standards?
A: We are strict about our coding and documentation standards. I make the final decisions about whether to accept a patch. Patches go through a workflow before they reach me.
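
[Drupal's standards and patch queue are PHP-side; as a rough analogy only, here's how an automated gate might screen patches for style before they reach a maintainer, using Python's pycodestyle. The workflow is my assumption, not Drupal's actual process:]

    import sys
    import pycodestyle  # pip install pycodestyle

    def patch_passes_style(paths: list[str]) -> bool:
        """Fail fast on style violations so humans only review clean patches."""
        report = pycodestyle.StyleGuide().check_files(paths)
        return report.total_errors == 0

    if __name__ == "__main__":
        sys.exit(0 if patch_passes_style(sys.argv[1:]) else 1)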

Q: What advice would you give to someone trying to attract people to a project?
A: If people can make money through your project, it will grow faster. We built a community on trust and respect; we make decisions on technical merit, not dollars. We have a darwinian model for ideas; bad ideas just die. See what rises to the top. Include it in the next version. Then put it into the core, if it’s worth it. The downside is that it’s very wasteful. I could tell people “If you do x, it will get in,” but I try to get out of the way. People have taken Drupal in all sorts of directions, e.g., political campaigns, elearning platforms, etc.

Q: [me] How important are you to Drupal these days?
A: I think I’m more important as the face of Drupal than I used to be. In the governance sense I’m less important. I was the lead developer, the admin for the servers, etc., at the beginning. The “hit by a bus” factor was very risky. Nowadays, I don’t write code; I just review code. I still have a lot of work, but it’s much more focused on reviewing other people’s work and enabling them to make progress. If I were to die, most things would continue to operate. The biggest pain would be in the marketing. There are a lot of leaders in Drupal. One or two people would emerge or be elected to replace what I do.

Q: What’s hard for Drupal?
A: One of our biggest risks is staying nimble and lean. It takes longer to make decisions. We need to continue to evolve the governance model to accelerate decision making. Also, we have some real technical issues we need to address, and they’re huge projects. Volunteers can only accomplish so much. LSD is perfectly positioned to tackle the hardest problems. If we did it at the pace of the volunteers, it would take years.


March 16, 2012

Berkman Buzz

This week’s Berkman Buzz

  • Ethan Zuckerman unpacks ‘Kony 2012’ [link]

  • The metaLAB introduces the world to Biblio, your new library friend [link]

  • The CMLP explores the First Amendment issues surrounding the Fluke/Limbaugh incident [link]

  • Mako Hill encourages greater communication about DRM [link]

  • Aaron Shaw reviews a new paper on “wiki surveys” [link]

  • A Global Voices Guide to SXSW [link]


March 3, 2012

[berkman] Berkman Buzz

This week’s Berkman Buzz

  • Ethan Zuckerman explores civic video [link]

  • Berkman & the MIT Center for Civic Media examine “truthiness” [link]

  • danah boyd announces The Kinder & Braver World Project: Research Series [link]

  • Mayo Fuster Morell reports on the OWS Forum on the commons [link]

  • The Internet & Democracy Project releases new paper on Internet’s impact on Russian politics, media, and society [link]

  • Zambia: Ban Ki-moon Calls on Nation to Respect Gay Rights [link]


February 7, 2012

[berkman] From Freedom of Information to Open data … for open accountability

Felipe L. Heusser [pdf] is giving a Berkman lunchtime talk called “Open Data for Open Accountability.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

How has the open Web been changing accountability and transparency? Felipe is going to share two ideas: 1. The Web is making freedom of information acts (FOIA) obsolete. 2. An open data policy is necessary to keep freedom of information up to date, and to move toward open accountability.

Lots of people praise transparency, he says. There are multiple systems that benefit from it. Felipe shows a map of the world indicating that most parts of the world have open government policies, although that doesn’t always correlate with actual openness. We continue to push for transparency. One of the cornerstones of transparency policy is freedom of information regulation. In fact, FOIA is part of a long story, going back at least to 1667, when a Finnish priest introduced a bill into the Swedish parliament. [Entirely possible I heard this wrong.]

Modern FOI laws require governments to react to requests and to proactively provide information. (In response to a question, Felipe says that countries have different reasons for putting FOI laws in place: as a credential, to create a centralized info system (as in China), etc.) Felipe’s study of 67 laws found five clusters, although overall they’re alike. One feature they share: they heavily rely on reactive transparency. This happens in part because FOI laws come out of an era when we thought about access to documents, not about access to data. That’s one big reason FOI laws are increasingly obsolete. In 2012, most of the info is not in docs but in data sets.

Another reason: It’s one-way information. There’s no two-way communication, and no sharing. Also, gatekeepers decide what you can know. If you disagree, you can go to court, which is expensive and slow.

In May 2009, data.gov launched. The US was the first country to support an open data policy. In Sept. 2009, the UK site launched. Now many have launched, e.g., Kenya and the World Bank. These data are released in machine-readable formats. The open data community thinks this data should be available raw, online, complete, timely, accessible, machine-processable, reusable, non-discriminatory, and with open licenses.

So, why are these open data initiatives good news? For one thing, it keeps our right to FOI up to date: we can get at the data sets of neutral facts. For another, it enables multiway communication. There are fewer gatekeepers you have to ask permission of. It encourages cheap apps. Startups and NGOs are using it to provide public service delivery.

Finally, Felipe runs an NGO that uses information to promote transparency and accountability. He says that access to open data changes the rules of accountability, and improves them. Traditional gov’t accountability moves from institutional and formal to crowd-sourced and informal; from a scarcity of watchdogs to an abundance of watchdogs; and from an election every four years to a continuous benchmark. We are moving from accountability to open accountability.

Global Voices started a project called Technology for Transparency, mapping open govt apps. Also MySociety, Ushahidi, the Sunlight Foundation, and Ciudadano Inteligente (Felipe’s NGO). One of CI’s recent apps is Inspector of Interests, which tries to identify potential conflicts of interest in the Chilean Congress. It relies on open data. Officials are required to release info about themselves; CI built an alternative data set to contrast with the official one, using open data from the tax and revenue service and the public register. This exposed the fact that nearly half of the officials were not publishing all their assets.

It is an example of open accountability: it uses open, machine-readable, neutral data; the crowd helps; and it provides ongoing accountability.
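
[The mechanics of that cross-check are simple once the data is open and machine-readable; the hard part is building the alternative data set. A sketch with invented data, not CI's actual code:]

    # Invented data. The idea: a set difference between what officials declared
    # and what public records (tax service, property register) suggest they own.
    declared = {("rep_01", "apartment, Santiago"), ("rep_01", "car")}
    derived = declared | {("rep_01", "vineyard shares")}

    for official, asset in sorted(derived - declared):
        print(f"{official}: possibly undeclared asset -- {asset}")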

Now Felipe points to evidence about what’s going on with open data initiatives. There is a weird coalition pushing for open data policies. Gov’ts have been reacting. In three years, 118 open data catalogs have appeared from different countries, with over 700,000 data sets. But, although there’s a lot of hype, there’s lots to be done. Most of the catalogs are not driven at a national level; most are local. Most of the data in the catalogs isn’t very interesting or useful. Most are images. There’s very little medical info, and the smallest category is banking and finance.

Q: [doc] Are you familiar with miData in the UK, which makes personal data available? Might this be a model for gov’t?

Q: [jennifer] 1. There are no neutral facts. Data sets are designed and structured. 2. There are still gatekeepers. They act proactively, not reactively. E.g., data.gov has no guidelines for what should be supplied. FOIA meets demands. Open data is supplied according to what the gatekeepers want to share. 3. FOIA can be shared. 4. What’s the incentive to get useful open data out?
Q: [yochai] Is open data doing the job we want? Traffic and weather data is great, but the data we care about — are banks violating privacy, are we being spied on? — don’t come from open data but from FOIA requests.
A: (1) Yes, but FOI laws regulate the ability to access documents which are themselves a manipulation to create a report. By “neutral facts” I meant the data, although the creation of columns and files is not neutral. Current FOI laws don’t let you access that data in most countries. (2) Yes, there will still be gatekeepers, but they have less power. For one thing, they can’t foresee what might be derived from cross-referencing data sets.
Q: [jennifer] Open data doesn’t respond to a demand. FOI does.
A: FOI remains demand driven. And it may be that open data is creating new demand.

Q: [sascha] You’re getting pushback because you’re framing open data as the new FOI. But the state is not going to push into the open data sets the stuff that matters. Maybe you want to say that WikiLeaks is the new FOI, and open data is something new.
A: Yes, I don’t think open data replaces FOI. Open data is a complement. In most countries, you can’t get at data sets by filing a FOI request.

Q: [yochai] The political and emotional energy is being poured into open data. If an administration puts millions of bits of irrelevant data onto data.gov but brings more whistleblower suits than ever before,…to hold up that administration as the model of transparency is a real problem. It’d be more useful to make the FOI process more transparent and shareable. If you think the core is to make the govt reveal things it doesn’t want to, then those are the interesting interventions, and open data is a really interesting complement. If you think that you can’t hide once the data is out there, then open data is the big thing. We need to focus our political energy on strengthening FOI. Your presentation represents the zeitgeist around open data, and that deserves thinking about.
Q: [micah] Felipe is actually quite critical of data.gov. I don’t know of anyone in the transparency movement who’s holding up the Obama gov’t as a positive model.
A: Our NGO built Access Inteligente which is like WhatDoTheyKnow. It publishes all the questions and responses to FOI requests, crowdsourcing knowledge about these requests. Data.gov was the first one and was the model for others. But you’re right that there are core issues on the table. But there might be other, smaller, non-provocative actions, like the release of inoffensive data that lets us see that members of Congress have conflicts of interest. It is a new door of opportunity to help us move forward.

Q: [juan carlos] Where are corporations in this mix? Are they not subject to social scrutiny?

Q: [micah] Can average citizens work with this data? Where are the intermediaries coming from?
A: Often the data are complex. The press often act as intermediaries.

Q: Instead of asking for an overflow of undifferentiated data, could we push for FOI to allow citizens’ demands for data, e.g., for info about banks?
A: We should push for more reactive transparency.

Q: [me] But this suggests a reframing: FOI should be changed to enable citizens to demand access to open data sets.

Q: We want different types of data. We want open data in part to see how the govt as a machine operates. We need both. There are different motivations.

Q: I work at the community level. We assume that the intermediaries are going to be neutral bodies. But NGOs are not neutral. Also, anyone have examples of citizens being consulted about what types of data should be released to open data portals?
A: The Kenya open data platform is there, but many Kenyans don’t know what to do with it. And local governments may not release info because they don’t trust what the intermediaries will do with it.


January 28, 2012

Berkman Buzz

This week’s Berkman Buzz

  • Jonathan Zittrain hosts Computers Gone Wild [link]

  • Yochai Benkler discusses the Megaupload indictment [link]

  • Zeynep Tufekci argues that Twitter’s new tweet blocking policy is good for free speech [link]

  • Wayne Marshall explores nationalism and tradition in Congolese hip-hop [link]

  • Ethan Zuckerman liveblogs the launch of David Weinberger’s “Too Big To Know” [link]

  • Weekly Global Voices: Serbia: The Media War Against Angelina Jolie [link]


January 16, 2012

Berkman Buzz

This week’s Berkman Buzz

  • Dan Gillmor explores the role of the news ombudsman [link]

  • danah boyd is “Generation Flux” [link]

  • Ethan Zuckerman liveblogs Wael Abbas’s talk on video and social media in pre-revolution Egypt [link]

  • metaLAB reviews Jeffrey Schnapp’s new Electric Information Age Book [link]

  • Herdict needs help with a mystery [link]

  • Weekly Global Voices: Kenya/Somalia: Twitter War: Kenyan Army Versus Al Shabaab [link]

(This was scraped from the Berkman page via ScraperWiki)


January 12, 2012

[2b2k] [berkman] Alison Head on how students seek information

Alison Head, who is at the Berkman Center and the Library Innovation Lab this year, but who is normally based at U of Washington’s Info School, is giving a talk called “Modeling the Information-Seeking Process of College Students.” (I did a podcast interview with her a couple of months ago.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Project Information Literacy is a research project that reaches across institutions. They’ve surveyed 11,000 students on 41 US campuses (Michael Eisenberg co-leads the project) to find out how students find and use information. They use voluntary samples, not random samples. But, Alison says, the project doesn’t claim to be able to generalize to all students; they look at the relationships among different kinds of schools and overall trends. They make special efforts to include community colleges, which are often under-represented in studies of colleges.

The project wanted to know what’s going through students’ heads as they do research. What’s it like to be a student in the digital age? “How do students define the research process, how do they conceptualize it” throughout everyday school life, including non-course-related research (e.g., what to buy).


Four takeaways from all five studies:

1. “Students say research is more difficult for them than ever before.” This is true both for course-related and everyday-life research. Teachers and librarians denied this finding when it came out. But students describe the process using terms of stress (fear, angst, tired, etc.). Everyday-life research also has a lot of risk associated with it, e.g., when researching medical problems.


Their research led the project to come up with a preliminary model, based on what students told them about the difficulties of doing research: in the beginning part of research, students try to define four contexts: big picture, info-gathering, language, and situational. These provide meaning and interpretation.


a. Big picture. In a focus group, a student said s/he went to an international relations class where there was an assignment on how Socrates would be relevant to a problem today. Alison looked at the syllabus and wondered, “Was this covered?” Getting the big picture enables students to get their arms around a topic.


b. Info gathering. “We give students access to 80 databases at our small library, and they really want access to one,” says Barbara Fister at Gustavus Adolphus.


c. Language. This is why most students go to librarians. They need the vocabulary.


d. Situational. The expectations: how long should the paper be, how do I get an A, etc.? In everyday life, the situational question might be: how far do I go with an answer? When do I know enough?


Students surveyed said that for course-related research they almost always need the big picture, often need info-gathering, sometimes need language, and sometimes need situational context. Students were 1.5x more likely to go to a librarian for language context. For everyday-life research, big picture is often a need, and the others are needed only sometimes. Many students find everyday-life research harder because it’s open-ended: it’s harder to know when you’re done, and harder to know when you’re right. Course-related research ends with a grade.


2. “Students turn to the same ‘tried and true’ resources over and over again.” In course research, course readings were used 97% of the time. Search engines: 96%. Library databases: 94%. Instructors: 88%. Wikipedia: 85%. (Those are the 2010 results; from 2009, everything rose except course readings.) Students are not using a lot of on-campus sources. Alison says that during 20 years of teaching, she found students were very disturbed if she critiqued the course readings. Students go to course readings not only to get situational context, but also to get big-picture context, i.e., the lay of the land. They don’t want you critiquing those readings, because you’re disrupting their big-picture context. Librarians were near the bottom, in line with other research findings. But “instructors are a go-to source.” Also, note that students don’t go online for all their info. They talk to friends, instructors, etc.


In everyday-life research, the list in order is: search engines 95%, friends and family 87%, Wikipedia 84%, personal collection 75%, and government sites 65%.


Students tend to repeat the same processes.


3. “Students use a strategy of predictability and efficiency.” They’re not floundering. They have a strategy. You may not like it, but they have one. It’s a way to fill in the context.


Alison presents a composite student named Jessica. (i) She has no shortage of ideas for research. But she needs the language to talk about the project, and to get good results from searching. (ii) Students are often excited about the course research project, but they worry that they’ll pick a topic “that fails them,” i.e., that doesn’t let them fulfill the requirements. (iii) They are often risk-averse. They’ll use the same resource over and over, even Project Muse for a science course. (“I did a paper on the metaphor of breast cancer,” said one student.) (iv) They are often self-taught and independent, and try to port over what they learned in high school. But HS works for HS, and not for college. (v) Currency matters.


What’s the most difficult step? 1. Getting started (84%). 2. Defining a topic (66%). 3. Narrowing a topic (62%). 4. Sorting through irrelevant results (61%). Task definition is the most difficult part of research. For everyday-life research, the hardest part is figuring out when you’re done.


So, where do they go when they’re having difficulty in course research? They go to instructors, but handouts fall short: few of the handouts the project looked at discussed what research means (16%). Six in ten handouts sent students to the library for a book. Only 18% mention plagiarism, and few of those explain what it is. Students want email access to the instructor. Most also want a handout that they can take with them and check off as they do their work. Few handouts tell students how to gather information. Faculty express surprise at this, saying that they assume students already know how to do research, or that it’s not the prof’s job to teach them. They tend not to mention librarians or databases.


Students use library databases (84%), the OPAC (78%), study areas (72%), the library shelves (55%), and the cafe (48%). Only 12% use the online “Ask a librarian” reference. 20% consult librarians about assignments, but 24% ask librarians about the library system.


Librarians use a model of scholarly thoroughness, while students use a model of efficiency. Students tend to read the course materials and then google for the rest.


Alison plays a video.



How have things changed? 1. Students contend with a staggering amount of information. 2. They are always on and always being notified. 3. It’s a Web 2.0 sharing culture. The old days of dreading group projects are ending; students sometimes post their topics on Facebook to elicit reactions and help. 4. The expectations from information have changed.


“Books, do I use them? Not really; they are antiquated interfaces. You have to look in an index, way in the back, and it’s not hyperlinked.”

[I moderated the Q&A so I couldn’t liveblog it.]

