Joho the Blog: October 2011

October 31, 2011

The Firefox difference

Sebastian Anthony points to a distinguishing philosophy of Firefox that was not clear to me until I read his piece. The title is “Firefox is the cloud’s biggest enemy,” which he admits in the comments is not entirely apt. Rather, Firefox wants you to own and control your data; it uses the cloud, but encrypts your data when it does. This is a strong differentiation from Google Chrome and Microsoft IE.


October 29, 2011

Where I’ll be

I’m trying out posting where I’ll be talking. I haven’t before because it seems to me to have no value except as boasting. Some of the events are open to the public, but I’m also going to mention some that aren’t because I guess I’m boasting. Let me know how obnoxious you find this. I’d be happy never to do it again. Anyway…

Monday I’m keynoting the DLF Fall Forum in Baltimore.

Tuesday I’m giving a talk in White Plains at the Westchester Library System Annual Meeting.

Friday I’m on a panel discussing “What’s next in the social media revolution?” at the National Archives. There’s a Social Media Fair and Reception at 5:30, and then the panel at 7pm.


Berkman Buzz

This week’s Berkman Buzz:

  • Ethan Zuckerman explores mapping and storytelling at Occupy Wall Street: link

  • Dan Gillmor critiques the WikiLeaks payments blockade: link

  • The Citizen Media Law Project spots Bigfoot fighting for free speech: link

  • Herdict covers China’s censorship of the ‘Occupy’ movement: link

  • Weekly Global Voices: “United Kingdom: At Age 77, a Life of Inspiration”


October 26, 2011

[2b2k] Will digital scholarship ever keep up?

Scott F. Johnson has posted a dystopic provocation about the present of digital scholarship and possibly about its future.

Here’s the crux of his argument:

… as the deluge of information increases at a very fast pace — including both the digitization of scholarly materials unavailable in digital form previously and the new production of journals and books in digital form — and as the tools that scholars use to sift, sort, and search this material are increasingly unable to keep up — either by being limited in terms of the sheer amount of data they can deal with, or in terms of becoming so complex in terms of usability that the average scholar can’t use it — then the less likely it will be that a scholar can adequately cover the research material and write a convincing scholarly narrative today.

Thus, I would argue that in the future, when the computational tools (whatever they may be) eventually develop to a point of dealing profitably with the new deluge of digital scholarship, the backward-looking view of scholarship in our current transitional period may be generally disparaging. It may be so disparaging, in fact, that the scholarship of our generation will be seen as not trustworthy, or inherently compromised in some way by comparison with what came before (pre-digital) and what will come after (sophisticatedly digital).

Scott tentatively concludes:

For the moment one solution is to read less, but better. This may seem a luddite approach to the problem, but what other choice is there?

First, I should point out that the rest of Scott’s post makes it clear that he’s no Luddite. He understands the advantages of digital scholarship. But I look at this a little differently.

I agree with most of Scott’s description of the current state of digital scholarship and with the inevitability of an ever-increasing deluge of scholarly digital material. But I think the issue is not that the filters won’t be able to keep up with the deluge. Rather, I think we’re just going to have to give up on the idea of “keeping up,” much as newspapers and half-hour news broadcasts have had to give up the pretense that they cover all the day’s events. The idea of coverage was always an internalization of the limitations of the old media, as if a newspaper, a broadcast, or even the lifetime of a scholar could embrace everything important there is to know about a field. Now the Net has made clear to us what we knew all along: most of what knowledge wanted to do was a mere dream.

So, for me the question is what scholarship and expertise look like when they cannot attain a sense of mastery by artificially limiting the material with which they have to deal. It was much easier when you only had to read at the pace of the publishers. Now you’d have to read at the pace of the writers…and there are so many more writers! So, lacking a canon, how can there be experts? How can you be a scholar?

I’m bad at predicting the future, and I don’t know if Scott is right that we will eventually develop such powerful search and filtering tools that the current generation of scholars will look betwixt-and-between fools (or as an “asterisk,” as Scott says). There’s an argument that even if the pace of growth slows, the pace of complexification will increase. In any case, I’d guess that deep scholars will continue to exist because that’s more a personality trait than a function of the available materials. For example, I’m currently reading Armies of Heaven, by Jay Rubenstein. The depth of his knowledge about the First Crusade is astounding. Astounding. As more of the works he consulted come on line, other scholars of similar temperament will find it easier to pursue their deep scholarship. They will read less and better not as a tactic but because that’s how the world beckons to them. But the Net will also support scholars who want to read faster and do more connecting. Finally (and to me most interestingly) the Net is already helping us to address the scaling problem by facilitating the move of knowledge from books to networks. Books don’t scale. Networks do. Although, yes, that fundamentally changes the nature of knowledge and scholarship.

[Note: My initial post embedded one draft inside another and was a total mess. Ack. I’ve cleaned it up – Oct. 26, 2011, 4:03pm edt.]


October 25, 2011

[berkman] [2b2k] Michael Nielsen on the networking of science

Michael Nielsen is giving a Berkman talk on the networking of science. (It’s his first talk after his book Reinventing Discovery was published.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He begins by telling the story of Tim Gowers, a Fields Medal winner and blogger. (Four of the 42 living Fields winners have started blogs; two of them are still blogging.) In January 2009, Gowers started posting difficult problems on his blog and working on them in the open. Plus he invited the public to post ideas in the comments. He called this the Polymath Project. 170,000 words of comments later, ideas had been proposed and rapidly improved or discarded. A few weeks later, the problem had been solved, at an even higher level of generalization.

Michael asks: Why isn’t this more common? He gives an example of the failure of an interesting idea, proposed by a grad student in 2005. Qwiki was supposed to be a super-textbook about quantum mechanics. The site was well built and well marketed. “But science is littered with examples of wikis like this…They are not attracting regular contributors.” Likewise, many scientific social networks are ghost towns. “The fundamental problem is one of opportunity costs. If you’re a young scientist, the way you build your career is through the publication of scientific papers…One mediocre crappy paper is going to do more for your career than a series of brilliant contributions to a wiki.”

Why then is the Polymath Project succeeding? It simply used an unconventional means to a conventional end: two papers were published out of it. Sites like Qwiki that are ends in themselves go unrewarded. We need a “change in norms in scientific culture” so that when people make decisions about grants and jobs, those who contribute in unconventional formats are rewarded.

How do you achieve a change in the culture? It’s hard. Take the Human Genome Project. In the 1990s, there wasn’t a lot of advantage to individual scientists in sharing their data. In 1996, the Wellcome Trust held a meeting in Bermuda and agreed on principles saying that if you sequenced more than a thousand base pairs, you needed to release them to a public database, where they would be put into the public domain. The funding agencies baked those principles into policy. In April 2000, Clinton and Blair urged all countries to adopt similar principles.

For this to work, you need enthusiastic acceptance, not just a stick beating scientists into submission. You need scientists to internalize it. Why? Because you need all sorts of correlative data to make lab data useful. E.g., in the Sloan Digital Sky Survey, a huge part of the project was establishing the calibration lines for the data to have meaning to anyone else.

Many scientists are pessimistic about this change occurring. But there are some hopeful precedents. In 1610 Galileo pointed his telescope at Saturn. He was expecting to see a small disk, but he saw a disk with small knobs on either side: the rings, although he couldn’t resolve the image further. He sent letters to four colleagues, including Kepler, that scrambled his discovery into an anagram. This way, if someone else made the discovery, Galileo could unscramble the letters and prove that he had made it first. Leonardo, Newton, Hooke, and Huygens all did this. Scientific journals helped end the practice. The editors of the first journals had trouble convincing scientists to reveal their findings because there was no link between publication and career. The editor of the first scientific journal (Philosophical Transactions of the Royal Society) goaded scientists into publishing by writing to them suggesting that other scientists were about to disclose what the recipients were working on. As the economist Paul David says, the change to the modern system was due to “patron pressure.”
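Galileo’s anagram did the job of what cryptographers now call a commitment scheme: publish something that fixes your claim without revealing it, then open it later to prove priority. Here’s a minimal modern sketch in Python (the claim text is a loose paraphrase of Galileo’s decoded anagram; the hash-plus-nonce approach is standard cryptographic practice, not anything described in the talk):

```python
import hashlib
import os

# Commit: publish the digest now; keep the claim and the nonce secret.
claim = b"the highest planet is triple-bodied"  # paraphrase of Galileo's decoded anagram
nonce = os.urandom(16)  # random salt so the claim can't be recovered by guessing
commitment = hashlib.sha256(nonce + claim).hexdigest()
print("published commitment:", commitment)

# Reveal: later, disclose the claim and nonce; anyone can recompute and check.
def verify(commitment, nonce, claim):
    return hashlib.sha256(nonce + claim).hexdigest() == commitment

assert verify(commitment, nonce, claim)            # priority established
assert not verify(commitment, nonce, b"no rings")  # a changed claim fails
```

Like the anagram, the published digest reveals nothing by itself; once opened, it proves the claim was fixed at commitment time.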

Michael points out that Galileo immediately announced the discovery of four moons of Jupiter in order to get patronage bucks from the Medicis for the right to name them. [Or, as we would do today, The Comcast Moon, the Staples Moon, and the Gosh Honey Your Hair Smells Great Moon.]

Some new ideas: The Journal of Visualized Experiments videotapes lab work, thus revealing tacit knowledge. Geiger Science (from Springer) publishes data sets as first-class objects. Open Research Computation makes code a first-class object. And blog posts are beginning to show up in Google Scholar (possibly because it is paying attention to tags?). So, if your post is cited by lots of articles, it will show up in Scholar.

[in response to a question] A researcher claimed to have solved the P vs. NP problem. One serious mathematician (Cook) said it was a serious attempt at a solution. Mathematicians and others tore it apart on the Web to see if it was right. About a week later, the consensus was that there was a serious obstruction, although a small lemma was salvaged. The process leveraged expertise in many different areas: statistical physics, logic, etc.

Q: [me] Science has been a type of publishing. How does scientific knowledge change when it becomes a type of networking?
A: You can see this beginning to happen in various fields. E.g., people at Google talk about their software as an ecology. [Afterwards, Michael explained that Google developers use a complex ecology of libraries and services with huge numbers of dependencies.] What will it mean when someone says that the Higgs boson has been found at the LHC? There are millions of lines of code, huge data sets. It will be an example of using networked knowledge to draw a conclusion where no single person has more than a tiny understanding of the chain of inferences that led to the result. How do you do peer review of that paper? Peer review can’t mean that it’s been checked, because no one person can check it. No one has all the capability. How do you validate this knowledge? The methods used to validate it are completely ad hoc. E.g., the Intergovernmental Panel on Climate Change has more data than any one person can evaluate. And they don’t have a method. It’s ad hoc. They do a good job, but it’s ad hoc.

Q: The classification of finite simple groups was the same: a series of papers.
A: Followed by a 1200 word appendix addressing errors.

Q: It varies by science, of course. For practical work, people need access to the data. For theoretical work, the person who makes the single step that solves it should get 98% of the credit. E.g., Newton v. Leibniz on calculus. E.g., Perelman’s approach to the Poincaré conjecture.
A: Yes. Perelman published three papers on a preprint server. Afterward, someone published a paper that filled in the gaps, but Perelman’s was the crucial contribution. This is the normal bickering in science. I would like to see many approaches and gradual consensus. You’ll never have perfect agreement. With transparency, you can go back and see how people came to those ideas.

Q: What is validation? There is a fundamental need for change in the statistical algorithms that many data sets are built on. You have to look at those limitations as well as at the data sets.
A: There’s lots of interesting things happening. But I think this is a transient problem. Best practices are still emerging. There are a lot of statisticians on the case. A move toward more reproducible research and more open sharing of code would help. E.g., many random generators are broken, as is well known. Having the random generator code in an open repository makes life much easier.

Q: The P vs. NP episode left a sense that it was a sprint in response to a crisis. How can it be done in a more scalable way?
A: People go for the most interesting claims.

Q: You mentioned the Bermuda Principles, and the NIH requires open-access publication one year after a paper is published. But you don’t see that elsewhere. What are the sociological reasons?
Peter Suber: There’s a more urgent need for medical research. The campaign for open access at the NSF is not as large, and the counter-lobby (publishers of scientific journals) is bigger. But President Obama has said he’s willing to do it by executive order if there’s sufficient public support. No sign of action yet.

Q: [peter suber] I want to see researchers enthusiastic about making their research public. How do we construct a link between OA and career?
A: It’s really interesting what’s going on. There’s a lot of discussion about supporting gold OA (publishing in OA journals, as opposed to putting work into an OA repository). Fundamentally, it comes down to a question of values. Can you create a culture in science that views publishing in gold OA journals as better than publishing in prestigious toll journals? The best way perhaps is to make it a public issue: make it embarrassing for scientists to lock their work away. The Aaron Swartz case has sparked a public discussion of the role of publishers, especially when they’re making 30% profits.
Q: Peter: Whenever you raise the idea of tweaking tenure criteria, you unleash a tsunami of academic conservatism, even if you make clear that this would still support the same rigorous standards. Can we change the reward system without waiting for it to evolve?
A: There was a proposal a few years ago that it be done purely algorithmically: produce a number based on the citation index. If it had been done, simple tweaks to the algorithm could have served as incentives: “You get a 10% premium for being in a gold OA journal,” etc.
Q: [peter] One idea was that your work wouldn’t be noticed by the tenure committee if it wasn’t in an OA repository.
A: SPIRES, the high-energy physics literature database, lets you measure the impact of your preprint articles, which has made it easier for people to assess the effect of OA publishing. You see people looking up the SPIRES numbers of a scientist they just met. You see scientists bragging about the number of times their slides have been downloaded via Mendeley.
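The “purely algorithmic” proposal discussed in this exchange was never specified beyond a citation-based number with tweakable premiums. As a purely hypothetical sketch of what such a tweak might look like (the function name, the data shape, and the weights are all invented for illustration):

```python
def career_score(papers, oa_premium=0.10):
    """Hypothetical tenure metric: total citations, with a premium
    for papers published in gold open-access journals."""
    total = 0.0
    for paper in papers:
        # An OA paper's citations count a bit extra, nudging authors toward OA.
        weight = 1.0 + oa_premium if paper["gold_oa"] else 1.0
        total += paper["citations"] * weight
    return total

papers = [
    {"citations": 100, "gold_oa": False},
    {"citations": 50, "gold_oa": True},   # earns the 10% premium
]
print(career_score(papers))  # 155.0
```

The point of the proposal was exactly this tunability: once the reward is a number produced by an algorithm, changing incentives is a one-line change to the weights.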

Q: How can we accelerate by an order of magnitude in the short term?
A: Any tool that becomes widely used to measure impact affects how science is done. E.g., the h-index. But I’d like to see a proliferation of measures, because when you have only one, it reduces cognitive diversity.
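For readers unfamiliar with it, the h-index mentioned in this answer is the largest h such that a researcher has h papers with at least h citations each. A small sketch of the computation:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:   # the paper at this rank still "pays for" its rank
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3: one blockbuster paper doesn't raise h
```

The second example illustrates the talk’s point about any single measure reducing diversity: the h-index deliberately discounts one enormously cited paper, which is exactly the kind of bias a proliferation of measures would balance out.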

Q: Before the Web, Erdős was the roving collaborator. He’d go from place to place and force collaboration. Let’s duplicate that on the Net!
A: He worked 18 hours a day, 365 days/year, high on amphetamines. Not sure that’s the model :) He did lots of small projects. When you have a large project, you bring in the expertise you need. Open collaboration has the unpredictable spread of expertise that participates, and that’s often crucial. E.g., Einstein never thought that understanding gravity required understanding non-standard geometries. He learned that from someone else [missed who]. That’s the sort of thing you get in open collaborations.

Q: You have to have a strong ego to put your out-there paper out there to let everyone pick it apart.
A: Yes. I once asked a friend of mine how he consistently writes edgy blog posts. He replied that it’s because there are some posts he genuinely regrets writing. That takes a particular personality type. But the same is true for publishing papers.
Q: But at least you can blame the editors or peer reviewers.
A: But that’s very modern; peer review only became standard in the 1960s. Of Einstein’s 300 papers, only one was peer reviewed … and that one was rejected. Newton was terribly anguished by the criticism of his papers. Networked science may exacerbate it, but it’s always been risky to put your ideas out there.

[Loved this talk.]


What “I know” means

If meaning is use, as per Wittgenstein and John Austin, then what does “know” mean?

I’m going to guess that the most common usage of the term is in the phrase “I know,” as in:

1. “You have to be careful what you take Lipitor with.” “I know.”
2. “The science articles have gotten really hard to read in Wikipedia.” “I know.”
3. “This cookbook thinks you’ll just happen to have strudel dough just hanging around.” “I know.”
4. “The books are arranged by the author’s last name within any one topic area.” “I know.”
5. “They’re closing the Red Line on weekends.” “I know!”

In each of these, the speaker is not claiming to have an inner state of belief that is justifiable and true. The speaker is using “I know” to shape the conversation and the social relationship with the initial speaker.

1., 4. “You can stop explaining now.”
2., 3. “I agree with you. We’re on the same side.”
5. “I agree that it’s outrageous!”

And I won’t even mention words like “surely” and “certainly” that are almost always used to indicate that you’re going to present no evidence for the claim that follows.


October 23, 2011

Waiting for the Italian spring

Journalist and friend Luca de Biase wonders why the Italians have not risen up against the unabashed corruption of the Berlusconi years.

Italians are living an “after war”, a cultural war that devastated the country. Rebels have conquered the government and have destroyed peace, in Italy. Fear, urgencies, finances, are concentrating attention on the short term. Italians can rebel again. But most of all, they need perspective and peace.

How to get peace?

Luca suggests a direction more than an answer:

Italians, probably, don’t really need a rebellion. They need a shared vision based on facts and reality (not on ideology and reality shows): a deep cultural change, that helps them in understanding their shared project, that helps rebuild a perspective and that makes them look ahead with an empirically based hope.

Although Luca does not say so in this piece, I suspect he looks to the Internet as a tool for forging that shared vision and project.

(Luca has invited me to the Italian Internet Governance conference in Trento in November for a panel discussion. Perhaps part of our discussion can be whether the lack of an Italian Spring indicates a failure of the Internet as a political/cultural tool. After all, if we’re going to give some credit to the Net for its role in Arab Spring, then shouldn’t it get some of the blame? Or, should we wonder how much worse the Italian situation would be if there were no alternative at all to Berlusconi’s Orwellian control of the mass media?)


Berkman Buzz

This week’s Berkman Buzz:

  • The Digital Public Library of America announces $5 Million in Funding from the Sloan Foundation and Arcadia Fund:

  • Ethan Zuckerman recaps Beth Coleman’s presentation on “Tweeting the Revolution”

  • John Palfrey describes teaching at the Harvard Graduate School of Design on the history, present, and future of libraries

  • Rebecca MacKinnon examines why censorship is a central issue in Tunisian political discourse and debates

  • The Youth and Media Project launches a new website

  • The Citizen Media Law Project reports on how one doctor’s complaint turned a public database private

  • Weekly Global Voices: “Israel: Joy and Anger Continue Over Shalit Deal”


October 22, 2011

[2b2k] Joi Ito on transforming the Media Lab from place to network

In an interview with Will Knight at Technology Review, Joi Ito explains some of what he hopes to accomplish as director of the MIT Media Lab, and shows why he was a brilliant choice for the position. “It’s becoming more like a Media Lab ‘network’ than a Media Lab ‘place.'”

And as is the case with networked knowledge, the value is in the differences it encompasses:

To take all the different technologies that we have and to connect it with someone like a Kevin Rose, or an Ev Williams, or a Shawn Fanning, that’s a really interesting two-way thing because the students here get how a real Internet startup guy thinks about a product, and how they think about design, and, you know, Shawn [Fanning] can meet people who do real math. To me, that’s a huge synergy that we don’t currently get from, say, relationships with some of the big companies we have.

As for Joi’s interests: ” …for me it’s Internet startups, openness, and human rights. I definitely have that bias.” What an excellent set of biases!

[Disclosure: Like much of the tech world, I count Joi as a friend.]


October 21, 2011

[dpla] second session

Maura Marx introduces Jill Cousins of Europeana, who says that we all agree that we want to make the contents of libraries, museums, and archives available for free. We agree on interoperability and open metadata. She encourages us to adopt the Europeana Data Model, share our source code, and build our collections together. So, we’re starting with a virtual exhibition on the migration of Europeans to America. The DPLA and Europeana will demonstrate the value of their combined collections, text and images, by digitizing material and making it available as an exhibition. (Maura thanks Bob Darnton for building European ties.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Maureen Sullivan, president-elect of the American Library Association, moderates a panel about visions of the DPLA. Each panelist gets 5-7 minutes.

John Palfrey: It’s a bridge we’re building as we walk over it. But it has 5 aspects. 1. Digitizing projects. It’ll be a collection of collections. We should be digitizing in common ways with common formats. But, DPLA will also be: 2. Code. SourceForge for Libraries. Anyone can take and reuse it, including public libraries. 3. Metadata. That’s what makes info findable and usable. It’s the special sauce of librarians. But we haven’t done it yet. We need open access to metadata. 4. Tools and services that ride on top of a common platform. E.g., extraMuros, Scannebagos. 5. Community.

Peggy Rudd, Texas State Library and Archives Commission. We want to see someone walking down the street with a cellphone who says, “I’m going to DPLA it.” We should take as a guiding idea that all people in the country ought to have access to the infrastructure of ideas. We have to think about access. Those of us in public libraries are going to be the digital literacy corps. Public libraries are going to be the institutions that can ensure that people can discover things and will help people evaluate what they find, ensuring what they find is relevant, and help people get the most out of the DPLA.

Brewster Kahle, The Internet Archive. I grew up in a paper world. But I believe the Archivist is right: if it’s not online, it doesn’t exist. There are now two large-scale digital library projects in the US: ten million books available from a commercial source, and 2M that are public. But let’s step back and see where we want to be: lots of publishers and authors who are paid; a diversity of libraries; everyone a reader, no matter their language, proclivities, or disabilities. Let’s go and get 10M ebooks: 2M public domain (free), 7M out of print (digitized to be lent), 1M in print (buy ebooks and lend them). Libraries ought to buy ebooks and circulate them, one loan at a time per book. The DPLA ought to help libraries buy new ebooks to lend, as well as scanning the core 10M-book collection, and enable all libraries to get the digital collections. At this point, a 10M-ebook collection requires about $30K of computers, which is within the budget of many libraries. For this, we would get universal access to all knowledge. How do we stay on track? Follow the money: is the money being well spent? And follow the bits: the bits should be put in many places. “Together we can build a digital America that is free to all.”

Amanda French begins with John Donne’s “The Sun Rising.” [I am here heavily paraphrasing!] For most, the sun rising is a beginning, but for lovers it is an ending. The unruly sun of the digital text is rising, calling us to work, whereas I would rather snuggle in bed with a book. Love can exist in a commercial relationship, but that’s not ideal. I would like a library that supports me in all my moods, from contemplation to raucous sociality. We need proof of love. Physical libraries manifest that love. The DPLA must manifest itself as more than a web site: many quiet and generous services to readers and developers, technical and social. I agree that if it isn’t online, it doesn’t exist, but if it’s only online, it only half exists. And I want a physical building, not just a server center. [Again: I’ve poorly paraphrased.]

Jill Cousins, Europeana. We want the DPLA because we get access to your stuff. [Laughter] But the DPLA can improve on Europeana with open data, open source, and open licensing. Also, we should be interoperable. Our new strategic plan has four aspects. 1. Aggregating: content as a trusted source. 2. Facilitating: supporting cultural heritage. 3. Distributing: wherever people are. 4. Engaging: new ways to participate in cultural heritage. Europeana currently has 20M items in multiple languages. I’m particularly interested in the APIs, so material can be distributed to where people will use it. (She points to content about the US that is in their distributed collection.) To facilitate: labeling content so users know it’s in the public domain. What’s in the PD in analog form ought to stay in the PD in digital form. To engage: cultivate new ways for users to participate in their cultural heritage. One project: people are asked to bring their memorabilia from WWI. So, why the DPLA? We are the generation that can give access to the analog past. If we don’t digitize it and put it online, will our kids?

Carl Malamud. When I think of the DPLA, I think of the Hoover Dam and the Golden Gate Bridge. There’s a tremendous reservoir of knowledge waiting to be tapped. Our Internet is flooded with only certain types of knowledge; other types are not available to all. E.g., our laws and policies, the operating system of our society, are not openly available because private fences have enclosed them. E.g., if you’re a creator, you draw on imagery that has accumulated over thousands of years. Creative workers must stand on the shoulders of giants. But much of that imagery is locked up in for-profit corporations that have built walls around public domain material. Even the Smithsonian only allows its images to be used for a fee. We already have beautiful museums and bottomless libraries. What if the DPLA created a common reservoir that we could tap into? What if the Hathi Trust put everything they have into a common pool? Another metaphor: a bridge that connects our capital to the rest of the country. DC is a vast storehouse, but most of its resources are hidden. We need public works projects for knowledge: a national digitization project, a decade long. Deploy the Internet Corps of Engineers. “If a self-appointed librarian in an old church can publish 2M books, why can’t our government do more?”

[I had to see a man about a dog, and missed a couple of questions.]

Q: How do we transform the use of public libraries?
Peggy: They have to evolve, and many are evolving already. E.g., user-created content. 46% of low-income families don’t have computers or Internet access.

Q: Bandwidth is a critical issue, particularly in rural areas. I hope that the DPLA realizes it’s going to have data-heavy materials. How are we going to build bandwidth to the public libraries?
Peggy: I’m happy to see the Gates Foundation here. They’ve worked with local libraries to provide and maintain bandwidth. 5 Mbps is not enough when kids swarm in after school.

Q: Imagine an Ecuadoran American mother who is a part time student. She belongs to a lot of communities. I want to make sure that the coding of the DPLA recognizes that we each live in multiple communities.
Peggy: We all agree.

Q: First, in 1991 a White House conf was talking about not just scanning, but enable people to send in their materials (e.g., super8 family movies) that could be digitized. Second, DPLA has a huge potential for freeing up resources at the local library so it can spend its resources on customizing content to what that community needs, or let the person customize the library for herself.

Q: How does an ordinary person get involved in DPLA right now. Lobbying?
John: Lots of ways. Mobilization counts. The effect on local libraries needs to be explained; no one here thinks or wants the DPLA to hurt local public libraries. That’s a crazy thought, but it needs to be explained. I would be so sorry if this project led to the closing of a single library. And, yes, I think we should have a way for individuals to donate. How can you get involved in setting up this project? Deciding what the DPLA is will be an open process. There are six workstreams. Today is meant in part as an invitation to join those workstreams. There will be meetings over the next 18 months, and the meetings will be open. Come. We need people to build with what we create. We need people to think of new use cases. In April 2013 when we come together for the launch, if there are ten more people attending, that will be a sign of success.

Q: What do you have in the collection for children, 0-8? Why will a parent want to use the DPLA?
John: The DPLA needs to create a common infrastructure so people can create libraries and services out of the combined collection. But as a parent of a six- and a nine-year-old, we’ll keep buying paper books and reading to our kids. The DPLA is not a replacement.
Peggy: The Univ. of Texas at Arlington did a study of what engages students in the study of the history of Texas. Students perform better on tests if they have greater interaction with real documents. We’re bringing history to the classrooms.
Carl: The Encyclopedia of Life has pictures of bugs, etc. And the Smithsonian has a great online resource [didn’t catch it], and the next thing the kid will want to do is visit the Smithsonian.
Amanda: If it isn’t online people don’t know it exists. If they know …[Ack. Lost the rest of this post. Noooooo]

