Joho the Blog » archives

July 24, 2012

[preserve] Lightning Talks

A series of 5-min lightning talks.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Christie Moffatt of the National Library of Medicine talks about a project collecting blogs talking about health. It began in 2011. The aim is to understand Web archiving processes and how this could be expanded. Three examples: Wheelchair Kamikaze. Butter Compartment. Doctor David’s Blog. They were able to capture them pretty well, but with links to outside, outside of scope content, and content protected by passwords, there’s a question about what it means to “capture” a blog. The project has shown the importance of test crawls, and attending to the scope, crawling frequency and duration. The big question is which blogs they capture. Doctors who cook? Surgeons who quilt? Other issues: Permissions. Monitoring when the blogs end, change focus, or move to a new url. E.g., a doctor retired and his blog changed focus to about fishing.
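The scoping problem Moffatt describes — outbound links, out-of-scope content — comes down to a rule for deciding which discovered URLs belong to the crawl. A toy version of one such rule (the URLs and the same-host, same-path policy are illustrative, not NLM’s actual crawl configuration):

```python
from urllib.parse import urlparse

def in_scope(seed_url, candidate_url):
    """Return True if candidate_url falls within the crawl scope of
    seed_url, here defined as 'same host, same or deeper path'."""
    seed, cand = urlparse(seed_url), urlparse(candidate_url)
    if cand.netloc != seed.netloc:
        return False               # outbound link: out of scope
    return cand.path.startswith(seed.path)

# A crawler would follow in-scope links and log the rest for review.
seed = "https://example-blog.org/kamikaze/"
links = [
    "https://example-blog.org/kamikaze/2011/03/post.html",
    "https://example-blog.org/about/",
    "https://other-site.com/resource",
]
decisions = {u: in_scope(seed, u) for u in links}
```

Even this toy rule shows why test crawls matter: a scope drawn too tightly misses the blog’s own archive pages, and one drawn too loosely wanders off-site.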

Terry Plum from Simmons GSLIS talks about a digital curriculum lab. It was set up to pull in students and faculty around a few different areas. They maintain a collection of open source applications for archives, museums, and digital libraries. There are a variety of teaching aids. The DCL is built into a Cultural Heritage Informatics track at Simmons.

Daniel Krech of Library of Congress works at the Repository Development Center. The RDC works with people managing collections. The RDC works on human-machine interfaces. One project involves “sets” (collections). “We’ve come up with some new and interesting ways to think about data.” They use knot, set, and hyper theory, but they also sometimes use a physical instantiation of a set — it looks like knotted yarn — to help understand some very abstract ideas.

Kelsey [Keley?] Shepherd of Amherst represents the Five College Digital Task Force. (She begins by denying that the Scooby Gang was based on the five colleges.) They don’t share a digital library but want to collaborate on digital preservation. They are creating shared guidelines for preservation-ready digital objects. They are exploring models for funding and organizational structure. And they are collaborating on implementing a trusted digital preservation repository. But each develops its own digital preservation policy.

Jefferson Bailey talks about Personal Digital Archiving at the Library of Congress. He talks about the source diary for The Midwife’s Tale. That diary sat on a shelf for 200 years before being discovered as an invaluable window on the past. Often these archives are the responsibility of the record creators. The LoC therefore wants to support community archives, enthusiasts, and citizen archivists. They are out and about, promoting this.

Carol Minton Morris with DuraSpace and the NDSA (National Digital Stewardship Alliance) talks about funding archiving through “hip pocket resources.” They’re looking into microfinance: technology and publishing projects at Kickstarter have raised only $9M of the $100M raised there; most of it goes to the arts. She points to some other microfinance sites, including IndieGoGo, and encourages the audience to look into microfinancing.

Kristopher Nelson from the LoC Office of Strategic Initiatives talks about the National Digital Stewardship Residency, which aims at building a community of professionals who will advance digital archiving. It wants to bridge classroom education and real-world professional experience. It will start in June 2013 with 10 residents participating in the nine-month program.

Moryma Aydelott, program specialist at LoC, talks about Tackling Tangible Metadata. The LoC’s digital data is on lots of media: 300T on everything from DVDs to DAT tapes and Zip disks. Her group provides a generic workflow for dealing with this stuff — any division, any medium. They have a wheeled cart for getting at this data. They make the data available “as is.” It can be hard to figure out what type of file it is, and what application is needed to read it. Right now, it’s about getting it on the server. They’ve done about 6.5T of material, 700-800 titles, so far. But the big step forward is in training and in documenting processes.


[preserve] Michael Carroll on copyright and digital preservation

Michael Carroll, from American University Washington College of Law, is talking about “Copyright and Digital Preservation: The Role of Open Licenses.” (Michael is on the board of Creative Commons.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Michael begins with a comparison to environmentalism: Stewardship of valuable resources, and long-term planning. There are cognitive challenges, and issues in providing institutional incentives. (He recommends sucking in as much data as possible, and worrying about adding the metadata later, perhaps through crowdsourcing.)

Michael notes that copyright used to be an opt-in system: you had to register, and deposit a copy. Then you had to publish with a ©; anything published before 1989 that doesn’t have the © is in the public domain. You had to renew after 28 years, and the majority of copyrights (60%) were not renewed. We therefore had a growing public domain.

The court in Golan upheld Congress’ right to restore copyright for works published outside the US. This puts the public domain at risk, he says. He also points to the Hathi case in which they’ve been sued for decisions they made about orphan works. There is a dangerous argument being made there that if archiving occurs within the library space, fair use goes away. The legal environment is thus unstable.

Now that copyright is automatic and lasts for 70 years after the author’s death, managing the rights in order to preserve the content is fraught with difficulty.

He reminds us that making a copy to preserve a work is unlikely to cause market harm to the copyright owner, and thus ought to be legal under fair use. “You ought to have a bias toward believing you have a Fair Use right to preserve things.”

He asks: “Can the preservation community organize itself to be the voice of tomorrow’s users on issues of copyright policy and copyright estate planning?” For orphan works, copyright term shortening, exceptions to DRM rules, good practices for open licensing in the long term…

And he asks: How can you get the FBs and Googles et al. to support long-term preservation? Michael suggests marking things that are already in the public domain as being in the public domain. Otherwise, the public domain is invisible. And think about “springing” licenses, e.g. an open license that only goes into effect after a set time or under a particular circumstance.


[preserve] Anil Dash on archiving the Internet

Anil Dash (one of my heroes, and also hilarious) is talking at a Library of Congress event on Digital Preservation, part of the National Digital Information Infrastructure and Preservation Program. Anil’s talk is called “Make a Copy.” (Anil is now at ThinkUp.)

Live Blogging

Getting things wrong. Making fluid talks sound choppy. Missing important points. Not running a spellpchecker. This is not a reliable report. You have been warned, people!

Anil says he’s a geek interested in the social impacts of tech on culture, govt, and more. He started Expert Labs a few years ago to enable tech to talk with policy makers. Expert Labs built ThinkUp. He wants to talk about the issues that this group of archivists confronts every day that the tech community doesn’t know about. He warns us that this means he’s starting with depressing stuff. So…

…Picture the wholesale destruction of your wedding photos, or other deeply personal mementos. They are being destroyed by an exclusive, private, ivy league club: Facebook. FB treats memories as disposable. “Maybe if I were a 25 year old billionaire, I’d think of these as disposable, too.” “The terms of service of digital social networks trumps the Constitution in terms of what people can share and consume.” Our ordinary conversations are treated as disposable, at Facebook, Twitter, Microsoft, etc. They explicitly say that they can delete all of your content at any time for any reason. “100s of millions of Americans have accepted that. That should be troubling to those of us who care about preservation.”

You can opt out, but not without compromising your career and paying a severe social cost. And you can’t rely upon the rest of the Web, because “there’s a war raging against the open Web.” “The majority of time spent on the Web in the US is spent in an application,” not on pages. Yet we’re still archiving Web pages but not those applications. “They are gaslighting the Web,” Anil says, referring to the old movie. E.g., you can leave FB comments on Anil’s blog, but when you click from FB to his blog, FB gives you a warning that the site you’re going to is untrustworthy. “I don’t do that to them,” he says, even though they’ve consistently “moved the goal posts” on privacy, and he has registered his site with FB.

After blogging this, Anil got a message from a tech at FB saying that it was a bug that’s being fixed. But suppose he hadn’t blogged it, or FB had missed it? “The best case scenario is that we’re left fixing their bugs.” He adds, “That’s pretty awful, because they’re not fixing our bugs. And we’re helping them to extend their prisons over the Web.” And the only way to get our words preserved is to agree to Twitter’s ToS so that we’ll get archived by the Library of Congress, which has been archiving tweets. Anil says that he’s conscientiously tried to archive his own works for his new baby, but it shouldn’t rely on that much effort by an individual.

And, he says, that’s just the Web, not the apps. You can’t crawl his phone and preserve his photos. And when FB buys Instagram which has a billion photos, and only 5% of the content FB has bought has been preserved…? And yet the Instagram acquisition is considered a success by the Valley. If you’re a Pharaoh, your words are preserved. Anil is worried about the rest of the conversations.

“If I were to ask you what is the most watched form of video, what would you say?” Anil guesses that it’s animated gifs. And we don’t archive them. “We’re talking about the wrong things.” We’re arguing that we should be using Ogg Vorbis, but the proprietary forms are the ones that are most used. The standards ecology is getting more complicated. “We need to reflect back to the tech community that they have an obligation to think about preservation.” They’ve got money and resources. Shouldn’t they be contributing?

We’re losing metadata, he says. You can’t find Instagram photos because they have no Web presence and are short on metadata. Flickr, on the contrary, has lots of metadata. The Instagram owners are now multi-millionaires and are undermotivated to fix this problem. Maybe we’ll get something in 5 years, but then we will have lost a full decade of people’s photos. There’s no way to assign Instagrams open licenses at this point.

Indeed, “they are bending the law to make archiving illegal.” You can’t hack your own phone. You can’t copy your own photos from one device to another.

“Content tied to devices dies when those devices become obsolete.” The obsolescence cycle is becoming faster every year.

So, what should we do?

The technologists building these devices don’t know about the work of archivists. They don’t know that what this group is doing is meaningful. Many are young and don’t yet have experiences they want to preserve. They may not have confronted their own mortality yet.

But, the Web at its base level is about making copies. So, if we get things on the Web as opposed to in apps, we win. Apps should be powered by, or connected to, a Web experience. How can we take advantage of the fact that every time you go to a Web page, you’re copying it? How can we take advantage of the CDNs, which are already doing a lot of the work needed for preservation?

“There is also a growing class of apps that want to do the right thing.” E.g., TimeHop, which sends you an email reminding you of what you tweeted, etc., a year ago. This puts a user experience around the work of preservation. They’re marketing the value of the preservation community, but they don’t know it yet. Or Brewster, an iPhone address book that hooks up to all the address books you have on social services, reminding you to connect with people you haven’t touched in a while. This is a preservation app, although Brewster doesn’t know it.

Then, how do we mine our personal archives? (He notes that his company’s tool, ThinkUp, is in this space.) His Nike fuel band captures data about his physical activity. The Quantified Self movement is looking at all sorts of data. “They too are preservationists, and they don’t know it.”

Then there are institutions. People revere the Library of Congress. Senior people at Twitter speak in a hushed voice when they say, “The tweets go to the LoC.” Take advantage of the institution’s authority. Don’t be shy. Meet them halfway. And say, “By the way, look at my cool email address.”

“PR trumps ToS.” ThinkUp archived the FB activity of the White House. At the time, FB’s ToS forbade archiving it for more than 24 hours. But the WH policy requires it. I said, “Please, FB, please cut off the White House.” It turns out that FB was already planning on revising the policy. “What a great conversation we would have gotten to have.” You are our advocates, says Anil. You have an obligation to speak on our behalf.

The public is already violating “Intellectual Property” rules. “We don’t look at YouTube as the Million Mixers March, but that’s what it is.” It’s civil disobedience: People violating the law in public under their own names. These are people who recognize the value of preserving cultural works that otherwise would disappear. Sony won’t sell you a copy of Michael Jackson’s Thriller, but there are copies on YouTube. The heart and soul of those posting those videos is preservation. “All they want to do is what you do: make a copy of what matters to them.”


April 15, 2012

Timbuktu librarians, scholars, and citizens preserving ancient documents and Islamic heritage

On April 1, rebels overran Timbuktu, so, according to a Reuters article, librarians, scholars, and citizens in this important site of Islamic learning are hiding away thousands of irreplaceable manuscripts. “Estimates for the total number of historic documents in the city, some of them from the 13th century, range from 150,000 to five times that number,” says Pascal Fletcher, the article’s author.

In fact, citizens lined up to deny armed rebels access to the archives where 20,000 ancient manuscripts are stored.

From the article:

Some texts were stashed for generations under mud homes and in desert caves by proud Malian families who feared they would be stolen by Moroccan invaders, European explorers and then French colonialists. Now many fear the rampaging rebels, who carry AK-47s instead of muskets, lances and swords.

Brittle, written in ornate calligraphy, and ranging from scholarly treatises to old commercial invoices, the documents represent a compendium of learning on everything from law, sciences and medicine to history and politics. Some experts compare them in importance to the Dead Sea Scrolls.

(I came across this article in the really useful aggregation site Library News, which (disclosure) comes out of our Harvard Library Innovation Lab.)


July 7, 2011

1888 talking doll speaks again…by Science!

The National Park Service site has a fascinating article about the discovery of a very early talking doll made by Thomas Alva Edison. This was apparently the first commercially-available phonograph recording ever.

The artifact is a ring-shaped cylinder phonograph record made of solid metal, preserved by the National Park Service at Thomas Edison National Historical Park. Phonograph inventor Thomas Edison made the record during the fall or winter of 1888 in West Orange, New Jersey. On the recording, an unidentified woman recites one verse of the nursery rhyme “Twinkle, twinkle, little star”. The voice captured on the 123-year-old record had been unheard since Edison’s lifetime. The recording represents a significant milestone in the early history of recorded sound technology.

To “play” the recording, scientists at the Lawrence Berkeley National Laboratory did a 3D scan of the grooves and reproduced the sound without having to touch the physical material.
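The Berkeley technique reconstructs audio from measured groove geometry rather than a stylus. As a toy illustration of just the final step — turning a sampled displacement profile into a playable waveform — here is a sketch using synthetic data (a plain sine wave stands in for real scan measurements; this is not the lab’s actual pipeline):

```python
import math
import struct
import wave

# Toy "groove profile": displacement sampled along the groove,
# synthesized here as one second of a 440 Hz tone.
rate = 8000
profile = [math.sin(2 * math.pi * 440 * t / rate) for t in range(rate)]

# Normalize displacements into 16-bit PCM samples.
peak = max(abs(x) for x in profile) or 1.0
pcm = [int(32767 * x / peak) for x in profile]

# Write a standard mono WAV file.
with wave.open("groove.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(struct.pack("<%dh" % len(pcm), *pcm))
```

The real work, of course, is upstream: extracting that displacement profile from a 3D scan without ever touching the fragile metal ring.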

Each phonograph was made live, rather than reproduced from a master, which adds just a little more of a thrill to listening to it. You can hear the recording here.

[Tip of the hat to my brother Andy for the link. And see the excellent article in Science magazine.]


June 24, 2011

Tagging the National Archives

The National Archives is going all tag-arrific on us:

The Online Public Access prototype (OPA) just got an exciting new feature — tagging! As you search the catalog, we now invite you to tag any archival description, as well as person and organization name records, with the keywords or labels that are meaningful to you. Our hope is that crowdsourcing tags will enhance the content of our online catalog and help you find the information you seek more quickly.

Nice! (Hat tip to Infodocket)


June 14, 2011

Linked Open Data take-aways

I just wrote up an informal trip report in the form of “take-aways” from the LOD-LAM conference I attended a couple of weeks ago. Here is a lightly edited version.


Because it was an unconference, it was too participatory to enable us to take systematic notes. I did, however, interview a number of attendees, and have posted the videos on the Library Innovation Lab blog site. I actually have a few more yet to post. In addition, during the course of one of the sessions (on “Explaining LOD-LAM”), a few of us began constructing a FAQ.

Here’s some of what I took away from the conference.

- There is considerable momentum around linked open data, starting with the sciences where there is particular research value in compiling huge data sets. Many libraries are joining in.

- LOD for libraries will enable a very fluid aggregation of information from multiple types of sources around any particular object. E.g., a page about a Hogarth illustration (or about Hogarth, or about 18th century London, etc.) could quite easily aggregate information from any data set that knows something about that illustration or about topics linked to that illustration. This information could be used to build a page or to do research.

- Making data and metadata available as LOD enables maximal re-use by others.

- Doing so requires expertise, but should be less massively difficult than supporting many other standards.

- For the foreseeable future, this will be something libraries do in addition to supporting more traditional data standards; it will be an additional expense and effort.

- Although there is continuing debate about exactly which license to use when publishing library data sets, it seems that usually putting any form of license on the data other than a public domain waiver of licenses is likely to be (a) futile and (b) so difficult to deal with that it will inhibit re-use of the data, depriving it of value. (See the 4-star license proposal that came out of this conference.)

- The key point of resistance against LOD among libraries, archives and museums is the justified fear that once the data is released into the world, the curating institutions can no longer ensure that the metadata about an object is correct; the users of LOD might pick up a false attribution, inaccurate description, etc. This is a genuine risk, since LOD permits irresponsible use of data. The risk can be mitigated but not removed.
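Mechanically, the “fluid aggregation” in the second bullet above is a union of assertions about a shared URI drawn from independent datasets. A minimal sketch (the URIs and predicates are invented, and real LOD would use RDF and SPARQL rather than Python tuples):

```python
# Two hypothetical datasets publish (subject, predicate, object)
# triples about the same illustration, identified by a shared URI.
museum = [
    ("http://example.org/hogarth/gin-lane", "title", "Gin Lane"),
    ("http://example.org/hogarth/gin-lane", "creator", "William Hogarth"),
]
library = [
    ("http://example.org/hogarth/gin-lane", "date", "1751"),
    ("http://example.org/hogarth/gin-lane", "depicts", "18th-century London"),
]

def aggregate(uri, *datasets):
    """Collect every (predicate, object) any dataset asserts about uri."""
    facts = {}
    for ds in datasets:
        for s, p, o in ds:
            if s == uri:
                facts.setdefault(p, set()).add(o)
    return facts

# Everything any source knows about the illustration, in one place,
# ready to build a page or feed further research.
page_data = aggregate("http://example.org/hogarth/gin-lane", museum, library)
```

The shared URI is doing all the work: because both institutions name the same thing the same way, aggregation needs no coordination between them.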


June 8, 2011

MacKenzie Smith on open licenses for metadata

MacKenzie Smith of MIT and Creative Commons talks about the new 4-star rating system for open licenses for metadata from cultural institutions:

The draft is up on the LOD-LAM site.

Here are some comments on the system from open access guru Peter Suber.


November 30, 2010

[bigdata] Ensuring Future Access to History

Brewster Kahle, Victoria Stodden, and Richard Cox are on a panel, chaired by the National Archives’ Director of Litigation Jason Baron. The conference is being put on by Princeton’s Center for Information Technology Policy.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Brewster goes first. He’s going to talk about “public policy in the age of digital reproduction.” “We are in a jam,” he says, because of how we have viewed our world as our tech has changed. Brewster founded the Internet Archive, a non-profit library. The aim is to make freely accessible everything ever published, from the Sumerian texts on. “Everyone everywhere ought to have access to it” — that’s a challenge worthy of our generation, he says.

He says the time is ripe for this. The Internet is becoming ubiquitous. If there aren’t laptops, there are Internet cafes. And there are mobiles. Plus, storage is getting cheaper and smaller. You can record “100 channel years” of HD TV in a petabyte for about $200,000, and store it in a small cabinet. For about $1,200, you could store all of the text in the Library of Congress. Google’s copy of the WWW is about a petabyte. The WayBack Machine uses 3 petabytes, and has about 150 billion pages. It’s used by 1.5M people/day. A small organization, like the Internet Archive, can take this task on.
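Unpacking the quoted storage figures as simple arithmetic (decimal units and the talk’s round numbers assumed; the derived Library of Congress text size is only what those two figures together imply):

```python
# Kahle's quoted figures: ~1 PB of HD video storage for ~$200,000.
petabyte_cost = 200_000        # dollars per petabyte
tb_per_pb = 1000               # decimal terabytes per petabyte

cost_per_tb = petabyte_cost / tb_per_pb    # dollars per terabyte

# "All of the text in the Library of Congress for about $1,200"
# implies roughly this many terabytes of text at that price point:
loc_text_tb = 1200 / cost_per_tb
```

At $200/TB, the $1,200 figure implies roughly 6 TB of text, which gives a sense of how small pure text is next to video.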

This archive is dynamic, he says. The average Web page has 15 links. The average Web page changes every 100 days.

There are downsides to the archive. E.g., the WayBack Machine gets used to enable lawsuits. We don’t want people to pull out of the public sphere. “Get archived, go to jail,” is not a useful headline. Brewster says that they once got an FBI letter asking for info, which they successfully fought (via the EFF). The Archive gets lots of lawyer letters. They get about 50 requests per week to have material taken out of the Archive. Rarely do people ask for other people’s stuff to be taken down. Once, the Scientologists wanted some copyright-infringing material taken down from someone else’s archived site; the Archive finally agreed to this. The Archive held a conference and came up with Oakland Archive Policy for issues such as these.

Brewster points out that Jon Postel’s taxonomy is sticking: .com, .org, .gov, .edu, .mil … Perhaps we need separate policies for each of these, he says. And how do we take policy ideas and make them effective? E.g., if you put up a robots.txt exclusion, you will nevertheless get spidered by lots of people.
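The robots.txt exclusion Brewster mentions is purely advisory — nothing enforces it, which is his point. A compliant crawler consults it like this (the domain, paths, and crawler name here are made up for illustration):

```python
from urllib import robotparser

# A robots.txt asking all crawlers to stay out of /private/.
rules = """\
User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# A well-behaved spider checks each URL before fetching it.
allowed = rp.can_fetch("ArchiveBot", "https://example.org/public/page.html")
blocked = rp.can_fetch("ArchiveBot", "https://example.org/private/diary.html")
```

A crawler that skips this check (or ignores its answer) fetches the pages anyway, which is exactly the gap between policy idea and effectiveness that Brewster describes.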

“We can build the Library of Alexandria,” he concludes, “but it might be problematic.”

Q: I’ve heard people say they don’t need to archive their sites because you will.
A: Please archive your own. More copies make us safe.

Q: What do you think about the Right to Oblivion movement that says that some types of content we want to self-destruct on some schedule, e.g. Facebook.
A: I have no idea. It’s really tough. Personal info is so damn useful. I wish we could keep our computers from being used against us in court; if we defined the 5th amendment so that “we” includes our computers…

Richard Cox says if you golf, you know about info overload. It used to be that you had one choice of golf ball, Top-Flite. Now they have twenty varieties.

Archives are full of stories waiting to be told, he says. “When I think about Big Data…most archivists would think we’re talking about big science, the corporate world, and government.” Most archivists work in small cultural, public institutions. Richard is going to talk about the shifting role of archivists.

As early as the 1940s, archivists were talking about machine-readable records. The debates and experiments have been going on for many decades. One early approach was to declare that electronic records were not archives, because the archives couldn’t deal with them. (Archivists and records managers have always been at odds, he says, because RM is about retention schedules, i.e., deleting records.) Over time, archivists came up to speed. By 2000, some were dealing with electronic records. In 2010, many do, but many do not. There is a continuing debate. Archivists have spent too long debating among themselves when they need to be talking with others. But, “archivists tend not to be outgoing folks.” (Archivists have had issues with the National Archives because their methods don’t “scale down.”)

There are many projects these days. E.g., we now have citizen archivists who maintain their own archives and who may contribute to public archives. Who are today’s archivists? Archival educators are redefining the role. Richard believes archives will continue, but the profession may not. He recommends reading the Clair report [I couldn't get the name or the spelling, and can't find it on Google :( ] on audio-visual archives. “I read it and I wept.” It says that we need people who understand the analog systems so that they can be preserved, but there’s no funding.

The gloomy title of Victoria Stodden’s talk is “The Coming Dark Ages in Scientific Knowledge.”

She begins by pointing to the pervasive use of computers and computational methods in the sciences, and even in the humanities and law schools. E.g., Northwestern is looking at the word counts in Shakespearean works. It’s changing the type of scientific analysis we’re doing. We can do very complicated simulations that give us a new way of understanding our world. E.g., we do simulations of math proofs, quite different from the traditional deductive processes.

This means that what we’re doing as scientists is being stored in scripts, code, data, etc. But science only is science when it’s communicated. If the data and scripts are not shared, the results are not reproducible. We need to act as scientists to make sure that this data etc. are shared. How do we communicate results based on enormous data sets? We have to give access to those data sets. And what happens when those data sets change (corrected or updated)? What happens to results based on the earlier sets? We need to preserve the prior versions of the data. How do we version it? How do we share it? E.g., there’s an experiment at NSF: all proposals have to include a data management plan. The funders and journals have a strong role to play here.
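One common answer to the versioning question is content addressing: name each dataset version by a hash of its bytes, the idea behind systems like Git, so a result can cite exactly the data it was computed from. A minimal sketch (the JSON canonicalization shown is just one possible convention, not a standard Stodden endorses):

```python
import hashlib
import json

def version_id(dataset):
    """Derive a stable identifier from the dataset's content, so any
    published result can record which exact version it depends on."""
    canonical = json.dumps(dataset, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]

v1 = version_id({"sample": [1.0, 2.0, 3.0]})
v2 = version_id({"sample": [1.0, 2.0, 3.1]})   # a later "correction"
# v1 != v2: the corrected set gets a new name, and results based on
# the earlier set remain traceable to it.
```

The same identifier recomputed from the same bytes always matches, which is what makes it usable as a citation target.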

Sharing scientific knowledge is harder than it sounds, but is vital. E.g., a recent study showed that a cancer therapy will be particularly effective based on individual genomes. But it was extremely hard to trace back the data and code used to get this answer. Victoria notes that peer reviewers do not check the data and algorithms.

Why a dark age? Because “without reproducibility, knowledge cannot be recreated or understood.” We need ways and processes of sharing. Without this, we only have scientists making proclamations.

She gives some recommendations: (1) Assessment of the expense of data/code archiving. (2) Enforcement of funding agency guidelines. (3) Publication requirements. (4) Standards for scientific tools. (5) Versioning as a scientific principle. (6) Licensing to realign scientific intellectual property with longstanding scientific norms (Reproducible Research Standard). [verbatim from her slide] Victoria stresses the need to get past the hurdles copyright puts in the way.

Q: Are you a pessimist?
A: I’m an optimist. The scientific community is aware of these issues and is addressing them.

Q: Do we need an IRS for the peer review process?
A: Even just the possibility that someone could look at your code and data is enough to make scientists very aware of what they’re doing. I don’t advocate code checking as part of peer review because it takes too long. Instead, throw your paper out into the public while it’s still being reviewed and let other scientists have at it.

Q: [rick] Every age has lost more info than it has preserved. This is not a new problem. Every archivist from the beginning of time has had to cope with this.

Jason Baron of the National Archives (who is not speaking officially) points to the volume of data the National Archives (NARA) has to deal with. E.g., in 2001, 32 million emails were transferred to NARA; in 2009, 250+ million were. He predicts there will be a billion presidential emails held at NARA by 2017. The first lawsuit over email was filed in 1989 (email=PROFS). Right now, the official policy of 300 govt agencies is to print email out for archiving. We can no longer deal with the info flow with manual processes. Processing of printed pages occurs when there’s a lawsuit or a FOIA request. Jason is pushing on the value of search as a way of encouraging systematic intake of digital records. He dreams of search algorithms that retrieve all relevant materials. There are clustering algorithms emerging within law that hold hope. He also wants to retrieve docs other than via keywords. Visual analytics can help.
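The move beyond exact keyword matching that Jason describes can be sketched with the simplest measure such systems build on: cosine similarity over term counts, which lets related documents find each other through shared vocabulary. (The emails here are invented, and real e-discovery tools use far richer models than raw term counts.)

```python
import math
from collections import Counter

def vectorize(text):
    """Represent a document as a bag of lowercase term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors (0.0 to 1.0)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

emails = [
    "budget meeting moved to friday",
    "friday budget meeting agenda attached",
    "softball game after work today",
]
vecs = [vectorize(e) for e in emails]

# Group any pair of emails whose similarity clears a threshold.
pairs = [(i, j) for i in range(len(vecs)) for j in range(i + 1, len(vecs))
         if cosine(vecs[i], vecs[j]) > 0.5]
```

The two budget emails cluster together despite differing wording, while the softball email stays apart, which is the behavior keyword search alone can miss.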

There are three languages we need: Legal, Records Management, and IT. How do we make the old ways work in the new? We need both new filtering techniques, but also traditional notions of appraisal. “The neutral archivist may serve as an unbiased resource for the filtering of information in an increasingly partisan (untrustworthy) world” [from the slide].


September 3, 2008

Preserving atoms

Beginning by disagreeing with me — always a good way to start! — Lev at Certain Musings has a useful post about the difficulty of preserving the texts we care about, including some of the interesting efforts underway.

