Joho the Blog » science

January 27, 2013

Alfred Russel Wallace’s letters go online, with a very buried CC license that maybe doesn’t apply anyway

The letters of Alfred Russel Wallace, co-discoverer of the theory of evolution by natural selection, are now online. As the Alfred Russel Wallace Correspondence Project explains, the collection consists of 4,000 letters gathered from about 100 different institutions, with about half in the British Natural History Museum and British Library.

The Correspondence Project has, admirably, been releasing the scans without waiting for transcription; more faster is better! Predictably annoyingly, the letters, written by a man who died ten years before the Perpetual Copyright date of 1923, seem to be (but are they?) carefully obstructed by copyright: The Natural History Museum, which houses the collection, asserts copyright over “data held in the Wallace Letters Online database (including letter summaries)” [pdf oddly unreadable in Mac Preview]. Beyond the summaries, exactly what data is this referring to? Not sure. Don’t know.

But that isn’t the full story anyway, for the NHM sends us to the Wallace Fund for more information about the copyright. That page tells us that the unpublished letters are copyrighted until 2039, with this very helpful footnote:

Unless the work was published with the permission of his Literary Estate before 1 August 1989, in which case the work will be in copyright for 70 years after Wallace’s death, unless he died more than 20 years before the work’s publication, in which case copyright would expire 50 years after publication.
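The footnote's branching logic is easier to follow as code. Here's a minimal sketch of the rules as stated; this is my simplification for illustration only, not legal advice, and the actual UK transitional provisions are more intricate:

```python
# Hedged sketch of the footnote's rules for Wallace's works (he died in 1913).
# A simplification for illustration only; the real UK rules have more cases.

WALLACE_DIED = 1913

def copyright_expiry(published_year=None, estate_permission=False):
    """Return the year the work's copyright expires under the footnote's rules."""
    if published_year is None or not estate_permission or published_year >= 1989:
        # Unpublished, or not published with the Literary Estate's
        # permission before 1 August 1989: protected until 2039.
        return 2039
    if published_year - WALLACE_DIED > 20:
        # He died more than 20 years before publication:
        # copyright expires 50 years after publication.
        return published_year + 50
    # Otherwise: 70 years after his death.
    return WALLACE_DIED + 70

print(copyright_expiry())                              # unpublished -> 2039
print(copyright_expiry(1916, estate_permission=True))  # -> 1983
print(copyright_expiry(1950, estate_permission=True))  # -> 2000
```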


Eventually it gets to some good news:

Authors wishing to publish such works would ordinarily need to obtain permission from the copyright holder before doing so. However, on July 31st 2011, in an attempt to facilitate the scholarly study of ARW’s writings, the co-executors of ARW’s Literary Estate agreed to allow third parties to publish ARW’s copyright works non-commercially without first having to ask the Literary Estate for permission, under the terms and conditions of Creative Commons license “Attribution-NonCommercial-ShareAlike 3.0 Unported”

So, are the letters published on the NHM site actually available under a Creative Commons non-commercial license? The Wallace Fund that aggregated them seems to think so. The NHM that published them maybe thinks not.

Because copyright is just so magical.


TWO HOURS LATER: Please see the first comment, from George Beccaloni, Director of the Wallace Correspondence Project. Thanks, George.

He explains that the transcribed text is available under a Creative Commons non-commercial license, but the digitized images are not. Plus some further complications, such as the content of the database being under copyright, although it is not clear from the site what data that is.

Since the aim of CC is to make it easier for people to re-use material, may I suggest (in the friendliest of fashions) that this be prominently clarified on the sites themselves?


January 9, 2013

What I learned at NASA

Well, I learned a bunch of stuff, but I’ll only mention two.

First, NASA is as totally awesome as you think it is. I went to the Langley center for a one-day visit, and got a morning tour, and it is a nerd-heaven work space, with no Star Wars white plastic, but lots and lots of dented workbenches covered with sprays of components. And it adds up to our species looking down on our planet. Ultra ultra cool.

Second, I got a tour of the National Transonic Facility by Bill Bisset, who manages the place. They test models in the world’s most sophisticated wind tunnel — they fill it with liquid nitrogen (which they make themselves) that’s blown in by the world’s most powerful horizontally-mounted electrical motor (that consumes an eighth of the output of a local nuclear generator), and they measure up to 5,000 different parameters. So, naturally, I ran an urban myth past Bill, because that’s an excellent use of his time.

I had been told by someone sometime that those little upturned wing tips you sometimes see on planes were discovered more than invented: Someone tried them out, and they turned out to increase the efficiency of the plane, but no one knew why.


Nope, nope, and nope. They’re called winglets. Here’s the story, from a NASA page:

The concept of winglets originated with a British aerodynamicist in the late 1800s, but the idea remained on the drawing board until rekindled in the early 1970s by Dr. Richard Whitcomb when the price of aviation fuel started spiraling upward.

Bill explained that winglets work by altering the vortex that forms when air rushes over a wing. “Winglets…produce a forward thrust inside the circulation field of the vortices and reduce their strength,” as the NASA page says. They increase efficiency by 6-9%. Bill said they also effectively increase the wingspan of the plane, but without extending the wings horizontally, which matters to airlines because they pay airports based upon the horizontal length of the wings.
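Bill's explanation maps onto the classic lifting-line result: induced drag falls as effective aspect ratio rises, C_Di = CL^2 / (pi * e * AR). A quick sketch with assumed numbers (none of these figures come from Bill or NASA) shows how a modest effective-span increase lands in the 6-9% range:

```python
import math

def induced_drag_coeff(cl, aspect_ratio, e=0.8):
    """Classic lifting-line estimate: C_Di = CL^2 / (pi * e * AR).
    e is the Oswald efficiency factor; 0.8 is a typical assumed value."""
    return cl**2 / (math.pi * e * aspect_ratio)

# Illustrative numbers, assumed for this sketch: a wing at CL = 0.5 with
# aspect ratio 9, versus the same wing whose winglets act like roughly a
# 10% increase in effective aspect ratio.
base = induced_drag_coeff(0.5, 9.0)
with_winglets = induced_drag_coeff(0.5, 9.9)
print(f"induced drag reduced by {(1 - with_winglets / base):.1%}")  # ~9%
```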

So, yes, everything I’d heard was wrong. And, yes, it was in Wikipedia all along.

(And yes, I learned a whole lot more. It was for me a wonderful day.)


January 5, 2013

[2b2k] Science as social object

An article published in Science on Thursday, securely locked behind a paywall, paints a mixed picture of science in the age of social media. In “Science, New Media, and the Public,” Dominique Brossard and Dietram A. Scheufele urge action so that science will be judged on its merits as it moves through the Web. That’s a worthy goal, and it’s an excellent article. Still, I read it with a sense that something was askew. I think ultimately it’s something like an old vs. new media disconnect.

The authors begin by noting research that suggests that “online science sources may be helping to narrow knowledge gaps” across educational levels[1]. But all is not rosy. Scientists are going to have “to rethink the interface between the science community and the public.” They point to three reasons.

First, the rise of online media has reduced the amount of time and space given to science coverage by traditional media [2].

Second, the algorithmic prioritizing of stories takes editorial control out of the hands of humans who might make better decisions. The authors point to research that “shows that there are often clear discrepancies between what people search for online, which specific areas are suggested to them by search engines, and what people ultimately find.” The results provided by search engines “may all be linked in a self-reinforcing informational spiral…”[3] This leads them to ask an important question:

Is the World Wide Web opening up a new world of easily accessible scientific information to lay audiences with just a few clicks? Or are we moving toward an online science communication environment in which knowledge gain and opinion formation are increasingly shaped by how search engines present results, direct traffic, and ultimately narrow our informational choices? Critical discussions about these developments have mostly been restricted to the political arena…

Third, we are debating science differently because the Web is social. As an example they point to the fact that “science stories usually…are embedded in a host of cues about their accuracy, importance, or popularity,” from tweets to Facebook “Likes.” “Such cues may add meaning beyond what the author of the original story intended to convey.” The authors cite a recent conference [4] where the tone of online comments turned out to affect how people took the content. For example, an uncivil tone “polarized the views….”

They conclude by saying that we’re just beginning to understand how these Web-based “audience-media interactions” work, but that the opportunity and risk are great, so more research is greatly needed:

Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate.

I agree with so much of this article, including its call for action, yet it felt odd to me that scientists will be surprised to learn that the Web does not convey scientific information in a balanced and impartial way. You only are surprised by this if you think that the Web is a medium. A medium is that through which content passes. A good medium doesn’t corrupt the content; it conveys signal with a minimum of noise.

But unlike any medium since speech, the Web isn’t a passive channel for the transmission of messages. Messages only move through the Web because we, the people on the Web, find them interesting. For example, I’m moving (infinitesimally, granted) this article by Brossard and Scheufele through the Web because I think some of my friends and readers will find it interesting. If someone who reads this post then tweets about it or about the original article, it will have moved a bit further, but only because someone cared about it. In short, we are the medium, and we don’t move stuff that we think is uninteresting and unimportant. We may move something because it’s so wrong, because we have a clever comment to make about it, or even because we misunderstand it, but without our insertion of ourselves in the form of our interests, it is inert.

So, the “dynamics of online communication systems” are indeed going to have “a stronger impact on public views about science” than the scientific research itself does because those dynamics are what let the research have any impact beyond the scientific community. If scientific research is going to reach beyond those who have a professional interest in it, it necessarily will be tagged with “meaning beyond what the author of the original story intended to convey.” Those meanings are what we make of the message we’re conveying. And what we make of knowledge is the energy that propels it through the new system.

We therefore cannot hope to peel the peer-to-peer commentary from research as it circulates broadly on the Net, not that the Brossard and Scheufele article suggests that. Perhaps the best we can do is educate our children better, and encourage more scientists to dive into the social froth as the place where their research is having its broadest effect.


Notes, copied straight from the article:

[1] M. A. Cacciatore, D. A. Scheufele, E. A. Corley, Public Underst. Sci.; 10.1177/0963662512447606 (2012).

[2] C. Russell, in Science and the Media, D. Kennedy, G. Overholser, Eds. (American Academy of Arts and Sciences, Cambridge, MA, 2010), pp. 13–43

[3] P. Ladwig et al., Mater. Today 13, 52 (2010)

[4] P. Ladwig, A. Anderson, abstract, Annual Conference of the Association for Education in Journalism and Mass Communication, St. Louis, MO, August 2011; www.aejmc. com/home/2011/06/ctec-2011-abstracts


December 12, 2012

[eim][2b2k] The DSM — never entirely correct

The American Psychiatric Association has approved its new manual of diagnoses — Diagnostic and Statistical Manual of Mental Disorders — after five years of controversy [nytimes].

For example, it has removed Asperger’s as a diagnosis, lumping it in with autism, but it has split out hoarding from the more general category of obsessive-compulsive disorder. Lumping and splitting are the two most basic activities of cataloguers and indexers. There are theoretical and practical reasons for sometimes lumping things together and sometimes splitting them, but they also characterize personalities. Some of us are lumpers, and some of us are splitters. And all of us are a bit of each at various times.

The DSM runs into the problems faced by all attempts to classify a field. Attempts to come up with a single classification for a complex domain try to impose an impossible order:

First, there is rarely (ever?) universal agreement about how to divvy up a domain. There are genuine disagreements about which principles of organization ought to be used, and how they apply. Then there are the Lumper vs. the Splitter personalities.

Second, there are political and economic motivations for dividing up the world in particular ways.

Third, taxonomies are tools. There is no one right way to divide up the world, just as there is no one way to cut a piece of plywood and no one right thing to say about the world. It depends what you’re trying to do. DSM has conflicting purposes. For one thing, it affects treatment. For example, the NY Times article notes that the change in the classification of bipolar disease “could ‘medicalize’ frequent temper tantrums,” and during the many years in which the DSM classified homosexuality as a syndrome, therapists were encouraged to treat it as a disease. But that’s not all the DSM is for. It also guides insurance payments, and it affects research.

Given this, do we need the DSM? Maybe for insurance purposes. But not as a statement of where nature’s joints are. In fact, it’s not clear to me that we even need it as a single source to define terms for common reference. After all, biologists don’t agree about how to classify species, but that science seems to be doing just fine. The Encyclopedia of Life takes a really useful approach: each species gets a page, but the site provides multiple taxonomies so that biologists don’t have to agree on how to lump and split all the forms of life on the planet.
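The Encyclopedia of Life approach is easy to picture as a data structure: one record per species, with multiple classifications stored side by side rather than one imposed hierarchy. A sketch (the structure is my illustration, and the lineages are abbreviated):

```python
# Sketch of the Encyclopedia of Life idea described above: one page per
# species, carrying several alternative classifications side by side.
# The species and sources are real; the structure is my illustration.
species_pages = {
    "Apis mellifera": {
        "common_name": "western honey bee",
        "classifications": {
            "ITIS": ["Animalia", "Arthropoda", "Insecta", "Hymenoptera", "Apidae"],
            "NCBI": ["Eukaryota", "Metazoa", "Arthropoda", "Insecta", "Hymenoptera", "Apidae"],
        },
    },
}

# A consumer picks whichever taxonomy suits its purpose; no forced agreement.
page = species_pages["Apis mellifera"]
for source, lineage in page["classifications"].items():
    print(source, "->", " > ".join(lineage))
```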

If we do need a single diagnostic taxonomy, DSM is making progress in its methodology. It has more publicly entered the fray of argument, it has tried to respond to current thinking, and it is now going to be updated continuously, rather than every 5 years. All to the good.

But the rest of its problems are intrinsic to its very existence. We may need it for some purposes, but it is never going to be fully right…because tools are useful, not true.


October 28, 2012

[2b2k] The moment for science

And one more thing about my previous post: I understand that when Heidegger was writing Being and Time in the 1920s, it was important to try to relax our culture’s commitment to scientific objectivity in order to allow more types of truths to appear – more ways that the world shows itself to us.

Almost a hundred years later, with a brand new medium for knowledge, truth, and disclosure, it is time to re-assert science’s privileged (yet still human and imperfect) position as we try to come to agreement across cultures about what we need to do in order to live together on this earth.

In my opinion.


[2b2k] Facts, truths, and meta-knowledge

Last night I gave a talk at the Festival of Science in Genoa (or, as they say in Italy, Genova). I was brought over by Codice Edizioni, the publisher of the just-released Italian version of Too Big to Know (or, as they say in Italy “La Stanza Intelligente” (or as they say in America, “The Smart Room”)). The event was held in the Palazzo Ducale, which ain’t no Elks Club, if you know what I mean. And if you don’t know what I mean, what I mean is that it’s a beautiful, arched, painted-ceiling room that holds 800 people and one intimidated American.

genova - palazzo ducale

After my brief talk, Serena Danna of Corriere della Sera interviewed me. She’s really good. For example, her first question was: If the facts no longer have the ability to settle arguments the way we hoped they would, then what happens to truth?

Yeah, way to pitch the ol’ softballs, Serena!

I wasn’t satisfied with my answer, which had three parts. (1) There are facts. The world is one way and not all the other ways that it isn’t. You are not free to make up your own facts. [Yes, I'm talking to you, Mitt!] (2) The basing of knowledge primarily on facts is a relatively new phenomenon. (3) I explicitly invoked Heidegger’s concept of truth, with a soupçon of pragmatism’s view of truth as a tool intended to serve a purpose.

Meanwhile, I’ve been watching The Heidegger Circle mailing list contort itself trying to understand Heidegger’s views about the world that existed before humans entered the scene. Was there Being? Were there beings? It seems to me that any answer has to begin by saying, “Of course the world existed before we did.” But not everyone on the list is comfortable with a statement that simple. Some seem to think that acknowledging that most basic fact somehow diminishes Heidegger’s analysis of the relation of Being and disclosure. Yo, Heideggerians! The world shows itself to us as independent of us. We were born into it, and it keeps going after we’ve died. If that’s a problem for your philosophy, then your philosophy is a problem. And for all of the problems with Heidegger’s philosophy, that just isn’t one. (To be fair, no one on the list suggests that the existence of the universe depends upon our awareness of it, although some are puzzled about how to maintain Heidegger’s conception of “world” (which does seem to depend on us) with that which survives our awareness of it. Heidegger, after all, offers phenomenological ontology, so there is a question about what Being looks like when there is no one to show itself to.)

So, I wasn’t very happy with what I said about truth last night. I said that I liked Heidegger’s notion that truth is the world showing itself to us, and it shows itself to us differently depending on our projects. I’ve always liked this idea for a few reasons. First, it’s phenomenologically true: the onion shows itself differently depending on whether you’re intending to cook it, whether you’re trying to grow it as a cash crop, whether you’re trying to make yourself cry, whether you’re trying to find something to throw at a bad actor, etc. Second, because truth is the way the world shows itself, Heidegger’s sense contains the crucial acknowledgement that the world exists independently of us. Third, because this sense of truth looks at our projects, it contains the crucial acknowledgement that truth is not independent of our involvement in the world (which Heidegger accurately characterizes not with the neutral term “involvement” but as our caring about what happens to us and to our fellow humans). Fourth, this gives us a way of thinking about truth without the correspondence theory’s schizophrenic metaphysics that tells us that we live inside our heads, and our mental images can either match or fail to match external reality.

But Heidegger’s view of truth doesn’t do the job that we want done when we’re trying to settle disagreements. Heidegger observes (correctly in my and everybody’s opinion) that different fields have different methodologies for revealing the truth of the world. He speaks coldly (it seems to me) of science, and warmly of poetry. I’m much hotter on science. Science provides a methodology for letting the world show itself (= truth) that is reproducible precisely so that we can settle disputes. For settling disputes about what the world is like regardless of our view of it, science has priority, just as the legal system has priority for settling disputes over the law.

This matters a lot not just because of the spectacular good that science does, but because the question of truth only arises because we sense that something is hidden from us. Science does not uncover all truths but it uniquely uncovers truths about which we can agree. It allows the world to speak in a way that compels agreement. In that sense, of all the disciplines and methodologies, science is the closest to giving the earth we all share its own authentic voice. That about which science cannot speak in a compelling fashion across all cultures and starting points is simply not subject to scientific analysis. Here the poets and philosophers can speak and should be heard. (And of course the compulsive force science manifests is far from beyond resistance and doubt.)

But, when we are talking about the fragmenting of belief that the Internet facilitates, and the fact that facts no longer settle arguments across those gaps, then it is especially important that we commit to science as the discipline that allows the earth to speak of itself in its most compelling terms.

Finally, I was happy that last night I did manage to say that science provides a model for trying to stay smart on the Internet because it is highly self-aware about what it knows: it does not simply hold on to true statements, but is aware of the methodology that led us to see those statements as true. This type of meta awareness — not just within the realm of science — is crucial for a medium as open as the Internet.


September 30, 2012

[2b2k] A moon from Mars

Someday I’ll figure out the threads that bind the mere sentences that make me fill with tears. Sometimes it’s sadness, but surprisingly often it’s joy.

Here’s today’s joy:

Look in the upper right for a crescent-shaped smudge. That’s Phobos, one of Mars’ two moons.

Emily Lakdawalla writes in her blog:

Think about this for a moment — we’re seeing a different moon from the surface of a different world. And this moon is weird not just for its lumpiness, but also because it orbits so close to Mars that it outpaces Mars’ rotation. That means it rises in the west and sets in the east, more than twice every Martian day. Completely alien. And awesome, in the literal sense of the word.

It turns me into a soppy ol’ Boehner.

Here’s a close-up of Phobos:

Emily adds:

I would not have noticed this image were it not for the ever-watchful members of (user “fredk” this time). I’m so grateful for that community. We’re running a fundraiser right now to support our hosting costs — if you, too, value the beautiful images and constant attentiveness of this community of volunteers and amateurs, please consider making a donation to support it.


September 10, 2012

Obesity is good for your heart

From an article by Lisa Nainggolan:

Gothenburg, Sweden – Further support for the concept of the obesity paradox has come from a large study of patients with acute coronary syndrome (ACS) in the Swedish Coronary Angiography and Angioplasty Registry (SCAAR) [1]. Those who were deemed overweight or obese by body-mass index (BMI) had a lower risk of death after PCI [percutaneous coronary intervention, aka angioplasty] than normal-weight or underweight participants up to three years after hospitalization, report Dr Oskar Angerås (University of Gothenburg, Sweden) and colleagues in their paper, published online September 5, 2012 in the European Heart Journal.
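For reference, the BMI bands such registry studies typically group patients by are the standard WHO cut-offs (weight in kilograms divided by height in meters squared); a quick sketch:

```python
def bmi(weight_kg, height_m):
    """Body-mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m**2

def bmi_category(value):
    """Standard WHO bands; registry studies like SCAAR typically bin on these."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal"
    if value < 30:
        return "overweight"
    return "obese"

print(bmi_category(bmi(85, 1.75)))  # 85 kg at 1.75 m -> BMI ~27.8 -> "overweight"
```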

Can confirm. My grandmother in the 1930s was instructed to make sure she fed her husband lots and lots of butter to lubricate his heart after a heart attack. This proved to work extraordinarily well, at least until his next heart attack.

I refer once again to the classic 1999 The Onion headline: Eggs Good for You This Week.


August 13, 2012

Hummingbirds live in The Shire

I’ve been watching hummingbirds at our feeder, and took a moment to read up on them a bit more. The site I found has a lot of interesting information, including about their impossible migrations. (These migrations are proved by the Internet and reported by people like you and me.)

But what really amused me was this straightforward and presumably accurate description of their nests:

The walnut-sized nest, built by the female, is constructed on a foundation of bud scales attached to a tree limb with spider silk; lichens camouflage the outside, and the inside is lined with dandelion, cattail, or thistle down.

Undoubtedly tended by singing dragonflies that feed on unicorn tears.


July 19, 2012

[2b2k][eim]Digital curation

I’m at the “Symposium on Digital Curation in the Era of Big Data” held by the Board on Research Data and Information of the National Research Council. These liveblog notes cover (in some sense — I missed some folks, and have done my usual spotty job on the rest) the morning session. (I’m keynoting in the middle of it.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Alan Blatecky [pdf] from the National Science Foundation says science is being transformed by Big Data. [I can't see his slides from the panel at front.] He points to the increase in the volume of data, but we haven’t paid enough attention to the longevity of the data. And, he says, some data is centralized (LHC) and some is distributed (genomics). And, our networks are unable to transport large amounts of data [see my post], making where the data is located quite significant. NSF is looking at creating data infrastructures. “Not one big cloud in the sky,” he says. Access, storage, services — how do we make that happen and keep it leading edge? We also need a “suite of policies” suitable for this new environment.
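A back-of-envelope calculation shows why network capacity matters so much here. The link speed below is my assumption; the talk gave no numbers:

```python
# Back-of-envelope: why moving Big Data over the network is slow, and why
# the physical location of the data matters. Link speed is an assumption.
petabyte_bits = 8 * 10**15   # 1 PB in bits (decimal units)
link_bps = 10 * 10**9        # a dedicated 10 Gbps link, fully utilized
seconds = petabyte_bits / link_bps
print(f"{seconds / 86400:.1f} days to move 1 PB")  # ~9.3 days
```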

He closes by talking about the Data Web Forum, a new initiative to look at a “top-down governance approach.” He points positively to the IETF’s “rough consensus and running code.” “How do we start doing that in the data world?” How do we get a balanced representation of the community? This is not a regulatory group; everything will be open source, and progress will be through rough consensus. They’ve got some funding from gov’t groups around the world. (Check for more info.)

Now Josh Greenberg from the Sloan Foundation. He points to the opportunities presented by aggregated Big Data: the effects on social science, on libraries, etc. But the tools aren’t keeping up with the computational power, so researchers are spending too much time mastering tools, plus it can make reproducibility and provenance trails difficult. Sloan is funding some technical approaches to increasing the trustworthiness of data, including in publishing. But Sloan knows that this is not purely a technical problem. Everyone is talking about data science. Data scientist defined: Someone who knows more about stats than most computer scientists, and can write better code than typical statisticians :) But data science needs to better understand stewardship and curation. What should the workforce look like so that the data-based research holds up over time? The same concerns apply to business decisions based on data analytics. The norms that have served librarians and archivists of physical collections now apply to the world of data. We should be looking at these issues across the boundaries of academics, science, and business. E.g., economics research now rests on data from Web businesses, US Census, etc.

[I couldn't liveblog the next two — Michael and Myron — because I had to leave my computer on the podium. The following are poor summaries.]

Michael Stebbins, Assistant Director for Biotechnology in the Office of Science and Technology Policy in the White House, talked about the Administration’s enthusiasm for Big Data and open access. It’s great to see this degree of enthusiasm coming directly from the White House, especially since Michael is a scientist and has worked for mainstream science publishers.

Myron Gutmann, Ass’t Dir of the National Science Foundation, likewise expressed commitment to open access, and said that there would be an announcement in Spring 2013 that in some ways will respond to the recent UK and EC policies requiring the open publishing of publicly funded research.

After the break, there’s a panel.

Anne Kenney, Dir. of Cornell U. Library, talks about the new emphasis on digital curation and preservation. She traces this back at Cornell to 2006 when an E-Science task force was established. She thinks we now need to focus on e-research, not just e-science. She points to Walters and Skinner’s “New Roles for New Times: Digital Curation for Preservation.” When it comes to e-research, Anne points to the need for metadata stabilization, harmonizing applications, and collaboration in virtual communities. Within the humanities, she sees more focus on curation, the effect of the teaching environment, and more of a focus on scholarly products (as opposed to the focus on scholarly process, as in the scientific environment).

She points to Youngseek Kim et al. “Education for eScience Professionals”: digital curators need not just subject domain expertise but also project management and data expertise. [There's lots of info on her slides, which I cannot begin to capture.] The report suggests an increasing focus on people-focused skills: project management, bringing communities together.

She very briefly talks about Mary Auckland’s “Re-Skilling for Research” and Williford and Henry, “One Culture: Computationally Intensive Research in the Humanities and Sciences.”

So, what are research libraries doing with this information? The Association of Research Libraries has a jobs announcements database. And Tito Sierra did a study last year analyzing 2011 job postings. He looked at 444 job descriptions. 7.4% of the jobs were “newly created or new to the organization.” New mgt level positions were significantly higher, while subject specialist jobs were under-represented.

Anne went through Tito’s data and found 13.5% have “digital” in the title. There were more digital humanities positions than e-science. She posts a list of the new titles jobs are being given, and they’re digilicious. 55% of those positions call for a library science degree.
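The kind of tally Anne describes is a few lines of code over a list of posting titles. The titles below are invented examples, not Tito Sierra's actual data:

```python
# Illustrative posting titles -- invented examples, not Tito Sierra's dataset.
titles = [
    "Digital Curation Librarian",
    "Digital Humanities Specialist",
    "Subject Librarian, Chemistry",
    "E-Science Data Librarian",
    "Head of Digital Scholarship",
    "Metadata Analyst",
]

# Fraction of titles mentioning "digital", case-insensitively.
digital_share = sum("digital" in t.lower() for t in titles) / len(titles)
print(f"{digital_share:.1%} of titles mention 'digital'")  # 50.0% here
```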

Anne concludes: It’s a growth area, with responsibilities more clearly defined in the sciences. There’s growing interest in serving the digital humanists. “Digital curation” is not common in the qualifications nomenclature. MLS or MLIS is not the only path. There’s a lot of interest in post-doctoral positions.

Margarita Gregg of the National Oceanic and Atmospheric Administration, begins by talking about challenges in the era of Big Data. They produce about 15 petabytes of data per year. It’s not just about Big Data, though. They are very concerned with data quality. They can’t preserve all versions of their datasets, and it’s important to keep track of the provenance of that data.

Margarita directs one of NOAA’s data centers that acquires, preserves, assembles, and provides access to marine data. They cannot preserve everything. They need multi-disciplinary people, and they need to figure out how to translate this data into products that people need. In terms of personnel, they need: Data miners, system architects, developers who can translate proprietary formats into open standards, and IP and Digital Rights Management experts so that credit can be given to the people generating the data. Over the next ten years, she sees computer science and information technology becoming the foundations of curation. There is no currently defined job called “digital curator” and that needs to be addressed.

Vicki Ferrini at the Lamont-Doherty Earth Observatory at Columbia University works on data management, metadata, discovery tools, educational materials, best practice guidelines for optimizing acquisition, and more. She points to the increased communication between data consumers and producers.

As data producers, the goal is scientific discovery: data acquisition, reduction, assembly, visualization, integration, and interpretation. And then you have to document the data (= metadata).

Data consumers: They want data discoverability and access. Increasingly they are concerned with the metadata.

The goal of data providers is to provide access, preservation and reuse. They care about data formats, metadata standards, interoperability, the diverse needs of users. [I've abbreviated all these lists because I can't type fast enough.]

At the intersection of these three domains is the data scientist. She refers to this as the “data stewardship continuum” since it spans all three. A data scientist needs to understand the entire life cycle, have domain experience, and have technical knowledge about data systems. “Metadata is key to all of this.” Skills: communication and organization, understanding the cultural aspects of the user communities, people and project management, and a balance between micro- and macro perspectives.

Challenges: Hard to find the right balance between technical skills and content knowledge. Also, data producers are slow to join the digital era. Also, it’s hard to keep up with the tech.

Andy Maltz, Dir. of the Science and Technology Council of the Academy of Motion Picture Arts and Sciences. AMPAS is about arts and sciences, he says, not about The Business.

The Science and Technology Council was formed in 2005. They have lots of data they preserve. They’re trying to build the pipeline for next-generation movie technologists, but they’re falling behind, so they have an internship program and a curriculum initiative. He recommends we read their study The Digital Dilemma. It says that there’s no digital solution that meets film’s requirement to be archived for 100 years at a low cost. It costs $400/yr to archive a film master vs $11,000 to archive a digital master (as of 2006) because of labor costs. [Did I get that right?] He says collaboration is key.
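To put those archiving figures in perspective, here's a back-of-envelope sketch (my arithmetic, not the report's) using the 2006 per-year costs cited above and the 100-year archival requirement:

```python
# Rough cost comparison using The Digital Dilemma's 2006 figures as
# quoted in the talk: $400/yr for a film master vs. $11,000/yr for a
# digital master. Ignores inflation and technology-migration costs.
FILM_COST_PER_YEAR = 400
DIGITAL_COST_PER_YEAR = 11_000
ARCHIVE_LIFETIME_YEARS = 100  # film's archival requirement cited in the talk

film_total = FILM_COST_PER_YEAR * ARCHIVE_LIFETIME_YEARS        # $40,000
digital_total = DIGITAL_COST_PER_YEAR * ARCHIVE_LIFETIME_YEARS  # $1,100,000

print(f"Film over 100 yrs: ${film_total:,}")
print(f"Digital over 100 yrs: ${digital_total:,}")
print(f"Digital costs {digital_total / film_total:.1f}x as much")
```

At those rates, a digital master costs about 27x as much to keep for a century, which is the "digital dilemma" in a nutshell.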

In January they released The Digital Dilemma 2. It found that independent filmmakers, documentarians, and nonprofit audiovisual archives are loosely coupled, widely dispersed communities. This makes collaboration more difficult. The efforts are also poorly funded, and people often lack technical skills. The report recommends that the next gen of digital archivists be digital natives. But the real issue is technology obsolescence. “Technology providers must take archival lifetimes into account.” Also, system engineers should be taught to consider this.

He highly recommends the Library of Congress’ “The State of Recorded Sound Preservation in the United States,” which rings an alarm bell. He hopes there will be more doctoral work on these issues.

Among his controversial proposals: Require higher math scores for MLS/MLIS students since they tend to score lower than average on that. Also, he says that the new generation of content creators have no curatorial awareness. Executives and managers need to know that this is a core business function.

Demand side data points: 400 movies/year at 2PB/movie. CNN has 1.5M archived assets, and generates 2,500 new archive objects/wk. YouTube: 72 hours of video uploaded every minute.
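A quick back-of-envelope sketch of the scale those demand-side numbers imply (my arithmetic, not the speaker's):

```python
# Annualizing the demand-side figures quoted above.
MOVIES_PER_YEAR = 400
PB_PER_MOVIE = 2
movie_pb_per_year = MOVIES_PER_YEAR * PB_PER_MOVIE  # 800 PB/year of new movie data

CNN_NEW_OBJECTS_PER_WEEK = 2_500
cnn_objects_per_year = CNN_NEW_OBJECTS_PER_WEEK * 52  # 130,000 new archive objects/year

YOUTUBE_HOURS_PER_MINUTE = 72
youtube_hours_per_day = YOUTUBE_HOURS_PER_MINUTE * 60 * 24  # 103,680 hours uploaded/day

print(f"Movies: {movie_pb_per_year} PB/year")
print(f"CNN: {cnn_objects_per_year:,} archive objects/year")
print(f"YouTube: {youtube_hours_per_day:,} hours/day")
```

That's roughly 800 petabytes of new movie masters a year, and YouTube alone takes in over a hundred thousand hours of video per day, which is the scale any curation strategy has to contend with.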


  • Show business is a business.

  • Need does not necessarily create demand.

  • The nonprofit AV archive community is poorly organized.

  • Next gen needs to be digital natives with strong math and sci skills.

  • The next gen of executive leaders needs to understand the importance of this.

  • Digital curation and long-term archiving need a business case.


Q: How about linking the monetary value of the data to its metadata? That would encourage the generation of metadata.

Q: Weinberger paints a picture of flexible world of flowing data, and now we’re back in the academic, scientific world where you want good data that lasts. I’m torn.

A: Margarita: We need to look at how the data are being used. Maybe in some circumstances the quality of the data doesn’t matter. But there are other instances where you’re looking for the highest-quality data.

A: [audience] In my industry, one person’s outtakes are another person’s director’s cuts.

A: Anne: In the library world, we say that if a little metadata is good, a lot would be better. We need to step away from trying to capture the most metadata and toward capturing the most useful (since we can’t capture everything). And how do you produce data in a way that’s opened up to future users, as well as being useful for its primary consumers? It’s a very interesting balance that needs to be struck. Maybe short-term needs take priority and long-term ones rank lower.

A: Vicki: The scientists I work with use discrete data sets, spreadsheets, etc. As we go along we’ll have new ways to check the quality of datasets so we can use the messy data as well.

Q: Citizen curation? E.g., a lot of antiques are curated by being put into people’s attics…Not sure what that might imply as model. Two parallel models?

A: Margarita: We’re going to need to engage anyone who’s interested. We need to incorporate citizen curation.

A: Anne: That’s already underway where people have particular interests. E.g., Cornell’s Lab of Ornithology, where birders contribute heavily.

Q: What one term will bring people info about this topic?

A: Vicki: There isn’t one term, which speaks to the linked data concept.

Q: How will you recruit people from all walks of life to have the skills you want?

A: Andy: We need to convince people way earlier in the educational process that STEM is cool.

A: Anne: We’ll have to rely to some degree on post-hire education.

Q: My shop produces and integrates lots of data. We need people with domain and computer science skills. They’re more likely to come out of the domains.

A: Vicki: As long as you’re willing to take the step across the boundary, it doesn’t matter which side you start from.

Q: Seven years ago in library school, I was told that you need to learn a little programming so that you understand it. I didn’t feel like I had to add a whole other profession onto the one I was studying.
