annotations Archives - Joho the Blog

April 9, 2013

Elsevier acquires Mendeley + all the data about what you read, share, and highlight

I liked the Mendeley guys. Their product is terrific — read your scientific articles, annotate them, be guided by the reading behaviors of millions of other people. I’d met with them several times over the years about whether our LibraryCloud project (still very active but undergoing revisions) could get access to the incredibly rich metadata Mendeley gathers. I also appreciated Mendeley’s internal conflict between the urge to openness and the need to run a business. They were making reasonable decisions, I thought. At the very least they felt bad about the tension :)

Thus I was deeply disappointed by their acquisition by Elsevier. We could have a fun contest to come up with the company we would least trust with detailed data about what we’re reading and what we’re attending to in what we’re reading, and maybe Elsevier wouldn’t win. But Elsevier would be up there. The idea of my reading behaviors adding economic value to a company making huge profits by locking scholarship behind increasingly expensive paywalls is, in a word, repugnant.

In tweets back and forth with Mendeley’s William Gunn [twitter: mrgunn], he assures us that Mendeley won’t become “evil” so long as he is there. I do not doubt Bill’s intentions. But there is no more perilous position than standing between Elsevier and profits.

I seriously have no interest in judging the Mendeley folks. I still like them, and who am I to judge? If someone offered me $45M (the minimum estimate that I’ve seen) for a company I built from nothing, and especially if the acquiring company assured me that it would preserve the values of that company, I might well take the money. My judgment is actually on myself. My faith in the ability of well-intentioned private companies to withstand the brute force of money has been shaken. After all this time, I was foolish to have believed otherwise.

MrGunn tweets: “We don’t expect you to be joyous, just to give us a chance to show you what we can do.” Fair enough. I would be thrilled to be wrong. Unfortunately, the real question is not what Mendeley will do, but what Elsevier will do. And in that I have much less faith.

I’ve been getting the Twitter handles of Mendeley and Elsevier wrong. Ack. The right ones: @Mendeley_com and @ElsevierScience. Sorry!

19 Comments »

March 28, 2013

[annotation][2b2k] Opencast-Matterhorn

Andy Wasklewicz and Jeff Austin from Entwine [twitter:entwinemedia] describe a multi-institutional project to build a platform-agnostic tool for enriching video through note-taking, structured annotations, and sharing. It uses HTML 5, and allows for structured tagging, time-based annotation, and more.

Comments Off on [annotation][2b2k] Opencast-Matterhorn

[annotations][2b2k] Rob Sanderson on annotating digitized medieval manuscripts

Rob Sanderson [twitter:@azaroth42] of Los Alamos is talking about annotating Medieval manuscripts.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

He says many Medieval manuscripts are being digitized. The Mellon Foundation is funding many such projects. But these have tended to reinvent the same tech, and have not been designed for interoperability with other projects. So the Digital Medieval Initiative was founded, with a long list of prestigious partners. They thought about what they’d like: distributed, linked data, interoperable, etc. For this they need a shared description format.

The traditional approach is to annotate an image of a page. But it can be very difficult to know which images to annotate; he gives as an example a page that has fold-outs. “The naive assumption is that an image equals a page.” But there may be fragments, or only portions of the page may have been digitized (e.g., the illuminations). There may be multiple images of a page, revealed by multi-spectral imaging. There may be multiple orientations of the page, etc.

The solution? The canvas paradigm. A canvas is an empty space corresponding to the rectangle (or whatever) of the page. You allow rich resources to be associated with it, and allow users to comment. For this, they use Open Annotation. You can specify a choice of images. You can associate text with an area of the canvas. There are lots of different ways to visualize those comments: overlays, side-by-side, etc.
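To make the model concrete, here is a minimal sketch (my own illustration, not an official SharedCanvas serialization — the URIs and field layout are assumptions): a canvas as an empty coordinate space, one annotation “painting” it with an image, and one text comment targeting a rectangular region.

```python
import json

def make_canvas(canvas_id, width, height):
    # A canvas is an empty coordinate space standing in for the page,
    # independent of any particular image of it.
    return {"@id": canvas_id, "@type": "sc:Canvas",
            "width": width, "height": height}

def annotate(body, target):
    # An Open Annotation-style link from a body (the content) to a
    # target (the canvas, or a region of it).
    return {"@type": "oa:Annotation", "body": body, "target": target}

canvas = make_canvas("http://example.org/ms1/canvas/1", 1200, 1800)

# The image is associated with (paints) the whole canvas.
image_anno = annotate(
    {"@id": "http://example.org/ms1/images/f1r.jpg", "@type": "dctypes:Image"},
    canvas["@id"])

# A scholar's comment targets just one region of the canvas,
# expressed here with a W3C Media Fragments xywh selector.
comment_anno = annotate(
    {"@type": "dctypes:Text", "chars": "An illuminated initial."},
    canvas["@id"] + "#xywh=100,150,400,300")

print(json.dumps([canvas, image_anno, comment_anno], indent=2))
```

Because the image and the comment are both just annotations on the canvas, a viewer can swap in a different image (say, a multi-spectral capture) without touching the comments.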

You can build hybrid pages. For example, an old scan might have a new color scan of its illustrations pointing at it. Or you could have a recorded performance of a piece of music pointing at the musical notation.

In summary, the SharedCanvas model uses open standards (HTML 5, Open Annotation, TEI, etc.) and can be implemented in a distributed way across repositories, encouraging engagement by domain experts.

Comments Off on [annotations][2b2k] Rob Sanderson on annotating digitized medieval manuscripts

[annotation][2b2k] Philip Desenne

I’m at a workshop on annotation at Harvard. Philip Desenne is giving one of the keynotes.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

We’re here to talk about Web 3.0, Phil says — making the Web more fully semantic.

Phil says that we need to rewrite the definition of annotation. We should be talking about hyper-nota: digital, media-rich annotations. Annotations are important, he says. Try to imagine social networks without the ratings, stars, comments, etc. Annotations also spawn new scholarship.

The new digital annotation paradigm is the gateway to Web 3.0: connecting knowledge through a common semantic language. There are many annotation tools out there. “All are very good in their own media…But none of them share a common model to interoperate.” That’s what we’re going to work on today. “The Open Annotation Framework” is the new digital paradigm. But it’s not a simple model; it’s a complex framework. Phil shows a pyramid: Create / Search / Seek patterns / Analyze / Publish / Share. [Each of these has multiple terms and ideas that I didn’t have time to type out.]

Of course we need to abide by open standards. He points to W3C, Open Source and Creative Commons. And annotations need to include multimedia notes. We need to be able to see annotations relating to one another, building networks across the globe. [Knowledge networks FTW!] Hierarchies of meaning allow for richer connections. We can analyze text and other media and connect that metadata. We can look across regional and cultural patterns. We can publish, share and collaborate. All if we have a standard framework.

For this to happen we need a standardized referencing system for segments or fragments of a work. We also need to be able to export annotations into standard formats such as TEI XML.

Lots of work has been done on this: RDF models and ontologies, the Open Annotation Community Group, the Open Annotation Model. “The Open Annotation Model is the common language.”

If we don’t adopt standards for annotation we’ll have disassociated, stagnant info. We’ll see decreased innovation in research, teaching, and learning. This is especially an issue when one thinks about MOOCs — a course with 150,000 students creating annotations.

Connective Collective Knowledge has existed for millennia, he says. As far back as Aristarchus, marginalia used symbols to point to different scrolls in the Library of Alexandria. Where are the connected collective knowledge systems today? Who is networking the commentaries on digital works? “Shouldn’t this be the mission of the 21st century library?”

Harvard has a portal for info about annotations: annotations.harvard.edu

3 Comments »