Joho the Blog » interop

March 28, 2013

[annotation][2b2k] Critique^it

Ashley Bradford of Critique-It describes his company’s way of keeping review and feedback engaging.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

To what extent can and should we allow classroom feedback to be available in the public sphere? The classroom is a type of Habermasian civic society. Owning one’s discourse in that environment is critical. It has to feel human if students are to learn.

So, you can embed text, audio, and video feedback in documents, videos, and images. It translates docs into HTML. To make the feedback feel human, it uses slightly informal stamps. You can also type in comments, marking them as neutral, positive, or critique. A “critique panel” follows you through the doc as you read, so you don’t have to scroll around. It rolls up comments and stats for the student or the faculty.
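[A toy sketch of how typed comments with a tone marker might be modeled and rolled up into stats — names and fields are my guesses, not Critique-It’s actual schema:]

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str
    tone: str  # "neutral" | "positive" | "critique"

def roll_up(comments):
    """Summarize feedback tone counts for a student or faculty view."""
    return Counter(c.tone for c in comments)

comments = [
    Comment("TA", "Nice thesis statement.", "positive"),
    Comment("TA", "Cite your source here.", "critique"),
    Comment("Peer", "See paragraph 2.", "neutral"),
]
stats = roll_up(comments)
print(stats["critique"])  # 1
```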

It works the same in different doc types, including PowerPoint, images, and video.

Critiques can be shared among groups. Groups can be arbitrarily defined.

It uses HTML 5. It’s written in JavaScript and PHP, and uses MySQL.

“We’re starting with an environment. We’re building out tools.” Ashley aims for Critique^It to feel very human.


[annotation][2b2k]Opencast-Matterhorn

Andy Wasklewicz and Jeff Austin from Entwine [twitter:entwinemedia] describe a multi-institutional project to build a platform-agnostic tool for enriching video through note-taking, structured annotations, and sharing. It uses HTML 5, and allows for structured tagging, time-based annotation, and more.


[annotation][2b2k] Mediathread

Jonah Bossewich and Mark Philipson from Columbia University talk about Mediathread, an open source project that makes it easy to annotate various digital sources. It’s used in many courses at Columbia, as well as around the world.


It comes from Columbia’s Center for New Media Teaching and Learning. It began with Vital, a video library tool. It let students clip and save portions of videos, and comment on them. Mediathread connects annotations to sources by bookmarking, via a bookmarklet that interoperates with a variety of collections. The bookmarklet scrapes the metadata because “We couldn’t wait for the standards to be developed.” Once an item is in Mediathread, it embeds the metadata as well.
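[The scraping idea, reduced to a sketch — the real bookmarklet is JavaScript and tuned to particular collections; this just shows the core move of pulling Dublin Core / Open Graph metadata out of a page. The sample page is invented:]

```python
from html.parser import HTMLParser

class MetaScraper(HTMLParser):
    """Collect Dublin Core / Open Graph <meta> tags from an HTML page."""

    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name") or attrs.get("property")
        if name and (name.startswith("DC.") or name.startswith("og:")):
            self.metadata[name] = attrs.get("content", "")

page = """<html><head>
<meta name="DC.title" content="Battleship Potemkin">
<meta property="og:type" content="video.movie">
</head><body></body></html>"""

scraper = MetaScraper()
scraper.feed(page)
print(scraper.metadata["DC.title"])  # Battleship Potemkin
```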

It has always been conceived of as a “small-group sharing and collaboration space.” It’s designed for classes. You can only see the annotations by people in your class. It does item-level annotation, as well as regions.

Mediathread connects assignments and responses, as well as other workflows. [He's talking quickly :)]

Mediathread’s bookmarklet approach requires it to accommodate the particularities of sites. They are aiming to make the annotations interoperable in standard forms.


[annotation][2b2k] Phil Desenne on Harvard annotation tools

Phil Desenne begins with a brief history of annotation tools at Harvard. There are a lot, for annotating everything from texts to scrolls to music scores to video. Most of them are collaborative tools, which have gone from Adobe AIR to Harvard iSites to open source HTML 5. “It’s been a wonderful experience.” It’s been picked up by groups in Mexico, South America and Europe.


Phil works on edX. “We’re beginning to introduce annotation into edX.” It’s being used to encourage close reading. “It’s the beginning of a new way of thinking about teaching and assessing students.” Students tag the text, which “is the beginning of a semantic tagging system…Eventually we want to create a semantic ontology.”

What are the implications for the “MOOC Generation”? MOOC students are out finding information anywhere they can. Yet they stick within a single learning management system (LMS). LMSs usually have commentary tools, “but none of them talk with one another. Even within the same LMS you don’t have cross-referencing of the content.” We should have an interoperable layer that rides on top of the LMSs.

Within edX, there are discussions within classes, courses, tutorials, etc. These should be aggregated so that the conversations can reach across the entire space, and, of course, outside of it. edX is now working on annotation systems that will do this. E.g., imagine being able to discuss a particular image or fragments of videos, and being able to insert images into streams of commentary. Plus analytics of these interactions. Heatmaps of activity. And a student should be able to aggregate all her notes, journal-like, so they can be exported, saved, and commented on. “We’re talking about a persistent annotation layer with API access.” “We want to go there.”
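[A minimal sketch of what one record in a persistent annotation layer might hold, with a journal-like per-student export — field names are assumptions of mine, not edX’s schema:]

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class Annotation:
    student: str
    target_uri: str          # the video, image, or text being annotated
    body: str                # the note itself
    start_seconds: Optional[float] = None  # anchor for video fragments

def export_journal(annotations, student):
    """Aggregate one student's notes and export them as JSON."""
    mine = [asdict(a) for a in annotations if a.student == student]
    return json.dumps(mine, indent=2)

notes = [
    Annotation("ada", "https://example.org/lecture1.mp4",
               "Key claim starts here.", start_seconds=42.0),
    Annotation("ben", "https://example.org/reading2.html",
               "Compare with week 1."),
]
print(export_journal(notes, "ada"))
```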

For this we need stable repositories. They’ll use URNs.


[annotation][2b2k] Paolo Ciccarese on the Domeo annotation platform

Paolo Ciccarese begins by reminding us just how vast the scientific literature is. We can’t possibly read everything we should. But “science is social” so we rely on each other, and build on each other’s work. “Everything we do now is connected.”


Today’s media do provide links, but not enough. Things are so deeply linked. “How do we keep track of it?” How do we communicate with others so that when they read the same paper they get a little bit of our mental model, and see why we found the article interesting?

Paolo’s project — Domeo [twitter:DomeoTool] — is a web app for “producing, browsing, and sharing manual and semi-automatic (structured and unstructured) annotations, using open standards.” Domeo shows you an article and lets you annotate fragments. You can attach a tag or an unstructured comment. The tag can be defined by the user or by a defined ontology. Domeo doesn’t care which ontologies you use, which means you could use it for annotating recipes as well as science articles.

Domeo also enables discussions; it has a threaded message facility. You can also run text mining and entity recognition systems (Calais, etc.) that automatically annotate the work with those words, which helps with search, understanding, and curation. This too can be a social process. Domeo lets you keep the annotation private or share it with colleagues, groups, communities, or the Web. Also, Domeo can be extended. In one example, it produces information about experiments that can be put into a database where it can be searched and linked up with other experiments and articles. Another example: “hypothesis management” lets readers add metadata to pick out the assertions and the evidence. (It uses RDF.) You can visualize the network of knowledge.
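[Roughly what an RDF-backed annotation amounts to — triples tying an annotation to the fragment it targets and to an ontology term. The predicate names below are loosely modeled on the Open Annotation vocabulary, not Domeo’s exact output:]

```python
# Each triple is (subject, predicate, object); "ex:", "oa:", "obo:",
# and "dc:" are illustrative namespace prefixes.
triples = [
    ("ex:ann1", "oa:hasTarget", "ex:article7#paragraph3"),
    ("ex:ann1", "oa:hasBody", "obo:GO_0007268"),  # an ontology term
    ("ex:ann1", "dc:creator", "ex:paolo"),
]

def objects(subject, predicate, graph):
    """All objects for a (subject, predicate) pair — SPARQL-lite."""
    return [o for s, p, o in graph if s == subject and p == predicate]

print(objects("ex:ann1", "oa:hasTarget", triples))
# ['ex:article7#paragraph3']
```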

It supports open APIs for integrating with other systems, including the Neuroscience Information Framework and Drupal. “Domeo is a platform.” It aims at supporting rich sources, and will add the ability to follow authors and topics, etc., and enable mashups.


[annotation][2b2k] Neel Smith: Scholarly annotation + Homer

Neel Smith of Holy Cross is talking about the Homer Multitext project, a “long term project to represent the transmission of the Iliad in digital form.”


He shows the oldest extant manuscript of the Iliad, which includes 10th-century notes. “The medieval scribes create a wonderful hypermedia” work.

“Scholarly annotation starts with citation.” He says we have a good standard: URNs, which can point to, for example, an ISBN number. His project uses URNs to refer to texts in a FRBR-like hierarchy [works at various levels of abstraction]. These are semantically rich and machine-actionable. You can google a URN and get the object. You can put a URN into a URL for direct Web access. You can embed an image into a Web page via its URN [using a service, I believe].
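[For instance, a CTS URN of the sort the Homer Multitext uses encodes that FRBR-like hierarchy in its colon-separated parts. A rough parse, with my own glosses on the fields:]

```python
def parse_cts_urn(urn):
    """Split a CTS URN like urn:cts:greekLit:tlg0012.tlg001:1.10
    (Iliad book 1, line 10) into its hierarchical parts."""
    parts = urn.split(":")
    assert parts[0] == "urn" and parts[1] == "cts"
    return {
        "namespace": parts[2],  # e.g. greekLit
        "work": parts[3],       # textgroup.work (optionally .version)
        "passage": parts[4],    # e.g. book.line
    }

urn = "urn:cts:greekLit:tlg0012.tlg001:1.10"
print(parse_cts_urn(urn)["passage"])  # 1.10
```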

An annotation is an association. In a scholarly notation, it’s associated with a citable entity. [He shows some great examples of the possibilities of cross linking and associating.]

The metadata is expressed as RDF triples. Within the Homer project, they’re inductively building up a schema of the complete graph [network of connections]. For end users, this means you can see everything associated with a particular URN. Building a facsimile browser, for example, becomes straightforward, mainly requiring the application of XSL and CSS to style it.

Another example: Mise en page: automated layout analysis. This in-progress project analyzes the layout of annotation info on the Homeric pages.


[annotations][2b2k] Rob Sanderson on annotating digitized medieval manuscripts

Rob Sanderson [twitter:@azaroth42] of Los Alamos is talking about annotating Medieval manuscripts.


He says many Medieval manuscripts are being digitized. The Mellon Foundation is funding many such projects. But these have tended to reinvent the same tech, and have not been designed for interoperability with other projects. So the Digital Medieval Initiative was founded, with a long list of prestigious partners. They thought about what they’d like: distributed, linked data, interoperable, etc. For this they need a shared description format.

The traditional approach is to annotate an image of a page. But it can be very difficult to know which images to annotate; he gives as an example a page that has fold-outs. “The naive assumption is that an image equals a page.” But there may be fragments, or only portions of the page may have been digitized (e.g., the illuminations), etc. There may be multiple images of a page, revealed by multi-spectral imaging. There may be multiple orientations of the page, etc.

The solution? The canvas paradigm. A canvas is an empty space corresponding to the rectangle (or whatever) of the page. You allow rich resources to be associated with it, and allow users to comment. For this, they use Open Annotation. You can specify a choice of images. You can associate text with an area of the canvas. There are lots of different ways to visualize those comments: overlays, side-by-side, etc.
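[A toy rendition of the canvas idea: the canvas is just an empty coordinate space standing in for the page, and images and comments alike are annotations targeting it or a region of it. The structure loosely follows the SharedCanvas/Open Annotation approach, not the exact serialization:]

```python
# The canvas itself carries no image — only an id and dimensions.
canvas = {"id": "canvas/f42r", "width": 1200, "height": 1600}

annotations = [
    # a choice of images for the whole canvas (color vs. multi-spectral)
    {"target": "canvas/f42r",
     "body": {"choice": ["scan-color.jpg", "scan-uv.jpg"]}},
    # a comment tied to one region, media-fragment (xywh) style
    {"target": "canvas/f42r#xywh=100,200,400,60",
     "body": {"text": "Initial rubric, later hand"}},
]

def on_region(annos):
    """Annotations targeting a sub-region rather than the whole canvas."""
    return [a for a in annos if "#xywh=" in a["target"]]

print(len(on_region(annotations)))  # 1
```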

You can build hybrid pages. For example, an old scan might have a new color scan of its illustrations pointing at it. Or you could have a recorded performance of a piece of music pointing at the musical notation.

In summary, the SharedCanvas model uses open standards (HTML 5, Open Annotation, TEI, etc.) and can be implemented distributed across repositories, encouraging engagement by domain experts.


June 15, 2012

Interop: The podcast

My Radio Berkman interview of John Palfrey and Urs Gasser about their surprisingly wide-ranging book Interop is now up, as is the video of their Berkman book talk…


May 30, 2012

Interop: The Book

John Palfrey and Urs Gasser are giving a book talk at Harvard about their new book, Interop. (It’s really good. Broad, thoughtful, engaging. Not at all focused on geeky tech issues.) NOTE: Posted without re-reading


[The next day: Nathan Matias has posted a far better live-blog post of this event.]

JP says the topic of interop seems on the face of it like it should be "very geeky and very dull." He says the book started out fairly confined, about the effect of interop on innovation. But as they worked on it, it got broader. E.g., the Facebook IPO has been spun about the stock price's ups and downs. But from an interop perspective, the story is about why FB was worth $100B or more when its revenues don't indicate any such thing. It's because FB's interop in our lives makes it hard to extract. But from this also come problems, which is why the subtitle of the book Interop talks about its peril.

Likewise, the Flame virus shows that viral outbreaks cannot be isolated easily. We need fire breaks to prevent malware from spreading.

In the book, JP and Urs look at how railroad systems became interoperable. Currency is like that, too: currencies vary but we are able to trade across borders. This has been great for the global economy, but it also creates problems. E.g., the Greek economic meltdown shows the interdependencies of economies.

The book gives a concise def of interop: "The ability to transfer and render useful data and other information across systems (including organizations), applications or components." But that is insufficient. The book sees interop more broadly as "the art and science of working together." The book talks about interop in terms of four layers: data, tech, humans, and institutions.

They view the book as an inquiry, some of which is expressed in a series of case studies and papers.

Urs takes the floor. He's going to talk about a few case studies.

First, how can we make our cities smarter using tech? (Urs shows an IBM video that illustrates how dependent we are on sharing information.) He draws some observations:

  • Solutions to big societal problems increasingly depend on interoperability — from health care to climate change.

  • Interop is not black or white. Many degrees. E.g., power plugs are not interoperable around the world, but there are converters. Or, international air travel requires a lot of interop among the airlines.

  • Interop is a design challenge. In fact, once you've messed up with interop, it's hard to make it right. E.g., it took a long time to fix air traffic control systems because there was a strongly embedded legacy system.

  • There are important benefits, including systems efficiency, user choice, and economic growth.

Urs points to their four-layer model. To make a smart city, the tech the firefighters and police use needs to interop, as do their data. But at the human layer, the language used can vary among branches; e.g., "333" might code one thing for EMTs and another for the police. At the institutional layer, the laws for privacy might not be interoperable, making it hard for businesses to work globally.

Second example: When Facebook opened its APIs so that other apps could communicate with FB, there was a spike in innovation; 4k apps were made by non-FB devs that plug into FB. FB's decision to become more interoperable led to innovation. Likewise for Twitter. "Much of the story behind Twitter is an interop question."

Likewise for Ushahidi; after the Haitian earthquake, it made a powerful platform that enabled people to share and accumulate info, mapping it, across apps and devices. This involved all layers of the interop stack, from data to institutions such as the UN pitching in. (Urs also points to safe2pee.org :)

Observations:

  • There's a cycle of interop, competition, and innovation.

  • There are theories of innovation, including generativity (Zittrain), user-driven innovation (Von Hippel) and small-step innovations (Christensen).

  • Caveat: More interop isn't always good. A highly interop business can take over the market, creating a de facto monopoly, and suppressing innovation.

  • Interop also can help diffuse adoption. E.g., the transition to high def TV: it only took off once TVs were able to interoperate between analog and digital signals.

Example 3: Credit cards are highly interoperable: whatever your buying opportunity is, you can use a selection of cards that work with just about any bank. Very convenient.

Observations:

  • this level of interop comes with costs and risks: identity thefts, security problems, etc.

  • The benefits outweigh the risks

  • This is a design problem

  • More interop creates more problems because it means there are more connection points.

Example 4: Cell phone chargers. Traditionally phones had their own chargers. Why? Europe addressed this by the "Sword of Damocles" approach that said that if the phone makers didn't get their act together, the EC would regulate them into it. The micro-USB charger is now standard in Europe.

Observations:

  • It can take a long time, because of the many actors, legacy problems, and complexity.

  • It's useful to think about these issues in terms of a 2x2 of regulation/non-regulation, and collaborative-unilateral.

JP back up. He is going to talk about libraries and the preservation of knowledge as interop problems. Think about this as an issue of maintaining interop over time. E.g., try loading up one of your floppy disks. The printed version is much more useful over the long term. Libraries find themselves in a perverse situation: with digital copies of books, they can provide much less than with physical books. Five of the 6 major publishers won't let libraries lend e versions. It'd make sense to have new books provided in an open standard format. So, even if libraries could lend the books, people might not have the interoperable tech required to play them. Yet libraries are spending more on e-books, and less on physical. If libraries have digital copies and not physical copies, they are vulnerable to tech changes. How do we ensure that we can continuously update? The book makes a fairly detailed suggestion. But as it stands, as we switch from one format to another over time, we're in worse shape than if we had physical books. We need to address this. "When it comes to climate change, or electronic health records, or preservation of knowledge, interop matters, both as a theory and as a practice." We need to do this by design up front, deciding what the optimal interop is in each case.

Q&A

Q: [doc searls] Are there any places where you think we should just give up?

A: [jp] I’m a cockeyed optimist. We thought that electronic health records in the US was the hardest case we came across.

Q: How does the govt conduct consultations with experts from across the US? What would it take to create a network of experts?

A: [urs] Lots of expert networks have emerged, enabled by tech that fosters human interoperability from the bottom up.
A: [jp] It’s not clear to me that we want that level of consultation. I don’t know that we could manage direct democracy enabled in that way.

Q: What are the limits you’d like to see emerge on interop? I.e., I’m thinking of problems of hyper-coherence in bio: a single species of rice or corn that may be more efficient can turn out, with one blight, to have been a big mistake. How do you build in systems of self-limit?

[urs] We try to address this somewhat in a chapter on diversity, which begins with biodiversity. When we talk about interop, we do not suggest merging or unifying systems. To the contrary, interop is a way to preserve diversity, and prevent fragmentation within diversity. It’s extremely difficult to find the optimum, which varies from case to case, and to decide on which speed bumps to put in place.
[jp] You’ve gone to the core of what we’re thinking about.

Q: Human autonomy, efficiency, and economic growth are three of the benefits you mention, but they can be in conflict with one another. How important are decentralized systems?

[urs] We’re not arguing in favor of a single system, e.g., that we have only one type of cell phone. That’s exactly not what we’re arguing for. You want to work toward the sweet spot of interop.
[jp] They are in tension, but there are some highly complex systems where they coexist. E.g., the Web.

Q: Yes, having a single cell phone charger is convenient. But there may be a performance tradeoff, where you can’t choose the optimum voltage if you standardize on 5V. And an innovation deficit: you won’t get magnetic plugs, etc.

[urs] Yes. This is one of the potential downsides of interop. It may lock you in. When you get interop by choosing a standard, you freeze the standard for the future. So one of the additional challenges is: how can we incorporate mechanisms of learning into standards-setting?
Cell phone chargers don’t have a lot of layers on top of them, so the standardization doesn’t have quite the ripples through generativity. And that’s what the discussion should be about.
