Joho the Blog » linked data

March 18, 2014

Dean Krafft on the Linked Data for Libraries project

Dean Krafft, Chief Technology Strategist for Cornell University Library, is at Harvard to talk about the Mellon-funded Linked Data for Libraries (LD4L) project he leads. The grantees include Cornell, Stanford, and the Harvard Library Innovation Lab (which is co-sponsoring the talk with ABCD). (I provide nominal leadership for the Harvard team working on this.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Dean will talk about the LD4L project by talking about its building blocks. [Dean had lots of information and a lot on the slides. I did a particularly bad job of capturing it.]

LD4L

Mellon last December put up $1M for a 2-year project that will end in Dec. 2015. The participants are Cornell, Stanford, and the Harvard Library Innovation Lab.

Cornell: Dean Krafft, Jon Corson-Rickert, Brian Lowe, Simeon Warner

Stanford: Tom Cramer, Lynn McRae, Naomi Dushay, Philip Schreur

Harvard: Paul Deschner, Paolo Ciccarese, me

Aim: Create a Scholarly Resource Semantic Info Store model that works within and across institutions to create a network of Linked Open Data to capture the intellectual value that librarians and other domain experts add to info, patterns of usage, and more.

LD4L wants to have a common language for talking about scholarly materials. Outcomes:

  • Create a SRSIS ontology sufficiently expressive to encompass catalog metadata and other contextual elements

  • Create a SRSIS semantic editing, display, and discovery system based on Vitro to support the incremental ingest of semantic data from multiple info sources

  • Create a Project Hydra-compatible interface to SRSIS, an active triples software component to facilitate easy use of the data

Why use Linked Data?

LD puts the emphasis on the relationships. Everything is related.

Benefits: The connections have meaning. And it supports “many dimensions of nearness.”

Dean explains RDF triples. They connect subjects with objects via a consistent set of relationships.
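As a concrete (and entirely hypothetical) illustration of the triple model: each statement is a (subject, predicate, object) tuple, and a query is a pattern with wildcards. A minimal Python sketch, with invented names rather than real URIs:

```python
# Toy triple store: each statement is a (subject, predicate, object)
# tuple. Names like "ex:Krafft" are illustrative, not real URIs.
triples = {
    ("ex:Krafft", "ex:worksAt",  "ex:Cornell"),
    ("ex:Krafft", "ex:leads",    "ex:LD4L"),
    ("ex:LD4L",   "ex:fundedBy", "ex:Mellon"),
}

def match(s=None, p=None, o=None):
    """Return every triple matching the pattern; None is a wildcard."""
    return {t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)}
```

The “many dimensions of nearness” follows from the model: any element of any triple can be the starting point of a query (everything about a subject, everything funded by Mellon, etc.), with no schema change required.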

A nice feature of LOD is that the same URL that points to a human-readable page can also be taken as a query to show the machine-readable data.
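That dual use of one URL works via HTTP content negotiation: the server inspects the client’s Accept header and picks a representation. A hypothetical, much-simplified sketch of the server-side dispatch (the returned strings are placeholders):

```python
def respond(accept_header):
    """Hypothetical dispatch: the same URL returns a human-readable
    page or machine-readable triples, depending on the HTTP Accept
    header the client sends."""
    if "text/html" in accept_header:
        # A browser asked: serve the human-readable page.
        return "<html><body>Dean Krafft, Cornell</body></html>"
    # An RDF serialization was asked for (Turtle, RDF/XML, ...):
    # serve the underlying data for the same resource.
    return "ex:Krafft ex:worksAt ex:Cornell ."
```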

There’s commonality among references: shared types, shared relationships, shared instances defined as types and linked by relationships.

LOD is great for sharing data. There’s a startup cost, but as you share more data repositories and types, the cost/effort goes up linearly, not at the steeper rate of traditional approaches.

Dean shows the mandatory graphic of a cloud of LOD sources.

Building Blocks

VIVO: VIVO was the inspiration for LD4L. It makes info about researchers discoverable. It’s software, data, a standard, and a community. It connects scientists and scholars through their research and scholarship. It provides self-describing data via shared ontologies. It provides search results enhanced by what it knows. And it does simple reasoning.

VIVO is built on the Vitro platform. It has ingest tools, ontology editing tools, instance editing tools, and a display system. It models people, organizations, grants, etc., the relationships among them, and links to URIs elsewhere. It describes people in the process of doing research. It’s discipline-neutral. It uses existing domain terminology to describe the content of research. It’s modular, flexible, and extensible.

VIVO harvests much of its data automatically from verified sources.

It takes a complexity of inputs and makes them discoverable and usable.

All the data in VIVO is public and visible.

Dean shows us a page, and then traverses the network of interrelated authors.

He points out that other institutions are able to mash up their data with VIVO. E.g., the ICTS has info about 1.2M publications that they’ve integrated with VIVO’s data. E.g., you can see research papers created with federal funding but not deposited in PubMed Central.

VIVO is extensible. LASP extended VIVO to include spacecraft. Brown U. is extending it to support the humanities and artistic works, adding “performances,” for example.

The LD4L ontology will use components of the VIVO-ISF ontology. When new ontologies are needed, it will draw upon VIVO design patterns. The basis for SRSIS implementations will be Vitro plus LD4L ontologies. The multi-institution LD4L demo search will adapt VIVOsearch.org.

The 8M items at Cornell have generated billions of triples.

Project Hydra. Hydra is a tech suite and a partnership. You put your data there and can have many different apps. 22 institutions are collaborating.

Fundamental assumption: No single system can provide the full range of repository-based solutions for a given institution’s needs, yet sustainable solutions do require a common repository. Hydra is now building a set of “heads” (UIs) for media, special collections, archives, etc.

Fundamental assumption: No single institution can build the full range of what it needs, so you need to work with others.

Hydra has an open architecture with many contributors to a common core. There are collaboratively built solution bundles.

Fedora, Ruby on Rails for Blacklight, Solr, etc.

LD4L will create an ActiveTriples Hydra component to mimic ActiveFedora.

Our Lab’s LibraryCloud/ShelfRank is another core element. It provides a model for access to library data, and a concrete example for creating an ontology for usage.

LD4L – the project

We’re now developing use cases. We have 32 on the wiki. [See the wiki for them]

We’re identifying data sources: Biblio, person (VIVO), usage (LibCloud, circ data, BorrowDirect circ), collections (EAD, IRs, SharedShelf, Olivia, arbitrary OAI-PMH), annotations (CuLLR, Stanford DMW, Bloglinks, DBpedia LibGuides), subjects and authorities (external sources). Imagine being able to look at usage across 50 research libraries…

Assembling the Ontology:

VIVO, Open Annotation, SKOS

BibFrame, BIBO, FaBIO

PROV-O, PAV

FOAF, PROVE, Schema.org

CreativeCommons, Dublin Core

etc.

Whenever possible, the project will use existing ontologies.

Timeline: By the end of the year we hope to be piloting initial ingests.

Workshop: Jan. 2015. 10-12 institutions. Aim: get feedback, make a “sales pitch” to other organizations to join in.

June 2015: Pilot SRSIS instances at Harvard and Stanford. Pilot gathering info across all three instances.

Dec. 2015: Instances implemented.

wiki: http://wiki.duraspace.org/display/ld4l

Q&A

Q: Who anointed VIVO a standard?

A: It’s a de facto standard.

Q: SKOS is considered a great start, but to do anything real with it you have to modify it, and if it changes you’re screwed.

A: (Paolo) I think VIVO uses SKOS mainly for terms, not hierarchies. But I’m not sure.

Q: What are ActiveTriples?

A: It’s a Ruby Gem that serves as an interface for Hydra into a Fedora repository. ActiveTriples will serve the same function for a backend triple store. So you can swap different triple stores into the Fedora repository. This is Simeon Warner’s project.
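ActiveTriples itself is a Ruby gem, so the following is only a rough analogue: a minimal Python sketch (all names invented, nothing from the actual project) of the general pattern such a component implements — an object facade whose attribute reads and writes are translated into triples against a swappable backing store:

```python
class TripleBackedResource:
    """Toy object facade over a set of (subject, predicate, object)
    triples. Real systems use URIs for predicates; plain strings keep
    the illustration short."""
    def __init__(self, store, subject):
        # Bypass our own __setattr__ for internal bookkeeping fields.
        object.__setattr__(self, "_store", store)
        object.__setattr__(self, "_subject", subject)

    def __setattr__(self, predicate, value):
        # A property assignment becomes a triple in the backing store.
        self._store.add((self._subject, predicate, value))

    def __getattr__(self, predicate):
        # Called only for attributes not found normally: look up triples.
        values = [o for (s, p, o) in self._store
                  if s == self._subject and p == predicate]
        return values[0] if values else None

store = set()  # stand-in for a swappable triple store
book = TripleBackedResource(store, "ex:book1")
book.title = "Linked Data for Libraries"
```

The point of the pattern is the swap: calling code talks to objects, so the store behind them (Fedora, a standalone triple store, etc.) can be exchanged without touching the callers.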

Q: Does this mean you wouldn’t have to have a Fedora backend to take advantage of Hydra?

A: Yes, that’s part of it.

Q: Are you bringing in GIS linked data?

A: Yes, to the extent that we can and it makes sense to.

A: David Siegel: We have 6M data points from 1.1M Hollis records. LibraryCloud is ingesting them.

Q: What’s the product at the end?

A: We promised Mellon the ontology and instances of LOD based on the ontology at each of the 3 institutions, and search across the three.

Q: Harvard doesn’t have a Fedora backend…

A: We’d like to pull from non-catalog sources. That might well be an OAI-PMH ingest, or some other non-Fedora source.

Q: What is Simeon interested in with regard to Arxiv.org?

A: There isn’t a direct relationship.

Q: He’s also working on ORCID.

A: We have funding to do some level of integration of ORCID and VIVO.

Q: What is the bibliographic scope? BibFrame isn’t really defining items, etc. They’ve pushed it into annotations.

A: We’re interested in capturing some of that. BibFrame is offering most of what we need, but we have to look at each case. Then we communicate with them and hope that BibFrame does most of the work.

Q: Are any of your use cases posit tagging of contents, including by users perhaps with a controlled vocabulary?

A: We’ll be doing tagging at the object level. I’m unsure whether we’re willing to do tagging within the object.

A: [paolo] We assume we don’t have access to the full text.

A: You could always point into our data.

Q: How can we help?

A: We’re accumulating use cases and data sources. If you’re aware of any, let us know.

Q: It’s been hard for libraries to put enough effort into authority control, to associate values comparable across different subject schemes…there’s a lot of work to make things work together. What sort of vocabulary or semantic links will you be using? The hard part is getting values to work across domains.

A: One way to deal with that is to bring together the disparate info. By pulling together enough info, you can sometimes use the network to figure that out. But in general the disambiguation challenge (and text fields are even worse) is not something we’re going to solve.

Q: Are the working groups institutionally based?

A: No. They’re cross-institution.

[I’m very excited about this project, and about the people working on it.]


February 1, 2014

Linked Data for Libraries: And we’re off!

I’m just out of the first meeting of the three universities participating in a Mellon grant — Cornell, Harvard, and Stanford, with Cornell as the grant instigator and leader — to build, demonstrate, and model using library resources expressed as Linked Data as a tool for researchers, students, teachers, and librarians. (Note that I’m putting all this in my own language, and I was certainly the least knowledgeable person in the room. Don’t get angry at anyone else for my mistakes.)

This first meeting, two days long, was very encouraging indeed: it’s a superb set of people, we are starting out on the same page in terms of values and principles, and we enjoyed working with one another.

The project is named Linked Data for Libraries (LD4L) (minimal home page), although that doesn’t entirely capture it, for the actual beneficiaries of it will not be libraries but scholarly communities taken in their broadest sense. The idea is to help libraries make progress with expressing what they know in Linked Data form so that their communities can find more of it, see more relationships, and contribute more of what the communities learn back into the library. Linked Data is not only good at expressing rich relations, it makes it far easier to update the dataset with relationships that had not been anticipated. This project aims at helping libraries continuously enrich the data they provide, and making it easier for people outside of libraries — including application developers and managers of other Web sites — to connect to that data.

As the grant proposal promised, we will use existing ontologies, adapting them only when necessary. We do expect to be working on an ontology for library usage data of various sorts, an area in which the Harvard Library Innovation Lab has done some work, so that’s very exciting. But overall this is the opposite of an attempt to come up with new ontologies. Thank God. Instead, the focus is on coming up with implementations at all three universities that can serve as learning models, and that demonstrate the value of having interoperable sets of Linked Data across three institutions. We are particularly focused on showing the value of the high-quality resources that libraries provide.

There was a great deal of emphasis in the past two days on partnerships and collaboration. And there was none of the “We’ll show ‘em where they got it wrong, by gum!” attitude that in my experience all too often infects discussions on the pioneering edge of standards. So, I just got to spend two days with brilliant library technologists who are eager to show how a new generation of tech, architecture, and thought can amplify the already immense value of libraries.

There will be more coming about this effort soon. I am obviously not a source for tech info; that will come soon and from elsewhere.


June 22, 2013

What I learned at LODLAM

On Wednesday and Thursday I went to the second LODLAM (linked open data for libraries, archives, and museums) unconference, in Montreal. I’d attended the first one in San Francisco two years ago, and this one was almost as exciting — “almost” because the first one had more of a new car smell to it. This is a sign of progress and by no means is a complaint. It’s a great conference.

But, because it was an unconference with up to eight simultaneous sessions, there was no possibility of any single human being getting a full overview. Instead, here are some overall impressions based upon my particular path through the event.

  • Serious progress is being made. E.g., Cornell announced it will be switching to a full LOD library implementation in the Fall. There are lots of great projects and initiatives already underway.

  • Some very competent tools have been developed for converting to LOD and for managing LOD implementations. The development of tools is obviously crucial.

  • There isn’t obvious agreement about the standard ways of doing most things. There’s innovation, re-invention, and lots of lively discussion.

  • Some of the most interesting and controversial discussions were about whether libraries are being too library-centric and not web-centric enough. I find this hugely complex and don’t pretend to understand all the issues. (Also, I find myself — perhaps unreasonably — flashing back to the Standards Wars in the late 1980s.) Anyway, the argument crystallized to some degree around BIBFRAME, the Library of Congress’ initiative to replace and surpass MARC. The criticism raised in a couple of sessions was that Bibframe (I find the all caps to be too shouty) represents how libraries think about data, and not how the Web thinks, so that if Bibframe gets the bib data right for libraries, Web apps may have trouble making sense of it. For example, Bibframe is creating its own vocabulary for talking about properties that other Web standards already have names for. The argument is that if you want Bibframe to make bib data widely available, it should use those other vocabularies (or, more precisely, namespaces). Kevin Ford, who leads the Bibframe initiative, responds that you can always map other vocabs onto Bibframe’s, and while Richard Wallis of OCLC is enthusiastic about the very webby Schema.org vocabulary for bib data, he believes that Bibframe definitely has a place in the ecosystem. Corey Harper and Debra Riley-Huff, on the other hand, gave strong voice to the cultural differences. (If you want to delve into the mapping question, explore the argument about whether Bibframe’s annotation framework maps to Open Annotation.)

  • I should add that although there were some strong disagreements about this at LODLAM, the participants seem to be genuinely respectful.

  • LOD remains really really hard. It is not a natural way of thinking about things. Of course, neither are old-fashioned database schemas, but schemas map better to a familiar forms-based view of the world: you fill in a form and you get a record. Linked data doesn’t even think in terms of records. Even with the new generation of tools, linked data is hard.

  • LOD is the future for library, archive, and museum data.
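The vocabulary-mapping question in the BIBFRAME discussion above — whether a local vocabulary can simply be mapped onto shared namespaces after the fact — can be sketched as a rewrite over triples. The predicate names below are placeholders, not the real BIBFRAME terms:

```python
# Hypothetical mapping from a local vocabulary to shared namespaces.
# The predicate URIs are invented placeholders, not BIBFRAME's terms.
MAPPING = {
    "local:hasTitle":  "dcterms:title",
    "local:hasAuthor": "schema:author",
}

def remap(triples):
    """Rewrite predicates via MAPPING; unmapped predicates pass through."""
    return {(s, MAPPING.get(p, p), o) for (s, p, o) in triples}

data = {("ex:book1", "local:hasTitle", "Moby-Dick"),
        ("ex:book1", "local:shelfMark", "PS2384")}
```

Kevin Ford’s response amounts to saying this rewrite is always available; the critics’ point is that predicates left unmapped (like the shelf mark here) remain opaque to Web apps that only know the shared vocabularies.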


Here’s a list of brief video interviews I did at LODLAM:

June 21, 2013

[lodlam] Kevin Ford on the state of BIBFRAME

Kevin Ford, a principal member of the team behind the Library of Congress’ BIBFRAME effort — a modern replacement for the aging MARC standard — gives an update on its status, and addresses a controversy about whether it’s “webby” enough. (I liveblogged a session about this at LODLAM.)


[lodlam] Kitio Fofack on why Linked Data

Kitio Fofack turned to Linked Data when creating a prototype app that aggregated researcher events. He explains why.


June 20, 2013

[lodlam] Richard Urban on LOD patterns

At the LODLAM conference, Richard Urban suggests that we build a pattern library so that people can identify common problems and common linked data solutions.


[lodlam] Corey Harper on designing LOD with users in mind

I videoed the opening of a session (liveblogged here) at LODLAM about trying to get past thinking about Linked Data as a way of stitching together resources, and instead trying to address user needs. Corey Harper led the session. Here are his opening remarks, recorded with his permission but in very low lighting that makes it look furtive.


[lodlam] Topics for Day 2

Here are the sessions people are proposing for the second day of the LODLAM conference in Montreal:


  • Getty Vocabulary goes open


  • Linked data on mobiles, wearable devices


  • Do cool things with the data sets that you have on your laptop – let’s build stuff!


  • Your tools and solutions


  • NLP for linked open data for libraries, archives, and museums. Data extraction, taxonomy alignment, context extraction, etc.


  • World War I in LOD


  • LOD and accessibility & assistive devices


  • The Pundit software package


  • the KARMA mapping tool


  • Tools and techniques for generating concordances between people


  • Why Schema.org?


  • Copying and synching linked data


  • FRBR and other standards [couldn’t hear]


  • How to create a new generation of LOD professionals. Getting students involved in projects.


  • The future of LODLAM


  • Normalizing data models and licensing models


The official list is here.


June 19, 2013

[lodlam] Dean Krafft on VIVO

Dean Krafft of Cornell talks about the status of VIVO, an interdisciplinary tool to help researchers discover one another.

This is from the LODLAM conference in Montreal.


[lodlam] Focus on helping users

Corey Harper [twitter:chrpr] starts a session by giving a terrific presentation of the problem: Linked data discussions and apps have focused too much on resources instead of on topics, narratives, etc. — what users are using resources to explore. We are not extracting all the value from librarians’ controlled vocabulary.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Some notes from the open discussion. Very sketchy, much choppier than in life, and highly incomplete.

Why not use Solr, i.e., a conventional Lucene-based search index? In part because Solr doesn’t know enough about the context, so a search for “silver” comes back with all sorts of hits without recognizing that some refer to the mineral, some to geo places with “silver” in the name, etc. E.g., if you say “john constable artist birthdate,” linked data can get you the answer. [I typed that into Google. It came back with the answer in big letters.]
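The “silver” point can be made concrete with a toy comparison (data invented for illustration) between a context-free keyword match and a query that also constrains the type — which is essentially what typed, linked data buys you:

```python
# Keyword search vs. typed search over the same toy records.
records = [
    {"label": "silver",        "type": "mineral"},
    {"label": "Silver Spring", "type": "place"},
    {"label": "Silver Surfer", "type": "fictional character"},
]

def keyword_search(q):
    """Context-free match: everything containing the string."""
    return [r["label"] for r in records if q.lower() in r["label"].lower()]

def typed_search(q, rtype):
    """Linked-data-style match: the string AND its declared type."""
    return [r["label"] for r in records
            if q.lower() in r["label"].lower() and r["type"] == rtype]

print(keyword_search("silver"))         # all three hits
print(typed_search("silver", "place"))  # just the place
```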

Linked data can do the sort of thing that reference librarians do: Here’s what you’re looking for, but have you also seen this and this and that?

How do we evaluate the user interfaces we come up with? How do we know if it’s helped someone find something, put something into context, tell a story…?

We have two weird paradigms in the library community: Lucene-based indexes of metadata (e.g., Blacklight) vs. exhibit makers (e.g., Omeka). How to bring those together so exhibits are made through an index, and the flow through them is itself indexed and made findable and re-usable. (And then there’s the walking through a room and discovering relationships among things.)

How do we preserve the value of the subject classifications? [Here’s one idea: Stacklife :) ]

It’s important to keep one of the core functions of catalog: to identify and create identities for resources. A lot of our examples are facts, but in the Humanities what’s our role in maintaining identities around which we can hang relationships and maintain the disagreements among people. How do you help people navigate that problem space?

The Web’s taught us that the only way to find things is through search, but let’s remember the “link” in “linked data”: the ability to find the relationship between things you’ve found. E.g., the Google Knowledge Graph and Google fact panel are doing this to some degree. We’ve lost that, thanks to computers.

People want to have debates and find conflicting information. It’s hard how to bring this into a search interface.

The Digital Mellini project digitized a specialized manuscript and opened it up. Once something is digitized, details that the human eye cannot see, such as marginal notes, become visible.

Other examples of the sort of thing that Corey is talking about:

  • Linking Lives. EAC-CPF (corporate bodies, persons, and families).

  • SNACs [??] (“Facebook for dead people”) mines finding aids to find social relationships.

  • LinkSailor (RIP) traversed many OWL sameAs relationships.

  • CultureSampo (Finnish)

  • Tim Sherratt’s group has something coming out soon

People think that museum web sites are boring. At LODLAM we’re a bunch of data geeks and are the wrong people to be talking about user interfaces. Response: We should take the Apple route and give people what they don’t know they want. We should also be testing our models against how people think about the world.

“I have a lot of data. It’s very sparse and sometimes very concentrated. It’s hard to know what users want from it. I don’t know what’s going to be important to you. So we generate video games, using geodata to create the playing field.” That’s not a retrieval engine, but it’s a way to make use of the factoids.

Read “The Lean Startup.” The Minimum Viable Product is an important idea. Don’t underrate the role of the product owner in shaping a great project. (Me:) Having strong, usable, graphs that take advantage of what libraries know would be helpful.

Who are our clients? Users? Scholars? Developers? A: All of them. Response: Then we’ll fail. Response: Catalogs were designed to manage collections, not for the general public. People have been forced to learn how to use them; you have to understand the collection’s abstraction. And that’s not sustainable.

Our library wants to build the graph. We build simple interfaces to demonstrate the power, but our value is in building the graph.

We don’t want to deliver linked data to users. We want to build the layer between the linked data and the apps. If we do it well, users won’t know or care that there’s linked data underneath it.

We tend to focus on what we think our users should want. It’s an “eat your broccoli” approach to search. E.g., users want social networks, but many scholars resist it because it seems too non-rigorous.

