Joho the Blog: "platforms" Archives (Page 2 of 2)

November 21, 2014

APIs are magic

(This is cross-posted at Medium.)

Dave Winer recalls a post of his from 2007 about an API that he’s now revived:

“Because Twitter has a public API that allows anyone to add a feature, and because the NY Times offers its content as a set of feeds, I was able to whip up a connection between the two in a few hours. That’s the power of open APIs.”

Ah, the power of APIs! They’re a deep magic that draws upon five skills of the Web as Mage:

First, an API matters typically because some organization has decided to flip the default: it assumes data should be public unless there’s a reason to keep it private.

Second, an API works because it provides a standard, or at least well-documented, way for an application to request that data.

Third, open APIs tend to be “RESTful,” which means that they work using the normal Web way of proceeding (i.e., Web protocols). All you or your program have to do is go to the API’s site using a standard URL of the sort you enter in a browser. The site comes back not with a Web page but with data. For example, click on this URL (or paste it into your browser) and you’ll get data from Wikipedia’s API: http://en.wikipedia.org/w/api.php?action=query&titles=San_Francisco&prop=images&imlimit=20&format=jsonfm. (This is from the Wikipedia API tutorial.)
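To show how little code such a call takes, here is a minimal Python sketch (standard library only) that builds the same kind of query URL, using format=json instead of jsonfm so the response is machine-readable, and extracts the image titles. The User-Agent string is an arbitrary placeholder.

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def build_url(title, limit=20):
    """Build the same kind of query URL as in the example above,
    but with format=json (machine-readable) instead of jsonfm."""
    params = {
        "action": "query",
        "titles": title,
        "prop": "images",
        "imlimit": limit,
        "format": "json",
    }
    return API + "?" + urllib.parse.urlencode(params)

def list_images(title):
    """Call the API and return the titles of the page's image files."""
    req = urllib.request.Request(
        build_url(title),
        headers={"User-Agent": "api-demo/0.1"},  # placeholder UA string
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [img["title"]
            for page in data["query"]["pages"].values()
            for img in page.get("images", [])]
```

Calling `list_images("San_Francisco")` returns the same file names the jsonfm URL shows in the browser, just as plain data a program can use.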

Fourth, you need people anywhere on the planet who have ideas about how that data can be made more useful or delightful. (cf. Dave Winer.)

Fifth, you need a worldwide access system that makes the results of that work available to everyone on the Internet.

In short, APIs show the power of a connective infrastructure populated by ingenuity and generosity.

In shorter shortness: APIs embody the very best of the Web.


October 13, 2014

Library as starting point

A new report on Ithaka S+R’s annual survey of libraries suggests that library directors are committed to libraries being the starting place for their users’ research, but that the users are not in agreement. This calls into question the expenditures libraries make to achieve that goal. (Hat tip to Carl Straumsheim and Peter Suber.)

The question is good. My own opinion is that libraries should let Google do what it’s good at, while they focus on what they’re good at. And libraries are very good indeed at particular ways of discovery. The goal should be to get the mix right, not to make sure that libraries are the starting point for their communities’ research.

The Ithaka S+R survey found that “The vast majority of the academic library directors…continued to agree strongly with the statement: ‘It is strategically important that my library be seen by its users as the first place they go to discover scholarly content.'” But the survey showed that only about half think that that’s happening. This gap can be taken as room for improvement, or as a sign that the aspiration is wrongheaded.

The survey confirms that many libraries have responded to this by moving to a single-search-box strategy, mimicking Google. You just type in a couple of words about what you’re looking for and it searches across every type of item and every type of system for managing those items: images, archival files, books, maps, museum artifacts, faculty biographies, syllabi, databases, biological specimens… Just like Google. That’s the dream, anyway.

I am not sold on it. Roger Schonfeld, the report’s author, cites Lorcan Dempsey, who is always worth listening to:

Lorcan Dempsey has been outspoken in emphasizing that much of “discovery happens elsewhere” relative to the academic library, and that libraries should assume a more “inside-out” posture in which they attempt to reveal more effectively their distinctive institutional assets.

Yes. There’s no reason to think that libraries are going to be as good at indexing diverse materials as Google et al. are. So, libraries should make it easier for the search engines to do their job. Library platforms can help. So can Schema.org as a way of enriching HTML pages about library items so that the search engines can easily recognize the library item metadata.
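As a concrete sketch of that enrichment: a page about a library item can carry a small Schema.org description as embedded JSON-LD, which the major search engines parse. The bibliographic details and the helper function below are invented for illustration; the Book type and the application/ld+json script tag are the actual Schema.org conventions.

```python
import json

# A Schema.org description of a (hypothetical) library item.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Walden",
    "author": {"@type": "Person", "name": "Henry David Thoreau"},
    "datePublished": "1854",
    "inLanguage": "en",
}

def to_script_tag(item):
    """Render the record as the <script> element a page template
    would embed in the item's HTML page."""
    return ('<script type="application/ld+json">\n'
            + json.dumps(item, indent=2)
            + "\n</script>")

print(to_script_tag(record))
```

Emitting this alongside the human-readable page costs the library almost nothing and lets the search engines recognize the item metadata without any custom integration.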

But assuming that libraries shouldn’t outsource all of their users’ searches, then what would best serve their communities? This is especially complicated since the survey reveals that preference for the library web site vs. the open Web varies based on just about everything: institution, discipline, role, experience, and whether you’re exploring something new or keeping up with your field. This leads Roger to provocatively ask:

While academic communities are understood as institutionally affiliated, what would it entail to think about the discovery needs of users throughout their lifecycle? And what would it mean to think about all the different search boxes and user login screens across publishes [sic] and platforms as somehow connected, rather than as now almost entirely fragmented? …Libraries might find that a less institutionally-driven approach to their discovery role would counterintuitively make their contributions more relevant.

I’m not sure I agree, in part because I’m not entirely sure what Roger is suggesting. If it’s that libraries should offer an experience that integrates all the sources scholars consult throughout the lifecycle of their projects or themselves, then, I’d be happy to see experiments, but I’m skeptical. Libraries generally have not shown themselves to be particularly adept at creating grand, innovative online user experiences. And why should they be? It’s a skill rarely exhibited anywhere on the Web.

If designing great Web experiences is not a traditional strength of research libraries, the networked expertise of their communities is. So is the library’s uncompromised commitment to serving its community’s interests. A discovery system that learns from its community can do something that Google cannot: it can find connections that the community has discerned, and it can return results that are particularly relevant to that community. (It can make those connections available to the search engines also.)

This is one of the principles behind the Stacklife project that came out of the Harvard Library Innovation Lab that until recently I co-directed. It’s one of the principles of the Harvard LibraryCloud platform that makes Stacklife possible. It’s one of the reasons I’ve been touting a technically dumb cross-library measure of usage. These are all straightforward ways to start to record and use information about the items the community has voted for with its library cards.

That is just the start. Anonymization and opt-in could provide rich sets of connections and patterns of usage. Imagine if we could know what works librarians recommend in response to questions. Imagine if we knew which works were being clustered around which topics in lib guides and syllabi. (Support the Open Syllabus Project!) Imagine if we knew which books were being put on lists by faculty and students. Imagine if we knew what books were on participating faculty members’ shelves. Imagine if we could learn which works the community thinks are awesome. Imagine if we could do this across institutions so that communities could learn from one another. Imagine if we could do this with data structures that support wildly, messily linked sources, many of them within the library but many of them outside of it. (Support Linked Data!)

Let the Googles and Bings do what they do better than any sane person could have imagined twenty years ago. Let libraries do what they have been doing better than anyone else for centuries: supporting and learning from networked communities of scholars, librarians, and students who together are a profound source of wisdom and working insight.


June 20, 2014

[platform] Unreal Tournament 2014 to provide market for mods

According to an article in PC Gamer (August 2014, Ben Griffin, p. 10), Epic Games’ Unreal Tournament 2014 will make “every line of code, every art asset and animation…available for download.” Users will be able to create their own mods and sell them through a provided marketplace. “Epic, naturally, gets a cut of the profits.”

Steve Polge, project lead and senior programmer, said “I believe this development model gives us the opportunity to build a much better balanced and finely tuned game, which is vital to the long-term success of a competitive shooter.” He points out that players already contribute to design discussions.


June 1, 2014

Oculus Riiiiiiiiiift

At the Tel Aviv headquarters of the Center for Educational Technology, an NGO I’m very fond of because of its simultaneous dedication to improving education and its embrace of innovative technology, I got to try an Oculus Rift.

They put me on a virtual roller coaster. My real knees went weak.

Holy smokes.

[photo: wearing an Oculus Rift]

 


Earlier, I gave a talk at the Israeli Wikimedia conference. I was reminded — not that I actually need reminding — how much I like being around Wikipedians. And what an improbable work of art is Wikipedia.


[liveblog] Jan-Bart de Vreede at Wikimedia Israel

I’m at the Israeli Wikimedia conference. The chair of the Wikimedia Foundation, Jan-Bart De Vreede, is being interviewed by Shizaf Rafaeli.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jan introduces himself. Besides being the chair, in the Netherlands he works on open educational resources at Kennisnet. He says that the Wikimedia Foundation is quite small compared to other organizations like it. Five members are elected by the community (anyone with enough edits can vote), there are four appointed members, and Jimmy Wales.

Q: The Foundation is based on volunteers, and it has a budget. What are the components of the future for Wikipedia?

A: We have to make sure we get the technology to the place where we’re prepared for the future. And how can we enable the volunteers to do whatever they want, to achieve our mission of being the sum of all knowledge, which is a high bar? Enabling volunteers is the highest-impact thing that we can do.

Q: Students just did a presentation here based on the idea that Wikipedia already has too much information.

A: It’s not up to us to decide how the info is consumed. We should make sure that the data is available to be presented any way people want to. We are moving toward WikiData: structured data and the relationship among that data. How can we make it easier for people to add data to WikiData without necessarily requiring people to edit pages? How can we enable people to tag data? Can we use that to learn what people find relevant?

Q: What’s most important?

A: WikiData. Then Wikipedia Zero, making WP access available in developing parts of the globe. We’re asking telecoms to provide free access to Wikipedia on mobile phones.

Q: You’re talking with the Israeli Minister of Education tomorrow. About what?

A: We have a project of Wikipedia for children, written by children. Children can have an educational experience — e.g., interview a Holocaust survivor — and share it so all benefit from it.

Q: Any interesting projects?

A: Wiki Monuments [link ?]. Wiki Air. So many ideas. So much more to do. The visual editor will help people make edits. But we also have to make sure that new editors are welcomed and are treated kindly. Someone once told Jan that she “just helps new editors,” and he replied that that scales much better than creating your own edits.

Q: I’m surprised you didn’t mention reliability…

A: Books feel trustworthy. The Net automatically brings a measure of distrust, and rightly so. Wikipedia over the years has come to feel trustworthy, but that requires lots of people looking at it and fixing it when it’s wrong.

Q: 15,000 Europeans have applied to have their history erased on Google. The Israeli Supreme Court has made a judgment along the same lines. What’s Wikipedia’s stance on this?

A: As we understand it, the right to be forgotten applies to search engines, not to source articles about you. Encyclopedia articles are about what’s public.

Q: How much does the neutral point of view count?

A: It’s the most important thing, along with being written by volunteers. Some Silicon Valley types have refused to contribute money because, they say, we have a business model that we choose not to use: advertising. We decided it’d be more important to get many small contributions than to corrode NPOV by taking money.

Q: How about paid editing so that we get more content?

A: It’s a tricky thing. There are public and governmental institutions that pay employees to provide Open Access content to Wikipedia and Wiki Commons. On the other hand, there are organizations that take money to remove negative information about their clients. We have to make sure that there’s a way to protect the work of genuine volunteers from this. But even when we make a policy about it, the local Wikipedia units can override it.

Q: What did you think of our recent survey?

A: The Arab population was much more interested in editing Wikipedia than the Israeli population. How do you enable that? It didn’t surprise me that women are more interested in editing. We have to work against our systemic bias.

Q: Other diversity dimensions we should pay more attention to?

A: Our concept of encyclopedia itself is very Western. Our idea of citations is very Western and academic. Many cultures have oral citations. Wikipedia doesn’t know how to work with that. How can we accommodate knowledge that’s been passed down through generations?

Q&A

Q: Wikipedia doesn’t allow original research. Shouldn’t there be an open access magazine for new scientific research?

A: There are a lot of OA efforts. If more are needed, they should start with volunteers.

Q: Academics and Wikipedia have a touchy relationship. Wikipedia has won that battle. Isn’t it time to gear up for the next battle, i.e., creating open access journals?

A: There are others doing this. You can always upload and publish articles, if you want [at Wiki Commons?].


April 27, 2014

The future is a platform

Here’s the video of my talk at The Next Web in Amsterdam on Friday. I haven’t watched it because I don’t like watching me and neither should you. But I would be interested in your comments about what I’m feeling my way toward in this talk.

It’s about what I think is a change in how we think about the future.


November 9, 2013

Aaron Swartz and the future of libraries

I was unable to go to our local Aaron Swartz Hackathon, one of twenty around the world, because I’d committed (very happily) to give the after dinner talk at the University of Rhode Island Graduate Library and Information Studies 50th anniversary gala last night.

The event brought together an amazing set of people, including Senator Jack Reed, the current and most recent presidents of the American Library Association, Joan Ress Reeves, 50 particularly distinguished alumni (out of the three thousand (!) who have been graduated), and many, many more. These are heroes of libraries. (My cousin’s daughter, Alison Courchesne, also got an award. Yay, Alison!)

Although I’d worked hard on my talk, I decided to open it differently. I won’t try to reproduce what I actually said because the adrenalin of speaking in front of a crowd, especially one as awesome as last night’s, wipes out whatever short term memory remains. But it went very roughly something like this:

It’s awesome to be in a room with teachers, professors, researchers, a provost, deans, and librarians: people who work to make the world better…not to mention the three thousand alumni who are too busy do-ing to be able to be here tonight.

But it makes me remember another do-er: Aaron Swartz, the champion of open access, open data, open metadata, open government, open everything. Maybe I’m thinking about Aaron tonight because today is his birthday.

When we talk about the future of libraries, I usually promote the idea of libraries as platforms — platforms that make openly available everything that libraries know: all the data, all the metadata, what the community is making of what they get from the library (privacy accommodated, of course), all the guidance and wisdom of librarians, all the content especially if we can ever fix the insane copyright laws. Everything. All accessible to anyone who wants to write an application that puts it to use.

And the reason for that is that in my heart I don’t think librarians are going to invent the future of libraries. It’s too big a job for any one group. It will take the world to invent the future of libraries. It will take 14-year-olds like Aaron to invent the future of libraries. We need to supply them with platforms that enable them.

I should add that I co-direct a Library Innovation Lab where we do work that I’m very proud of. So, of course libraries will participate in the invention of their future. But it’ll take the world — a world that contains people with the brilliance and commitment of an Aaron Swartz — to invent that future fully.

 


Here are wise words delivered at an Aaron Hackathon last night by Carl Malamud: Hacking Authority. For me, Carl is reminding us that the concept of hacking over-promises when the changes threaten large institutions that represent long-held values and assumptions. Change often requires the persistence and patience that Aaron exhibited, even as he hacked.


October 24, 2013

E-Dickinson

The Emily Dickinson archive went online today. It’s a big deal not only because of the richness of the collection and the excellent technical work by the Berkman Center, but also because it is a good sign for Open Access. Amherst, one of the major contributors, had open accessed its Dickinson material earlier, and now the Harvard University Press has open accessed some of its most valuable material. Well done!

The collection makes available in one place the great Dickinson collections held by Amherst, Harvard, and others. The metadata for the items is (inevitably) inconsistent in terms of its quantity, but the system has been tuned so that items with less metadata are not systematically overwhelmed by its search engine.

The Berkman folks tell me that they’re going to develop an open API. That will be extra special cool.


March 28, 2013

[annotation][2b2k] Paolo Ciccarese on the Domeo annotation platform

Paolo Ciccarese begins by reminding us just how vast the scientific literature is. We can’t possibly read everything we should. But “science is social” so we rely on each other, and build on each other’s work. “Everything we do now is connected.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Today’s media do provide links, but not enough. Things are so deeply linked. “How do we keep track of it?” How do we communicate with others so that when they read the same paper they get a little bit of our mental model, and see why we found the article interesting?

Paolo’s project — Domeo [twitter:DomeoTool] — is a web app for “producing, browsing, and sharing manual and semi-automatic (structured and unstructured) annotations, using open standards.” Domeo shows you an article and lets you annotate fragments. You can attach a tag or an unstructured comment. The tag can be defined by the user or by a defined ontology. Domeo doesn’t care which ontologies you use, which means you could use it for annotating recipes as well as science articles.

Domeo also enables discussions; it has a threaded messaging facility. You can also run text mining and entity recognition systems (Calais, etc.) that automatically annotate the work with the entities they recognize, which helps with search, understanding, and curation. This too can be a social process. Domeo lets you keep the annotation private or share it with colleagues, groups, communities, or the Web. Also, Domeo can be extended. In one example, it produces information about experiments that can be put into a database where it can be searched and linked up with other experiments and articles. Another example: “hypothesis management” lets readers add metadata to pick out the assertions and the evidence. (It uses RDF.) You can visualize the network of knowledge.
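To make the shape of such an annotation concrete, here is a minimal Python sketch of the pieces just described: a target fragment, an unstructured comment, tags drawn from some ontology, and a sharing level. The class, field names, and example values are mine, not Domeo's actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Annotation:
    """One annotation on a fragment of a document (illustrative only)."""
    document_uri: str                    # the article being annotated
    exact: str                           # the quoted text fragment
    comment: Optional[str] = None        # unstructured note
    tags: List[str] = field(default_factory=list)  # terms, e.g. from an ontology
    shared_with: str = "private"         # private / group / community / web

# A made-up example in the spirit of annotating a science article:
ann = Annotation(
    document_uri="http://example.org/articles/123",
    exact="the hippocampus is involved in memory consolidation",
    comment="Key claim; compare with earlier imaging studies.",
    tags=["ex:hippocampus"],             # hypothetical ontology term
    shared_with="group",
)
```

Because the tag is just a term from whatever vocabulary the user chooses, the same structure serves recipes as well as neuroscience papers, which is the point of keeping the tool ontology-agnostic.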

It supports open APIs for integrating with other systems, including the Neuroscience Information Framework and Drupal. “Domeo is a platform.” It aims at supporting rich sources, and will add the ability to follow authors and topics, and to enable mashups.


[annotation][2b2k] Philip Desenne

I’m at a workshop on annotation at Harvard. Philip Desenne is giving one of the keynotes.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

We’re here to talk about the Web 3.0, Phil says — making the Web more fully semantic.

Phil says that we need to rewrite the definition of annotation. We should be talking about hyper-nota: digital, media-rich annotations. Annotations are important, he says. Try to imagine social networks without the ratings, stars, comments, etc. Annotations also spawn new scholarship.

The new digital annotation paradigm is the gateway to Web 3.0: connecting knowledge through a common semantic language. There are many annotation tools out there. “All are very good in their own media…But none of them share a common model to interoperate.” That’s what we’re going to work on today. “The Open Annotation Framework” is the new digital paradigm. But it’s not a simple model; it’s a complex framework. Phil shows a pyramid: Create / Search / Seek patterns / Analyze / Publish / Share. [Each of these has multiple terms and ideas that I didn’t have time to type out.]

Of course we need to abide by open standards. He points to W3C, Open Source and Creative Commons. And annotations need to include multimedia notes. We need to be able to see annotations relating to one another, building networks across the globe. [Knowledge networks FTW!] Hierarchies of meaning allow for richer connections. We can analyze text and other media and connect that metadata. We can look across regional and cultural patterns. We can publish, share and collaborate. All if we have a standard framework.

For this to happen we need a standardized referencing system for segments or fragments of a work. We also need to be able to export annotations into standard formats such as XML TEI.

Lots of work has been done on this: RDF models and ontologies, the Open Annotation Community Group, the Open Annotation Model. “The Open Annotation Model is the common language.”
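To give a feel for that common language: in the Open Annotation model an annotation is a resource that links a body (the note) to a target (the annotated work), with a selector pinning down the fragment. Here is a minimal sketch of that structure as JSON-LD built in Python; the URIs and the exact @context value are placeholders for illustration, not a canonical document.

```python
import json

# A minimal annotation in the spirit of the Open Annotation (oa:) model.
# URIs and the @context value are illustrative placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/oa-context-20130208.json",
    "@id": "http://example.org/anno/1",
    "@type": "oa:Annotation",
    "hasBody": {
        "@type": "cnt:ContentAsText",
        "chars": "A marginal symbol pointing at another scroll, updated for the Web.",
    },
    "hasTarget": {
        "@type": "oa:SpecificResource",
        "hasSource": "http://example.org/works/iliad",
        "hasSelector": {
            "@type": "oa:TextQuoteSelector",
            "exact": "Sing, O goddess, the anger of Achilles",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

The TextQuoteSelector is what lets two independent tools agree on which fragment is being discussed, which is exactly the interoperability the talk is arguing for.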

If we don’t adopt standards for annotation we’ll have disassociated, stagnant info, and decreased innovation in research, teaching, and learning. This is especially an issue when one thinks about MOOCs — a course with 150,000 students creating annotations.

Connective Collective Knowledge has existed for millennia, he says. As far back as Aristarchus, marginalia had symbols pointing to different scrolls in the Library of Alexandria. Where are the connected collective knowledge systems today? Who is networking the commentaries on digital works? “Shouldn’t this be the mission of the 21st-century library?”

Harvard has a portal for info about annotations: annotations.harvard.edu

