Joho the Blog: nlp Archives

August 14, 2009

Search Pidgin

I know I’m not the only one who’s finding WolframAlpha sometimes frustrating because I can’t figure out the magic words to use to invoke the genii. To give just one example, I can’t figure out how to see the frequency of the surnames Kumar and Weinberger compared side-by-side in WolframAlpha’s signature fashion. It’s a small thing because “surname Kumar” and “surname Weinberger” will get you info about each individually. But over and over, I fail to guess the way WolframAlpha wants me to phrase the question.

Search engines are easier because they have already trained us how to talk to them. We know that we generally get the same results whether or not we include stop words like “when” and “the,” or question marks. We eventually learn that quoting a phrase searches for exactly that phrase. We may even learn that in many engines, putting a dash in front of a word excludes pages containing it from the results, or that we can do marvelous and magical things with prefixes that end in a colon, such as site: and define:. We also learn the semantics of searching: if you want to find out the name of that guy who’s Ishmael’s friend in Moby-Dick, you’ll do best to include some words likely to be on the same page, so “What was the name of that guy in Moby-Dick who was the hero’s friend?” is far worse than “Moby-Dick harpooner.” I have no idea what the curve of query sophistication looks like, but most of us have been trained to one degree or another by the search engines that are our masters and our betters.
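The operators described above (quoted phrases, dash-exclusion, colon prefixes) are simple enough to sketch as a tiny parser. This is just an illustration of the pidgin’s grammar, not any actual search engine’s parsing logic:

```python
import re

def parse_query(query):
    """Split a search query into quoted phrases, excluded terms,
    colon-prefix operators (like site: or define:), and plain words."""
    tokens = re.findall(r'"[^"]+"|\S+', query)
    parsed = {"phrases": [], "excluded": [], "operators": {}, "words": []}
    for tok in tokens:
        if tok.startswith('"') and tok.endswith('"'):
            # "exact phrase" search
            parsed["phrases"].append(tok.strip('"'))
        elif tok.startswith('-'):
            # -word excludes pages containing the word
            parsed["excluded"].append(tok[1:])
        elif ':' in tok:
            # prefix:value operators such as site: and define:
            op, _, value = tok.partition(':')
            parsed["operators"][op] = value
        else:
            parsed["words"].append(tok)
    return parsed
```

So `parse_query('"call me ishmael" harpooner -whale site:gutenberg.org')` separates the phrase, the excluded term, the site: operator, and the bare word — which is more or less the grammar we’ve all internalized without anyone ever teaching it to us.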

In short, we’re being taught a pidgin language — a simplified language for communicating across cultures. In this case, the two cultures are humans and computers. I only wish the pidgin were more uniform and useful. Google has enough dominance in the market that its syntax influences other search engines. Good! But we could use some help taking the next step, formulating more complex natural language queries in a pidgin that crosses application boundaries, and that isn’t designed for standard database queries.

Or does this already exist?



February 13, 2008

Reuters Semantic Web Web service

Let me disambiguate that title: Reuters is offering a Web service, called Calais, that will parse text and return it in a form (RDF) that can be utilized by Semantic Web applications. It uses natural language processing (from ClearForest) to find structures of meaning such as places, jobs, facts, events, etc. It apparently has its own metadata schema, but it allows users to extend it. It’s an open API, and Reuters is being quite generous in how much they’ll let you submit during this beta period. It’s English only for now, although they plan to support other languages, opening the exciting prospect of being able to find items of interest in languages you don’t understand via a unified metadata framework.
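To make the idea concrete: a service like this takes plain text and returns RDF describing the entities it found, which an application can then query. Here’s a minimal sketch of consuming such output with Python’s standard library — note that the namespace, element names, and sample data below are hypothetical illustrations, not Calais’s actual schema:

```python
import xml.etree.ElementTree as ET

# Illustrative RDF/XML of the sort an entity-extraction service might
# return. The "ex:" namespace and element names are invented for this
# example; Calais's real metadata schema is its own.
SAMPLE_RDF = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/entities/">
  <rdf:Description rdf:about="http://example.org/doc#e1">
    <ex:entityType>Person</ex:entityType>
    <ex:name>Herman Melville</ex:name>
  </rdf:Description>
  <rdf:Description rdf:about="http://example.org/doc#e2">
    <ex:entityType>City</ex:entityType>
    <ex:name>New Bedford</ex:name>
  </rdf:Description>
</rdf:RDF>"""

def entities_by_type(rdf_xml):
    """Group extracted entity names by their declared type."""
    ns = {"rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
          "ex": "http://example.org/entities/"}
    root = ET.fromstring(rdf_xml)
    result = {}
    for desc in root.findall("rdf:Description", ns):
        etype = desc.findtext("ex:entityType", namespaces=ns)
        name = desc.findtext("ex:name", namespaces=ns)
        result.setdefault(etype, []).append(name)
    return result
```

Calling `entities_by_type(SAMPLE_RDF)` groups the names under “Person” and “City” — the kind of structured meaning that makes text findable and processable in the way the post describes.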

I’m going by the site’s FAQ. I haven’t tried it and can’t tell how well it works, how accurate it is, how comprehensive or detailed its metadata are, and how much post-processing cleanup users will want to provide (which of course depends on the application). There are some points I just don’t understand, such as the claim “Calais carries your own metadata anywhere in the content universe.” But if it works within some reasonable definition of “works,” and if it gets widely adopted, Calais could make a lot more information a lot easier to find, and to process for further meaning.
