Joho the Blog » translation

July 10, 2010

[2b2k] Understanding’s web

I’m on a mailing list that discusses the philosopher Martin Heidegger. Many years ago I was a fledgling Heidegger scholar, but now I am on the list strictly as a tourist.

Today someone posted: “If you don’t know German you don’t have a ghost of a chance of understanding Heidegger.” A few people posted immediately in reaction to the “dismissive” tone of the comment. I felt the same way, but then thought, hmm, this is an empirical question, isn’t it? List the people you think understand Heidegger best — or pick some other writer in some other language — and see how many of them don’t read him in his original language. There is something true about the dismissive remark.

But, there is something false as well. It draws too strong a line between understanding and not understanding. I obviously don’t understand Heidegger as well as the full-time scholars on the mailing list do. But, having studied Heidegger for several years of my life (I wrote my doctoral dissertation on him), I’m pretty sure I understand him better than most who haven’t studied him do. If we acknowledge that our understanding improves as we read and study more, we acknowledge that understanding doesn’t fall into only two buckets: understands or doesn’t have a ghost of a chance of understanding.

For the original comment to be empirically true, we’d have to show either (a) that there is a clear line between those who understand and those who do not (and that reading the original language is a requirement for getting into that first bucket), or (b) that the commenter is actually talking about having professional standing as a scholar: You cannot claim to be a Heidegger scholar if you can only read him in translation. The first alternative seems to me to be ridiculous. The second seems far more plausible. The problems arise when someone applies the bright perimeter of professionalism to the messy web of understanding.

I certainly do believe that had my German been better — it was barely adequate at the time, and now has devolved into very basic travel glossary stuff — I could have understood Heidegger better. Likewise, better understanding the history of philosophy, knowing early 20th century German politics, reading Greek and Latin, and being conversant in German poetry all would have helped me understand Heidegger better. There is no end to what we need to know in order to understand the thought of another, because there is no such state as Understanding that excludes all doubt, excludes all errors, and excludes all others.

Finally, it’s not at all clear to me that if we list those whose understanding of a thinker we most respect, they will be in rank order based upon how many of the Professional Requirements they’ve mastered. Some of the best Heidegger scholars — and you can pick your own criteria of bestness — may be weak in Greek, weaker in German politics, but very strong in poetry. Others might have other sets of strengths and weaknesses. Not only does understanding not necessarily correspond to the fields mastered; the community of scholars also ameliorates the weaknesses of individuals by writing works that others read: A scholar weak on politics reads the work of scholars strong on politics. Understanding in this sense is a networked property, and a very messy one indeed.


May 13, 2009

TED translates

TED has started a great new project: Distributed translations of TED Talks. Taking a page from Global Voices, it’s crowd-sourcing translations.

This is exactly what should happen and is a great solution for relatively scarce resources such as TED talks. Figure out how to scale this and get yourself a Nobel prize.

By the way, TED has also introduced interactive transcripts: Click on a phrase in the transcript and the video skips to that spot. Very useful. And with a little specialized text editor, we could have the edit-video-by-editing-text app that I’ve been looking for.
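
A minimal sketch of what that might look like, under my own assumptions rather than anything TED has published: treat the transcript as a list of timed phrases, so that clicking a phrase becomes a seek and deleting phrases becomes a cut list for the video. The transcript data and both helper functions are hypothetical.

```python
# A transcript as timed phrases (times in seconds). Hypothetical data; this is
# not TED's actual player or transcript format.
transcript = [
    (0.0,  4.2,  "Thank you so much, Chris."),
    (4.2,  9.8,  "And it's truly a great honor"),
    (9.8, 15.1,  "to have the opportunity to come to this stage twice."),
]

def seek_time(phrase_index):
    """Clicking a phrase in the transcript means seeking to its start time."""
    start, _end, _text = transcript[phrase_index]
    return start

def keep_segments(kept_indexes):
    """Editing the text edits the video: the phrases you keep become the cut list."""
    return [(transcript[i][0], transcript[i][1]) for i in sorted(kept_indexes)]

print(seek_time(2))            # -> 9.8  (jump the video to that spot)
print(keep_segments({0, 2}))   # -> [(0.0, 4.2), (9.8, 15.1)]
```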



March 26, 2009

Data in its untamed abundance gives rise to meaning

Seb Schmoller points to a terrific article by Google’s Alon Halevy, Peter Norvig, and Fernando Pereira about two ways to get meaning out of information. Their example is machine translation of natural language: there is so much translated material available for computers to learn from that learning directly from it works better (they argue) than approaches that go up a level of abstraction and try to categorize and conceptualize the language. Scale wins. Or, as the article says, “But invariably, simple models and a lot of data trump more elaborate models based on less data.”
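
To make the “simple models and a lot of data” point concrete, here is a minimal sketch of my own (not anything from the paper) of the kind of counting that statistical translation starts from. The toy German–English corpus and the likely_translation scoring are hypothetical; real systems learn from vastly more data with somewhat smarter alignment models, but the spirit is the same: count co-occurrences and let scale do the work.

```python
from collections import Counter, defaultdict

# Toy sentence-aligned corpus (hypothetical; real systems learn from millions of pairs).
parallel_corpus = [
    ("das haus ist klein", "the house is small"),
    ("das haus ist alt",   "the house is old"),
    ("das buch ist alt",   "the book is old"),
]

cooc = defaultdict(Counter)   # cooc[source_word][target_word] = co-occurrence count
target_freq = Counter()       # overall frequency of each target word

for src, tgt in parallel_corpus:
    target_freq.update(tgt.split())
    for s in src.split():
        cooc[s].update(tgt.split())

def likely_translation(word):
    """Score each co-occurring target word relative to its overall frequency."""
    return max(cooc[word], key=lambda t: cooc[word][t] / target_freq[t])

print(likely_translation("haus"))  # -> 'house'
print(likely_translation("alt"))   # -> 'old'
```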


They then use this to distinguish the Semantic Web from “Semantic Interpretation.” The latter “deals with imprecise, ambiguous natural languages,” as opposed to aiming at data and application interoperability. “The problem of semantic interpretation remains: using a Semantic Web formalism just means that semantic interpretation must be done on shorter strings that fall between angle brackets.” Oh snap! “What we need are methods to infer relationships between column headers or mentions of entities in the world.” “Web-scale data” to the rescue! This is basically the same problem as translating from one language to another given a large enough corpus of translations: we have a Web-scale collection of tables with column headers and content, so we should be able to algorithmically cluster columns into concordances of meaning.
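
To make the column-header idea concrete, here is a small sketch under my own assumptions (it is not the authors’ algorithm): if two headers’ value sets overlap heavily across many tables, the headers probably name the same kind of thing. The tiny columns dictionary and the 0.4 threshold are hypothetical stand-ins for Web-scale data.

```python
from itertools import combinations

# Hypothetical mini-collection of web-table columns: header -> sample of cell values.
# At web scale there are millions of such columns.
columns = {
    "company":  {"google", "ibm", "apple", "siemens"},
    "employer": {"google", "apple", "siemens", "toyota"},
    "city":     {"boston", "berlin", "tokyo"},
    "town":     {"berlin", "tokyo", "lyon"},
}

def jaccard(a, b):
    """Overlap of two value sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

# Headers whose values overlap heavily probably name the same kind of thing.
for (h1, v1), (h2, v2) in combinations(columns.items(), 2):
    score = jaccard(v1, v2)
    if score > 0.4:
        print(f"{h1!r} ~ {h2!r}  (overlap {score:.2f})")
# -> 'company' ~ 'employer'  (overlap 0.60)
# -> 'city' ~ 'town'  (overlap 0.50)
```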

I’m not doing the paper justice because I can’t, although it’s written quite clearly. But I find it fascinating.
