
August 1, 2013

Progressivism implies we’re never done

Now, as part of the settlement, the school district has agreed to treat the child as a boy. Thus does an entire institution find itself compelled to accept the cultural left’s moral categories and priorities. This is why the Times labels transgender “the next civil rights frontier.” There’s always one, isn’t there?

This is from Rod Dreher’s post at The American Conservative about “Progressivism’s Next Battle.”

But what interests me is his comment, “There’s always one, isn’t there?” You can practically hear the sigh.

Well, yes, Rod, there is always one. Progressives are progressive because we believe in progress, and we believe in progress because — generalizing, of course — we believe three basic things.

First, human understanding is conditioned by history, culture, language. We are products of our times.

Second, our understanding tends towards some serious errors. For example, we tend to prefer the company of — and to trust — people who are like us. Worse, we go seriously wrong in judging the relevant ways people are like us, giving far too much weight to differences that make no real difference.

Third, we humans are capable of learning. When it comes to policies and institutions, the great lesson that we keep learning and need to keep learning is that few of the differences actually matter. Put positively, we need to keep learning that people are actually more like us than we thought. The great progressive impulse is to find more and more common humanity, and to adjust our policies around that truth. (And, as an aside that I believe and that I hope Rod Dreher will find annoying: Nope, it doesn’t end with humans. We need to stop torturing and killing animals because we like the way they taste.)

So, yes, there always is a next frontier. But it’s not because progressives are sneaky land grabbers who are never satisfied. It’s because we are committed to the endless process of discovering our common humanity, and thus becoming fully human.

I’m ok with that.


July 28, 2013

The shockingly short history of the history of technology

In 1960, the academic journal Technology and Culture devoted its entire Autumn edition [1] to essays about a single work, the fifth and final volume of which had come out in 1958: A History of Technology, edited by Charles Singer, E. J. Holmyard, A. R. Hall, and Trevor I. Williams. Essay after essay implies or outright states something I found quite remarkable: A History of Technology is the first history of technology.

You’d think the essays would have some clever twist explaining why all those other things that claimed to be histories were not, perhaps because they didn’t get the concept of “technology” right in some modern way. But, no, the statements are pretty untwisty. The journal’s editor matter-of-factly claims that the history of technology is a “new discipline.”[2] Robert Woodbury takes the work’s publication as the beginning of the discipline as well, although he thinks it pales next to the foundational work of the history of science [3], a field the journal’s essays generally take as the history of technology’s older sibling, if not its parent. Indeed, fourteen years later, in 1974, Robert Multhauf wrote an article for that same journal, called “Some Observations on the State of the History of Technology,”[4] that suggested that the discipline was only then coming into its own. Why, some universities have even recognized that there is such a thing as an historian of technology!

The essay by Lewis Mumford, whom one might have mistaken for a prior historian of technology, marks the volumes as a first history of technology, pans them as a history of technology, and acknowledges prior attempts that border on being histories of technology. [5] His main objection to A History of Technology — and he is far from alone in this among the essays — is that the volumes don’t do the job of synthesizing the events recounted, failing to put them into the history of ideas, culture, and economics that would explain both how technology took the turns that it did and what those turns meant for human life. At least, Mumford says, these five volumes do a better job than the works of three nineteenth-century Britons who wrote something like histories of technology: Andrew Ure, Samuel Smiles, and Charles Babbage. (Yes, that Charles Babbage.) (Multhauf points also to Louis Figuier in France and Franz Reuleaux in Germany.[6])

Mumford comes across as a little miffed in the essay he wrote about A History of Technology, but, then, Mumford often comes across as at least a little miffed. In the 1963 introduction to his 1934 work, Technics and Civilization, Mumford seems to claim the crown for himself, saying that his work was “the first to summarize the technical history of the last thousand years of Western Civilization…” [7]. And, indeed, that book does what he claims is missing from A History of Technology, looking at the non-technical factors that made the technology socially feasible, and at the social effects the technology had. It is a remarkable work of synthesis, driven by a moral fervor that borders on the rhetoric of a prophet. (Mumford sometimes crossed that border; see his 1946 anti-nuke essay, “Gentlemen: You Are Mad!” [8]) Still, in 1960 Mumford treated A History of Technology as a first history of technology not only in the academic journal Technology and Culture, but also in The New Yorker, claiming that until recently the history of technology had been “ignored,” and that “…no matter what the oversights or lapses in this new ‘History of Technology,’ one must be grateful that it has come into existence at all.”[9]

So, there does seem to be a rough consensus that the first history of technology appeared in 1958. That the newness of this field is shocking, at least to me, is a sign of how dominant technology as a concept — as a frame — has become in the past couple of decades.


[1] Technology and Culture. Autumn, 1960. Vol. 1, Issue 4.

[2] Melvin Kranzberg. “Charles Singer and ‘A History of Technology.'” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 299-302; p. 300.

[3] Robert S. Woodbury. “The Scholarly Future of the History of Technology.” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 345-348; p. 345.

[4] Robert P. Multhauf. “Some Observations on the State of the History of Technology.” Technology and Culture. Jan., 1974. Vol. 15, no. 1. pp. 1-12.

[5] Lewis Mumford. “Tools and the Man.” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 320-334.

[6] Multhauf, p. 3.

[7] Lewis Mumford. Technics and Civilization. (Harcourt Brace, 1934. New edition 1963), p. xi.

[8] Lewis Mumford. “Gentlemen: You Are Mad!” Saturday Review of Literature. March 2, 1946, pp. 5-6.

[9] Lewis Mumford. “From Erewhon to Nowhere.” The New Yorker. Oct. 8, 1960. pp. 180-197.


July 20, 2013

The Rolling Stone cover: Outrage and sympathy

CNN.com has posted my op-ed about the Rolling Stone cover that features Dzhokhar Tsarnaev. It’s not my favorite thing I’ve ever written, but I had about an hour to do a draft.

There are two things I know I’d change without even going through the scary process of re-reading it:

First, CNN edited out any direct assertion that the Tsarnaevs are guilty. So, there are some “alleged”s awkwardly inserted, and some language that works around direct attribution of guilt. I’m in favor of the presumption of innocence, of course. But inserting the word “alleged” is a formalism without real effect, except when the allegedly alleged murderer’s lawyers call. But, I get it. (CNN also removed some of the links I’d put in, including to the cover itself and to the Wikipedia NPOV policy.)

Second, I wanted to say something more directly about the distinction between the sympathy that feels bad for someone’s troubles and the sympathy that lets us understand where the person is coming from. My post too quickly rules out sympathy of any kind because I knew that if I asked for sympathetic understanding, many readers would accuse me of feeling sympathetic toward the perpetrator rather than toward his victims, as if one rules out the other. So, I opted to strike any positive use of the term. (Of course that didn’t stop many of the commenters from claiming that I’m excusing the Tsarnaevs. Ridiculous.)

So, I’ll say it here: sympathetic understanding is a crucial human project, and, in truth, it often does lead toward sympathetic feelings. For example, in The Executioner’s Song, Norman Mailer leads us through the story of the killer Gary Gilmore, providing explanations that implicitly run the gamut from psychological to economic to social to Nietzschean to Freudian [1]. Inevitably we do feel some emotional sympathy for Gilmore, although without thinking him one whit less culpable. It sucked to be Gary Gilmore, and that doesn’t mean it didn’t suck far worse to be one of his victims.

In the same way, the common narrative about the Tsarnaev brothers (which I, too, accept) is that the older brother was a rotten apple who drew the younger brother into evil. I think the story of how the younger brother became “radicalized” — in quotes because it is an exteriorized word — is more interesting to most of us than how the older brother got there. If and when they make the movie, the part of the younger brother will be the plum role. That’s the narrative that has been served to us, and there are reasons to think that it’s basically right: the younger brother didn’t show signs of radicalization — his friends were genuinely shocked — while the older brother did. (The Rolling Stone article elaborates this narrative by explaining not only how the younger came to his beliefs, but also by showing that he kept the change hidden.) This narrative also fits well into our cultural narrative about youth being innocent until corrupted. But my point is not that the narrative is true or false or both or neither. It is that we generally hold to that narrative in this case, and it is a narrative that naturally engenders some element of emotional sympathy for the corrupted youth. And what’s wrong with that? Sympathy doesn’t have to take sides. Our judgment does that. Understanding how Dzhokhar Tsarnaev went from innocent to a murderer of children (allegedly!) and what it was like to be him doesn’t mean that I hold him less culpable, that I want his sentence reduced, or — most importantly — that I now have less sympathy for his victims. Indeed, the whole power of The Narrative depends upon our continuing horror at what he did.

To say otherwise is to deny the power of narrative and art. It means we should ban not just The Executioner’s Song but also In Cold Blood, Crime and Punishment, and even Madame Bovary, each of which brings us to both cognitive and emotional sympathy for people who did bad things [2]. It is also to deny that evil is the act of humans and thus is a possibility for each of us, at least in a “there but for the grace of God” sense.[3] And, to my way of thinking, our outrage at any attempt to understand those who commit incontrovertibly evil acts is intended exactly to silence that scariest of thoughts.


[1] It’s been decades since I read The Executioner’s Song, so I’m probably misrepresenting which exact explanatory theories Mailer employs.

[2] I know Madame Bovary is different, because her acts of adultery are, even within the frame of the book, so thoroughly understandable.

[3] At the last minute when finishing this post, I removed a sentence referencing Hannah Arendt’s complex phrase “the banality of evil.” It raised too many issues for a final paragraph. (And I have a sense I will regret including Madame Bovary in the list. See footnote 2.)


June 27, 2013

Relevant differences unresolved

After yesterday’s Supreme Court decisions, I’m just so happy about the progress we’re making.

It seems like progress to me because of the narrative line I have for the stretch of history I happen to have lived through since my birth in 1950: We keep widening the circle of sympathy, acceptance, and rights so that our social systems more closely approximate the truly relevant distinctions among us. I’ve seen the default position on the rights of African Americans switch, then the default position on the rights of women, and now the default position on sexual “preferences.” I of course know that none of these social changes is complete, but to base a judgment on race, gender, or sexuality now requires special arguments, whereas sixty years ago, those factors were assumed to be obviously relevant to virtually all of life.

According to this narrative, it’s instructive to remember that the Supreme Court overruled state laws banning racial intermarriage only in 1967. That’s amazing to me. When I was 17, outlawing “miscegeny” seemed to some segment of the population to be not just reasonable but required. It was still a debatable issue. Holy cow! How can you remember that and not think that we’re going to struggle to explain to the next generation that in 2013 there were people who actually thought banning same sex marriage was not just defensible but required?

So, I imagine a conversation (and, yes, I know I’m making it up) with someone angry about yesterday’s decisions. Arguing over which differences are relevant is often a productive way to proceed. You say that women’s upper body strength is less than men’s, so women shouldn’t be firefighters, but we can agree that if a woman can pass the strength tests, then she should be hired. Or maybe we argue about how important upper body strength is for that particular role. You say that women are too timid, and I say that we can find that out by hiring some, but at least we agree that firefighters need to be courageous. A lot of our moral arguments about social issues are like that: they are about which differences are relevant.

But in this case it’s really, really hard. I say that gender is irrelevant to love, and all that matters to a marriage is love. You say same sex marriage is unnatural, that it’s forbidden by God, and that lust is a temptation to be resisted no matter what its object. Behind these ideas (at least in this reconstruction of an imaginary argument) is an assumption that physical differences created by God must entail different potentials, which in turn entail different moral obligations. Why else would God have created those physical distinctions? The relevance of the distinctions is etched in stone. Thus the argument over relevant differences can’t get anywhere. We don’t even agree about the characteristics of the role (e.g., upper body strength and courage count for firefighters) so that we can then discuss which differences are relevant to those characteristics. We don’t have enough agreement to be able to disagree fruitfully.

I therefore feel bad for those who see yesterday’s rulings as one more step toward a permissive, depraved society. I wish I could explain why my joy feels based not on permissiveness but on acceptance, and not on depravity but on love.


By the way, my spellchecker flags “miscegeny” as a misspelled word, a real sign of progress.


June 10, 2013

Heidegger on technology, and technodeterminism

I’m leaving tomorrow night for a few days in Germany as a fellow at the University of Stuttgart’s International Center for Research on Culture and Technology. I’ll be giving a two-day workshop with about 35 students, which I am both very excited about and totally at sea about. Except for teaching a course with John Palfrey, who is an awesomely awesome teacher, I haven’t taught since 1986. I was good at it at the time, but I’ve forgotten the basics of structuring sessions.

Anyway, enough of that particular anxiety. I’m also giving a public lecture on Thursday at the city library (Stadtbibliothek am Mailänder Platz). It’ll be in English, thank Gott! My topic is “What the Web Uncovers,” which is a purposeful Heidegger reference. I’ve spent a lot of time trying to write this, and finally on Sunday completed a draft. It undoubtedly will change significantly, but here’s what I plan on saying at the beginning:

In 1954, Heidegger published “The Question Concerning Technology” (Die Frage nach der Technik). I re-read it recently, and discovered why people hold Heidegger’s writing in such disdain (aside from the Nazi thing, of course). Wow! But there are some ideas in it that I think are really helpful.

Heidegger says that technology reveals the world to us in particular ways. For example, a dam across a river, which is one of his central examples, reveals the natural world as Bestand, which gets translated into English as “standing reserve” or “resource”: power waiting to be harnessed by humans. His point, I think, is profound: technology should be understood not only in terms of what it does, but in terms of what it reveals about the world and what the world means to us. That is in fact the question I want to ask: What does the world that the Web uncovers look like? What does the Web reveal?

This approach holds the promise of letting us talk about technology from beyond the merely technical position. But it also happens to throw itself into an old controversy that has recently re-arisen. It sounds as if Heidegger is presenting a form of technodeterminism — the belief that technology determines our reaction to it, that technology shapes us. Against technodeterminism it is argued quite sensibly that a tool is not even a tool until humans come along and decide to use it for something. So, a screwdriver can be used to drive screws, but it could also be used to bang on a drum or to open and stir a can of paint. So, how could a screwdriver have an effect on us, much less shape us, if we’re the ones who are shaping it?

Heidegger doesn’t fall prey to technodeterminism because one of his bedrock ideas is that things don’t have meaning outside of the full context of relationships that constitute the entire world — a world into which we are thrown. So, technology doesn’t determine us, since it takes an entire world to determine technology, us, and everything else. Further, in “Die Frage nach der Technik,” he explains the various historical ways technology has affected us by referring to a mysterious history of Being that gives us that historical context. But I don’t want to talk about that, mainly because insofar as I understand it, I find it deeply flawed. Even so I think we want to be able to talk about the effect of technology, granting that it’s not technology itself taken in isolation, but rather the fact that we do indeed come to technology out of a situation that is historical, cultural, social, and even individual.

So, how does the Web reveal the world? What does the world look like in the Age of the Web? (And that means: what does it look like to us educated Westerners with sufficient leisure time to consider such things, etc.) Here are the subject headings of the talk, until I rewrite it as I inevitably do: chaotic, unmasterable, messy, interest-based, unsettling, and turning us to a shared world about which we disagree. This is very unlike the way the world looks in the prior age of technology, the age about which Heidegger was writing. Yet, I find at the heart of the Web-revealed world the stubborn fact that the world is revealed through human care: we are creatures that care about our existence, about others, and about our world. Care (Sorge) is at the heart of early Heidegger’s analysis.


March 27, 2013

Why homosexuality looks like a decision

Note that in the following, I’m figuring out something that is probably obvious to everyone except me.

The other day I found myself expostulating, “How can anyone think people choose which sex they’re attracted to???” (Yes, with three question marks. I was expostulating.) I followed this with the well-worn, “If they think homosexuality is a choice, then they must also think that heterosexuality is. But at what point in their lives did they really make a choice between finding boys or girls hot? Never!!!”

My argument is not a good one. For at least some anti-gay folks, it poses a false equivalence. I think.

Thinking that gays choose their sexual “preference” but straights do not appears to be a contradiction until you factor in some assumptions about nature and temptation. So: God set it up so that humans naturally are drawn to the opposite sex. But we can be tempted toward all sorts of sins: We can lust after a neighbor’s spouse. We can be drawn toward liquor. We can be tempted to shoplift. We all face many different temptations of varying degrees of badness. Good people resist temptations as firmly as they can. Homosexuals give in to their temptations, and even flaunt them.

Thus, the proper equivalence isn’t between heterosexuals and homosexuals deciding which gender they’ll desire. It’s between homosexuals giving in to their temptation (same-sex sex) and heterosexuals giving in to their temptation (adultery, promiscuity, or some such). The equivalence isn’t in the choice of temptations but in the reaction to those temptations.

I’m not agreeing, of course. I fly my rainbow flag high. But this helps me to understand what otherwise looks like an argument so incoherent that it’s incomprehensible how anyone could actually hold it. It’s not incoherent, given a certain set of premises. It’s coherent…but wrong.


March 2, 2013

[misc] The Wars on Terrorism, Al Qaeda, Cancer, and Dessert

Steve Coll has a good piece in the New Yorker about the importance of Al Qaeda as a brand:

…as long as there are bands of violent Islamic radicals anywhere in the world who find it attractive to call themselves Al Qaeda, a formal state of war may exist between Al Qaeda and America. The Hundred Years War could seem a brief skirmish in comparison.

This is a different category of issue than the oft-criticized “war on terror,” which is a war against a tactic, not against an enemy. The war against Al Qaeda implies that there is a structurally unified enemy organization. How do you declare victory against a group that refuses to enforce its trademark?

In this, the war against Al Qaeda (which is quite preferable to a war against terror — and I think Steve agrees) is similar to the war on cancer. Cancer is not a single disease, and the various things we call cancer are unlikely to have a single cause and thus are unlikely to have a single cure (or so I have been told). While this line of thinking would seem to reinforce politicians’ referring to terrorism as a “cancer,” the same applies to dessert. Each of these terms probably does have a single identifying characteristic, which means they are not classic examples of Wittgensteinian family resemblances: all terrorism involves a non-state attack that aims at terrifying the civilian population, all cancers involve “unregulated cell growth” [thank you, Wikipedia!], and all desserts are designed primarily for taste, not nutrition, and are intended to end a meal. In fact, the war on Al Qaeda is actually more like the war on dessert than like the war on cancer, because just as there will always be some terrorist group that takes up the Al Qaeda name, there will always be some boundary-pushing chef who declares that beef jerky or glazed ham cubes are the new dessert. You can’t defeat an enemy that can just rebrand itself.

I think that Steve Coll comes to the wrong conclusion, however. He ends his piece this way:

Yet the empirical case for a worldwide state of war against a corporeal thing called Al Qaeda looks increasingly threadbare. A war against a name is a war in name only.

I agree with the first sentence, but I draw two different conclusions. First, this has little bearing on how we actually respond to terrorism. The thinking that has us attacking terrorist groups (and at times their family gatherings) around the world is not made threadbare by the misnomer “war against Al Qaeda.” Second, isn’t it empirically obvious that a war against a name is not a war in name only?


February 12, 2013

[2b2k] Margaret Sullivan on Objectivity

Margaret Sullivan [twitter:Sulliview] is the public editor of the New York Times. She’s giving a lunchtime talk at the Harvard Shorenstein Center [twitter:ShorensteinCtr]. Her topic is: How is social media changing journalism? She says she’s open to any other topic during the Q&A as well.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Margaret says she’s going to talk about Tom Kent, the standards editor for the Associated Press, and Jay Rosen [twitter:jayrosen_nyu]. She begins by saying she respects them both. [Disclosure: Jay is a friend.] She cites Tom [which I’m only getting roughly]: At heart, objective journalism sets out to establish the facts, state the range of opinions, and take a first cut at which arguments are the most rigorous. Journalists should show their commitment to balance by keeping their opinions to themselves. Tom wrote a memo to his staff (leaked to Romenesko) about expressing personal opinions on social networks. [Margaret wrote an excellent column about this a month ago.]

Jay Rosen, she says, thinks that objectivity is an outdated concept. Journalists should tell their readers where they’re coming from so you can judge their output based on that. “The grounds for trust are slowly shifting. The view from nowhere is getting harder to trust, and ‘here’s where I’m coming from’ is becoming more trustworthy.” [approx] Objectivity is a cop-out, says Jay.

Margaret says that these are the two poles, although both are very reasonable people.

Now she’s going to look at two real situations. The NYT Jerusalem bureau chief Jodi Rudoren is relatively new. It is one of the most difficult positions. Within a few weeks she had sent some “twitter messages” (NYT won’t allow the word “tweets,” she says, although when I tweeted this, some people disagreed; Alex Jones and Margaret bantered about this, so she was pretty clear about the policy). She was criticized for whom she praised in the tweets, e.g., Peter Beinart. She also linked without comment to a pro-Hezbollah newspaper. The NYT had an editor “work with her” on her social media; that is, she no longer had free access to those media. Margaret notes that many believe “this is against the entire ethos of social media. If you’re going to be on social media, you don’t want a NYT editor sitting next to you.”

The early reporting from Newtown was “pretty bad” across the entire media, she says. In the first few hours, a shooter was named — Ryan Lanza — and a Facebook photo of him was shown. But it was the wrong Ryan Lanza. And then it turned out it was that other Ryan Lanza’s brother. The NYT in its early Web reporting said “according to early Web reports” the shooter was Ryan Lanza. Lots of other wrong information was floated, and got into early Web reports (although generally not into the NYT). “Social media was a double-edged sword because it perpetuated these inaccuracies and then worked to correct them.” It often happens that way, she says.

So, where’s the right place to be on the spectrum between Tom and Jay? “It’s no longer possible to be completely faceless. Journalists are on social media. They’re honing their personal brands. Their newspapers are there…They’re trying to use the Web to get their message out, and in that process they’re exposing who they are. Is that a bad thing? Is it a bad thing for us to know what a political reporter’s politics are? I don’t think that question is easily answerable now. I come down a little closer to where Tom Kent is. I think that it makes a lot of sense for hard news reporters … for the White House reporter, I think it makes a lot of sense to keep their politics under wraps. I don’t see how it helps for people to be prejudging and distrusting them because ‘You’re in the tank for so-and-so.'” Phil Corbett, the standards editor for the NYT, rejects the idea there is no impartial journalism. He rejects that it’s a pretense or charade.

Margaret says, “The one thing I’m very sure of is that this business of impartiality and balance should no longer mean” going down the middle in a he-said-she-said. That’s false equivalence. “That’s changing and should change.” There are facts that we fully believe are true. Evolution and Creationism are not equivalents.

Q&A

Q: Alex Jones: It used to be that the NYT wouldn’t let you cite an anonymous negative comment, along the lines of “This or that person sucks.”

A: Everyone agrees doing so is bad, but I still see it from time to time.

Q: Alex Jones: The NYT policy used to be that you must avoid an appearance of conflict of interest. E.g., a reporter’s son was in the Israeli Army. Should that reporter be forbidden from covering Israel?

A: When Ethan Bronner went to cover Israel, his son wasn’t in the military. But then his son decided to join up. “It certainly wasn’t ideal.” Should Ethan have been yanked out the moment his son joined? I’m not sure, Margaret says. It’s certainly problematic. I don’t know the answer.

Q: Objectivity doesn’t always draw a clear line. How do you engage with people whose ideas are diametrically opposed to yours?

A: Some issues are extremely difficult and you’re probably not going to come to a meeting of the minds on it. Be respectful. Accept that you’re not going to make much headway.

Q: Wouldn’t transparency fragment the sources? People will only listen to sources that agree.

A: Yes, this further fractures a fractured environment. It’s useful to have some news sources that set out to be in neither camp. The DC bureau chief of the NYT knows a lot about economics. For him to tell us about his views on that is helpful, but it doesn’t help to know who he voted for.

Q: [Martin Nisenholz] The NYT audience is smart but it hasn’t lit up the NYT web site. Do you think the NYT should be a place where people can freely offer their opinions/reviews even if they’re biased? E.g., at Yelp you don’t know if the reviewer is the owner, a competitor… How do you feel about this central notion of user ID and the intersection with commentary?

A: I disagree that readers haven’t lit up the web site. The commentary beneath stories is amazing…

Q: I meant in reviews, not hard news…

A: A real ID policy improves the tenor.

Q: How about the snarkiness of twitter?

A: The best way to be mocked on Twitter is to be earnest. It’s a place to be snarky. It’s regrettable. Reporters should be very careful before they hit the “tweet” button. The tone is a problem.

Q: If you want to build a community — and we reporters are constantly pushed to do that — you have to engage your readers. How can you do that without disclosing your stands? We all have opinions, and we share them with a circle we feel safe in. But sometimes those leak. I’d hope that my paper would protect me.

A: I find Twitter to be invaluable. Incredible news source. Great way to get your message out. The best thing for me is not people’s sarcastic comments. It’s the link to a story. It’s “Hey, did you see this?” To me that’s the most useful part. Even though I describe it as snarky, I’ve also found it to be a very supportive place. When you take a stand, as I did on Sunday about the press not holding things back for national security reasons, you can get a lot of support there. You just have to be careful. Use it for the best possible reasons: to disseminate info, rather than to comment sarcastically.

Q: Between Kent and Rosen, I don’t think there is some higher power of morality that decides this. It depends on where you sit and what you own. If you own the NYT, you have billions of dollars in good will you’ve built up. Your audience comes to you with a certain expectation. There’s an inherent bias in what they cover, but also expectations about an effort toward objectivity. Social media is a distribution channel, not a place to bare your soul. A foreign correspondent for Time made a late-night blog post. (“I’d put a breathalyzer on keyboards,” he says.) A seasoned reporter said offhandedly that maybe the victim of some tragedy deserved it. This got distributed via social media as Time Magazine’s position. Reporters’ tweets should be edited first. The institution has every right to have a policy that constrains what reporters say on social media. But now there are legal cases. Social media has become an inalienable right. In the old days, the WSJ fired a reporter for handing out political leaflets in a subway station. If you’re Jay Rosen and your business is to throw bombs at the institutional media, and to say everything you do is wrong [!], then that’s ok. But if you own a newspaper, you have to stand up for objectivity.

A: I don’t disagree, although I think Jay is a thoughtful person.

Q: I blog on the HuffPo. But at Harvard, blogging is not considered professional. It’s thought of as tossed off…

A: A blog is just a delivery system. It’s not inherently good or bad, slapdash or well-researched. It’s a way to get your message out.

A: [Alex Jones] Actually there’s a fair number of people who blog at Harvard. The Berkman Center, places like that. [Thank you, Alex :)]

Q: How do you think about the evolution of your job as public editor? Are you thinking about how you interact with the readers and the rhythm of how you publish?

A: When I was brought in 5 months ago, they wanted to take it to the new media world. I was very interested in that. The original idea was to get rid of the print column altogether. But I wanted to do both. I’ve been doing both. It’s turned into a conversation with readers.

Q: People are deeply convinced of wrong ideas. Goebbels’ diaries show an upside down world in which Churchill is a gangster. How do you know what counts as fact?

A: Some things are just wrong. Paul Ryan was wrong about criticizing Obama for allowing a particular GM plant to close. The plant closed before Obama took office. That’s a correctable. When it’s more complex, we have to hear both sides out.


Then I got to ask the last question, which I asked so clumsily that it practically forced Margaret to respond, “Then you’re locking yourself into a single point of view, and that’s a bad way to become educated.” Ack.

I was trying to ask the same question as the prior one, but to get past the sorts of facts that Margaret noted. I think it’d be helpful to talk about the accuracy of facts (about which there are questions of their own, of course) and focus the discussion of objectivity at least one level up the hermeneutic stack. I tried to say that I don’t feel bad about turning to partisan social networks when I need an explanation of the meaning of an event. For my primary understanding I’m going to turn to people with whom I share first principles, just as I’m not going to look to a Creationism site to understand some new paper about evolution. But I put this so poorly that I drew the Echo Chamber rebuke.

What it really comes down to, for me, is the theory of understanding and knowledge that underlies the pursuit of objectivity. Objectivity imagines a world in which we understand things by considering all sides from a fresh, open start. But in fact understanding is far more incremental, far more situated, and far more pragmatic than that. We understand from a point of view and a set of commitments. This isn’t a flaw in understanding. It is what enables understanding.

Nor does this free us from the responsibility to think through our opinions, to sympathetically understand opposing views, and to be open to the possibility that we are wrong. It’s just to say that understanding has a job to do. In most cases, it does that job by absorbing the new into our existing context. There is a time and place for revolution in our understanding. But that’s not the job we need to do as we try to make sense of the world pressing in on us. Reason can’t function in the world the way objectivity would like it to.


I’m glad the NY Times is taking these questions seriously, and Margaret is impressive (and not just because she takes Jay Rosen very seriously). I’m a little surprised that we’re still talking about objectivity, however. I thought that the discussion had usefully broken the concept up into questions of accuracy, balance, and fairness — with “balance” coming into question because of the cowardly he-said-she-said dodges that have become all too common, and that Margaret decries. I’m not sure what the concept of objectivity itself adds to this mix except a set of difficult assumptions.


January 24, 2013

Attending to appearances

I picked up a copy of Bernard Knox’s 1994 Backing into the Future because I saw it referenced somewhere for the weird fact that the ancient Greeks thought that the future was behind them. Knox presents evidence from The Odyssey and Oedipus the King to back this up, so to speak. But that’s literally on the first page of the book. The rest of it consists of brilliant and brilliantly written essays about ancient life and scholarship. Totally enjoyable.

True, he undoes one of my favorite factoids: that Greeks in Homer’s time did not have a concept of the body as an overall unity, but rather only had words for particular parts of the body. This notion comes most forcefully from Bruno Snell in The Discovery of the Mind, although I first read about it — and was convinced — in a Paul Feyerabend essay. In his essay “What Did Achilles Look Like?,” Knox convincingly argues that the Greeks had both a word and a concept for the body as a unity. In fact, they may have had three. Knox then points to Homeric usages that seem to indicate that, yeah, Homer was talking about a unitary body. E.g., “from the bath he [Odysseus] stepped, in body [demas] like the immortals,” and Poseidon “takes on the likeness of Calchas, in bodily form,” etc. [p. 52] I don’t read Greek, so I’ll believe whatever the last expert tells me, and Knox is the last expert I’ve read on this topic.

In a later chapter, Knox comes back to Bernard Williams’s criticism, in Shame and Necessity, of the “Homeric Greeks had no concept of a unitary body” idea, and also discusses another wrong thing that I had been taught. It turns out that the Greeks did have a concept of intention, decision-making, and will. Williams argues that they may not have had distinct words for these things, but Homer “and his characters make distinctions that can only be understood in terms of” those concepts. Further, Williams writes that Homer has

no word that means, simply, “decide.” But he has the notion…All that Homer seems to have left out is the idea of another mental action that is supposed necessarily to lie between coming to a conclusion and acting on it: and he did well in leaving it out, since there is no such action, and the idea of it is the invention of bad philosophy. [p. 228]

Wow. Seems pretty right to me. What does the act of “making a decision” add to the description of how we move from conclusion to action?

Knox also has a long appreciation of Martha Nussbaum’s The Fragility of Goodness (1986), which makes me want to go out and get that book immediately, although I suspect that Knox is making it considerably more accessible than the original. But it sounds breathtakingly brilliant.

Knox’s essay on Nussbaum, “How Should We Live,” is itself rich with ideas, but one piece particularly struck me. In Book 6 of the Nicomachean Ethics, Aristotle dismisses one of Socrates’ claims (that no one knowingly does evil) by saying that such a belief is “manifestly in contradiction with the phainomena.” I’ve always heard the word “phainomena” translated in (as Knox says) Baconian terms, as if Aristotle were anticipating modern science’s focus on the facts and careful observation. We generally translate phainomena as “appearances” and contrast it with reality. The task of the scientist and the philosopher is to let us see past our assumptions to reveal the thing as it shows itself (appears), free of our anticipations and interpretations, so we can then use those unprejudiced appearances as a guide to truths about reality.

But Nussbaum takes the word differently, and Knox is convinced. Phainomena are “the ordinary beliefs and sayings” and the sayings of the wise about things. Aristotle’s method consisted of straightening out whatever confusions and contradictions there are in this body of beliefs and sayings, and then showing that at least the majority of those beliefs are true. This is a complete inversion of what I’d always thought. Rather than “attending to appearances” meaning dropping one’s assumptions to reveal the thing in its untouched state, it actually means taking those assumptions — of the many and of the wise — as containing truth. It is a confirming activity, not a penetrating and an overturning. Nussbaum says that for Aristotle (and in contrast to Plato), “Theory must remain committed to the ways human beings live, act, see.” (Note that it’s entirely possible I’m getting Aristotle, Nussbaum, and Knox wrong. A trifecta of misunderstanding!)

Nussbaum’s book sounds amazing, and I know I should have read it, oh, 20 years ago, but it came out the year I left the philosophy biz. And Knox’s book is just wonderful. If you ever doubted why we need scholars and experts — why would you think such a thing? — this book is a completely enjoyable reminder.


January 14, 2013

What gods and beasts have in common

“The man who is incapable of working in common, or who in his self-sufficiency has no need of others, is no part of the community, like a beast, or a god.”


Aristotle, Politics, Book One, Chapter 2, this quotation translated by Bernard Knox in Backing into the Future.

