CNN.com has posted my op-ed about the Rolling Stone cover that features Dzhokhar Tsarnaev. It’s not my favorite thing I’ve ever written, but I had about an hour to do a draft.
There are two things I know I’d change without even going through the scary process of re-reading it:
First, CNN edited out any direct assertion that the Tsarnaevs are guilty. So, there are some “alleged”s awkwardly inserted, and some language that works around direct attribution of guilt. I’m in favor of the presumption of innocence, of course. But inserting the word “alleged” is a formalism without real effect, except when the allegedly alleged murderer’s lawyers call. But, I get it. (CNN also removed some of the links I’d put in, including to the cover itself and to the Wikipedia NPOV policy.)
Second, I wanted to say something more directly about the distinction between the sympathy that feels bad for someone’s troubles and the sympathy that lets us understand where the person is coming from. My post too quickly rules out sympathy of any kind because I knew that if I asked for sympathetic understanding, many readers would accuse me of feeling sympathetic toward the perpetrator rather than toward his victims, as if one rules out the other. So, I opted to strike any positive use of the term. (Of course that didn’t stop many of the commenters from claiming that I’m excusing the Tsarnaevs. Ridiculous.)
So, I’ll say it here: sympathetic understanding is a crucial human project, and, in truth, it often does lead toward sympathetic feelings. For example, in The Executioner’s Song, Norman Mailer leads us through the story of the mass killer Gary Gilmore, providing explanations that implicitly run the gamut from psychological to economic to social to Nietzschean to Freudian. Inevitably we do feel some emotional sympathy for Gilmore, although without thinking him one whit less culpable. It sucked to be Gary Gilmore, and that doesn’t mean it didn’t suck far worse to be one of his victims.
In the same way, the common narrative about the Tsarnaev brothers (which I, too, accept) is that the older brother was a rotten apple who drew the younger brother into evil. I think the story of how the younger brother became “radicalized” — in quotes because it is an exteriorized word — is more interesting to most of us than how the older brother got there. If and when they make the movie, the part of the younger brother will be the plum role. That’s the narrative that has been served to us, and there are reasons to think that it’s basically right: the younger brother didn’t show signs of radicalization — his friends were genuinely shocked — while the older brother did. (The Rolling Stone article elaborates this narrative by explaining not only how the younger came to his beliefs, but also by showing that he kept the change hidden.) This narrative also fits well into our cultural narrative about youth being innocent until corrupted. But my point is not that the narrative is true or false or both or neither. It is that we generally hold to that narrative in this case, and it is a narrative that naturally engenders some element of emotional sympathy for the corrupted youth. And what’s wrong with that? Sympathy doesn’t have to take sides. Our judgment does that. Understanding how Dzhokhar Tsarnaev went from innocent to a murderer of children (allegedly!) and what it was like to be him doesn’t mean that I hold him less culpable, that I want his sentence reduced, or — most importantly — that I now have less sympathy for his victims. Indeed, the whole power of The Narrative depends upon our continuing horror at what he did.
To say otherwise is to deny the power of narrative and art. It means we should ban not just The Executioner’s Song but also In Cold Blood, Crime and Punishment, and even Madame Bovary, each of which brings us to both cognitive and emotional sympathy for people who did bad things. It is also to deny that evil is the act of humans and thus is a possibility for each of us, at least in a “there but for the grace of God” sense. And, to my way of thinking, our outrage at any attempt to understand those who commit incontrovertibly evil acts is intended exactly to silence that scariest of thoughts.
[1] It’s been decades since I read The Executioner’s Song, so I’m probably misrepresenting which exact explanatory theories Mailer employs.
[2] I know Madame Bovary is different because her acts of adultery even within the frame of the book are so thoroughly understandable.
[3] At the last minute when finishing this post, I removed a sentence referencing Hannah Arendt’s complex phrase “the banality of evil.” It raised too many issues for a final paragraph. (And I have a sense I will regret including Madame Bovary in the list. See footnote 2.)
Tagged with: blogs
Date: July 20th, 2013 dw
After yesterday’s Supreme Court decisions, I’m just so happy about the progress we’re making.
It seems like progress to me because of the narrative line I have for the stretch of history I happen to have lived through since my birth in 1950: We keep widening the circle of sympathy, acceptance, and rights so that our social systems more closely approximate the truly relevant distinctions among us. I’ve seen the default position on the rights of African Americans switch, then the default position on the rights of women, and now the default position on sexual “preferences.” I of course know that none of these social changes is complete, but to base a judgment on race, gender, or sexuality now requires special arguments, whereas sixty years ago, those factors were assumed to be obviously relevant to virtually all of life.
According to this narrative, it’s instructive to remember that the Supreme Court overruled state laws banning racial intermarriage only in 1967. That’s amazing to me. When I was 17, outlawing “miscegeny” seemed to some segment of the population to be not just reasonable but required. It was still a debatable issue. Holy cow! How can you remember that and not think that we’re going to struggle to explain to the next generation that in 2013 there were people who actually thought banning same sex marriage was not just defensible but required?
So, I imagine a conversation (and, yes, I know I’m making it up) with someone angry about yesterday’s decisions. Arguing over which differences are relevant is often a productive way to proceed. You say that women’s upper body strength is less than men’s, so women shouldn’t be firefighters, but we can agree that if a woman can pass the strength tests, then she should be hired. Or maybe we argue about how important upper body strength is for that particular role. You say that women are too timid, and I say that we can find that out by hiring some, but at least we agree that firefighters need to be courageous. A lot of our moral arguments about social issues are like that. They are about what are the relevant differences.
But in this case it’s really, really hard. I say that gender is irrelevant to love, and all that matters to a marriage is love. You say same sex marriage is unnatural, that it’s forbidden by God, and that lust is a temptation to be resisted no matter what its object. Behind these ideas (at least in this reconstruction of an imaginary argument) is an assumption that physical differences created by God must entail different potentials, which in turn entail different moral obligations. Why else would God have created those physical distinctions? The relevance of the distinctions is etched in stone. Thus the argument over relevant differences can’t get anywhere. We don’t even agree about the characteristics of the role (e.g., upper body strength and courage count for firefighters) so that we can then discuss what differences are relevant to those characteristics. We don’t have enough agreement to be able to disagree fruitfully.
I therefore feel bad for those who see yesterday’s rulings as one more step toward a permissive, depraved society. I wish I could explain why my joy feels based on acceptance, not permissiveness, and not on depravity but on love.
By the way, my spellchecker flags “miscegeny” as a misspelled word, a real sign of progress.
I’m leaving tomorrow night for a few days in Germany as a fellow at the University of Stuttgart’s International Center for Research on Culture and Technology. I’ll be giving a two-day workshop with about 35 students, which I am both very excited about and totally at sea about. Except for teaching a course with John Palfrey, who is an awesomely awesome teacher, I haven’t taught since 1986. I was good at the time, but I forget the basics about structuring sessions.
Anyway, enough of that particular anxiety. I’m also giving a public lecture on Thursday at the city library (Stadtbibliothek am Mailänder Platz). It’ll be in English, thank Gott! My topic is “What the Web Uncovers,” which is a purposeful Heidegger reference. I’ve spent a lot of time trying to write this, and finally on Sunday completed a draft. It undoubtedly will change significantly, but here’s what I plan on saying at the beginning:
In 1954, Heidegger published “The Question Concerning Technology” (Die Frage nach der Technik). I re-read it recently, and discovered why people hold Heidegger’s writing in such disdain (aside from the Nazi thing, of course). Wow! But there are some ideas in it that I think are really helpful.
Heidegger says that technology reveals the world to us in particular ways. For example, a dam across a river, which is one of his central examples, reveals the natural world as Bestand, which gets translated into English as “standing reserve” or “resource”: power waiting to be harnessed by humans. His point I think is profound: Technology should be understood not only in terms of what it does, but in terms of what it reveals about the world and what the world means to us. That is in fact the question I want to ask: What does the world that the Web uncovers look like? What does the Web reveal?
This approach holds the promise of letting us talk about technology from beyond the merely technical position. But it also happens to throw itself into an old controversy that has recently re-arisen. It sounds as if Heidegger is presenting a form of technodeterminism — the belief that technology determines our reaction to it, that technology shapes us. Against technodeterminism it is argued quite sensibly that a tool is not even a tool until humans come along and decide to use it for something. So, a screwdriver can be used to drive screws, but it could also be used to bang on a drum or to open and stir a can of paint. So, how could a screwdriver have an effect on us, much less shape us, if we’re the ones who are shaping it?
Heidegger doesn’t fall prey to technodeterminism because one of his bedrock ideas is that things don’t have meaning outside of the full context of relationships that constitute the entire world — a world into which we are thrown. So, technology doesn’t determine us, since it takes an entire world to determine technology, us, and everything else. Further, in “Die Frage nach der Technik,” he explains the various historical ways technology has affected us by referring to a mysterious history of Being that gives us that historical context. But I don’t want to talk about that, mainly because insofar as I understand it, I find it deeply flawed. Even so I think we want to be able to talk about the effect of technology, granting that it’s not technology itself taken in isolation, but rather the fact that we do indeed come to technology out of a situation that is historical, cultural, social, and even individual.
So, how does the Web reveal the world? What does the world look like in the Age of the Web? (And that means: what does it look like to us educated Westerners with sufficient leisure time to consider such things, etc.) Here are the subject headings of the talk until I rewrite it as I inevitably do: chaotic, unmasterable, messy, interest-based, unsettling, and turning us to a shared world about which we disagree. This is very unlike the way the world looks in the prior age of technology, the age about which Heidegger was writing. Yet, I find at the heart of the Web-revealed world the stubborn fact that the world is revealed through human care: we are creatures that care about our existence, about others, and about our world. Care (Sorge) is at the heart of early Heidegger’s analysis.
Tagged with: heidegger
Date: June 10th, 2013 dw
Note that in the following, I’m figuring out something that is probably obvious to everyone except me.
The other day I found myself expostulating, “How can anyone think people choose which sex they’re attracted to???” (Yes, with three question marks. I was expostulating.) I followed this with the well-worn, “If they think homosexuality is a choice, then they must also think that heterosexuality is. But at what point in their lives did they really make a choice between finding boys or girls hot? Never!!!”
My argument is not a good one. For at least some anti-gay folks, it poses a false equivalence. I think.
Thinking that gays choose their sexual “preference” but straights do not appears to be a contradiction until you factor in some assumptions about nature and temptation. So: God set it up so that humans naturally are drawn to the opposite sex. But we can be tempted toward all sorts of sins: We can lust after a neighbor’s spouse. We can be drawn toward liquor. We can be tempted to shoplift. We all face many different temptations of varying degrees of badness. Good people resist temptations as firmly as they can. Homosexuals give in to their temptations, and even flaunt them.
Thus, the proper equivalence isn’t between heterosexuals and homosexuals deciding which gender they’ll desire. It’s between homosexuals giving in to their temptation (same-sex sex) and heterosexuals giving in to their temptation (adultery, promiscuity, or some such). The equivalence isn’t in the choice of temptations but in the reaction to those temptations.
I’m not agreeing, of course. I fly my rainbow flag high. But this helps me to understand what otherwise looks like an argument so incoherent that it’s incomprehensible how anyone could actually hold it. It’s not incoherent, given a certain set of premises. It’s coherent…but wrong.
Tagged with: gay marriage
• gay rights
Date: March 27th, 2013 dw
Steve Coll has a good piece in the New Yorker about the importance of Al Qaeda as a brand:
…as long as there are bands of violent Islamic radicals anywhere in the world who find it attractive to call themselves Al Qaeda, a formal state of war may exist between Al Qaeda and America. The Hundred Years War could seem a brief skirmish in comparison.
This is a different category of issue than the oft-criticized “war on terror,” which is a war against a tactic, not against an enemy. The war against Al Qaeda implies that there is a structurally unified enemy organization. How do you declare victory against a group that refuses to enforce its trademark?
In this, the war against Al Qaeda (which is quite preferable to a war against terror — and I think Steve agrees) is similar to the war on cancer. Cancer is not a single disease, and the various things we call cancer are unlikely to have a single cause and thus are unlikely to have a single cure (or so I have been told). While this line of thinking would seem to reinforce politicians’ referring to terrorism as a “cancer,” the same applies to dessert. Each of these terms probably does have a single identifying characteristic, which means they are not classic examples of Wittgensteinian family resemblances: all terrorism involves a non-state attack that aims at terrifying the civilian population, all cancers involve “unregulated cell growth” [thank you Wikipedia!], and all desserts are designed primarily for taste not nutrition and are intended to end a meal. In fact, the war on Al Qaeda is actually more like the war on dessert than like the war on cancer, because just as there will always be some terrorist group that takes up the Al Qaeda name, there will always be some boundary-pushing chef who declares that beef jerky or glazed ham cubes are the new dessert. You can’t defeat an enemy that can just rebrand itself.
I think that Steve Coll comes to the wrong conclusion, however. He ends his piece this way:
Yet the empirical case for a worldwide state of war against a corporeal thing called Al Qaeda looks increasingly threadbare. A war against a name is a war in name only.
I agree with the first sentence, but I draw two different conclusions. First, this has little bearing on how we actually respond to terrorism. The thinking that has us attacking terrorist groups (and at times their family gatherings) around the world is not made threadbare by the misnomer “war against Al Qaeda.” Second, isn’t it empirically obvious that a war against a name is not a war in name only?
Margaret Sullivan [twitter:Sulliview] is the public editor of the New York Times. She’s giving a lunchtime talk at the Harvard Shorenstein Center [twitter:ShorensteinCtr]. Her topic is: how is social media changing journalism? She says she’s open to any other topic during the Q&A as well.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Margaret says she’s going to talk about Tom Kent, the standards editor for the Associated Press, and Jay Rosen [twitter:jayrosen_nyu]. She begins by saying she respects them both. [Disclosure: Jay is a friend] She cites Tom [which I’m only getting roughly]: At heart, objective journalism sets out to establish the facts, state the range of opinions, and take a first cut at which arguments are the most rigorous. Journalists should show their commitment to balance by keeping their opinions to themselves. Tom wrote a memo to his staff (leaked to Romenesko) about expressing personal opinions on social networks. [Margaret wrote an excellent column about this a month ago.]
Jay Rosen, she says, thinks that objectivity is an outdated concept. Journalists should tell their readers where they’re coming from so you can judge their output based on that. “The grounds for trust are slowly shifting. The view from nowhere is getting harder to trust, and ‘here’s where I’m coming from’ is becoming more trustworthy.” [approx] Objectivity is a cop out, says Jay.
Margaret says that these are the two poles, although both are very reasonable people.
Now she’s going to look at two real situations. The NYT Jerusalem bureau chief Jodi Rudoren is relatively new. It is one of the most difficult positions. Within a few weeks she had sent some “twitter messages” (NYT won’t allow the word “tweets,” she says, although when I tweeted this, some people disagreed; Alex Jones and Margaret bantered about this, so she was pretty clear about the policy). She was criticized for who she praised in the tweets, e.g., Peter Beinart. She also linked without comment to a pro-Hezbollah newspaper. The NYT had an editor “work with her” on her social media; that is, she no longer had free access to those media. Margaret notes that many believe “this is against the entire ethos of social media. If you’re going to be on social media, you don’t want a NYT editor sitting next to you.”
The early reporting from Newtown was “pretty bad” across the entire media, she says. In the first few hours, a shooter was named — Ryan Lanza — and a Facebook photo of him was shown. But it was the wrong Ryan Lanza. And then it turned out it was that other Ryan Lanza’s brother. The NYT in its early Web reporting said “according to early Web reports” the shooter was Ryan Lanza. Lots of other wrong information was floated, and got into early Web reports (although generally not into the NYT). “Social media was a double-edged sword because it perpetuated these inaccuracies and then worked to correct them.” It often happens that way, she says.
So, where’s the right place to be on the spectrum between Tom and Jay? “It’s no longer possible to be completely faceless. Journalists are on social media. They’re honing their personal brands. Their newspapers are there…They’re trying to use the Web to get their message out, and in that process they’re exposing who they are. Is that a bad thing? Is it a bad thing for us to know what a political reporter’s politics are? I don’t think that question is easily answerable now. I come down a little closer to where Tom Kent is. I think that it makes a lot of sense for hard news reporters … for the White House reporter, I think it makes a lot of sense to keep their politics under wraps. I don’t see how it helps for people to be prejudging and distrusting them because ‘You’re in the tank for so-and-so.'” Phil Corbett, the standards editor for the NYT, rejects the idea there is no impartial journalism. He rejects that it’s a pretense or charade.
Margaret says, “The one thing I’m very sure of is that this business of impartiality and balance should no longer mean” going down the middle in a he-said-she-said. That’s false equivalence. “That’s changing and should change.” There are facts that we fully believe are true. Evolution and Creationism are not equivalents.
Q: Alex Jones: It used to be that the NYT wouldn’t let you cite an anonymous negative comment, along the lines of “This or that person sucks.”
A: Everyone agrees doing so is bad, but I still see it from time to time.
Q: Alex Jones: The NYT policy used to be that you must avoid an appearance of conflict of interest. E.g., a reporter’s son was in the Israeli Army. Should that reporter be forbidden from covering Israel?
A: When Ethan Bronner went to cover Israel, his son wasn’t in the military. But then his son decided to join up. “It certainly wasn’t ideal.” Should Ethan have been yanked out the moment his son joined? I’m not sure, Margaret says. It’s certainly problematic. I don’t know the answer.
Q: Objectivity doesn’t always draw a clear line. How do you engage with people whose ideas are diametrically opposed to yours?
A: Some issues are extremely difficult and you’re probably not going to come to a meeting of the minds on it. Be respectful. Accept that you’re not going to make much headway.
Q: Wouldn’t transparency fragment the sources? People will only listen to sources that agree.
A: Yes, this further fractures a fractured environment. It’s useful to have some news sources that set out to be in neither camp. The DC bureau chief of the NYT knows a lot about economics. For him to tell us about his views on that is helpful, but it doesn’t help to know who he voted for.
Q: [Martin Nisenholtz] The NYT audience is smart but it hasn’t lit up the NYT web site. Do you think the NYT should be a place where people can freely offer their opinions/reviews even if they’re biased? E.g., at Yelp you don’t know if the reviewer is the owner, a competitor… How do you feel about this central notion of user ID and the intersection with commentary?
A: I disagree that readers haven’t lit up the web site. The commentary beneath stories is amazing…
Q: I meant in reviews, not hard news…
A: A real ID policy improves the tenor.
Q: How about the snarkiness of twitter?
A: The best way to be mocked on Twitter is to be earnest. It’s a place to be snarky. It’s regrettable. Reporters should be very careful before they hit the “tweet” button. The tone is a problem.
Q: If you want to build a community — and we reporters are constantly pushed to do that — you have to engage your readers. How can you do that without disclosing your stands? We all have opinions, and we share them with a circle we feel safe in. But sometimes those leak. I’d hope that my paper would protect me.
A: I find Twitter to be invaluable. Incredible news source. Great way to get your message out. The best thing for me is not people’s sarcastic comments. It’s the link to a story. It’s “Hey, did you see this?” To me that’s the most useful part. Even though I describe it as snarky, I’ve also found it to be a very supportive place. When you take a stand, as I did on Sunday about the press not holding things back for national security reasons, you can get a lot of support there. You just have to be careful. Use it for the best possible reasons: to disseminate info, rather than to comment sarcastically.
Q: Between Kent and Rosen, I don’t think there is some higher power of morality that decides this. It depends on where you sit and what you own. If you own the NYT, you have billions of dollars in good will you’ve built up. Your audience comes to you with a certain expectation. There’s an inherent bias in what they cover, but also expectations about an effort toward objectivity. Social media is a distribution channel, not a place to bare your soul. A foreign correspondent for Time made a late-night blog post. (“I’d put a breathalyzer on keyboards,” he says.) A seasoned reporter said offhandedly that maybe the victim of some tragedy deserved it. This got distributed via social media as Time Magazine’s position. Reporters’ tweets should be edited first. The institution has every right to have a policy that constrains what reporters say on social media. But now there are legal cases. Social media has become an inalienable right. In the old days, the WSJ fired a reporter for handing out political leaflets in a subway station. If you’re Jay Rosen and your business is to throw bombs at the institutional media, and to say everything you do is wrong [!], then that’s ok. But if you own a newspaper, you have to stand up for objectivity.
A: I don’t disagree, although I think Jay is a thoughtful person.
Q: I blog on the HuffPo. But at Harvard, blogging is not considered professional. It’s thought of as tossed off…
A: A blog is just a delivery system. It’s not inherently good or bad, slapdash or well-researched. It’s a way to get your message out.
A: [Alex Jones] Actually there’s a fair number of people who blog at Harvard. The Berkman Center, places like that. [Thank you, Alex :)]
Q: How do you think about the evolution of your job as public editor? Are you thinking about how you interact with the readers and the rhythm of how you publish?
A: When I was brought in 5 months ago, they wanted to take it to the new media world. I was very interested in that. The original idea was to get rid of the print column altogether. But I wanted to do both. I’ve been doing both. It’s turned into a conversation with readers.
Q: People are deeply convinced of wrong ideas. Goebbels’ diaries show an upside down world in which Churchill is a gangster. How do you know what counts as fact?
A: Some things are just wrong. Paul Ryan was wrong about criticizing Obama for allowing a particular GM plant to close. The plant closed before Obama took office. That’s a correctable. When it’s more complex, we have to hear both sides out.
Then I got to ask the last question, which I asked so clumsily that it practically forced Margaret to respond, “Then you’re locking yourself into a single point of view, and that’s a bad way to become educated.” Ack.
I was trying to ask the same question as the prior one, but to get past the sorts of facts that Margaret noted. I think it’d be helpful to talk about the accuracy of facts (about which there are their own questions, of course) and focus the discussion of objectivity at least one level up the hermeneutic stack. I tried to say that I don’t feel bad about turning to partisan social networks when I need an explanation of the meaning of an event. For my primary understanding I’m going to turn to people with whom I share first principles, just as I’m not going to look to a Creationism site to understand some new paper about evolution. But I put this so poorly that I drew the Echo Chamber rebuke.
What it really comes down to, for me, is the theory of understanding and knowledge that underlies the pursuit of objectivity. Objectivity imagines a world in which we understand things by considering all sides from a fresh, open start. But in fact understanding is far more incremental, far more situated, and far more pragmatic than that. We understand from a point of view and a set of commitments. This isn’t a flaw in understanding. It is what enables understanding.
Nor does this free us from the responsibility to think through our opinions, to sympathetically understand opposing views, and to be open to the possibility that we are wrong. It’s just to say that understanding has a job to do. In most cases, it does that job by absorbing the new into our existing context. There is a time and place for revolution in our understanding. But that’s not the job we need to do as we try to make sense of the world pressing in on us. Reason can’t function in the world the way objectivity would like it to.
I’m glad the NY Times is taking these questions seriously, and Margaret is impressive (and not just because she takes Jay Rosen very seriously). I’m a little surprised that we’re still talking about objectivity, however. I thought that the discussion had usefully broken the concept up into questions of accuracy, balance, and fairness — with “balance” coming into question because of the cowardly he-said-she-said dodges that have become all too common, and that Margaret decries. I’m not sure what the concept of objectivity itself adds to this mix except a set of difficult assumptions.
I picked up a copy of Bernard Knox’s 1994 Backing into the Future because somewhere I saw it referenced about the weird fact that the ancient Greeks thought that the future was behind them. Knox presents evidence from The Odyssey and Oedipus the King to back this up, so to speak. But that’s literally on the first page of the book. The rest of it consists of brilliant and brilliantly written essays about ancient life and scholarship. Totally enjoyable.
True, he undoes one of my favorite factoids: that Greeks in Homer’s time did not have a concept of the body as an overall unity, but rather only had words for particular parts of the body. This notion comes most forcefully from Bruno Snell in The Discovery of Mind, although I first read about it — and was convinced — by a Paul Feyerabend essay. In his essay “What Did Achilles Look Like?,” Knox convincingly argues that the Greeks had both a word and a concept for the body as a unity. In fact, they may have had three. Knox then points to Homeric uses that seem to indicate, yeah, Homer was talking about a unitary body. E.g., “from the bath he [Odysseus] stepped, in body [demas] like the immortals,” and Poseidon “takes on the likeness of Calchas, in bodily form,” etc. [p. 52] I don’t read Greek, so I’ll believe whatever the last expert tells me, and Knox is the last expert I’ve read on this topic.
In a later chapter, Knox comes back to Bernard Williams’s criticism, in Shame and Necessity, of the “Homeric Greeks had no concept of a unitary body” idea, and also discusses another wrong thing that I had been taught. It turns out that the Greeks did have a concept of intention, decision-making, and will. Williams argues that they may not have had distinct words for these things, but Homer “and his characters make distinctions that can only be understood in terms of” those concepts. Further, Williams writes that Homer has
no word that means, simply, “decide.” But he has the notion…All that Homer seems to have left out is the idea of another mental action that is supposed necessarily to lie between coming to a conclusion and acting on it: and he did well in leaving it out, since there is no such action, and the idea of it is the invention of bad philosophy. [p. 228]
Wow. Seems pretty right to me. What does the act of “making a decision” add to the description of how we move from conclusion to action?
Knox also has a long appreciation of Martha Nussbaum’s The Fragility of Goodness (1986), which makes me want to go out and get that book immediately, although I suspect that Knox is making it considerably more accessible than the original. But it sounds breathtakingly brilliant.
Knox’s essay on Nussbaum, “How Should We Live,” is itself rich with ideas, but one piece particularly struck me. In Book 6 of the Nicomachean Ethics, Aristotle dismisses one of Socrates’ claims (that no one knowingly does evil) by saying that such a belief is “manifestly in contradiction with the phainomena.” I’ve always heard the word “phainomena” translated in (as Knox says) Baconian terms, as if Aristotle were anticipating modern science’s focus on the facts and careful observation. We generally translate phainomena as “appearances” and contrast it with reality. The task of the scientist and the philosopher is to let us see past our assumptions to reveal the thing as it shows itself (appears) free of our anticipations and interpretations, so we can then use those unprejudiced appearances as a guide to truths about reality.
But Nussbaum takes the word differently, and Knox is convinced. Phainomena are “the ordinary beliefs and sayings” and the sayings of the wise about things. Aristotle’s method consisted of straightening out whatever confusions and contradictions are in this body of beliefs and sayings, and then showing that at least the majority of those beliefs are true. This is a complete inversion of what I’d always thought. Rather than “attending to appearances” meaning dropping one’s assumptions to reveal the thing in its untouched state, it actually means taking those assumptions — of the many and of the wise — as containing truth. It is a confirming activity, not a penetrating and overturning one. Nussbaum says that for Aristotle (and in contrast to Plato), “Theory must remain committed to the ways human beings live, act, see.” (Note that it’s entirely possible I’m getting Aristotle, Nussbaum, and Knox wrong. A trifecta of misunderstanding!)
Nussbaum’s book sounds amazing, and I know I should have read it, oh, 20 years ago, but it came out the year I left the philosophy biz. And Knox’s book is just wonderful. If you ever doubted why we need scholars and experts — why would you think such a thing? — this book is a completely enjoyable reminder.
“The man who is incapable of working in common, or who in his self-sufficiency has no need of others, is no part of the community, like a beast, or a god.”
Aristotle, Politics, Book One, Chapter 2, this quotation translated by Bernard Knox in Backing into the Future.
Tagged with: aristotle • social media
Date: January 14th, 2013 dw
I woke up this morning from an anxiety dream about an event that doesn’t exist. In the dream, I’ve been tasked with replying to a presentation by someone talking about something philosophical, except they’ve never made clear to me who’s speaking or what he (it’s a he) is talking about. So, I write down some ideas, but then the guy doesn’t show up at the event, and I am in bed in the theater as the guy ahead of me gives his talk, and then I can’t find my shoes, and then I can’t find my notes. So, I scribble a new talk on a scrap of paper, and wake up before I go on stage.
I woke up from the dream with my notes complete in my head. Here are the notes, fleshed out so they’ll make some sense to people who are not me. But, it is very important to me that you understand that I know I am not a philosopher. I have a Ph.D. in philosophy, but even when I was teaching (1980-1986) I would never call myself a philosopher. There is nothing original or new in the following.
So, with those caveats, here are the notes for my talk as I dreamt them.
1. Philosophy is an interruption. During uneventful times, it is an interruption in the normal work of society, what my old teacher, Joseph Fell, described as an “open space of play.”
2. Interruptions in the content of philosophy can themselves be brought about by interruptions: by traumatic wars, plagues, genocides, revolutions in science, in technology, in economic infrastructures…
3. This is not supposed to happen because philosophers tend to think that philosophy shapes our understanding, not that it is shaped by the accidents of what is around us. Philosophy (Western, anyway) is supposed to transcend that stuff and deal with the eternal verities.
4. Except that it turns out that we’re situated creatures. Our understanding of our world depends on our culture, history, language, family, and even accidents of “fate.”
5. But it’s not that simple. We are shaped by our historical world, but how that world shapes us depends at least in part on how we understand that world.
6. The interruptive effect of technology on thought is especially significant when it is the technology by which philosophers engage in the activity of philosophy: talking, writing, talking about what’s been written.
7. Technology doesn’t determine how we understand it, but (a) insofar as the technology offers some possibilities and closes others, (b) insofar as it occurs within a situation that already has meaning, and (c) insofar as it is designed to be taken one way and not another, it affects our understanding of it. How we understand it in turn affects how we understand our world, and how philosophers understand philosophy.
8. The mixed-up mutual effect of thing and world happens because we think in the world by using the things of the world. (Thank you Heidegger, and thank you Andy Clark.) The relation of the two is not mystical.
9. Finally, none of the above escapes the situatedness of our existence. The concept of an interruption itself implies a belief that there is a normalcy of existence — something that is capable of being interrupted — and that belief is itself situated.
Tagged with: dreams
Date: December 24th, 2012 dw
Last night I gave a talk at the Festival of Science in Genoa (or, as they say in Italy, Genova). I was brought over by Codice Edizioni, the publisher of the just-released Italian version of Too Big to Know (or, as they say in Italy “La Stanza Intelligente” (or as they say in America, “The Smart Room”)). The event was held in the Palazzo Ducale, which ain’t no Elks Club, if you know what I mean. And if you don’t know what I mean, what I mean is that it’s a beautiful, arched, painted-ceiling room that holds 800 people and one intimidated American.
After my brief talk, Serena Danna of Corriere della Sera interviewed me. She’s really good. For example, her first question was: If the facts no longer have the ability to settle arguments the way we hoped they would, then what happens to truth?
Yeah, way to pitch the ol’ softballs, Serena!
I wasn’t satisfied with my answer, which had three parts. (1) There are facts. The world is one way and not all the other ways that it isn’t. You are not free to make up your own facts. [Yes, I’m talking to you, Mitt!] (2) The basing of knowledge primarily on facts is a relatively new phenomenon. (3) I explicitly invoked Heidegger’s concept of truth, with a soupçon of pragmatism’s view of truth as a tool intended to serve a purpose.
Meanwhile, I’ve been watching The Heidegger Circle mailing list contort itself trying to understand Heidegger’s views about the world that existed before humans entered the scene. Was there Being? Were there beings? It seems to me that any answer has to begin by saying, “Of course the world existed before we did.” But not everyone on the list is comfortable with a statement that simple. Some seem to think that acknowledging that most basic fact somehow diminishes Heidegger’s analysis of the relation of Being and disclosure. Yo, Heideggerians! The world shows itself to us as independent of us. We were born into it, and it keeps going after we’ve died. If that’s a problem for your philosophy, then your philosophy is a problem. And for all of the problems with Heidegger’s philosophy, that just isn’t one. (To be fair, no one on the list suggests that the existence of the universe depends upon our awareness of it, although some are puzzled about how to maintain Heidegger’s conception of “world” (which does seem to depend on us) with that which survives our awareness of it. Heidegger, after all, offers phenomenological ontology, so there is a question about what Being looks like when there is no one to show itself to.)
So, I wasn’t very happy with what I said about truth last night. I said that I liked Heidegger’s notion that truth is the world showing itself to us, and that it shows itself to us differently depending on our projects. I’ve always liked this idea for a few reasons. First, it’s phenomenologically true: the onion shows itself differently depending on whether you’re intending to cook it, whether you’re trying to grow it as a cash crop, whether you’re trying to make yourself cry, whether you’re trying to find something to throw at a bad actor, etc. Second, because truth is the way the world shows itself, Heidegger’s sense contains the crucial acknowledgement that the world exists independently of us. Third, because this sense of truth looks to our projects, it contains the crucial acknowledgement that truth is not independent of our involvement in the world (which Heidegger accurately characterizes not with the neutral term “involvement” but as our caring about what happens to us and to our fellow humans). Fourth, it gives us a way of thinking about truth without the correspondence theory’s schizophrenic metaphysics, which tells us that we live inside our heads, and that our mental images can either match or fail to match external reality.
But Heidegger’s view of truth doesn’t do the job that we want done when we’re trying to settle disagreements. Heidegger observes (correctly in my and everybody’s opinion) that different fields have different methodologies for revealing the truth of the world. He speaks coldly (it seems to me) of science, and warmly of poetry. I’m much hotter on science. Science provides a methodology for letting the world show itself (= truth) that is reproducible precisely so that we can settle disputes. For settling disputes about what the world is like regardless of our view of it, science has priority, just as the legal system has priority for settling disputes over the law.
This matters a lot not just because of the spectacular good that science does, but because the question of truth only arises because we sense that something is hidden from us. Science does not uncover all truths but it uniquely uncovers truths about which we can agree. It allows the world to speak in a way that compels agreement. In that sense, of all the disciplines and methodologies, science is the closest to giving the earth we all share its own authentic voice. That about which science cannot speak in a compelling fashion across all cultures and starting points is simply not subject to scientific analysis. Here the poets and philosophers can speak and should be heard. (And of course the compulsive force science manifests is far from beyond resistance and doubt.)
But, when we are talking about the fragmenting of belief that the Internet facilitates, and the fact that facts no longer settle arguments across those gaps, then it is especially important that we commit to science as the discipline that allows the earth to speak of itself in its most compelling terms.
Finally, I was happy that last night I did manage to say that science provides a model for trying to stay smart on the Internet because it is highly self-aware about what it knows: it does not simply hold on to true statements, but is aware of the methodology that led us to see those statements as true. This type of meta awareness — not just within the realm of science — is crucial for a medium as open as the Internet.