
March 2, 2013

[misc] The Wars on Terrorism, Al Qaeda, Cancer, and Dessert

Steve Coll has a good piece in the New Yorker about the importance of Al Qaeda as a brand:

…as long as there are bands of violent Islamic radicals anywhere in the world who find it attractive to call themselves Al Qaeda, a formal state of war may exist between Al Qaeda and America. The Hundred Years War could seem a brief skirmish in comparison.

This is a different category of issue than the oft-criticized “war on terror,” which is a war against a tactic, not against an enemy. The war against Al Qaeda implies that there is a structurally unified enemy organization. How do you declare victory against a group that refuses to enforce its trademark?

In this, the war against Al Qaeda (which is quite preferable to a war against terror — and I think Steve agrees) is similar to the war on cancer. Cancer is not a single disease, and the various things we call cancer are unlikely to have a single cause and thus are unlikely to have a single cure (or so I have been told). While this line of thinking would seem to reinforce politicians’ referring to terrorism as a “cancer,” the same applies to dessert. Each of these terms probably does have a single identifying characteristic, which means they are not classic examples of Wittgensteinian family resemblances: all terrorism involves a non-state attack that aims at terrifying the civilian population, all cancers involve “unregulated cell growth” [thank you Wikipedia!], and all desserts are designed primarily for taste, not nutrition, and are intended to end a meal. In fact, the war on Al Qaeda is actually more like the war on dessert than like the war on cancer, because just as there will always be some terrorist group that takes up the Al Qaeda name, there will always be some boundary-pushing chef who declares that beef jerky or glazed ham cubes are the new dessert. You can’t defeat an enemy that can just rebrand itself.

I think that Steve Coll comes to the wrong conclusion, however. He ends his piece this way:

Yet the empirical case for a worldwide state of war against a corporeal thing called Al Qaeda looks increasingly threadbare. A war against a name is a war in name only.

I agree with the first sentence, but I draw two different conclusions. First, this has little bearing on how we actually respond to terrorism. The thinking that has us attacking terrorist groups (and at times their family gatherings) around the world is not made threadbare by the misnomer “war against Al Qaeda.” Second, isn’t it empirically obvious that a war against a name is not a war in name only?


February 12, 2013

[2b2k] Margaret Sullivan on Objectivity

Margaret Sullivan [twitter:Sulliview] is the public editor of the New York Times. She’s giving a lunchtime talk at the Harvard Shorenstein Center [twitter:ShorensteinCtr]. Her topic is: how is social media changing journalism? She says she’s open to any other topic during the Q&A as well.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Margaret says she’s going to talk about Tom Kent, the standards editor for the Associated Press, and Jay Rosen [twitter:jayrosen_nyu]. She begins by saying she respects them both. [Disclosure: Jay is a friend] She cites Tom [which I'm only getting roughly]: At heart, objective journalism sets out to establish the facts, state the range of opinions, and take a first cut at which arguments are the most rigorous. Journalists should show their commitment to balance by keeping their opinions to themselves. Tom wrote a memo to his staff (leaked to Romenesko) about expressing personal opinions on social networks. [Margaret wrote an excellent column about this a month ago.]

Jay Rosen, she says, thinks that objectivity is an outdated concept. Journalists should tell their readers where they’re coming from so you can judge their output based on that. “The grounds for trust are slowly shifting. The view from nowhere is getting harder to trust, and ‘here’s where I’m coming from’ is becoming more trustworthy.” [approx] Objectivity is a cop-out, says Jay.

Margaret says that these are the two poles, although both are very reasonable people.

Now she’s going to look at two real situations. The NYT Jerusalem bureau chief Jodi Rudoren is relatively new. It is one of the most difficult positions. Within a few weeks she had sent some “twitter messages” (NYT won’t allow the word “tweets,” she says, although when I tweeted this, some people disagreed; Alex Jones and Margaret bantered about this, so she was pretty clear about the policy). She was criticized for who she praised in the tweets, e.g., Peter Beinart. She also linked without comment to a pro-Hezbollah newspaper. The NYT had an editor “work with her” on her social media; that is, she no longer had free access to those media. Margaret notes that many believe “this is against the entire ethos of social media. If you’re going to be on social media, you don’t want a NYT editor sitting next to you.”

The early reporting from Newtown was “pretty bad” across the entire media, she says. In the first few hours, a shooter was named — Ryan Lanza — and a Facebook photo of him was shown. But it was the wrong Ryan Lanza. And then it turned out that the shooter was actually the other Ryan Lanza’s brother. The NYT in its early Web reporting said “according to early Web reports” the shooter was Ryan Lanza. Lots of other wrong information was floated, and got into early Web reports (although generally not into the NYT). “Social media was a double-edged sword because it perpetuated these inaccuracies and then worked to correct them.” It often happens that way, she says.

So, where’s the right place to be on the spectrum between Tom and Jay? “It’s no longer possible to be completely faceless. Journalists are on social media. They’re honing their personal brands. Their newspapers are there…They’re trying to use the Web to get their message out, and in that process they’re exposing who they are. Is that a bad thing? Is it a bad thing for us to know what a political reporter’s politics are? I don’t think that question is easily answerable now. I come down a little closer to where Tom Kent is. I think that it makes a lot of sense for hard news reporters … for the White House reporter, I think it makes a lot of sense to keep their politics under wraps. I don’t see how it helps for people to be prejudging and distrusting them because ‘You’re in the tank for so-and-so.'” Phil Corbett, the standards editor for the NYT, rejects the idea there is no impartial journalism. He rejects that it’s a pretense or charade.

Margaret says, “The one thing I’m very sure of is that this business of impartiality and balance should no longer mean” going down the middle in a he-said-she-said. That’s false equivalence. “That’s changing and should change.” There are facts that we fully believe are true. Evolution and Creationism are not equivalents.

Q&A

Q: Alex Jones: It used to be that the NYT wouldn’t let you cite an anonymous negative comment, along the lines of “This or that person sucks.”

A: Everyone agrees doing so is bad, but I still see it from time to time.

Q: Alex Jones: The NYT policy used to be that you must avoid an appearance of conflict of interest. E.g., a reporter’s son was in the Israeli Army. Should that reporter be forbidden from covering Israel?

A: When Ethan Bronner went to cover Israel, his son wasn’t in the military. But then his son decided to join up. “It certainly wasn’t ideal.” Should Ethan have been yanked out the moment his son joined? I’m not sure, Margaret says. It’s certainly problematic. I don’t know the answer.

Q: Objectivity doesn’t always draw a clear line. How do you engage with people whose ideas are diametrically opposed to yours?

A: Some issues are extremely difficult and you’re probably not going to come to a meeting of the minds on it. Be respectful. Accept that you’re not going to make much headway.

Q: Wouldn’t transparency fragment the sources? People will only listen to sources that agree.

A: Yes, this further fractures a fractured environment. It’s useful to have some news sources that set out to be in neither camp. The DC bureau chief of the NYT knows a lot about economics. For him to tell us about his views on that is helpful, but it doesn’t help to know who he voted for.

Q: [Martin Nisenholtz] The NYT audience is smart but it hasn’t lit up the NYT web site. Do you think the NYT should be a place where people can freely offer their opinions/reviews even if they’re biased? E.g., at Yelp you don’t know if the reviewer is the owner, a competitor… How do you feel about this central notion of user ID and the intersection with commentary?

A: I disagree that readers haven’t lit up the web site. The commentary beneath stories is amazing…

Q: I meant in reviews, not hard news…

A: A real ID policy improves the tenor.

Q: How about the snarkiness of twitter?

A: The best way to be mocked on Twitter is to be earnest. It’s a place to be snarky. It’s regrettable. Reporters should be very careful before they hit the “tweet” button. The tone is a problem.

Q: If you want to build a community — and we reporters are constantly pushed to do that — you have to engage your readers. How can you do that without disclosing your stands? We all have opinions, and we share them with a circle we feel safe in. But sometimes those leak. I’d hope that my paper would protect me.

A: I find Twitter to be invaluable. Incredible news source. Great way to get your message out. The best thing for me is not people’s sarcastic comments. It’s the link to a story. It’s “Hey, did you see this?” To me that’s the most useful part. Even though I describe it as snarky, I’ve also found it to be a very supportive place. When you take a stand, as I did on Sunday about the press not holding things back for national security reasons, you can get a lot of support there. You just have to be careful. Use it for the best possible reasons: to disseminate info, rather than to comment sarcastically.

Q: Between Kent and Rosen, I don’t think there is some higher power of morality that decides this. It depends on where you sit and what you own. If you own the NYT, you have billions of dollars in good will you’ve built up. Your audience comes to you with a certain expectation. There’s an inherent bias in what they cover, but also expectations about an effort toward objectivity. Social media is a distribution channel, not a place to bare your soul. A foreign correspondent for Time made a late-night blog post. (“I’d put a breathalyzer on keyboards,” he says.) A seasoned reporter said offhandedly that maybe the victim of some tragedy deserved it. This got distributed via social media as Time Mag’s position. Reporters’ tweets should be edited first. The institution has every right to have a policy that constrains what reporters say on social media. But now there are legal cases. Social media has become an inalienable right. In the old days, the WSJ fired a reporter for handing out political leaflets in a subway station. If you’re Jay Rosen and your business is to throw bombs at the institutional media, and to say everything you do is wrong [!], then that’s ok. But if you own a newspaper, you have to stand up for objectivity.

A: I don’t disagree, although I think Jay is a thoughtful person.

Q: I blog on the HuffPo. But at Harvard, blogging is not considered professional. It’s thought of as tossed off…

A: A blog is just a delivery system. It’s not inherently good or bad, slapdash or well-researched. It’s a way to get your message out.

A: [Alex Jones] Actually there’s a fair number of people who blog at Harvard. The Berkman Center, places like that. [Thank you, Alex :)]

Q: How do you think about the evolution of your job as public editor? Are you thinking about how you interact with the readers and the rhythm of how you publish?

A: When I was brought in 5 months ago, they wanted to take it to the new media world. I was very interested in that. The original idea was to get rid of the print column altogether. But I wanted to do both. I’ve been doing both. It’s turned into a conversation with readers.

Q: People are deeply convinced of wrong ideas. Goebbels’ diaries show an upside down world in which Churchill is a gangster. How do you know what counts as fact?

A: Some things are just wrong. Paul Ryan was wrong about criticizing Obama for allowing a particular GM plant to close. The plant closed before Obama took office. That’s a correctable. When it’s more complex, we have to hear both sides out.


Then I got to ask the last question, which I asked so clumsily that it practically forced Margaret to respond, “Then you’re locking yourself into a single point of view, and that’s a bad way to become educated.” Ack.

I was trying to ask the same question as the prior one, but to get past the sorts of facts that Margaret noted. I think it’d be helpful to talk about the accuracy of facts (which raises its own questions, of course) and focus the discussion of objectivity at least one level up the hermeneutic stack. I tried to say that I don’t feel bad about turning to partisan social networks when I need an explanation of the meaning of an event. For my primary understanding I’m going to turn to people with whom I share first principles, just as I’m not going to look to a Creationism site to understand some new paper about evolution. But I put this so poorly that I drew the Echo Chamber rebuke.

What it really comes down to, for me, is the theory of understanding and knowledge that underlies the pursuit of objectivity. Objectivity imagines a world in which we understand things by considering all sides from a fresh, open start. But in fact understanding is far more incremental, far more situated, and far more pragmatic than that. We understand from a point of view and a set of commitments. This isn’t a flaw in understanding. It is what enables understanding.

Nor does this free us from the responsibility to think through our opinions, to sympathetically understand opposing views, and to be open to the possibility that we are wrong. It’s just to say that understanding has a job to do. In most cases, it does that job by absorbing the new into our existing context. There is a time and place for revolution in our understanding. But that’s not the job we need to do as we try to make sense of the world pressing in on us. Reason can’t function in the world the way objectivity would like it to.


I’m glad the NY Times is taking these questions seriously, and Margaret is impressive (and not just because she takes Jay Rosen very seriously). I’m a little surprised that we’re still talking about objectivity, however. I thought that the discussion had usefully broken the concept up into questions of accuracy, balance, and fairness — with “balance” coming into question because of the cowardly he-said-she-said dodges that have become all too common, and that Margaret decries. I’m not sure what the concept of objectivity itself adds to this mix except a set of difficult assumptions.


January 24, 2013

Attending to appearances

I picked up a copy of Bernard Knox’s 1994 Backing into the Future because somewhere I saw it cited for the weird fact that the ancient Greeks thought that the future was behind them. Knox presents evidence from The Odyssey and Oedipus the King to back this up, so to speak. But that’s literally on the first page of the book. The rest of it consists of brilliant and brilliantly written essays about ancient life and scholarship. Totally enjoyable.

True, he undoes one of my favorite factoids: that Greeks in Homer’s time did not have a concept of the body as an overall unity, but rather only had words for particular parts of the body. This notion comes most forcefully from Bruno Snell in The Discovery of the Mind, although I first read about it — and was convinced — by a Paul Feyerabend essay. In his essay “What Did Achilles Look Like?,” Knox convincingly argues that the Greeks had both a word and a concept for the body as a unity. In fact, they may have had three. Knox then points to Homeric uses that seem to indicate that, yeah, Homer was talking about a unitary body. E.g., “from the bath he [Odysseus] stepped, in body [demas] like the immortals,” and Poseidon “takes on the likeness of Calchas, in bodily form,” etc. [p. 52] I don’t read Greek, so I’ll believe whatever the last expert tells me, and Knox is the last expert I’ve read on this topic.

In a later chapter, Knox comes back to Bernard Williams’s criticism, in Shame and Necessity, of the “Homeric Greeks had no concept of a unitary body” idea, and also discusses another wrong thing that I had been taught. It turns out that the Greeks did have a concept of intention, decision-making, and will. Williams argues that they may not have had distinct words for these things, but Homer “and his characters make distinctions that can only be understood in terms of” those concepts. Further, Williams writes that Homer has

no word that means, simply, “decide.” But he has the notion…All that Homer seems to have left out is the idea of another mental action that is supposed necessarily to lie between coming to a conclusion and acting on it: and he did well in leaving it out, since there is no such action, and the idea of it is the invention of bad philosophy. [p. 228]

Wow. Seems pretty right to me. What does the act of “making a decision” add to the description of how we move from conclusion to action?

Knox also has a long appreciation of Martha Nussbaum’s The Fragility of Goodness (1986), which makes me want to go out and get that book immediately, although I suspect that Knox is making it considerably more accessible than the original. But it sounds breathtakingly brilliant.

Knox’s essay on Nussbaum, “How Should We Live,” is itself rich with ideas, but one piece particularly struck me. In Book 6 of the Nicomachean Ethics, Aristotle dismisses one of Socrates’ claims (that no one knowingly does evil) by saying that such a belief is “manifestly in contradiction with the phainomena.” I’ve always heard the word “phainomena” translated in (as Knox says) Baconian terms, as if Aristotle were anticipating modern science’s focus on the facts and careful observation. We generally translate phainomena as “appearances” and contrast it with reality. The task of the scientist and the philosopher is to let us see past our assumptions to reveal the thing as it shows itself (appears) free of our anticipations and interpretations, so we can then use those unprejudiced appearances as a guide to truths about reality.

But Nussbaum takes the word differently, and Knox is convinced. Phainomena are “the ordinary beliefs and sayings” and the sayings of the wise about things. Aristotle’s method consisted of straightening out whatever confusions and contradictions are in this body of beliefs and sayings, and then showing that at least the majority of those beliefs are true. This is a complete inversion of what I’d always thought. Rather than “attending to appearances” meaning dropping one’s assumptions to reveal the thing in its untouched state, it actually means taking those assumptions — of the many and of the wise — as containing truth. It is a confirming activity, not a penetrating and overturning one. Nussbaum says for Aristotle (and in contrast to Plato), “Theory must remain committed to the ways human beings live, act, see.” (Note that it’s entirely possible I’m getting Aristotle, Nussbaum, and Knox wrong. A trifecta of misunderstanding!)

Nussbaum’s book sounds amazing, and I know I should have read it, oh, 20 years ago, but it came out the year I left the philosophy biz. And Knox’s book is just wonderful. If you ever doubted why we need scholars and experts — why would you think such a thing? — this book is a completely enjoyable reminder.


January 14, 2013

What gods and beasts have in common

“The man who is incapable of working in common, or who in his self-sufficiency has no need of others, is no part of the community, like a beast, or a god.”


Aristotle, Politics, Book One, Chapter 2, this quotation translated by Bernard Knox in Backing into the Future.


December 24, 2012

Philosophy as interruption

I woke up this morning from an anxiety dream about an event that doesn’t exist. In the dream, I’ve been tasked with replying to a presentation by someone talking about something philosophical, except they’ve never made clear to me who’s speaking or what he (it’s a he) is talking about. So, I write down some ideas, but then the guy doesn’t show up at the event, and I am in bed in the theater as the guy ahead of me gives his talk, and then I can’t find my shoes, and then I can’t find my notes. So, I scribble a new talk on a scrap of paper, and wake up before I go on stage.

I woke up from the dream with my notes complete in my head. Here are the notes, fleshed out so they’ll make some sense to people who are not me. But, it is very important to me that you understand that I know I am not a philosopher. I have a Ph.D. in philosophy, but even when I was teaching (1980-1986) I would never call myself a philosopher. There is nothing original or new in the following.

So, with those caveats, here are the notes for my talk as I dreamt them.

1. Philosophy is an interruption. During uneventful times, it is an interruption in the normal work of society, what my old teacher, Joseph Fell, described as an “open space of play.”

2. Interruptions in the content of philosophies can be brought about by interruptions: by traumatic wars, plagues, genocides, revolutions in science, in technology, in economic infrastructures…

3. This is not supposed to happen because philosophers tend to think that philosophy shapes our understanding, not that it is shaped by the accidents of what is around us. Philosophy (Western, anyway) is supposed to transcend that stuff and deal with the eternal verities.

4. Except that it turns out that we’re situated creatures. Our understanding of our world depends on our culture, history, language, family, and even accidents of “fate.”

5. But it’s not that simple. We are shaped by our historical world, but how that world shapes us depends at least in part on how we understand that world.

6. The interruptive effect of technology on thought is especially significant when it is the technology by which philosophers engage in the activity of philosophy: talking, writing, talking about what’s been written.

7. Technology doesn’t determine how we understand it, but (a) insofar as the technology offers some possibilities and closes others, (b) insofar as it occurs within a situation that already has meaning, and (c) insofar as it is designed to be taken one way and not another, it affects our understanding of it. How we understand it in turn affects how we understand our world, and how philosophers understand philosophy.

8. The mixed-up mutual effect of thing and world happens because we think in the world by using the things of the world. (Thank you Heidegger, and thank you Andy Clark.) The relation of the two is not mystical.

9. Finally, none of the above escapes the situatedness of our existence. The concept of an interruption itself implies a belief that there is a normalcy of existence — something that is capable of being interrupted — and that belief is itself situated.


October 28, 2012

[2b2k] Facts, truths, and meta-knowledge

Last night I gave a talk at the Festival of Science in Genoa (or, as they say in Italy, Genova). I was brought over by Codice Edizioni, the publisher of the just-released Italian version of Too Big to Know (or, as they say in Italy “La Stanza Intelligente” (or as they say in America, “The Smart Room”)). The event was held in the Palazzo Ducale, which ain’t no Elks Club, if you know what I mean. And if you don’t know what I mean, what I mean is that it’s a beautiful, arched, painted-ceiling room that holds 800 people and one intimidated American.

[photo: Palazzo Ducale, Genova]


After my brief talk, Serena Danna of Corriere della Sera interviewed me. She’s really good. For example, her first question was: If the facts no longer have the ability to settle arguments the way we hoped they would, then what happens to truth?


Yeah, way to pitch the ol’ softballs, Serena!


I wasn’t satisfied with my answer, which had three parts. (1) There are facts. The world is one way and not all the other ways that it isn’t. You are not free to make up your own facts. [Yes, I'm talking to you, Mitt!] (2) The basing of knowledge primarily on facts is a relatively new phenomenon. (3) I explicitly invoked Heidegger’s concept of truth, with a soupçon of pragmatism’s view of truth as a tool intended to serve a purpose.


Meanwhile, I’ve been watching The Heidegger Circle mailing list contort itself trying to understand Heidegger’s views about the world that existed before humans entered the scene. Was there Being? Were there beings? It seems to me that any answer has to begin by saying, “Of course the world existed before we did.” But not everyone on the list is comfortable with a statement that simple. Some seem to think that acknowledging that most basic fact somehow diminishes Heidegger’s analysis of the relation of Being and disclosure. Yo, Heideggerians! The world shows itself to us as independent of us. We were born into it, and it keeps going after we’ve died. If that’s a problem for your philosophy, then your philosophy is a problem. And for all of the problems with Heidegger’s philosophy, that just isn’t one. (To be fair, no one on the list suggests that the existence of the universe depends upon our awareness of it, although some are puzzled about how to maintain Heidegger’s conception of “world” (which does seem to depend on us) with that which survives our awareness of it. Heidegger, after all, offers phenomenological ontology, so there is a question about what Being looks like when there is no one to show itself to.)


So, I wasn’t very happy with what I said about truth last night. I said that I liked Heidegger’s notion that truth is the world showing itself to us, and that it shows itself to us differently depending on our projects. I’ve always liked this idea for a few reasons. First, it’s phenomenologically true: the onion shows itself differently depending on whether you’re intending to cook it, trying to grow it as a cash crop, trying to make yourself cry, trying to find something to throw at a bad actor, etc. Second, because truth is the way the world shows itself, Heidegger’s sense contains the crucial acknowledgement that the world exists independently of us. Third, because this sense of truth looks to our projects, it contains the crucial acknowledgement that truth is not independent of our involvement in the world (which Heidegger accurately characterizes not with the neutral term “involvement” but as our caring about what happens to us and to our fellow humans). Fourth, this gives us a way of thinking about truth without the correspondence theory’s schizophrenic metaphysics that tells us that we live inside our heads, and our mental images can either match or fail to match external reality.


But Heidegger’s view of truth doesn’t do the job that we want done when we’re trying to settle disagreements. Heidegger observes (correctly in my and everybody’s opinion) that different fields have different methodologies for revealing the truth of the world. He speaks coldly (it seems to me) of science, and warmly of poetry. I’m much hotter on science. Science provides a methodology for letting the world show itself (= truth) that is reproducible precisely so that we can settle disputes. For settling disputes about what the world is like regardless of our view of it, science has priority, just as the legal system has priority for settling disputes over the law.


This matters a lot not just because of the spectacular good that science does, but because the question of truth only arises because we sense that something is hidden from us. Science does not uncover all truths but it uniquely uncovers truths about which we can agree. It allows the world to speak in a way that compels agreement. In that sense, of all the disciplines and methodologies, science is the closest to giving the earth we all share its own authentic voice. That about which science cannot speak in a compelling fashion across all cultures and starting points is simply not subject to scientific analysis. Here the poets and philosophers can speak and should be heard. (And of course the compelling force science manifests is far from beyond resistance and doubt.)


But, when we are talking about the fragmenting of belief that the Internet facilitates, and the fact that facts no longer settle arguments across those gaps, then it is especially important that we commit to science as the discipline that allows the earth to speak of itself in its most compelling terms.


Finally, I was happy that last night I did manage to say that science provides a model for trying to stay smart on the Internet because it is highly self-aware about what it knows: it does not simply hold on to true statements, but is aware of the methodology that led us to see those statements as true. This type of meta awareness — not just within the realm of science — is crucial for a medium as open as the Internet.


July 8, 2012

Louis C.K. and the Decent Net, or How Louis won the Internet

(This is the lead article in the new issue of my free and highly intermittent newsletter, JOHO. Also in it, a Higgs-Bogus Contest on particles that would explain mysteries of the Internet.)

 

Louis C.K. now famously sold his latest comedy album over the Internet direct to his audience for $5, with no DRM to get in the way of our ability to play it on any device we want, and even to share it. After making over a million dollars in a few days (and after giving most of his profits to his staff and to charity) Louis went to great pains to schedule his upcoming comedy tour in venues not beholden to their TicketMasters, so that he could sell tickets straight to his audience for a flat $45, free of scalpers. So far he’s made over $6 million in ticket sales.

But Louis C.K. also thereby — in the vocabulary of Reddit — won the Internet.

There are lots of reasons to be heartened by Louis’ actions and by his success: He is validating new business models that could spread. He is demonstrating his trust in his audience. He is protecting his audience while making the relationship more direct. He is not being greedy. But it seems to me that Louis is demonstrating one more point that is especially important. Louis C.K. won the Internet by reminding us that the Internet offers us a chance for a moral do-over.

 


Way back in the early days of all of this Internet madness, many of us thought that the Internet was a new beginning, an opportunity to get things right. That’s why we looked at all The Hullabaloo about the Net as missing The Point. The Hullabaloo saw the Net as a way to drive out some of the inefficiencies of the physical world of business. The Point was that the Net would let us build new ways of treating one another that would be fairer, more fully supportive of human flourishing, and thus more representative of the best of what it means to be human together.

We optimists were not entirely wrong, but not as right as we had hoped. Even as late as the turn of the century, the early blogging community thought it was forging not only a new community, but a new type of community, one with social ties made visible as blue underlined text. That original community has maintained itself rather well, and the amount of generosity and collaboration the Net has occasioned continues to confound the predictions of the pessimists. But clearly the online world did not become one big blogosphere of love.

It’s difficult, and ultimately rather silly, to try to quantify the unfathomable depth of depravity, skullduggery and plain old greed exhibited on the Net, and compare it to a cumulative calculus of the Net’s loveliness. For example, most email is spam that treats its recipients as means, not ends, but the bulk of it is sent by a tiny percentage of email users. Should we compare the number of bits or of bastards? How do we weigh phishing against the time people put in answering the questions of strangers? How do we measure the casual hatred exhibited in long streams of YouTube comments against the purposeful altruism and caring exhibited at the best of Reddit? How do we total up the casual generosity of every link that leads a reader away from the linker’s site to some other spot? Fortunately, we do not have to resolve these questions. We can instead acknowledge that the Net provides yet another place in which we play out our moral natures.

But its accessibility, its immediacy, its malleability, and its weird physics provide a place where we can invent new ways of doing old things like buying music and concert tickets — new ways in which we can state what we think counts, new ways in which we can assert our better or worse moral natures.

 


I am of course not suggesting that Louis C.K. is a moral messiah or that he “won the Internet” is anything except playful overstatement. I’m instead suggesting a way of interpreting the very positive response to his relatively modest actions on the Net: we responded so positively because we saw in those actions the Net as a moral opportunity.

We responded this way, I’d suggest, in part because Louis C.K. is not of the Internet. His Web site made that very clear when Louis charmingly claimed, “Look, I don’t really get the whole ‘torrent’ thing. I don’t know enough about it to judge either way.” He goes on to urge us to live up to the trust he’s placed in us. He’s thus not behaving by some Internet moral code. Rather, he’s applying Old World morality to the Net. It is not a morality of principles, but of common decency.

And herewith begins a totally unnecessary digression…

This is consistent with Louis’ comedy. His series fits within the line that began with Seinfeld and continued into Curb Your Enthusiasm, but not just because all three make us squirm.

Seinfeld was a comedy of norms: people following arbitrary rules as if they were divine commandments. Sometimes the joke was the observation of rules that we all follow blindly: No double dipping! Sometimes the joke was the arbitrariness of rules the show made up: No soup for you! (Yes, I realize the Soup Nazi was based on a real soup guy, but the success of the script didn’t depend on us knowing that.) Seinfeld’s characters are too self-centered to live by anything more than norms. And, in a finale that most people liked less than I did, they are at last confronted with their lack of moral substance.

Curb Your Enthusiasm is a comedy of principles, albeit with a whole lot of norms thrown in. Larry and his world are made unlivable by people (including Larry) who try to live by moral rules. Hum a bit of Wagner while passing by a Jew, and you’re likely to touch off some righteous indignation as if you were siding with the Nazis. Larry won’t give kids without a costume any Halloween candy, and then can’t resist telling a cop with a shaven head that the cop isn’t actually bald according to Larry’s principled definition. In a parody of rule-based life, Larry takes advantage of the rule governing handicapped toilet stalls. (See also.) In Curb the duties of friendship are carefully laid out, and are to be followed even when they make no sense. Larry’s life is pretty much ruined by the adherence to principles.

Louis is less about norms and principles than about doing the right thing in a world unguided by norms and principles, and in which human weakness is assumed. When a male southern cop who has saved his life asks to be thanked by being kissed on the lips, Louis reasons out loud that he can’t think of any reason not to. So he does. Norms are there to be broken when they get in the way of a human need, such as the need to feel appreciated. Nor do principles much matter, except the principle “Thou shalt not be a dick.” So, Louis watches bemused as an airline passenger becomes righteously indignant because his reservation wasn’t honored. The passenger had principle on his side, but is cast as the transgressor because he’s acting like a d-bag. In his Live at the Beacon Theater show, Louis contrasts the norm against using the word “fag” with nondiscriminatory behavior and attitude. (I’d like to hear what Lisa Nakamura has to say about this.)

And because Louis is a comedian, the humor is in the human failure to live up to even this simple ideal of not being a total a-hole. In his $5 comedy album, Louis relates how he thought about giving up his first class airplane seat to a soldier in uniform. Not only doesn’t Louis give up his seat, he then congratulates himself for being the sort of person who would think of such a thing. Giving up your seat is neither a norm nor a principle. It is what people who rise above dickhood do.

So, here’s why I think this is relevant.

The Internet is a calamity of norms. Too many cultures, too many localities, too many communities, each with its own norms. And there’s no global agreement on principles that will sort things out for us. In fact, people who disagree based on principles often feel entitled to demonize their opponents because they differ on principles. The only hope for living together morally on the Net is to try not to be dicks to one another. I’m not saying it’s obvious how to apply that rule. And I’m certainly not saying that we’ll succeed at it. But now that we’ve been thrown together without any prior agreement on norms or principles, what else can we do except try to treat each other with trust and a touch of sympathy?

That’s what Louis C.K.’s gestures embody. Many of us have responded warmly to them because they are moral in the most basic way: Let’s try to treat one another well, or at least not be total dicks, ok? Louis C.K.’s gestures were possible because the Net lets us try out new relationships and practices. Those gestures therefore remind us of our larger hope for the Net and for ourselves — not that the Net will drive out all rotten behavior, but that we can replace some corrupt practices with better ones. We can choose to dwell together more decently.

Nothing more than that. But also nothing less.


December 4, 2011

[2b2k] Truth, knowledge, and not knowing: A response to “The Internet Ruins Everything”

Quentin Hardy has written up on the NYT Bits blog the talk I gave at UC Berkeley’s School of Information a few days ago, refracting it through his intelligence and interests. It’s a terrific post and I appreciate it. [Later that day: Here's another perspicacious take on the talk, from Marcus Banks.]

I want to amplify the answer I gave to Quentin’s question at the event. And I want to respond to the comments on his post that take me as bemoaning the fate of knowledge in the age of the Net. The post itself captures my enthusiasm about networked knowledge, but the headline of Quentin’s post is “The Internet ruins everything,” which could easily mislead readers. I am overall thrilled about what’s happening to knowledge.

Quentin at the event noted that the picture of networked knowledge I’d painted maps closely to postmodern skepticism about the assumption that there are stable, eternal, knowable truths. So, he asked, did we invent the Net as a tool based on those ideas, or did the Net just happen to instantiate them? I replied that the question is too hard, but that it doesn’t much matter that we can’t answer it. I don’t think I did a very good job explaining either part of my answer. (You can hear the entire talk and questions here. The bit about truth starts at 46:36. Quentin’s question begins at 1:03:19.)

It’s such a hard question because it requires us to disentangle media from ideas in a way that the hypothesis of entanglement itself doesn’t allow. Further, the play of media and ideas occurs on so many levels of thought and society, and across so many forms of interaction and influence, that the results are emergent.

It doesn’t matter, though, because even if we understood how it works, we still couldn’t stand apart from the entanglement of media and ideas to judge those ideas independent of our media-mediated involvement with them. We can’t ever get a standpoint that isn’t situated within that entanglement. (Yes, I acknowledge that the idea that ideas are always situated is itself a situated idea. Nothing I can do about that.)

Nevertheless, I should add that almost everything I’ve written in the past fifteen years is about how our new medium (if that’s what the Net is (and it’s not)) affects our ideas, so I obviously find some merit in looking at the particulars of how media shape ideas, even if I don’t have a general theory of how that chaotic dance works.

I can see why Quentin may believe that I have “abandoned the idea of Truth,” even though I don’t think I have. I talked at the I School about the Net being phenomenologically more true to avoid giving the impression that I think our media evolve toward truth the way we used to think (i.e., before Thomas Kuhn) science does. Something more complex is happening than one approximation of truth replacing a prior, less accurate approximation.

And I have to say that this entire topic makes me antsy. I have an awkward, uncertain, unresolved attitude about the nature of truth. The same as many of us. I claim no special insight into this at all. Nevertheless, here goes…

My sense that truth and knowledge are situated in one’s culture, history, language, and personal history comes from Heidegger. I also take from Heidegger my sense of “phenomenological truth,” which takes truth as being the ways the world shows itself to us, rather than as an inner mental representation that accords with an outer reality. This is core to Heidegger and phenomenology. There are many ways in which we enable the world to show itself to us, including science, religion and art. Those ways have their own forms and rules (as per Wittgenstein). They are genuinely ways of knowing the world, not mere “games.” Nor are the truths these engagements reveal “pictures of reality” (to use Quentin’s phrase). They are — and I’m sorry to get all Heideggerian on you again — ways of being in the world. We live them. They are engaged, embodied truths, not mere representations or cognitions.

So, yes, I am among the many who have abandoned the idea of Truth as an inner representation of an outer reality from which we are so essentially detached that some of the greatest philosophers in the West have had to come up with psychotic theories to explain how we can know our world at all. (Leibniz, Spinoza, and Descartes, you know who I’m talking about.) But I have not abandoned the idea that the world is one way and not another. I have not abandoned the idea that beliefs can seem right but be wrong. I have not abandoned the importance of facts and evidence within many crucial discourses. Nor have I abandoned the idea that it is supremely important to learn how the world is. In fact, I may have said in the talk, and do say (I think) in the book that networked knowledge is becoming more like how scientists have understood knowledge for generations now.

So, for me the choice isn’t between eternal verities that are independent of all lived historical situations and the chaos of no truth at all. We can’t get outside of our situation, but that’s ok because truth and knowledge are only possible within a situation. If the Net’s properties are closer to the truth of our human condition than, say, broadcast’s properties were, that truth of our human condition is itself situated in a particular historical-cultural moment. That does not lift the obligation on us poor human beings to try to understand, cherish, and engage with our world as truthfully as we possibly can.

But the main thing is, no, I don’t think the Net is ruining everything, and I am (overall) thrilled to see how the Net is transforming knowledge.


December 2, 2011

The Net is a place

The latest Pew Internet study confirms what most of us suspected was the case: “Americans are increasingly going online just for fun and to pass the time, particularly young adults under 30. On any given day, 53% of all the young adults ages 18-29 go online for no particular reason except to have fun or to pass the time.”

And this also confirms an idea many of us have been proposing for a decade and a half or so: The Internet is a place. It is a weird place in which proximity is determined by interest, rather than a space in which interests are kept apart by distances. It is a place in which nearness defeats distance. It is a place, not just a space, because spaces are empty but places are saturated with meaning: Place is space that has been made to matter to us. The Internet is a place.

And now we have the polling numbers to prove it :)


October 25, 2011

What “I know” means

If meaning is use, as per Wittgenstein and John Austin, then what does “know” mean?

I’m going to guess that the most common usage of the term is in the phrase “I know,” as in:

1. “You have to be careful what you take Lipitor with.” “I know.”
2. “The science articles have gotten really hard to read in Wikipedia.” “I know.”
3. “This cookbook thinks you’ll just happen to have strudel dough just hanging around.” “I know.”
4. “The books are arranged by the author’s last name within any one topic area.” “I know.”
5. “They’re closing the Red Line on weekends.” “I know!”

In each of these, the speaker is not claiming to have an inner state of belief that is justifiable and true. The speaker is using “I know” to shape the conversation and the social relationship with the initial speaker.

1., 4. “You can stop explaining now.”
2., 3. “I agree with you. We’re on the same side.”
5. “I agree that it’s outrageous!”

And I won’t even mention words like “surely” and “certainly” that are almost always used to indicate that you’re going to present no evidence for the claim that follows.

