“Of course what I’ve just said may not be right,” concluded the thirteen-year-old girl, “but what’s important is to engage in the interpretation and to participate in the discussion that has been going on for thousands of years.”
So said the bas mitzvah girl at an orthodox Jewish synagogue this afternoon. She is the daughter of friends, so I went. And because it is an orthodox synagogue, I didn’t violate the Sabbath by taking notes. Thus that quote isn’t even close enough to count as a paraphrase. But that is the thought that she ended her D’var Torah with. (I’m sure as heck violating the Sabbath now by writing this, but I am not an observant Jew.)
The D’var Torah is a talk on that week’s portion of the Torah. Presenting one before the congregation is a mark of one’s coming of age. The bas mitzvah girl (or bar mitzvah boy) labors for months on the talk, which at least in the orthodox world is a work of scholarship that shows command of the Hebrew sources, that interprets the words of the Torah to find some relevant meaning and frequently some surprising insight, and that follows the carefully worked out rules that guide this interpretation as a fundamental practice of the religion.
While the Torah’s words themselves are taken as sacred and as given by G-d, they are understood to have been given to us human beings to be interpreted and applied. Further, that interpretation requires one to consult the most revered teachers (rabbis) in the tradition. An interpretation that does not present the interpretations of revered rabbis who disagree about the topic is likely to be flawed. An interpretation that writes off prior interpretations with which one disagrees is not listening carefully enough and is likely to be flawed. An interpretation that declares that it is unequivocally the correct interpretation is wrong in that certainty and is likely to be flawed in its stance.
It seems to me — and of course I’m biased — that these principles could be very helpful regardless of one’s religion or discipline. Jewish interpretation takes the Word as the given. Secular fields take facts as the given. The given is not given unless it is taken, and taking is an act of interpretation. Always.
If that taking is assumed to be subjective and without boundaries, then we end up living in fantasy worlds, shouting at those bastards who believe different fantasies. But if there are established principles that guide the interpretations, then we can talk and learn from one another.
If we interpret without consulting prior interpretations, then we’re missing the chance to reflect on the history that has shaped our ideas. This is not just arrogance but stupidity.
If we fail to consult interpretations that disagree with one another, we not only will likely miss the truth, but we will emerge from the darkness certain that we are right.
If we consult prior interpretations that disagree but insist that we must declare one right and the other wrong, we are being so arrogant that we think we can stand in unequivocal judgment of the greatest minds in our history.
If we come out of the interpretation certain that we are right, then we are far more foolish than the thirteen-year-old I heard speak this morning.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Edmond begins with a hypothetical in which you can swerve a car to kill one person instead of staying on its course and killing five. The audience chooses to swerve, and Edmond points out that we’re utilitarians. Second hypothetical: swerve into a barrier that will kill you but save the pedestrians. Most of us say we’d like the car to swerve. Edmond points out that this is a variation of the trolley problem, except now it’s a machine that’s making the decision for us.
Autonomous cars are predicted to reduce fatalities from accidents by 90%. He says his advisor’s research found that most people think a car should swerve and sacrifice the passenger, but they don’t want to buy such a car. They want everyone else to.
He connects this to the Tragedy of the Commons in which if everyone acts to maximize their good, the commons fails. In such cases, governments sometimes issue regulations. Research shows that people don’t want the government to regulate the behavior of autonomous cars, although the US Dept of Transportation is requiring manufacturers to address this question.
Edmond’s group has created the Moral Machine, a website that creates moral dilemmas for autonomous cars. There have been about two million users and fourteen million responses.
Some national trends are emerging. E.g., Eastern countries tend to prefer to save passengers more than Western countries do. Now the MIT group is looking for correlations with other factors, e.g., religiousness, economics, etc. Also, what are the factors most crucial in making decisions?
They are also looking at the effect of automation levels on the assignment of blame. Toyota’s “Guardian Angel” mode results in humans being judged less harshly: that mode has a human driver but lets the car override human decisions.
In response to a question, Edmond says that Mercedes has said that its cars will always save the passenger. He raises the possibility of the owner of such a car being held responsible for plowing into a bus full of children.
Q: The solutions in the Moral Machine seem contrived. The cars should just drive slower.
A: Yes, the point is to stimulate discussion. E.g., it doesn’t raise the possibility of swerving to avoid hitting someone who is in some way considered to be more worthy of life. [I’m rephrasing his response badly. My fault!]
Q: Have you analyzed chains of events? Does the responsibility decay the further you are from the event?
In 1974, the prestigious scholarly journal TV Guide published my original research that suggested that the inspector in Dostoyevsky’s Crime and Punishment was modeled on Socrates. I’m still pretty sure that’s right, and an actual scholarly article came out a few years later making the same case, by people who actually read Russian ‘n’ stuff.
Around the time that I came up with this hypothesis, the creators of the show Columbo had acknowledged that their main character was also modeled on Socrates. I put one and one together and …
Click on the image to go to a scan of that 1974 article.
The essays take a fruitful approach. In each of the chapters, someone in the field recounts how s/he first encountered a figure who became important to her/him and why that person mattered. That entails explaining the figure’s ideas and place in the history of media studies — although almost none of the figures would have characterized their work as being within that relatively newly-minted field.
I write about how Heidegger’s ideas about language pulled me out of an adolescent “identity crisis” [draft]. Lance Strate explains his struggle to understand McLuhan (I feel his pain!) and how the struggle paid off for him. Cynthia Lewis connects her interest in Mikhail Bakhtin to her precocious recognition that “the presence of other interpreters always already exists” in the words one hears and uses. Michael Robbgrieco explains how Foucault became a crucial thinker for him about media and education, even though Foucault doesn’t talk about the former and views the latter primarily as a system of oppression, which was far from Michael’s experience as a teacher. Henry Jenkins talks about how Raymond Williams’ work spoke to him as a son of a construction company owner in Georgia, and how that led Jenkins to John Fiske who had been tutored by Williams.
These are just a few of the seventeen essays.
The personal approach enables the authors to walk us through their intellectual grandparents’ ideas the way they themselves first encountered them — and the paths these authors took clearly worked for them. It simultaneously makes clear why those grandparents, with their often quite difficult ideas, mattered so personally to the authors. Overall it works splendidly. All credit to Renee.
Errata: For the imaginary record, I want to note that an error was introduced into my chapter on Heidegger. Somehow John William Miller’s “mid world” mutated into “mind world” and I did not catch it in the copy-edit phase. Also “a preacher of narcissism” became “a preacher or narcissist.” I should have caught these attempts to make my text better. Ack.
I was supposed to give an opening talk at the 9th annual Ethics & Publishing conference put on by George Washington University. Unfortunately, a family emergency kept me from going, so I sent a very homemade video of the presentation that I recorded at my desk with my monitor raised to head height.
The theme of my talk was a change in how we make the place better — “the place” being where we live — in the networked age. It’s part of what I’ve been thinking about as I prepare to write a book about the change in our paradigm of the future. So, these are thoughts-in-progress. And I know I could have stuck the landing better. In any case, here it is.
In 1962, Claude Levi-Strauss brought the concept of bricolage into the anthropological and philosophical lexicons. It has to do with thinking with one’s hands, putting together new things by repurposing old things. It has since been applied to the Internet (including, apparently, by me, thanks to a tip from Rageboy). The term “bricolage” uncovers something important about the Net, but it also covers up something fundamental about the Net that has been growing even more important.
In The Savage Mind (relevant excerpt), CLS argued against the prevailing view that “primitive” peoples were unable to form abstract concepts. After showing that they often have extensive sets of concepts for flora and fauna, he maintains that these concepts go beyond what they pragmatically need to know:
…animals and plants are not known as a result of their usefulness; they are deemed to be useful or interesting because they are first of all known.
It may be objected that science of this kind can scarcely be of much practical effect. The answer to this is that its main purpose is not a practical one. It meets intellectual requirements rather than or instead of satisfying needs.
It meets, in short, a “demand for order.”
CLS wants us to see the mythopoeic world as being as rich, complex, and detailed as the modern scientific world, while still drawing the relevant distinctions. He uses bricolage as a bridge for our understanding. A bricoleur scavenges the environment for items that can be reused, getting their heft, trying them out, fitting them together and then giving them a twist. The mythopoeic mind engages in this bricolage rather than in the scientific or engineering enterprise of letting a desired project assemble the “raw materials.” A bricoleur has what s/he has and shapes projects around that. And what the bricoleur has generally has been fashioned for some other purpose.
Bricolage is a very useful concept for understanding the Internet’s mashup culture, its culture of re-use. It expresses the way in which one thing inspires another, and the power of re-contextualization. It evokes the sense of invention and play that is dominant on so much of the Net. While the Engineer is King (and, all too rarely, Queen) of this age, the bricoleurs have kept the Net weird, and bless them for it.
But there are at least two ways in which this metaphor is inapt.
First, traditional bricoleurs don’t have search engines that let them in a single glance look across the universe for what they need. Search engines let materials assemble around projects, rather than projects be shaped by the available materials. (Yes, this distinction is too strong. Yes, it’s more complicated than that. Still, there’s some truth to it.)
Second, we have been moving with some consistency toward a Net that at its topmost layers replicates the interoperability of its lower layers. Those low levels specify the rules — protocols — by which networks can join together to move data packets to their destinations. Those packets are designed so they can be correctly interpreted as data by any recipient applications. As you move up the stack, you start to lose this interoperability: Microsoft Word can’t make sense of the data output by Pages, and a graphics program may not be able to make sense of the layer information output by Photoshop.
But, over time, we’re getting better at this:
Applications add import and export services as the market requires. More consequentially, more and richer standards for interoperability continue to emerge, as they have from the very beginning: FTP, HTML, XML, Dublin Core, Schema.org, the many Semantic Web vocabularies, ontologies, and schema, etc.
More important, we are now taking steps to make sure that what we create is available for re-use in ways we have not imagined. We do this by working within standards and protocols. We do it by putting our work into the sphere of reusable items, whether that’s by applying the Creative Commons license, putting our work into a public archive, or even just paying attention to what will make our work more findable.
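To make the idea of working within standards concrete, here is a minimal sketch of publishing Schema.org metadata as JSON-LD — one of the standards mentioned above — so that tools we haven’t imagined can find and reuse a work. The particular work, author, and properties are hypothetical examples, not anything from an actual site:

```python
import json

# A minimal Schema.org description of a creative work, serialized as JSON-LD.
# The work, author, and license here are made-up illustrations.
work = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "name": "An Essay on Bricolage",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "keywords": ["bricolage", "interoperability", "reuse"],
}

# Any consumer that understands Schema.org can interpret this record
# without knowing anything about the application that produced it.
jsonld = json.dumps(work, indent=2)
print(jsonld)
```

The point of the shared vocabulary is exactly the one made above: the producer doesn’t have to anticipate the consumer.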
This is very different from the bricoleur’s world in which objects are designed for one use, and it takes the ingenuity of the bricoleur to find a new use for them.
This movement continues the initial work of the Internet. From the beginning the Net has been predicated on providing an environment with the fewest possible assumptions about how it will be used. The Net was designed to move anyone’s information no matter what it’s about, what it’s for, where it’s going, or who owns it. The higher levels of the stack are increasingly realizing that vision. The Net is thus more than ever becoming a universe of objects explicitly designed for reuse in unexpected ways. (An important corrective to this sunny point of view: Christian Sandvig’s brilliant description of how the Net has incrementally become designed for delivering video above all else.)
Insofar as we are explicitly creating works designed for unexpected reuse, the bricolage metaphor is flawed, as all metaphors are. It usefully highlights the “found” nature of so much of Internet culture. It puts into the shadows, however, the truly transformative movement we are now living through in which we are explicitly designing objects for uses that we cannot anticipate.
Mills Baker defends personalization on the right grounds. In a brilliant and brilliantly written post, he maintains that the personalization provided by sites does at scale what we do in the real world to enable conversations: through multiple and often subtle signals, we let an interlocutor know where our interests and beliefs are similar enough that we are able to safely express our differences.
Digression: This is at the heart of our cultural fear of echo chambers, in my opinion. Conversation consists of iteration on small differences based on an iceberg of agreement. Every conversation inadvertently reinforces the beliefs that enable it to go forward. Likewise, understanding is contextual, assimilating the novel to the familiar, thus reinforcing that context by making it richer and more coherent. But our tradition has taught us that Reason requires us to be open to all ideas, ready to undo the entire structure of our beliefs. Reason, if applied purely, would thus make conversation, understanding, and knowledge impossible. In fearing echo chambers, we are running from the fact that understanding and conversation share the basic elements of echo chambers. I’ll return to this point in a later post sometime…
I love everything about Mills’ post except his under-valuing of concerns about the power personalization has over us on-line. Yes, personalization is a requirement in a scaled environment. Yes, the right comparison is between our new info flows and our old info trickles. But…
…Mills does not fully confront the main complaint: our interests and the interests of the commercial entities that are doing the personalizing do not fully coincide. Facebook has an economic motivation to get us to click more and to exit Facebook sessions eager to return for more. Facebook thus has an economic interest in showing us personalized clickbait, and in filtering our feeds toward happiness rather than hey-my-cat-died-yesterday posts.
In one sense, this is entirely Mills’ point. He wants designers to understand the positive role personalization has always played, so they can reinstate that role in software that works for us. He thinks that getting this right is the responsibility of the software for “Most users do not want the ‘control’ of RSS and Twitter lists and blocking, muting, and unfollowing their fellows.” Thus the software needs to learn from the clues left inadvertently by users. (I’d argue that there’s also room for better designed control systems. I bet Mills agrees, because how could anyone argue against better designed anything?)
But in my view he too casually dismisses the responsibility and culpability of some of the most important sites when he writes:
The idea that personalization is about corporate or political control is an emotionally satisfying but inaccurate one.
If we take “personalization” in the insightful and useful way he has defined it, then sure. But when people rail against personalization they are thinking about the algorithmic function performed by commercial entities. And those entities have a massive incentive—exercised by companies like Facebook—to personalize the flow of information toward users as consumers rather than as persons.
Thanks to Dave Birk for pointing me to Mills’ post.
Here’s something I took from Heidegger that may not be in Heidegger:
The basis of morality is the recognition that the world matters to each person, but matters differently.
After that, I don’t know what to do except to be highly suspicious of anyone who cites moral precepts.
It turns out that I don’t find morality to be a very useful category since the way the world matters to us is so deeply contextual and individual: whether you should steal the loaf of bread has less to do with the general principle that it’s wrong to steal, and more to do with how hungry your family is, how much money you have, your opportunities to earn more money, the moral and legal codes of your culture, how kind the baker has been to you, what you know of the baker’s own circumstances, etc.
“Do unto others…,” Kant’s Categorical Imperative, the traditional Jewish formulation of “Don’t do unto others what you would not want done to you,” all are heuristics for remembering that the world matters to others just as much as it matters to you, but it matters differently. Trying to apply those heuristics without recognizing that the world can matter differently can lead to well-intentioned mistakes in which you substitute how your world matters to you for how theirs matters to them: you don’t believe in accepting blood transfusions so you refuse to give one to someone who believes otherwise.
This gets messy fast: You believe in the efficacy of blood transfusions, so you give one to someone who for religious reasons has stipulated that she does not want one. You are not treating her as an autonomous agent. Are you wrong? Once she’s under anesthesia should you let her die because she does not want a transfusion? I have my own inclination, but I have no confidence in it: Even the principle of always treating people as autonomous is hard to apply.
It’s easy to multiply examples, and very easy to find cases where I condemn entire cultures for how their world matters to them. For example, I’m really pretty sure that girls ought to be educated and women ought not to be subservient to men. I’d argue for that. I’d vote for that. I’d fight for that. But not because of morality. “Morality” just doesn’t seem like a helpful concept for deciding what one ought to do.
It can be useful as a name for the topic of what that “ought” means. But those discussions can obscure the particularities of each life that need to be as clear as possible when we talk about what we ought to do.
None of this is new or original with me. Maybe I’m just an old-fashioned Existentialist — more Kierkegaardian than Sartrean — but I feel like I could carry on the rest of my moral life without ever thinking about morality.
(No, I am not sure of any of the above.)
 That the world matters to us is certainly Heidegger. That it matters differently to us is more ambiguous. It’s captured in his notion of the existentiell, but his attempt at what seems to be a universal description of Dasein suggests that there may be some fundamental ways in which it matters in the same ways to us all. But it’s been a long time since I read Being and Time. Plus, he was a Nazi, so maybe he’s not the best person to consult about the nature of morality.
Suppose a laptop were found at the apartment of one of the perpetrators of last year’s Paris attacks. It’s searched by the authorities pursuant to a warrant, and they find a file on the laptop that’s a set of instructions for carrying out the attacks.
Thus begins Jonathan Zittrain’s consideration of an all-too-plausible hypothetical. Should Google respond to a request to search everyone’s gmail inboxes to find everyone to whom the to-do list was sent? As JZ says, you can’t get a warrant to search an entire city, much less hundreds of millions of inboxes.
But, while this is a search that sweeps a good portion of the globe, it doesn’t “listen in” on any mail except for that which contains a precise string of words in a precise order. What happens next would depend upon the discretion of the investigators.
JZ points out that Google already does something akin to this when it searches for inboxes that contain known child pornography images.
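The mechanics of that kind of matching can be sketched as hash comparison: the scanner learns nothing about a message except whether it contains an exact copy of the known item. This is a deliberate simplification — it is not Google’s actual system, and real image-matching deployments use perceptual hashes (e.g., PhotoDNA) that survive re-encoding, whereas this sketch uses an exact cryptographic hash. The function names and sample messages are invented for illustration:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest; identical content yields an identical digest."""
    return hashlib.sha256(data).hexdigest()

# The known item investigators are searching for (a made-up stand-in).
known = fingerprint(b"step 1: ... step 2: ...")

def scan(messages):
    """Flag only messages that exactly match the known item.

    The scanner never inspects message content directly; it only compares
    digests, so non-matching mail reveals nothing but the non-match."""
    return [i for i, body in enumerate(messages) if fingerprint(body) == known]

inboxes = [b"grocery list", b"step 1: ... step 2: ...", b"hi mom"]
print(scan(inboxes))  # prints [1]: only the exact copy is flagged
```

That narrowness is what makes the hypothetical interesting: the sweep is global, yet the “listening” is confined to a single precise pattern.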
JZ’s treatment is even-handed and clear. (He’s a renowned law professor. He knows how to do these things.) He discusses the reasons pro and con. He comes to his own personal conclusion. It’s a model of clarity of exposition and reasoning.
I like this article a lot on its own, but I find it especially fascinating because of its implications for the confused feeling of violation many of us have when it’s a computer doing the looking. If a computer scans your emails looking for a terrorist to-do list, has it violated your sense of privacy? If a robot looks at you naked, should you be embarrassed? Our sense of violation is separable from the question of our legal and moral right to privacy, but the two often get mixed up in such discussions. Not in JZ’s, but often enough.