The essays take a fruitful approach. In each of the chapters, someone in the field recounts how s/he first encountered a figure who became important to her/him and why that person mattered. That entails explaining the figure’s ideas and place in the history of media studies — although almost none of the figures would have characterized their work as being within that relatively newly minted field.
I write about how Heidegger’s ideas about language pulled me out of an adolescent “identity crisis” [draft]. Lance Strate explains his struggle to understand McLuhan (I feel his pain!) and how the struggle paid off for him. Cynthia Lewis connects her interest in Mikhail Bakhtin to her precocious recognition that “the presence of other interpreters always already exists” in the words one hears and uses. Michael RobbGrieco explains how Foucault became a crucial thinker for him about media and education, even though Foucault doesn’t talk about the former and views the latter primarily as a system of oppression, which was far from Michael’s experience as a teacher. Henry Jenkins talks about how Raymond Williams’ work spoke to him as the son of a construction company owner in Georgia, and how that led Jenkins to John Fiske, who had been tutored by Williams.
These are just a few of the seventeen essays.
The personal approach enables the authors to walk us through their intellectual grandparents’ ideas the way they themselves first encountered them — and the paths these authors took clearly worked for them. It simultaneously makes clear why those grandparents, with their often quite difficult ideas, mattered so personally to the authors. Overall it works splendidly. All credit to Renee.
Errata: For the imaginary record, I want to note that an error was introduced into my chapter on Heidegger. Somehow John William Miller’s “mid world” mutated into “mind world” and I did not catch it in the copy-edit phase. Also, “a preacher of narcissism” became “a preacher or narcissist.” I should have caught these attempts to make my text better. Ack.
I was supposed to give an opening talk at the 9th annual Ethics & Publishing conference put on by George Washington University. Unfortunately, a family emergency kept me from going, so I sent a very homemade video of the presentation that I recorded at my desk with my monitor raised to head height.
The theme of my talk was a change in how we make the place better — “the place” being where we live — in the networked age. It’s part of what I’ve been thinking about as I prepare to write a book about the change in our paradigm of the future. So, these are thoughts-in-progress. And I know I could have stuck the landing better. In any case, here it is.
In 1962, Claude Lévi-Strauss brought the concept of bricolage into the anthropological and philosophical lexicons. It has to do with thinking with one’s hands: putting together new things by repurposing old ones. It has since been applied to the Internet (including, apparently, by me, thanks to a tip from Rageboy). The term “bricolage” uncovers something important about the Net, but it also covers up something fundamental that has been growing even more important.
In The Savage Mind (relevant excerpt), CLS argued against the prevailing view that “primitive” peoples were unable to form abstract concepts. After showing that they often have extensive sets of concepts for flora and fauna, he maintains that these concepts go beyond what they pragmatically need to know:
…animals and plants are not known as a result of their usefulness; they are deemed to be useful or interesting because they are first of all known.
It may be objected that science of this kind can scarcely be of much practical effect. The answer to this is that its main purpose is not a practical one. It meets intellectual requirements rather than or instead of satisfying needs.
It meets, in short, a “demand for order.”
CLS wants us to see the mythopoeic world as being as rich, complex, and detailed as the modern scientific world, while still drawing the relevant distinctions. He uses bricolage as a bridge for our understanding. A bricoleur scavenges the environment for items that can be reused, getting their heft, trying them out, fitting them together and then giving them a twist. The mythopoeic mind engages in this bricolage rather than in the scientific or engineering enterprise of letting a desired project assemble the “raw materials.” A bricoleur has what s/he has and shapes projects around that. And what the bricoleur has generally has been fashioned for some other purpose.
Bricolage is a very useful concept for understanding the Internet’s mashup culture, its culture of re-use. It expresses the way in which one thing inspires another, and the power of re-contextualization. It evokes the sense of invention and play that is dominant on so much of the Net. While the Engineer is King (and, all too rarely, Queen) of this age, the bricoleurs have kept the Net weird, and bless them for it.
But there are at least two ways in which this metaphor is inapt.
First, traditional bricoleurs don’t have search engines that let them look across the universe in a single glance for what they need. Search engines let materials assemble around projects, rather than projects being shaped by the available materials. (Yes, this distinction is too strong. Yes, it’s more complicated than that. Still, there’s some truth to it.)
Second, we have been moving with some consistency toward a Net that at its topmost layers replicates the interoperability of its lower layers. Those lower layers specify the rules — protocols — by which networks can join together to move data packets to their destinations. Those packets are designed so they can be correctly interpreted as data by any recipient application. As you move up the stack, you start to lose this interoperability: Microsoft Word can’t make sense of the data output by Pages, and a graphics program may not be able to make sense of the layer information output by Photoshop.
But, over time, we’re getting better at this:
Applications add import and export services as the market requires. More consequentially, more and richer standards for interoperability continue to emerge, as they have from the very beginning: FTP, HTML, XML, Dublin Core, Schema.org, the many Semantic Web vocabularies, ontologies, and schemas, etc.
More important, we are now taking steps to make sure that what we create is available for re-use in ways we have not imagined. We do this by working within standards and protocols. We do it by putting our work into the sphere of reusable items, whether that’s by applying a Creative Commons license, putting our work into a public archive, or even just paying attention to what will make our work more findable.
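To make that concrete, here is a minimal sketch of what “working within standards and protocols” can look like in practice. It is my illustration, not anything from the talk: the Schema.org vocabulary and the Creative Commons license URL are real standards, while the work being described and its author are hypothetical placeholders.

```python
# A minimal sketch: describing a work in the standard Schema.org vocabulary,
# with a Creative Commons license, serialized as JSON-LD. Any consumer that
# speaks these standards (crawlers, archives, applications not yet written)
# can find and reuse the work without our having anticipated how.
# Hypothetical example values throughout; the vocabulary and license URLs are real.
import json

work = {
    "@context": "https://schema.org",          # the standard vocabulary
    "@type": "CreativeWork",
    "name": "Notes on Bricolage and the Net",  # hypothetical title
    "author": {"@type": "Person", "name": "Jane Doe"},  # hypothetical author
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "keywords": ["bricolage", "reuse", "interoperability"],
}

# Emit a JSON-LD block that can be dropped into a web page, where any
# standards-aware consumer can interpret it.
print('<script type="application/ld+json">')
print(json.dumps(work, indent=2))
print("</script>")
```

Nothing in that description commits the work to any particular reuse; it simply makes the work legible to whatever comes along.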
This is very different from the bricoleur’s world, in which objects are designed for one use and it takes the ingenuity of the bricoleur to find a new use for them.
This movement continues the initial work of the Internet. From the beginning the Net has been predicated on providing an environment with the fewest possible assumptions about how it will be used. The Net was designed to move anyone’s information no matter what it’s about, what it’s for, where it’s going, or who owns it. The higher levels of the stack are increasingly realizing that vision. The Net is thus more than ever becoming a universe of objects explicitly designed for reuse in unexpected ways. (An important corrective to this sunny point of view: Christian Sandvig’s brilliant description of how the Net has incrementally become designed for delivering video above all else.)
Insofar as we are explicitly creating works designed for unexpected reuse, the bricolage metaphor is flawed, as all metaphors are. It usefully highlights the “found” nature of so much of Internet culture. It puts into the shadows, however, the truly transformative movement we are now living through in which we are explicitly designing objects for uses that we cannot anticipate.
Mills Baker defends personalization on the right grounds. In a brilliant and brilliantly written post, he maintains that the personalization provided by sites does at scale what we do in the real world to enable conversations: through multiple and often subtle signals, we let an interlocutor know where our interests and beliefs are similar enough that we are able to safely express our differences.
Digression: This is at the heart of our cultural fear of echo chambers, in my opinion. Conversation consists of iteration on small differences based on an iceberg of agreement. Every conversation inadvertently reinforces the beliefs that enable it to go forward. Likewise, understanding is contextual, assimilating the novel to the familiar, thus reinforcing that context by making it richer and more coherent. But our tradition has taught us that Reason requires us to be open to all ideas, ready to undo the entire structure of our beliefs. Reason, if applied purely, would thus make conversation, understanding, and knowledge impossible. In fearing echo chambers, we are running from the fact that understanding and conversation share the basic elements of echo chambers. I’ll return to this point in a later post sometime…
I love everything about Mills’ post except his under-valuing of concerns about the power personalization has over us on-line. Yes, personalization is a requirement in a scaled environment. Yes, the right comparison is between our new info flows and our old info trickles. But…
…Mills does not fully confront the main complaint: our interests and the interests of the commercial entities doing the personalizing do not fully coincide. Facebook has an economic motivation to get us to click more and to exit Facebook sessions eager to return for more. Facebook thus has an economic interest in showing us personalized clickbait, and in filtering our feeds toward happiness rather than hey-my-cat-died-yesterday posts.
In one sense, this is entirely Mills’ point. He wants designers to understand the positive role personalization has always played, so they can reinstate that role in software that works for us. He thinks that getting this right is the responsibility of the software for “Most users do not want the ‘control’ of RSS and Twitter lists and blocking, muting, and unfollowing their fellows.” Thus the software needs to learn from the clues left inadvertently by users. (I’d argue that there’s also room for better designed control systems. I bet Mills agrees, because how could anyone argue against better designed anything?)
But in my view he too casually dismisses the responsibility and culpability of some of the most important sites when he writes:
The idea that personalization is about corporate or political control is an emotionally satisfying but inaccurate one.
If we take “personalization” in the insightful and useful way he has defined it, then sure. But when people rail against personalization they are thinking about the algorithmic function performed by commercial entities. And those entities have a massive incentive—exercised by companies like Facebook—to personalize the flow of information toward users as consumers rather than as persons.
Thanks to Dave Birk for pointing me to Mills’ post.
Here’s something I took from Heidegger that may not be in Heidegger:
The basis of morality is the recognition that the world matters to each person, but matters differently.
After that, I don’t know what to do except to be highly suspicious of anyone who cites moral precepts.
It turns out that I don’t find morality to be a very useful category since the way the world matters to us is so deeply contextual and individual: whether you should steal the loaf of bread has less to do with the general principle that it’s wrong to steal, and more to do with how hungry your family is, how much money you have, your opportunities to earn more money, the moral and legal codes of your culture, how kind the baker has been to you, what you know of the baker’s own circumstances, etc.
“Do unto others…,” Kant’s Categorical Imperative, the traditional Jewish formulation of “Don’t do unto others what you would not want done to you,” all are heuristics for remembering that the world matters to others just as much as it matters to you, but it matters differently. Trying to apply those heuristics without recognizing that the world can matter differently can lead to well-intentioned mistakes in which you substitute how your world matters to you for how theirs matters to them: you don’t believe in accepting blood transfusions so you refuse to give one to someone who believes otherwise.
This gets messy fast: You believe in the efficacy of blood transfusions, so you give one to someone who for religious reasons has stipulated that she does not want one. You are not treating her as an autonomous agent. Are you wrong? Once she’s under anesthesia should you let her die because she does not want a transfusion? I have my own inclination, but I have no confidence in it: Even the principle of always treating people as autonomous is hard to apply.
It’s easy to multiply examples, and very easy to find cases where I condemn entire cultures for how their world matters to them. For example, I’m really pretty sure that girls ought to be educated and women ought not to be subservient to men. I’d argue for that. I’d vote for that. I’d fight for that. But not because of morality. “Morality” just doesn’t seem like a helpful concept for deciding what one ought to do.
It can be useful as a name for the topic of what that “ought” means. But those discussions can obscure the particularities of each life that need to be as clear as possible when we talk about what we ought to do.
None of this is new or original with me. Maybe I’m just an old-fashioned Existentialist — more Kierkegaardian than Sartrean — but I feel like I could carry on the rest of my moral life without ever thinking about morality.
(No, I am not sure of any of the above.)
That the world matters to us is certainly Heidegger. That it matters differently to us is more ambiguous. It’s captured in his notion of the existentiell, but his attempt at what seems to be a universal description of Dasein suggests that there may be some fundamental ways in which the world matters the same to us all. But it’s been a long time since I read Being and Time. Plus, he was a Nazi, so maybe he’s not the best person to consult about the nature of morality.
Suppose a laptop were found at the apartment of one of the perpetrators of last year’s Paris attacks. It’s searched by the authorities pursuant to a warrant, and they find a file on the laptop that’s a set of instructions for carrying out the attacks.
Thus begins Jonathan Zittrain’s consideration of an all-too-plausible hypothetical. Should Google respond to a request to search everyone’s Gmail inboxes to find everyone to whom the to-do list was sent? As JZ says, you can’t get a warrant to search an entire city, much less hundreds of millions of inboxes.
But, while this is a search that sweeps a good portion of the globe, it doesn’t “listen in” on any mail except for that which contains a precise string of words in a precise order. What happens next would depend upon the discretion of the investigators.
JZ points out that Google already does something akin to this when it searches for inboxes that contain known child pornography images.
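The mechanics are simple enough to sketch. What follows is a toy illustration of the kind of scan at issue; it is my own sketch, not Google’s actual system (real image matching reportedly uses robust perceptual hashes rather than the simple digests used here), and every name and value below is hypothetical.

```python
# A toy sketch of an exact-match scan: a message is flagged only if its body
# contains a known target string verbatim, or if an attachment's digest
# matches a known item. Everything else passes through "unread."
# (Illustration only, not Google's implementation.)
import hashlib

TARGET_TEXT = "a precise string of words in a precise order"  # hypothetical
KNOWN_DIGESTS = {"<sha256-of-known-image>"}  # placeholder values

def flag_message(body: str, attachments: list[bytes]) -> bool:
    """Return True only on an exact match against known targets."""
    if TARGET_TEXT in body:
        return True
    return any(
        hashlib.sha256(blob).hexdigest() in KNOWN_DIGESTS
        for blob in attachments
    )

# e.g., flag_message("...a precise string of words in a precise order...", [])
# returns True; a message without the exact string is never "read" beyond
# the substring test itself.
```

That narrowness is what gives the hypothetical its force: such a scan can sweep hundreds of millions of inboxes while matching nothing but the exact target.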
JZ’s treatment is even-handed and clear. (He’s a renowned law professor. He knows how to do these things.) He discusses the reasons pro and con. He comes to his own personal conclusion. It’s a model of clarity of exposition and reasoning.
I like this article a lot on its own, but I find it especially fascinating because of its implications for the confused feeling of violation many of us have when it’s a computer doing the looking. If a computer scans your emails looking for a terrorist to-do list, has it violated your sense of privacy? If a robot looks at you naked, should you be embarrassed? Our sense of violation is separable from the question of our legal and moral right to privacy, but the two often get mixed up in such discussions. Not in JZ’s, but often enough.
We’re pretty convinced that the future lies ahead of us. But according to Bernard Knox, the ancient Greeks were not. In Backing into the Future he writes:
…the Greek word opiso, which literally means ‘behind’ or ‘back’, refers not to the past but to the future. The early Greek imagination envisaged the past and the present as in front of us–we can see them. The future, invisible, is behind us. Only a few very wise men can see what is behind them. (p. 11)
Thorleif Boman makes a similar point about ancient Hebrew:

…we Indo-Germanic peoples think of time as a line on which we ourselves stand at a point called now; then we have the future lying before us, and the past stretches out behind us. The [ancient] Israelites use the same expressions ‘before’ and ‘after’ but with opposite meanings. qedham means ‘what is before’ (Ps. 139.5), therefore ‘remote antiquity’, past. ‘ahar means ‘back’, ‘behind’, and of the time, ‘after’; aharith means ‘hindermost side’, and then ‘end of an age’, future… (p. 149)
This is bewildering, and not just because Boman’s writing is hard to parse.
He goes on to note that we modern Westerners also sometimes switch the direction of future and past. In particular, when we “appreciate time as the transcendental design of history,” we
think of ourselves as living men who are on a journey from the cradle to the grave and who stand in living association with humanity which is also journeying ceaselessly forward. Then the generations of the past are our progenitors, at least our forebears, who have existed before us because they have gone on before us, and we follow after them. In that case we call the past foretime. According to this mode of thinking, the future generations are our descendants, at least our successors, who therefore come after us. (p. 149. Emphasis in the original.)
Yes, I find this incredibly difficult to wrap my brain around. I think the trick is the ambiguity of “before us.” The future lies before us, but our forebears were also before us.
Boman tries to encapsulate our contradictory ways of thinking about the future as follows: “the future lies before us but comes after us.” The problem in understanding this is that we hear “before us” as “ahead of us”: when it comes to space, “before” means “ahead.”
Boman’s explanation of the ancient Hebrew way of thinking is related to Knox’s explanation of the Greek idiom:
From the psychological viewpoint it is absurd to say that we have the future before us and the past behind us, as though the future were visible to us and the past occluded. Quite the reverse is true. What our forebears have accomplished lies before us as their completed works; the house we see, the meadows and fields, the cultural and political system are congealed expressions of the deeds of our fathers. The same is true of everything they have done, lived, or suffered; it lies before us as completed facts… The present and the future are, on the contrary, still in the process of coming and becoming. (p. 150)
The nature of becoming is different for the Greeks and Hebrews, so the darkness of the future has different meanings. But both result in the future lying behind us.
I’m on a Heidegger mailing list where I get to lurk as serious scholars probe his writings and thoughts, and, not infrequently these days, his politics.
Recently, a member of the list I highly respect suggested that “Heidegger’s phenomenology of ‘Sein-zum-Tode’ [Being-toward-death] amounts to living each day of our lives with a sense of our finitude, our mortality, that unifies and heightens the meaningfulness of each and every moment.” He equates this to Michel de Montaigne saying that “it is my custom to have death not only in my imagination, but continually in my mouth.” This is great wisdom, said the list member.
I don’t want to argue against those who find wisdom in living “with the taste of death” in their mouths. But I also wouldn’t argue for it.
My understanding, such as it is, of Heidegger’s idea of Being-toward-death is that our temporal finitude is constantly present as a horizon: we look before we cross the street because we know we can die — “know” not as an explicit thought but as the landscape within which our experience occurs. We make long-term plans within a horizon of possibility that we number in decades and not centuries.
But that’s not what I take Montaigne to mean. And if that’s what Heidegger’s concept of authenticity entails (as I think it might), then that’s just another problem I have with his idea of authenticity.
Why is keeping explicit the awareness of my impending death preferable, wise, or phenomenologically truer? Because only I can die my death, as Heidegger says? I’m also the only one who can eat my lunch or take my shower. [Frivolity aside: these are both instances of “Only I have my body.”] Because it makes our experience more precious? It doesn’t for me. For example:
We have a four-month-old grandchild, our first. (Yes, yes, thank you for your good wishes :) When I am caring for him–playing with him–my death is always present, but as a horizon. I’m aware that I’m 65 years older than he is, that I am in my waning years and he is just beginning. That is part of the deep joy of a grandchild, and it is definitional: if I thought I were immortal, the experience would be very different; if I didn’t have the concept of one life beginning and another ending, my experience of children would be incomprehensible. So, phenomenologically I think Heidegger is right about our death (finitude) always shaping our experience as an implicit horizon. Our stretch of time only extends so far before it snaps.
But, beyond that implicit horizon, do I need to keep a taste of death in my mouth to make the experience of our grandson more precious? On the contrary, the explicit thought, “Wow, I’m really going to be dead someday” would distract me from my grandson, and keep me from letting the adorable little phenomenon show himself as he is.
That’s a charged example, of course. But here’s another: I’m eating a delicious piece of chocolate cake. I do so within the horizon of my finitude, but that horizon is probably quite implicit. Perhaps it’s a bit more explicit than that, but still horizonal: I’m only eating half the slice for health reasons. But then I have a vivid taste of death alongside the chocolate: “Crap! I’m going to be dead someday.”
Does the cake taste better? I guess maybe for some people. For a lot of us, though, the realization that death is surely a-comin’ would make the cake turn to ash. Who cares about cake when I’m going to be dead sometime, maybe in a minute or a day? We’ve been pulled out of the experience and out of the world by the vivid intrusion of what is undeniably a truth. Why do you think Roquentin can never enjoy a nice slice of cake?
We can complain that such morbidity is inauthentic, but as far as I can tell that’s a value judgment, not philosophy and certainly not phenomenology.
My intention is not to argue against Montaigne on this. If keeping the fact of death explicitly present helps some of us appreciate life more, who am I to say otherwise? Seriously. And if someone goes further and seeks out death-defying experiences because she feels most alive when she is most at risk, who am I to judge? That works for her. Good! (I feel bad for her parents, though.)
But valorizing keeping death explicitly present seems to me to be more personality than philosophy.
I understand that Heidegger’s putting death front and center was a radical and healthy move for philosophy. Western philosophy, after all, has spent so much of its energy pursuing deathless wisdom and eternal Reality as the only truths. But as a reader of Heidegger, I put much of what he writes in Being and Time about death into the same bucket as what he writes about destiny, das Man, authenticity, and German peasant romanticism: It’s (to put it mildly) phenomenologically non-disclosive for me — part of the price of reading an ontologist whose methodology, at least initially, was phenomenology.