May 9, 2016
April 6, 2016
Mills Baker defends personalization on the right grounds. In a brilliant and brilliantly written post, he maintains that the personalization provided by sites does at scale what we do in the real world to enable conversations: through multiple and often subtle signals, we let an interlocutor know where our interests and beliefs are similar enough that we are able to safely express our differences.
Digression: This is at the heart of our cultural fear of echo chambers, in my opinion. Conversation consists of iteration on small differences based on an iceberg of agreement. Every conversation inadvertently reinforces the beliefs that enable it to go forward. Likewise, understanding is contextual, assimilating the novel to the familiar, thus reinforcing that context by making it richer and more coherent. But our tradition has taught us that Reason requires us to be open to all ideas, ready to undo the entire structure of our beliefs. Reason, if applied purely, would thus make conversation, understanding, and knowledge impossible. In fearing echo chambers, we are running from the fact that understanding and conversation share the basic elements of echo chambers. I’ll return to this point in a later post sometime…
I love everything about Mills’ post except his under-valuing of concerns about the power personalization has over us on-line. Yes, personalization is a requirement in a scaled environment. Yes, the right comparison is between our new info flows and our old info trickles. But…
…Mills does not fully confront the main complaint: our interests and the interests of the commercial entities doing the personalizing do not fully coincide. Facebook has an economic motivation to get us to click more and to exit Facebook sessions eager to return for more. Facebook thus has an economic interest in showing us personalized clickbait, and in filtering our feeds toward happiness rather than hey-my-cat-died-yesterday posts.
In one sense, this is entirely Mills’ point. He wants designers to understand the positive role personalization has always played, so they can reinstate that role in software that works for us. He thinks that getting this right is the responsibility of the software for “Most users do not want the ‘control’ of RSS and Twitter lists and blocking, muting, and unfollowing their fellows.” Thus the software needs to learn from the clues left inadvertently by users. (I’d argue that there’s also room for better designed control systems. I bet Mills agrees, because how could anyone argue against better designed anything?)
But in my view he too casually dismisses the responsibility and culpability of some of the most important sites when he writes:
If we take “personalization” in the insightful and useful way he has defined it, then sure. But when people rail against personalization they are thinking about the algorithmic function performed by commercial entities. And those entities have a massive incentive—exercised by companies like Facebook—to personalize the flow of information toward users as consumers rather than as persons.
Thanks to Dave Birk for pointing me to Mills’ post.
February 8, 2016
Here’s something I took from Heidegger that may not be in Heidegger:
The basis of morality is the recognition that the world matters to each person, but matters differently.
After that, I don’t know what to do except to be highly suspicious of anyone who cites moral precepts.
It turns out that I don’t find morality to be a very useful category since the way the world matters to us is so deeply contextual and individual: whether you should steal the loaf of bread has less to do with the general principle that it’s wrong to steal, and more to do with how hungry your family is, how much money you have, your opportunities to earn more money, the moral and legal codes of your culture, how kind the baker has been to you, what you know of the baker’s own circumstances, etc.
“Do unto others…,” Kant’s Categorical Imperative, the traditional Jewish formulation of “Don’t do unto others what you would not want done to you,” all are heuristics for remembering that the world matters to others just as much as it matters to you, but it matters differently. Trying to apply those heuristics without recognizing that the world can matter differently can lead to well-intentioned mistakes in which you substitute how your world matters to you for how theirs matters to them: you don’t believe in accepting blood transfusions so you refuse to give one to someone who believes otherwise.
This gets messy fast: You believe in the efficacy of blood transfusions, so you give one to someone who for religious reasons has stipulated that she does not want one. You are not treating her as an autonomous agent. Are you wrong? Once she’s under anesthesia should you let her die because she does not want a transfusion? I have my own inclination, but I have no confidence in it: Even the principle of always treating people as autonomous is hard to apply.
It’s easy to multiply examples, and very easy to find cases where I condemn entire cultures for how their world matters to them. For example, I’m really pretty sure that girls ought to be educated and women ought not to be subservient to men. I’d argue for that. I’d vote for that. I’d fight for that. But not because of morality. “Morality” just doesn’t seem like a helpful concept for deciding what one ought to do.
It can be useful as a name for the topic of what that “ought” means. But those discussions can obscure the particularities of each life that need to be as clear as possible when we talk about what we ought to do.
None of this is new or original with me. Maybe I’m just an old-fashioned Existentialist — more Kierkegaardian than Sartrean — but I feel like I could carry on the rest of my moral life without ever thinking about morality.
(No, I am not sure of any of the above.)
 That the world matters to us is certainly Heidegger. That it matters differently to us is more ambiguous. It’s captured in his notion of the existentiell, but his attempt at what seems to be a universal description of Dasein suggests that there may be some fundamental ways in which it matters in the same ways to us all. But it’s been a long time since I read Being and Time. Plus, he was a Nazi, so maybe he’s not the best person to consult about the nature of morality.
January 13, 2016
Thus begins Jonathan Zittrain’s consideration of an all-too-plausible hypothetical. Should Google respond to a request to search everyone’s gmail inboxes to find everyone to whom the to-do list was sent? As JZ says, you can’t get a warrant to search an entire city, much less hundreds of millions of inboxes.
But, while this is a search that sweeps a good portion of the globe, it doesn’t “listen in” on any mail except for that which contains a precise string of words in a precise order. What happens next would depend upon the discretion of the investigators.
JZ points out that Google already does something akin to this when it searches for inboxes that contain known child pornography images.
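The two scans JZ describes can be sketched in a few lines. This is purely illustrative — the phrase, function names, and data shapes are my inventions, and any real system obviously works nothing like this at scale — but it shows why such a search feels both sweeping and narrow: every inbox is touched, yet nothing is surfaced except exact matches against the target string or a database of known-image hashes.

```python
import hashlib

# Hypothetical target string; in JZ's scenario it's the text of a terrorist to-do list.
TARGET_PHRASE = "meet at the old warehouse at midnight"

def matching_inboxes(inboxes):
    """Scan every inbox, but return only the account IDs whose mail
    contains the exact target phrase. Non-matching mail is never surfaced."""
    hits = []
    for account_id, messages in inboxes.items():
        if any(TARGET_PHRASE in body for body in messages):
            hits.append(account_id)
    return hits

def contains_known_image(attachments, known_hashes):
    """The known-image analogy: compare each attachment's hash against a
    database of hashes of known illegal images, without 'looking at' anything else."""
    return any(hashlib.sha256(a).hexdigest() in known_hashes for a in attachments)
```

The point of the sketch is that the breadth of the scan (every account) and the narrowness of what it reveals (only exact matches) come apart, which is exactly what makes the hypothetical hard.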
JZ’s treatment is even-handed and clear. (He’s a renowned law professor. He knows how to do these things.) He discusses the reasons pro and con. He comes to his own personal conclusion. It’s a model of clarity of exposition and reasoning.
I like this article a lot on its own, but I find it especially fascinating because of its implications for the confused feeling of violation many of us have when it’s a computer doing the looking. If a computer scans your emails looking for a terrorist to-do list, has it violated your sense of privacy? If a robot looks at you naked, should you be embarrassed? Our sense of violation is separable from the legal and moral questions about our right to privacy, but the two often get mixed up in such discussions. Not in JZ’s, but often enough.
January 2, 2016
We’re pretty convinced that the future lies ahead of us. But according to Bernard Knox, the ancient Greeks were not. In Backing into the Future he writes:
Whitrow doesn’t note that Steiner’s quote (which Steiner puts in quotes) comes from Thorleif Boman’s Hebrew Thought Compared with Greek. Boman writes:
This is bewildering, and not just because Boman’s writing is hard to parse.
He continues on to note that we modern Westerners also sometimes switch the direction of future and past. In particular, when we “appreciate time as the transcendental design of history,” we
Yes, I find this incredibly difficult to wrap my brain around. I think the trick is the ambiguity of “before us.” The future lies before us, but our forebears were also before us.
Boman tries to encapsulate our contradictory ways of thinking about the future as follows: “the future lies before us but comes after us.” The problem in understanding this is that we hear “before us” as “ahead of us.” The word “before” means “ahead” when it comes to space.
The nature of becoming is different for the Greeks and Hebrews, so the darkness of the future has different meanings. But both result in the future lying behind us.
Categories: future, philosophy Tagged with: future • philosophy • platform
Date: January 2nd, 2016 dw
December 21, 2015
Socrates: The Extra Parmesanides
St. Augustine: Deep Dish Confessions
Nietzsche: Thus Spake ‘Za-thruster, the Pizza Delivery Guy
Martin Heidegger: Being and Slices
Bonus for Librarians: Ranganathan’s Five Laws of Pizza Science
Isaac Asimov: Three Rules of Pizzas
Suggested by Andromeda Yelton (@ThatAndromeda). Thanks!
November 28, 2015
I’m on a Heidegger mailing list where I get to lurk as serious scholars probe his writings and thoughts, and, not infrequently these days, his politics.
Recently, a member of the list I highly respect suggested that “Heidegger’s phenomenology of ‘Sein-zum-Tode’ [Being-toward-death] amounts to living each day of our lives with a sense of our finitude, our mortality, that unifies and heightens the meaningfulness of each and every moment.” He equates this to Michel de Montaigne saying that “it is my custom to have death not only in my imagination, but continually in my mouth.” This is great wisdom, said the list member.
I don’t want to argue against those who find wisdom in living “with the taste of death” in their mouths. But I also wouldn’t argue for it.
My understanding, such as it is, of Heidegger’s idea of Being-toward-death is that our temporal finitude is constantly present as a horizon: we look before we cross the street because we know we can die — “know” not as an explicit thought but as the landscape within which our experience occurs. We make long-term plans within a horizon of possibility that we number in decades and not centuries.
But that’s not what I take Montaigne to mean. And if that’s what Heidegger’s concept of authenticity entails (as I think it might), then that’s just another problem I have with his idea of authenticity.
Why is keeping explicit the awareness of my impending death preferable, wise, or phenomenologically true-er? Because only I can die my death, as Heidegger says? I’m also the only one who can eat my lunch or take my shower. [Frivolity aside: these are both instances of “Only I have my body.”] Because it makes our experience more precious? It doesn’t for me. For example:
We have a four-month-old grandchild, our first. (Yes, yes, thank you for your good wishes :) When I am caring for him–playing with him–my death is always present, but as an horizon. I’m aware that I’m 65 years older than he is, that I am in my waning years and he is just beginning. That is part of the deep joy of a grandchild, and it is definitional: if I thought I were immortal, the experience would be very different; if I didn’t have the concept of one life beginning and another ending, my experience of children would be incomprehensible. So, phenomenologically I think Heidegger is right about our death (finitude) always shaping our experience as an implicit horizon. Our stretch of time only extends so far before it snaps.
But, beyond that implicit horizon, do I need to keep a taste of death in my mouth to make the experience of our grandson more precious? On the contrary, the explicit thought, “Wow, I’m really going to be dead someday” would distract me from my grandson, and keep me from letting the adorable little phenomenon show himself as he is.
That’s a charged example, of course. But here’s another: I’m eating a delicious piece of chocolate cake. I do so within the horizon of my finitude, but that horizon is probably quite implicit. Perhaps it’s a bit more explicit than that, but still horizonal: I’m only eating half the slice for health reasons. But then I have a vivid taste of death alongside the chocolate: “Crap! I’m going to be dead someday.”
Does the cake taste better? I guess maybe for some people. For a lot of us, though, the realization that death is surely a-comin’ would make the cake turn to ash. Who cares about cake when I’m going to be dead sometime, maybe in a minute or a day? We’ve been pulled out of the experience and out of the world by the vivid intrusion of what is undeniably a truth. Why do you think Roquentin can never enjoy a nice slice of cake?
We can complain that such morbidity is inauthentic, but as far as I can tell that’s a value judgment, not philosophy and certainly not phenomenology.
My intention is not to argue against Montaigne on this. If keeping the fact of death explicitly present helps some of us appreciate life more, who am I to say otherwise? Seriously. And if someone goes further and seeks out death-defying experiences because she feels most alive when she is most at risk, who am I to judge? That works for her. Good! (I feel bad for her parents, though.)
But valorizing keeping death explicitly present seems to me to be more personality than philosophy.
I understand that Heidegger’s putting death front and center was a radical and healthy move for philosophy. Western philosophy, after all, has spent so much of its energy pursuing deathless wisdom and eternal Reality as the only truths. But as a reader of Heidegger, I put much of what he writes in Being and Time about death into the same bucket as what he writes about destiny, das Man, authenticity, and German peasant romanticism: It’s (to put it mildly) phenomenologically non-disclosive for me — part of the price of reading an ontologist whose methodology, at least initially, was phenomenology.
October 28, 2015
At Digital Trends I take another look at a question that is now gaining some currency: How should autonomous cars be programmed when all the choices are bad and someone has to die in order to maximize the number of lives that are saved?
The question gets knottier the more you look at it. In two regards especially:
First, it makes sense to look at this through a utilitarian lens, but when you do, you have to be open to the possibility that it’s morally better to kill a 64-year-old who’s at the end of his productive career (hey, don’t look at me that way!) than a young parent, or a promising scientist or musician. We consider age and health when doing triage for organ replacements. Should our cars do it for us when deciding who dies?
Second, the real question is: who gets to decide this? The developers at Google who are programming the cars? And suppose Google’s software disagrees with the prioritization of Tesla’s self-driving cars? Who wins? Or do we want a cross-manufacturer agreement about whose life to sacrifice if someone has to die in an accident? A global agreement about the value of lives?
Yeah, sure. What could go wrong with that? /s
October 23, 2015
The first link is to a Wired article by Andy Greenberg about the New Palmyra Project, an effort to reconstruct the ancient monuments ISIS is destroying, and a plea for action to free the project’s creator, Bassel Khartabil, from a Syrian prison.
The second is Donatella’s article in CyberOrient that considers efforts that, like the New Palmyra Project, reconstruct sites destroyed by war, but not with that project’s historical purpose. In the article she brings to light some of the profound and disturbing ways the Net is changing how meaning works.
Her focus is on what she calls “expanded places,” physical places that have been physically destroyed, but that “have been re-animated through multiple mediated versions circulating and re-circulating on the networks.” As she says in the article’s abstract:
Her primary example is Damascene Village, a theme park on the outskirts of Damascus where she conducted ethnographic research in 2010. The brief story of the role that theme park played in Syrian popular media, and the multiple layers of unreality it attracted, is mind-blowing: “a physical replica of the historic 1920s rebel stronghold conceived as a TV set for a reenactment drama of that very struggle; which, historically speaking, took place exactly in the location where the fictional copy had been rebuilt for the sake of media consumption.” To complete the media hall of mirrors, in the recent conflict each side shot “video accounts narrating the seizure of the theme park using themes, symbols and characters borrowed from the TV series.”
The complexity of this place as real, symbolic, organic, and manipulated is mirrored in the nature of the platform. She argues that the Internet’s “circulation, reflexivity, anonymity, and decentralized authorship” lead to a type of violence against meaning: “…the endless circulation of messages that are shared, manipulated, and repeated over and over again in a loop where any possible meaning is lost.” Citing Jodi Dean, Donatella says: “…the uncontrollable speed and spread of contributions over the networks help prevent the formation of any sort of signification,” generating not “a plurality of visions” but “…a feeling of ‘constituent anxiety.'” This process is, she says, “inherent to the networks.”
But is Damascene Village too good an example? It came onto the Net with so many layers of contested meta-meta-meaning that perhaps its online life is atypical. Donatella confronts this question, arguing that the Net not only continues the alienation of images of violence from their actuality and from ethical responses, as noted by Susan Sontag in the 1970s, but adds a participatory level to this: the images of violence are hyperlinked and recirculated by the viewers themselves. This borderless remixing and recirculation “have all contributed to the expansion of the place formerly known as the Damascene Village.”
But what to make of this expansion? Here again I worry that Donatella’s example is too good:
She takes this as a type of fictionality, as described by Jacques Rancière: a rearrangement of something real into new political and aesthetic formats without regard to the truth of that something, blurring “the logic of facts and the logic of fiction” in multiple layers of meaning. She invokes Baudrillard, saying that “The story of the Damascene Village proves that it does not really matter” whether the various factions’ fantasies correspond to historical truth. Rather:
But hasn’t that statement been true of every intra-cultural conflict? The truth of historians has never much mattered to factions trying to rouse support for their side. Donatella uses Rancière’s thought to find the difference between how this worked before and after the Net. I have not read him (I know, I know) but am not fully convinced by the ideas she cites. In the modern era, “technology is not understood as a mere technique of reproduction and transmission.” Yes, but that’s hardly new to the Internet. Not only has it been well understood at least since the 1960s, but one could argue that the Net is in important ways moving us back to a simpler relation between image and reality through the posting of cellphone videos of police attacks, the proliferation of video surveillance, and the new insistence that the police wear video cameras. Also: Russian dash cams.
She cites Rancière further to make the case that the anonymity of Net postings and the ability to record just about everything “has given rise to a new understanding of history as a continuous process of assigning meanings to material realities, of connecting signs and symbols in unprecedented ways. In this sense we can define history as a ‘new form of fiction’…”
I have a complex reaction to this. (This is one of the reasons I so like Donatella’s writing.)
1. Yes, this is exactly what’s happening.
2. It is what happens when we all have access to the materials of history, and the decisions about what counts as history are not made by handfuls of people who control the media — a group that includes highly qualified historians, the editorial staffs of (sometimes scurrilous) newspapers, and self-interested political leaders.
3. If we substitute “current events” for “history,” the situation seems somewhat less novel. The word “history” carries with it a weight that “current events” does not. (a) We do not yet know what history (as practiced by that discipline) will say about current events. It may become far more settled than the fracturing of interpretations of current events now suggests, which depends to a large degree on how education and authority evolve over the years. (b) History of course always is fractured along the lines that divide people; one side in the United States Civil War still sometimes insists slavery was not the issue the war was fought over.
I am not disagreeing with the dangerousness of the fragmenting of interpretations engendered by the Net. I find illuminating and helpful Donatella’s brilliant exposition of the way in which these are not shards so much as multiply reflecting mirrors in which meanings cannot be separated from the act of meaning, and that act of meaning is a performance that gets reflected, reappropriated, and reenacted without end and without the ability to see its source either in the actual world or in its initial expression — “the rise of the anonymous subject and decentralized authorship nurtured by virtue of the circularity and reflexivity of the networks.” Rancière says this creates “‘uncertain communities'” politically questioning “‘the distribution of roles, territories, and languages’.” That’s an important point, although these images also sometimes create powerful political communities, as was the case with images from Ferguson.
Donatella is admirably focused on what this means when the stakes are high:
Her presentation of the ways in which the Net leads to not just a fracturing of meaning but an impossibly self-reflective entanglement of meaning is brilliant. Her drawing our attention to the direness of this when it comes to the most dire of human situations is crucial. Her concept of “expanded places” seems to me to be worth holding on to and exploring. In fact, it’s powerful enough that I don’t think it should be confined to places that have been destroyed, much less destroyed by war. It applies more broadly than that. Her discussion of places destroyed by violence seems to me to point to a case where the stakes are higher, but where the game is essentially the same.
I recognize I have not resolved the question posed in my title. You can thank Donatella for that :)
Categories: culture, philosophy Tagged with: 2b2k • meaning • pessimism • technodeterminism • violence
Date: October 23rd, 2015 dw
October 14, 2015
Joho the Blog by David Weinberger is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.
Creative Commons license: Share it freely, but attribute it to me, and don't use it commercially without my permission.