Atlantic.com has just posted an article of mine that re-examines the “Argument from Architecture” that has been at the bottom of much of what I’ve written over the past twenty years. That argument says, roughly, that the Internet’s architecture embodies particular values that are inevitably transmitted to its users. (Yes, the article discusses what “inevitably” means in this context.) But has the Net been so paved by Facebook, apps, commercialism, etc., that we don’t experience that architecture any more?
I remember a 1971 National Lampoon article that gave away the endings of a hundred books and movies. Wikipedia and others think that article might have been the first use of the term “spoiler.” But “SPOILER ALERT” has only become a common signpost because of what the Internet has done to time, and in particular, to simultaneity.
In the old days of one-to-many, broadcast media, the events that shaped culture happened once and usually happened on schedule. So, it would make sense to bring up what was on the news broadcast last night, or to chuckle over that hilarious scene in this week’s Beverly Hillbillies. Now we watch on our own schedules, having common moments mainly around sports events and breaking news — games or tragedies. Perhaps this has contributed to our culture’s addiction to extremes.
We need SPOILER ALERT signposts because we watch when we want, but the Net is so huge, unconstrained, and cheap that it operates like a push medium — though for the opposite reason that traditional broadcast was one. Trying to avoid finding out what happened on Game of Thrones this week is like trying to avoid getting run over when crossing a highway, except that even seeing the approaching cars counts as getting run over.
This change in temporality shows up in the phrase “real time.” We only distinguish one type of time as “real” because it is no longer the default. The default is asynchronous because that’s how most of our communications occur online. Real time increasingly feels like a deprivation. It requires you to drop what you’re doing to participate or you’re going to lose out. And that feels sub-optimal, or even unfair.
Without the requirement of simultaneity, we are more free to follow our interests. And that turns out to fragment our culture. Or liberate it. Or enrich it. Or all of the above.
Tagged with: culture
Date: June 19th, 2015 dw
We used to have an obligation to at least try to be sympathetic. Now that’s ratcheted up to having to be empathetic. We should lower the bar.
Sympathy means feeling bad for someone while empathy means actually feeling the same feelings.
If that’s what those words still mean, empathy is more than we usually need and is less than we can often accomplish.
You’re hungry? I can be sympathetic about your hunger, but I can’t feel your hunger.
There are child soldiers? I can perhaps understand some of the situation that lets such a thing happen, and I can be shocked and sad that it does, but I don’t think I can feel what those children feel.
You have been sexually assaulted? I can be deeply sympathetic and supportive, but I don’t think I can actually feel what you felt or even what you are feeling now. For example, if you are now overwhelmingly anxious about being in some ordinary situations — walking to your car, entering an unlit room — you will have all my sympathy and support, but I will not experience the trembling you feel in your knees or the tension expressed by your shallow breaths.
Empathy is hard. It often takes the magic of an artist to get us to feel what a character is feeling. (Q: If I am feeling what a non-existent character is feeling, is that even empathy?)
Empathy is hard. Empathy is rare. Empathy is often exactly what is not required: If you are afraid, you probably don’t need another frightened person. You need someone sympathetic who can help you deal with your fears.
Sympathy is getting a bad rap, as if it means just patting someone on the shoulder and saying “There there.” That’s not what sympathy ever was. Sympathy means you are affected by another person’s feelings, not that you feel those very feelings. If I am sad and worried that you are so depressed, I am affected by your feelings, but I am not myself depressed.
Empathy can be a pure mirror of someone else’s feelings. But sympathy requires more than just feeling. If I see you crying, to be sympathetic I have to know something about you and especially about what has caused you to cry. Are you crying because you’ve been hurt? Because you broke up with someone you loved? Because you just saw a sad movie? Because you didn’t get into a school or onto a particular team? Because you’re sympathizing with someone else? In order to sympathize more fully, I need to know.
That is, in sympathy you turn not just to feelings but to the world. You see what the sufferer sees from her/his point of view, or as close to that point of view as you can. What you see is not a matter of indifference to you. You are moved by what is moving the other. How you are moved is different in type and extent — you are not fearful in the face of the other’s fears, you are not as wracked by grief as is the mourner — but you are moved.
Sympathy lets the world matter to you as it matters to someone else. In sympathy, the mattering culminates from heart, mind, and caring about the other. It is perhaps the best thing we do.
Most importantly, through sympathy we are moved to helpful action, whether that is indeed a pat on the shoulder or requires a far larger commitment. Sympathy does that to us. For us.
Empathy can get in the way of the supportive action that sympathy demands. If a friend is heartbroken because a relationship ended, you may bring to bear a different view of the world and hold out other feelings as possibilities. Hope perhaps. A different perspective. A pint of Ben and Jerry’s. The gap in feelings between you and your friend enables the sympathetic action your friend needs.
If our aim is to act in the world to try to reduce pain, fear, and sadness, then asking for empathy is often to ask for too much. Sympathy more than suffices.
Tagged with: sympathy
Date: January 31st, 2015 dw
Tagged with: humor
Date: January 27th, 2015 dw
It had to be back in 1993 that I had dual cards at Interleaf. But it was only a couple of days ago that I came across them.
Yes, for a couple of years I was both VP of Strategic Marketing and Chief Philosophical Officer at Interleaf.
The duties of the former were more rigorously defined than those of the latter. It was mainly just a goofy card, but it did reflect a bit of my role there. I got to think about the nature of documents, knowledge, etc., and then write and speak about it.
Goofy for sure. But I think in some small ways it helped the company. Interleaf had amazingly innovative software, decades ahead of its time, in large part because the developers had stripped documents down to their elements, and were thinking in new ways about how they could go back together. Awesome engineers, awesome software.
And I got to try to explain why this was important even beyond what the software enabled you to do.
Should every company have a CPO? I remember writing about that at the end of my time there. If I find it, I’ll post it. But I won’t and so I won’t.
Tagged with: innovation
Date: January 12th, 2015 dw
I recently published a column at KMWorld pointing out some of the benefits of having one’s thoughts share a context with people who build things. Today I came across an article by Jethro Masis titled “Making AI Philosophical Again: On Philip E. Agre’s Legacy.” Jethro points to a 1997 work by the greatly missed Philip Agre that says it so much better:
“…what truly founds computational work is the practitioner’s evolving sense of what can be built and what cannot” (1997, p. 11). The motto of computational practitioners is simple: if you cannot build it, you do not understand it. It must be built and we must accordingly understand the constituting mechanisms underlying its workings. This is why, on Agre’s account, computer scientists “mistrust anything unless they can nail down all four corners of it; they would, by and large, rather get it precise and wrong than vague and right” (Computation and Human Experience, 1997, p. 13).
(I’m pretty sure I read Computation and Human Experience many years ago. Ah, the Great Forgetting of one in his mid-60s.)
Jethro’s article overall attempts to adopt Agre’s point that “The technical and critical modes of research should come together in this newly expanded form of critical technical consciousness,” and to apply this to Heidegger’s idea of Zuhandenheit: how things show themselves to us as useful to our plans and projects; for Heidegger, that is the normal, everyday way most things present themselves to us. This leads Jethro to take us through Agre’s criticisms of AI modeling, its failure to represent context except as vorhanden [pdf] (Heidegger’s term for how things look when they are torn out of the context of our lived purposes), and the need to thoroughly rethink the idea of consciousness as consisting of representations of an external world. Agre wants to work out “on a technical level” how this can apply to AI. Fascinating.
Here’s another bit of brilliance from Agre:
For Agre, this is particularly problematic because “as long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem.” (p. 260 of CHE)
Tagged with: agre
• too big to know
Date: December 7th, 2014 dw
Here’s the opening of my latest column at KMWorld:
A couple of weeks ago, I joined other former students of Joseph P. Fell at Bucknell University for a weekend honoring him. Although he is a philosophy professor, the takeaway for many of us was a reminder that while hands are useless without minds to guide them, minds need hands more deeply than we usually think.
Philosophy is not the only discipline that needs this reminder. Almost anyone—it’s important to maintain the exceptions—who is trying to understand a topic would do well by holding something in her hands, or, better, building something with them…
A year ago, Harold Feld posted one of the most powerful ways of framing our excessive zeal for copyright that I have ever read. I was welling up even before he brought Aaron Swartz into the context.
Harold’s post is within a standard Jewish genre: the d’var Torah, an explanation of a point in the portion of the Torah being read that week. As is expected of the genre, he draws upon a long, self-reflective history of interpretation. I urge you to read it because of the light it sheds on our culture of copyright, but it’s also worth noticing the form of the discussion.
The content: In the Jewish tradition, Sodom’s sin wasn’t sexual but rather an excessive possessiveness leading to a fanatical unwillingness to share. Harold cites from a collection of traditional commentary, The Ethics of Our Fathers:
“There are four types of moral character. One who says: ‘what is mine is mine and what is yours is yours.’ This is an average person. Some say it is the Way of Sodom. The one who says: ‘what is mine is yours and what is yours is mine,’ is ignorant of the world. ‘What is mine is yours and what is yours is yours’ is the righteous. ‘What is mine is mine and what is yours is mine’ is the wicked.”
In a PowerPoint, it’d be a 2×2 chart. Harold’s point will be that the “what is mine is mine and what is yours is yours” of the average person becomes wicked when enforced without compassion or flexibility. Harold evokes the traditional Jewish examples of Sodom’s wickedness and compares them to what’s become our dominant “average” assumptions about how copyright ought to work.
I am purposefully not explaining any further. Read Harold’s piece.
The form: I find the space of explanation within which this d’var Torah — and most others that I’ve heard — operates to be fascinating. At the heart of Harold’s essay is a text accepted by believers as having been given by God, yet the explanation is accomplished by reference to a history of human interpretations that disagree with one another, with guidance by a set of values (e.g., sharing is good) that persevere in a community thanks to that community’s insistent adherence to its tradition. The result is that an agnostic atheist like me (I’m only pretty sure there is no God) can find truth and wisdom in the interpretation of a text I take as being ungrounded in a divine act.
But forget all that. Read Harold’s post, bubbelah.
Google self-driving cars are presumably programmed to protect their passengers. So, when a traffic situation gets nasty, the car you’re in will take all the defensive actions it can to keep you safe.
But what will robot cars be programmed to do when there’s lots of them on the roads, and they’re networked with one another?
We know what we as individuals would like. My car should take as its Prime Directive: “Prevent my passengers from coming to harm.” But when the cars are networked, their Prime Directive well might be: “Minimize the amount of harm to humans overall.” And such a directive can lead a particular car to sacrifice its humans in order to keep the total carnage down. Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.
It’s easy to imagine cases. For example, a human unexpectedly darts into a busy street. The self-driving cars around it rapidly communicate and algorithmically devise a plan that saves the pedestrian at the price of causing two cars to engage in a Force 1 fender-bender and three cars to endure Force 2 minor collisions…but only if the car I happen to be in intentionally drives itself into a concrete piling, with a 95% chance of killing me. All other plans result in worse outcomes, where “worse” refers to some scale that weighs monetary damages, human injuries, and human deaths.
Or, a broken run-off pipe creates a dangerous pool of water on the highway during a flash storm. The self-driving cars agree that unless my car accelerates and rams into a concrete piling, all other joint action results in a tractor trailer jack-knifing, causing lots of death and destruction. Not to mention The Angelic Children’s Choir school bus that would be in harm’s way. So, the swarm of robotic cars makes the right decision and intentionally kills me.
In short, the networking of robotic cars will change the basic moral principles that guide their behavior. Non-networked cars are presumably programmed to be morally blind individualists trying to save their passengers without thinking about others, but networked cars will probably be programmed to support some form of utilitarianism that tries to minimize the collective damage. And that’s probably what we’d want. Isn’t it?
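The swarm’s choice amounts to a minimization over candidate joint plans, each scored on some scale that weighs monetary damage, injuries, and deaths. Here is a toy sketch of that selection in Python. Every weight, plan name, and number below is invented for illustration; no real system is specified this way, and the whole point of the next paragraph is that we don’t agree on what the weights should be.

```python
# Toy sketch of networked cars picking a joint plan by minimizing a
# weighted harm score. All weights and outcome figures are hypothetical.

# Hypothetical harm weights: per dollar of damage, per injury, per death.
WEIGHTS = {"damage_usd": 1e-4, "injuries": 50.0, "deaths": 1000.0}

def harm_score(plan):
    """Collapse a plan's predicted outcomes into a single number."""
    return sum(WEIGHTS[k] * plan["outcomes"][k] for k in WEIGHTS)

def choose_plan(plans):
    """The swarm picks the plan with the lowest total harm,
    regardless of which individual car bears that harm."""
    return min(plans, key=harm_score)

# The pedestrian scenario from the text, with made-up numbers.
# "deaths" is an expected value: 0.95 = a 95% chance of one death.
plans = [
    {"name": "save pedestrian, sacrifice one passenger",
     "outcomes": {"damage_usd": 40_000, "injuries": 5, "deaths": 0.95}},
    {"name": "protect every passenger, lose the pedestrian",
     "outcomes": {"damage_usd": 120_000, "injuries": 8, "deaths": 1.0}},
]
best = choose_plan(plans)
```

With these particular weights the swarm sacrifices the one passenger; nudge the weights and the answer flips, which is exactly the disagreement the next paragraph raises.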
But one of the problems with utilitarianism is that there turns out to be little agreement about what counts as a value and how much it counts. Is saving a pedestrian more important than saving a passenger? Is it always right to try to preserve human life, no matter how unlikely it is that the action will succeed and no matter how many other injuries it is likely to result in? Should the car act as if its passenger has seat-belted him/herself in because passengers should do so? Should the cars be more willing to sacrifice the geriatric than the young, on the grounds that the young have more of a lifespan to lose? And won’t someone please think about the kids — those cute choir kids?
We’re not good at making these decisions, or even at having rational conversations about them. Usually we don’t have to, or so we tell ourselves. For example, many of the rules that apply to us in public spaces, including roads, optimize for fairness: everyone waits at the same stop lights, and you don’t get to speed unless something is relevantly different about your trip: you are chasing a bad guy or are driving someone who urgently needs medical care.
But when we are better able to control the circumstances, fairness isn’t always the best rule, especially in times of distress. Unfortunately, we don’t have a lot of consensus around the values that would enable us to make joint decisions. We fall back to fairness, or pretend that we can have it all. Or we leave it to experts, as with the rules that determine who gets organ transplants. It turns out we don’t even agree about whether it’s morally right to risk soldiers’ lives to rescue a captured comrade.
Fortunately, we don’t have to make these hard moral decisions. The people programming our robot cars will do it for us.
Imagine a time when the roadways are full of self-driving cars and trucks. There are some good reasons to think that that time is coming, and coming way sooner than we’d imagined.
Imagine that Google remains in the lead, and the bulk of the cars carry their brand. And assume that these cars are in networked communication with one another.
Can we assume that Google will support Networked Road Neutrality, so that all cars are subject to the same rules, and there is no discrimination based on contents, origin, destination, or purpose of the trip?
Or would Google let you pay a premium to take the “fast lane”? (For reasons of network optimization the fast lane probably wouldn’t actually be a designated lane but well might look much more like how frequencies are dynamically assigned in an age of “smart radios.”) We presumably would be ok with letting emergency vehicles go faster than the rest of the swarm, but how about letting the rich go faster by programming the robot cars to give way when a car with its “Move aside!” bit is on?
Let’s say Google supports a strict version of Networked Road Neutrality. But let’s assume that Google won’t be the only player in this field. Suppose Comcast starts to make cars, and programs them to get ahead of the cars that choose to play by the rules. Would Google cars take action to block the Comcast cars from switching lanes to gain a speed advantage — perhaps forming a cordon around them? Would that be legal? Would selling a virtual fast lane on a public roadway be legal in the first place? And who gets to decide? The FCC?
One thing is sure: It’ll be a golden age for lobbyists.
I’ve posted [pdf] a terrible scan that I made of a talk given by Joseph P. Fell in Sept. 1970. “What is philosophy?” was presented to a general university audience, and in Prof. Fell’s way, it is both clear and deep.
Prof. Fell was my most influential teacher when I was at Bucknell, and, well, ever. He was and is more interested in understanding than in being right, and certainly more than in being perceived as right. This enables him to model a philosophizing that is both rigorous and gentle.
Although I’ve told him more than once how much he has affected my life, he is too humble to believe it. So I’m telling you all instead.
Tagged with: gratitude
Date: February 11th, 2014 dw