Joho the Blog » philosophy

October 28, 2015

When should your self-driving car kill you?

At Digital Trends I take another look at a question that is now gaining some currency: How should autonomous cars be programmed when all the choices are bad and someone has to die in order to maximize the number of lives that are saved?

The question gets knottier the more you look at it. In two regards especially:

First, it makes sense to look at this through a utilitarian lens, but when you do, you have to be open to the possibility that it’s morally better to kill a 64-year-old who’s at the end of his productive career (hey, don’t look at me that way!) rather than a young parent, or a promising scientist or musician. We consider age and health when doing triage for organ replacements. Should our cars do it for us when deciding who dies?

Second, the real question is who gets to decide this? The developers at Google who are programming the cars? And suppose the Google software disagrees with the prioritization of the Tesla self-driving cars? Who wins? Or, do we want to have a cross-manufacturer agreement about whose life to sacrifice if someone has to die in an accident? A global agreement about the value of lives?

Yeah, sure. What could go wrong with that? /s


June 1, 2015

[2b2k] Russell on knowledge

Bertrand Russell on knowledge for the Encyclopaedia Britannica:

[A]t first sight it might be thought that knowledge might be defined as belief which is in agreement with the facts. The trouble is that no one knows what a belief is, no one knows what a fact is, and no one knows what sort of agreement between them would make a belief true.

But that wonderful quote is misleading if left there. In fact it introduces Russell’s careful exploration and explanation of those terms. Crucially: “We are thus driven to the view that, if a belief is to be something causally important, it must be defined as a characteristic of behaviour.”


January 12, 2015

Chief Philosophical Officer

It had to be back in 1993 that I had dual cards at Interleaf. But it was only a couple of days ago that I came across them.

Interleaf business cards

Yes, for a couple of years I was both VP of Strategic Marketing and Chief Philosophical Officer at Interleaf.

The duties of the former were more rigorously defined than those of the latter. It was mainly just a goofy card, but it did reflect a bit of my role there. I got to think about the nature of documents, knowledge, etc., and then write and speak about it.

Goofy for sure. But I think in some small ways it helped the company. Interleaf had amazingly innovative software, decades ahead of its time, in large part because the developers had stripped documents down to their elements, and were thinking in new ways about how they could go back together. Awesome engineers, awesome software.

And I got to try to explain why this was important even beyond what the software enabled you to do.

Should every company have a CPO? I remember writing about that at the end of my time there. If I find it, I’ll post it. But I won’t and so I won’t.


December 7, 2014

[2b2k] Agre on minds and hands

I recently published a column at KMWorld pointing out some of the benefits of having one’s thoughts share a context with people who build things. Today I came across an article by Jethro Masis titled “Making AI Philosophical Again: On Philip E. Agre’s Legacy.” Jethro points to a 1997 work by the greatly missed Philip Agre that says it so much better:

…"what truly founds computational work is the practitioner’s evolving sense of what can be built and what cannot" (1997, p. 11). The motto of computational practitioners is simple: if you cannot build it, you do not understand it. It must be built and we must accordingly understand the constituting mechanisms underlying its workings. This is why, on Agre’s account, computer scientists “mistrust anything unless they can nail down all four corners of it; they would, by and large, rather get it precise and wrong than vague and right” (Computation and Human Experience, 1997, p. 13).

(I’m pretty sure I read Computation and Human Experience many years ago. Ah, the Great Forgetting of one in his mid-60s.)

Jethro’s article overall attempts to adopt Agre’s point that “The technical and critical modes of research should come together in this newly expanded form of critical technical consciousness,” and to apply this to Heidegger’s idea of Zuhandenheit: how things show themselves to us as useful to our plans and projects; for Heidegger, that is the normal, everyday way most things present themselves to us. This leads Jethro to take us through Agre’s criticisms of AI modeling, its failure to represent context except as vorhanden [pdf] (Heidegger’s term for how things look when they are torn out of the context of our lived purposes), and the need to thoroughly rethink the idea of consciousness as consisting of representations of an external world. Agre wants to work out “on a technical level” how this can apply to AI. Fascinating.

Here’s another bit of brilliance from Agre:

For Agre, this is particularly problematic because “as long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem.” (p. 260 of CHE)


June 8, 2014

Will a Google car sacrifice you for the sake of the many? (And Networked Road Neutrality)

Google self-driving cars are presumably programmed to protect their passengers. So, when a traffic situation gets nasty, the car you’re in will take all the defensive actions it can to keep you safe.

But what will robot cars be programmed to do when there’s lots of them on the roads, and they’re networked with one another?

We know what we as individuals would like. My car should take as its Prime Directive: “Prevent my passengers from coming to harm.” But when the cars are networked, their Prime Directive well might be: “Minimize the amount of harm to humans overall.” And such a directive can lead a particular car to sacrifice its humans in order to keep the total carnage down. Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.

It’s easy to imagine cases. For example, a human unexpectedly darts into a busy street. The self-driving cars around it rapidly communicate and algorithmically devise a plan that saves the pedestrian at the price of causing two cars to engage in a Force 1 fender-bender and three cars to endure Force 2 minor collisions…but only if the car I happen to be in intentionally drives itself into a concrete piling, with a 95% chance of killing me. All other plans result in worse outcomes, where “worse” refers to some scale that weighs monetary damages, human injuries, and human deaths.

Or, a broken run-off pipe creates a dangerous pool of water on the highway during a flash storm. The self-driving cars agree that unless my car accelerates and rams into a concrete piling, every other joint action results in a tractor trailer jack-knifing, causing lots of death and destruction. Not to mention The Angelic Children’s Choir school bus that would be in harm’s way. So, the swarm of robotic cars makes the right decision and intentionally kills me.

In short, the networking of robotic cars will change the basic moral principles that guide their behavior. Non-networked cars are presumably programmed to be morally-blind individualists trying to save their passengers without thinking about others, but networked cars will probably be programmed to support some form of utilitarianism that tries to minimize the collective damage. And that’s probably what we’d want. Isn’t it?
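To make the trade-off concrete, here is a minimal Python sketch of the kind of utilitarian plan selection described above. Everything in it is my own illustration: the outcome fields, the harm weights, and the candidate plans are assumptions, not anything Google, Tesla, or any other manufacturer has published.

# A minimal, purely illustrative sketch of utilitarian plan selection among
# networked cars: enumerate the joint maneuvers, score each by total expected
# harm, and pick the minimum -- even if that plan sacrifices this car's passenger.
# The Outcome fields and the weights below are assumptions, not a real system.

from dataclasses import dataclass

@dataclass
class Outcome:
    expected_deaths: float    # probability-weighted fatalities
    expected_injuries: float  # probability-weighted injuries
    property_damage: float    # expected cost in dollars

# Hypothetical exchange rates between kinds of harm. This table is where all
# the moral weight lives: who sets these numbers, and on what grounds?
WEIGHTS = {"death": 1_000_000.0, "injury": 10_000.0, "dollar": 1.0}

def harm(o: Outcome) -> float:
    return (WEIGHTS["death"] * o.expected_deaths
            + WEIGHTS["injury"] * o.expected_injuries
            + WEIGHTS["dollar"] * o.property_damage)

def choose_plan(plans: dict[str, Outcome]) -> str:
    """Return the joint plan with the least total expected harm."""
    return min(plans, key=lambda name: harm(plans[name]))

if __name__ == "__main__":
    plans = {
        # Everyone brakes: the pedestrian is probably killed, plus a pile-up.
        "all_brake": Outcome(expected_deaths=1.1, expected_injuries=6.0,
                             property_damage=80_000),
        # Fender-benders all around, and my car hits the piling (95% chance I die).
        "sacrifice_my_car": Outcome(expected_deaths=0.95, expected_injuries=5.0,
                                    property_damage=60_000),
    }
    print(choose_plan(plans))  # -> "sacrifice_my_car"

Note that nothing in the selection logic itself is controversial; all of the controversy is packed into the WEIGHTS table and the outcome estimates.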

But one of the problems with utilitarianism is that there turns out to be little agreement about what counts as a value and how much it counts. Is saving a pedestrian more important than saving a passenger? Is it always right to try to preserve human life, no matter how unlikely it is that the action will succeed and no matter how many other injuries it is likely to result in? Should the car act as if its passenger has seat-belted him/herself in because passengers should do so? Should the cars be more willing to sacrifice the geriatric than the young, on the grounds that the young have more of a lifespan to lose? And won’t someone please think about the kids — those cute choir kids?

We’re not good at making these decisions, or even at having rational conversations about them. Usually we don’t have to, or so we tell ourselves. For example, many of the rules that apply to us in public spaces, including roads, optimize for fairness: everyone waits at the same stop lights, and you don’t get to speed unless something is relevantly different about your trip: you are chasing a bad guy or are driving someone who urgently needs medical care.

But when we are better able to control the circumstances, fairness isn’t always the best rule, especially in times of distress. Unfortunately, we don’t have a lot of consensus around the values that would enable us to make joint decisions. We fall back to fairness, or pretend that we can have it all. Or we leave it to experts, as with the rules that determine who gets organ transplants. It turns out we don’t even agree about whether it’s morally right to risk soldiers’ lives to rescue a captured comrade.

Fortunately, we don’t have to make these hard moral decisions. The people programming our robot cars will do it for us.


Imagine a time when the roadways are full of self-driving cars and trucks. There are some good reasons to think that that time is coming, and coming way sooner than we’d imagined.

Imagine that Google remains in the lead, and the bulk of the cars carry their brand. And assume that these cars are in networked communication with one another.

Can we assume that Google will support Networked Road Neutrality, so that all cars are subject to the same rules, and there is no discrimination based on contents, origin, destination, or purpose of the trip?

Or would Google let you pay a premium to take the “fast lane”? (For reasons of network optimization the fast lane probably wouldn’t actually be a designated lane but well might look much more like how frequencies are dynamically assigned in an age of “smart radios.”) We presumably would be ok with letting emergency vehicles go faster than the rest of the swarm, but how about letting the rich go faster by programming the robot cars to give way whenever a car has its “Move aside!” bit turned on?
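Just to make the “virtual fast lane” idea concrete, here is a tiny, purely hypothetical sketch of slot assignment by priority class, loosely analogous to dynamic frequency assignment. The class names, the paid “Move aside!” flag, and the policy are my inventions; nothing here reflects any real manufacturer’s or regulator’s design.

from dataclasses import dataclass

# Lower number = granted the contested stretch of road sooner. Hypothetical classes.
PRIORITY = {"emergency": 0, "paid_move_aside": 1, "standard": 2}

@dataclass
class SlotRequest:
    vehicle_id: str
    service_class: str   # "emergency", "paid_move_aside", or "standard"
    request_time: float  # when the car asked for the slot

def grant_order(requests: list[SlotRequest]) -> list[str]:
    """Order in which cars get the slot: by priority class, then by arrival time."""
    ranked = sorted(requests, key=lambda r: (PRIORITY[r.service_class], r.request_time))
    return [r.vehicle_id for r in ranked]

if __name__ == "__main__":
    print(grant_order([
        SlotRequest("my-car", "standard", request_time=1.0),
        SlotRequest("ambulance-7", "emergency", request_time=2.0),
        SlotRequest("premium-sedan", "paid_move_aside", request_time=3.0),
    ]))
    # -> ['ambulance-7', 'premium-sedan', 'my-car']
    # A strict Networked Road Neutrality rule would simply drop the paid class.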

Let’s say Google supports a strict version of Networked Road Neutrality. But let’s assume that Google won’t be the only player in this field. Suppose Comcast starts to make cars, and programs them to get ahead of the cars that choose to play by the rules. Would Google cars take action to block the Comcast cars from switching lanes to gain a speed advantage — perhaps forming a cordon around them? Would that be legal? Would selling a virtual fast lane on a public roadway be legal in the first place? And who gets to decide? The FCC?

One thing is sure: It’ll be a golden age for lobbyists.


February 16, 2014

First post at Medium: The Internet is not a Panopticon

I’ve been meaning to try Medium, a magazine-bloggy place that encourages carefully constructed posts by providing an elegant writing environment. It’s hard to believe, but it’s even better looking than Joho the Blog. And, unlike HuffPo, there are precious few stories about side boobs. So I posted something there, and might do so again.

The piece is about why we seem to keep insisting that the Internet is a panopticon when it clearly is not. So, if you care about panopticons, you might find it interesting. Here’s a bit from the beginning:

A panopticon was Jeremy Bentham’s (1748-1832) idea about how to design a prison or other institution where people need to be watched. It was to be a circular building with a watchers’ station in the middle containing a guard who could see everyone, but who could not himself/herself be seen. Even though not everyone could be seen at the same time, prisoners would never know when they were being watched. That’d keep ’em in line.

There is indeed a point of comparison between a panopticon and the Internet: you generally can’t tell when your public stuff is being seen (although your server logs could tell you). But that’s not even close to what a panopticon is.

…So why did the comparison seem so apt?


February 11, 2014

What is philosophy? An essay by JP Fell

I’ve posted [pdf] a terrible scan that I made of a talk given by Joseph P. Fell in Sept. 1970. “What is philosophy?” was presented to a general university audience, and in Prof. Fell’s way, it is both clear and deep.

Prof. Fell was my most influential teacher when I was at Bucknell, and, well, ever. He was and is more interested in understanding than in being right, and certainly more than in being perceived as right. This enables him to model a philosophizing that is both rigorous and gentle.

Although I’ve told him more than once how much he has affected my life, he is too humble to believe it. So I’m telling you all instead.


December 28, 2013

[2b2k] From thinkers to memes

The history of Western philosophy usually has a presumed shape: there’s a known series of Great Men (yup, men) who in conversation with their predecessors came up with a coherent set of ideas. You can list them in chronological order, and cluster them into schools of thought with their own internal coherence: the neo-Platonists, the Idealists, etc. Sometimes, the schools and not the philosophers are the primary objects in the sequence, but the topology is basically the same. There are the Big Ideas and the lesser excursions, the major figures and the supporting players.

Of course the details of the canon are always in dispute in every way: who is included, who is major, who belongs in which schools, who influenced whom. A great deal of scholarly work is given over to just such arguments. But there is some truth to this structure itself: philosophers traditionally have been shaped by their tradition, and some have had more influence than others. There are also elements of a feedback loop here: you need to choose which philosophers you’ll teach in philosophy courses, so you act responsibly by first focusing on the majors, and by so doing you confirm for the next generation that the ones you’ve chosen are the majors.

But I wonder if in one or two hundred years philosophers (by which I mean the PT-3000 line of Cogbots™) will mark our era as the end of the line — the end of the linear sequence of philosophers. Rather than a sequence of recognized philosophers in conversation with their past and with one another, we now have a network of ideas being passed around, degraded by noise and enhanced by pluralistic appropriation, but without owners — at least without owners who can hold onto their ideas long enough to be identified with them in some stable form. This happens not simply because networks are chatty. It happens not simply because the transmission of ideas on the Internet occurs through a p2p handoff in which each of the p’s re-expresses the idea. It happens also because the discussion is no longer confined to a handful of extensively trained experts with strict ideas about what is proper in such discussions, and who share a nano-culture that supersedes the values and norms of their broader local cultures.

If philosophy survives as anything more than the history of thought, perhaps we will not be able to outline its grand movements by pointing to a handful of thinkers but will point to the webs through which ideas passed, or, more exactly, the ideas around which webs are formed. Because no idea passes through the Web unchanged, it will be impossible to pretend that there are “ideas-in-themselves” — nothing like, say, Idealism which has a core definition albeit with a history of significant variations. There is no idea that is not incarnate, and no incarnation that is not itself a web of variations in conversation with itself.

I would spell this out for you far more precisely, but I don’t know what I’m talking about, beyond an intuition that the tracks end at the trampled field in which we now live.


June 27, 2013

Relevant differences unresolved

After yesterday’s Supreme Court decisions, I’m just so happy about the progress we’re making.

It seems like progress to me because of the narrative line I have for the stretch of history I happen to have lived through since my birth in 1950: We keep widening the circle of sympathy, acceptance, and rights so that our social systems more closely approximate the truly relevant distinctions among us. I’ve seen the default position on the rights of African Americans switch, then the default position on the rights of women, and now the default position on sexual “preferences.” I of course know that none of these social changes is complete, but to base a judgment on race, gender, or sexuality now requires special arguments, whereas sixty years ago, those factors were assumed to be obviously relevant to virtually all of life.

According to this narrative, it’s instructive to remember that the Supreme Court overruled state laws banning racial intermarriage only in 1967. That’s amazing to me. When I was 17, outlawing “miscegeny” seemed to some segment of the population to be not just reasonable but required. It was still a debatable issue. Holy cow! How can you remember that and not think that we’re going to struggle to explain to the next generation that in 2013 there were people who actually thought banning same sex marriage was not just defensible but required?

So, I imagine a conversation (and, yes, I know I’m making it up) with someone angry about yesterday’s decisions. Arguing over which differences are relevant is often a productive way to proceed. You say that women’s upper body strength is less than men’s, so women shouldn’t be firefighters, but we can agree that if a woman can pass the strength tests, then she should be hired. Or maybe we argue about how important upper body strength is for that particular role. You say that women are too timid, and I say that we can find that out by hiring some, but at least we agree that firefighters need to be courageous. A lot of our moral arguments about social issues are like that. They are about what are the relevant differences.

But in this case it’s really really hard. I say that gender is irrelevant to love, and all that matters to a marriage is love. You say same sex marriage is unnatural, that it’s forbidden by God, and that lust is a temptation to be resisted no matter what its object. Behind these ideas (at least in this reconstruction of an imaginary argument) is an assumption that physical differences created by God must entail different potentials which in turn entail different moral obligations. Why else would God have created those physical distinctions? The relevance of the distinctions is etched in stone. Thus the argument over relevant differences can’t get anywhere. We don’t even agree about the characteristics of the role (e.g., upper body strength and courage count for firefighters) so that we can then discuss what differences are relevant to those characteristics. We don’t have enough agreement to be able to disagree fruitfully.

I therefore feel bad for those who see yesterday’s rulings as one more step toward a permissive, depraved society. I wish I could explain why my joy feels based on acceptance, not permissiveness, and not on depravity but on love.

By the way, my spellchecker flags “miscegeny” as a misspelled word, a real sign of progress.


June 10, 2013

Heidegger on technology, and technodeterminism

I’m leaving tomorrow night for a few days in Germany as a fellow at the University of Stuttgart’s International Center for Research on Culture and Technology. I’ll be giving a two-day workshop with about 35 students, which I am both very excited about and totally at sea about. Except for teaching a course with John Palfrey, who is an awesomely awesome teacher, I haven’t taught since 1986. I was good at the time, but I forget the basics about structuring sessions.

Anyway, enough of that particular anxiety. I’m also giving a public lecture on Thursday at the city library (Stadtbibliothek am Mailänder Platz). It’ll be in English, thank Gott! My topic is “What the Web Uncovers,” which is a purposeful Heidegger reference. I’ve spent a lot of time trying to write this, and finally on Sunday completed a draft. It undoubtedly will change significantly, but here’s what I plan on saying at the beginning:

In 1954, Heidegger published “The Question about Technology” (Die Frage nach der Technik). I re-read it recently, and discovered why people hold Heidegger’s writing in such disdain (aside from the Nazi thing, of course). Wow! But there are some ideas in it that I think are really helpful.

Heidegger says that technology reveals the world to us in particular ways. For example, a dam across a river, which is one of his central examples, reveals the natural world as Bestand, which gets translated into English as “standing reserve” or “resource”: power waiting to be harnessed by humans. His point I think is profound: Technology should be understood not only in terms of what it does, but in terms of what it reveals about the world and what the world means to us. That is in fact the question I want to ask: What does the world that the Web uncovers look like? What does the Web reveal?

This approach holds the promise of letting us talk about technology from beyond the merely technical position. But it also happens to throw itself into an old controversy that has recently re-arisen. It sounds as if Heidegger is presenting a form of technodeterminism — the belief that technology determines our reaction to it, that technology shapes us. Against technodeterminism it is argued quite sensibly that a tool is not even a tool until humans come along and decide to use it for something. So, a screwdriver can be used to drive screws, but it could also be used to bang on a drum or to open and stir a can of paint. So, how could a screwdriver have an effect on us, much less shape us, if we’re the ones who are shaping it?

Heidegger doesn’t fall prey to technodeterminism because one of his bedrock ideas is that things don’t have meaning outside of the full context of relationships that constitute the entire world — a world into which we are thrown. So, technology doesn’t determine us, since it takes an entire world to determine technology, us, and everything else. Further, in “Die Frage nach der Technik,” he explains the various historical ways technology has affected us by referring to a mysterious history of Being that gives us that historical context. But I don’t want to talk about that, mainly because insofar as I understand it, I find it deeply flawed. Even so I think we want to be able to talk about the effect of technology, granting that it’s not technology itself taken in isolation, but rather the fact that we do indeed come to technology out of a situation that is historical, cultural, social, and even individual.

So, how does the Web reveal the world? What does the world look like in the Age of the Web? (And that means: what does it look like to us educated Westerners with sufficient leisure time to consider such things, etc.) Here are the subject headings of the talk until I rewrite it as I inevitably do: chaotic, unmasterable, messy, interest-based, unsettling, and turning us to a shared world about which we disagree. This is very unlike the way the world looks in the prior age of technology, the age about which Heidegger was writing. Yet, I find at the heart of the Web-revealed world the stubborn fact that the world is revealed through human care: we are creatures that care about our existence, about others, and about our world. Care (Sorge) is at the heart of early Heidegger’s analysis.


