
When should your self-driving car kill you?

At Digital Trends I take another look at a question that is now gaining some currency: How should autonomous cars be programmed when all the choices are bad and someone has to die in order to maximize the number of lives that are saved?

The question gets knottier the more you look at it. In two regards especially:

First, it makes sense to look at this through a utilitarian lens, but when you do, you have to be open to the possibility that it’s morally better to kill a 64-year-old who’s at the end of his productive career (hey, don’t look at me that way!) than a young parent, or a promising scientist or musician. We consider age and health when doing triage for organ transplants. Should our cars do it for us when deciding who dies?

Second, the real question is: who gets to decide this? The developers at Google who are programming the cars? And suppose the Google software disagrees with the prioritization in Tesla’s self-driving cars? Who wins? Or do we want a cross-manufacturer agreement about whose life to sacrifice if someone has to die in an accident? A global agreement about the value of lives?

Yeah, sure. What could go wrong with that? /s
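
To make the stakes concrete, here’s a deliberately toy sketch in Python. Nothing in it is anyone’s real system; the attribute names, weights, and functions are invented for illustration. The point is that “count every life equally” and “weight lives the way transplant triage weighs age and health” are both policies somebody has to code.

```python
# Toy illustration only: invented names, attributes, and weights.
# The point is that "all lives count equally" is itself a coded policy.

def equal_weight_harm(people_hit):
    """Every life counts the same: harm is just a head count."""
    return len(people_hit)

def triage_weighted_harm(people_hit):
    """Utilitarian-style weighting by (hypothetical) age and health,
    loosely analogous to how transplant triage weighs candidates."""
    def weight(person):
        w = max(0.2, (80 - person.get("age", 40)) / 80)  # younger counts for more
        w *= person.get("health", 1.0)                    # 0.0 (gravely ill) .. 1.0
        return w
    return sum(weight(p) for p in people_hit)

def choose_trajectory(options, harm_fn):
    """Pick the maneuver whose predicted victims score lowest under harm_fn."""
    return min(options, key=lambda opt: harm_fn(opt["people_hit"]))

# Example: the two policies can disagree about which maneuver is "best".
options = [
    {"name": "swerve", "people_hit": [{"age": 7, "health": 1.0},
                                      {"age": 9, "health": 1.0}]},
    {"name": "stay",   "people_hit": [{"age": 64, "health": 0.9},
                                      {"age": 70, "health": 0.8},
                                      {"age": 72, "health": 0.7}]},
]
print(choose_trajectory(options, equal_weight_harm)["name"])    # "swerve": 2 victims < 3
print(choose_trajectory(options, triage_weighted_harm)["name"]) # "stay": weighted harm is lower
```

Picking equal_weight_harm over triage_weighted_harm isn’t avoiding the moral question; it’s one answer to it.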

6 Responses to “When should your self-driving car kill you?”

  1. I’m pretty sure software that’s anywhere near making that sort of moral decision is decades away if it’s even possible.

    I’m also pretty sure the self-driving cars we have now are mostly operating on the level of “don’t hit that thing I see in front of me” and that any moral dimension to self-driving cars is just imagined paranoia.

    I’m entirely sure that _humans_ driving are also mostly operating on the level of “don’t hit that thing in front of me” and that moral thought only comes up when they decide that they do want to hit things (e.g., Bostonians).

  2. I think you are looking at this through the wrong tense. Cars with human drivers are inherently dangerous; cars with computer drivers, less so.

    Who cares about the edge cases? Let the courts sort that out.

    If the lobbyists (or anyone else) prevent autonomous vehicles from arriving, what about the 30,000 deaths we continue to have on the road every year despite every automotive safety gadget in cars today? Those deaths are their fault.

  3. Karl, given what Google (and others) know about us, it has the information to make triage-like decisions. Of course the info is incomplete, but these calculations are always probabilistic. Or the software might be programmed to treat all humans as equally valuable, but that too is a moral decision.

    As I say in the article, the current self-driving cars presumably make decisions the way you say, but that changes once the cars are networked.

  4. Peter, I’m not arguing against self-driving cars. I agree with you: especially once they are networked, they should be far safer than human-driven cars. But what I’m talking about are not edge cases; they are decisions that software developers are going to have to make, even if they make the decide-not-to-decide decision of treating all lives equally.

  5. If self-driving cars actually had the sensors and spare computing power to recognize people and assess their societal worth in the split second it takes to make a decision when a car accident is happening, then I for one would be much more worried about the pervasive private surveillance network we’d have unleashed on our streets.

    But more realistically, no one’s going to write the code to make that sort of corner case do anything at all clever; they’re just going to avoid hitting anything if they can, and minimize the kinetic energy if they can’t (see the sketch after the comments).

    (Which is realistically what a human would do.)

    (And if you really want to have a moral argument, you want to hit the school bus rather than the lone pedestrian, because the school bus is a lot bigger and will absorb way more of your inertia. No human driver is going to make that “correct” decision, though.)

  6. Karl, I’m assuming that when you use a self-driving car, it will know who you are via some form of sign-in. So, it will have the info it needs well before the accident.

    As for your good point about school buses absorbing inertia, I just have to change the hypothetical so that hitting the bus does in fact cause a predictable series of events that kills everyone inside it. E.g., the choice is either to kill you or to nudge a school bus into the path of a fire truck.

    I don’t know if anyone is going to write any code ever, but I’m pretty confident cars (autonomous or not) are going to be networked in the very near future. So the decision to _not_ write code that takes the interests of others into account, and instead to stick with the self-centered, me-first morality of unnetworked cars, would itself be a moral (actually, an immoral) decision.
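
For what it’s worth, the fallback described in comment 5 is easy to sketch. Again, this is a toy illustration with invented names and numbers, not any vendor’s planner: prefer any collision-free trajectory, and if there is none, pick the one that leaves the least kinetic energy at the moment of impact.

```python
# Toy sketch of the "avoid it if you can, minimize the kinetic energy
# if you can't" fallback from comment 5. Names and numbers are invented.

def impact_energy(mass_kg, speed_m_s):
    """Kinetic energy (joules) the car still carries at the predicted impact."""
    return 0.5 * mass_kg * speed_m_s ** 2

def pick_trajectory(candidates, car_mass_kg=1500.0):
    """candidates: dicts with 'hits_something' (bool) and
    'impact_speed' (m/s remaining if a collision happens)."""
    clear = [c for c in candidates if not c["hits_something"]]
    if clear:
        return clear[0]  # any collision-free option beats all the others
    return min(candidates,
               key=lambda c: impact_energy(car_mass_kg, c["impact_speed"]))
```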

