Google self-driving cars are presumably programmed to protect their passengers. So, when a traffic situation gets nasty, the car you’re in will take all the defensive actions it can to keep you safe.
But what will robot cars be programmed to do when there are lots of them on the roads, and they’re networked with one another?
We know what we as individuals would like. My car should take as its Prime Directive: “Prevent my passengers from coming to harm.” But when the cars are networked, their Prime Directive might well be: “Minimize the amount of harm to humans overall.” And such a directive can lead a particular car to sacrifice its humans in order to keep the total carnage down. Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.
It’s easy to imagine cases. For example, a human unexpectedly darts into a busy street. The self-driving cars around it rapidly communicate and algorithmically devise a plan that saves the pedestrian at the price of causing two cars to engage in a Force 1 fender-bender and three cars to endure Force 2 minor collisions…but only if the car I happen to be in intentionally drives itself into a concrete piling, with a 95% chance of killing me. All other plans result in worse outcomes, where “worse” refers to some scale that weighs monetary damages, human injuries, and human deaths.
Or, a broken run-off pipe creates a dangerous pool of water on the highway during a flash storm. The self-driving cars agree that unless my car accelerates and rams into a concrete piling, all other joint action results in a tractor trailer jack-knifing, causing lots of death and destruction. Not to mention The Angelic Children’s Choir school bus that would be in harm’s way. So, the swarm of robotic cars makes the right decision and intentionally kills me.
In short, the networking of robotic cars will change the basic moral principles that guide their behavior. Non-networked cars are presumably programmed to be morally-blind individualists trying to save their passengers without thinking about others, but networked cars will probably be programmed to support some form of utilitarianism that tries to minimize the collective damage. And that’s probably what we’d want. Isn’t it?
But one of the problems with utilitarianism is that there turns out to be little agreement about what counts as a value and how much it counts. Is saving a pedestrian more important than saving a passenger? Is it always right to try to preserve human life, no matter how unlikely it is that the action will succeed and no matter how many other injuries it is likely to result in? Should the car act as if its passenger has seat-belted him/herself in because passengers should do so? Should the cars be more willing to sacrifice the geriatric than the young, on the grounds that the young have more of a lifespan to lose? And won’t someone please think of the kids, those cute choir kids?
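To make the disagreement concrete, here is a minimal sketch, in Python, of what utilitarian plan selection among networked cars might look like. The plans, probabilities, and harm weights are all invented for illustration; the point is that those weights encode precisely the value judgments we have no consensus on.

```python
# Hypothetical sketch of swarm-level utilitarian plan selection.
# Every number below is made up; real weights would require the social
# consensus this post argues we don't have.

def harm_score(plan, w_death=1_000_000, w_injury=10_000, w_damage=1):
    """Weighted sum of expected harms. Changing the weights changes
    which humans the swarm decides to sacrifice."""
    return (w_death * plan["expected_deaths"]
            + w_injury * plan["expected_injuries"]
            + w_damage * plan["expected_damage_usd"])

def choose_plan(candidate_plans):
    """Pick the joint plan minimizing total expected harm -- pure
    utilitarianism, with no special weight for any one car's passengers."""
    return min(candidate_plans, key=harm_score)

plans = [
    {"name": "save pedestrian, sacrifice one passenger",
     "expected_deaths": 0.95, "expected_injuries": 3,
     "expected_damage_usd": 40_000},
    {"name": "each car protects its own passengers",
     "expected_deaths": 1.0, "expected_injuries": 2,
     "expected_damage_usd": 20_000},
]

best = choose_plan(plans)
print(best["name"])
```

Note that a small tweak to `w_injury` or to the damage estimates can flip the decision, which is the whole problem.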
We’re not good at making these decisions, or even at having rational conversations about them. Usually we don’t have to, or so we tell ourselves. For example, many of the rules that apply to us in public spaces, including roads, optimize for fairness: everyone waits at the same stop lights, and you don’t get to speed unless something is relevantly different about your trip: you are chasing a bad guy or are driving someone who urgently needs medical care.
But when we are better able to control the circumstances, fairness isn’t always the best rule, especially in times of distress. Unfortunately, we don’t have a lot of consensus around the values that would enable us to make joint decisions. We fall back to fairness, or pretend that we can have it all. Or we leave it to experts, as with the rules that determine who gets organ transplants. It turns out we don’t even agree about whether it’s morally right to risk soldiers’ lives to rescue a captured comrade.
Fortunately, we don’t have to make these hard moral decisions. The people programming our robot cars will do it for us.
Imagine a time when the roadways are full of self-driving cars and trucks. There are some good reasons to think that that time is coming, and coming way sooner than we’d imagined.
Imagine that Google remains in the lead, and the bulk of the cars carry their brand. And assume that these cars are in networked communication with one another.
Can we assume that Google will support Networked Road Neutrality, so that all cars are subject to the same rules, and there is no discrimination based on contents, origin, destination, or purpose of the trip?
Or would Google let you pay a premium to take the “fast lane”? (For reasons of network optimization the fast lane probably wouldn’t actually be a designated lane but might well look much more like how frequencies are dynamically assigned in an age of “smart radios.”) We presumably would be ok with letting emergency vehicles go faster than the rest of the swarm, but how about letting the rich go faster by programming the robot cars to give way when a car has its “Move aside!” bit turned on?
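For what it’s worth, a “Move aside!” bit is technically trivial. Here is a hypothetical sketch of how a network might grant right-of-way by priority tier; the tier names, car names, and data shapes are all invented. The hard question is political, not computational.

```python
import heapq

# Hypothetical priority tiers. Whether "paid_fast_lane" should exist
# at all is exactly the policy question the post raises.
PRIORITY = {"emergency": 0, "paid_fast_lane": 1, "normal": 2}

def grant_order(requests):
    """Return cars in the order the network would let them through a
    contested stretch of road. Ties within a tier are broken by
    arrival time (first come, first served)."""
    heap = [(PRIORITY[tier], arrived, name) for name, tier, arrived in requests]
    heapq.heapify(heap)
    return [name for _, _, name in
            (heapq.heappop(heap) for _ in range(len(heap)))]

requests = [
    ("commuter_a", "normal", 1),
    ("ambulance", "emergency", 3),
    ("exec_limo", "paid_fast_lane", 2),
    ("commuter_b", "normal", 0),
]
print(grant_order(requests))
```

The ambulance jumps the whole queue, which we presumably want; the limo jumps the commuters, which is the part we would have to decide whether to allow.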
Let’s say Google supports a strict version of Networked Road Neutrality. But let’s assume that Google won’t be the only player in this field. Suppose Comcast starts to make cars, and programs them to get ahead of the cars that choose to play by the rules. Would Google cars take action to block the Comcast cars from switching lanes to gain a speed advantage — perhaps forming a cordon around them? Would that be legal? Would selling a virtual fast lane on a public roadway be legal in the first place? And who gets to decide? The FCC?
One thing is sure: It’ll be a golden age for lobbyists.
Pardon my brevity (I’m traveling), but if you care about preserving the Internet as a place where innovation isn’t squashed by the inertia of the incumbents, then let FCC Chairman Tom Wheeler know that his proposed “Net Neutrality” policy is a non-starter. [Ars Technica] [WaPo] [Mashable] [NY Times] [Wheeler’s response, via Verge]
Here are the email addresses of the four commissioners + Wheeler who are eagerly awaiting your opinion. Public response matters.
Scott Bradner, one of the shapers of the Internet, wrote to a mailing list today:
It seems to me that the value of “fast lanes” only comes when there is enough congestion to mean that the normal lane is not useful –
Maybe the ISPs will have an incentive to ensure that the normal service sucks.
Categories: net neutrality
Tagged with: fcc
Date: April 24th, 2014 dw
I’ve posted a podcast interview with Dan Cohen, the executive director of the Digital Public Library of America about their proposal to the FCC.
The FCC is looking for ways to modernize the E-Rate program that has brought the Internet to libraries and schools. The DPLA is proposing DPLA Local, which will enable libraries to create online digital collections using the DPLA’s platform.
I’m excited about this for two reasons beyond the service it would provide.
First, it could be a first step toward providing cloud-based library services, instead of the proprietary, closed, expensive systems libraries typically use to manage their data. (Evergreen, I’m not talking about you, you open source scamp!)
Second, as libraries build their collections using DPLA Local, their metadata is likely to assume normalized forms, which means that we should get cross-collection discovery and semantic riches.
Here’s the proposal itself. And here’s where you can comment to the FCC about it.
Tagged with: dpla
Date: March 6th, 2014 dw
The FCC’s Open Internet Advisory Committee’s 2013 Annual Report has been posted. The OIAC is a civilian group, headed by Jonathan Zittrain [twitter:zittrain]. The report is rich, but I want to point to one part that I found especially interesting: the section on “specialized services.”
Specialized services are interesting because when the FCC adopted the Open Internet Order (its “Net Neutrality” policy), it permitted the carriers to use their Internet-delivery infrastructure to provide some specific type of content or service alongside the Internet. As Harold Feld put it in 2009, in theory the introduction of “managed services”
allows services like telemedicine to get dedicated capacity without resorting to “tiering” that is anathema to network neutrality. In reality, [it’s a] great new way for incumbents to privilege their own VOIP and video services over traffic of others.
The danger is that the providers will circumvent the requirement that they not discriminate in favor of their own content (or in favor of content from companies that pay them) by splintering off that content and calling it a special service. (For better explanations, check Technoverse, Ars Technica, Commissioner Copps’ statement.)
So, a lot comes down to the definition of a “specialized service.” This Annual Report undertakes the challenge. The summary begins on page 9, and the full section begins on p. 66.
I won’t pretend to have the expertise to evaluate the definitions. But I do like the principles that guided the group:
Regulation should not create a perverse incentive for operators to move away from a converged IP infrastructure
A service should not be able to escape regulatory burden or acquire a burden by moving to IP
The Specialized Services group was led by David Clark, and manifests a concern for what Jonathan Zittrain calls “generativity”: it’s not enough to measure the number of bits going through a line to a person’s house; we also have to make sure that the user is able to do more with those bits than simply consume them.
I’m happy to see the Committee address the difficult issue of specialized services, and to do so with the clear intent of (a) not letting access to the open Internet be sacrificed, and (b) not allowing special services to be an end run around an open Internet.
Note: Jonathan Zittrain is my boss’ boss at the Harvard Law Library. I’d known him through the Berkman Center for ten years before that.
Categories: net neutrality
Tagged with: fcc
• net neutrality
Date: August 21st, 2013 dw
I was steeling myself a couple of days ago to say something in a talk that I believe but don’t want to: We shouldn’t feel guilty about relying on sources with whom we agree to contextualize breaking news. It’s ok. It’s even rational.
For example, if the Supreme Court hands down a ruling I don’t understand, or the FCC issues a policy that sounds like gobbledygook to my ears, I turn to sites whose politics I basically agree with. On the one hand, I know that that’s wrong on echo chamber grounds: I’m getting reconfirmed in beliefs that I instead should be challenging. On the other hand, if I want to understand a new finding in evolutionary biology I’m not going to go to a creationist site, and if I want to understand the implications of a change in Obamacare, I’m not going to go to a Tea Party site. [Hint: I’m a liberal.] Oh, I might go afterwards to see what Those Folks are thinking, but to understand something, I’m going to go first to people with whom I basically agree.
Unfortunately, saying that in my talk meant I’d have to acknowledge that if I can go to, say, DailyKos for primary contextualization, then it’s fine for right-wingers to go to Fox News. Then I was going to have to explain how Fox and DailyKos are not truly equivalent, since Kos acknowledges facts that are unpleasant for their beliefs, and because Kos allows lots and lots of community participation. But that’s a distraction: If it’s ok for me to go to a lefty site to contextualize my news, it’s ok for you to go to your righty site. That feels wrong to me, and not only because I think righty sites are wrong.
I finally realized that I’m using the wrong sort of sites for my example. I do feel queasy about recommending that people get news interpreted for them by going to sites that operate in the broadcast mode. Fox News is like that. So are Slate and Salon, although to a lesser extent because they allow comments and because they present themselves as opinion sites, not news sites. Kos much less so because of the prominence of blogs and community. But I have no bad feelings whatsoever about taking my questions about the news to my social networks.
Because I’m old, much of my social networking occurs on mailing lists. Some of the lists are based on topic, and contain people who broadly agree, but who disagree about most of the particulars; that’s what conversations are for. For example, a couple of the lists I’m on this morning are talking about what it would mean if Tom Wheeler [someone give that man a Wikipedia page!] were appointed as Chair of the FCC, as seems increasingly likely. Tom comes out of the cable TV industry, which raises suspicions on my side of the swimming pool. So there has been an active set of discussions on my mailing lists among people who know much more than I do. The opinions range from he’s likely to be relatively centrist (although veering to the wrong side, where “wrong” is generally agreed upon by the list) to he’s never once stood up for users or for increasing competition and openness. Along the way, people have pointed out the occasional good point about him, although overall the tenor is negative and depressed.
Now, do I need to hear from the cable and telecoms industry about what a wonderful choice Tom would be? Sure, at some point. I even need to have my more fundamental views challenged. At some point. But not when I’m trying to find out about who this Tom Wheeler guy is. If we take understanding as a tool used for a purpose, it becomes a wildly inefficient tool — a hammer that’s all handle — if we have to go back to first principles in order to understand anything. Understanding is an efficient tool because it’s incremental: Given that I favor a wildly open Internet and given that I favor achieving this via vigorous competition, then what should I make of a Tom Wheeler FCC chairmanship? That’s my question this morning, not whether a wildly open Internet is a good thing and not whether the best way to achieve this is by increasing competition. Those are fine questions for another morning, but if I have to ask those questions every time I hear something about the FCC, then understanding has failed at its job.
So, I don’t feel bad about consulting my social network for help understanding the news.
And now, like the fine print in an offer that’s too good to be true, here are the caveats: My social networks may not be typical. Some types of news need more fundamental challenge than others. Reliance exclusively on social networks for news may put you into an impenetrable filter bubble. I acknowledge the risks, but given the situatedness of understanding, every act of interpretation is risky.
And yet there is something right in what I’m saying. I know this because going to “opposition” sites to understand the meaning of particular FCC appointments would require me to uncertainly translate out of their own unstated assumptions, and sites that try for objectivity don’t have the nuanced conversations enabled by shared, unstated assumptions. So, there is something right in what I’m saying, as well as risk and wrongness.
Tagged with: 2b2k
• echo chambers
Date: February 23rd, 2013 dw
4. [NOTE: These notes are in reverse chronological order; I have numbered them for your reading convenience. I unlocked my Blackberry by calling Verizon support. I bought an Orange SIM card in a cigarette store in the Old City of Jerusalem for $10, plus $9 of calling time that times out in a week. So, I now have a working phone. It does not come with a data plan, however.]
3. [NOTE added minutes after the note right below this one: I’m on the phone with Verizon. It is indeed $20.48 per MEGABYTE. But wait…I am now talking with a tech support person who assures me that attachments don’t count unless you actually download them. Well, that’s something. She, however, is also telling me that the first two reps I talked with are wrong; in fact (says the tech support person), Verizon’s international plan gives you 70MB per month for $100, and every megabyte after that is $20.48. That’s still piracy, but the broadsword goes into you slightly more slowly.]
2. [Note added minutes later: Some other knowledgeable people tell me that Verizon must mean $20/gigabyte, not per megabyte. So, this may have been a mistake by the service rep. I would happily take the blame for any misunderstanding, except that I confirmed that the rep said “megabyte” by inquiring, “PER MEGABYTE? PER MEGABYTE? ARE YOU FREAKING CRAZY!!!!!!!!!!,” to which he replied in the affirmative to the first two of the three questions.]
1. I’m going overseas tonight for a week. In the past, I’d call Verizon and have them switch service from my Droid to my previous phone, which was a Blackberry with “world phone” service. For $2/day, I’d get unlimited data access, so I could check my email and perhaps check the news on the Web now and then. (Believe me, on a Blackberry you don’t want to do a lot of heavy Web browsing.)
Today when I tried to make the switch, Verizon informed me that they have changed the plan, entirely for the benefit of their customers of course. So, now it’s $20 per megabyte. Holy crap! What kind of unearthly profit margin is that?
My knowledgeable friends tell me that I should figure 50-100 emails per megabyte (although that number is conservative). So, no email for me. That’s what happens when the “free” market is so pwned that it laughs in the face of competition.
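A quick back-of-the-envelope check of the numbers above. The 50 to 100 emails per megabyte figure is my friends’ estimate; the rest comes straight from the rates quoted in this post.

```python
# Rates quoted in the post, in dollars.
pay_as_you_go = 20.48      # per megabyte, no plan
plan_rate = 100 / 70       # 70 MB for $100 on the international plan
old_blackberry = 2.00      # per *day*, unlimited data, under the old plan

# Cost per email at the friends' estimate of 50-100 emails per MB.
cost_per_email_low = pay_as_you_go / 100   # if ~100 emails fit in a MB
cost_per_email_high = pay_as_you_go / 50   # if ~50 emails fit in a MB

print(f"${cost_per_email_low:.2f} to ${cost_per_email_high:.2f} per email")
print(f"pay-as-you-go markup over the 70MB plan: "
      f"{pay_as_you_go / plan_rate:.0f}x")
```

That works out to roughly twenty to forty cents per email, and a pay-as-you-go rate about fourteen times the already-steep plan rate.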
And these are the folks we’ve handed our Internet to? Great. Freaking great.
Tagged with: fcc
Date: March 23rd, 2011 dw
Robin Chase, the founder of ZipCar, testified in front of Congress. She argued that Congress ought not remove the FCC’s authority to prevent access providers from deciding which information moves fast, slow, or not at all. [pdf]
Categories: net neutrality
Tagged with: fcc
• net neutrality
• robin chase
Date: March 9th, 2011 dw
Elliot Noss, CEO of Tucows, has adopted a quirky and admirable approach to submitting filings to official bodies looking for comments on policies. Rather than writing the traditional legalistic brief, he has been commissioning pieces more readable by the non-lawyerly. I wrote an essay for him on copyright, and he’s just submitted and posted a second one by me on spectrum policy. [Disclosure: 1. Elliot is my friend. 2. He offered to pay me for writing this.]
Tagged with: fcc
• open spectrum
Date: March 1st, 2011 dw
The National Broadband Map is now available. We had wanted to bask in the sunlight provided by the incumbent access providers, but instead we just got freckles. Want to laugh like a broken-hearted clown? View only the places that have fiber to the home.
The map was controversial from its inception, since it at least initially was relying on data coming from parties interested in exaggerating the extent of coverage. (I interviewed Steve Rosenberg at the FCC about his agency’s contribution to it, in November 2009.) It does not link to its sources. It did not list my access provider (RCN) as available where I live. Also, the map is very clunky to manipulate. (Hint: Turn off all overlays until you zoom into where you want to see.) (Harold Feld provides a balanced perspective.)
Now want to cry like a generation watching its future slip away? The Republicans seem set on throwing The Master Switch to turn the Internet into a corporate content delivery system. Let your Senators know that you want the Internet to remain the engine of innovation and a public square for free speech.
There are many ways to boil down today’s upcoming FCC rejection of Net neutrality (which they did in the guise of supporting Net neutrality). Here’s one:
The end of Net neutrality means that those who provide access to the Internet (to our Internet, for it is ours, not theirs) have every economic incentive to keep access scarce. By not providing enough bandwidth, they can claim justification for charging users per bit (or per page, service, download, etc.), and justification for charging Net application/data providers for the right to cut ahead in line.
This is ironic, in the not-funny sense, since the access providers’ stated justification for opposing Net neutrality is that it would discourage investment. But why are they going to invest in providing more bits when they make more money by throttling access? (Competition? Sure, that’d be great. Let’s require them to rent out their lines. Oh, I forgot.) Abundance would turn access provision into a profitable commodity business, which is exactly what users want, and what would stimulate innovation and economic growth.
So, now that Net neutrality is going to be overturned, the access providers will make money by preventing access. Anyone want to bet that the U.S. is now going to climb the charts of average national broadband rates and of lowest average cost? Does anyone think that we haven’t just pushed back by decades the day when, say, gigabit access is common across the country?
For shame, FCC.
[Later that day] The FCC has clarified some of what it means. For example, they are not going to allow access providers to charge companies for fast lane access. It seems that Commissioners Copps and Clyburn nudged the regulations in the right direction. Thank you for that. (Also, see Harold Feld’s take.)
Categories: net neutrality
Tagged with: fcc
• net neutrality
Date: December 21st, 2010 dw