AI Archives - Joho the Blog

April 19, 2017

Alien knowledge

Medium has published my long post about how our idea of knowledge is being rewritten as machine learning proves itself, in some situations, to be more accurate than we can be, while achieving that accuracy by “thinking” in ways that we can’t follow.

This is from the opening section:

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.


December 7, 2014

[2b2k] Agre on minds and hands

I recently published a column at KMWorld pointing out some of the benefits of having one’s thoughts share a context with people who build things. Today I came across an article by Jethro Masis titled “Making AI Philosophical Again: On Philip E. Agre’s Legacy.” Jethro points to a 1997 work by the greatly missed Philip Agre that says it so much better:

“…what truly founds computational work is the practitioner’s evolving sense of what can be built and what cannot” (1997, p. 11). The motto of computational practitioners is simple: if you cannot build it, you do not understand it. It must be built and we must accordingly understand the constituting mechanisms underlying its workings. This is why, on Agre’s account, computer scientists “mistrust anything unless they can nail down all four corners of it; they would, by and large, rather get it precise and wrong than vague and right” (Computation and Human Experience, 1997, p. 13).

(I’m pretty sure I read Computation and Human Experience many years ago. Ah, the Great Forgetting of one in his mid-60s.)

Jethro’s article overall attempts to adopt Agre’s point that “The technical and critical modes of research should come together in this newly expanded form of critical technical consciousness,” and to apply it to Heidegger’s idea of Zuhandenheit: how things show themselves to us as useful to our plans and projects; for Heidegger, that is the normal, everyday way most things present themselves to us. This leads Jethro to take us through Agre’s criticisms of AI modeling, its failure to represent context except as vorhanden [pdf] (Heidegger’s term for how things look when they are torn out of the context of our lived purposes), and the need to thoroughly rethink the idea of consciousness as consisting of representations of an external world. Agre wants to work out “on a technical level” how this can apply to AI. Fascinating.


Here’s another bit of brilliance from Agre:

For Agre, this is particularly problematic because “as long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem.” (p. 260 of CHE)


June 8, 2014

Will a Google car sacrifice you for the sake of the many? (And Networked Road Neutrality)

Google self-driving cars are presumably programmed to protect their passengers. So, when a traffic situation gets nasty, the car you’re in will take all the defensive actions it can to keep you safe.

But what will robot cars be programmed to do when there’s lots of them on the roads, and they’re networked with one another?

We know what we as individuals would like. My car should take as its Prime Directive: “Prevent my passengers from coming to harm.” But when the cars are networked, their Prime Directive might well be: “Minimize the amount of harm to humans overall.” And such a directive can lead a particular car to sacrifice its humans in order to keep the total carnage down. Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.

It’s easy to imagine cases. For example, a human unexpectedly darts into a busy street. The self-driving cars around it rapidly communicate and algorithmically devise a plan that saves the pedestrian at the price of causing two cars to engage in a Force 1 fender-bender and three cars to endure Force 2 minor collisions…but only if the car I happen to be in intentionally drives itself into a concrete piling, with a 95% chance of killing me. All other plans result in worse outcomes, where “worse” refers to some scale that weighs monetary damages, human injuries, and human deaths.
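To make that thought experiment a bit more concrete, here is a minimal sketch of the kind of joint decision I have in mind. Everything in it is invented for illustration: the Plan structure, the outcome fields, and especially the weights that collapse monetary damages, injuries, and deaths into a single number to be minimized. No real self-driving system is known to work this way.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    property_damage: float   # estimated dollars of damage
    injuries: int            # human injuries across all vehicles and pedestrians
    deaths: float            # expected deaths (probabilities can sum to fractions)

# Hypothetical weights: how much each kind of harm "counts" on the scale.
WEIGHTS = {"property_damage": 1.0, "injuries": 100_000.0, "deaths": 10_000_000.0}

def harm_score(plan: Plan) -> float:
    """Collapse a joint plan's outcomes into one number to be minimized."""
    return (WEIGHTS["property_damage"] * plan.property_damage
            + WEIGHTS["injuries"] * plan.injuries
            + WEIGHTS["deaths"] * plan.deaths)

def choose_plan(plans: list[Plan]) -> Plan:
    # The networked swarm picks the joint plan with the least total harm,
    # even when that plan sacrifices the passengers of one particular car.
    return min(plans, key=harm_score)

candidate_plans = [
    Plan("my car swerves into the piling", property_damage=40_000, injuries=0, deaths=0.95),
    Plan("everyone brakes in formation", property_damage=15_000, injuries=4, deaths=1.2),
]
print(choose_plan(candidate_plans).name)  # under these weights, the swarm sacrifices my car

The point of the sketch is only that once the cars are networked, someone has to write down that harm_score function, and whoever writes it is making moral choices on our behalf.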

Or, a broken run-off pipe creates a dangerous pool of water on the highway during a flash storm. The self-driving cars agree that unless my car accelerates and rams into a concrete piling, every other joint action results in a tractor trailer jack-knifing, causing lots of death and destruction. Not to mention The Angelic Children’s Choir school bus that would be in harm’s way. So, the swarm of robotic cars makes the right decision and intentionally kills me.

In short, the networking of robotic cars will change the basic moral principles that guide their behavior. Non-networked cars are presumably programmed to be morally-blind individualists trying to save their passengers without thinking about others, but networked cars will probably be programmed to support some form of utilitarianism that tries to minimize the collective damage. And that’s probably what we’d want. Isn’t it?

But one of the problems with utilitarianism is that there turns out to be little agreement about what counts as a value and how much it counts. Is saving a pedestrian more important than saving a passenger? Is it always right to try to preserve human life, no matter how unlikely it is that the action will succeed and no matter how many other injuries it is likely to result in? Should the car act as if its passenger has seat-belted him/herself in because passengers should do so? Should the cars be more willing to sacrifice the geriatric than the young, on the grounds that the young have more of a lifespan to lose? And won’t someone please think about the kids, those cute choir kids?
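Notice that every one of those unresolved questions would have to become an explicit, tunable parameter somewhere in the swarm’s objective function. The settings below are entirely made up; they are just placeholders for decisions somebody would have to make and defend.

# Hypothetical policy knobs behind the harm calculation above; none of these
# names or numbers come from any real system.
POLICY = {
    "pedestrian_vs_passenger_weight": 1.0,   # >1.0 means a pedestrian counts for more
    "min_rescue_success_probability": 0.05,  # ignore rescue plans with <5% chance of working?
    "assume_seatbelts_fastened": True,       # plan as if passengers did what they should?
    "weight_by_remaining_lifespan": False,   # sacrifice the old before the young?
    "child_multiplier": 1.0,                 # how much extra do the choir kids count?
}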

We’re not good at making these decisions, or even at having rational conversations about them. Usually we don’t have to, or so we tell ourselves. For example, many of the rules that apply to us in public spaces, including roads, optimize for fairness: everyone waits at the same stop lights, and you don’t get to speed unless something is relevantly different about your trip: you are chasing a bad guy or are driving someone who urgently needs medical care.

But when we are better able to control the circumstances, fairness isn’t always the best rule, especially in times of distress. Unfortunately, we don’t have a lot of consensus around the values that would enable us to make joint decisions. We fall back on fairness, or pretend that we can have it all. Or we leave it to experts, as with the rules that determine who gets organ transplants. It turns out we don’t even agree about whether it’s morally right to risk soldiers’ lives to rescue a captured comrade.

Fortunately, we don’t have to make these hard moral decisions. The people programming our robot cars will do it for us.

 


Imagine a time when the roadways are full of self-driving cars and trucks. There are some good reasons to think that that time is coming, and coming way sooner than we’d imagined.

Imagine that Google remains in the lead, and the bulk of the cars carry their brand. And assume that these cars are in networked communication with one another.

Can we assume that Google will support Networked Road Neutrality, so that all cars are subject to the same rules, and there is no discrimination based on contents, origin, destination, or purpose of the trip?

Or would Google let you pay a premium to take the “fast lane”? (For reasons of network optimization, the fast lane probably wouldn’t actually be a designated lane but might well look much more like how frequencies are dynamically assigned in an age of “smart radios.”) We presumably would be ok with letting emergency vehicles go faster than the rest of the swarm, but how about letting the rich go faster by programming the robot cars to give way to any car whose “Move aside!” bit is turned on?
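Here is a minimal sketch, under invented assumptions, of what a “Move aside!” bit might look like: each car broadcasts a priority level, and lower-priority cars yield. Whether the PAID_PREMIUM tier should exist at all is exactly the Networked Road Neutrality question.

from enum import IntEnum

class Priority(IntEnum):
    NORMAL = 0
    PAID_PREMIUM = 1   # the contested tier: pay to jump the queue
    EMERGENCY = 2      # ambulances, fire trucks, police

def should_yield(my_priority: Priority, incoming_priority: Priority) -> bool:
    """Yield the lane (slow down, move over) if the approaching car outranks us."""
    return incoming_priority > my_priority

def should_yield_neutral(my_priority: Priority, incoming_priority: Priority) -> bool:
    # Under strict Networked Road Neutrality, only emergency vehicles outrank anyone.
    return incoming_priority == Priority.EMERGENCY and my_priority != Priority.EMERGENCY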

Let’s say Google supports a strict version of Networked Road Neutrality. But let’s assume that Google won’t be the only player in this field. Suppose Comcast starts to make cars, and programs them to get ahead of the cars that choose to play by the rules. Would Google cars take action to block the Comcast cars from switching lanes to gain a speed advantage — perhaps forming a cordon around them? Would that be legal? Would selling a virtual fast lane on a public roadway be legal in the first place? And who gets to decide? The FCC?

One thing is sure: It’ll be a golden age for lobbyists.


April 12, 2014

[2b2k] Protein Folding, 30 years ago

Simply in terms of nostalgia, this 1985 video called “Knowledge Engineering: Artificial Intelligence Research at the Stanford Heuristic Programming Project” from the Stanford archives is charming right down to its Tron-like digital soundtrack.

But it’s also really interesting if you care about the way we’ve thought about knowledge. The Stanford Heuristic Programming Project under Edward Feigenbaum did groundbreaking work in how computers represent knowledge, emphasizing the content and not just the rules. (Here is a 1980 article about the Project and its projects.)

And then at the 8:50 mark, it expresses optimism that an expert system would be able to represent not only every atom of proteins but how they fold.

Little could it have been predicted that even 30 years later protein folding would be handled better by the human brain than by computers, and that humans playing a game, Fold.It, would produce useful results.

It’s certainly the case that we have expert systems all over the place now, from Google Maps to the Nest thermostat. But we also see another type of expert system that was essentially unpredictable in 1985. One might think that the domain of computer programming would be susceptible to being represented in an expert system because it is governed by a finite set of perfectly knowable rules, unlike the fields the Stanford project was investigating. And there are of course expert systems for programming. But where do the experts actually go when they have a problem? To StackOverflow where other human beings can make suggestions and iterate on their solutions. One could argue that at this point StackOverflow is the most successful “expert system” for computer programming in that it is the computer-based place most likely to give you an answer to a question. But it does not look much like what the Stanford project had in mind, for how could even Edward Feigenbaum have predicted what human beings can and would do if connected at scale?

(Here’s an excellent interview with Feigenbaum.)


February 16, 2011

Watson, AI, Jeopardy, Toronto

Watson’s Final Jeopardy answer has created a need for a new word:

Schadenrobot: The joy humans feel watching robots fail.

[Minutes later:] I tweeted this, and Alistair Steele [twitter: alistairsteele] immediately riposted with the far cleverer term: Schadendroid. Well played, suh!
