Philosophy Archives - Joho the Blog

December 4, 2017

Workshop: Trustworthy Algorithmic Decision-Making

I’m at a two-day inter-disciplinary workshop on “Trustworthy Algorithmic Decision-Making” put on by the National Science Foundation and Michigan State University. The 2-page whitepapers from the participants are online. (Here’s mine.) I may do some live-blogging of the workshops.

Goals:

– Key problems and critical questions?

– What to tell policy-makers and others about the impact of these systems?

– Product approaches?

– What ideas, people, training, infrastructure are needed for these approaches?

Excellent diversity of backgrounds: CS, policy, law, library science, a philosopher, more. Good diversity in gender and race. As the least qualified person here, I’m greatly looking forward to the conversations.


August 13, 2017

Machine learning cocktails

Inspired by the fabulously wrong paint colors that Janelle Shane generated by running existing paint names through a machine learning system, and then by a hilarious experiment in dog breed names by my friend Matthew Battles, I decided to run some data through a beginner’s machine learning algorithm by karpathy.

I fed a list of cocktail names as data into an unaltered copy of karpathy’s code. After several hundred thousand iterations, here’s a highly curated list of results:

  • French Connerini Mot
  • Freside
  • Rumibiipl
  • Freacher
  • Agtaitane
  • Black Silraian
  • Brack Rickwitr
  • Hang
  • boonihat
  • Tuxon
  • Bachutta B
  • My Faira
  • Blamaker
  • Salila and Tonic
  • Tequila Sou
  • Iriblon
  • Saradise
  • Ponch
  • Deiver
  • Plaltsica
  • Bounchat
  • Loner
  • Hullow
  • Keviy Corpse der
  • KreckFlirch 75
  • Favoyaloo
  • Black Ruskey
  • Avigorrer
  • Anian
  • Par’sHance
  • Salise
  • Tequila slondy
  • Corpee Appant
  • Coo Bogonhee
  • Coakey Cacarvib
  • Srizzd
  • Black Rosih
  • Cacalirr
  • Falay Mund
  • Frize
  • Rabgel
  • FomnFee After
  • Pegur
  • Missoadi Mangoy Rpey Cockty e
  • Banilatco
  • Zortenkare
  • Riscaporoc
  • Gin Choler Lady or Delilah
  • Bobbianch 75
  • Kir Roy Marnin Puter
  • Freake
  • Biaktee
  • Coske Slommer Roy Dog
  • Mo Kockey
  • Sane
  • Briney
  • Bubpeinker
  • Rustin Fington Lang T
  • Kiand Tea
  • Malmooo
  • Batidmi m
  • Pint Julep
  • Funktterchem
  • Gindy
  • Mod Brandy
  • Kkertina Blundy Coler Lady
  • Blue Lago’sil
  • Mnakesono Make
  • gizzle
  • Whimleez
  • Brand Corp Mook
  • Nixonkey
  • Plirrini
  • Oo Cog
  • Bloee Pluse
  • Kremlin Colone Pank
  • Slirroyane Hook
  • Lime Rim Swizzle
  • Ropsinianere
  • Blandy
  • Flinge
  • Daago
  • Tuefdequila Slandy
  • Stindy
  • Fizzy Mpllveloos
  • Bangelle Conkerish
  • Bnoo Bule Carge Rockai Ma
  • Biange Tupilang Volcano
  • Fluffy Crica
  • Frorc
  • Orandy Sour
  • The candy Dargr
  • SrackCande
  • The Kake
  • Brandy Monkliver
  • Jack Russian
  • Prince of Walo Moskeras
  • El Toro Loco Patyhoon
  • Rob Womb
  • Tom and Jurr Bumb
  • She Whescakawmbo Woake
  • Gidcapore Sling
  • Mys-Tal Conkey
  • Bocooman Irion anlis
  • Ange Cocktaipopa
  • Sex Roy
  • Ruby Dunch
  • Tergea Cacarino burp Komb
  • Ringadot
  • Manhatter
  • Bloo Wommer
  • Kremlin Lani Lady
  • Negronee Lince
  • Peady-Panky on the Beach

Then I added to the original list of cocktails a list of Western philosophers. After about 1.4 million iterations, here’s a curated list:

  • Wotticolus
  • Lobquidibet
  • Mores of Cunge
  • Ruck Velvet
  • Moscow Muáred
  • Elngexetas of Nissone
  • Johkey Bull
  • Zoo Haul
  • Paredo-fleKrpol
  • Whithetery Bacady Mallan
  • Greekeizer
  • Frellinki
  • Made orass
  • Wellis Cocota
  • Giued Cackey-Glaxion
  • Mary Slire
  • Robon Moot
  • Cock Vullon Dases
  • Loscorins of Velayzer
  • Adg Cock Volly
  • Flamanglavere Manettani
  • J.N. tust
  • Groscho Rob
  • Killiam of Orin
  • Fenck Viele Jeapl
  • Gin and Shittenteisg Bura
  • buzdinkor de Mar
  • J. Apinemberidera
  • Nickey Bull
  • Fishomiunr Slmester
  • Chimio de Cuckble Golley
  • Zoo b Revey Wiickes
  • P.O. Hewllan o
  • Hlack Rossey
  • Coolle Wilerbus
  • Paipirista Vico
  • Sadebuss of Nissone
  • Sexoo
  • Parodabo Blazmeg
  • Framidozshat
  • Almiud Iquineme
  • P.D. Sullarmus
  • Baamble Nogrsan
  • G.W.J. . Malley
  • Aphith Cart
  • C.G. Oudy Martine ram
  • Flickani
  • Postine Bland
  • Purch
  • Caul Potkey
  • J.O. de la Matha
  • Porel
  • Flickhaitey Colle
  • Bumbat
  • Mimonxo
  • Zozky Old the Sevila
  • Marenide Momben Coust Bomb
  • Barask’s Spacos Sasttin
  • Th mlug
  • Bloolllamand Royes
  • Hackey Sair
  • Nick Russonack
  • Fipple buck
  • G.W.F. Heer Lach Kemlse Male

Yes, we need not worry about human bartenders, cocktail designers, or philosophers being replaced by this particular algorithm. On the other hand, this algorithm consists of a handful of lines of code and was applied blindly by a person dumber than it. Presumably SkyNet — or the next version of Microsoft Clippy — will be significantly more sophisticated than that.
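For anyone who wants to play with the idea: karpathy’s code is a character-level recurrent neural network, but the gist is easier to see in an even dumber stand-in. Here’s a minimal sketch in Python of a character-level Markov chain trained on a handful of placeholder names (treat it as an illustration of the general technique, not the actual experiment, which used the full cocktail list and an RNN):

```python
# Stand-in sketch, NOT karpathy's code: a character-level Markov chain.
# A character-level RNN learns longer-range structure, but the flavor of
# the invented names is similar.
import random
from collections import defaultdict

ORDER = 3  # characters of context; purely illustrative


def train(names):
    """Count which character tends to follow each ORDER-character context."""
    model = defaultdict(list)
    for name in names:
        padded = "^" * ORDER + name + "$"  # ^ marks start, $ marks end
        for i in range(len(padded) - ORDER):
            model[padded[i:i + ORDER]].append(padded[i + ORDER])
    return model


def sample(model, max_len=30):
    """Generate one made-up name, one character at a time."""
    context, out = "^" * ORDER, []
    while len(out) < max_len:
        nxt = random.choice(model[context])
        if nxt == "$":  # end-of-name marker
            break
        out.append(nxt)
        context = context[1:] + nxt
    return "".join(out)


if __name__ == "__main__":
    # Placeholder training data; the real run used a much longer list.
    cocktails = ["Moscow Mule", "French 75", "Black Russian", "Mint Julep",
                 "Corpse Reviver", "Singapore Sling", "Tom Collins"]
    model = train(cocktails)
    for _ in range(5):
        print(sample(model))
```

With a longer list, and a real RNN in place of the Markov chain, you get exactly the sort of almost-but-not-quite cocktail names above.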


July 18, 2017

America's default philosophy

John McCumber — a grad school colleague with whom I have alas not kept up — has posted at Aeon an insightful historical argument that America’s default philosophy came about because of a need to justify censoring American communist professors (resulting in a naive scientism) and a need to have a positive alternative to Marxism (resulting in the adoption of rational choice theory).

That compressed summary does not do justice to the article’s grounding in the political events of the 1950s nor to how well-written and readable it is.


May 18, 2017

Indistinguishable from prejudice

“Any sufficiently advanced technology is indistinguishable from magic,” said Arthur C. Clarke famously.

It is also the case that any sufficiently advanced technology is indistinguishable from prejudice.

Especially if that technology is machine learning. ML creates algorithms to categorize stuff based upon data sets that we feed it. Say “These million messages are spam, and these million are not,” and ML will take a stab at figuring out the distinguishing characteristics of spam and not-spam, perhaps assigning particular words particular weights as indicators, or finding relationships among particular IP addresses, times of day, lengths of messages, etc.
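As a toy illustration of that setup (a sketch only; it assumes scikit-learn is installed, and the handful of example messages are made up, where a real system trains on millions of labeled messages):

```python
# Toy spam classifier: learn per-word weights from labeled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

spam = ["win a free prize now", "cheap meds online", "free money click here"]
ham = ["meeting moved to tuesday", "draft of the blog post attached",
       "lunch on thursday?"]

texts = spam + ham
labels = [1] * len(spam) + [0] * len(ham)  # 1 = spam, 0 = not spam

vec = CountVectorizer()          # bag-of-words features
X = vec.fit_transform(texts)

clf = LogisticRegression().fit(X, labels)

# Each word gets a learned weight; positive weights push toward "spam".
weights = sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                 key=lambda pair: pair[1], reverse=True)
for word, w in weights[:5]:
    print(f"{word:>10s}  {w:+.2f}")

# Classify a new message.
print(clf.predict(vec.transform(["free prize waiting for you"])))  # likely [1]
```

Those learned weights are exactly where human biases in the training data get baked in, which is the worry of the rest of this post.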

Now complicate the data and the request, run this through an artificial neural network, and you have Deep Learning that will come up with models that may be beyond human understanding. Ask DL why it made a particular move in a game of Go or why it recommended increasing police patrols on the corner of Elm and Maple, and it may not be able to give an answer that human brains can comprehend.

We know from experience that machine learning can re-express human biases built into the data we feed it. Cathy O’Neil’s Weapons of Math Destruction contains plenty of evidence of this. We know it can happen not only inadvertently but subtly. With Deep Learning, we can be left entirely uncertain about whether and how this is happening. We can certainly adjust DL so that it gives fairer results when we can tell that it’s going astray, as when it only recommends white men for jobs or produces a freshman class with 1% African Americans. But when the results aren’t that measurable, we can be using results based on bias and not know it. For example, is anyone running the metrics on how many books by people of color Amazon recommends? And if we use DL to evaluate complex tax law changes, can we tell if it’s based on data that reflects racial prejudices?[1]

So this is not to say that we shouldn’t use machine learning or deep learning. That would remove hugely powerful tools. And of course we should and will do everything we can to keep our own prejudices from seeping into our machines’ algorithms. But it does mean that when we are dealing with literally inexplicable results, we may well not be able to tell if those results are based on biases.

In short: Any sufficiently advanced technology is indistinguishable from prejudice.[2]

[1] We may not care, if the result is a law that achieves the social goals we want, including equal and fair treatment of taxpayers regardless of race.

[2] Please note that that does not mean that advanced technology is prejudiced. We just may not be able to tell.


May 15, 2017

[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center’s and MIT Media Lab’s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither humans nor machines could solve on their own. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves“) where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, MIT grad student with a newly-minted Ph.D. (congrats, Nathan!), and BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. And, what are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where does that raise questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how to answer them, and how to organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others?

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representative cross-section of humanity is working on these systems. So here’s my question: would co-design, bringing in the affected populations to talk with the system designers, work well here?

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


October 12, 2016

[liveblog] Perception of Moral Judgment Made by Machines

I’m at the PAPIs conference where Edmond Awad [twitter] at the MIT Media Lab is giving a talk about “Moral Machine: Perception of Moral Judgement Made by Machines.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He begins with a hypothetical in which you can swerve a car to kill one person instead of staying on its course and killing five. The audience chooses to swerve, and Edmond points out that we’re utilitarians. Second hypothetical: swerve into a barrier that will kill you but save the pedestrians. Most of us say we’d like it to swerve. Edmond points out that this is a variation of the trolley problem, except now it’s a machine that’s making the decision for us.

Autonomous cars are predicted to reduce fatalities from accidents by 90%. He says his advisor’s research found that most people think a car should swerve and sacrifice the passenger, but they don’t want to buy such a car. They want everyone else to.

He connects this to the Tragedy of the Commons in which if everyone acts to maximize their good, the commons fails. In such cases, governments sometimes issue regulations. Research shows that people don’t want the government to regulate the behavior of autonomous cars, although the US Dept of Transportation is requiring manufacturers to address this question.

Edmond’s group has created the Moral Machine, a website that creates moral dilemmas for autonomous cars. There have been about two million users and 14 million responses.

Some national trends are emerging. E.g., Eastern countries tend to prefer to save passengers more than Western countries do. Now the MIT group is looking for correlations with other factors, e.g., religiousness, economics, etc. Also, what are the factors most crucial in making decisions?

They are also looking at the effect of automation levels on the assignment of blame. Toyota’s “Guardian Angel” model results in humans being judged less harshly: that mode has a human driver but lets the car override human decisions.

Q&A

In response to a question, Edmond says that Mercedes has said that its cars will always save the passenger. He raises the possibility of the owner of such a car being held responsible for plowing into a bus full of children.

Q: The solutions in the Moral Machine seem contrived. The cars should just drive slower.

A: Yes, the point is to stimulate discussion. E.g., it doesn’t raise the possibility of swerving to avoid hitting someone who is in some way considered to be more worthy of life. [I’m rephrasing his response badly. My fault!]

Q: Have you analyzed chains of events? Does the responsibility decay the further you are from the event?

A: This very quickly gets game theoretical.


August 31, 2016

Socrates in a Raincoat

In 1974, the prestigious scholarly journal TV Guide published my original research that suggested that the inspector in Dostoyevsky’s Crime and Punishment was modeled on Socrates. I’m still pretty sure that’s right, and an actual scholarly article came out a few years later making the same case, by people who actually read Russian ‘n’ stuff.

Around the time that I came up with this hypothesis, the creators of the show Columbo had acknowledged that their main character was also modeled on Socrates. I put one and one together and …

Click on the image to go to a scan of that 1974 article.

[Image: scan of the 1974 “Socrates in a Raincoat” article]


January 2, 2016

The future behind us

We’re pretty convinced that the future lies ahead of us. But according to Bernard Knox, the ancient Greeks were not. In Backing into the Future he writes:

the Greek word opiso, which literally means ‘behind’ or ‘back’, refers not to the past but to the future. The early Greek imagination envisaged the past and the present as in front of us–we can see them. The future, invisible, is behind us. Only a few very wise men can see what is behind them. (p. 11)

G.J. Whitrow in Time in History quotes George Steiner in After Babel to make the same point about the ancient Hebrews:

…the future is preponderantly thought to lie before us, while in Hebrew future events are always expressed as coming after us. (p. 14)

Whitrow doesn’t note that Steiner’s quote (which Steiner puts in quotes) comes from Thorleif Boman’s Hebrew Thought Compared with Greek. Boman writes:

…we Indo-Germanic peoples think of time as a line on which we ourselves stand at a point called now; then we have the future lying before us, and the past stretches out behind us. The [ancient] Israelites use the same expressions ‘before’ and ‘after’ but with opposite meanings. qedham means ‘what is before’ (Ps. 139.5) therefore, ‘remote antiquity’, past. ‘ahar means ‘back’, ‘behind’, and of the time ‘after’; aharith means ‘hindermost side’, and then ‘end of an age’, future… (p. 149)

This is bewildering, and not just because Boman’s writing is hard to parse.

He continues on to note that we modern Westerners also sometimes switch the direction of future and past. In particular, when we “appreciate time as the transcendental design of history,” we

think of ourselves as living men who are on a journey from the cradle to the grave and who stand in living association with humanity which is also journeying ceaselessly forward. Then the generation of the past are our progenitors, at least our forebears, who have existed before us because they have gone on before us, and we follow after them. In that case we call the past foretime. According to this mode of thinking, the future generation are our descendants, at least our successors, who therefore come after us. (p. 149. Emphasis in the original.)

Yes, I find this incredibly difficult to wrap my brain around. I think the trick is the ambiguity of “before us.” The future lies before us, but our forebears were also before us.

Boman tries to encapsulate our contradictory ways of thinking about the future as follows: “the future lies before us but comes after us.” The problem in understanding this is that we hear “before us” as “ahead of us.” The word “before” means “ahead” when it comes to space.

Anyway.


Boman’s explanation of the ancient Hebrew way of thinking is related to Knox’s explanation of the Greek idiom:

From the psychological viewpoint it is absurd to say that we have the future before us and the past behind us, as though the future were visible to us and the past occluded. Quite the reverse is true. What our forebears have accomplished lies before us as their completed works; the house we see, the meadows and fields, the cultural and political system are congealed expressions of the deeds of our fathers. The same is true of everything they have done, lived, or suffered; it lies before us as completed facts… The present and the future are, on the contrary, still in the process of coming and becoming. (p. 150)

The nature of becoming is different for the Greeks and Hebrews, so the darkness of the future has different meanings. But both result in the future lying behind us.


October 28, 2015

When should your self-driving car kill you?

At Digital Trends I take another look at a question that is now gaining some currency: How should autonomous cars be programmed when all the choices are bad and someone has to die in order to maximize the number of lives that are saved?

The question gets knottier the more you look at it. In two regards especially:

First, it makes sense to look at this through a utilitarian lens, but when you do, you have to be open to the possibility that it’s morally better to kill a 64-year-old who’s at the end of his productive career (hey, don’t look at me that way!) vs. a young parent, or a promising scientist or musician. We consider age and health when doing triage for organ replacements. Should our cars do it for us when deciding who dies?

Second, the real question is who gets to decide this? The developers at Google who are programming the cars? And suppose the Google software disagrees with the prioritization of the Tesla self-driving cars? Who wins? Or, do we want to have a cross-manufacturer agreement about whose life to sacrifice if someone has to die in an accident? A global agreement about the value of lives?

Yeah, sure. What could go wrong with that? /s


June 1, 2015

[2b2k] Russell on knowledge

Bertrand Russell on knowledge for the Encyclopaedia Britannica:

[A]t first sight it might be thought that knowledge might be defined as belief which is in agreement with the facts. The trouble is that no one knows what a belief is, no one knows what a fact is, and no one knows what sort of agreement between them would make a belief true.

But that wonderful quote is misleading if left there. In fact it introduces Russell’s careful exploration and explanation of those terms. Crucially: “We are thus driven to the view that, if a belief is to be something causally important, it must be defined as a characteristic of behaviour.”

