Joho the Blog: philosophy Archives

September 20, 2018

Coming to belief

I’ve written before about the need to teach The Kids (also: all of us) not only how to think critically so we can see what we should not believe, but also how to come to belief. That piece, which I now cannot locate, was prompted by danah boyd’s excellent post on the problem with media literacy. Robert Berkman, Outreach Business Librarian at the University of Rochester and Editor of The Information Advisor’s Guide to Internet Research, asked me how one can go about teaching people how to come to belief. Here’s an edited version of my reply:

I’m afraid I don’t have a good answer. I actually haven’t thought much about how to teach people how to come to belief, beyond arguing for doing this as a social process (the ol’ “knowledge is a network” argument :). I have a pretty good sense of how *not* to do it: the way philosophy teachers relentlessly show how every proposed position can be torn down.

I wonder what we’d learn by taking a literature course as a model — not one that is concerned primarily with critical method, but one that is trying to teach students how to appreciate literature. Or art. The teacher tries to get the students to engage with one another to find what’s worthwhile in a work. Formally, you implicitly teach the value of consistency, elegance of explanation, internal coherence, how well a work clarifies one’s own experience, etc. Those are useful touchstones for coming to belief.

I wouldn’t want to leave students feeling that it’s up to them to come up with an understanding on their own. I’d want them to value the history of interpretation, bringing their critical skills to it. The last thing we need is to make people feel yet more unmoored.

I’m also fond of the orthodox Jewish way of coming to belief, as I, as a non-observant Jew, understand it. You have an unchanging and inerrant text that means nothing until humans interpret it. To interpret it means to be conversant with the scholarly opinions of the great Rabbis, who disagree with one another, often diametrically. Formulating a belief in this context means bringing contemporary intelligence to a question while finding support in the old Rabbis…and always always talking respectfully about those other old Rabbis who disagree with your interpretation. No interpretations are final. Learned contradiction is embraced.

That process has the elements I personally like (being moored to a tradition, respecting those with whom one disagrees, acceptance of the finitude of beliefs, acceptance that they result from a social process), but it’s not going to be very practical outside of Jewish communities if only because it rests on the acceptance of a sacred document, even though it’s one that literally cannot be taken literally; it always requires interpretation.

My point: We do have traditions that aim at enabling us to come to belief. Science is one of them. But there are others. We should learn from them.

TL;DR: I dunno.


April 2, 2018

"If a lion could talk" updated

“If a lion could talk, we could not understand him.”
— Ludwig Wittgenstein, Philosophical Investigations, 1953.

“If an algorithm could talk, we could not understand it.”
— Deep learning, Now.


February 15, 2018

Here comes a new round of "I think, therefore I am" philosophical Dad jokes

An earlier draft of Descartes’ Meditations has been discovered, which will inevitably lead to a new round of unfunny jokes under the rubric of “Descartes’ First Draft.” I can’t wait :(

The draft is a big discovery. Camilla Shumaker at Research Frontiers reports that Jeremy Hyman, a philosophy instructor at the University of Arkansas, came across a reference to the manuscript and hied off to a municipal library in Toulouse … a gamble, but he apparently felt he had nothing left Toulouse.

And so it begins…


December 4, 2017

Workshop: Trustworthy Algorithmic Decision-Making

I’m at a two-day inter-disciplinary workshop on “Trustworthy Algorithmic Decision-Making” put on by the National Science Foundation and Michigan State University. The 2-page whitepapers from the participants are online. (Here’s mine.) I may do some live-blogging of the workshops.

Goals:

– Key problems and critical questions?

– What to tell policy-makers and others about the impact of these systems?

– Product approaches?

– What ideas, people, training, infrastructure are needed for these approaches?

Excellent diversity of backgrounds: CS, policy, law, library science, a philosopher, more. Good diversity in gender and race. As the least qualified person here, I’m greatly looking forward to the conversations.


August 13, 2017

Machine learning cocktails

Inspired by the fabulously wrong paint colors that Janelle Shane generated by running existing paint names through a machine learning system, and then by a hilarious experiment in dog breed names by my friend Matthew Battles, I decided to run some data through a beginner’s machine learning algorithm by karpathy.

I fed a list of cocktail names as training data to an unaltered copy of karpathy’s code. After several hundred thousand iterations, here’s a highly curated list of results:

  • French Connerini Mot
  • Freside
  • Rumibiipl
  • Freacher
  • Agtaitane
  • Black Silraian
  • Brack Rickwitr
  • Hang
  • boonihat
  • Tuxon
  • Bachutta B
  • My Faira
  • Blamaker
  • Salila and Tonic
  • Tequila Sou
  • Iriblon
  • Saradise
  • Ponch
  • Deiver
  • Plaltsica
  • Bounchat
  • Loner
  • Hullow
  • Keviy Corpse der
  • KreckFlirch 75
  • Favoyaloo
  • Black Ruskey
  • Avigorrer
  • Anian
  • Par’sHance
  • Salise
  • Tequila slondy
  • Corpee Appant
  • Coo Bogonhee
  • Coakey Cacarvib
  • Srizzd
  • Black Rosih
  • Cacalirr
  • Falay Mund
  • Frize
  • Rabgel
  • FomnFee After
  • Pegur
  • Missoadi Mangoy Rpey Cockty e
  • Banilatco
  • Zortenkare
  • Riscaporoc
  • Gin Choler Lady or Delilah
  • Bobbianch 75
  • Kir Roy Marnin Puter
  • Freake
  • Biaktee
  • Coske Slommer Roy Dog
  • Mo Kockey
  • Sane
  • Briney
  • Bubpeinker
  • Rustin Fington Lang T
  • Kiand Tea
  • Malmooo
  • Batidmi m
  • Pint Julep
  • Funktterchem
  • Gindy
  • Mod Brandy
  • Kkertina Blundy Coler Lady
  • Blue Lago’sil
  • Mnakesono Make
  • gizzle
  • Whimleez
  • Brand Corp Mook
  • Nixonkey
  • Plirrini
  • Oo Cog
  • Bloee Pluse
  • Kremlin Colone Pank
  • Slirroyane Hook
  • Lime Rim Swizzle
  • Ropsinianere
  • Blandy
  • Flinge
  • Daago
  • Tuefdequila Slandy
  • Stindy
  • Fizzy Mpllveloos
  • Bangelle Conkerish
  • Bnoo Bule Carge Rockai Ma
  • Biange Tupilang Volcano
  • Fluffy Crica
  • Frorc
  • Orandy Sour
  • The candy Dargr
  • SrackCande
  • The Kake
  • Brandy Monkliver
  • Jack Russian
  • Prince of Walo Moskeras
  • El Toro Loco Patyhoon
  • Rob Womb
  • Tom and Jurr Bumb
  • She Whescakawmbo Woake
  • Gidcapore Sling
  • Mys-Tal Conkey
  • Bocooman Irion anlis
  • Ange Cocktaipopa
  • Sex Roy
  • Ruby Dunch
  • Tergea Cacarino burp Komb
  • Ringadot
  • Manhatter
  • Bloo Wommer
  • Kremlin Lani Lady
  • Negronee Lince
  • Peady-Panky on the Beach

Then I added to the original list of cocktails a list of Western philosophers. After about 1.4 million iterations, here’s a curated list:

  • Wotticolus
  • Lobquidibet
  • Mores of Cunge
  • Ruck Velvet
  • Moscow Muáred
  • Elngexetas of Nissone
  • Johkey Bull
  • Zoo Haul
  • Paredo-fleKrpol
  • Whithetery Bacady Mallan
  • Greekeizer
  • Frellinki
  • Made orass
  • Wellis Cocota
  • Giued Cackey-Glaxion
  • Mary Slire
  • Robon Moot
  • Cock Vullon Dases
  • Loscorins of Velayzer
  • Adg Cock Volly
  • Flamanglavere Manettani
  • J.N. tust
  • Groscho Rob
  • Killiam of Orin
  • Fenck Viele Jeapl
  • Gin and Shittenteisg Bura
  • buzdinkor de Mar
  • J. Apinemberidera
  • Nickey Bull
  • Fishomiunr Slmester
  • Chimio de Cuckble Golley
  • Zoo b Revey Wiickes
  • P.O. Hewllan o
  • Hlack Rossey
  • Coolle Wilerbus
  • Paipirista Vico
  • Sadebuss of Nissone
  • Sexoo
  • Parodabo Blazmeg
  • Framidozshat
  • Almiud Iquineme
  • P.D. Sullarmus
  • Baamble Nogrsan
  • G.W.J. . Malley
  • Aphith Cart
  • C.G. Oudy Martine ram
  • Flickani
  • Postine Bland
  • Purch
  • Caul Potkey
  • J.O. de la Matha
  • Porel
  • Flickhaitey Colle
  • Bumbat
  • Mimonxo
  • Zozky Old the Sevila
  • Marenide Momben Coust Bomb
  • Barask’s Spacos Sasttin
  • Th mlug
  • Bloolllamand Royes
  • Hackey Sair
  • Nick Russonack
  • Fipple buck
  • G.W.F. Heer Lach Kemlse Male

Yes, we need not worry about human bartenders, cocktail designers, or philosophers being replaced by this particular algorithm. On the other hand, this algorithm consists of a handful of lines of code and was applied blindly by a person dumber than it. Presumably SkyNet — or the next version of Microsoft Clippy — will be significantly more sophisticated than that.
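For anyone curious about the mechanics: karpathy’s script is a character-level recurrent neural network that learns to predict the next character of each name and then samples new ones. As a rough illustration of that idea — emphatically not karpathy’s code, and not an RNN at all — here is a much simpler character-level Markov chain in Python. The tiny training list is a placeholder; the real experiment used a long list of cocktail names.

```python
# A toy character-level name generator in the spirit of the experiment above.
# This is NOT karpathy's code (his script is a small recurrent network); it is
# an order-2 Markov chain over characters, shown only to illustrate the basic
# idea: learn which characters tend to follow which, then sample new "names."

import random
from collections import defaultdict

START, END = "^", "$"  # markers for the beginning and end of a name

def train(names, order=2):
    """Count, for each preceding character context, which characters follow it."""
    counts = defaultdict(lambda: defaultdict(int))
    for name in names:
        padded = START * order + name + END
        for i in range(order, len(padded)):
            context = padded[i - order:i]
            counts[context][padded[i]] += 1
    return counts

def sample(counts, order=2, max_len=30):
    """Generate one name by repeatedly sampling the next character."""
    context = START * order
    out = []
    while len(out) < max_len:
        choices = counts.get(context)
        if not choices:
            break
        chars, weights = zip(*choices.items())
        ch = random.choices(chars, weights=weights)[0]
        if ch == END:
            break
        out.append(ch)
        context = context[1:] + ch
    return "".join(out)

if __name__ == "__main__":
    # Stand-in training data, invented for illustration.
    cocktails = ["Margarita", "Manhattan", "Moscow Mule", "Mint Julep",
                 "Black Russian", "Tequila Sunrise", "Corpse Reviver",
                 "Singapore Sling", "French 75", "Rob Roy", "Kir Royale"]
    model = train(cocktails)
    for _ in range(10):
        print(sample(model))
```

The RNN does the same basic job with a learned hidden state instead of a fixed two-character window, which lets it pick up longer-range patterns in the names.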


July 18, 2017

America's default philosophy

John McCumber — a grad school colleague with whom I have alas not kept up — has posted at Aeon an insightful historical argument that America’s default philosophy came about because of a need to justify censoring American communist professors (resulting in a naive scientism) and a need to have a positive alternative to Marxism (resulting in the adoption of rational choice theory).

That compressed summary does not do justice to the article’s grounding in the political events of the 1950s nor to how well-written and readable it is.


May 18, 2017

Indistinguishable from prejudice

“Any sufficiently advanced technology is indistinguishable from magic,” said Arthur C. Clarke famously.

It is also the case that any sufficiently advanced technology is indistinguishable from prejudice.

Especially if that technology is machine learning. ML creates algorithms to categorize stuff based upon data sets that we feed it. Say “These million messages are spam, and these million are not,” and ML will take a stab at figuring out what are the distinguishing characteristics of spam and not spam, perhaps assigning particular words particular weights as indicators, or finding relationships between particular IP addresses, times of day, lengths of messages, etc.
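To make that concrete, here is a minimal sketch of such a classifier in Python, using scikit-learn’s naive Bayes. The handful of messages is invented purely for illustration; real spam filters are trained on vastly larger and messier data, and this is not any particular production system.

```python
# Minimal sketch of the kind of classifier described above: train on labeled
# messages, and the model assigns weights to words that distinguish spam from
# not-spam. The tiny dataset is invented for illustration only.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now", "cheap meds limited offer",
    "lunch at noon tomorrow?", "here are the meeting notes",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)     # word-count features
model = MultinomialNB().fit(X, labels)     # learns per-word evidence weights

test = vectorizer.transform(["free prize meds", "notes from the meeting"])
print(model.predict(test))                 # e.g. [1 0]
```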

Now complicate the data and the request, run this through an artificial neural network, and you have Deep Learning that will come up with models that may be beyond human understanding. Ask DL why it made a particular move in a game of Go or why it recommended increasing police patrols on the corner of Elm and Maple, and it may not be able to give an answer that human brains can comprehend.

We know from experience that machine learning can re-express human biases built into the data we feed it. Cathy O’Neil’s Weapons of Math Destruction contains plenty of evidence of this. We know it can happen not only inadvertently but subtly. With Deep Learning, we can be left entirely uncertain about whether and how this is happening. We can certainly adjust DL so that it gives fairer results when we can tell that it’s going astray, as when it only recommends white men for jobs or produces a freshman class with 1% African Americans. But when the results aren’t that measurable, we can be using results based on bias and not know it. For example, is anyone running the metrics on how many books by people of color Amazon recommends? And if we use DL to evaluate complex tax law changes, can we tell if it’s based on data that reflects racial prejudices?[1]

So this is not to say that we shouldn’t use machine learning or deep learning. That would remove hugely powerful tools. And of course we should and will do everything we can to keep our own prejudices from seeping into our machines’ algorithms. But it does mean that when we are dealing with literally inexplicable results, we may well not be able to tell if those results are based on biases.

In short: Any sufficiently advanced technology is indistinguishable from prejudice.[2]

[1] We may not care, if the result is a law that achieves the social goals we want, including equal and fair treatment of taxpayers regardless of race.

[2] Please note that that does not mean that advanced technology is prejudiced. We just may not be able to tell.


May 15, 2017

[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center’s and MIT Media Lab’s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines and hints that help us solve problems that neither partner could solve alone. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves”), where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case, but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, MIT grad student with a newly-minted Ph.D. (congrats, Nathan!), and BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. And, what are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where does that raise questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how to answer them, and how to organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others.

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representation of humanity is working on these systems. So here’s my question: we find co-design works well: bringing in the affected populations to talk with the system designers?

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


October 12, 2016

[liveblog] Perception of Moral Judgment Made by Machines

I’m at the PAPIs conference where Edmond Awad [twitter] at the MIT Media Lab is giving a talk about “Moral Machine: Perception of Moral Judgment Made by Machines.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He begins with a hypothetical in which you can swerve a car to kill one person instead of staying on its course and killing five. The audience chooses to swerve, and Edmond points out that we’re utilitarians. Second hypothetical: swerve into a barrier that will kill you but save the pedestrians. Most of us say we’d like it to swerve. Edmond points out that this is a variation of the trolley problem, except now it’s a machine that’s making the decision for us.

Autonomous cars are predicted to reduce fatalities from accidents by 90%. He says his advisor’s research found that most people think a car should swerve and sacrifice the passenger, but they don’t want to buy such a car. They want everyone else to.

He connects this to the Tragedy of the Commons in which if everyone acts to maximize their good, the commons fails. In such cases, governments sometimes issue regulations. Research shows that people don’t want the government to regulate the behavior of autonomous cars, although the US Dept of Transportation is requiring manufacturers to address this question.

Edmond’s group has created the Moral Machine, a website that creates moral dilemmas for autonomous cars. There have been about two million users and 14 million responses.

Some national trends are emerging. E.g., Eastern countries tend to prefer to save passengers more than Western countries do. Now the MIT group is looking for correlations with other factors, e.g., religiousness, economics, etc. Also, what are the factors most crucial in making decisions?

They are also looking at the effect of automation levels on the assignment of blame. Toyota’s “Guardian Angel” model results in humans being judged less harshly: that mode has a human driver but lets the car override human decisions.

Q&A

In response to a question, Edmond says that Mercedes has said that its cars will always save the passenger. He raises the possibility of the owner of such a car being held responsible for plowing into a bus full of children.

Q: The solutions in the Moral Machine seem contrived. The cars should just drive slower.

A: Yes, the point is to stimulate discussion. E.g., it doesn’t raise the possibility of swerving to avoid hitting someone who is in some way considered to be more worthy of life. [I’m rephrasing his response badly. My fault!]

Q: Have you analyzed chains of events? Does the responsibility decay the further you are from the event?

A: This very quickly gets game theoretical.


August 31, 2016

Socrates in a Raincoat

In 1974, the prestigious scholarly journal TV Guide published my original research that suggested that the inspector in Dostoyevsky’s Crime and Punishment was modeled on Socrates. I’m still pretty sure that’s right, and an actual scholarly article came out a few years later making the same case, by people who actually read Russian ‘n’ stuff.

Around the time that I came up with this hypothesis, the creators of the show Columbo had acknowledged that their main character was also modeled on Socrates. I put one and one together and …

Click on the image to go to a scan of that 1974 article.

[Image: scan of the 1974 “Socrates in a Raincoat” TV Guide article]

