[liveblog][pair] Blaise Agüera y Arcas on the source of bias

At the PAIR Symposium, Google’s Blaise Agüera y Arcas is providing some intellectual and historical perspective on AI issues.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

[Note: This talk is tough to live-blog because it is carefully structured intellectually. My apologies.]

He says neural networks have been part of the computing environment from the beginning. E.g., he thinks the loop at the end of the logic-gate symbol in fact comes from a 1943 symbolization of biological neural networks [presumably McCulloch and Pitts' notation]. There are indications of neural networks in Turing's early papers. So these ideas go way back. Blaise thinks that within a few years the majority of computing processes will be running on processors designed for neural networks.

ML has raised anxiety reminiscent of Walter Benjamin's concern, in "The Work of Art in the Age of Mechanical Reproduction," that the mass reproduction of art strips it of its aura. Now there's the same kind of moral panic about art, human exceptionalism, and human existence. (Cf. Nick Bostrom's Superintelligence.) It reminds him of Jakob Mohr's 1910 drawing The Influencing Machine, in which schizophrenic patients believe they're being influenced by an external machine. (They always thought men were operating the machine.) He points to what he calls Bostrom's ultimate colonialism, in which we become able to populate the universe with 10^58 human minds. [Sorry, but I didn't get this. My fault.] He ties this to Bacon's reverence for the domination of nature. Blaise prefers a feminist view, citing Kember & Zylinska's Life After New Media.

Many say we have a value-alignment problem, he says: how do we make AI that embeds human values? But AI systems already have human values, because they're trained on human data. The problem is that our human values are off. He references a paper on judging criminality from faces. The paper claims to be free of human biases, but it's trained on data that is biased. Nevertheless, this sort of tech is being commercialized. E.g., Faception claims to classify people based on their faces: High IQ, Pedophile, etc.

Also, there's the recent paper about an ML system that classifies people's sexual orientation based on their faces. Blaise ran a test on Mechanical Turk asking about some of the features in that paper's composite gay and straight faces. He found that people attracted to the same sex were more likely to wear glasses. There were also significant differences in facial hair, use of makeup, and facial tan, features also visible in the composite faces. So the ML system might have been picking up social markers, not physiognomy: "There are a lot of tells."
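[To make the kind of comparison Blaise describes concrete, here's a minimal sketch of a two-proportion significance test: checking whether a visible social marker, such as wearing glasses, appears at meaningfully different rates in two groups of labeled photos. The counts below are hypothetical placeholders, not numbers from the paper or from Blaise's Mechanical Turk test.]

```python
from math import sqrt, erf

def two_proportion_ztest(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # Normal CDF via erf, doubled for a two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: glasses-wearers among 500 photos in each group.
z, p = two_proportion_ztest(190, 500, 140, 500)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests the marker's rate differs by group
```

[If a marker like this differs significantly between groups, a classifier can exploit it without learning anything about faces per se, which is the "tells" point.]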

In conclusion, none of this is an argument against ML. On the contrary: the biases, the prejudices, and the social signaling are things ML lets us hold a mirror up to.
