Five types of AI fairness

Google PAIR (People + AI Research) has just posted my attempt to explain what fairness looks like when it’s operationalized for a machine learning system. It’s pegged around five “fairness buttons” on the new Google What-If tool, a resource for developers who want to try to figure out what factors (“features” in machine learning talk) are affecting an outcome.

Note that there are far more than five ways to operationalize fairness. The point of the article is that once we are forced to decide exactly what we’re going to count as fair — exactly enough that a machine learning system can implement it — we realize just how freaking complex fairness is. OMG. I broke my brain trying to figure out how to explain some of those ideas, and it took several Google developers (especially James Wexler) and a fine mist of vegetarian broth to restore it even incompletely. Even so, my explanations are less clear than I (or you, I’m sure) would like. But at least there’s no math in them :)
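To make that concrete, here is a minimal sketch in Python of two of the better-known formalizations: demographic parity (the groups receive positive decisions at equal rates) and equal opportunity (among the people who actually qualify, the groups are correctly identified at equal rates). The toy data and helper names are mine, invented for illustration; they are not from the article or the What-If tool.

    # A toy illustration (not from the PAIR article): the same predictions
    # can be "fair" under one definition of fairness and "unfair" under
    # another. All data below is invented.

    def selection_rate(predictions, group, value):
        """Fraction of people in a group who get a positive prediction."""
        members = [p for p, g in zip(predictions, group) if g == value]
        return sum(members) / len(members)

    def true_positive_rate(predictions, labels, group, value):
        """Among group members who truly qualify (label 1), the fraction
        the model correctly predicts positive."""
        hits = [p for p, y, g in zip(predictions, labels, group)
                if g == value and y == 1]
        return sum(hits) / len(hits)

    # Toy loan decisions: prediction 1 = approve, label 1 = would repay.
    preds  = [1, 0, 1, 0, 1, 0, 0, 1]
    labels = [1, 1, 0, 0, 1, 1, 1, 0]
    group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

    # Demographic parity: equal approval rates across groups.
    print(selection_rate(preds, group, "a"),   # 0.5
          selection_rate(preds, group, "b"))   # 0.5  -> parity holds

    # Equal opportunity: equal true-positive rates across groups.
    print(true_positive_rate(preds, labels, group, "a"),   # 0.5
          true_positive_rate(preds, labels, group, "b"))   # 0.33 -> violated

On this toy data the predictions satisfy demographic parity exactly and yet flunk equal opportunity, so a developer has to choose which definition the system will enforce. That choice is what operationalizing fairness makes unavoidable.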

I’ll write more about this at some point, but for me the big take-away is that fairness has had value as a moral concept so far precisely because it is vague enough to let our intuitions guide us. Machine learning is going to force us to get very specific about it. We are not yet adept at that: we don’t have a vocabulary for talking about the various varieties of fairness, and we don’t agree about them enough to navigate the shoals. It’s going to be a big mess, but one we have to work through. When we do, we’ll be better at being fair.

Now, about the fact that I am a writer-in-residence at Google. Well, yes I am, and have been for about six weeks. It’s a six-month, part-time experiment. My role is to try to explain some of machine learning to people who, like me, lack the technical competence to actually understand it. I’m also supposed to be reflecting in public on what the implications of machine learning might be for our ideas. I am expected to be an independent voice, an outsider on the inside.

So far, it’s been an amazing experience. I’m attached to PAIR, which has developers working on very interesting projects. They are, of course, super-smart, but they have not yet tired of me asking dumb questions that do not seem to be getting smarter over time. So, selfishly, it’s been great for me. And isn’t that all that really matters, hmmm?
