Joho the Blog: September 2018

September 20, 2018

Coming to belief

I’ve written before about the need to teach The Kids (also: all of us) not only how to think critically so we can see what we should not believe, but also how to come to belief. That piece, which I now cannot locate, was prompted by danah boyd’s excellent post on the problem with media literacy. Robert Berkman, Outreach Business Librarian at the University of Rochester and Editor of The Information Advisor’s Guide to Internet Research, asked me how one can go about teaching people how to come to belief. Here’s an edited version of my reply:

I’m afraid I don’t have a good answer. I actually haven’t thought much about how to teach people how to come to belief, beyond arguing for doing this as a social process (the ol’ “knowledge is a network” argument :) I have a pretty good sense of how *not* to do it: the way philosophy teachers relentlessly show how every proposed position can be torn down.

I wonder what we’d learn by taking a literature course as a model — not one that is concerned primarily with critical method, but one that is trying to teach students how to appreciate literature. Or art. The teacher tries to get the students to engage with one another to find what’s worthwhile in a work. Formally, you implicitly teach the value of consistency, elegance of explanation, internal coherence, how well a work clarifies one’s own experience, etc. Those are useful touchstones for coming to belief.

I wouldn’t want to leave students feeling that it’s up to them to come up with an understanding on their own. I’d want them to value the history of interpretation, bringing their critical skills to it. The last thing we need is to make people feel yet more unmoored.

I’m also fond of the orthodox Jewish way of coming to belief, as I, as a non-observant Jew, understand it. You have an unchanging and inerrant text that means nothing until humans interpret it. To interpret it means to be conversant with the scholarly opinions of the great Rabbis, who disagree with one another, often diametrically. Formulating a belief in this context means bringing contemporary intelligence to a question while finding support in the old Rabbis…and always always talking respectfully about those other old Rabbis who disagree with your interpretation. No interpretations are final. Learned contradiction is embraced.

That process has the elements I personally like (being moored to a tradition, respecting those with whom one disagrees, acceptance of the finitude of beliefs, acceptance that they result from a social process), but it’s not going to be very practical outside of Jewish communities if only because it rests on the acceptance of a sacred document, even though it’s one that literally cannot be taken literally; it always requires interpretation.

My point: We do have traditions that aim at enabling us to come to belief. Science is one of them. But there are others. We should learn from them.

TL;DR: I dunno.


September 14, 2018

Five types of AI fairness

Google PAIR (People + AI Research) has just posted my attempt to explain what fairness looks like when it’s operationalized for a machine learning system. It’s pegged around five “fairness buttons” on the new Google What-If tool, a resource for developers who want to try to figure out what factors (“features” in machine learning talk) are affecting an outcome.

Note that there are far more than five ways to operationalize fairness. The point of the article is that once we are forced to decide exactly what we’re going to count as fair — exactly enough that a machine learning system can implement it — we realize just how freaking complex fairness is. OMG. I broke my brain trying to figure out how to explain some of those ideas, and it took several Google developers (especially James Wexler) and a fine mist of vegetarian broth to restore it even incompletely. Even so, my explanations are less clear than I (or you, I’m sure) would like. But at least there’s no math in them :)
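To make the idea of operationalizing fairness concrete, here is a minimal Python sketch of two of the many possible definitions, expressed as metrics over a model’s predictions: demographic parity (do groups receive positive predictions at the same rate?) and equal opportunity (do groups have the same true-positive rate?). This is my own illustration, not the What-If tool’s code; the function names and the toy data are hypothetical.

```python
# Two ways to operationalize "fairness" as concrete metrics over binary
# model predictions. Names and data are hypothetical illustrations.

def demographic_parity_gap(preds, groups):
    """Gap between groups' positive-prediction rates (0.0 = parity)."""
    rate = {}
    for g in set(groups):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        rate[g] = sum(members) / len(members)
    vals = sorted(rate.values())
    return vals[-1] - vals[0]

def equal_opportunity_gap(preds, labels, groups):
    """Gap between groups' true-positive rates (0.0 = equal opportunity)."""
    tpr = {}
    for g in set(groups):
        # Only the examples whose true label is positive count here.
        pos = [p for p, l, gr in zip(preds, labels, groups)
               if gr == g and l == 1]
        tpr[g] = sum(pos) / len(pos)
    vals = sorted(tpr.values())
    return vals[-1] - vals[0]

# Toy example: predictions for two groups, "a" and "b".
preds  = [1, 1, 0, 0, 1, 0, 0, 0]
labels = [1, 0, 0, 1, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(demographic_parity_gap(preds, groups))          # 0.25 (a: 2/4, b: 1/4)
print(equal_opportunity_gap(preds, labels, groups))   # 0.0  (both TPRs are 1/2)
```

Note that on this toy data the two definitions already disagree about whether the model is "fair," which is exactly the kind of conflict that stays invisible while fairness remains an intuition rather than a metric.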

I’ll write more about this at some point, but for me the big take-away is that fairness has had value as a moral concept so far because it is vague enough to allow our intuition to guide us. Machine learning is going to force us to get very specific about it. But we are not yet adept enough at that specificity — e.g., we don’t have a vocabulary for talking about fairness’s many varieties — and we don’t agree about them enough to be able to navigate the shoals. It’s going to be a big mess, but something we have to work through. When we do, we’ll be better at being fair.

Now, about the fact that I am a writer-in-residence at Google. Well, yes I am, and have been for about six weeks. It’s a six-month, part-time experiment. My role is to try to explain some of machine learning to people who, like me, lack the technical competence to actually understand it. I’m also supposed to be reflecting in public on what the implications of machine learning might be for our ideas. I am expected to be an independent voice, an outsider on the inside.

So far, it’s been an amazing experience. I’m attached to PAIR, which has developers working on very interesting projects. They are, of course, super-smart, but they have not yet tired of me asking dumb questions that do not seem to be getting smarter over time. So, selfishly, it’s been great for me. And isn’t that all that really matters, hmmm?
