Joho the Blog » [liveblog][PAIR] Jess Holbrook

I’m at the PAIR conference at Google. Jess Holbrook is UX lead for AI. He’s talking about human-centered machine learning.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

“We want to put AI into the maker toolkit, to help you solve real problems.” One of the goals of this: “How do we democratize AI and change what it means to be an expert in this space?” He refers to a blog post he did with Josh Lovejoy about human-centered ML. He emphasizes that we are right at the beginning of figuring this stuff out.

Today, someone finds a data set, and then finds a problem that set could solve. You train a model, look at its performance, and decide if it’s good enough. And then you launch “The world’s first smart X. Next step: profit.” But what if you could do this in a human-centered way?
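That “data-first” workflow he’s describing can be boiled down to a few lines. This is my own toy illustration, not anything from the talk — the data and the 1-nearest-neighbor “model” are made up for the sketch:

```python
# Toy sketch of the workflow: find a data set, train a model,
# check performance, decide if it's "good enough".

# 1. "Find" a data set: (feature, label) pairs.
train = [((1.0, 1.0), "a"), ((1.2, 0.9), "a"),
         ((5.0, 5.0), "b"), ((4.8, 5.2), "b")]
test = [((0.9, 1.1), "a"), ((5.1, 4.9), "b")]

# 2. "Train" a model -- 1-nearest-neighbor just memorizes the data.
def predict(x):
    def sq_dist(pair):
        return sum((a - b) ** 2 for a, b in zip(pair[0], x))
    return min(train, key=sq_dist)[1]

# 3. Look at performance on held-out data.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"accuracy: {accuracy:.2f}")

# 4. Decide if it's good enough, launch, profit.
```

Note that nothing in this loop ever asks who the users are or what problem they actually have — which is exactly the gap the human-centered version aims to fill.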

Human-centered design means:

1. Staying proximate: know your users.
2. Inclusive divergence: reach out and bring in the right people.
3. Shared definition of success: what does it mean to be done?
4. Make early and often: lots of prototyping.
5. Iterate, test, throw it away.

So, what would a human-centered approach to ML look like? He gives some examples.

Instead of trying to find an application for data, human-centered ML finds a problem and then finds a data set appropriate for that problem. E.g., diagnosing plant diseases: assemble tagged photos of plants. Or use ML to personalize a “balancing spoon” for people with Parkinson’s.

Today, we find bias in data sets only after a problem is discovered. E.g., ProPublica’s article exposing the bias in ML recidivism predictions. Instead, proactively inspect for bias, as per JG’s prior talk.
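A proactive inspection can be as simple as comparing error rates across groups before launch rather than waiting for a journalist to find the disparity. A minimal sketch — the data, group labels, and the choice of false positive rate as the metric are all my own illustration:

```python
# Hedged sketch of a pre-launch bias check: compare false positive
# rates (people wrongly flagged high-risk) across groups.
# All data below is invented for illustration.

predictions = [  # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

def false_positive_rate(group):
    # Among true negatives in this group, how many were flagged anyway?
    flags = [flagged for g, flagged, actual in predictions
             if g == group and not actual]
    return sum(flags) / len(flags)

for group in ("A", "B"):
    print(group, round(false_positive_rate(group), 2))
```

A large gap between the two printed rates is the kind of disparity ProPublica found after the fact; checking it proactively makes it a design-time decision instead of a scandal.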

Today, models personalize experiences, e.g., keyboards that adapt to you. With human-centered ML, people can personalize their models. E.g., someone here created a raccoon detector that uses images he himself took and uploaded, personalized to his particular pet raccoon.

Today, we have to centralize data to get results. With human-centered ML we’d also have decentralized, federated learning, getting the benefits while maintaining privacy.
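The core idea of federated learning is that each client computes a model update on its own data and only the updates — never the raw data — leave the device to be averaged on a server. A one-round, one-parameter sketch of federated averaging (my own minimal illustration, not Google's implementation):

```python
# One round of federated averaging for a 1-parameter model w
# (a constant predictor), minimizing squared error.
# Each client's data stays local; only trained weights are shared.

clients = [[1.0, 2.0, 3.0], [10.0, 12.0], [5.0]]  # private local data
w = 0.0    # global model
lr = 0.1   # learning rate

def local_update(w, data, steps=5):
    # Plain gradient descent on this client's data only.
    for _ in range(steps):
        grad = sum(2 * (w - x) for x in data) / len(data)
        w -= lr * grad
    return w

# The server sees only the three returned weights, not the data.
w = sum(local_update(w, d) for d in clients) / len(clients)
print(round(w, 3))  # pulled toward the average of the client means
```

Real systems add many rounds, weighting by client data size, and techniques like secure aggregation, but the privacy-preserving shape — share updates, not data — is the same.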

Today there’s a small group of ML experts. [The photo he shows is of all white men, pointedly.] With human-centered ML, you get experts who have non-ML domain expertise, which leads to more makers. You can create more diverse, inclusive data sets.

Today, we have narrow training and testing. With human-centered ML, we’ll judge instead by how systems change people’s lives. E.g., ML for the blind to help them recognize things in their environment. Or real-time translation of signs.

Today, we do ML once. E.g., PicDescBot tweets out amusing misfires of image recognition. With human-centered ML we’ll combine ML and teaching. E.g., a human draws an example, and the neural net generates alternatives. In another example, ML improved on landscapes taken by StreetView, where it learned what is an improvement from a data set of professional photos. Google auto-suggest ML also learns from human input. He also shows a video of Simone Giertz, “Queen of the Shitty Robots.”

He references Amanda Case: “Expanding people’s definition of normal” is almost always a gradual process.

[The photo of his team is awesomely diverse.]

