Joho the Blog: November 2017

November 19, 2017

[liveblog][ai] A Harm-reduction framework for algorithmic accountability

I’m at one of the weekly talks held by Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab. Alexandra Wood and Micah Altman are talking about “A harm reduction framework for algorithmic accountability over personal information,” a snapshot of their ongoing research at the Privacy Tools Project. The PTP is an interdisciplinary project that investigates tools for sharing info while preserving privacy.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Alexandra says they’ve been developing frameworks for assessing privacy risk when collecting personal data, and have been looking at the controls that can be used to protect individuals. They’ve found that privacy tools address a narrow slice of the problem; other types of misuse of data require other approaches.

She refers to some risk assessment algorithms used by the courts that have turned out to be racially biased, to have unbalanced error rates (falsely flagging black defendants as future criminals at twice the rate of white defendants), and to be highly inaccurate. What’s missing is an analysis of harm. “Current approaches to regulating algorithmic classification and decision-making largely elide harm,” she says. “The ethical norms in the law point to the broader responsibilities of the algorithms’ designers.”
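
[To make “unbalanced error rates” concrete, here’s a toy sketch of my own, with entirely made-up records. It is not ProPublica’s analysis or the speakers’ code, just the shape of the calculation:]

```python
# Illustration only: per-group false positive rates from made-up records.
# A "false positive" here is a defendant flagged high-risk who did not reoffend.
records = [
    # (group, flagged_high_risk, reoffended) -- invented examples
    ("black", True, False), ("black", True, False), ("black", False, False), ("black", True, True),
    ("white", True, False), ("white", False, False), ("white", False, False), ("white", False, True),
]

def false_positive_rate(rows):
    did_not_reoffend = [r for r in rows if not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

for group in ("black", "white"):
    rows = [r for r in records if r[0] == group]
    print(group, f"false positive rate = {false_positive_rate(rows):.0%}")
# black -> 67%, white -> 33%: twice the rate, even though both groups
# contribute the same number of records.
```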

Micah says there isn’t a lot of work mapping the loss of privacy to other harms. The project is taking an interdisciplinary approach to this. [I don’t trust my blogging here. Don’t get upset with Micah for my ignorance-based reporting!]

Social science treats harm pragmatically. It finds four main dimensions of well-being: wealth, health, life satisfaction and the meaningful choices available to people. Different schools take different approaches to this, some emphasizing physical and psychological health, others life satisfaction, etc.

But to assess harm, you need to look at people’s lives over time. E.g., how does going to prison affect people’s lives? Being sentenced decreases your health, life satisfaction, choices, and income. “The consequences of sentencing are persistent and individually catastrophic.”

He shows a chart from ProPublica based on Broward County data: the risk scores for white defendants skew heavily toward lower scores, while the scores for black defendants are more evenly distributed. This by itself doesn’t prove that the tool is unfair. You have to look at the causes of the differences in those distributions.

Modern inference theory says something different about harm. A choice is harmful if its causal effect makes the outcome worse, and the causal effect is measured by comparing potential outcomes. The causal impact of smoking is not simply that you may get cancer, but includes the impact of not smoking, such as possibly gaining weight. You have to look at the counterfactuals.
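
[In the standard potential-outcomes notation, which I’m adding for my own clarity rather than quoting from a slide, the idea looks like this:]

```latex
% Potential-outcomes framing of harm (standard notation, added for illustration).
% Y_i(1): person i's outcome if they smoke; Y_i(0): their outcome if they do not.
\tau_i = Y_i(1) - Y_i(0)
% Only one of the two outcomes is ever observed for a given person, so judging
% harm always involves a counterfactual: the effect of smoking is weighed against
% what would have happened without it (e.g., possibly gaining weight).
```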

The COMPAS risk assessment tool that has been the subject of much criticism is affected by which training data you use, choice of algorithm, the application of it to the individual, and the use of the score in sentencing. Should you exclude information about race? Or exclude any info that might violate people’s privacy? Or make it open or not? And how to use the outcomes?

Can various protections reduce harm from COMPAS? Racial features were not explicitly included in the COMPAS model. But there are proxies for race. Removing the proxies could lead to less accurate predictions, and make it difficult to study and correct for bias. That is, removing that data (features) doesn’t help that much and might prevent you from applying corrective measures.
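
[A toy sketch of the proxy problem, with invented records rather than anything from COMPAS: a feature like zip code can carry much of the race signal, so dropping the race column doesn’t remove the information, while it does remove the ability to audit by group:]

```python
# Illustration only: a zip code acting as a near-perfect proxy for race in
# made-up data. Dropping the race column would not drop this signal, but it
# would make it impossible to compute per-race error rates to check for bias.
people = [
    # (zip_code, race) -- entirely invented records
    ("02121", "black"), ("02121", "black"), ("02121", "black"), ("02121", "white"),
    ("02138", "white"), ("02138", "white"), ("02138", "white"), ("02138", "black"),
]

by_zip = {}
for zip_code, race in people:
    by_zip.setdefault(zip_code, []).append(race)

for zip_code, races in sorted(by_zip.items()):
    share_black = races.count("black") / len(races)
    print(zip_code, f"share black = {share_black:.0%}")
# 02121 -> 75% black, 02138 -> 25% black: the zip code predicts race well,
# so a model trained without the race column can still behave differently by race.
```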

Suppose you throw out the risk score. Judges are still biased. “The adverse impact is potentially greater when the decision is not informed by an algorithm’s prediction.” A recent paper by Jon Kleinberg showed that algorithms predicting pre-trial assessments were less biased than decisions made by human judges. [I hope I got that right. It sounds like a significant finding.]

There’s another step: going from the outcomes to the burdens those outcomes put on people. “An even distribution of outcomes can produce disproportionate burdens.” E.g., juvenile defendants have more to lose, since more of their years will be negatively affected by a jail sentence, so having the same false positive and false negative rates for adults and juveniles would impose a greater burden on the juveniles. When deciding whether an algorithmic decision is unjust, you can’t just look at the equality of error rates.
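
[A back-of-the-envelope illustration with my own numbers, not the speakers’: identical error rates can still impose very different burdens if a wrongful detention costs a juvenile more affected years than an adult:]

```python
# Illustration only: equal false positive rates, unequal burdens.
false_positive_rate = 0.10                      # assumed identical for both groups
years_affected = {"adult": 5, "juvenile": 15}   # hypothetical cost of a wrongful detention

for group, years in years_affected.items():
    expected_burden = false_positive_rate * years
    print(group, f"expected burden = {expected_burden:.1f} affected years per person")
# adult    -> 0.5 affected years per person
# juvenile -> 1.5 affected years per person: triple the burden at the same error rate
```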

A decision is unjust when it is: 1. Dominated (all groups pay a higher burden for the same social benefit); 2. Unprogressive (it puts higher relative burdens on members of classes who are less well off); 3. Individually catastrophic (wrong decisions are so harmful that they reduce the well-being of individual members of a known class); 4. Group punishment (it affects an entire disadvantaged class).
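
[Here is one rough, entirely hypothetical way to formalize the first two of those tests, just to pin down what they might mean as checks; it is not the speakers’ definition, and the burden, benefit, and well-being numbers are assumed to sit on some common scale:]

```python
# A loose, hypothetical formalization of "dominated" and "unprogressive" --
# one way the informal criteria could be coded, for illustration only.

def is_dominated(burdens_a, benefit_a, burdens_b, benefit_b):
    """Policy A is dominated by policy B if every group bears a higher burden
    under A while A delivers no more social benefit than B."""
    return (all(burdens_a[g] > burdens_b[g] for g in burdens_a)
            and benefit_a <= benefit_b)

def is_unprogressive(burdens, wellbeing):
    """A policy is unprogressive if the relative burden (burden scaled by
    well-being) falls more heavily on the group that is already worst off."""
    relative = {g: burdens[g] / wellbeing[g] for g in burdens}
    worst_off = min(wellbeing, key=wellbeing.get)
    best_off = max(wellbeing, key=wellbeing.get)
    return relative[worst_off] > relative[best_off]
```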

For every decision, there are unavoidable constraints: a tradeoff between the individual and the social group; a privacy cost; it can’t be equally accurate in all categories (a numeric illustration follows below); it can’t be fair without comparing utility across people; and it’s impossible to avoid these constraints by adding human judgment, because the human is still governed by them.
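
[The “can’t be equally accurate in all categories” constraint can be seen with simple arithmetic. The numbers below are invented; the point is only the shape of the tradeoff, i.e., the well-known tension between calibration and equal error rates when base rates differ:]

```python
# Illustration only: if a score is calibrated (a flagged person has the same
# reoffense probability in every group) but the groups' base rates differ,
# the false positive rates cannot also be equal. All numbers are invented.
p_reoffend_if_flagged = 0.6
p_reoffend_if_not_flagged = 0.2
base_rates = {"group_A": 0.5, "group_B": 0.3}

for group, base_rate in base_rates.items():
    # Fraction flagged high-risk that is consistent with calibration:
    flagged = (base_rate - p_reoffend_if_not_flagged) / (
        p_reoffend_if_flagged - p_reoffend_if_not_flagged)
    # False positive rate = P(flagged | did not reoffend):
    fpr = flagged * (1 - p_reoffend_if_flagged) / (1 - base_rate)
    print(group, f"flagged = {flagged:.0%}, false positive rate = {fpr:.0%}")
# group_A -> flagged = 75%, false positive rate = 60%
# group_B -> flagged = 25%, false positive rate = 14%
```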

Micah’s summary for COMPAS: 1. Some protections would be irrelevant (inclusion of sensitive characteristics, and protection of individual information). 2. Other protections would be insufficient (no intention to discriminate; open source/open data/FCRA).

Micah ends with a key question about fairness that has been too neglected: “Do black defendants bear a relatively higher cost than whites from bad decisions that prevent the same social harms?”


November 5, 2017

[liveblog] Stefania Druga on how kids can help teach us about AI

Stefania Druga, a graduate student in the Personal Robots research group at the MIT Media Lab, is leading a discussion focusing on how children can help us to better understand and utilize AI. She’s going to talk about some past and future research projects.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

She shows two applications of AI developed for kids. The first is Cayla, a robotic doll. “It got hacked three days after it was released in Germany” and was banned there. The second is Aristotle, which was supposed to be an Alexa for kids. A few weeks ago Mattel decided not to release it, after parents worried about their kids’ privacy signed petitions.

Stefania got interested in what research was being done in this field. She found a couple of papers. One (Lovato & Piper 2015) showed that children mirror how they interact with Siri, e.g., how angry or assertive they are. Another (McReynolds et al., 2017 [pdf]) found that the way children and parents interact with smart toys reveals how little parents and children know about how much info is being collected by these toys, e.g., Hello Barbie’s privacy concerns. It also looked at how parents and children were being incentivized to share info on social media.

Stefania’s group did a pilot study, having parents and 27 kids interact with various intelligent agents, including Alexa, Julie Chatbot, Tina the T.Rex, and Google Home. Four or five children would interact with the agent at a time, with an adult moderator. Their parents were in the room.

Stefania shows a video about this project. After the kids interacted with the agent, they were asked whether it was smarter than them, whether it’s a friend, and whether it has feelings. Children anthropomorphize AIs in playful ways. Most of the older children thought the agents were more intelligent than they were, while the younger children weren’t sure. Two conclusions: makers of these devices should pay more attention to how children interact with them, and we need more research.

What did the children think? They thought the agents were friendly and truthful. They thought two Alexa devices were separate individuals. The older children thought about these agents differently than the younger ones did; this may be because of how children start thinking about smartness as they progress through school. A question: do they think about artificial intelligence as being the same as human intelligence?

After playing with the agents, they would probe the nature of the device. “They are trying to place the ontology of the device.”

Also, they treated the devices as gender ambiguous.

The media glommed onto this pilot study. E.g., MIT Technology Review: “Growing Up with Alexa.” Or NYTimes: “Co-Parenting with Alexa.” Wired: “Understanding Generation Alpha.” From these articles, it seems that people are really polarized about the wisdom of introducing children to these devices.

Is this good for kids? “It’s complicated,” Stefania says. The real question is: How can children and parents leverage intelligent agents for learning, or for other good ends?

Her group did another study this summer that had 30 pairs of children and parents navigate a robot to solve a maze. They’d see the maze from the perspective of the robot. They also saw a video of a real mouse navigating a maze, and of another robot solving the maze by itself. Does changing the agent (themselves, mouse, robot) change their idea of intelligence? Kids and parents both did the study. Most of the kids mirrored their parents’ choices. They even mirrored the words the parents used…and the value placed on those words.

What next? Her group wants to know how to use these devices for learning. They build extensions using Scratch, including for an open source project called Poppy. (She shows a very cool video of the robot playing, collaborating, painting, etc.) Kids can program it easily. Ultimately, she hopes that this might help kids see that they have agency, and that while the robot is smart at some things, people are smart at other things.

Q&A

Q: You said you also worked with the elderly. What are the chief differences?

A: Seniors and kids have a lot in common. They were especially interested in the fact that these agents can call their families. (We did this on tablets, and some of the elderly can’t use them because their skin is too dry.)

Q: Did learning that they can program the robots change their perspective on how smart the robots are?

A: The kids who got the bot through the maze did not show a change in their perspective. When they become fluent in customizing it and understanding how it computes, it might. It matters a lot to have the parents involved in flipping that paradigm.

Q: How were the parents involved in your pilot study?

A: It varied widely by parent. It was especially important to have the parents there for the younger kids because the device sometimes wouldn’t understand the question, or what sorts of things the child could ask it about.

Q: Did you look at how the participants reacted to robots that have strong or weak characteristics of humans or animals?

A: We’ve looked at whether it’s an embodied intelligent agent or not, but not at that yet. One of our colleagues is looking at questions of empathy.

Q: [me] Do the adults ask their children to thank Siri or other such agents?

A: No.

Q: [me] That suggests they’re tacitly shaping them to think that these devices are outside of our social norms?

Q: In my household, the “thank you” extinguishes itself: you do it a couple of times, and then you give it up.

A: This indicates that these systems right now are designed in a very transactional way. You have to say the wake-up word with every single phrase. But these devices will advance rapidly. Right now it’s unnatural conversation. But with chatbots kids have a more natural conversation, and will say thank you. And kids want to teach it things, e.g., their names or favorite color. When Alexa doesn’t know the answer, the natural thing is to tell it, but that doesn’t work.

Q: Do the kids think these are friends?

A: There’s a real question around animism. Is it ok for a device to be designed to create a relationship with, say, a senior person and to convince them to take their pills? My answer is that people tend to anthropomorphize everything. Over time, kids will figure out the limitations of these tools.

Q: Kids don’t have genders for the devices? The speaking ones all have female voices. The doll is clearly a female.

A: Kids were interchanging genders because the devices are in a fluid space in the spectrum of genders. “They’re open to the fact that it’s an entirely new entity.”

Q: When you were talking about kids wanting to teach the devices things, I was thinking maybe that’s because they want the robot to know them. My question: Can you say more about what you observed with kids who had intelligent agents at home as opposed to those who do not?

A: Half already had a device at home. I’m running a workshop in Saudi Arabia with kids there. I’m very curious to see the differences. Also in Europe. We did one in Colombia among kids who had never seen an Alexa before and who wondered where the woman was. They thought there must be a phone inside. They all said goodbye at the end.

Q: If the wifi goes down, does the device’s sudden stupidness concern the children? Do they think it died?

A: I haven’t tried that.

[me] Sounds like that would need to go through an IRB.

Q: I think your work is really foundational for people who want to design for kids.
