ethics Archives - Joho the Blog

November 5, 2017

[liveblog] Stefania Druga on how kids can help teach us about AI

Stefania Druga, a graduate student in the Personal Robots research group at the MIT Media Lab, is leading a discussion focusing on how children can help us to better understand and utilize AI. She’s going to talk about some past and future research projects.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

She shows two applications of AI developed for kids. The first is Cayla, a robotic doll. “It got hacked three days after it was released in Germany” and was banned there. The second is Aristotle, which was supposed to be an Alexa for kids. A few weeks ago Mattel decided not to release it, after “parents worried about their kids’ privacy signed petitions.”

Stefania got interested in what research was being done in this field. She found a couple of papers. One (Lovato & Piper 2015) showed that children mirrored how they interact with Siri, e.g., how angry or assertive they are. Another (McReynolds et al., 2017 [pdf]) found that how children and parents interact with smart toys revealed how little parents and children know about how much info is being collected by these toys, e.g., Hello Barbie’s privacy concerns. It also looked at how parents and children were being incentivized to share info on social media.

Stefania’s group did a pilot study, having parents and 27 kids interact with various intelligent agents, including Alexa, Julie Chatbot, Tina the T.Rex, and Google Home. Four or five children would interact with an agent at a time, with an adult moderator. Their parents were in the room.

Stefania shows a video about this project. After the kids interacted with an agent, the researchers asked them whether it was smarter than they were, whether it was a friend, and whether it had feelings. Children anthropomorphize AIs in playful ways. Most of the older children thought the agents were more intelligent than they were, while the younger children weren’t sure. Two conclusions: makers of these devices should pay more attention to how children interact with them, and we need more research.

What did the children think? They thought the agents were friendly and truthful. “They thought two Alexa devices were separate individuals.” The older children thought about these agents differently than the younger ones did. This may be because of how children start thinking about smartness as they progress through school. A question: do they think about artificial intelligence as being the same as human intelligence?

After playing with the agents, they would probe the nature of the device. “They are trying to place the ontology of the device.”

Also, they treated the devices as gender ambiguous.

The media glommed onto this pilot study. E.g., MIT Technology Review: “Growing Up with Alexa.” Or NYTimes: “Co-Parenting with Alexa.” Wired: Understanding Generation Alpha. From these articles, it seems that people are really polarized about the wisdom of introducing children to these devices.

Is this good for kids? “It’s complicated,” Stefania says. The real question is: How can children and parents leverage intelligent agents for learning, or for other good ends?

Her group did another study, this summer, that had 30 pairs of children and parents navigate a robot to solve a maze. They’d see the maze from the perspective of the robot. They also saw a video of a real mouse navigating a maze, and of another robot solving the maze by itself. “Does changing the agent (themselves, mouse, robot) change their idea of intelligence?” Kids and parents both did the study. Most of the kids mirrored their parents’ choices. They even mirrored the words the parents used…and the value placed on those words.

What next? Her group wants to know how to use these devices for learning. They build extensions using Scratch, including for an open source project called Poppy. (She shows a very cool video of the robot playing, collaborating, painting, etc.) Kids can program it easily. Ultimately, she hopes that this might help kids see that they have agency, and that while the robot is smart at some things, people are smart at other things.

Q&A

Q: You said you also worked with the elderly. What are the chief differences?

A: Seniors and kids have a lot in common. They were especially interested in the fact that these agents can call their families. (We did this on tablets, and some of the elderly can’t use them because their skin is too dry.)

Q: Did learning that they can program the robots change their perspective on how smart the robots are?

A: The kids who got the bot through the maze did not show a change in their perspective. When they become fluent in customizing it and understanding how it computes, it might. It matters a lot to have the parents involved in flipping that paradigm.

Q: How were the parents involved in your pilot study?

A: It varied widely by parent. It was especially important to have the parents there for the younger kids because the device sometimes wouldn’t understand the question, or what sorts of things the child could ask it about.

Q: Did you look at how the participants reacted to robots that have strong or weak characteristics of humans or animals?

A: We’ve looked at whether it’s an embodied intelligent agent or not, but not at that yet. One of our colleagues is looking at questions of empathy.

Q: [me] Do the adults ask their children to thank Siri or other such agents?

A: No.

Q: [me] That suggests they’re tacitly shaping them to think that these devices are outside of our social norms?

Q: In my household, the “thank you” extinguishes itself: you do it a couple of times, and then you give it up.

A: This indicates that these systems right now are designed in a very transactional way. You have to say the wake word before every single phrase. But these devices will advance rapidly. Right now it’s unnatural conversation. But with chatbots kids have a more natural conversation, and will say thank you. And kids want to teach it things, e.g., their names or favorite color. When Alexa doesn’t know what the answer is, the natural thing is to tell it, but that doesn’t work.

Q: Do the kids think these are friends?

A: There’s a real question around animism. Is it ok for a device to be designed to create a relationship with, say, a senior person and to convince them to take their pills? My answer is that people tend to anthropomorphize everything. Over time, kids will figure out the limitations of these tools.

Q: Kids don’t have genders for the devices? The speaking ones all have female voices. The doll is clearly a female.

A: Kids were interchanging genders because the devices are in a fluid space in the spectrum of genders. “They’re open to the fact that it’s an entirely new entity.”

Q: When you were talking about kids wanting to teach the devices things, I was thinking maybe that’s because they want the robot to know them. My question: Can you say more about what you observed with kids who had intelligent agents at home as opposed to those who do not?

A: Half already had a device at home. I’m running a workshop in Saudi Arabia with kids there. I’m very curious to see the differences. Also in Europe. We did one in Colombia among kids who had never seen an Alexa before and who wondered where the woman was. They thought there must be a phone inside. They all said goodbye at the end.

Q: If the wifi goes down, does the device’s sudden stupidness concern the children? Do they think it died?

A: I haven’t tried that.

[me] Sounds like that would need to go through an IRB.

Q: I think your work is really foundational for people who want to design for kids.


June 6, 2017

[liveblog] metaLab

Harvard metaLab is giving an informal Berkman Klein talk about their work on designing for ethical AI. Jeffrey Schnapp introduces metaLab as “an idea foundry, a knowledge-design lab, and a production studio experimenting in the networked arts and humanities.” The discussion today will be about metaLab’s various involvements in the Berkman Klein – MIT Media Lab project on ethics and governance of AI. The conference room is packed with Fellows and the newly arrived summer interns.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Matthew Battles and Jessica Yurkofsky begin by talking about Curricle, a “new platform for experimenting with shopping for courses.” How can the experience be richer, more visual, and use more of the information and data that Harvard has? They’ve come up with a UI that has three elements: traditional search, a visualization, and a list of the results.

“They’ve been grappling with the ethics of putting forward new search algorithms.” The design is guided by transparency, autonomy, and visualization. Transparency means that they make apparent how the search works, allowing students to assign weights to keywords. If Curricle makes recommendations, it will explain that it’s because other students like you have chosen it or because students like you have never done this, etc. Visualization shows students what’s being returned by their search and how it’s distributed.
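
To make the transparency idea concrete, here is a minimal, hypothetical sketch of weighted keyword scoring in Python. The names, data shapes, and scoring scheme are assumptions for illustration, not Curricle’s actual implementation; the point is only that a ranking built by summing student-assigned keyword weights is easy to explain back to the student.

    # Hypothetical sketch, not Curricle's code: score each course by summing the
    # student-assigned weight of every keyword found in its description, and keep
    # the matched keywords so the ranking can be explained to the student.
    def score_courses(courses, keyword_weights):
        results = []
        for course in courses:
            text = course["description"].lower()
            matched = {kw: w for kw, w in keyword_weights.items() if kw.lower() in text}
            results.append({
                "title": course["title"],
                "score": sum(matched.values()),
                "why": matched,  # exactly which keywords contributed, and how much
            })
        return sorted(results, key=lambda r: r["score"], reverse=True)

    courses = [
        {"title": "Intro to AI", "description": "Machine learning, ethics, and search."},
        {"title": "Art History", "description": "Painting and sculpture in Europe."},
    ]
    print(score_courses(courses, {"ethics": 2.0, "machine learning": 1.5}))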

Similar principles guide a new project, AI Compass, that is the entry point for information about Berkman Klein’s work on the Ethics and Governance of AI project. It is designed to document the research being done and to provide a tool for surveying the field more broadly. They looked at how neural nets are visualized, how training sets are presented, and other visual metaphors. They are trying to find a way to present these resources in their connections. They have decided to use Conway’s Game of Life [which I was writing about an hour ago, which freaks me out a bit]. The game allows complex structures to emerge from simple rules. AI Compass is using animated cellular automata as icons on the site.
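
For readers who haven’t seen it, the Game of Life is simple enough to state in a few lines of Python. This is a generic sketch of the standard rules (a live cell survives with two or three live neighbors; a dead cell comes alive with exactly three), not metaLab’s animation code.

    from collections import Counter

    # Standard Game of Life rules on a sparse grid: live_cells is a set of (x, y) pairs.
    def step(live_cells):
        neighbor_counts = Counter(
            (x + dx, y + dy)
            for x, y in live_cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {
            cell
            for cell, count in neighbor_counts.items()
            if count == 3 or (count == 2 and cell in live_cells)
        }

    # A "glider" -- one of the complex structures that emerge from these simple rules.
    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        glider = step(glider)
    print(glider)  # same shape, shifted one cell diagonally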

metaLab wants to enable people to explore the information at three different scales. The macro scale shows all of the content arranged into thematic areas. This lets you see connections among the pieces. The middle scale shows the content with more information. At the lowest scale, you see the resource information itself, as well as connections to related content.

Sarah Newman talks about how AI is viewed in popular culture: the Matrix, Ahnuld, etc. “We generally don’t think about AI as it’s expressed in the tools we actually use,” such as face recognition, search, recommendations, etc. metaLab is interested in how art can draw out the social and cultural dimensions of AI. “What can we learn about ourselves by how we interact with, tell stories about, and project logic, intelligence, and sentience onto machines?” The aim is to “provoke meaningful reflection.”

One project is called “The Future of Secrets.” Where will our email and texts be in 100 years? And what does this tell us about our relationship with our tech? Why and how do we trust it? It’s an installation that’s been at the Museum of Fine Arts in Boston and recently in Berlin. People enter secrets that are printed out anonymously. People created stories, most of which weren’t true, often about the logic of the machine. People tended to project much more intelligence onto the machine than was there. Cameras were watching and would occasionally print out images from the show itself.

From this came a new piece (done with fellow Rachel Kalmar) in which a computer reads the secrets out loud. It will be installed at the Berkman Klein Center soon.

Working with Kim Albrecht in Berlin, the center is creating data visualizations based on the data that a mobile phone collects, including from the accelerometer. “These visualizations let us see how the device is constructing an image of the world we’re moving through.” That image is messy, noisy.

The lab is also collaborating on a Berlin exhibition, adding provocative framing using X degrees of Separation. It finds relationships among objects from disparate cultures. What relationships do algorithms find? How does that compare with how humans do it? What can we learn?

Starting in the fall, Jeffrey and a co-teacher are going to be leading a robotics design studio, experimenting with interior and exterior architecture in which robotic agents are copresent with human actors. This is already happening, raising regulatory and urban planning challenges. The studio will also take seriously machine vision as a way of generating new ways of thinking about mobility within city spaces.

Q&A

Q: me: For AI Compass, where’s the info coming from? How is the data represented? Open API?

Matthew: It’s designed to focus on particular topics. E.g., Youth, Governance, Art. Each has a curator. The goal is not to map the entire space. It will be a growing resource. An open API is not yet on the radar, but it wouldn’t be difficult to do.

Q: At the AI Advance, Jonathan Zittrain said that organizations are a type of AI: governed by a set of rules, they grow and learn beyond their individuals, etc.

Matthew: One way we hope to deal with this very capacious approach to AI is through artists. What have artists done that bears on AI, beyond the cinematic tropes? There’s a rich discourse about this. We want to be in dialogue with all sorts of people about this.

Q: About Curricle: Are you integrating Q results [student responses to classes], etc.?

Sarah: Not yet. There are mixed feelings from administrators about using that data. We want Curricle to encourage people to take new paths. The Q data tends to encourage people down old paths. Curricle will let students annotate their own paths and share them.

Jeffrey: We’re aiming at creating a curiosity engine. We’re working with a century of curricular data. This is a rare privilege.

me: It’d enrich the library if the data about resources was hooked into LibraryCloud.

Q: kendra: A useful feature would be finding a random course that fits into your schedule.

A: In the works.

Q: It’d be great to have transparency around the suggestions of unexpected courses. We don’t want people to be choosing courses simply to be unique.

A: Good point.

A: The same tool that lets you diversify your courses also lets you concentrate all of them into two days in classrooms near your dorm. Because the data includes courses from all the faculty, being unique is actually easy. The challenge is suggesting uniqueness that means something.

Q: People choose courses in part based on who else is choosing that course. It’d be great to have friends in the platform.

A: Great idea.

Q: How do you educate the people using the platform? How do you present and explain the options? How are you going to work with advisors?

A: Important concerns at the core of what we’re thinking about and working on.
