[liveblog][PAIR] Hae Won Park on living with AI

At the PAIR conference, Hae Won Park of the MIT Media Lab is talking about personal social robots for the home.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Home robots no longer look like R2-D2. She shows a 2014 Jibo video.

What sold people is the social dynamic: Jibo is engaged with family members.

She wants to talk about the effect of AI on people’s lives at home.

For example, Google Home changed her morning routine, how she purchases goods, and how she controls her home environment. She shows a 2008 robot called Autom, a weight management coach.

A study showed that the robot kept people at it longer than paper or a computer program did, and that people had the strongest “working alliance” with the robot. They also engaged with it emotionally, personalizing it, giving it a name, etc. These users understand it’s just a machine. Why?

She shows a video of Leonardo, a social robot that exhibits bodily cues of emotion. We seem to share a mental model.

They studied how children tell stories and listen to each other. Jin Joo Lee developed a model in which the robot appears to be very attentive as the child tells a story. It notes cues about whether the speaker is engaged. The children were indeed engaged by this reactive behavior.

Researchers have found that social robots activate social thinking, lighting up the social thinking part of the brain. Social modeling occurs between humans and robots too.

Working with children aged 4-6, they studied “growth mindset”: the belief that you can get better if you try hard. Parents and teachers have been shown to affect this. They created a growth-mindset robot that plays a game with the child. The robot encourages the child at times determined by a Boltzmann machine (en.wikipedia.org/wiki/Boltzmann_machine). [Over my head.]
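
[Not from the talk, and purely for my own orientation: as I understand it, a Boltzmann machine is a network of binary units with symmetric weights, sampled by repeatedly flipping each unit with a probability set by its neighbors’ activity. A minimal, hypothetical sketch in Python — all sizes and values are made up, and nothing here is from Park’s system:]

    import numpy as np

    rng = np.random.default_rng(0)

    n = 5                                    # number of binary units (made-up size)
    W = rng.normal(scale=0.5, size=(n, n))
    W = (W + W.T) / 2                        # symmetric weights
    np.fill_diagonal(W, 0.0)                 # no self-connections
    b = rng.normal(scale=0.1, size=n)        # biases

    s = rng.integers(0, 2, size=n).astype(float)   # random initial binary state

    def gibbs_step(s):
        # One sweep of Gibbs sampling: flip each unit with its conditional probability.
        for i in range(n):
            activation = W[i] @ s + b[i]
            p_on = 1.0 / (1.0 + np.exp(-activation))   # P(unit i is on | other units)
            s[i] = 1.0 if rng.random() < p_on else 0.0
        return s

    for _ in range(100):
        s = gibbs_step(s)

    print("sampled state:", s)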

Their research showed that playing puzzles with a growth-mindset robot fosters that mindset in children. For example, the children tried harder over time.

They also studied early literacy education using personalized robot tutors, in multiple studies involving about 120 children. The robot, among other things, encourages the child to tell stories. Over four weeks, they found children learn vocabulary more effectively, and when the robot provided more expressive storytelling (rather than speaking in a flat, affectless text-to-speech voice), the children retained more and would mimic that expressiveness.

Now they’re studying fully autonomous storytelling robots. The robot uses the child’s responses to further engage the child. The children respond more, tell longer stories, and stay engaged over longer periods across sessions.

We are headed toward a time when robots are human-centered rather than task-focused. So we need to think about making AI not just human-like but humanistic. We hope to make AI that makes us better people.
