
February 3, 2021

What’s missing from media literacy?

danah boyd’s 2018 “You think you want media literacy, do you?” remains an essential, frame-changing discussion of the sort of media literacy that everyone, including danah [@zephoria], agrees we need: the sort that usually focuses on teaching us how to not fall for traps and thus how to disbelieve. But, she argues, that’s not enough. We also need to know how to come to belief.

I went back to danah’s brilliant essay because Barbara Fister [@bfister], a librarian I’ve long admired, has now posted “Lizard People in the Library.” Referencing danah’s essay among many others, Barbara asks: Given the extremity and absurdity of many Americans’ beliefs, what’s missing from our educational system, and what can we do about it? Barbara presents a set of important, practical, and highly sensible steps we can take. (Her essay is part of the Project Information Literacy research program.)

The only thing I’d dare to add to either essay — or more exactly, an emphasis I would add — is that we desperately need to learn and teach how to come to belief together. Sense-making as well as belief-forming are inherently collaborative projects. It turns out that without explicit training and guidance, we tend to be very, very bad at them.


Categories: culture, echo chambers, education, libraries, philosophy, social media, too big to know Tagged with: education • epistemology • libraries • philosophy Date: February 3rd, 2021 dw


October 6, 2019

Making the Web kid-readable

Of the 4.67 gazillion pages on the Web, exactly 1.87 nano-bazillion are understandable by children. Suppose there were a convention and a service for making child-friendly versions of any site that wanted to increase its presence and value?

That was the basic idea behind our project at the MindCET Hackathon in the Desert a couple of weeks ago.

MindCET is an ed tech incubator created by the Center for Educational Technology (CET) in Israel. Its founder and leader is Avi Warshavsky, a brilliant technologist and a person of great warmth and character, devoted to improving education for all the world’s children. Over the ten years that I’ve been on the CET tech advisory board, Avi has become a treasured personal friend.

In Yeruham on the edge of the Negev, 14 teams of 6-8 people did the hackathon thing. Our team — to my shame, I don’t have a list of them — pretty quickly settled on thinking about what it would take to create a world-wide expectation that sites that explain things would have versions suitable for children at various grade levels.

So, here’s our plan for Onderstand.com.

Let’s say you have a site that provides information about some topic; our example was a page about how planes fly. It’s written at a normal adult level, or perhaps it assumes even more expertise about the topic. You would like the page to be accessible to kids in grade school.

No problem! Just go to Onderstand.com and enter the page’s URL. Up pops a form that lets you press a button to automatically generate versions for your choice of grade levels. Or you can create your own versions manually. The form also lets you enter useful metadata, including what school kid questions you think your site addresses, such as “How do planes fly?”, “What keeps planes up?”, and “Why don’t planes crash?” (And because everything is miscellaneous, you also enter tags, of course.)
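
To make that concrete, here’s roughly the kind of record the form might produce. The field names below are my own placeholders, not anything we actually specified at the hackathon:

    # Hypothetical Onderstand submission record; field names are illustrative only.
    submission = {
        "source_url": "https://example.com/how-planes-fly",
        "grade_levels": ["grade3", "grade6", "grade9"],  # versions requested or supplied
        "generate_automatically": True,                  # or False if you write them yourself
        "questions_answered": [
            "How do planes fly?",
            "What keeps planes up?",
            "Why don't planes crash?",
        ],
        "tags": ["planes", "flight", "physics"],
    }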

Before I go any further, let me address your question: “It automatically generates grade-specific versions? Hahaha.” Yes, it’s true that in the 36 hours of the hackathon, we did not fully train the requisite machine learning model, in the sense that we didn’t even try. But let’s come back to that…

Ok, so imagine that you now have three grade-specific versions of your page about how planes fly. You put them on your site and give Onderstand their Web addresses as well as the metadata you’ve filled in. (Perhaps Onderstand.com would also host or archive the pages. We did not work out all these details.)

Onderstand generates a button you can place on your site that lets the visitor know that there are kid-ready versions.

The fact that there are those versions available is also recorded by Onderstand.com so that kids know that if they have a question, they can search Onderstand for appropriate versions.

Our business model is the classic “We’re doing something of value so someone will pay for it somehow.” Of course, we guarantee that we will never sell, rent, publish, share or monetize user information. But one positive thing about this approach: The service does not become valuable only once there’s lots of content. Because sites get the kid-ready button, they get value from it even if the Onderstand.com site attracts no visitors.

If the idea were to take off, then a convention that it establishes would be useful even if Onderstand were to fold up like a cheap table. The convention would be something like Wikipedia’s prepending “simple” before an article address. For example, the Wikipedia article “Airplane” is a great example of the problem: It is full of details but light on generalizations, uses hyperlinks as an excuse for lazily relying on jargon rather than readable text, and never actually explains how a plane flies. But if you prepend “simple” to that page’s URL — https://simple.wikipedia.org/wiki/Fixed-wing_aircraft — you get taken to a much shorter page with far fewer details (but also still no explanation of how planes fly).

Now, our hackathon group did not actually come up with what those prepensions should be. Maybe “grade3”, “grade9”, etc. But we wouldn’t want kids to have to guess which grade levels the site has available. So maybe just “school” or some such which would then pop up a list of the available versions. What I’m trying to say is that that’s the only detail left before we transform the Web.
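
Just to show how little machinery the convention itself needs, here’s a sketch of the URL scheme we were gesturing at. The “school” prefix is a placeholder, not something we registered or agreed on:

    from urllib.parse import urlsplit, urlunsplit

    # Hypothetical convention: prepend a subdomain, so that
    # https://school.example.com/how-planes-fly lists the kid-ready versions.
    def kid_ready_url(url, prefix="school"):
        parts = urlsplit(url)
        return urlunsplit(parts._replace(netloc=f"{prefix}.{parts.netloc}"))

    print(kid_ready_url("https://example.com/how-planes-fly"))
    # -> https://school.example.com/how-planes-fly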

The machine learning miracle

Machine learning might be able to provide a fairly straightforward, and often unsatisfactory, way of generating grade-specific versions.

The ML could be trained on a corpus of text that has human-generated versions for kids. The “simple” Wikipedia pages and their adult equivalents could be one source. Textbooks on the same subjects designed for different class levels might be another, even though — unlike the Wikipedia “simple” pages — they are not more or less translations of the same text. There are several experimental simplification applications discussed on the Web already.
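
As an illustration of that first source, here’s a rough sketch of how one might pair “simple” and standard Wikipedia articles into training examples. This is my sketch, not something we built at the hackathon, and it assumes the standard MediaWiki extracts API is available on both wikis:

    import requests

    def plain_extract(domain, title):
        """Fetch an article's plain-text extract from a MediaWiki wiki."""
        resp = requests.get(
            f"https://{domain}/w/api.php",
            params={
                "action": "query",
                "prop": "extracts",
                "explaintext": 1,
                "redirects": 1,
                "format": "json",
                "titles": title,
            },
        )
        pages = resp.json()["query"]["pages"]
        return next(iter(pages.values())).get("extract", "")

    # One (adult text, simplified text) pair per title that exists on both wikis.
    # Titles don't always line up across the wikis, so a real pipeline would need
    # to follow interlanguage links; this is just the idea.
    title = "Airplane"
    pair = (plain_extract("en.wikipedia.org", title),
            plain_extract("simple.wikipedia.org", title))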

Even if this worked, it’s likely to be sub-par because it would just be simplifying language, not generating explanations that creatively think in kids’ terms. For example, to explain flight to a high schooler, you would probably want to explain the Bernoulli effect and the four forces that act on a wing, but for a middle schooler you might start with the experiment in which they blow across a strip of paper, and for a grade schooler you might want to ask if they’ve ever blown on the bottom of a bubble.

So, even if the ML works, the site owner might want to do something more creative and effective. But still, simply having reduced-vocabulary versions could be helpful, and might set an expectation that a site isn’t truly accessible if it isn’t understandable.

Ok, so who’s in on the angel funding round?


Categories: misc Tagged with: ai • education • hackathon • machine learning Date: October 6th, 2019 dw


December 17, 2017

[liveblog] Mariia Gavriushenko on personalized learning environments

I’m at the STEAM ed Finland conference in Jyväskylä where Mariia Gavriushenko is talking about personalized learning environments.


Web-based learning systems are being more and more widely used in large part because they can be used any time, anywhere. She points to two types: Learning management systems and game-based systems. But they lack personalization that makes them suitable for particular learners in terms of learning speed, knowledge background, preferences in learning and career, goals for future life, and their differing habits. Personalized systems can provide assistance in learning and adapt their learning path. Web-based learning shouldn’t just be more convenient. It should also be better adapted to personal needs.


But this is hard. If you can do it, though, the system can monitor the learner’s knowledge level and automatically present the right materials. It can help teachers create suitable material, find the most relevant content, and convert it into comprehensive info. It can also help students identify the best courses and programs.


She talks about two types of personalized learning systems: 1. systems that allow the user to change the system, and 2. systems that change themselves to meet the user’s needs. The systems can be based on rules and context or can be algorithm-driven.


Five main features of adaptive learning systems:

  • Pre-test

  • Pacing and control

  • Feedback and assessment

  • Progress tracking and reports

  • Motivation and reward


The ontological representation of every learner keeps something like a profile for each user, enabling semantic reasoning.


She gives an example of this model: automated academic advising. It’s based on learning analytics. It’s an intelligent learning support system based on semantically-enhanced decision support that identifies gaps and recommends materials and courses. It can create a personal study plan. The ontology helps the system understand which topics are connected to others so that it can identify knowledge gaps.
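
[To illustrate the general idea (my own toy sketch, not Mariia’s actual system), an ontology of prerequisite links makes finding knowledge gaps a simple graph walk:]

    # Toy prerequisite ontology: each topic lists the topics it depends on.
    # My illustration of the general idea, not the presenter's actual model.
    prerequisites = {
        "integrals": ["derivatives"],
        "derivatives": ["limits", "functions"],
        "limits": ["functions"],
        "functions": [],
    }

    def knowledge_gaps(goal, mastered, graph=prerequisites):
        """Return the prerequisites of `goal` that the learner hasn't mastered yet."""
        gaps, stack = set(), [goal]
        while stack:
            for prereq in graph.get(stack.pop(), []):
                if prereq not in mastered:
                    gaps.add(prereq)
                    stack.append(prereq)
        return gaps

    print(knowledge_gaps("integrals", mastered={"functions"}))
    # -> {'derivatives', 'limits'}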


An adaptive vocabulary learning environment provides children with an adaptive way to train their vocabulary, taking into account the individuality of the learner. It assumes the more similar the words, the harder they are to recognize.


Mariia believes we will make increasing use of adaptive educational tech.


Categories: ai, education, liveblog Tagged with: ai • education • personalization • teaching Date: December 17th, 2017 dw


[liveblogging] SMART education

I’m at the STEAM ed Finland conference in Jyväskylä. Maria Kankaanranta, Leena Hiltunen, Kati Clements and Tiina Mäkelä are on the faculty of the School of Education at the University of Jyväskylä. They are going to talk about SMART education.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.


SMART means self-directed, motivated, adaptive, resource-enriched, and technology-embedded learning. (They credit South Korean researchers for this.) This is a paradigm shift: From education at specific times to any time. From lectures to motivated ed methods. From teaching the 3Rs to expanding educational capacity. From traditional textbooks to enriched resources. From a physical space to anywhere there is the enabling tech.


One project (Horizon 2020) works across disciplines to connect students, parents, teachers, and companies. Companies expect universities to develop the skills they need, but you really have to begin with primary school. The aim of the project is to create a pedagogical framework and design principles for attractive and engaging STEM learning environments. She presents a long list of pedagogical design principles that guide the design of these kinds of hybrid learning environments. It includes adaptive learning, self-regulation, project-based learning, novelty, but also conventionality: “you don’t have to abandon everything.”


What beyond MOODLE can we do? The EU has funded instruments for procurement of innovation. The presenters have worked on IMAILE & LEA (LearnTech Accelerator). IMAILE ran for 48 months in four countries. To address problems, the project pointed to two existing solutions: YipTree and AMIGO (an e-books publisher from Spain). YipTree provides individual personalized learning paths (adaptive materials), student motivation by a virtual tutor and by other students, gamification, quick assessment tools, and notifications when a student is having difficulties. They tested this in two schools per country. YipTree did well.


They have been training teachers in computational thinking, programming, and robotics. They use online, mobile apps to make it available and free for all teachers and students. They’re using different training models to motivate and encourage teachers to adopt these apps. E.g., they’re “hijacking” schools and workplaces to train them where they are. Teachers really want human engagement.


Schools have access to tech resources but they’re under-used because the teachers don’t know what’s available and possible. This presentation’s project is helping teachers with this.


Conclusion: Smart ed is not easy. It takes time. It requires getting out of your comfort zone. It requires training, tools, research, and a human touch.


Q&A


Q: Does your model take into account students with disabilities?

A: Yes. Part of this is “access for all.” Also, IMAILE does. Imperfectly. They collaborate with a local school for the impaired.


Categories: education, liveblog Tagged with: education • liveblog Date: December 17th, 2017 dw


[liveblog] Ulla Richardson on a game that teaches reading

I’m at the STEAM ed Finland conference in Jyväskylä where Ulla Richardson is going to talk about GraphoLearn, an adaptive learning method for learning to read.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.


Ulla has been working on the Jyväskylä Longitudinal Study of Dyslexia (JLD). Globally, one third of people can’t read or have poor reading skills; the same is true of one fifth of Europe. About 15% of children have learning disabilities.


One issue: knowing which sound goes with which letters. GraphoLearn is a game to help students with this, developed by a multidisciplinary team. You learn a word by connecting a sound to a written letter. Then you can move to syllables and words. The game teaches by trial and error. If you get it wrong, it immediately tells you the correct sound. It uses a simple adaptive approach to select the wrong choices that are presented. The game aims at being entertaining, and it also motivates with points and rewards. It’s a multi-modal system: visual and audio. It helps dyslexics by training them on the distinctions between sounds. Unlike human beings, it never displays any impatience.

It adapts to the user’s skill level, automatically assessing performance and aiming at 80% accuracy so that it’s challenging but not too challenging.
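
[For flavor, here’s a crude sketch of that kind of rule. It’s my illustration, not GraphoLearn’s actual algorithm:]

    # Toy staircase rule that nudges difficulty toward ~80% accuracy.
    # My illustration, not GraphoLearn's actual method.
    def adjust_difficulty(level, recent_results, target=0.80, window=10):
        """Raise difficulty when recent accuracy is above target, lower it when below."""
        recent = recent_results[-window:]
        if not recent:
            return level
        accuracy = sum(recent) / len(recent)
        if accuracy > target:
            return level + 1
        if accuracy < target:
            return max(1, level - 1)
        return level

    # 90% correct over the last ten trials, so the level goes up.
    print(adjust_difficulty(3, [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]))  # -> 4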


13,000 players have played in Finland, and more in other languages. Ulla displays data that shows positive results among students who use GraphoLearn, including when teaching English where every letter has multiple pronunciations.


There are some difficulties analyzing the logs: there’s great variability in how kids play the game, how long they play, etc. There’s no background info on the students. [I missed some of this.] There’s an opportunity to come up with new ways to understand and analyze this data.


Q&A


Q: Your work is amazing. When I was learning English I could already read Finnish, so I made natural mispronunciations of ape, anarchist, etc. How do you cope with this?


A: Spoken and written English are like separate languages, especially if Finnish is your first language where each letter has only one pronunciation. You need a bigger unit to teach a language like English. That’s why we have the Rime approach where we show the letters in more context. [I may have gotten this wrong.]


Q: How hard is it to adapt the game to each language’s logic?


A: It’s hard.


Categories: ai, education, games, liveblog, machine learning Tagged with: education • games • language • machine learning Date: December 17th, 2017 dw


December 16, 2017

[liveblog] Mirka Saarela and Sanna Juutinen on analyzing education data

I’m at the STEAM ed Finland conference in Jyväskylä. Mirka Saarela and Sanna Juutinen are talking about their analysis of education data.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.


There’s a triennial worldwide study by the OECD to assess students. Usually, people are only interested in its ranking of education by country. Finland does extremely well at this. This is surprising because Finland does not do particularly well in the factors that are taken to produce high-quality educational systems. So Finnish ed has been studied extensively. PISA augments this analysis using learning analytics. (The US is at best average in the OECD ranking.)


Traditional research usually starts with the literature, develops a hypothesis, collects the data, and checks the result. PISA’s data mining approach starts with the data. “We want to find a needle in the haystack, but we don’t know what the needle looks like.” That is, they don’t know what type of pattern to look for.


Results of 2012 PISA: If you cluster all 24M students with their characteristics and attitudes without regard to their country, you get clusters for Asia, the developing world, Islamic countries, and western countries. So, that maps well.


For Finland, the most salient factor seems to be its comprehensive school system that promotes equality and equity.

In 2015 for the first time there was a computerized test environment available. Most students used it. The log file recorded how long students spent on a task and the number of activities (mouse clicks, etc.) as well as the score. They examined the Finnish log file to find student profiles related to students’ strategies and knowledge. Their analysis found five different clusters. [I can’t read the slide from here. Sorry.] They are still studying what this tells us. (They purposefully have not yet factored in gender.)
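
[For the flavor of this kind of analysis (my sketch, not their actual pipeline), clustering students on a few log-file features might look like this:]

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Each row is one student: [time on task (sec), number of actions, score].
    # Synthetic numbers for illustration; the real PISA log features are richer.
    X = np.array([
        [620, 45, 0.8],
        [300, 12, 0.4],
        [540, 60, 0.7],
        [900, 30, 0.9],
        [250, 10, 0.3],
    ])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        StandardScaler().fit_transform(X)
    )
    print(labels)  # cluster assignment for each student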


Nov. 2017 results showed that girls did far better than boys. The test was done in a chat environment which might have been more familiar for the girls? Is the computerization of the tests affecting the results? Is the computerization of education affecting the results? More research is needed.


Q&A


Q: Does the clustering suggest interventions? E.g., “Slow down. Less clicking.”

A: [I couldn’t quite hear the answer, but I think the answer is that it needs more analysis. I think.]


Q: I work for ETS. Are the slides available?


A: Yes, but the research isn’t public yet.


Categories: ai, education, liveblog, machine learning Tagged with: ai • education • liveblog • machine learning Date: December 16th, 2017 dw


[liveblog] Harri Ketamo on micro-learning

I’m at the STEAM ed Finland conference in Jyväskylä. Harri Ketamo is giving a talk on “micro-learning.” He recently won a prestigious prize for the best new ideas in Finland. He is interested in the use of AI for learning.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

We don’t have enough good teachers globally, so we have to think about ed in new ways, Harri says. Can we use AI to bring good ed to everyone without hiring 200M new teachers globally? If we paid teachers equivalent to doctors and lawyers, we could hire those 200M. But we are apparently not willing to do that.


One challenge: Career coaching. What do you want to study? Why? What are the skills you need? What do you need to know?


His company does natural language analysis — not word matches, but meaning. As an example he shows a shareholder agreement. Such agreements always have the same elements. After being trained on law, his company’s AI can create a map of the topic and analyze a block of text to see if it covers the legal requirements…the sort of work that a legal assistant does. For some standard agreements, we may soon not need lawyers, he predicts.


The system’s language model is a mess of words and relations. But if you zoom out from the map, the AI has clustered the concepts. At the Slush Shanghai conference, his AI could develop a list of the companies a customer might want to meet based on a text analysis of the companies’ web sites, etc. Likewise if your business is looking for help with a project.


Finland has a lot of public data about skills and openings. Universities’ curricula are publicly available. [Yay!] Unlike LinkedIn, all this data is public. Harri shows a map that displays the skills and competencies Finnish businesses want and the matching training offered by Finnish universities. The system can explore public information about a user and map that to available jobs and the training that is required and available for it. The available jobs are listed with relevancy expressed as a percentage. It can also look internationally to find matches.
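
[A back-of-the-envelope illustration of how such a relevancy percentage might be computed (mine, not Headai’s actual method): compare a person’s skills against a job’s required skills.]

    # Toy relevancy score: how much of a job's required skill set a person covers,
    # expressed as a percentage. My illustration, not Headai's algorithm.
    def relevancy(person_skills, job_skills):
        person, job = set(person_skills), set(job_skills)
        return 100 * len(person & job) / len(job) if job else 0

    job = ["python", "machine learning", "data visualization", "statistics"]
    me = ["python", "statistics", "teaching"]
    print(f"{relevancy(me, job):.0f}%")  # -> 50%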


The AI can also put together a course for a topic that a user needs. It can tell what the core concepts are by mining publications, courses, news, etc. The result is an interaction with a bot that talks with you in a WhatsApp-like way. (See his paper “Agents and Analytics: A framework for educational data mining with games based learning.”) It generates tests that show what a student needs to study if she gets a question wrong.


His newest project, in process: Libraries are the biggest collections of creative, educational material, so the AI ought to point people there. His software can find the common sources among courses and areas of study. It can discover the skills and competencies that materials can teach. This lets it cluster materials around degree programs. It can also generate micro-educational programs, curating a collection of readings.

His platform has an open API. See Headai.

Q&A


Q: Have you done controlled experiments?


A: Yes. We’ve found that people get 20-40% better performance when our software is used in a blended model, i.e., with a human teacher. It helps motivate people if they can see the areas they need to work on disappear over time.


Q: The software only found male authors in the example you put up of automatically collated materials.


A: Small training set. Gender is not part of the metadata in Finland.


Q: Don’t you worry that your system will exacerbate bias?


A: Humans are biased. AI is a black box. We need to think about how to manage this.


Q: [me] Are the topics generated from the content? Or do you start off with an ontology?


A: It creates its ontology out of the data.


Q: [me] Are you committing to make sure that the results of your AI do not reflect the built in biases?


A: Our news system on the Web presents a range of views. We need to think about how to do this for gender issues with the course software.


Categories: ai, education, liveblog, machine learning, too big to know Tagged with: 2b2k • ai • education • liveblog • machine learning Date: December 16th, 2017 dw


November 5, 2017

[liveblog] Stefania Druga on how kids can help teach us about AI

Stefania Druga, a graduate student in the Personal Robots research group at the MIT Media Lab, is leading a discussion focusing on how children can help us to better understand and utilize AI. She’s going to talk about some past and future research projects.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

She shows two applications of AI developed for kids. The first is Cayla, a robotic doll. “It got hacked three days after it was released in Germany” and was banned there. The second is Aristotle, which was supposed to be an Alexa for kids. A few weeks ago Mattel decided not to release it, after parents worried about their kids’ privacy signed petitions.

Stefania got interested in what research was being done in this field. She found a couple of papers. One (Lovato & Piper 2015) showed that children mirrored how they interact with Siri, e.g., how angry or assertive. Another (McReynolds et al., 2017 [pdf]) found that how children and parents interact with smart toys revealed how little parents and children know about how much info is being collected by these toys, e.g., Hello Barbie’s privacy concerns. It also looked at how parents and children were being incentivized to share info on social media.

Stefania’s group did a pilot study, having parents and 27 kids interact with various intelligent agents, including Alexa, Julie Chatbot, Tina the T.Rex, and Google Home. Four or five children would interact with the agent at a time, with an adult moderator. Their parents were in the room.

Stefania shows a video about this project. After the kids interacted with the agent, they were asked if it was smarter than they are, if it’s a friend, and if it has feelings. Children anthropomorphize AIs in playful ways. Most of the older children thought the agents were more intelligent than they were, while the younger children weren’t sure. Two conclusions: Makers of these devices should pay more attention to how children interact with them, and we need more research.

What did the children think? They thought the agents were friendly and truthful. They thought two Alexa devices were separate individuals. The older children thought about these agents differently than the younger ones did. This latter may be because of how children start thinking about smartness as they progress through school. A question: do they think about artificial intelligence as being the same as human intelligence?

After playing with the agents, they would probe the nature of the device. “They are trying to place the ontology of the device.”

Also, they treated the devices as gender ambiguous.

The media glommed onto this pilot study. E.g., MIT Technology Review: “Growing Up with Alexa.” Or NYTimes: “Co-Parenting with Alexa.” Wired: Understanding Generation Alpha. From these articles, it seems that people are really polarized about the wisdom of introducing children to these devices.

Is this good for kids? “It’s complicated,” Stefania says. The real question is: How can children and parents leverage intelligent agents for learning, or for other good ends?

Her group did another study, this summer, that had 30 pairs of children and parents navigate a robot to solve a maze. They’d see the maze from the perspective of the robot. They also saw a video of a real mouse navigating a maze, and of another robot solving the maze by itself. Does changing the agent (themselves, mouse, robot) change their idea of intelligence? Kids and parents both did the study. Most of the kids mirrored their parents’ choices. They even mirrored the words the parents used…and the value placed on those words.

What next? Her group wants to know how to use these devices for learning. They build extensions using Scratch, including for an open source project called Poppy. (She shows a very cool video of the robot playing, collaborating, painting, etc.) Kids can program it easily. Ultimately, she hopes that this might help kids see that they have agency, and that while the robot is smart at some things, people are smart at other things.

Q&A

Q: You said you also worked with the elderly. What are the chief differences?

A: Seniors and kids have a lot in common. They were especially interested in the fact that these agents can call their families. (We did this on tablets, and some of the elderly can’t use them because their skin is too dry.)

Q: Did learning that they can program the robots change their perspective on how smart the robots are?

A: The kids who got the bot through the maze did not show a change in their perspective. When they become fluent in customizing it and understanding how it computes, it might. It matters a lot to have the parents involved in flipping that paradigm.

Q: How were the parents involved in your pilot study?

A: It varied widely by parent. It was especially important to have the parents there for the younger kids because the device sometimes wouldn’t understand the question, or what sorts of things the child could ask it about.

Q: Did you look at how the participants reacted to robots that have strong or weak characteristics of humans or animals.

A: We’ve looked at whether it’s an embodied intelligent agent or not, but not at that yet. One of our colleagues is looking at questions of empathy.

Q: [me] Do the adults ask their children to thank Siri or other such agents?

A: No.

Q: [me] That suggests they’re tacitly shaping them to think that these devices are outside of our social norms?

Q: In my household, the “thank you” extinguishes itself: you do it a couple of times, and then you give it up.

A: This indicates that these systems right now are designed in a very transactional way. You have to say the wake-up call before every single phrase. But these devices will advance rapidly. Right now it’s unnatural conversation. But with chatbots kids have a more natural conversation, and will say thank you. And kids want to teach it things, e.g., their names or favorite color. When Alexa doesn’t know what the answer is, the natural thing is to tell it, but that doesn’t work.

Q: Do the kids think these are friends?

A: There’s a real question around animism. Is it ok for a device to be designed to create a relationship with, say, a senior person and to convince them to take their pills? My answer is that people tend to anthropomorphize everything. Over time, kids will figure out the limitations of these tools.

Q: Kids don’t have genders for the devices? The speaking ones all have female voices. The doll is clearly a female.

A: Kids were interchanging genders because the devices are in a fluid space in the spectrum of genders. “They’re open to the fact that it’s an entirely new entity.”

Q: When you were talking about kids wanting to teach the devices things, I was thinking maybe that’s because they want the robot to know them. My question: Can you say more about what you observed with kids who had intelligent agents at home as opposed to those who do not?

A: Half already had a device at home. I’m running a workshop in Saudi Arabia with kids there. I’m very curious to see the differences. Also in Europe. We did one in Colombia among kids who had never seen an Alexa before and who wondered where the woman was. They thought there must be a phone inside. They all said good bye at the end.

Q: If the wifi goes down, does the device’s sudden stupidness concern the children? Do they think it died?

A: I haven’t tried that.

[me] Sounds like that would need to go through an IRB.

Q: I think your work is really foundational for people who want to design for kids.


Categories: ai, education, ethics, liveblog Tagged with: ai • children • education • ethics • robots Date: November 5th, 2017 dw


October 25, 2017

[liveblog] John Palfrey’s new book (and thoughts on rules vs. models)

John Palfrey is doing a launch event at the Berkman Klein Center for his new book, Safe Spaces, Brave Spaces: Diversity and Free Expression in Education. John is the Head of School at Phillips Academy Andover, and for many years was the executive director of the Berkman Klein Center and the head of the Harvard Law School Library. He’s also the chairman of the board of the Knight Foundation. This event is being put on by the BKC, the Law Library, and Andover. His new book is available on paper, or online as an open access book. (Of course it is. It’s John Palfrey, people!)

[Disclosure: Typical conversations about JP, when he’s not present, attempt — and fail — to articulate his multi-faceted awesomeness. I’ll fail at this also, so I’ll just note that JP is directly responsible for my affiliation with the BKC and for my co-directorship of the Harvard Library Innovation Lab…and those are just the most visible ways in which he has enabled me to flourish as best I can.]

Also, at the end of this post I have some reflections on rules vs. models, and the implicit vs. explicit.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

John begins by framing the book as an attempt to find a balance between diversity and free expression. Too often we have pitted the two against each other, especially in the past few years, he says: the left argues for diversity and the right argues for free expression. It’s important to have both, although he acknowledges that there are extremely hard cases where there is no reconciliation; in those cases we need rules and boundaries. But we are much better off when we can find common ground.

“This may sound old-fashioned in the liberal way. And that’s true,” he says. But we’re having this debate in part because young people have been advancing ideas that we should be listening to. We need to be taking a hard look.

Our institutions should be deeply devoted to diversity, equity and inclusion. Our institutions haven’t been as supportive of these as they should be, although they’re getting better at it, e.g. getting better at acknowledging the effects of institutional racism.

The diversity argument pushes us toward the question of “safe spaces.” Safe spaces are crucial in the same way that every human needs a place where everyone around them supports them and loves them, and where you can say dumb things. We all need zones of comfort, with rules implicit or explicit. It might be a room, a group, a virtual space… E.g., survivors of sexual assault need places where they know there are rules and they can express themselves without feeling at risk.

But, John adds, there should also be spaces where people are uncomfortable, where their beliefs are challenged.

Spaces of both sorts are experienced differently by different people. Privileged people like John experience spaces as safe that others experience as uncomfortable.

The examples in his book include: trigger warnings, safe spaces, the debates over campus symbols, the disinvitation of speakers, etc. These are very hard to navigate and call out for a series of rules or principles. Different schools might approach these differently. E.g., students from the Gann Academy, a local Jewish high school, are here tonight. They well might experience a space differently than students at Andover. Different schools well might need different rules.

Now John turns it over to students for comments. (This is very typical JP: A modest but brilliant intervention and then a generous deferral to the room. I had the privilege of co-teaching a course with him once, and I can attest that he is a brilliant, inspiring teacher. Sorry to be such a JP fanboy, but I am at least an evidence-based fanboy.) [I have not captured these student responses adequately, in some cases simply because I had trouble hearing them. They were remarkable, however. And I could not get their names with enough confidence to attempt to reproduce them here. Sorry!]

Student Responses

Student: I graduated from Andover and now I’m at Harvard. I was struck by the book’s idea that we need to get over the dichotomy between diversity and free expression. I want to address Chapter 5, about hate speech. It says each institution ought to assess its own values to come up with its principles about speech and diversity, and those principles ought to be communicated clearly and enforced consistently. But, I believe, we should in fact be debating what the baseline should be for all institutions. We don’t all have full options about what school we’re going to go to, so there ought to be a baseline we all can rely on.

JP: Great critique. Moral relativism is not a good idea. But I don’t think one size fits all. In the hardest cases, there might be sharpest limits. But I do agree there ought to be some sort of baseline around diversity, equity, and inclusion. I’d like to see that be a higher baseline, and we’ve worked on this at Andover. State universities are different. E.g., if a neo-Nazi group wants to demonstrate on a state school campus and they follow the rules laid out in the Skokie case, etc., they should be allowed to demonstrate. If they came to Andover, we’d say no. As a baseline, we might want to change the regulations so that the First Amendment doesn’t apply if the experience is detrimental to the education of the students; that would be a very hard line to draw. Even if we did, we still might want to allow local variations.

Student: Brave spaces are often built from safe spaces. E.g., at Andover we used Facebook to build a safe space for women to talk, in the face of academic competitions where misogyny was too common. This led to creating brave places where open, frank discussion across differences was welcomed.

JP: Yes, giving students a sense of safety so they can be brave is an important point. And, yes, brave spaces do often grow from safe spaces.

Andover student: I was struck by why diversity is important: the cross-pollination of ideas. But from my experience, a lot of that hasn’t occurred because we’re stuck in our own groups. There’s also typically a divide between the students and the faculty. Student activists are treated as if they’re just going through a phase. How do we bridge that gap?

JP: How do we encourage more cross-pollination? It’s a really hard problem for educators. I’ve been struck by the difference between teaching at Harvard Law and Andover in terms of the comfort with disagreeing across political divides; it was far more comfortable at the Law School. I’ve told students if you present a paper that disagrees with my point of view and argues for it beautifully, you’ll do better than parroting ideas back to me. Second, we have to stop using demeaning language to talk about student activists. BTW, there is an interesting dynamic, as teachers today may well have been activists when they were young and think of themselves as the reformers.

Student: [hard to hear] At Andover, our classes were seminar-based, which is a luxury not all students have. Also: Wouldn’t encouraging a broader spread of ideas create schisms? How would you create a school identity?

JP: This echoes the first student speaker’s point about establishing a baseline. Not all schools can have 12 students with two teachers in a seminar, as at Andover. We need to find a dialectic. As for schisms: we have to communicate values. Institutions are challenged these days but there is a huge place for them as places that convey values. There needs to be some top down communication of those values. Students can challenge those values, and they should. This gets at the heart of the problem: Do we tolerate the intolerant?

Student: I’m a graduate of Andover and currently at Harvard. My generation has grown up with the Internet. What happens when what is supposed to be a safe space becomes a brave space for some but not all? E.g., a dorm where people speak freely thinking it’s a safe space. What happens when the default values override what someone else views as comfortable? What is the power of an institution to develop, monitor, and mold what people actually feel? When communities engage in groupthink, how can an institution construct safe spaces?

JP: I don’t have an easy answer to this. We do need to remember that these spaces are experienced differently by different people, and the rules ought to reflect this. Some of my best learning came from late night bull sessions. It’s the duty of the institution to do what it can to enable that sort of space. But we also have to recognize that people who have been marginalized react differently. The rule sets need to reflect that fact.

Student: Andover has many different forum spaces available, from hallways to rooms. We get to choose when and where these conversations will occur. For a more traditional public high school where you only have a 30-person classroom as a forum, how do we have the difficult conversations that students at Andover choose to have in more intimate settings?

JP: The size and rule-set of the group matters enormously. Even in a traditional HS you can still break a class into groups. The answer is: How do you hack the space?

Student: I’m a freshman at Harvard. Before the era of safe spaces, we’d call them friends: people we can talk with and have no fear that our private words will be made public, and where we will not be judged. Safe spaces may exclude people, e.g., a safe space open only to women.

JP: Andover has a group for women of color. That excludes people, and for various reasons we think that’s entirely appropriate and useful.

Q&A

Q [Terry Fisher]: You refer frequently to rule sets. If we wanted to have a discussion in a forum like this, you could announce a set of rules. Or the organizer could announce values, such as: we value respect, or we want people to take the best version of what others say. Or, you could not say anything and model it in your behavior. When you and I went to school, there were no rules in classrooms. It was all done by modeling. But this also meant that gender roles were modeled. My experience of you as a wonderful teacher, JP, is that you model values so well. It doesn’t surprise me that so many of your students talk with the precision and respectfulness that you model. I am worried about relying on rule sets, and doubt their efficacy for the long term. Rather, the best hope is people modeling and conveying better values, as in the old method.

JP: Students, Terry Fisher was my teacher. My answer will be incredibly tentative: It is essential for an institution to convey its values. We do this at Andover. Our values tell us, for example, that we don’t want gender-based balance and are aware that we are in a misogynist culture, and thus need reasonable rules. But, yes, modeling is the most powerful.

Q [Dorothy Zinberg]: I’ve been at Harvard for about 70 yrs and I have seen the importance of an individual in changing an institution. For example, McGeorge Bundy thought he should bring 12 faculty to Harvard from non-traditional backgrounds, including Erik Erikson who did not have a college degree. He had been a disciple of Freud’s. He taught a course at Harvard called “The Lifecycle.” Every Harvard senior was reading The Catcher in the Rye. Erikson was giving brilliant lectures, but I told him it was from his point of view as a man, and had nothing to do with the young women. So, he told me, a grad student, to write the lectures. No traditional professor would have done that. Also: for forming groups, there’s nothing like closing the door. People need to be able to let go and try a lot of ideas.

Q: I am from the Sudan. How do you create a safe space in environments that are exclusive? [I may have gotten that wrong. Sorry.] How do you acknowledge the Native American tribes whose land this institution is built on, or the slaves who did the building?

JP: We all have that obligation. [JP gives some examples of the Law School recently acknowledging the slave labor, and the money from slave holders, that helped build the school.]

Q: You used a kitchen as an example of a safe space. Great example. But kitchens are not established or protected by any authority. It’s a new idea that institutions ought to set these up. Do you think there should be safe spaces that are privately set up as well as by institutions? Should some be permitted to exclude people or not?

(JP asks a student to respond): Institutional support can be very helpful when you have a diversity of students. Can institutional safe spaces supplement private ones? I’m not sure. And I do think exclusive groups have a place. As a consensus forms, it’s important to allow the marginalized voices to connect.

Q [head of Gann]: I’m a grad of Phillips Academy. As head of a religious school, we’re struggling with all these questions. Navigating these spaces isn’t just a political or intellectual activity. It is a work of the heart. If the institution thinks of this only as a rational activity and doesn’t tend to the hearts of our students, and is not explicit about the habits of heart we need to navigate these sensitive waters, only those with natural emotional skills will be able to flourish. We need to develop leaders who can turn hard conversations into generative ones. What would it look like to take on the work of developing social and emotional development?

JP: I’ve been to Gann and am confident that’s what you’re doing. And you can see evidence of Andover’s work on it in the students who spoke tonight. Someone asked me if a student became a Nazi, would you expel him? Yes, if it were apparent in his actions, but probably not for his thoughts. Ideally, our students won’t come to have those views because of the social and emotional skills they’re learning. But people in our culture do have those views. Your question brings it back to the project of education and of democracy.

[This session was so JP!]

 


 

A couple of reactions to this discussion without having yet read the book.

First, about Prof. Fisher’s comment: I think we are all likely to agree that modeling the behavior we want is the most powerful educational tool. JP and Prof. Fisher are both superb, well, models of this.

But, as Prof. Fisher noted in his question, the dominant model of discourse for our generation silently (and sometimes explicitly) favored males, white middle class values, etc. Explicit rules weren’t as necessary because we had internalized them and had stacked the deck against those who were marginalized by them. Now that diversity has thankfully become an explicit goal, and now that the Internet has thrown us into conversations across differences, we almost always need to make those rules explicit; a conversation among people from across divides of culture, economics, power, etc. that does not explicitly acknowledge the different norms under which the participants operate is almost certainly going to either fragment or end in misunderstanding.

(Clay Shirky and I had a collegial difference of opinion about this about fifteen years ago. Clay argued for online social groups having explicit constitutions. I argued for the importance of the “unspoken” in groups, and the damage that making norms explicit can cause.)

Second, about the need for setting a baseline: I’m curious to see what JP’s book says about this, because the evidence is that we as a culture cannot agree about what the baseline is: vociferous and often nasty arguments about this have been going on for decades. For example, what’s the baseline for inviting (or disinviting) people with highly noxious views to a private college campus? I don’t see a practical way forward for establishing a baseline answer. We can’t even get Texas schools to stop teaching Creationism.

So, having said that modeling is not enough, and having despaired at establishing a baseline, I think I am left being unhelpfully dialectical:

1. Modeling is essential but not enough.

2. We ought to be appropriately explicit about rules in order to create places where people feel safe enough to be frank and honest…

3. …But we are not going to be able to agree on a meaningful baseline for the U.S., much less internationally — “meaningful” meaning that it is specific enough that it can be applied to difficult cases.

4. But modeling may be the only way we can get to enough agreement that we can set a baseline. We can’t do it by rules because we don’t have enough unspoken agreement about what those rules should be. We can only get to that agreement by seeing our leading voices in every field engage across differences in respectful and emotionally truthful ways. So at the largest level, I find I do agree with Prof. Fisher: we need models.

5. But if our national models are to reflect the values we want as a baseline, we need to be thoughtful, reflective, and explicit about which leading voices we want to elevate as models. We tend to do this not by looking for rules but by looking for Prof. Fisher’s second alternative: values. For example, we say positively that we love John McCain’s being a “maverick” or Kamala Harris’ careful noting of the evidence for her claims, and we disdain Trump’s name-calling. Rules derive from values such as those. Values come before rules.

I just wish I had more hope about the direction we’re going in…although I do see hopeful signs in some of the model voices who are emerging, and most of all, in the younger generation’s embrace of difference.


Categories: culture, education, law, liveblog, politics Tagged with: 2b2k • difference • diversity • education Date: October 25th, 2017 dw


October 19, 2017

[liveblog] AI and Education session

Jenn Halen, Sandra Cortesi, Alexa Hasse, and Andres Lombana Bermudez of the Berkman Klein Youth and Media team are leading a discussion about AI and education at the MIT Media Lab as part of the Ethics and Governance of AI program run jointly by Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Sandra gives an introduction to the BKC Youth and Media project. She points out that their projects are co-designed with the groups that they are researching. From the AI folks they’d love ideas and a better understanding of AI, for they are just starting to consider the importance of AI to education and youth. They are creating a Digital Media Literacy Platform (which Sandra says they hope to rename).

They show an intro to AI designed to be useful for a teacher introducing the topic to students. It defines, at a high level, AI, machine learning, and neural networks. They also show “learning experiences” (= “XP”) that Berkman Klein summer interns came up with, including AI and well-being, AI and news, autonomous vehicles, and AI and art. They are committed to working on how to educate youth about AI not only in terms of particular areas, but also privacy, safety, etc., always with an eye towards inclusiveness.

They open it up for discussion by posing some questions. 1. How to promote inclusion? How to open it up to the most diverse learning communities? 2. Did we spot any errors in their materials? 3. How to reduce the complexity of this topic? 4. Should some of the examples become their own independent XPs? 5. How to increase engagement? How to make it exciting to people who don’t come into it already interested in the topic?

[And then it got too conversational for me to blog…]


Categories: ai, education, liveblog Tagged with: 2b2k • ai • education • machine learning Date: October 19th, 2017 dw

