2b2k Archives - Joho the Blog

October 19, 2017

[liveblog] AI and Education session

Jenn Halen, Sandra Cortesi, Alexa Hasse, and Andres Lombana Bermudez of the Berkman Klein Youth and Media team are leading a discussion about AI and Education at the MIT Media Lab as part of the Ethics and Governance of AI program run jointly by Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Sandra gives an introduction to the BKC Youth and Media project. She points out that their projects are co-designed with the groups that they are researching. From the AI folks they’d love ideas and a better understanding of AI, since they are just starting to consider the importance of AI to education and youth. They are creating a Digital Media Literacy Platform (which Sandra says they hope to rename).

They show an intro to AI designed to be useful for a teacher introducing the topic to students. It defines, at a high level, AI, machine learning, and neural networks. They also show “learning experiences” (= “XP”) that Berkman Klein summer interns came up with, including AI and well-being, AI and news, autonomous vehicles, and AI and art. They are committed to working on how to educate youth about AI not only in terms of particular areas, but also privacy, safety, etc., always with an eye towards inclusiveness.

They open it up for discussion by posing some questions. 1. How to promote inclusion? How to open it up to the most diverse learning communities? 2. Did we spot any errors in their materials? 3. How to reduce the complexity of this topic? 4. Should some of the examples become their own independent XPs? 5. How to increase engagement? How to make it exciting to people who don’t come into it already interested in the topic?

[And then it got too conversational for me to blog…]


September 26, 2017

[liveblog] Google AI Conference

I am, surprisingly, at the first PAIR (People + AI Research) conference at Google, in Cambridge. There are about 100 people here, maybe half from Google. The official topic is: “How do humans and AI work together? How can AI benefit everyone?” I’ve already had three eye-opening conversations and the conference hasn’t even begun yet. (The conference seems admirably gender-balanced in audience and speakers.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

The great Martin Wattenberg (half of Wattenberg and Fernanda Viégas) kicks it off, introducing John Giannandrea, a VP at Google in charge of AI, search, and more.

John says that every vertical will be affected by this. “It’s important to get the humanistic side of this right.” He says there are 1,300 languages spoken world wide, so if you want to reach everyone with tech, machine learning can help. Likewise with health care, e.g. diagnosing retinal problems caused by diabetes. Likewise with social media.

PAIR intends to use engineering and analysis to augment expert intelligence, i.e., professionals in their jobs, creative people, etc. And “how do we remain inclusive? How do we make sure this tech is available to everyone and isn’t used just by an elite?”

He’s going to talk about interpretability, controllability, and accessibility.

Interpretability. Google has replaced all of its language translation software with neural network-based AI. He shows an example of Hemingway translated into Japanese and then back into English. It’s excellent but still partially wrong. A visualization tool shows a cluster of three strings in three languages, showing that the system has clustered them together because they are translations of the same sentence. [I hope I’m getting this right.] Another example: an integrated-gradients visualization shows that the system has identified a photo as a fireboat because of the streams of water coming from it. “We’re just getting started on this.” “We need to invest in tools to understand the models.”
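To make the fireboat example a bit more concrete, here is a minimal sketch of the integrated-gradients idea in Python. This is not Google’s tooling; grad_fn is a hypothetical stand-in for the gradient of a trained classifier’s “fireboat” score, and the baseline is simply an all-black image. The attribution for each pixel is the input-minus-baseline difference weighted by the gradient averaged along the path between them, which is why the water-stream pixels end up with the biggest scores.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline=None, steps=50):
    """Approximate integrated-gradients attributions for one input.

    grad_fn(z) should return d(score)/d(z) for the class of interest
    (e.g., a model's "fireboat" score), with the same shape as z.
    baseline is a reference input; an all-zeros (black) image is common.
    """
    if baseline is None:
        baseline = np.zeros_like(x)
    # Average the gradient along the straight path from baseline to x.
    total_grad = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, steps):
        total_grad += grad_fn(baseline + alpha * (x - baseline))
    avg_grad = total_grad / steps
    # Attribution = (input - baseline) * path-averaged gradient, per pixel.
    return (x - baseline) * avg_grad
```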

Controllability. These systems learn from labeled data provided by humans. “We’ve been putting a lot of effort into using inclusive data sets.” He shows a tool that lets you visually inspect the data to see the facets present in them. He shows another example of identifying differences to build more robust models. “We had people worldwide draw sketches. E.g., draw a sketch of a chair.” In different cultures people draw different stick-figures of a chair. [See Eleanor Rosch on prototypes.] And you can build constraints into models, e.g., male and female. [I didn’t get this.]

Accessibility. Internal research from YouTube built a model for recommending videos. Initially it just looked at how many users watched a video. You get better results if you look not just at the clicks but at the lifetime usage by users. [Again, I didn’t get that accurately.]
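As a toy illustration of the metric shift he’s describing (invented numbers, not YouTube’s actual system), ranking by raw clicks and ranking by total time watched can pick different winners:

```python
# Two hypothetical videos: clicks vs. time actually watched per view.
videos = [
    {"id": "clickbait", "clicks": 900, "avg_minutes_watched": 0.5},
    {"id": "tutorial",  "clicks": 300, "avg_minutes_watched": 12.0},
]

# Rank by clicks alone, then by an estimate of total watch time.
by_clicks = max(videos, key=lambda v: v["clicks"])
by_watch_time = max(videos, key=lambda v: v["clicks"] * v["avg_minutes_watched"])

print(by_clicks["id"])      # "clickbait" wins on raw clicks
print(by_watch_time["id"])  # "tutorial" wins on total time watched
```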

Google open-sourced TensorFlow, Google’s AI tool. “People have been using it for everything from sorting cucumbers to tracking the husbandry of cows.” Google would never have thought of these applications.

AutoML: learning to learn. Can we figure out how to enable ML to learn automatically? In one case, it looks at models to see if it can create more efficient ones. Google’s AIY lets DIY-ers build AI in a cardboard box, using Raspberry Pi. John also points to an Android app that composes music. Also, Google has worked with Geena Davis to create software that can identify male and female characters in movies and track how long each speaks. It discovered that movies that have a strong female lead or co-lead do better financially.

He ends by emphasizing Google’s commitment to open sourcing its tools and research.

 


 

Fernanda and Martin talk about the importance of visualization. (If you are not familiar with their work, you are leading deprived lives.) When F&M got interested in ML, they talked with engineers. “ML is very different. Maybe not as different as software is from hardware. But maybe. We’re just finding out.”

M&F also talked with artists at Google. He shows photos of imaginary people, created with ML by the artist Mike Tyka.

This tells us that AI is also about optimizing subjective factors. ML for everyone: Engineers, experts, lay users.

Fernanda says ML spreads across all of Google, and even across Alphabet. What does PAIR do? It publishes. It’s interdisciplinary. It does education. E.g., TensorFlow Playground: a visualization of a simple neural net used as an intro to ML. They open-sourced it, and the Net has taken it up. Also, a journal called Distill.pub aimed at explaining ML and visualization.
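For readers who haven’t seen TensorFlow Playground, the kind of network it animates is roughly the one in this sketch (a NumPy toy of my own for illustration, not the Playground’s code): a couple of small weight matrices, a forward pass, and repeated gradient-descent updates on a tiny dataset.

```python
import numpy as np

# A tiny two-layer network learning XOR, about the scale the Playground shows.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: hidden layer, then output probability.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the mean cross-entropy loss.
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # Gradient-descent step.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]
```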

She “shamelessly” plugs deeplearn.js, tools for bringing AI to the browser. “Can we turn ML development into a fluid experience, available to everyone?”
What experiences might this unleash, she asks.

They are giving out faculty grants. And expanding the Brain residency for people interested in HCI and design…even in Cambridge (!).


September 3, 2017

Free e-book from Los Angeles Review of Books

I’m proud that my essay about online knowledge has been included in a free e-book collecting essays about the effect of the digital revolution, published by the Los Angeles Review of Books.

It’s actually the first essay in the book, which obviously is not arranged in order of preference, but probably means at least the editors didn’t hate it.

 


The next day: Thanks to a tweet by Siva Vaidhyanathan, I and a lot of people on Twitter have realized that all but one of the authors in this volume are male. I’d simply said yes to the editors’ request to re-publish my article. It didn’t occur to me to ask to see the rest of the roster even though this is an issue I care about deeply. LARB seems to feature diverse writers overall, but apparently not so much in tech.

On the positive side, this has produced a crowd-sourced list of non-male writers and thinkers about tech with a rapidity that is evidence of the pain and importance of this issue.


August 18, 2017

Journalism, mistrust, transparency

Ethan Zuckerman brilliantly frames the public’s distrust of institutional journalism in a whitepaper he is writing for Knight. (He’s posted it both on his blog and at Medium. Choose wisely.)
As he said at an Aspen event where he led a discussion of it:

…I think mistrust in civic institutions is much broader than mistrust in the press. Because mistrust is broad-based, press-centric solutions to mistrust are likely to fail. This is a broad civic problem, not a problem of fake news.

The whitepaper explores the roots of that broad civic problem and suggests ways to ameliorate it. The essay is deeply thought, carefully laid out, and vividly expressed. It is, in short, peak Ethanz.

The best news is that Ethan notes that he’s writing a book on civic mistrust.

 


 

In the early 2000s, some of us thought that journalists would blog and we would thereby get to know who they are and what they value. This would help transparency become the new objectivity. Blogging has not become the norm for reporters, although it does occur. But it turns out that Twitter is doing that transparency job for us. Jake Tapper (@jaketapper) at CNN is one particularly good example of this; he tweets with a fierce decency. Maggie Haberman (@maggieNYT) and Glenn Thrush (@glennThrush) from the NY Times, too. And many more.

This, I think, is a good thing. For one thing, it increases trust in at least some news media, while confirming our distrust of news media we already didn’t trust. But we are well past the point where we are ever going to trust the news media as a generalization. The challenge is to build public trust in news media that report as truthfully and fairly as they can.


August 8, 2017

Messy meaning

Steve Thomas [twitter: @stevelibrarian] of the Circulating Ideas podcast interviews me about the messiness of meaning, library innovation, and educating against fake news.

You can listen to it here.


May 15, 2017

[liveblog][AI] Perspectives on community and AI

Chelsea Barabas is moderating a set of lightning talks at the AI Advance, at Berkman Klein and the MIT Media Lab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Lionel Brossi recounts growing up in Argentina and the assumption that all boys care about football. He moved to Chile, which is split between people who do and do not watch football. “Humans are inherently biased.” So, our AI systems are likely to be biased. Cognitive science has shown that the participants in their studies tend to be WEIRD: Western, educated, industrialized, rich, and democratic. Also straight and white. He references Kate Crawford‘s “AI’s White Guy Problem.” We need not only diverse teams of developers, but also to think about how data can be more representative. We also need to think about the users. One approach is to work on goal-centered design.

If we ever get to unbiased AI, Borges‘ statement, “The original is unfaithful to the translation” may apply.

Chelsea: What is an inclusive way to think of cross-border countries?

Lionel: We need to co-design with more people.

Madeline Elish is at Data & Society and an anthropology of technology grad student at Columbia. She’s met designers who thought it might be a good idea to make a phone run faster if you yell at it. But this would train children to yell at things. What’s the context in which such designers work? She and Tim Hwang set about to build bridges between academics and businesses. They asked what designers see as their responsibility for the social implications of their work. They found four core challenges:

1. Assuring users perceive good intentions
2. Protecting privacy
3. Long term adoption
4. Accuracy and reliability

She and Tim wrote An AI Pattern Language [pdf] about the frameworks that guide design. She notes that none of them were thinking about social justice. The book argues that there’s a way to translate between the social justice framework and, for example, the accuracy framework.

Ethan Zuckerman: How much of the language you’re seeing feels familiar from other hype cycles?

Madeline: Tim and I looked at the history of autopilot litigation to see what might happen with autonomous cars. We should be looking at Big Data as the prior hype cycle.

Yarden Katz is at the BKC and at the Dept. of Systems Biology at Harvard Medical School. He talks about the history of AI, starting with a 1958 claim about a translation machine, and Minsky in 1966. Then there was an AI funding winter, but now it’s big again. “Until recently, AI was a dirty word.”

Today we use it schizophrenically: for Deep Learning or in a totally diluted sense as something done by a computer. “AI” now seems to be a branding strategy used by Silicon Valley.

“AI’s history is diverse, messy, and philosophical.” If complexity is embraced, “AI” might not be a useful category for policy. So we should go back to the politics of technology:

1. Who controls the code/frameworks/data?
2. Is the system inspectable/open?
3. Who sets the metrics? Who benefits from them?

The media are not going to be the watchdogs because they’re caught up in the hype. So who will be?

Q: There’s a qualitative difference in the sort of tasks now being turned over to computers. We’re entrusting machines with tasks we used to only trust to humans with good judgment.

Yarden: We already do that with systems that are not labeled AI, like “risk assessment” programs used by insurance companies.

Madeline: Before AI got popular again, there were expert systems. We are reconfiguring our understanding, moving it from a cognition frame to a behavioral one.

Chelsea: I’ve been involved in co-design projects that have backfired. These projects have sometimes been somewhat extractive: going in, getting lots of data, etc. How do we do co-design projects that are not extractive but also aren’t prohibitively expensive?

Nathan: To what degree does AI change the dimensions of questions about explanation, inspectability, etc.?

Yarden: The promoters of the Deep Learning narrative want us to believe you just need to feed in lots and lots of data. DL is less inspectable than other methods. DL is not learning from nothing. There are open questions about their inductive power.


Amy Zhang and Ryan Budish give a pre-alpha demo of the AI Compass being built at BKC. It’s designed to help people find resources exploring topics related to the ethics and governance of AI.


March 18, 2017

How a thirteen-year-old interprets what's been given

“Of course what I’ve just said may not be right,” concluded the thirteen-year-old girl, “but what’s important is to engage in the interpretation and to participate in the discussion that has been going on for thousands of years.”

So said the bas mitzvah girl at an orthodox Jewish synagogue this afternoon. She is the daughter of friends, so I went. And because it is an orthodox synagogue, I didn’t violate the Sabbath by taking notes. Thus that quote isn’t even close enough to count as a paraphrase. But that is the thought that she ended her D’var Torah with. (I’m sure as heck violating the Sabbath now by writing this, but I am not an observant Jew.)

The D’var Torah is a talk on that week’s portion of the Torah. Presenting one before the congregation is a mark of one’s coming of age. The bas mitzvah girl (or bar mitzvah boy) labors for months on the talk, which at least in the orthodox world is a work of scholarship that shows command of the Hebrew sources, that interprets the words of the Torah to find some relevant meaning and frequently some surprising insight, and that follows the carefully worked out rules that guide this interpretation as a fundamental practice of the religion.

While the Torah’s words themselves are taken as sacred and as given by G-d, they are understood to have been given to us human beings to be interpreted and applied. Further, that interpretation requires one to consult the most revered teachers (rabbis) in the tradition. An interpretation that does not present the interpretations of revered rabbis who disagree about the topic is likely to be flawed. An interpretation that writes off prior interpretations with which one disagrees is not listening carefully enough and is likely to be flawed. An interpretation that declares that it is unequivocally the correct interpretation is wrong in that certainty and is likely to be flawed in its stance.

It seems to me — and of course I’m biased — that these principles could be very helpful regardless of one’s religion or discipline. Jewish interpretation takes the Word as the given. Secular fields take facts as the given. The given is not given unless it is taken, and taking is an act of interpretation. Always.

If that taking is assumed to be subjective and without boundaries, then we end up living in fantasy worlds, shouting at those bastards who believe different fantasies. But if there are established principles that guide the interpretations, then we can talk and learn from one another.

If we interpret without consulting prior interpretations, then we’re missing the chance to reflect on the history that has shaped our ideas. This is not just arrogance but stupidity.

If we fail to consult interpretations that disagree with one another, we not only will likely miss the truth, but we will emerge from the darkness certain that we are right.

If we consult prior interpretations that disagree but insist that we must declare one right and the other wrong, we are being so arrogant that we think we can stand in unequivocal judgment of the greatest minds in our history.

If we come out of the interpretation certain that we are right, then we are far more foolish than the thirteen-year-old I heard speak this morning.


March 1, 2017

[liveblog] Five global challenges and the role of the university

Juan Carlos De Martin is giving a lunchtime talk called “Five global challenges and the role of the university,” with Charles Nesson. These are two of my favorite people. Juan Carlos is here to talk about his new book (in Italian), Università Futura – Tra Democrazia e Bit.

Charlie introduces Juan Carlos by describing his first meeting with him at a conference in Torino at which they hatched the idea of the Nexa Center for Internet & Society, which is now a reality.

Juan Carlos begins by tracing the book’s trajectory. In the book, and here, he will talk about five global challenges. Why five? Because that’s how he sees it, but it’s subjective.

  1. Democracy. It’s in crisis.

  2. Environment. For example, you may have heard about this global warming thing. It’s hard for us to think about such large systems.

  3. Technology. E.g., bio tech, AI, nanotech, neuro-cognition. The benefits of these are important, but the problems they raise are very difficult.

  4. Economy. Growth is slowing. Trade is slowing. How do we ensure a decent livelihood to all?

  5. Geopolitics. The world order seems to be undergoing constant change. How do we preserve the peace?

We are in uncharted waters, he says: high risk and high unpredictability. “I don’t want to sound apocalyptic, because I’m not, but we have to face the dangers.”
Juan Carlos makes three observations:

First, we are going to need lots of knowledge, more than ever before.

Second, we’ll need people capable of interpreting, using, and producing such knowledge, more than ever before.

Third, in democracies we need the knowledge to get to as many people as possible, and as many people as possible have to become better critical thinkers. “There’s a clear rejection of experts which we, as people in universities, need to take seriously…What did we do wrong to lose the trust of people?”

These three observations lead to the idea that universities should play an important role. So, what is the current state of the university?

First, for the past forty years, universities have pursued knowledge useful to the economy.

Second, there has been an emphasis on training workers, which makes sense, but has meant less emphasis on educating people as full humans and citizens.

Third, the university has been a normative organization (like non-profits and churches) that has been pushed to become more of a utilitarian organization (like businesses). This shows itself in, for example, the excessive use of quantitative metrics for promotion, an insane emphasis on publishing for its own sake, and a hyper-disciplinarity because it’s easier to publish within a smaller slice.

These mean that the historically multi-dimensional mission of the university has been flattened, and the spirit has gone from normative to utilitarian. “All of this represents a problem if we want the university to help society face … 21st century problems.” (Juan Carlos says that he wrote the book in Italian [his English is perfect] because when he began in 2008, Italian universities were beginning a seven year contraction of 20%.)

We need all kinds of knowledge — not just what looks useful right now — because we don’t know what will be useful. We need interdisciplinarity because so many societal challenges — including all the ones he began the talk with — are interdisciplinary. But the incentives are not currently in that direction. And we need “effective interaction with the general public.” This is not just about communicating or transferring knowledge; it has to be genuinely interactive.

We need, he says, the university to speak the truth.

His proposal is that we “rediscover the roots of the university” and update them to present times. There is a solution in those roots, he says.

At the root, education is a personal relationship among human beings. “Education is not mere information transfer.” This means educating human beings and citizens, not just workers.

Everyone agrees we need critical thinking, but we need to work on how to teach it and what it means. We need critical thinkers because we need people who can handle unexpected situations.

We need universities to be institutions that can take the long view, can go slowly, value silence, that enable concentration. These were characteristics of universities for a thousand years.
What universities can do:

1. To achieve inter-disciplinarity, we cannot abolish disciplines; they play an important role. But we need to avoid walls between them. “Maybe a little short fence” that people can easily cross.

2. We need to strongly encourage heterodox thinking. Some disciplines need this urgently; Juan Carlos calls out economics as an example.

3. The university should itself be a “trustee of the unborn,” i.e., of the generation to come. “The university has always had the role of bridging the dead and the unborn.” In Europe this has been a role of the state, but they’re doing it less and less.

A side effect is that the university should be the conscience and critic of society. He quotes Pres. Drew Faust on whether universities are doing this enough.

4. Universities need to engage with the public, listening to their concerns. That doesn’t mean pandering to them. Only dialogue will help people learn.

5. Universities need to actively employ the Internet to achieve their objectives. Juan Carlos’ research on this topic began with the Internet, but it flipped, focusing first on the university.

Overall, he says, “we need new ideas, critical thinking, and character.” By that last he means moral commitment. Universities can move in that direction by rediscovering their roots, and updating them.

Charlie now leads a session in which we begin by posting questions to http://cyber.harvard.edu/questions/list.php . I cannot keep up with the conversation. The session is being webcast and the recording will be posted. (Charlie is a celebrated teacher with a special skill in engaging groups like this.)


I agree with everything Juan Carlos says, and especially am heartened by the idea that the university as an institution can help to re-moor us. But I then find myself thinking that it took enormous forces to knock universities off their 1,000 year mission. Those same forces are implacable. Can universities deny the fusion of powers that put them in this position in the first place?


September 18, 2016

Lewis Carroll on where knowledge lives

On books and knowledge, from Sylvie and Bruno by Lewis Carroll, 1889:

“Which contain the greatest amount of Science, do you think, the books, or the minds?”

“Rather a profound question for a lady!” I said to myself, holding, with the conceit so natural to Man, that Woman’s intellect is essentially shallow. And I considered a minute before replying. “If you mean living minds, I don’t think it’s possible to decide. There is so much written Science that no living person has ever read: and there is so much thought-out Science that hasn’t yet been written. But, if you mean the whole human race, then I think the minds have it: everything, recorded in books, must have once been in some mind, you know.”

“Isn’t that rather like one of the Rules in Algebra?” my Lady enquired. (“Algebra too!” I thought with increasing wonder.) “I mean, if we consider thoughts as factors, may we not say that the Least Common Multiple of all the minds contains that of all the books; but not the other way?”

“Certainly we may!” I replied, delighted with the illustration. “And what a grand thing it would be,” I went on dreamily, thinking aloud rather than talking, “if we could only apply that Rule to books! You know, in finding the Least Common Multiple, we strike out a quantity wherever it occurs, except in the term where it is raised to its highest power. So we should have to erase every recorded thought, except in the sentence where it is expressed with the greatest intensity.”

My Lady laughed merrily. “Some books would be reduced to blank paper, I’m afraid!” she said.

“They would. Most libraries would be terribly diminished in bulk. But just think what they would gain in quality!”

“When will it be done?” she eagerly asked. “If there’s any chance of it in my time, I think I’ll leave off reading, and wait for it!”

“Well, perhaps in another thousand years or so—”

“Then there’s no use waiting!”, said my Lady. “Let’s sit down. Uggug, my pet, come and sit by me!”


May 9, 2016

Reddit on my LARB review

There’s a small but interesting discussion at the philosophy subreddit of my review of Michael Lynch’s The Internet of Us.

