Culture Archives - Joho the Blog

August 7, 2017

Cymbeline: Shakespeare’s worst play (Or: Lordy, I hope there’s a tape)

The hosts of the BardCast podcast consider Cymbeline to probably be Shakespeare’s worst play. Not enough happens in the first two acts, the plot is kuh-razy, it’s a mishmash of styles and cultures, and it over-explains itself time and time again. That podcast is far from alone in thinking that it’s the Bard’s worst, although, as BardCast says, even the Bard’s worst is better than just about anything. Nevertheless, when was the last time you saw a performance of Cymbeline? Yeah, me neither.

We saw it yesterday afternoon, in its final performance at Shakespeare & Co in Lenox, Mass. It was fantastic: hilarious, satisfactorily coherent (which is praiseworthy because the plot is indeed crazy), and at times moving.

It was directed by the founder of the company, Tina Packer, and showed her usual commitment to modernizing Shakespeare by finding every emotional tone and every laugh in the original script. The actors enunciate clearly, but since we modern folk don’t understand many of the words and misunderstand more than that, the actors use body language, cues, and incredibly well-worked-out staging to make their meaning clear. We used to take our young children to Shakespeare & Co. shows, and they loved them.

I’m open to being convinced by a Shakespeare scholar that the Shakespeare & Co.’s Cymbeline was a travesty that had nothing to do with Shakespeare’s intentions, even though the players said all the words he wrote and honored the words’ magnificence. I’m willing to acknowledge that, for example, when Imogen and King Cymbeline offer each other words of condolence about the death of the wicked, wicked queen, Shakespeare didn’t think they’d wait a beat and then burst out laughing. But when Posthumus comes before the King at the end, bemoaning the death of his beloved Imogen, I would not be surprised if Shakespeare were to nod in appreciation as in this production the audience bursts into loud laughter because Imogen, still in disguise as a boy, is scrambling towards Posthumus, gesticulating ever more wildly that she is in fact she for whom he mourns. Did Shakespeare intend that? Probably not. Does it work? One hundred percent.

These two embellishments are emblematic of the problem with the play. In that final scene, it is revealed to the King in a single speech that the Queen he has loved for decades in fact always hated him, tried to poison him, and was a horrible, horrible person. There’s little or nothing in the play that explains how the King could not have had an inkling of this, and he seems to get over the sudden revelation of his mate’s iniquity in a heartbeat so that the scene can get on with its endless explication. The laugh he shares with his daughter draws a huge response from the audience, but only because the words of sorrow Shakespeare gives the King and Imogen seem undeserved for a Queen so resolutely evil; the addition of the laugh solves a problem with the script. Likewise, Imogen’s scramble toward Posthumus, waving her arms in a “Hey, I’m right here!” gesture, turns Posthumus’ mournful declaration of his devastation at the death of Imogen into comic overstatement.

To be clear, most of the interpretations seem to bring Shakespeare’s intentions to life, even if in unexpected ways. For example, Jason Asprey’s Cloten was far different from the thuggish and thoroughly villainous character we expected. Asprey played him hilariously as a preening coward. This had me concerned because I knew that he is killed mid-play in a fight with the older of two young princes who have been brought up in a cave. (It’s a weird plot.) How can the prince kill such an enjoyable buffoon without making us feel like someone casually shot Capt. Jack Sparrow halfway through the first Pirates of the Caribbean movie? But the staging and the acting are so well done that, amazingly, the biggest laugh of the show came when the prince enters the stage holding Cloten’s severed head. (Don’t judge me. You would have laughed, too.)

So, this may well be Shakespeare’s worst play. If so, it got a performance that found everything good in it, and then some.

I do want to at least mention the brilliance and commitment of the actors. Some we have been seeing every summer for decades; others are new or newer to us. But this is an amazing group. Among the cast members who were new to us, Ella Loudon was remarkable as the older prince. I feel bad singling anyone out, but, there, I did it.

Finally, Shakespeare & Co. doesn’t post videos of performances of their plays after they’ve run. It makes me heartsick that they do not. I’ve asked them about this in the past, and apparently the problem is with the actors’ union. I was brought up in a pro-union household and continue to be favorably inclined toward unions, but I wish there were a way to work this out. It’d be good for the world to be able to see these exceptional performances and come to love Shakespeare.
It would of course also be good for Shakespeare & Co.

July 20, 2017

I didn’t like the new Planet of the Apes movie. [No spoilers.]

War for Planet of the Apes has 95% positive ratings at Rotten Tomatoes. Many of the cited reviews are effusive. For example, Charles Taylor at Newsweek calls it “a consistently intelligent, morally thoughtful and often beautiful picture.”

I’d rephrase that a bit. I think it was a dumb, predictable, boring movie with a couple of nice landscape shots. We went to see it on one of our few movie nights out because we’d enjoyed the first two in this series.

If WARPA weren’t about apes but were instead about the actual human isms it intends to get us to see from the Other’s perspective — racism, colonialism, militarism — we’d view it as embarrassingly trite and shallow. Casting apes as the victims doesn’t make it any less so.

It doesn’t help that while the facial animations are incredible, the ape bodies look like pretty good animations of people wearing ape suits. Plus, I have to say that these apes’ lack of genitalia or assholes diminishes the vividness of the premise of the movie: the apes we’ve treated as an inferior species are deserving of respect and dignity. Instead, we get damn, dirty hairy aliens.

But most of all, there isn’t a cliché the movie misses. If you’re sitting in your seat thinking that the next obvious thing to happen is X, then X will happen. Guaranteed. The only surprises are the plot holes, of which there are many.

The music is bad in itself and is used as a cudgel. They might as well have skipped the music and just put in subtitles like “Feel sorrow here.”

Full marks to Andy Serkis and the motion capture crew. As others have suggested, he deserves his Special Achievement Oscar already. Well, he deserved it for Lord of the Rings, but his work in this movie is absolutely its highlight. Steve Zahn also has a good turn as the comic relief. But poor Woody Harrelson is stuck with ridiculous lines and a clumsy narrative attempt to give his character some depth. His best moment is when he shaves his head in one of the movie’s embarrassing flags that it thinks it’s on a par with films like Apocalypse Now.

Also, this movie is no fun. It’s grim. It’s boring. It’s unfair to the humans.

That last point is not a political complaint because lord knows we deserve all the monkey feces thrown at us. It’s instead a complaint about the shallowness of the movie-making.

Overall, I’d give it a 95% chance of disappointing you.

July 18, 2017

America's default philosophy

John McCumber — a grad school colleague with whom I have alas not kept up — has posted at Aeon an insightful historical argument that America’s default philosophy came about because of a need to justify censoring American communist professors (resulting in a naive scientism) and a need to have a positive alternative to Marxism (resulting in the adoption of rational choice theory).

That compressed summary does not do justice to the article’s grounding in the political events of the 1950s nor to how well-written and readable it is.

June 28, 2017

Re-reading Hornblower

I read all of C.S. Forester’s Horatio Hornblower series when I was in high school.

I’m on a week of vacation — i.e., a nicer place to work — and have been re-reading them.

Why isn’t everyone re-reading them? They’re wonderful. Most of the seafaring descriptions are opaque to me, but it doesn’t matter. The stories are character-based, and Forester is great at expressing personality succinctly, as well as taking us deep into Hornblower’s character over the course of the books. Besides, all the talk of binneys ’round the blaggard binge doesn’t get in the way of understanding the action.

Some prefer Patrick O’Brian’s Aubrey-Maturin “Master and Commander” series. They are wrong. I believe the Internet when it says O’Brian’s battles are more realistic because they’re based on actual events. I don’t care. I do care, however, about O’Brian’s clumsy construction of his main characters. I can sense the author trying to inflate them into three dimensions. Then they’re given implausible roles and actions.

Of course you may disagree with me entirely about that. But here’s the killer for me: O’Brian relies on long pages of back-and-forth dialogue…while not telling you who’s talking. I don’t like having to count back by twos to find the original speaker. All I need is an occasional, “‘Me, neither,’ said Jack.” Is that asking too much?

Anyway, take a look at Hornblower and the Atropos to see if you’re going to like the series. It begins with a few chapters of Hornblower arranging the logistics for the flotilla portion of Lord Nelson’s funeral. If you find yourself as engrossed in chapters about logistics as I did, you’re probably hooked forever.

June 13, 2017

Top 2 Beatles songs

About a week ago, out of the blue I blurted out to my family what the two best Beatles songs are. I pronounced this with a seriousness befitting the topic, and with a confidence born of the fact that it’s a ridiculous question and it doesn’t matter anyway.

Vulture just published a complete ranking of all Beatles songs.

Nailed it.

Their #1 selection is an obvious contender. #2 is controversial and probably intentionally so. But, obviously, I think it’s a good choice.

If you want to see what they chose: #1, “A Day in the Life.” #2, “Strawberry Fields Forever.”

By the way, the Vulture write-ups of each of the songs are good. At least the ones I read were. If you’re into this, the best book I’ve read is Ian MacDonald’s Revolution in the Head, which has an essay on each recording with comments about the social and personal context of the song and a learned explanation of the music. Astounding book.

June 6, 2017

[liveblog] metaLab

Harvard metaLab is giving an informal Berkman Klein talk about their work on designing for ethical AI. Jeffrey Schnapp introduces metaLab as “an idea foundry, a knowledge-design lab, and a production studio experimenting in the networked arts and humanities.” The discussion today will be about metaLab’s various involvements in the Berkman Klein – MIT Media Lab project on ethics and governance of AI. The room is packed with Fellows and the newly arrived summer interns.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Matthew Battles and Jessica Yurkofsky begin by talking about Curricle, a “new platform for experimenting with shopping for courses.” How can the experience be richer, more visual, and use more of the information and data that Harvard has? They’ve come up with a UI that has three elements: traditional search, a visualization, and a list of the results.

They’ve been grappling with the ethics of putting forward new search algorithms. The design is guided by transparency, autonomy, and visualization. Transparency means that they make apparent how the search works, allowing students to assign weights to keywords. If Curricle makes recommendations, it will explain that it’s because other students like you have chosen it or because students like you have never done this, etc. Visualization shows students what’s being returned by their search and how it’s distributed.
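
Curricle’s internals weren’t shown, but the transparency idea (student-assigned keyword weights plus an explanation attached to every result) can be sketched in a few lines. Everything below, from the field names to the scoring function and the sample courses, is a hypothetical illustration, not actual Curricle code.

```python
# Hypothetical sketch of transparent, weighted keyword search as described
# above. Field names, weights, and courses are invented for illustration;
# this is not Curricle's actual implementation.

def score_course(course, keyword_weights):
    """Score a course by summing the student-assigned weights of the
    keywords that appear in its title or description."""
    text = (course["title"] + " " + course["description"]).lower()
    matched = {kw: w for kw, w in keyword_weights.items() if kw.lower() in text}
    return sum(matched.values()), matched  # `matched` doubles as the explanation

courses = [
    {"title": "Intro to Ethics", "description": "Moral philosophy, AI, and law."},
    {"title": "Data Visualization", "description": "Charts, maps, and design."},
]
weights = {"ethics": 3.0, "visualization": 1.0, "ai": 2.0}  # set by the student

for course in courses:
    score, why = score_course(course, weights)
    # Transparency: every ranking comes with the reason for it.
    print(course["title"], score, why)
```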

Similar principles guide a new project, AI Compass, that is the entry point for information about Berkman Klein’s work on the Ethics and Governance of AI project. It is designed to document the research being done and to provide a tool for surveying the field more broadly. They looked at how neural nets are visualized, how training sets are presented, and other visual metaphors. They are trying to find a way to present these resources in their connections. They have decided to use Conway’s Game of Life [which I was writing about an hour ago, which freaks me out a bit]. The game allows complex structures to emerge from simple rules. AI Compass is using animated cellular automata as icons on the site.
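
The appeal of Conway’s rules is how little machinery they need. Here is a minimal sketch of the standard Game of Life step in Python, included only as a reminder of how the cellular automata behind those animated icons work; it is not metaLab’s code.

```python
# Minimal Conway's Game of Life step, to illustrate "complex structures from
# simple rules." Standard algorithm only; not metaLab's actual icon code.
from collections import Counter

def step(live_cells):
    """Advance one generation. live_cells is a set of (x, y) tuples."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has 3 live neighbors,
    # or has 2 live neighbors and is already alive.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in live_cells)}

# A "glider": after four generations it is the same shape, shifted diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))
```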

metaLab wants to enable people to explore the information at three different scales. The macro scale shows all of the content arranged into thematic areas. This lets you see connections among the pieces. The middle scale shows the content with more information. At the lowest scale, you see the resource information itself, as well as connections to related content.

Sarah Newman talks about how AI is viewed in popular culture: the Matrix, Ahnuld, etc. We generally don’t think about AI as it’s expressed in the tools we actually use, such as face recognition, search, recommendations, etc. metaLab is interested in how art can draw out the social and cultural dimensions of AI. “What can we learn about ourselves by how we interact with, tell stories about, and project logic, intelligence, and sentience onto machines?” The aim is to “provoke meaningful reflection.”

One project is called “The Future of Secrets.” Where will our email and texts be in 100 years? And what does this tell us about our relationship with our tech? Why and how do we trust it? It’s an installation that’s been at the Museum of Fine Arts in Boston and recently in Berlin. People enter secrets that are printed out anonymously. People created stories, most of which weren’t true, often about the logic of the machine. People tended to project much more intelligence onto the machine than was there. Cameras were watching and would occasionally print out images from the show itself.

From this came a new piece (done with Fellow Rachel Kalmar) in which a computer reads the secrets out loud. It will be installed at the Berkman Klein Center soon.

Working with Kim Albrecht in Berlin, the center is creating data visualizations based on the data that a mobile phone collects, including the accelerometer. These visualizations let us see how the device is constructing an image of the world we’re moving through. That image is messy, noisy.

The lab is also collaborating on a Berlin exhibition, adding provocative framing using X Degrees of Separation. It finds relationships among objects from disparate cultures. What relationships do algorithms find? How does that compare with how humans do it? What can we learn?

Starting in the fall, Jeffrey and a co-teacher are going to be leading a robotics design studio, experimenting with interior and exterior architecture in which robotic agents are copresent with human actors. This is already happening, raising regulatory and urban planning challenges. The studio will also take seriously machine vision as a way of generating new ways of thinking about mobility within city spaces.

Q&A

Q: me: For AI Compass, where’s the info coming from? How is the data represented? Open API?

Matthew: It’s designed to focus on particular topics. E.g., Youth, Governance, Art. Each has a curator. The goal is not to map the entire space. It will be a growing resource. An open API is not yet on the radar, but it wouldn’t be difficult to do.

Q: At the AI Advance, Jonathan Zittrain said that organizations are a type of AI: governed by a set of rules, they grow and learn beyond their individuals, etc.

Matthew: We hope that the way to deal with this very capacious approach to AI is through artists. What have artists done that bears on AI beyond the cinematic tropes? There’s a rich discourse about this. We want to be in dialogue with all sorts of people about it.

Q: About Curricle: Are you integrating Q results [student responses to classes], etc.?

Sarah: Not yet. There are mixed feelings from administrators about using that data. We want Curricle to encourage people to take new paths. The Q data tends to encourage people down old paths. Curricle will let students annotate their own paths and share them.

Jeffrey: We’re aiming at creating a curiosity engine. We’re working with a century of curricular data. This is a rare privilege.

me: It’d enrich the library if the data about resources was hooked into LibraryCloud.

Q: kendra: A useful feature would be finding a random course that fits into your schedule.

A: In the works.

Q: It’d be great to have transparency around the suggestions of unexpected courses. We don’t want people to be choosing courses simply to be unique.

A: Good point.

A: The same tool that lets you diversify your courses also lets you concentrate all of them into two days in classrooms near your dorm. Because the data includes courses from all the faculty, being unique is actually easy. The challenge is suggesting uniqueness that means something.

Q: People choose courses in part based on who else is choosing that course. It’d be great to have friends in the platform.

A: Great idea.

Q: How do you educate the people using the platform? How do you present and explain the options? How are you going to work with advisors?

A: Important concerns at the core of what we’re thinking about and working on.

May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, which Einstein called a “gravitational lens.” Same for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both the cardiologist’s and the patient’s lenses to better diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help Law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The clinic prefers early relationships among them, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at Univ. of New Hampshire Law School and a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids. We kick them out of schools. E.g., of 49M students in public schools in 2011, 3.45M were suspended, and 130,000 students were expelled. These punishments disproportionately affect children from marginalized groups.

Get rid of the BS safety justification: the government ought to be teaching all our children without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the QA from this point on.]

[liveblog][AI] Perspectives on community and AI

Chelsea Barabas is moderating a set of lightning talks at the AI Advance, at Berkman Klein and the MIT Media Lab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Lionel Brossi recounts growing up in Argentina and the assumption that all boys care about football. He moved to Chile, which is split between people who do and do not watch football. “Humans are inherently biased.” So our AI systems are likely to be biased. Cognitive science has shown that the participants in its studies tend to be WEIRD: Western, educated, industrialized, rich, and democratic. Also straight and white. He references Kate Crawford‘s “AI’s White Guy Problem.” We need not only diverse teams of developers, but also to think about how data can be more representative. We also need to think about the users. One approach is to work on goal-centered design.

If we ever get to unbiased AI, Borges‘ statement, “The original is unfaithful to the translation” may apply.

Chelsea: What is an inclusive way to think of cross-border countries?

Lionel: We need to co-design with more people.

Madeline Elish is at Data and Society and an anthropology of technology grad student at Columbia. She’s met designers who thought it might be a good idea to make a phone run faster if you yell at it. But this would train children to yell at things. What’s the context in which such designers work? She and Tim Hwang set out to build bridges between academics and businesses. They asked what designers see as their responsibility for the social implications of their work. They found four core challenges:

1. Assuring users perceive good intentions
2. Protecting privacy
3. Long term adoption
4. Accuracy and reliability

She and Tim wrote An AI Pattern Language [pdf] about the frameworks that guide design. She notes that none of them were thinking about social justice. The book argues that there’s a way to translate between the social justice framework and, for example, the accuracy framework.

Ethan Zuckerman: How much of the language you’re seeing feels familiar from other hype cycles?

Madeline: Tim and I looked at the history of autopilot litigation to see what might happen with autonomous cars. We should be looking at Big Data as the prior hype cycle.

Yarden Katz is at the BKC and at the Dept. of Systems Biology at Harvard Medical School. He talks about the history of AI, starting with a 1958 claim about a translation machine and Minsky in 1966. Then there was an AI funding winter, but now it’s big again. “Until recently, AI was a dirty word.”

Today we use it schizophrenically: for Deep Learning or in a totally diluted sense as something done by a computer. “AI” now seems to be a branding strategy used by Silicon Valley.

“AI’s history is diverse, messy, and philosophical.” If complexity is embraced, “AI” might not be a useful category for policy. So we should go back to the politics of technology:

1. Who controls the code/frameworks/data?
2. Is the system inspectable/open?
3. Who sets the metrics? Who benefits from them?

The media are not going to be the watchdogs because they’re caught up in the hype. So who will be?

Q: There’s a qualitative difference in the sort of tasks now being turned over to computers. We’re entrusting machines with tasks we used to only trust to humans with good judgment.

Yarden: We already do that with systems that are not labeled AI, like “risk assessment” programs used by insurance companies.

Madeline: Before AI got popular again, there were expert systems. We are reconfiguring our understanding, moving it from a cognition frame to a behavioral one.

Chelsea: I’ve been involved in co-design projects that have backfired. These projects have sometimes been somewhat extractive: going in, getting lots of data, etc. How do we do co-design projects that are not extractive but that also aren’t prohibitively expensive?

Nathan: To what degree does AI change the dimensions of questions about explanation, inspectability, etc.?

Yarden: The promoters of the Deep Learning narrative want us to believe you just need to feed in lots and lots of data. DL is less inspectable than other methods. DL is not learning from nothing. There are open questions about their inductive power.


Amy Zhang and Ryan Budish give a pre-alpha demo of the AI Compass being built at BKC. It’s designed to help people find resources exploring topics related to the ethics and governance of AI.

Be the first to comment »

[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center‘s and MIT Media Lab‘s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also her weight, or other outcomes. Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.
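
As a toy illustration of the tradeoff Doshi-Velez describes, here is a sketch that trains one readable model and one black-box model and prefers the readable one when the accuracy gap is small. The dataset, the two models, and the 1% tolerance are assumptions for the sake of the example, not her methodology.

```python
# Toy sketch of the tradeoff above: if an interpretable model predicts about
# as well as a black-box one, prefer the model whose reasoning can be read.
# Dataset, models, and the 1% tolerance are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

acc_tree, acc_forest = tree.score(X_te, y_te), forest.score(X_te, y_te)
print(f"interpretable tree: {acc_tree:.3f}   black-box forest: {acc_forest:.3f}")

if acc_forest - acc_tree <= 0.01:  # "equally predictive," within tolerance
    print(export_text(tree))       # a short rule list a clinician could read
else:
    print("The accuracy gap is real; interpretability has a measurable cost here.")
```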

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither humans nor machines could solve on their own. The need for these systems is most obvious in large-scale human-interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project), and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics, and governance,” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves“), where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case, but it looks different. Now the old jobs are being done by far fewer people. But the space in between doesn’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, MIT grad student with a newly minted Ph.D. (congrats, Nathan!) and BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. What are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where does that raise questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, figure out how to answer them, and organize people, institutions, and automated systems to do so? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others?

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representative slice of humanity is working on these systems. So here’s my question: we find co-design works well. Should we be bringing in the affected populations to talk with the system designers?
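
A minimal sketch of the kind of audit being described: score the system separately on each demographic group of a labeled test set and look for gaps. The records and field names below are invented for illustration; this is not the Algorithmic Justice League’s code or data.

```python
# Minimal sketch of the audit described above: compute accuracy per
# demographic group in a labeled test set and look for gaps. Records and
# field names are invented; not the Algorithmic Justice League's code.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of dicts with 'group', 'label', and 'prediction' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["label"] == r["prediction"])
    return {g: correct[g] / total[g] for g in total}

test_results = [
    {"group": "darker-skinned women", "label": "face", "prediction": "no face"},
    {"group": "darker-skinned women", "label": "face", "prediction": "face"},
    {"group": "lighter-skinned men", "label": "face", "prediction": "face"},
    {"group": "lighter-skinned men", "label": "face", "prediction": "face"},
]
print(accuracy_by_group(test_results))
# A large gap between groups signals that the training data or the model
# needs rebalancing before the system is deployed.
```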

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.

March 18, 2017

How a thirteen-year-old interprets what's been given

“Of course what I’ve just said may not be right,” concluded the thirteen-year-old girl, “but what’s important is to engage in the interpretation and to participate in the discussion that has been going on for thousands of years.”

So said the bas mitzvah girl at an orthodox Jewish synagogue this afternoon. She is the daughter of friends, so I went. And because it is an orthodox synagogue, I didn’t violate the Sabbath by taking notes. Thus that quote isn’t even close enough to count as a paraphrase. But that is the thought that she ended her D’var Torah with. (I’m sure as heck violating the Sabbath now by writing this, but I am not an observant Jew.)

The D’var Torah is a talk on that week’s portion of the Torah. Presenting one before the congregation is a mark of one’s coming of age. The bas mitzvah girl (or bar mitzvah boy) labors for months on the talk, which at least in the orthodox world is a work of scholarship that shows command of the Hebrew sources, that interprets the words of the Torah to find some relevant meaning and frequently some surprising insight, and that follows the carefully worked out rules that guide this interpretation as a fundamental practice of the religion.

While the Torah’s words themselves are taken as sacred and as given by G-d, they are understood to have been given to us human beings to be interpreted and applied. Further, that interpretation requires one to consult the most revered teachers (rabbis) in the tradition. An interpretation that does not present the interpretations of revered rabbis who disagree about the topic is likely to be flawed. An interpretation that writes off prior interpretations with which one disagrees is not listening carefully enough and is likely to be flawed. An interpretation that declares that it is unequivocally the correct interpretation is wrong in that certainty and is likely to be flawed in its stance.

It seems to me — and of course I’m biased — that these principles could be very helpful regardless of one’s religion or discipline. Jewish interpretation takes the Word as the given. Secular fields take facts as the given. The given is not given unless it is taken, and taking is an act of interpretation. Always.

If that taking is assumed to be subjective and without boundaries, then we end up living in fantasy worlds, shouting at those bastards who believe different fantasies. But if there are established principles that guide the interpretations, then we can talk and learn from one another.

If we interpret without consulting prior interpretations, then we’re missing the chance to reflect on the history that has shaped our ideas. This is not just arrogance but stupidity.

If we fail to consult interpretations that disagree with one another, we not only will likely miss the truth, but we will emerge from the darkness certain that we are right.

If we consult prior interpretations that disagree but insist that we must declare one right and the other wrong, we are being so arrogant that we think we can stand in unequivocal judgment of the greatest minds in our history.

If we come out of the interpretation certain that we are right, then we are far more foolish than the thirteen-year-old I heard speak this afternoon.
