
June 13, 2017

Top 2 Beatles songs

About a week ago, out of the blue I blurted out to my family what the two best Beatles songs are. I pronounced this with a seriousness befitting the topic, and with a confidence born of the fact that it’s a ridiculous question and it doesn’t matter anyway.

Vulture just published a complete ranking of all Beatles songs.

Nailed it.

Their #1 selection is an obvious contender. #2 is controversial and probably intentionally so. But, obviously, I think it’s a good choice.

If you want to see what they chose, click here: #1: Day in the Life. #2: Strawberry Fields.

By the way, the Vulture write-ups of each of the songs are good. At least the ones I read were. If you’re into this, the best book I’ve read is Ian MacDonald’s Revolution in the Head, which has an essay on each recording with comments about the social and personal context of the song and a learned explanation of the music. Astounding book.


June 6, 2017

[liveblog] metaLab

Harvard metaLab is giving an informal Berkman Klein talk about their work on designing for ethical AI. Jeffrey Schnapp introduces metaLab as “an idea foundry, a knowledge-design lab, and a production studio experimenting in the networked arts and humanities.” The discussion today will be about metaLab’s various involvements in the Berkman Klein – MIT MediaLab project on ethics and governance of AI. The conference is packed with Fellows and the newly-arrived summer interns.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Matthew Battles and Jessica Yurkofsky begin by talking about Curricle, a “new platform for experimenting with shopping for courses.” How can the experience be richer, more visual, and use more of the information and data that Harvard has? They’ve come up with a UI that has three elements: traditional search, a visualization, and a list of the results.

They’ve been grappling with the ethics of putting forward new search algorithms. The design is guided by transparency, autonomy, and visualization. Transparency means that they make apparent how the search works, allowing students to assign weights to keywords. If Curricle makes recommendations, it will explain that it’s because other students like you have chosen it or because students like you have never done this, etc. Visualization shows students what’s being returned by their search and how it’s distributed.
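
[Editorial aside: to make the transparency idea concrete, here is a minimal sketch of keyword search in which the student-assigned weights stay visible in the results, so each ranking can explain itself. It is illustrative only; the course data, field names, and scoring are invented, not Curricle's actual code.]

```python
# Hypothetical sketch of weight-transparent course search; not Curricle's code.
def search_courses(courses, keyword_weights):
    """courses: list of dicts with 'title' and 'description' fields.
    keyword_weights: student-chosen weights, e.g. {'data': 2.0, 'design': 1.0}."""
    ranked = []
    for course in courses:
        text = (course["title"] + " " + course["description"]).lower()
        # Keep the per-keyword contributions so the ranking can explain itself.
        why = {kw: w * text.count(kw.lower()) for kw, w in keyword_weights.items()}
        ranked.append({"course": course["title"], "score": sum(why.values()), "why": why})
    return sorted(ranked, key=lambda r: r["score"], reverse=True)

courses = [
    {"title": "Networked Humanities", "description": "Data, design, and digital culture."},
    {"title": "Intro to Probability", "description": "Foundations of statistics and data analysis."},
]
for result in search_courses(courses, {"data": 2.0, "design": 1.0}):
    print(result)
```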

Similar principles guide a new project, AI Compass, that is the entry point for information about Berkman Klein’s work on the Ethics and Governance of AI project. It is designed to document the research being done and to provide a tool for surveying the field more broadly. They looked at how neural nets are visualized, how training sets are presented, and other visual metaphors. They are trying to find a way to present these resources in their connections. They have decided to use Conway’s Game of Life [which I was writing about an hour ago, which freaks me out a bit]. The game allows complex structures to emerge from simple rules. AI Compass is using animated cellular automata as icons on the site.
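
[Editorial aside: for readers who haven't met it, here is a minimal sketch of Conway's Game of Life in Python, just to show how the simple local rules produce complex, moving structures (the example seeds a "glider"). It has nothing to do with metaLab's actual implementation.]

```python
# Minimal sketch of Conway's Game of Life (illustrative only; not metaLab's code).
def step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count the eight neighbors, wrapping around the edges (a torus).
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbors(r, c)
            # The entire rule set: live cells survive with 2 or 3 neighbors;
            # dead cells come alive with exactly 3.
            new[r][c] = 1 if n == 3 or (grid[r][c] == 1 and n == 2) else 0
    return new

# Seed a "glider," a five-cell pattern that travels diagonally across the grid.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1

for _ in range(5):
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), end="\n\n")
    grid = step(grid)
```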

metaLab wants to enable people to explore the information at three different scales. The macro scale shows all of the content arranged into thematic areas. This lets you see connections among the pieces. The middle scale shows the content with more information. At the lowest scale, you see the resource information itself, as well as connections to related content.

Sarah Newman talks about how AI is viewed in popular culture: the Matrix, Ahnuld, etc. We generally don’t think about AI as it’s expressed in the tools we actually use, such as face recognition, search, recommendations, etc. metaLab is interested in how art can draw out the social and cultural dimensions of AI. “What can we learn about ourselves by how we interact with, tell stories about, and project logic, intelligence, and sentience onto machines?” The aim is to “provoke meaningful reflection.”

One project is called “The Future of Secrets.” Where will our email and texts be in 100 years? And what does this tell us about our relationship with our tech? Why and how do we trust them? It’s an installation that’s been at the Museum of Fine Arts in Boston and recently in Berlin. People enter secrets that are printed out anonymously. People created stories, most of which weren’t true, often about the logic of the machine. People tended to project much more intelligence onto the machine than was there. Cameras were watching and would occasionally print out images from the show itself.

From this came a new piece (done with fellow Rachel Kalmar) in which a computer reads the secrets out loud. It will be installed at the Berkman Klein Center soon.

Working with Kim Albrecht in Berlin, the center is creating data visualizations based on the data that a mobile phone collects, including the accelerometer. These visualizations let us see how the device is constructing an image of the world we’re moving through. That image is messy, noisy.

The lab is also collaborating on a Berlin exhibition, adding provocative framing using X degrees of Separation. It finds relationships among objects from disparate cultures. What relationships do algorithms find? How does that compare with how humans do it? What can we learn?

Starting in the fall, Jeffrey and a co-teacher are going to be leading a robotics design studio, experimenting with interior and exterior architecture in which robotic agents are copresent with human actors. This is already happening, raising regulatory and urban planning challenges. The studio will also take seriously machine vision as a way of generating new ways of thinking about mobility within city spaces.

Q&A

Q: me: For AI Compass, where’s the info coming from? How is the data represented? Open API?

Matthew: It’s designed to focus on particular topics. E.g., Youth, Governance, Art. Each has a curator. The goal is not to map the entire space. It will be a growing resource. An open API is not yet on the radar, but it wouldn’t be difficult to do.

Q: At the AI Advance, Jonathan Zittrain said that organizations are a type of AI: governed by a set of rules, they grow and learn beyond their individuals, etc.

Matthew: We hope to deal with this very capacious approach to AI through artists. What have artists done that bears on AI beyond the cinematic tropes? There’s a rich discourse about this. We want to be in dialogue with all sorts of people about this.

Q: About Curricle: Are you integrating Q results [student responses to classes], etc.?

Sarah: Not yet. There’s mixed feeling from administrators about using that data. We want Curricle to encourage people to take new paths. The Q data tends to encourage people down old paths. Curricle will let students annotate their own paths and share them.

Jeffrey: We’re aiming at creating a curiosity engine. We’re working with a century of curricular data. This is a rare privilege.

me: It’d enrich the library if the data about resources was hooked into LibraryCloud.

Q: kendra: A useful feature would be finding a random course that fits into your schedule.

A: In the works.

Q: It’d be great to have transparency around the suggestions of unexpected courses. We don’t want people to be choosing courses simply to be unique.

A: Good point.

A: The same tool that lets you diversify your courses also lets you concentrate all of them into two days in classrooms near your dorm. Because the data includes courses from all the faculty, being unique is actually easy. The challenge is suggesting uniqueness that means something.

Q: People choose courses in part based on who else is choosing that course. It’d be great to have friends in the platform.

A: Great idea.

Q: How do you educate the people using the platform? How do you present and explain the options? How are you going to work with advisors?

A: Important concerns at the core of what we’re thinking about and working on.


May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, which Einstein called a “gravity lens.” Same for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both cardiologist and patient lenses to better diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help Law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The clinic prefers early relationships among them, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at Univ. New Hampshire Law School and is a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids. We kick them out of schools. E.g., of 49M students in public schools in 2011, 3.45M were suspended, and 130,000 students were expelled. These disproportionately affect children from marginalized segments.

Get rid of the BS safety justification: the government ought to be teaching all our children without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the QA from this point on.]


[liveblog][AI] Perspectives on community and AI

Chelsea Barabas is moderating a set of lightning talks at the AI Advance, at Berkman Klein and MIT Media Lab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Lionel Brossi recounts growing up in Argentina and the assumption that all boys care about football. He moved to Chile, which is split between people who do and do not watch football. “Humans are inherently biased.” So, our AI systems are likely to be biased. Cognitive science has shown that the participants in its studies tend to be WEIRD: western, educated, industrialized, rich, and democratic. Also straight and white. He references Kate Crawford‘s “AI’s White Guy Problem.” We need not only diverse teams of developers, but also to think about how data can be more representative. We also need to think about the users. One approach is to work on goal-centered design.

If we ever get to unbiased AI, Borges‘ statement, “The original is unfaithful to the translation” may apply.

Chelsea: What is an inclusive way to think of cross-border countries?

Lionel: We need to co-design with more people.

Madeline Elish is at Data and Society and an anthropology of technology grad student at Columbia. She’s met designers who thought it might be a good idea to make a phone run faster if you yell at it. But this would train children to yell at things. What’s the context in which such designers work? She and Tim Hwang set about building bridges between academics and businesses. They asked what designers see as their responsibility for the social implications of their work. They found four core challenges:

1. Assuring users perceive good intentions
2. Protecting privacy
3. Long term adoption
4. Accuracy and reliability

She and Tim wrote An AI Pattern Language [pdf] about the frameworks that guide design. She notes that none of them were thinking about social justice. The book argues that there’s a way to translate between the social justice framework and, for example, the accuracy framework.

Ethan Zuckerman: How much of the language you’re seeing feels familiar from other hype cycles?

Madeline: Tim and I looked at the history of autopilot litigation to see what might happen with autonomous cars. We should be looking at Big Data as the prior hype cycle.

Yarden Katz is at the BKC and at the Dept. of Systems Biology at Harvard Medical School. He talks about the history of AI, starting with a 1958 claim about a translation machine and Minsky in 1966. Then there was an AI funding winter, but now it’s big again. “Until recently, AI was a dirty word.”

Today we use it schizophrenically: for Deep Learning or in a totally diluted sense as something done by a computer. “AI” now seems to be a branding strategy used by Silicon Valley.

“AI’s history is diverse, messy, and philosophical.” If complexity is embraced, “AI” might not be a useful category for policy. So we should go back to the politics of technology:

1. Who controls the code/frameworks/data?
2. Is the system inspectable/open?
3. Who sets the metrics? Who benefits from them?

The media are not going to be the watchdogs because they’re caught up in the hype. So who will be?

Q: There’s a qualitative difference in the sort of tasks now being turned over to computers. We’re entrusting machines with tasks we used to entrust only to humans with good judgment.

Yarden: We already do that with systems that are not labeled AI, like “risk assessment” programs used by insurance companies.

Madeline: Before AI got popular again, there were expert systems. We are reconfiguring our understanding, moving it from a cognition frame to a behavioral one.

Chelsea: I’ve been involved in co-design projects that have backfired. These projects have sometimes been somewhat extractive: going in, getting lots of data, etc. How do we do co-design projects that are not extractive but also aren’t prohibitively expensive?

Nathan: To what degree does AI change the dimensions of questions about explanation, inspectability, etc.?

Yarden: The promoters of the Deep Learning narrative want us to believe you just need to feed in lots and lots of data. DL is less inspectable than other methods. DL is not learning from nothing. There are open questions about their inductive power.


Amy Zhang and Ryan Budish give a pre-alpha demo of the AI Compass being built at BKC. It’s designed to help people find resources exploring topics related to the ethics and governance of AI.


[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center‘s and MIT Media Lab‘s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?
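
[Editorial aside: a toy illustration of that situation, mine rather than Finale's, assuming scikit-learn and one of its bundled datasets: a shallow decision tree and a large random forest often score about the same, and the real choice between them is how much of the model a person can actually read.]

```python
# Toy illustration (not from the talk): two models with similar accuracy
# but very different interpretability.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("shallow tree accuracy: ", round(tree.score(X_test, y_test), 3))
print("random forest accuracy:", round(forest.score(X_test, y_test), 3))

# The shallow tree's entire decision logic fits on a screen; the forest's does not.
print(export_text(tree, feature_names=list(data.feature_names)))
```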

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines and hints that help us solve problems that neither humans nor machines could solve on their own. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves“) where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, MIT grad student with a newly-minted Ph.D. (congrats, Nathan!), and BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. And, what are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where does that raise questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how to answer them, and how to organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others?

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representation of humanity is working on these systems. So here’s my question: we find co-design works well; should we bring the affected populations in to talk with the system designers?
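
[Editorial aside: a rough sketch of the auditing step Ethan describes: counting how a labeled face dataset breaks down demographically before trusting accuracy numbers measured on it. The records and annotation labels here are invented for illustration.]

```python
# Hypothetical sketch: audit the demographic balance of a labeled face dataset
# before trusting accuracy numbers measured on it. Records and labels are invented.
from collections import Counter

dataset = [
    {"id": 1, "gender": "male",   "skin_type": "lighter"},
    {"id": 2, "gender": "male",   "skin_type": "lighter"},
    {"id": 3, "gender": "female", "skin_type": "darker"},
    {"id": 4, "gender": "male",   "skin_type": "darker"},
    # ... in practice, thousands of annotated images
]

def composition(records, *fields):
    """Return the share of each combination of the given annotation fields."""
    counts = Counter(tuple(r[f] for f in fields) for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

for group, share in sorted(composition(dataset, "gender", "skin_type").items()):
    print(group, f"{share:.0%}")
```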

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


March 18, 2017

How a thirteen-year-old interprets what's been given

“Of course what I’ve just said may not be right,” concluded the thirteen-year-old girl, “but what’s important is to engage in the interpretation and to participate in the discussion that has been going on for thousands of years.”

So said the bas mitzvah girl at an orthodox Jewish synagogue this afternoon. She is the daughter of friends, so I went. And because it is an orthodox synagogue, I didn’t violate the Sabbath by taking notes. Thus that quote isn’t even close enough to count as a paraphrase. But that is the thought that she ended her D’var Torah with. (I’m sure as heck violating the Sabbath now by writing this, but I am not an observant Jew.)

The D’var Torah is a talk on that week’s portion of the Torah. Presenting one before the congregation is a mark of one’s coming of age. The bas mitzvah girl (or bar mitzvah boy) labors for months on the talk, which at least in the orthodox world is a work of scholarship that shows command of the Hebrew sources, that interprets the words of the Torah to find some relevant meaning and frequently some surprising insight, and that follows the carefully worked out rules that guide this interpretation as a fundamental practice of the religion.

While the Torah’s words themselves are taken as sacred and as given by G-d, they are understood to have been given to us human beings to be interpreted and applied. Further, that interpretation requires one to consult the most revered teachers (rabbis) in the tradition. An interpretation that does not present the interpretations of revered rabbis who disagree about the topic is likely to be flawed. An interpretation that writes off prior interpretations with which one disagrees is not listening carefully enough and is likely to be flawed. An interpretation that declares that it is unequivocally the correct interpretation is wrong in that certainty and is likely to be flawed in its stance.

It seems to me — and of course I’m biased — that these principles could be very helpful regardless of one’s religion or discipline. Jewish interpretation takes the Word as the given. Secular fields take facts as the given. The given is not given unless it is taken, and taking is an act of interpretation. Always.

If that taking is assumed to be subjective and without boundaries, then we end up living in fantasy worlds, shouting at those bastards who believe different fantasies. But if there are established principles that guide the interpretations, then we can talk and learn from one another.

If we interpret without consulting prior interpretations, then we’re missing the chance to reflect on the history that has shaped our ideas. This is not just arrogance but stupidity.

If we fail to consult interpretations that disagree with one another, we not only will likely miss the truth, but we will emerge from the darkness certain that we are right.

If we consult prior interpretations that disagree but insist that we must declare one right and the other wrong, we are being so arrogant that we think we can stand in unequivocal judgment of the greatest minds in our history.

If we come out of the interpretation certain that we are right, then we are far more foolish than the thirteen-year-old I heard speak this morning.


March 12, 2017

The wheels on the watch

1. This is an awesome immersion in craft knowledge.

2. It is incomprehensible without that craft knowledge.

3. It is mesmerizing, in part because of its incomprehensibility.

4. The tools — many of which he makes for this task — are as beautiful as their results.

5. How much we must have loved clocks to have done this without these tools!

6. What sort of creatures are we that our flourishing requires doing hard things?


February 13, 2017

Ricky Gervais's "Life on the Road": Review

[NO SPOILERS YET] Ricky Gervais’ new TV movie, Life on the Road, now on Netflix, suffers from the sort of mortifying errors committed by its protagonist, David Brent, the manager of The Office with whom the movie catches us up.

[TINY SPOILERS THAT WON’T SPOIL ANYTHING] The movie is amusing in some of the main ways the original The Office was. David Brent is an unself-knowing narcissist surrounded by people who see through him. It lacks the utterly charming office romance between Tim and Dawn (Jim and Pam in the US version). It lacks any villain other than Brent, unlike Gareth in the original (Dwight in the US version). It lacks the satire of office life, offering instead a satire of a self-funded, doomed rock tour by an unknown, pudgy, middle-aged man. That’s not a thing, so you can’t really satirize it.

Still, Gervais is great as Brent, having honed uncomfortable self-presentation to an art, complete with a squealing giggle that alerts us to his inability to be ashamed of himself. And Gervais sings surprisingly well.

[SPOILERS] But then it ends suddenly with Brent being accepted by his band, by the office where he’s been working as a bathroom-supply salesperson, and by a woman. Nothing prepares us for this except that it’s the end of the movie and Gervais wants to give his character some peace and dignity. It’s some extraordinarily sloppy writing.

Worse, the ending seems way too close to what Gervais himself seems to want. Like Brent, he wants to be taken seriously as a musician and singer, except that Gervais’s songs are self-knowingly bad, in the style of Spinal Tap except racist. Still, you leave the movie surprised that he’s that good a singer and that the songs are quite good as comic songs. Brent-Gervais has achieved his goal.

Likewise, you leave thinking that Gervais has given us a happy ending because he, Gervais, wants to be liked, just as Brent does. It’s not the angry fuck-the-hicks sort of attitude Gervais exhibited during and immediately after The Office.

And you leave thinking that, like Brent, Gervais really wants to carry the show solely on his shoulders. The Office was an ensemble performance with some fantastic acting by Martin Freeman (!) as Tim and Lucy Davis as Dawn, as well as by Gervais. Life on the Road only cares about one character, as if Gervais wanted to prove he could do it all by his lonesome. But he can’t.

Ricky Gervais pulls his punches in this, not for the first time. Let Ricky be Ricky. Or, more exactly, Let Ricky be David.


January 20, 2017

Maybe we’re not such an awful species

Wired has a post about http://astronaut.io/, a site that shows snippets of recently uploaded YouTubes that have had no views and that have a generic title.

In just a few minutes, I saw scenes from around the world of what matters to people.

Maybe I was just lucky, but what I saw is what peace is about.


January 11, 2017

[liveblog][bkc] Kishonna Gray


Kishonna Gray [#KishonnaGray] is giving a Berkman Klein [#BKCHarvard] Tuesday lunch talk. She’s an ass’t prof and ML King Scholar at MIT as well as being a fellow at BKC and the author of Race, Gender and Deviance in Xbox Live. She’s going to talk about a framework, Black Digital Feminism.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

She begins by saying, “I’ve been at a crossroads, personally and intellectually” over the Trump election, the death of black civilians at the hands of police, and the gaming controversies, including gamergate. How did we get to this point? And what point are we at? “What matters most in this moment?” She’s going to talk about the framework that helps her make sense of some of these things.

Imagine we’re celebrating the 50th birthday of the Berkman Klein Center (in 30 yrs or so). What are we celebrating? The end of online harassment? The dismantling of the heteronormative, white-supremacist hierarchy? Are we telling survivor narratives?

She was moved by an article the day after the election, titled “Black women were the only ones who tried to save the world Tuesday night,” by Charles D. Ellison. She laughed at first, and retweeted it. She was “overwhelmed by the response of people who didn’t think black women have the capacity to do anything except make babies and collect welfare checks.” She recalled many women, including Sojourner Truth, who spoke an important truth to a growing sense in the feminist movement that it was fundamentally a white movement. The norms are so common and hidden that when we notice them we ask how the women broke through the barriers rather than asking why the barriers were there in the first place. It’s as if these women are superhuman. But we need to ask why there are barriers in the first place. [This is a beautifully composed talk. I’m sorry to be butchering it so badly. It will be posted online in a few days.]

In 1869 Frederick Douglass argued that including women in the movement for the vote would reduce the chances of the right to vote being won for black men. “White womanhood has been central in defining white masculinity.” E.g., in Birth of a Nation, white women need protection. Self-definition is the core of intersectionality. Masculinity has mainly protected its own interests and its own fragility, not women. It uses the protection of women to showcase its own dominance.

“Why do we have to insert our own existences into spaces? Why are we not recognized?” The marginalized are no longer accepting their marginalization. For example, look at black women’s digital practices.

Black women have used digital involvement to address marginalization, to breach the boundaries of what’s “normal.” Often that is looked upon as them merely “playing around” with tech. The old frameworks meant that black women couldn’t enter the digital space as who they actually are.

Black Digital Feminism has three elements:

1. Social structural oppression of technology and virtual spaces. Many digital spaces are dominated by assumptions that they are color-blind. Black Lives Matter and Say Her Name are attempts to remind us that blackness is not an intrusion.

2. Intersectional oppressions experienced in virtual spaces. Women must work to dismantle the interlocking structures of oppression. Individuals experience oppression in different ways and we don’t want a one-size-fits-all approach. E.g., the “solidarity is for white women” hashtag is seen as an expression of black women being angry, but it is a reminder that feminism has too often been assumed to be a white issue first.

3. The distinctness of the virtual feminist community. Black Digital Feminism privileges women’s ways of knowing. “NotYourAsianSidekick” is rooted in the radical Asian woman tradition, insisting that they control their own identity. Black women, and others, reject the idea that feminism is the same for all women, disregarding the different forms of oppression women are subject to based upon their race, ethnicity, etc. Women have used social media for social change and to advance critical activism and feminism.

The tenets of Black Digital Feminism cannot detach from the personal, communal, or political, which sets it apart from techno- and cyber-feminism.

These new technologies are not creating anything. They are providing an outlet. “These groups have never been voiceless. The people in power simply haven’t been listening.” The digital amplifies these voices.

Q&A

Q: With the new administration, should we be thinking differently?

A: We need to identify the commonalities. Isolated marches won’t do enough. We need to find a way to bring communities together by figuring out what is the common struggle against structural oppression. Black women sacrificed to oppose Trump, setting aside the “super-predator” stuff from Hillary, but other groups didn’t make equivalent sacrifices.

Q: Does it mean using hashtags differently?

A: This digital culture is only one of many things we can do. We can’t forget the physical community, connecting with people. There are models for doing this.

Q: Did Net Neutrality play a role in enabling the Black community to participate? Do we need to look at NN from a feminist perspective…NN as making every packet have the same weight.

A: NN was key for enabling Black Lives Matter because the gov’t couldn’t suppress that movement’s language, its speech.

Q: Is this perceived as a danger inside the black feminist movement?

A: Tech isn’t neutral, is the idea. It lets us do what we need to do.

Q: Given the work you’ve done on women finding pleasure in spaces (like the Xbox) where they’re not expected to be found, what do you think about our occupying commercial spaces?

A: I’m a lifelong gamer and I get asked how I can play where there aren’t players — or developers — who look like me. I started the practice of highlighting the people who are there. We’re there, but we’re not noticed. E.g., Pew Research showed recently that half of gamers are women. The overwhelming population of console gamers are black and brown men. We really have to focus on who is in the spaces, and seek them out. My dissertation focused on finding these people, and finding their shared stories: not being noticed or valued. But we should take the extra steps to make sure we locate them. Some people are going to call 2016 the year of the black gamer, with games with black protagonists. This is due to a push from marginalized gamers. The resistance is paying off. Even #OscarsSoWhite has paid off in a more diverse set of Golden Globes nominees.

Q: You navigate between feminist theory and observational work. How did the latter shape the former?

A: When I learned about ethnography I thought it was the most beautiful thing ever created — being immersed in a community and letting them tell their own stories. But when it came time to document that, I realized why we sometimes consider ethnography to be voyeuristic and exploitative. When transcribing, I was expected to “clean up” the speech. “Hell no,” she said. E.g., she left “dem” as “dem,” not “them.” “I refer to people as narrators, not ‘research participants.'” They’re part of the process. She showed them the chapter drafts. E.g., she hasn’t published all her Ferguson work because she wants to make sure that she “leaves the place better.” You have to stay true to the transformative, liberatory practices that we say we’re doing. She’s even been criticized for writing too plainly, eschewing academic jargon. “I wanted to make sure that a community that let me into its space understood every word that I wrote.”

Q: There’s been debate about the people who lead the movement. E.g., if I’m not black, I am not best suited to lead the movement in the fight for those rights. OTOH, if we want to advance the rights of women, we have to move the whole society with us.

A: What you’re saying is important. I stopped caring about hurting people’s feelings because if they’re devoted to the work that needs to be done, they’ve checked their feelings, their fragility, at the door. There is tons of work for allies to do. If it’s a real ally dedicated to the work, they’ll understand. There’s so much work to do. And Trump isn’t even the president yet.

Q: About the application of Black Digital Feminism to the law. (Intersectionality started in law journals.)

A: It’s hard to see how it translates into actual policy, especially now. I don’t know how we’ll push back against what’s to come. E.g., we know evaluations of women are usually lower than of men. So when are we going to stop valuing the evaluations so highly? At the bottom of my evaluations, I write, “Just so you know, these evaluations are filtered through my black woman’s body.”

Q: What do we make of things like “#IamMichelle”, which is like the “I am Spartacus” moment in the movie Spartacus?

A: It depends on the effect it has. I focus on marginalized folks, and their sense of empowerment and pride. There’s some power there, especially in localized communities.

Q: How can white women be supportive?

A: You’ve to go get your people, the white women who voted. What have you done to change the thinking of the women you know who voted for Trump? That’s where it has to begin. You have to first address your own circle. You may not be able to change them, but you can’t ignore them. That’s step one.

Q: I always like your work because you hearken back to a rich pedigree of black feminism. But the current moment is distinct. E.g., the issues are trans-national. So we need new visions of what we want the future to be. What is the future that we’re fighting for? What does the digital contribute to that vision?

A: It’s important to acknowledge what’s the same. E.g., the death of black people by police is part of the narrative of lynching. The structural and institutional inequalities are the same. Digital tools let us address this differently. BLM is no different from what happened with Rodney King. What future are we fighting for? I guess I haven’t articulated that. I don’t know how we get there. We should first ask how we transform our own spaces. I don’t want the conversation to get too big. The conversation should be small enough and digestible. We don’t want people to feel helpless.

Q: If I’m a man who asks about Black Digital Feminism [which he is], where can I learn more?

A: You can go to my Web site: www.kishonnaGray.com. And the Berkman Klein community is awesome and ready to go to work.

Q: You write about the importance of claiming identity online. Early on, people celebrated the fact that you could go online without a known identity. Especially now, how do you balance the important task of claiming identity and establishing solidarity with your smaller group, and bonding with your allies in a larger group? Do we need to shift the balance?

A: I haven’t figured out how to create that balance. The communities I’m in are still distinct. When Mike Brown was killed, I realized how distinct the anti-gamergate crowd was from BLM. These are not opposing fights. They’re not so distinct that we can’t fight both at the same time. I ended up working with both, which got me thinking about how to bridge them. But I haven’t figured out how to bring them together.

