Joho the Blog — culture Archives

May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, which Einstein called a “gravitational lens.” Same for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both cardiologist and patient lenses to better diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The clinic prefers early relationships among them, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at Univ. New Hampshire Law School and a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids. We kick them out of schools. E.g., of 49M students in public schools in 2011, 3.45M were suspended, and 130,000 students were expelled. These punishments disproportionately affect children from marginalized groups.

Get rid of the BS safety justifications: the government ought to be teaching all our children without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the QA from this point on.]


[liveblog][AI] Perspectives on community and AI

Chelsea Barabas is moderating a set of lightning talks at the AI Advance, at the Berkman Klein Center and MIT Media Lab.


Lionel Brossi recounts growing up in Argentina and the assumption that all boys care about football. He moved to Chile, which is split between people who do and do not watch football. “Humans are inherently biased.” So, our AI systems are likely to be biased. Cognitive science has shown that the participants in its studies tend to be WEIRD: western, educated, industrialized, rich, and democratic. Also straight and white. He references Kate Crawford’s “AI’s White Guy Problem.” We need not only diverse teams of developers, but also to think about how data can be made more representative. We also need to think about the users. One approach is to work on goal-centered design.

If we ever get to unbiased AI, Borges’ statement, “The original is unfaithful to the translation,” may apply.

Chelsea: What is an inclusive way to think of cross-border countries?

Lionel: We need to co-design with more people.

Madeline Elish is at Data & Society and an anthropology-of-technology grad student at Columbia. She’s met designers who thought it might be a good idea to make a phone run faster if you yell at it. But this would train children to yell at things. What’s the context in which such designers work? She and Tim Hwang set out to build bridges between academics and businesses. They asked what designers see as their responsibility for the social implications of their work. They found four core challenges:

1. Assuring users perceive good intentions
2. Protecting privacy
3. Long term adoption
4. Accuracy and reliability

She and Tim wrote An AI Pattern Language [pdf] about the frameworks that guide design. She notes that none of them were thinking about social justice. The book argues that there’s a way to translate between the social justice framework and, for example, the accuracy framework.

Ethan Zuckerman: How much of the language you’re seeing feels familiar from other hype cycles?

Madeline: Tim and I looked at the history of autopilot litigation to see what might happen with autonomous cars. We should be looking at Big Data as the prior hype cycle.

Yarden Katz is at the BKC and at the Dept. of Systems Biology at Harvard Medical School. He talks about the history of AI, starting with a 1958 claim about a translation machine, and Minsky in 1966. Then there was an AI funding winter, but now it’s big again. “Until recently, AI was a dirty word.”

Today we use it schizophrenically: for Deep Learning or in a totally diluted sense as something done by a computer. “AI” now seems to be a branding strategy used by Silicon Valley.

“AI’s history is diverse, messy, and philosophical.” If complexity is embraced, “AI” might not be a useful category for policy. So we should go back to the politics of technology:

1. Who controls the code/frameworks/data?
2. Is the system inspectable/open?
3. Who sets the metrics? Who benefits from them?

The media are not going to be the watchdogs because they’re caught up in the hype. So who will be?

Q: There’s a qualitative difference in the sort of tasks now being turned over to computers. We’re entrusting machines with tasks we used to only trust to humans with good judgment.

Yarden: We already do that with systems that are not labeled AI, like “risk assessment” programs used by insurance companies.

Madeline: Before AI got popular again, there were expert systems. We are reconfiguring our understanding, moving it from a cognition frame to a behavioral one.

Chelsea: I’ve been involved in co-design projects that have backfired. These projects have sometimes been somewhat extractive: going in, getting lots of data, etc. How do we do co-design projects that are not extractive but that also aren’t prohibitively expensive?

Nathan: To what degree does AI change the dimensions of questions about explanation, inspectability, etc.?

Yarden: The promoters of the Deep Learning narrative want us to believe you just need to feed in lots and lots of data. DL is less inspectable than other methods. DL is not learning from nothing. There are open questions about their inductive power.


Amy Zhang and Ryan Budish give a pre-alpha demo of the AI Compass being built at BKC. It’s designed to help people find resources exploring topics related to the ethics and governance of AI.


[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center’s and MIT Media Lab’s “AI for the Common Good” project.


Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither humans nor machines could solve on their own. The need for these systems is most obvious in large-scale human-interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project), and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics, and governance,” but we don’t yet have well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves“) where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case, but it looks different. Now the old jobs are being done by far fewer people. But the space between doesn’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Mathias, an MIT grad student with a newly-minted Ph.D. (congrats, Nathan!) and a BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. What are the tools we need to create, and what are the social processes behind that? How can we communicate what we want to machines, and understand what they “think” they’re doing? Who can do what, and where does that raise questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, figure out how to answer them, and organize people, institutions, and automated systems: scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others?

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representative cross-section of humanity is working on these systems. So here’s my question: we find that co-design works well, so should we be bringing the affected populations in to talk with the system designers?

[Damn, I missed Yochai Benkler’s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


March 18, 2017

How a thirteen-year-old interprets what's been given

“Of course what I’ve just said may not be right,” concluded the thirteen-year-old girl, “but what’s important is to engage in the interpretation and to participate in the discussion that has been going on for thousands of years.”

So said the bas mitzvah girl at an orthodox Jewish synagogue this afternoon. She is the daughter of friends, so I went. And because it is an orthodox synagogue, I didn’t violate the Sabbath by taking notes. Thus that quote isn’t even close enough to count as a paraphrase. But that is the thought that she ended her D’var Torah with. (I’m sure as heck violating the Sabbath now by writing this, but I am not an observant Jew.)

The D’var Torah is a talk on that week’s portion of the Torah. Presenting one before the congregation is a mark of one’s coming of age. The bas mitzvah girl (or bar mitzvah boy) labors for months on the talk, which at least in the orthodox world is a work of scholarship that shows command of the Hebrew sources, that interprets the words of the Torah to find some relevant meaning and frequently some surprising insight, and that follows the carefully worked out rules that guide this interpretation as a fundamental practice of the religion.

While the Torah’s words themselves are taken as sacred and as given by G-d, they are understood to have been given to us human beings to be interpreted and applied. Further, that interpretation requires one to consult the most revered teachers (rabbis) in the tradition. An interpretation that does not present the interpretations of revered rabbis who disagree about the topic is likely to be flawed. An interpretation that writes off prior interpretations with which one disagrees is not listening carefully enough and is likely to be flawed. An interpretation that declares that it is unequivocally the correct interpretation is wrong in that certainty and is likely to be flawed in its stance.

It seems to me — and of course I’m biased — that these principles could be very helpful regardless of one’s religion or discipline. Jewish interpretation takes the Word as the given. Secular fields take facts as the given. The given is not given unless it is taken, and taking is an act of interpretation. Always.

If that taking is assumed to be subjective and without boundaries, then we end up living in fantasy worlds, shouting at those bastards who believe different fantasies. But if there are established principles that guide the interpretations, then we can talk and learn from one another.

If we interpret without consulting prior interpretations, then we’re missing the chance to reflect on the history that has shaped our ideas. This is not just arrogance but stupidity.

If we fail to consult interpretations that disagree with one another, we not only will likely miss the truth, but we will emerge from the darkness certain that we are right.

If we consult prior interpretations that disagree but insist that we must declare one right and the other wrong, we are being so arrogant that we think we can stand in unequivocal judgment of the greatest minds in our history.

If we come out of the interpretation certain that we are right, then we are far more foolish than the thirteen-year-old I heard speak this afternoon.


March 12, 2017

The wheels on the watch

1. This is an awesome immersion in craft knowledge.

2. It is incomprehensible without that craft knowledge.

3. It is mesmerizing, in part because of its incomprehensibility.

4. The tools — many of which he makes for this task — are as beautiful as their results.

5. How much we must have loved clocks to have done this without these tools!

6. What sort of creatures are we that our flourishing requires doing hard things?


February 13, 2017

Ricky Gervais's "Life on the Road": Review

[NO SPOILERS YET] Ricky Gervais’ new TV movie, Life on the Road, now on Netflix, suffers from the sort of mortifying errors committed by its protagonist, David Brent, the manager of The Office with whom the movie catches us up.

[TINY SPOILERS THAT WON’T SPOIL ANYTHING] The movie is amusing in some of the main ways the original The Office was. David Brent is an unself-knowing narcissist surrounded by people who see through him. It lacks the utterly charming office romance between Tim and Dawn (Jim and Pam in the US version). It lacks any villain other than Brent, unlike Gareth in the original (Dwight in the US version). It lacks the satire of office life, offering instead a satire of a self-funded, doomed rock tour by an unknown, pudgy, middle-aged man. That’s not a thing, so you can’t really satirize it.

Still, Gervais is great as Brent, having honed uncomfortable self-presentation to an art, complete with a squealing giggle that alerts us to his inability to be ashamed of himself. And Gervais sings surprisingly well.

[SPOILERS] But then it ends suddenly with Brent being accepted by his band, by the office where he’s been working as a bathroom-supply salesperson, and by a woman. Nothing prepares us for this except that it’s the end of the movie and Gervais wants to give his character some peace and dignity. It’s some extraordinarily sloppy writing.

Worse, the ending seems way too close to what Gervais himself seems to want. Like Brent, he wants to be taken seriously as a musician and singer, except that Gervais’s songs are self-knowingly bad, in the style of Spinal Tap except racist. Still, you leave the movie surprised that he’s that good a singer and that the songs are quite good as comic songs. Brent-Gervais has achieved his goal.

Likewise, you leave thinking that Gervais has given us a happy ending because he, Gervais, wants to be liked, just as Brent does. It’s not the angry fuck-the-hicks sort of attitude Gervais exhibited during and immediately after The Office.

And you leave thinking that, like Brent, Gervais really wants to carry the show solely on his shoulders. The Office was an ensemble performance with some fantastic acting by Martin Freeman (!) as Tim and Lucy Davis as Dawn, as well as by Gervais. Life on the Road only cares about one character, as if Gervais wanted to prove he could do it all by his lonesome. But he can’t.

Ricky Gervais pulls his punches in this, not for the first time. Let Ricky be Ricky. Or, more exactly, Let Ricky be David.


January 20, 2017

Maybe we’re not such an awful species

Wired has a post about http://astronaut.io/, a site that shows snippets of recently uploaded YouTube videos that have had no views and that have generic titles.

In just a few minutes, I saw scenes from around the world of what matters to people.

Maybe I was just lucky, but what I saw is what peace is about.


January 11, 2017

[liveblog][bkc] Kishonna Gray


Kishonna Gray [#KishonnaGray] is giving a Berkman-Klein [#BKCHarvard] Tuesday lunch talk. She’s an assistant professor and ML King Scholar at MIT, as well as a fellow at BKC and the author of Race, Gender, and Deviance in Xbox Live. She’s going to talk about a framework, Black Digital Feminism.


She begins by saying, “I’ve been at a crossroads, personally and intellectually,” over the Trump election, the death of black civilians at the hands of police, and the gaming controversies, including Gamergate. How did we get to this point? And what point are we at? “What matters most in this moment?” She’s going to talk about the framework that helps her make sense of some of these things.

Imagine we’re celebrating the 50th birthday of the Berkman Klein Center (in 30 yrs or so). What are we celebrating? The end of online harassment? The dismantling of the heteronormative, white-supremacist hierarchy? Are we telling survivor narratives?

She was moved by an article the day after the election, titled “Black women were the only ones who tried to save the world Tuesday night,” by Charles D. Ellison. She laughed at first, and retweeted it. She was “overwhelmed by the response of people who didn’t think black women have the capacity to do anything except make babies and collect welfare checks.” She recalled many women, including Sojourner Truth, who spoke an important truth to a growing sense in the feminist movement that it was fundamentally a white movement. The norms are so common and hidden that when we notice them we ask how the women broke through the barriers rather than asking why the barriers were there in the first place. It’s as if these women are superhuman. But we need to ask why there are barriers in the first place. [This is a beautifully composed talk. I’m sorry to be butchering it so badly. It will be posted online in a few days.]

In 1869 Frederick Douglass argued that including women in the movement for the vote would reduce the chances of the right to vote being won for black men. “White womanhood has been central in defining white masculinity.” E.g., in Birth of a Nation, white women need protection. Self-definition is the core of intersectionality. Masculinity has mainly protected its own interests and its own fragility, not women. It uses the protection of women to showcase its own dominance.

“Why do we have to insert our own existences into spaces? Why are we not recognized?” The marginalized are no longer accepting their marginalization. For example, look at black women’s digital practices.

Black women have used digital involvement to address marginalization, to breach the boundaries of what’s “normal.” Often that is looked upon as them merely “playing around” with tech. The old frameworks meant that black women couldn’t enter the digital space as who they actually are.

Black Digital Feminism has three elements:

1. Social structural oppression of technology and virtual spaces. Many digital spaces are dominated by assumptions that they are color-blind. Black Lives Matter and Say Her Name are attempts to remind us that blackness is not an intrusion.

2. Intersectional oppressions experience in virtual spaces. Women must work to dismantle the interlocking structures of oppression. Individuals experience oppression in different ways and we don’t want a one-size approach. E.g., the “solidarity is for white women” hashtag is seen as an expression of black women being angry, but it is a reminder that feminism has too often been assumed to be a white issue first.

3. The distinctness of the virtual feminist community. Black Digital Feminism privileges women’s ways of knowing. “NotYourAsianSidekick” is rooted in the radical Asian woman tradition, insisting that they control their own identity. Black women, and others, reject the idea that feminism is the same for all women, disregarding the different forms of oppression women are subject to based upon their race, ethnicity, etc. Women have used social media for social change and to advance critical activism and feminism.

The tenets of Black Digital Feminism cannot be detached from the personal, communal, or political, which sets it apart from techno- and cyber-feminism.

These new technologies are not creating anything. They are providing an outlet. “These groups have never been voiceless. The people in power simply haven’t been listening.” The digital amplifies these voices.

QA

Q: With the new administration, should we be thinking differently?

A: We need to identify the commonalities. Isolated marches won’t do enough. We need to find a way to bring communities together by figuring out what is the common struggle against structural oppression. Black women made sacrifices to oppose Trump, setting aside the “super-predator” stuff from Hillary, but other groups didn’t make equivalent sacrifices.

Q: Does it mean using hashtags differently?

A: This digital culture is only one of many things we can do. We can’t forget the physical community, connecting with people. There are models for doing this.

Q: Did Net Neutrality play a role in enabling the Black community to participate? Do we need to look at NN from a feminist perspective…NN as making every packet have the same weight.

A: NN was key for enabling Black Lives Matter because the gov’t couldn’t suppress that movement’s language, its speech.

Q: Is this perceived as a danger inside the black feminist movement?

A: Tech isn’t neutral, is the idea. It lets us do what we need to do.

Q: Given the work you’ve done on women finding pleasure in spaces (like the Xbox) where they’re not expected to be found, what do you think about our occupying commercial spaces?

A: I’m a lifelong gamer, and I get asked how I can play where there aren’t players, or developers, who look like me. I started the practice of highlighting the people who are there. We’re there, but we’re not noticed. E.g., Pew Research showed recently that half of gamers are women. Console gamers are overwhelmingly black and brown men. We really have to focus on who is in these spaces, and seek them out. My dissertation focused on finding these people, and finding their shared stories: not being noticed or valued. But we should take the extra steps to make sure we locate them. Some people are going to call 2016 the year of the black gamer, with games featuring black protagonists. This is due to a push from marginalized gamers. The resistance is paying off. Even #OscarsSoWhite has paid off, in a more diverse set of Golden Globes nominees.

Q: You navigate between feminist theory and observational work. How did the latter shape the former?

A: When I learned about ethnography I thought it was the most beautiful thing ever created: being immersed in a community and letting people tell their own stories. But when it came time to document that, I realized why we sometimes consider ethnography to be voyeuristic and exploitative. When transcribing, I was expected to “clean up” the speech. “Hell no,” she said. E.g., she left “dem” as “dem,” not “them.” “I refer to people as narrators, not ‘research participants.’” They’re part of the process. She showed them the chapter drafts. E.g., she hasn’t published all her Ferguson work because she wants to make sure that she “leaves the place better.” “You have to stay true to the transformative, liberatory practices that we say we’re doing.” She’s even been criticized for writing too plainly, eschewing academic jargon. “I wanted to make sure that a community that let me into its space understood every word that I wrote.”

Q: There’s been debate about the people who lead the movement. E.g., if I’m not black, I am not best suited to lead the movement in the fight for those rights. OTOH, if we want to advance the rights of women, we have to move the whole society with us.

A: What you’re saying is important. I stopped caring about hurting people’s feelings, because if they’re devoted to the work that needs to be done, they’ve checked their feelings, their fragility, at the door. There is tons of work for allies to do. If it’s a real ally dedicated to the work, they’ll understand. There’s so much work to do. And Trump isn’t even the president yet.

Q: About the application of Black Digital Feminism to the law. (Intersectionality started in law journals.)

A: It’s hard to see how it translates into actual policy, especially now. I don’t know how we’ll push back against what’s to come. E.g., we know evaluations of women are usually lower than of men. So when are we going to stop valuing the evaluations so highly? At the bottom of my evaluations, I write, “Just so you know, these evaluations are filtered through my black woman’s body.”

Q: What do we make of things like “#IamMichelle”, which is like the “I am Spartacus” scene in the movie Spartacus?

A: It depends on the effect it has. I focus on marginalized folks, and their sense of empowerment and pride. There’s some power there, especially in localized communities.

Q: How can white women be supportive?

A: You’ve got to go get your people, the white women who voted. What have you done to change the thinking of the women you know who voted for Trump? That’s where it has to begin. You have to first address your own circle. You may not be able to change them, but you can’t ignore them. That’s step one.

Q: I always like your work because you hearken back to a rich pedigree of black feminism. But the current moment is distinct. E.g., the issues are trans-national. So we need new visions for what we want the future to be. What is the future that we’re fighting for? What does the digital contribute to that vision?

A: It’s important to acknowledge what’s the same. E.g., the death of black people by police is part of the narrative of lynching. The structural and institutional inequalities are the same. Digital tools let us address this differently. BLM is no different from what happened with Rodney King. What future are we fighting for? I guess I haven’t articulated that. I don’t know how we get there. We should first ask how we transform our own spaces. I don’t want the conversation to get too big. The conversation should be small enough to be digestible. We don’t want people to feel helpless.

Q: If I’m a man who asks about Black Digital Feminism [which he is], where can I learn more?

A: You can go to my Web site: www.kishonnaGray.com. And the Berkman Klein community is awesome and ready to go to work.

Q: You write about the importance of claiming identity online. Early on, people celebrated the fact that you could go online without a known identity. Especially now, how do you balance the important task of claiming identity and establishing solidarity with your smaller group, and bonding with your allies in a larger group? Do we need to shift the balance?

A: I haven’t figured out how to create that balance. The communities I’m in are still distinct. When Mike Brown was killed, I realized how distinct the anti-gamergate crowd was from BLM. These are not opposing fights. They’re not so distinct that we can’t fight both at the same time. I ended up working with both, which got me thinking about how to bridge them. But I haven’t figured out how to bring them together.


December 16, 2016

How hackers became political

Biella Coleman has a terrific piece exploring an excellent question: How did hackers become political actors? I’d say “activists,” but that implies a less hands-on approach to the machinery of politics.

Biella combines the virtues of academic rigor with the skills of a writer who knows how to talk about ideas through narrative … sometimes a conventional story, but also through the gradual unfolding of ideas. I’m a fan.


December 3, 2016

[liveblog] Kyle Drake: Making the Web Fun again

Kyle Drake, CEO of Neocities, is talking at the Web 1.0 conference. His topic is how to “bring back the spirit of geocities for the modern web.” The talk is on his “derpy” Web site.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

“When you don’t create things, you become defined by your tastes rather than ability,” said Why the Lucky Stiff. “Remember when everybody created Web sites?” Kyle asks. (He points to a screen capture of Mark Zuckerberg’s homepage, which is still available at the Internet Archive.) In the spirit of fairness, he shows his own first home page. And then some very early-’90s-ish home pages that “highlight the dorkiness of the ’90s Web.”

“They looked bad. But so what? They were fun. They were creative. They were quirky. They were interesting. And what did we replace them with? With a Twitter textbox.” Those textboxes are minimal and the same for everyone. Everyone’s profile at Facebook has the same categories available. “It seems strange to me that we call that new and Web pages old.”

We got rid of the old Web because it wasn’t profitable. “This isn’t progress. It’s a nightmare. So, how do we take the good things about the old Web and modernize it? How do we bring back the old idea of people creating things and expressing themselves?”

That’s why Kyle founded Neocities. 1. It brings back free home pages. 2. No ads. 3. Protects sites against being shut down. It’s open source, too. It currently hosts 100,000 sites.

“This is not nostalgia,” he says. Web sites do things that social networks can’t. A Web site gives you more control and the ability to be more of who you are, with the confidence that the site will persist. And the good news about persistence is that pages still render, often perfectly, even decades later. Also, the Internet Archive can back them up easily. It also makes it easy to create curated lists and collections.

He’s working with IPFS so that Neocities sites can be community hosted.

QA

Me: How does he sustain it financially?

A: You can be a supporter for $5/month.

