[bkc] Hate speech on Facebook

I’m at a Very Special Harvard Berkman Klein Center for Internet & Society Tuesday luncheon featuring Monika Bickert, Facebook’s Head of Global Policy Management, in conversation with Jonathan Zittrain. Monika is in charge of what types of content can be shared on FB, how advertisers and developers interact with the site, and FB’s response to terrorist content. [NOTE: I am typing quickly, getting things wrong, missing nuance, filtering through my own interests and biases, omitting what I can’t hear or parse, and not using a spelpchecker. TL;DR: Please do not assume that this is a reliable account.]

Monika: We have more than 2B users…

JZ: Including bots?

MB: Nope, verified. Billions of messages are posted every day.

[JZ posts some bullet points about MB’s career, which is awesome.]

JZ: Audience, would you want to see photos of abused dogs taken down? Assume they’re put up without context. [It sounds to me like more do not want it taken down.]

MB: The Guardian covered this. [Maybe here?] The useful part was that it highlighted how much goes into the process of deciding these things. E.g., what counts as mutilation of an animal? The Guardian published what it said were FB’s standards, not all of which actually were.

MB: For user generated content there’s a set of standards that’s made public. When a comment is reported to FB, it goes to a FB content reviewer.

JZ: What does it take to be one of those? What does it pay?

MB: It’s not an existing field. Some have content-area expertise, e.g., terrorism. It’s not a minimum wage sort of job. It’s a difficult, serious job. People go through extensive training, and continuing training. Each reviewer is audited. They take quizzes from time to time. Our policies change constantly. We have something like a mini legislative session every two weeks to discuss proposed policy changes, considering internal suggestions, including international input, and external expert input as well, e.g., ACLU.

MB: About animal abuse: we consider context. Is it a protest against animal cruelty? After a natural disaster, you’ll see awful images. It gets very complicated. E.g., someone posts a photo of a bleeding body in Syria with no caption, or just “Wow.” What do we do?

JZ: This is worlds away from what lawyers learn about the First Amendment.

MB: Yes, we’re a private company so the First Amendment doesn’t apply. Behind our rules is the idea that FB should be a place where people feel safe connecting and expressing themselves. You don’t have to agree with the content, but you should feel safe.

JZ: Hate speech was defined as an attack against a protected category…

MB: We don’t allow hate speech, but no two people define it the same way. For us, it’s hate speech if you are attacking a person or a group of people based upon a protected characteristic — race, gender, gender identification, etc. Sounds easy in concept, but applying it is hard. Our rule is if I say something about a protected category and it’s an attack, we’d consider it hate speech and remove it.

JZ: The Guardian said that in training there’s a quiz. Q: Who do we protect: Women drivers, black children, or white men? A: White men.

MB: Not our policy any more. Our policy was that if there’s another characteristic beside the protected category, it’s not hate speech. So, attacking black children was ok but not white men, because of the inclusion of “children.” But we’ve changed that. Now we would consider attacks on women drivers and black children as hate speech. But when you introduce other characteristics such as profession, it’s harder. We’re evaluating and testing policies now. We try marking content and doing a blind test to see how it affects outcomes. [I don’t understand that. Sorry.]
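Her description of the old and new rules amounts to a small piece of decision logic. Here is a minimal sketch of it in Python, with hypothetical names and an over-simplified trait set; it ignores the harder cases she mentions, like professions, and is obviously not FB’s actual implementation.

```python
# Illustrative sketch only (hypothetical names, not Facebook's code): the old
# rule dropped protection when any non-protected trait qualified the target,
# while the current rule keeps protection if any protected trait is present.

PROTECTED = {"race", "gender", "gender identity", "religion", "national origin"}

def is_hate_speech_old(target_traits: set, is_attack: bool) -> bool:
    # Old rule: "black children" ({"race", "age"}) lost protection because
    # "children"/"age" is not a protected characteristic.
    return is_attack and bool(target_traits) and target_traits <= PROTECTED

def is_hate_speech_new(target_traits: set, is_attack: bool) -> bool:
    # Current rule (as described): an attack counts if any protected
    # characteristic is part of how the target is defined.
    return is_attack and bool(target_traits & PROTECTED)

# "white men" is covered under both rules; "black children" only under the new one.
assert is_hate_speech_old({"race", "gender"}, True) and is_hate_speech_new({"race", "gender"}, True)
assert not is_hate_speech_old({"race", "age"}, True) and is_hate_speech_new({"race", "age"}, True)
```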

JZ: Should the internal policy be made public?

MB: I’d be in favor of it. Making the training decks transparent would also be useful. It’s easier if you make clear where the line is.

JZ: Do protected categories shift?

MB: Yes, generally. I’ve been at FB for 5.5 yrs, in this area for 4 yrs. Overall, we’ve gotten more restrictive. Sometimes something becomes a topic of news and we want to make sure people can discuss it.

JZ: Didi Delgado’s post “all white people are racist” was deleted. But it would have been deleted if it had said that all black people are racist, right?

MB: Yes. If it’s a protected characteristic, we’ll protect it. [Ah, if only life were that symmetrical.]

JZ: How about calls to violence, e.g., “Someone shoot Trump/Hillary”? Raise your hand if you think it should be taken down. [Sounds like most would let it stand.]

JZ: How about “Kick a person with red hair.” [most let it stand]

JZ: How about: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat.” [most let it stand][fuck, that’s hard to see up on the screen.]

JZ: “Let’s beat up the fat kids.” [most let it stand]

JZ: “#stab and become the fear of the Zionist” [most take it down]

MB: We don’t allow credible calls for violence.

JZ: Suppose I, a non-public figure, posted “Post one more insult and I’ll kill you.”

MB: We’d take that down. We also look at the degree of violence. Beating up and kicking might not rise to the standard. Snapping someone’s neck would be taken down, although if it were purely instructions on how to do something, we’d leave it up. “Zionist” is often associated with hate speech, and stabbing is serious, so we’d take them down. We leave room for aspirational statements wishing some bad thing would happen. “Someone should shoot them all” we’d count as a call to violence. We also look for specificity, as in “Let’s kill JZ. He leaves work at 3.” We also look at the vulnerability of people; if it’s a dangerous situation, we’ll tend to treat all such things as calls to violence. [These are tough questions, but I’m not aligned with FB’s decisions on this.]
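MB’s criteria (degree of violence, whether the post is purely instructional or merely aspirational, specificity, and the vulnerability of the people involved) read like a checklist. A hedged sketch of that checklist follows, with hypothetical parameter names rather than anything FB actually uses; the real process is human review against written policy, not a formula.

```python
# The factors listed above, expressed as a crude checklist. Illustrative only.

def should_remove_violent_post(instructions_only: bool,
                               target_in_dangerous_situation: bool,
                               specific_target_and_details: bool,
                               aspirational_wish_only: bool,
                               degree_is_serious: bool) -> bool:
    if instructions_only:                 # pure "how to" content is left up
        return False
    if target_in_dangerous_situation:     # vulnerable context: treat as a call to violence
        return True
    if specific_target_and_details:       # specificity ("He leaves work at 3")
        return True
    if aspirational_wish_only:            # wishing a bad thing would happen is left up
        return False
    return degree_is_serious              # snapping a neck yes; kicking maybe not

# "Let's kill JZ. He leaves work at 3." -> removed for specificity.
print(should_remove_violent_post(False, False, True, False, True))  # True
```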

JZ: How long does someone spend reviewing this stuff?

MB: Some is easy. Nudity is nudity, although we let breast cancer photos through. But a beheading video is prohibited no matter what the context. Profiles can be very hard to evaluate. E.g., is this person a terrorist?

JZ: Given the importance of FB, does it seem right that these decisions reside with FB as a commercial entity? Or is there some other source of authority that would actually be a relief?

MB: We’re not making these decisions in a silo. We reach out for opinions outside of the company. We have a Safety Advisory Board, a Global Safety Network [got that wrong, I think], etc.

JZ: These decisions are global? If I insult the Thai King…

MB: That doesn’t violate our global community standard. We have a group of academics around the world, and people on our team, who are counter-terrorism experts. It’s very much a conversation with the community.

JZ: FB requires real names, which can be a form of self-doxxing. Is the Real Name policy going to evolve?

MB: It’s evolved a little in terms of what counts as your real name, i.e., the name people call you as opposed to what’s on your driver’s license. Using your real name has always been a cornerstone of FB. A quintessential element of FB.

JZ: You don’t force disambiguation among all the Robert Smiths…

MB: When you communicate with people you know, you know you know them. We don’t want people to be communicating with people who are not who you think they are. When you share something on FB, it’s not simply public or private. You can choose which groups you want to share it with, so you know who will see it. That’s part of the real name policy as well.

MB: We have our community standards. Sometimes we get requests from countries to remove violations of their law, e.g., insults to the King of Thailand. If we get such a request and it doesn’t violate the standards, we look at whether the request actually reflects real law in that country. Then we ask if it is political speech; if it is, to the extent possible, we’ll push back on those requests. E.g., Germans have a little more subjectivity in their hate speech laws. They may notify us about something that violates those laws, and if it does not violate our global standards, we’ll remove it in Germany only. (It’s done by IP addresses, the language you’re using, etc.) When we do that, we include it in our six-month reports. If it’s removed, you see a notice that the content is restricted in your jurisdiction.
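The country-request flow she describes can be summarized as a short decision sketch. Everything below is hypothetical (function, class, and field names are mine, not Facebook’s); the geo-restriction itself relies on signals like IP address and interface language, which are only gestured at here.

```python
# Sketch of the government-request flow described above. Hypothetical names.

from dataclasses import dataclass

@dataclass
class CountryRequest:
    country_code: str             # e.g., "DE" or "TH"
    cites_actual_local_law: bool  # does the request rest on a real law there?

def handle_government_report(violates_global_standards: bool,
                             is_political_speech: bool,
                             request: CountryRequest) -> str:
    if violates_global_standards:
        return "remove_globally"
    if not request.cites_actual_local_law or is_political_speech:
        return "push_back"        # pushed back on to the extent possible
    # Violates local law only: restrict in that jurisdiction (decided via IP
    # address, language, etc.), note it in the six-month report, and show a
    # "restricted in your jurisdiction" notice to viewers there.
    return f"restrict_in:{request.country_code}"

print(handle_government_report(False, False, CountryRequest("DE", True)))  # restrict_in:DE
```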

Q&A

Q: Have you spoken to users about people from different cultures and backgrounds reviewing their content?

A: It’s a legitimate question. E.g., when it comes to nudity, even a room of people as homogenous as this one will disagree. So, our rules are written to be very objective. And we’re increasingly using tech to make these decisions. E.g., it’s easy to automate finding links to porn or spam, and much harder to evaluate speech.
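Her contrast between automatable and non-automatable decisions is easy to illustrate: matching a reported link against a known-bad list is a normalization-and-lookup problem, while deciding whether a sentence attacks a protected group is not. A minimal, hypothetical sketch (the blocklist is made up):

```python
# Why link/spam detection automates easily: normalize the URL, then look it up.
# (Hypothetical blocklist; judging speech offers no such lookup.)

from urllib.parse import urlparse

KNOWN_BAD_DOMAINS = {"spam.example", "porn.example"}

def is_known_bad_link(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    if host.startswith("www."):
        host = host[4:]
    return host in KNOWN_BAD_DOMAINS

print(is_known_bad_link("http://www.spam.example/win-big"))  # True
print(is_known_bad_link("https://example.org/essay"))        # False
```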

Q: What drives change in these policies and algorithms?

A: It’s constantly happening. And public conversation is helpful. And our reviewers raise issues.

Q: a) When there are very contentious political issues, how do you prevent bias? b) Are there checks on FB promoting some agenda?

A: a) We don’t have a rule saying that people from one or another country can review contentious posts. But we review the reviewers’ decisions every week. b) The transparency report we put out every six months is one such check. If we don’t listen to feedback, we tend to see news stories calling us out on it.

[Monika now quickly addresses some of the questions from the open question tool.]

Q: Would you send reports to Lumen? MB: We don’t currently record why decisions were made.

Q: How to prevent removal policies from being weaponized by trolls or censorious regimes? MB: We treat all reports the same — there’s an argument that we shouldn’t — but we don’t continuously re-review posts.

JZ: For all of the major platforms struggling with these issues, is it your instinct that it’s just a matter of incrementally getting this right (bringing in more people, continuing to use AI, etc.), or do you think sometimes that this is just nuts and there’s got to be a better way?

MB: There’s a tension between letting everyone see what they want and having global standards. People say the US hates hate speech and the Germans not so much, but there’s actually a spectrum in each. The catch is that there’s content that you’re going to be ok seeing but we think is not ok to be shared.

[Monika was refreshingly direct, and these are, I believe, literally impossible problems. But I came away thinking that FB’s position has a lot to do with covering their butt at the expense of protecting the vulnerable. E.g., they treat all protected classes equally, even though some of us — er, me — are in top o’ the heap, privileged classes. The result is that FB applies a rule equally to all, which can bring inequitable results. That’s easier and safer, but it’s not like I have a solution to these intractable problems.]
