
January 11, 2021

Parler and the failure of moral frameworks

This probably is not about what you think it is. It doesn’t take a moral stand about Parler or about its being chased off the major platforms and, in effect, off the Internet. Yet the title of this post is accurate: it’s about why moral frameworks don’t help us solve problems like those posed by Parler.

Traditional moral frameworks

The two major philosophical frameworks we use in the West to assess moral situations are consequentialism (mainly utilitarianism) and deontology. Utilitarianism assesses the morality of a choice based on the cumulative amount of happiness it will bring across the entire population (or how much it diminishes unhappiness). Deontology applies moral principles to cases, such as “It’s wrong to steal.”

Each has its advantages, but I don’t see how to apply them in a way that settles the issues about Parler. Or about most other things.

For example, from almost its very beginning (J.S. Mill, but not Bentham, as far as I remember), utilitarians have had to institute a hierarchy of pleasures in order to meet the objection that if we adopt that framework we should morally prefer policies that promote drunkenness and sex over funding free Mozart concerts. (Just a tad of class bias showing there :) Worse, in a global space, should we declare a small culture’s happiness to be of less worth than that of a culture with a larger population? Indeed, how do we apply utilitarianism to a single culture’s access to, for example, pornography?

That last question raises a different, and common, objection to utilitarianism: suppose overall happiness is increased by ignoring the rights of others? It’s hard for utilitarianism to escape the conclusion that slavery is ok so long as the people held as slaves are greatly outnumbered by those who benefit from them. The other standard example is a contrivance in which a town’s overall happiness is greatly increased by allowing a person the authorities know to be innocent to nevertheless be hanged. That’s because it turns out that most of us have a sense of deontological principles: we don’t care if slavery or hanging innocent people results in an overall happier society, because it’s wrong on principle.

But deontology has its own issues with being applied. The closest Immanuel Kant — the most prominent deontologist — gets to putting some particular value into his Categorical Imperative is to phrase it in terms of treating people as ends, not means, i.e., valuing autonomy. Kant argues that autonomy is central because without it we can’t be moral creatures. But it’s not obvious that it is the highest value for humans, especially in difficult moral situations, nor is it clear how and when to limit people’s autonomy. (Many of us believe we also can’t be fully moral without empathy, but that’s a different argument.)

The relatively new — 30-year-old — ethics of care avoids many of the issues with both of these moral frameworks by giving up a primary interest in general principles or generalized happiness, and instead thinking about morality in terms of relationships with distinct and particular individuals to whom we owe some responsibility of care. It takes as its fundamental and grounding moral behavior a mother’s caring for her child. (Yes, it recognizes that fathers also care for children.) It begins with the particular, not an attempt at the general.

Applying the frameworks to Parler

So, how do any of these help us with the question of de-platforming Parler?

Utilitarians might argue that the existence of Parler as an amplifier of hate threatens to bring down the overall happiness of the world. Of course, the right-wing extremists on Parler would argue exactly the opposite, and would point to the detrimental consequences of giving the monopoly platforms this power. I don’t see how either side convinces the other on this basis.

Deontologists might argue that the de-platforming violates the rights of the users and readers of Parler. Other deontologists might talk about the rights threatened by the consequences of the growth of fascism enabled by Parler. Or they might simply make the utilitarian argument. Again, I don’t see how these frameworks lead to either side convincing the other.

While there has been work done on figuring out how to apply the ethics of care to policy, it generally doesn’t make big claims about settling this sort of issue. But it may be that moral frameworks should not be measured by how effectively they convert opponents, but rather by how well they help us come to our own moral beliefs about issues. In that case, I still don’t see how much they help.

If forced to have an opinion about Parler — and I don’t think I have one worth stating — I’d probably find a way to believe that the harmful consequences of Parler outweigh hindering the human right of the participants to hang out with people they want to talk with and to say whatever they want. My point is definitely not that you ought to believe the same thing, because I’m very uncomfortable with it myself. My point is that moral frameworks don’t help us much.

And, finally, as I posted recently, I think moral questions are getting harder and harder now that we are ever more aware of more people, more opinions, and the complex dynamic networks of people, beliefs, behavior, and policies.

* * *

My old friend AKMA — so learned, wise, and kind that you could plotz — takes me to task in a very thought-provoking way. I reply in the comments.


Categories: echo chambers, ethics, everyday chaos, media, philosophy, policy, politics, social media Tagged with: ethics • free speech • morality • parler • philosophy • platforms Date: January 11th, 2021 dw


September 19, 2017

[bkc] Hate speech on Facebook

I’m at a Very Special Harvard Berkman Klein Center for Internet & Society Tuesday luncheon featuring Monika Bickert, Facebook’s Head of Global Policy Management, in conversation with Jonathan Zittrain. Monika is in charge of what types of content can be shared on FB, how advertisers and developers interact with the site, and FB’s response to terrorist content. [NOTE: I am typing quickly, getting things wrong, missing nuance, filtering through my own interests and biases, omitting what I can’t hear or parse, and not using a spelpchecker. TL;DR: Please do not assume that this is a reliable account.]

Monika: We have more than 2B users…

JZ: Including bots?

MB: Nope, verified. Billions of messages are posted every day.

[JZ posts some bullet points about MB’s career, which is awesome.]

JZ: Audience, would you want to see photos of abused dogs taken down? Assume they’re put up without context. [It sounds to me like more do not want them taken down.]

MB: The Guardian covered this. [Maybe here?] The useful part was that it highlighted how much goes into the process of deciding these things. E.g., what counts as mutilation of an animal? The Guardian published what it said were FB’s standards, not all of which actually were.

MB: For user-generated content there’s a set of standards that’s made public. When a comment is reported to FB, it goes to an FB content reviewer.

JZ: What does it take to be one of those? What does it pay?

MB: It’s not an existing field. Some have content-area expertise, e.g., terrorism. It’s not a minimum-wage sort of job. It’s a difficult, serious job. People go through extensive training, and continuing training. Each reviewer is audited. They take quizzes from time to time. Our policies change constantly. We have something like a mini legislative session every two weeks to discuss proposed policy changes, considering internal suggestions, international input, and input from external experts as well, e.g., the ACLU.

MB: About animal abuse: we consider context. Is it a protest against animal cruelty? After a natural disaster, you’ll see awful images. It gets very complicated. E.g., someone posts a photo of a bleeding body in Syria with no caption, or just “Wow.” What do we do?

JZ: This is worlds away from what lawyers learn about the First Amendment.

MB: Yes, we’re a private company so the First Amendment doesn’t apply. Behind our rules is the idea that FB should be a place where people feel safe connecting and expressing themselves. You don’t have to agree with the content, but you should feel safe.

JZ: Hate speech was defined as an attack against a protected category…

MB: We don’t allow hate speech, but no two people define it the same way. For us, it’s hate speech if you are attacking a person or a group of people based upon a protected characteristic — race, gender, gender identification, etc. Sounds easy in concept, but applying it is hard. Our rule is that if I say something about a protected category and it’s an attack, we’d consider it hate speech and remove it.

JZ: The Guardian said that in training there’s a quiz. Q: Who do we protect: Women drivers, black children, or white men? A: White men.

MB: Not our policy any more. Our policy was that if there’s another characteristic besides the protected category, it’s not hate speech. So an attack on black children wasn’t hate speech, but an attack on white men was, because of the inclusion of “children.” But we’ve changed that. Now we would consider attacks on women drivers and black children as hate speech. But when you introduce other characteristics, such as profession, it’s harder. We’re evaluating and testing policies now. We try marking content and doing a blind test to see how it affects outcomes. [I don’t understand that. Sorry.]
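To make the before-and-after logic concrete, here’s a toy sketch of the rule change as I understood it. This is purely my illustration: the category names and functions are invented, and it is certainly not Facebook’s actual policy engine.

```python
# Toy model of the rule change as described -- invented for illustration,
# not Facebook's actual policy engine or category list.

PROTECTED = {"race", "gender", "gender identity", "religion"}
NOW_ALSO_PROTECTED = {"age"}  # modifiers like "children" no longer strip protection

def old_rule_flags(targets: set) -> bool:
    """Old rule: any characteristic outside the protected set removed protection."""
    return targets <= PROTECTED

def new_rule_flags(targets: set) -> bool:
    """New rule: age-like modifiers no longer strip protection; profession still does."""
    return targets <= (PROTECTED | NOW_ALSO_PROTECTED)

# "black children" attacks {race, age}; "white men" attacks {race, gender}.
print(old_rule_flags({"race", "age"}))         # False: not flagged under the old rule
print(old_rule_flags({"race", "gender"}))      # True: the quiz answer JZ cited
print(new_rule_flags({"race", "age"}))         # True: flagged under the new rule
print(new_rule_flags({"race", "profession"}))  # False: the still-unsettled hard case
```

The subset check is the easy part, of course; everything hard, like deciding what counts as an attack in the first place, is assumed away.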

JZ: Should the internal policy be made public?

MB: I’d be in favor of it. Making the training decks transparent would also be useful. It’s easier if you make clear where the line is.

JZ: Do protected categories shift?

MB: Yes, generally. I’ve been at FB for 5.5 yrs, in this area for 4 yrs. Overall, we’ve gotten more restrictive. Sometimes something becomes a topic of news and we want to make sure people can discuss it.

JZ: Didi Delgado’s post “all white people are racist” was deleted. But it would have been deleted if it had said that all black people are racist, right?

MB: Yes. If it’s a protected characteristic, we’ll protect it. [Ah, if only life were that symmetrical.]

JZ: How about calls to violence, e.g., “Someone shoot Trump/Hillary”? Raise your hand if you think it should be taken down. [Sounds like most would let it stand.]

JZ: How about “Kick a person with red hair.” [most let it stand]

JZ: How about: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat.” [most let it stand] [Fuck, that’s hard to see up on the screen.]

JZ: “Let’s beat up the fat kids.” [most let it stand]

JZ: “#stab and become the fear of the Zionist” [most take it down]

MB: We don’t allow credible calls for violence.

JZ: Suppose I, a non-public figure, posted “Post one more insult and I’ll kill you.”

MB: We’d take that down. We also look at the degree of violence. Beating up and kicking might not rise to the standard. Snapping someone’s neck would be taken down, although if it were purely instructions on how to do something, we’d leave it up. “Zionist” is often associated with hate speech, and stabbing is serious, so we’d take them down. We leave room for aspirational statements wishing some bad thing would happen. “Someone should shoot them all” we’d count as a call to violence. We also look for specificity, as in “Let’s kill JZ. He leaves work at 3.” And we look at the vulnerability of people; if it’s a dangerous situation, we’ll tend to treat all such things as calls to violence. [These are tough questions, but I’m not aligned with FB’s decisions on this.]

JZ: How long does someone spend reviewing this stuff?

MB: Some is easy. Nudity is nudity, although we let breast cancer photos through. But a beheading video is prohibited no matter what the context. Profiles can be very hard to evaluate. E.g., is this person a terrorist?

JZ: Given the importance of FB, does it seem right that these decisions reside with FB as a commercial entity? Or is there some other source for these decisions that would actually be a relief?

MB: We’re not making these decisions in a silo. We reach out for opinions outside of the company. We have a Safety Advisory Board, a Global Safety Network [got that wrong, I think], etc.

JZ: These decisions are global? If I insult the Thai King…

MB: That doesn’t violate our global community standards. We have a group of academics around the world, and people on our team, who are counter-terrorism experts. It’s very much a conversation with the community.

JZ: FB requires real names, which can be a form of self-doxxing. Is the Real Name policy going to evolve?

MB: It’s evolved a little as to what counts as your real name, i.e., the name people call you as opposed to what’s on your driver’s license. Using your real name has always been a cornerstone of FB. A quintessential element of FB.

JZ: You don’t force disambiguation among all the Robert Smiths…

MB: When you communicate with people you know, you know you know them. We don’t want people to be communicating with people who are not who you think they are. When you share something on FB, it’s not simply public or private: you can choose which groups you want to share it with, so you know who will see it. That’s part of the real name policy as well.

MB: We have our community standards. Sometimes we get requests from countries to remove violations of their law, e.g., insults to the King of Thailand. If we get such a request and it doesn’t violate the standards, we look at whether the request is actually about real law in that country. Then we ask if it is political speech; if it is, to the extent possible, we’ll push back on those requests. E.g., Germany has a little more subjectivity in its hate speech laws. The Germans may notify us about something that violates those laws, and if it does not violate our global standards, we’ll remove it in Germany only. (It’s done by IP addresses, the language you’re using, etc.) When we do that, we include it in our six-month reports. If it’s removed, you see a notice that the content is restricted in your jurisdiction.
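As an aside, the decision flow she describes could be sketched roughly like this. It’s my reconstruction from her description, with invented names and structure, not FB’s actual process.

```python
# My reconstruction of the flow MB described -- names and structure invented.
from dataclasses import dataclass

@dataclass
class TakedownRequest:
    country: str
    violates_global_standards: bool  # FB's own community standards
    violates_local_law: bool         # verified against the country's actual law
    is_political_speech: bool

def handle(req: TakedownRequest) -> str:
    if req.violates_global_standards:
        return "remove under community standards"
    if not req.violates_local_law:
        return "leave up"  # the request isn't grounded in real local law
    if req.is_political_speech:
        return "push back, to the extent possible"
    # Otherwise geo-restrict (by IP address, language, etc.) and log it in
    # the six-month transparency report.
    return f"restrict in {req.country} only"

# E.g., an insult to the King of Thailand: local legal issue, political speech.
print(handle(TakedownRequest("Thailand", False, True, True)))
```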

Q&A

Q: Have you spoken to users about people from different cultures and backgrounds reviewing their content?

A: It’s a legitimate question. E.g., when it comes to nudity, even a room of people as homogeneous as this one will disagree. So our rules are written to be very objective. And we’re increasingly using tech to make these decisions. E.g., it’s easy to automate the finding of links to porn or spam, and much harder to evaluate speech.

Q: What drives change in these policies and algorithms?

A: It’s constantly happening. And public conversation is helpful. And our reviewers raise issues.

Q: a) When there are very contentious political issues, how do you prevent bias? b) Are there checks on FB promoting some agenda?

A: a) We don’t have a rule saying that people from one or another country can review contentious posts. But we review the reviewers’ decisions every week. b) The transparency report we put out every six months is one such check. If we don’t listen to feedback, we tend to see news stories calling us out on it.

[Monika now quickly addresses some of the questions from the open question tool.]

Q: Would you send reports to Lumen? MB: We don’t currently record why decisions were made.

Q: How to prevent removal policies from being weaponized by trolls or censorious regimes? MB: We treat all reports the same — there’s an argument that we shouldn’t — but we don’t continuously re-review posts.

JZ: For all of the major platforms struggling with these issues, is it your instinct that it’s just a matter of incrementally getting this right: bringing in more people, continuing to use AI, etc.? Or do you think sometimes that this is just nuts, that there’s got to be a better way?

MB: There’s a tension between letting everyone see what they want and having global standards. People say the US hates hate speech and the Germans not so much, but there’s actually a spectrum in each. The catch is that there’s content that you’re going to be ok seeing but that we think is not ok to be shared.

[Monika was refreshingly direct, and these are, I believe, literally impossible problems. But I came away thinking that FB’s position has a lot to do with covering their butt at the expense of protecting the vulnerable. E.g., they treat all protected classes equally, even though some of us — er, me — are in top o’ the heap, privileged classes. The result is that FB applies a rule equally to all, which can bring inequitable results. That’s easier and safer, but it’s not like I have a solution to these intractable problems.]


Categories: culture Tagged with: facebook • free speech • governance • hate speech Date: September 19th, 2017 dw


December 19, 2013

Rights, Don’t Ask Don’t Tell, and Ducks

So, some guy on a TV show I never saw said some stuff I don’t agree with about homosexuality. He thinks it’s a sin akin to a whole bunch of other sex-related sins. After the affair blew up, he responded, “I would never treat anyone with disrespect just because they are different from me. We are all created by the Almighty and like Him, I love all of humanity.” In the original interview he also described his experience as “white trash” working alongside African-Americans, saying that he never saw them mistreated. I believe him. He never saw that. Ok.

I don’t much care about the details of the incident, so if you want to tell me that I’m not understanding the horribleness of what he said, I’m not going to argue with you. I really haven’t researched it. But the debate is irking me.

I am reading too many of my compatriots — and, by the way, welcome to marriage equality, New Mexico! — saying that it was ok for A&E to fire Phil Robertson (the Duck Dynasty guy in question) because the First Amendment constrains the actions only of the government. So, I assume A&E had every legal and Constitutional right to fire Robertson for what he said.

So what? The question isn’t what A&E is allowed to do or what the First Amendment forbids. The question is: What makes this country a better place in which to live? Do we want to live in a place where you can’t state your opinion without worrying that you may be fired? How much variance from the orthodoxy are we willing to permit? And, yes, I feel the same way about refusing to buy from a local store because of a political sign in its window that you disagree with. Your Republican hardware store owner has a right to make a living!

Do we really think America is better if the many people who think homosexuality is a sin are forbidden from saying so? The ironic revenge of Don’t ask, don’t tell?

Jeez. We need some room for disagreement here!


Just to anticipate the comments: Yes, I would feel the same way if he had said, “Everyone knows the Jews own the banks.” And, yes, there are things he could say that would make him so toxic that I’d agree that the network should fire him. For example, if he had threatened violence, or had used language so inflammatory that it could lead to violence. There are lines. We’re just drawing them wrong. IMO.


Categories: culture Tagged with: duck dynasty • free speech Date: December 19th, 2013 dw


