
September 19, 2017

[bkc] Hate speech on Facebook

I’m at a Very Special Harvard Berkman Klein Center for Internet & Society Tuesday luncheon featuring Monika Bickert, Facebook’s Head of Global Policy Management, in conversation with Jonathan Zittrain. Monika is in charge of what types of content can be shared on FB, how advertisers and developers interact with the site, and FB’s response to terrorist content. [NOTE: I am typing quickly, getting things wrong, missing nuance, filtering through my own interests and biases, omitting what I can’t hear or parse, and not using a spelpchecker. TL;DR: Please do not assume that this is a reliable account.]

Monika: We have more than 2B users…

JZ: Including bots?

MB: Nope, verified. Billions of messages are posted every day.

[JZ posts some bullet points about MB’s career, which is awesome.]

JZ: Audience, would you want to see photos of abused dogs taken down? Assume they’re put up without context. [It sounds to me like more do not want them taken down.]

MB: The Guardian covered this. [Maybe here?] The useful part was that it highlighted how much goes into the process of deciding these things. E.g., what counts as mutilation of an animal? The Guardian published what it said were FB’s standards, not all of which actually were.

MB: For user generated content there’s a set of standards that’s made public. When a comment is reported to FB, it goes to a FB content reviewer.

JZ: What does it take to be one of those? What does it pay?

MB: It’s not an existing field. Some have content-area expertise, e.g., terrorism. It’s not a minimum wage sort of job. It’s a difficult, serious job. People go through extensive training, and continuing training. Each reviewer is audited. They take quizzes from time to time. Our policies change constantly. We have something like a mini legislative session every two weeks to discuss proposed policy changes, considering internal suggestions, including international input, and external expert input as well, e.g., ACLU.

MB: About animal abuse: we consider context. Is it a protest against animal cruelty? After a natural disaster, you’ll see awful images. It gets very complicated. E.g., someone posts a photo of a bleeding body in Syria with no caption, or just “Wow.” What do we do?

JZ: This is worlds away from what lawyers learn about the First Amendment.

MB: Yes, we’re a private company so the Amendment doesn’t apply. Behind our rules is the idea that FB should be a place where people feel safe connecting and expressing themselves. You don’t have to agree with the content, but you should feel safe.

JZ: Hate speech was defined as an attack against a protected category…

MB: We don’t allow hate speech, but no two people define it the same way. For us, it’s hate speech if you are attacking a person or a group of people based upon a protected characteristic — race, gender, gender identification, etc. Sounds easy in concept, but applying it is hard. Our rule is that if I say something about a protected category and it’s an attack, we consider it hate speech and remove it.

JZ: The Guardian said that in training there’s a quiz. Q: Who do we protect: Women drivers, black children, or white men? A: White men.

MB: Not our policy any more. Our policy was that if there’s another characteristic beside the protected category, it’s not hate speech. So, attacking black children was ok but not white men, because of the inclusion of “children.” But we’ve changed that. Now we would consider attacks on women drivers and black children as hate speech. But when you introduce other characteristics such as profession, it’s harder. We’re evaluating and testing policies now. We try marking content and doing a blind test to see how it affects outcomes. [I don’t understand that. Sorry.]

JZ: Should the internal policy be made public?

MB: I’d be in favor of it. Making the training decks transparent would also be useful. It’s easier if you make clear where the line is.

JZ: Do protected categories shift?

MB: Yes, generally. I’ve been at FB for 5.5 yrs, in this area for 4 yrs. Overall, we’ve gotten more restrictive. Sometimes something becomes a topic of news and we want to make sure people can discuss it.

JZ: Didi Delgado’s post “all white people are racist” was deleted. But it would have been deleted if it had said that all black people are racist, right?

MB: Yes. If it’s a protected characteristic, we’ll protect it. [Ah, if only life were that symmetrical.]

JZ: How about calls to violence, e.g., “Someone shoot Trump/Hillary”? Should it be taken down? [Sounds like most would let it stand.]

JZ: How about “Kick a person with red hair.” [most let it stand]

JZ: How about: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat.” [most let it stand] [fuck, that’s hard to see up on the screen.]

JZ: “Let’s beat up the fat kids.” [most let it stand]

JZ: “#stab and become the fear of the Zionist” [most take it down]

MB: We don’t allow credible calls for violence.

JZ: Suppose I, a non-public figure, posted “Post one more insult and I’ll kill you.”

MB: We’d take that down. We also look at the degree of violence. Beating up and kicking might not rise to the standard. Snapping someone’s neck would be taken down, although if it were purely instructions on how to do something, we’d leave it up. “Zionist” is often associated with hate speech, and stabbing is serious, so we’d take them down. We leave room for aspirational statements wishing some bad thing would happen. “Someone should shoot them all” we’d count as a call to violence. We also look for specificity, as in “Let’s kill JZ. He leaves work at 3.” We also look at the vulnerability of people; if it’s a dangerous situation, we’ll tend to treat all such things as calls to violence. [These are tough questions, but I’m not aligned with FB’s decisions on this.]

JZ: How long does someone spend reviewing this stuff?

MB: Some is easy. Nudity is nudity, although we let breast cancer photos through. But a beheading video is prohibited no matter what the context. Profiles can be very hard to evaluate. E.g., is this person a terrorist?

JZ: Given the importance of FB, does it seem right that these decisions reside with FB as a commercial entity? Or is there some other source that would actually be a relief?

MB: We’re not making these decisions in a silo. We reach out for opinions outside of the company. We have a Safety Advisory Board, a Global Safety Network [got that wrong, I think], etc.

JZ: These decisions are global? If I insult the Thai King…

MB: That doesn’t violate our global community standard. We have a group of academics around the world, and people on our team, who are counter-terrorism experts. It’s very much a conversation with the community.

JZ: FB requires real names, which can be a form of self-doxxing. Is the Real Name policy going to evolve?

MB: It’s evolved a little about what counts as your real name, i.e., the name people call you as opposed to what’s on your driver’s license. Using your real name has always been a cornerstone of FB. A quintessential element of FB.

JZ: You don’t force disambiguation among all the Robert Smiths…

MB: When you communicate with people you know, you know you know them. We don’t want people to be communicating with people who are not who you think they are. When you share something on FB, it’s not just public or private. You can choose which groups you want to share it with, so you know who will see it. That’s part of the real name policy as well.

MB: We have our community standards. Sometimes we get requests from countries to remove violations of their law, e.g., insults to the King of Thailand. If we get such a request and it doesn’t violate our standards, we look at whether the request is actually based on a real law in that country. Then we ask whether it is political speech; if it is, to the extent possible, we’ll push back on those requests. E.g., Germans have a little more subjectivity in their hate speech laws. They may notify us about something that violates those laws, and if it does not violate our global standards, we’ll remove it in Germany only. (It’s done by IP addresses, the language you’re using, etc.) When we do that, we include it in our six-month reports. If it’s removed, you see a notice that the content is restricted in your jurisdiction.

Q&A

Q: Have you spoken to users about people from different cultures and backgrounds reviewing their content?

A: It’s a legitimate question. E.g., when it comes to nudity, even a room of people as homogenous as this one will disagree. So, our rules are written to be very objective. And we’re increasingly using tech to make these decisions. E.g., it’s easy to automate the finding of links to porn or spam, and much harder to evaluate speech.

Q: What drives change in these policies and algorithms?

A: It’s constantly happening. And public conversation is helpful. And our reviewers raise issues.

Q: a) When there are very contentious political issues, how do you prevent bias? b) Are there checks on FB promoting some agenda?

A: a) We don’t have a rule saying that people from one or another country can review contentious posts. But we review the reviewers’ decisions every week. b) The transparency report we put out every six months is one such check. If we don’t listen to feedback, we tend to see news stories calling us out on it.

[Monika now quickly addresses some of the questions from the open question tool.]

Q: Would you send reports to Lumen? MB: We don’t currently record why decisions were made.

Q: How to prevent removal policies from being weaponized by trolls or censorious regimes? MB: We treat all reports the same — there’s an argument that we shouldn’t — but we don’t continuously re-review posts.

JZ: For all of the major platforms struggling with these issues, is it your instinct that it’s just a matter of incrementally getting this right, bringing in more people, continuing to use AI, etc.? Or do you think sometimes that this is just nuts and there’s got to be a better way?

MB: There’s a tension between letting everyone see what they want and having global standards. People say the US hates hate speech and the Germans not so much, but there’s actually a spectrum in each. The catch is that there’s content that you’re going to be ok seeing but we think is not ok to be shared.

[Monika was refreshingly direct, and these are, I believe, literally impossible problems. But I came away thinking that FB’s position has a lot to do with covering their butt at the expense of protecting the vulnerable. E.g., they treat all protected classes equally, even though some of us — er, me — are in top o’ the heap, privileged classes. The result is that FB applies a rule equally to all, which can bring inequitable results. That’s easier and safer, but it’s not like I have a solution to these intractable problems.]


Categories: culture Tagged with: facebook • free speech • governance • hate speech Date: September 19th, 2017 dw


February 18, 2017

The Keynesian Marketplace of Ideas

The awesome Tim Hwang (disclosure: I am a complete fanboy) has posted an essay arguing that we should take something like a Keynesian approach to the “marketplace of ideas” that we were promised with the Internet. I think there’s something really helpful about this, but that ultimately the metaphor gets in the way of itself.

The really helpful piece:

…our mental model of the marketplace of ideas has stayed roughly fixed even as the markets themselves have changed dramatically.

…I wonder if we might take a more Keynesian approach to the marketplace of ideas: holding that free economies of ideas are frequently efficient, and functional. But, like economic marketplaces, they are susceptible to persistent recessions and bad, self-reinforcing equilibria that require systemic intervention at critical junctures.

This gives us a way to think about intervening when necessary, rather than continually bemoaning the failure of idea markets or, worse, fleeing from them entirely.

The analogy leads Tim to two major suggestions:

…major, present day idea marketplaces like Facebook are not laissez-faire. They feature deep, constant interventionism on the part of the platform to mediate and shape idea market outcomes through automation and algorithm. Digital Keynesians would resist these designs: marketplaces of ideas are typically functional without heavy mediation and platform involvement, and doing so creates perverse distortions. Roll back algorithmic content curation, roll back friend suggestions, and so on.

Second, we should develop a

clearer definition of the circumstances under which platforms and governments would intervene to right the ship more extensively during a crisis in the marketplace.

There’s no arguing with the desirability of the second suggestion. In fact, we can ask why we haven’t already developed these criteria and this box of tools.

The answer, I think, is in Tim’s observation that “marketplaces of ideas are typically functional without heavy mediation and platform involvement.” I think that misses the mark both in old-fashioned and new-fangled marketplaces of ideas. All of them assume a particular embodiment of those ideas, and thus those ideas are always mediated by the affordances of their media — one-to-many newspapers, a Republic of Letters that moves at the speed of wind, even backyard fences over which neighbors chat — and by norms and regulations (or architecture, law, markets, and norms, as Larry Lessig says). Facebook and Twitter cannot exist except as interventions. What else can you call Facebook’s decisions about which options to offer about who gets to see your posts, and Twitter’s insistence on a 140-character limit? It seems artificial to me to insist on a difference between those interventions and the algorithmic filtering that Facebook does in order to address its scale issues (as well as to make a buck or two).

As a result, in the Age of the Internet, we have something closer to a marketplace of idea marketplaces, spanning a spectrum of how laissez their faire is. (I know that’s wrong.) These marketplaces usually can’t “trade” across their boundaries except in quite primitive ways, such as pasting a tweet link into Facebook. And they don’t agree about the most basic analogic elements of an economy: who gets to participate and under what circumstances, what counts as currency, what counts as a transaction, how to measure the equivalence of an exchange, the role of intermediaries, the mechanisms of trust, and the recourse available when trust is broken.

So, Twitter, Facebook, and the comments section of Medium are all mediated marketplaces and thus cannot adopt Tim’s first suggestion — that they cease intervening — because they are their policies and mechanisms of intervention.

That’s why I appreciate that towards the end Tim wonders, “Should we accept a transactional market frame in the first place?” Even though I think the disanalogies are strong, I will repeat Tim’s main point because I think it is indeed a very useful framing:

…free economies of ideas are frequently efficient, and functional. But, like economic marketplaces, they are susceptible to persistent recessions and bad, self-reinforcing equilibria that require systemic intervention at critical junctures.

I like this because it places responsibility — and agency — on those providing a marketplace of ideas. If your conversational space isn’t working, it’s your fault. Fix it.

And, yes, it’d be so worth the effort for us to better understand how.


Categories: cluetrain Tagged with: conversation • facebook • markets • twitter Date: February 18th, 2017 dw


November 27, 2016

Fake news sucks but isn't the end of civilization

Because fake news only works if it captures our attention, and because presenting ideas that are outside the normal range is a very effective way to capture our attention, fake news will, with some inevitability, tend to present extreme positions.

Real news often uses the same technique these days: serious news stories will have clickbait headlines. Clickbait, whether fake or real, thus tends to make us think that the world is full of extremes. The normal doesn’t seem very normal any more.

Of course, clickbait is nothing new. Tabloids have been using it forever. For the past thirty years, in the US, local TV stations have featured the latest stabbing or fire as the lead story on the news. (This is usually said to have begun in Miami, and is characterized as “If it bleeds, it leads,” i.e., it is the first item in the news broadcast.)

At the same time, however, the Internet makes it easier than ever to find news that doesn’t simply try to set our nerves on fire. Fact checking abounds, at sites dedicated to the task and as one of the most common of distributed Internet activities. Even while we form echo chambers that reinforce our beliefs, we are also more likely than ever before to come across contrary views. Indeed, I suspect (= I have no evidence) that one reason we seem so polarized is that we can now see the extremities of belief that have always been present in our culture — extremities that in the age of mass communication were hidden from us.

Now that there are economic reasons to promulgate fake news — you can make a good living at it — we need new mechanisms to help us identify it, just as the rise of “native advertising” (= ads that pose as news stories) has led to new norms about letting the reader know that they’re ads. The debate we’re currently having is the discussion that leads to new techniques and norms.

Some of the most important techniques can best be applied by the platforms through which fake news promulgates. We need to press those platforms to do the right thing, even if it means a marginal loss of revenues for them. The first step is to stop them from thinking, as I believe some of them genuinely do, that they are mere open platforms that cannot interfere with what people say and share on them. Baloney. As Zeynep Tufekci, among others, has repeatedly pointed out, these platforms already use algorithms to decide which items to show us from the torrent of possibilities. Because the major Western platforms genuinely hold to democratic ideals, they may well adjust their algorithms to achieve better social ends. I have some hope about this.

Just as with spam, “native advertising,” and popup ads, we are going to have to learn to live with fake news both by creating techniques that prevent it from being as effective as it would like to be and by accepting its inevitability. If part of this is that we learn to be more “meta” — not accepting all content at its face value — then fake news will be part of our moral and intellectual evolution.


Categories: journalism, politics Tagged with: clickbait • facebook • fake news • platforms • twitter Date: November 27th, 2016 dw


February 15, 2016

What Facebook should learn from this "Colonialism" debacle

DigitalTrends has posted my post about the reaction to Marc Andreessen’s response to India’s saying No to Free Basics, the Facebook version of the Internet. Andreessen’s framing it in terms of colonialism was — unfortunately for him — all too apt.


Categories: culture, net neutrality Tagged with: facebook • india Date: February 15th, 2016 dw


November 6, 2015

My odd talk on Monday

The Emerson Engagement Lab (of which I am a fan) is having me in for a talk that is apparently open to the public on Monday at 2pm. I’m talking to Paul Mihailidis’ course in Emerson’s Greene Theater about whether and how we’ve managed to let the Internet become just yet another mass medium, or possibly the Worst. Mass Medium. Ever. I’ll be talking about why my aging cohort had such high hopes for the Net, how well the Argument from Architecture has held up, and why I am not quite as depressed as most of my friends.

This is an odd talk in part because I’m not using slides or notes. That changes things. For the better? Well, there are reasons why people use slides and why people like me, who only have three remaining neurons devoted to memory, use notes.

Has anyone seen my keys?

 


Meanwhile, I’m looking forward to talking with Penn State’s Center for Humanities and Information this afternoon. I’m giving a talk about our changing ideas about how the future works, but I believe there will be lots of time for conversation.


Categories: internet, policy Tagged with: facebook • optimism • technodeterminism Date: November 6th, 2015 dw


September 25, 2015

Facebook now 0.1% less Orwellian

We should be grateful that Facebook has renamed its Internet access service from Internet.org to Free Basics by Facebook. The idea is that if you’re in the developing world, you’ll get access to the “Internet” which is really access to Facebook and all that it permits.

Calling that arrangement “Internet.org” was as Orwellian as marketing gets, like advertising Snickers as a “lunch bar.” No no no. A Snickers bar may be delicious, and may even give you enough of a burst of energy that for the final fifteen seconds of your Powerpoint presentation at the weekly status meeting you have an overbearing confidence that alienates your boss’s boss who happens to have dropped by, dooming your long-term prospects at that company, but it is not lunch. It lacks all the essential properties of lunch, even if you may at some point eat one because you forgot your lunch and your wallet and have no friends who will share with you.

The Facebook service is to the Internet as Snickers is to lunch: a poor replacement that lacks all of the essential elements that make a lunch a lunch and the Internet the Internet.

The new name has the advantage of sounding like a hypoallergenic mascara that’s hired Christie Brinkley as its spokesmodel.


Categories: marketing Tagged with: facebook Date: September 25th, 2015 dw


May 7, 2015

Facebook, filtering, polarization, and a flawed study?

Facebook researchers have published an article in Science, certainly one of the most prestigious peer-reviewed journals. It concludes (roughly) that Facebook’s filtering out of news from sources whose politics you disagree with does not cause as much polarization as some have thought.

Unfortunately, a set of researchers clustered around the Berkman Center think that the study’s methodology is deeply flawed, and that its conclusions badly misstate the actual findings. Here are three responses well worth reading:

  • Christian Sandvig

  • Zeynep Tufekci

  • Eszter Hargittai

Also see Eli Pariser‘s response.


Categories: echo chambers Tagged with: echo chambers • facebook • politics • research Date: May 7th, 2015 dw


December 27, 2014

Oculus Thrift

I just received Google’s Oculus Rift emulator. Given that it’s made of cardboard, it’s all kinds of awesome.

Google Cardboard is a poke in Facebook’s eyes. FB bought Oculus Rift, the virtual reality headset, for $2B. Oculus hasn’t yet shipped a product, but its prototypes are mind-melting. My wife and I tried one last year at an Israeli educational tech lab, and we literally had to have people’s hands on our shoulders so we wouldn’t get so disoriented that we’d swoon. The Lab had us on a virtual roller coaster, with the ability to turn our heads to look around. It didn’t matter that it was an early, low-resolution prototype. Swoon.

Oculus is rumored to be priced at around $350 when it ships, and they will sell tons at that price. Basically, anyone who tries one will be a customer or will wish s/he had the money to be a customer. Will it be confined to game players? Not a chance on earth.

So, in the midst of all this justifiable hype about the Oculus Rift, Google announced Cardboard: detailed plans for how to cut out and assemble a holder for your mobile phone that positions it in front of your eyes. The Cardboard software divides the screen in two and creates a parallaxed view so you think you’re seeing in 3D. It uses your mobile phone’s motion sensors to track the movement of your head as you survey your synthetic domain.

I took a look at the plans for building the holder and gave up. For $15 I instead ordered one from Unofficial Cardboard.

When it arrived this morning, I took it out of its shipping container (made out of cardboard, of course), slipped in my HTC mobile phone, clicked on the Google Cardboard software, chose a demo, and was literally — in the virtual sense — flying over the earth in any direction I looked, watching a cartoon set in a forest that I was in, or choosing YouTube music videos by turning to look at them on a circular wall.

Obviously I’m sold on the concept. But I’m also sold on the pure cheekiness of Google’s replicating the core functionality of the Oculus Rift with existing technology and a piece of cardboard.

(And, yeah, I’m a little proud of the headline.)


Categories: misc, science Tagged with: facebook • games • google • vr Date: December 27th, 2014 dw


November 29, 2014

Before Facebook, there was DeanSpace

Here’s a four-minute video from July 13, 2003, of Zack Rosen describing the social networking tool he and his group were building for the Howard Dean campaign. DeanSpace let Dean supporters connect with one another on topics, form groups, and organize action. This was before Facebook, remember.

This comes from Lisa Rein’s archive. I’m sorry to say that I’ve lost touch with Lisa, so I hope she’s ok with my uploading this to YouTube. The talk itself was part of iLaw 2003, an event put on every couple of years or so by the Berkman Center and Harvard Law.

(I think that’s Aaron Swartz sitting in front.)


Categories: politics, social media Tagged with: democracy • facebook • howard dean • politics • social networks Date: November 29th, 2014 dw


August 22, 2014

The social Web before social networks: a report from 2003

The Web was social before it had social networking software. It just hadn’t yet evolved a pervasive layer of software specifically designed to help us be social.

In 2003 it was becoming clear that we needed — and were getting — a new class of application, unsurprisingly called “social software.” But what sort of sociality were we looking for? What sort could such software bestow?

That was the theme of Clay Shirky’s 2003 keynote at the ETech conference, the most important gathering of Web developers of its time. Clay gave a brilliant talk, “A Group Is Its Own Worst Enemy,” in which he pointed to an important dynamic of online groups. I replied to him at the same conference (“The Unspoken of Groups”). This was a year before Facebook launched. The two talks, especially Clay’s, serve as reminders of what the Internet looked like before social networks.

Here’s what for me was the take-away from these two talks:

The Web was designed to connect pages. People, being people, quickly created ways for groups to form. But there was no infrastructure for connecting those groups, and your participation in one group did nothing to connect you to your participation in another group. By 2003 it was becoming obvious (well, to people like Clay) that while the Internet made it insanely easy to form a group, we needed help — built into the software, but based on non-technological understanding of human sociality — sustaining groups, especially now that everything was scaling beyond imagination.

So this was a moment when groups were increasingly important to the Web, but they were failing to scale in two directions: (1) a social group that gets too big loses the intimacy that gives it its value; and (2) groups were proliferating, but each was essentially disconnected from every other group.

Social software was the topic of the day because it tried to address the first problem by providing better tools. But not much was addressing the second problem, for that is truly an infrastructural issue. Tim Berners-Lee’s invention of the Web let the global aggregation of online documents scale by creating an open protocol for linking them. Mark Zuckerberg addressed the issue of groups scaling by creating a private company, with deep consequences for how we are together online.


Clay’s 2003 analysis of the situation is awesome. What he (and I, of course) did not predict was that a single company would achieve the position of de facto social infrastructure.


When Clay gave his talk, “social software” was all the rage, as he acknowledges in his very first line. He defines it uncontroversially as “software that supports group interaction.” The fact that social software needed a definition already tells you something about the state of the Net back then. As Clay said, the idea of social software was “rather radical” because “Prior to the Internet, the last technology that had any real effect on the way people sat down and talked together was the table,” and even the Internet so far was not doing a great job supporting sociality at the group level.

He points out that designers of social software are always surprised by what people do with their software, but thinks there are some patterns worth attending to. So he divides his talk into three parts: (1) pre-Internet research that explains why groups tend to become their own worst enemy; (2) the “revolution in social software” that makes this worth thinking about; and (3) “about a half dozen things…that I think are core to any software that supports larger, long-lived groups.”

Part 1 uses the research of W.R. Bion from his 1961 book, Experiences in Groups, which leads him, and Clay, to conclude that because groups have a tendency to sandbag “their sophisticated goals with…basic urges,” groups need explicit formulations of acceptable behaviors. “Constitutions are a necessary component of large, long-lived, heterogenous groups.”

Part 2 asks: if this has been going on for a long time, why is it so important now? “I can’t tell you precisely why, but observationally there is a revolution in social software going on. The number of people writing tools to support or enhance group collaboration or communication is astonishing.”

The Web was getting very, very big by 2003, and Clay says that “we blew past” the “interesting scale of small groups.” Conversation doesn’t scale.

“We’ve gotten weblogs and wikis, and I think, even more importantly, we’re getting platform stuff. We’re getting RSS. We’re getting shared Flash objects. We’re getting ways to quickly build on top of some infrastructure we can take for granted, that lets us try new things very rapidly.”

Why did it take so long to get weblogs? The tech was ready from the day we had Mosaic, Clay says. “I don’t know. It just takes a while for people to get used to these ideas.” But now (2003) we’re fully into the fully social web. [The social nature of the Web was also a main theme of The Cluetrain Manifesto in 2000.]

What did this look like in 2003, beyond blogs and wikis? Clay gives an extended, anecdotal example. He was on a conference call with Joi Ito, Peter Kaminski, and a few others. Without planning to, the group started using various modalities simultaneously. Someone opened a chat window, and “the interrupt logic” got moved there. Pete opened a wiki and posted its URL into the chat. The conversation proceeded along several technological and social forms simultaneously. Of course this is completely unremarkable now. But that’s the point. It was unusual enough that Clay had to carefully describe it to a room full of the world’s leading web developers. It was a portent of the future:

This is a broadband conference call, but it isn’t a giant thing. It’s just three little pieces of software laid next to each other and held together with a little bit of social glue. This is an incredibly powerful pattern. It’s different from: Let’s take the Lotus juggernaut and add a web front-end.

Most important, he says, access is becoming ubiquitous. Not uniformly, of course. But it’s a pattern. (Clay’s book Here Comes Everybody expands on this.)

In Part 3, he asks: “‘What is required to make a large, long-lived online group successful?’ and I think I can now answer with some confidence: ‘It depends.’ I’m hoping to flesh that answer out a little bit in the next ten years.” He suggests we look for the pieces of social software that work, given that “The normal experience of social software is failure.” He suggests that if you’re designing social software, you should accept three things:

  1. You can’t separate the social from the technical.
  2. Groups need a core that watches out for the well-being of the group itself.
  3. That core “has rights that trump individual rights in some situations.” (In this section, Clay refers to Wikipedia as “the Wikipedia.” Old timer!)

Then there are four things social software creators ought to design for:


  1. Provide for persistent identities so that reputations can accrue. These identities can of course be pseudonyms.
  2. Provide a way for members’ good work to be recognized.
  3. Put in some barriers to participation so that the interactions become high-value.
  4. As the site’s scale increases, enable forking, clustering, useful fragmentation.

Clay ends the talk by reminding us that: “The users are there for one another. They may be there on hardware and software paid for by you, but the users are there for one another.”

This is what “social software” looked like in 2003 before online sociality was largely captured by a single entity. It is also what brilliance sounds like.


I gave an informal talk later at that same conference. I spoke extemporaneously and then wrote up what I should have said. My overall point was that one reason we keep making the mistake that Clay points to is that groups rely so heavily on unspoken norms. Making those norms explicit, as in a group constitution, can actually do violence to the group — not knife fights among the members, but damage to the groupiness of the group.

I said that I had two premises: (1) groups are really, really important to the Net; and (2) “The Net is really bad at supporting groups.”

It’s great for letting groups form, but there are no services built in for helping groups succeed. There’s no agreed-upon structure for representing groups. And if groups are so important, why can’t I even see what groups I’m in? I have no idea what they all are, much less can I manage my participation in them. Each of the groups I’m in is treated as separate from every other.

I used Friendster as my example “because it’s new and appealing.” (Friendster was an early social networking site, kids. It’s now a gaming site.) Friendster suffers from having to ask us to make explicit the implicit stuff that actually matters to friendships, including writing a profile describing yourself and having to accept or reject a “friend me” request. “I’m not suggesting that Friendster made a poor design decision. I’m suggesting that there is no good design decision to be made here.” Making things explicit often does violence to them.

That helps explain why we keep making the mistake Clay points to. Writing a constitution requires a group to make explicit decisions that often break the group apart. Worse, I suggest, groups can’t really write a constitution “until they’ve already entangled themselves in thick, messy, ambiguous, open-ended relationships,” for “without that thicket of tangles, the group doesn’t know itself well enough to write a constitution.”

I suggest that there’s hope in social software if it is considered to be emergent, rather than relying on users making explicit decisions about their sociality. I suggested two ways it can be considered emergent: “First, it enables social groups to emerge. It goes not from implicit to explicit, but from potential to actual.” Second, social software should enable “the social network’s shape to emerge,” rather than requiring up-front (or, worse, top-down) provisioning of groups. I suggest a platform view, much like Clay’s.

I, too, ask why social software was a buzzword in 2003. In part because the consultants needed a new topic, and in part because entrepreneurs needed a new field. But perhaps more important (I suggested), recent experience had taught us to trust that we could engage in bottom-up sociality without vandals ripping it all apart. This came on the heels of companies realizing that the first-generation top-down social software (e.g., Lotus Notes) was stifling as much sociality and creativity as it was enabling. But our experience with blogs and wikis over the prior few years had been very encouraging:

Five years ago, it was obvious beyond question that groups need to be pre-structured if the team is to “hit the ground running.” Now, we have learned — perhaps — that many groups organize themselves best by letting the right structure emerge over time.

I end on a larger, vaguer, and wrong-er point: “Could we at last be turning from the great lie of the Age of Computers, that the world is binary?” Could we be coming to accept that the “world is ambiguous, with every thought, perception and feeling just a surface of an unspoken depth?”

Nah.


Categories: cluetrain, culture, social media Tagged with: clay shirky • facebook • friendster • history • old-timer • social media • social text Date: August 22nd, 2014 dw



