Tarleton Gillespie of Cornell is giving a Berkman lunchtime talk on the politics of online media. He’s been interested in how we are shaping cultural discourse through the confluence of tech, policy, economics, etc. Today he wants to look at how social platforms are shaping social discourse.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.
He begins with YouTube’s announcement in Dec. 2008 that they’re going to become more conservative in blocking offensive videos: removing some, moving some behind an age firewall, and algorithmically demoting some so they won’t appear on the most popular lists. This combines traditional tactics with newfangled technical management of where things appear. We don’t really have a language for how these sorts of innovations work.
He asks: How do we take the tradition of asking questions about how commercial providers shape the public discourse and apply it to new media, given that these providers, especially the most prominent ones, play a role in determining what ends up online, viewed, and possible? Three differences in how online media work: 1. Emphasis on user-generated content. 2. Gatekeeping or comprehensiveness? E.g., Google wants comprehensiveness for Google Books. That changes why they would include or exclude. 3. They cater “to active niche communities, trying to produce a coherent site, consistent brand, and commodifiable audience.”
“How do these sites promise to be everything and not everything at the same time?” How have they cultivated the notion that they provide everything in a neutral manner? How do they intervene in what they provide? “What obligations are we willing to impose, to protect free speech and ensure a healthy public discourse?”
What about the promises they make that make them appear neutral? How do they articulate their services and sell themselves to the various stakeholders, setting the terms for how they’re judged? Part of the answer: They use the term “platform.” “The role this term plays is indicative of the type of positioning a youtube, facebook or flickr would like to establish.” These terms are carefully chosen and carefully massaged. Why has this term fit so comfortably in these sites’ characterizations, and why have we accepted it? E.g., before being bought by Google, YouTube referred to itself as a service and a community. Afterward, it became a “platform.” The term draws on the computational meme: an infrastructure on which tools can be built. Marc Andreessen disagrees because you can’t build tools for it. [This is the original geeky meaning, but its meaning has shifted, IMO – dw] It also has a political meaning. And architectural. All these meanings help the term resonate. There are a series of connotations that are powerful in this term: an open space, egalitarian, wide, limitless, facilitating something of value.
The term “platform” manages the conflicts among stakeholders for YouTube. For users, it’s a platform from which to be heard. For advertisers, it’s a platform of opportunity. For media partners, it’s a distribution platform. For lawmakers, it’s a fragile, valuable platform that enables free speech. When they are talking about liability, they are merely a platform. Structurally, “platform” is not unlike “conduit.” [Hmm. I think that for advertisers, YT is a platform in that it’s an open space where millions of users come together. – dw]
So, how do you begin to find a language for the techniques and justifications online media offer about what belongs on their site and what doesn’t? Facebook, YouTube, and Flickr adopt different strategies. YouTube maintains that it doesn’t look at content proactively but only when its users flag it. But it does look for spam and is obliged to look for child porn [actually, I think they are not required to proactively search out child porn — dw]. YouTube has a figure-8 model of community governance: Users flag content. Users can comment on the guidelines. Users can game the system, but YouTube can decide which flags to ignore. Users can complain about being flagged. So, while YouTube positions itself as non-interventionist, it actually isn’t. It says it’s defending the community according to the community’s norms, but those norms have been crafted by YT in accordance with its legal and economic interests.
YT’s algorithmic demotion of videos manages their front page. They don’t want it to look like a soft core porn site; those videos are there, but it’s not their image. Flickr does this carefully as well. Their front page tells you that this is a site for landscape photos, and birds, and arty shots. Amazon’s best seller list excludes “adult” literature. Not to mention (he says) Amazon’s removing from the Kindle a copy of 1984 that was posted in violation of copyright; what seems like ours isn’t really.
[I’m doing a terrible job capturing the questions. Basically, I just couldn’t hear the first couple. Sorry!]
Q: [wendy] Platforms vs. intermediaries. “The lawyers tend to talk about intermediary liability or immunity, whereas economists talk more about platforms.”
A: Intermediaries such as ISPs have a different set of protections. YouTube wants the protections but doesn’t fit neatly into that definition. Viacom calls YT a “distributor.”
Q: Couldn’t these platforms get out of the dilemma by providing curated and uncurated versions?
A: Flickr comes closer to that. It tries to have it all but not be visible about having it all. They have a “Porn is in the back” approach.
[me] We’re in a confusing time. We’ve invented new things that don’t fit the old vocabulary perfectly. What should we do about the lack of a vocabulary? Invent a new one? Be vigilant about understanding how people use terms?
A: All vocabularies are strategic. We should unpack the terms and recognize that they’re doing work, and that the connotations matter. E.g., issues of liability depend on whether we see them as intermediaries or distributors. Is it about imposing a new vocabulary? Or maintaining vigilance? I’m torn between the impulses in those directions.
Q: We had a system that had user ratings. We had a “text jockey” looking at messages 24/7. We call them global ratings vs. contextual ratings. It’s gotten very, very complex. E.g., cleavage photos have to have a head included.
Q: [jodi] How has the near real time feedback influenced accountability/exposure of algorithms and decisions? The Twitter/#amazonfail incident, for instance. Amazon was faced with a decision to respond or not, and then further faced with a decision of what to do about the allegation.
A: The Amazon FAIL revealed what was going on all along. Now the reaction can be faster, is more public.
[Missed some more questions because my hearing is getting worse.]