Yet another brilliant post by Ethan. (I think I’m going to turn that into a keyboard macro. I’ll just have to type ^EthanTalk and that opening sentence will get filled in.) It’s a reflection on the reaction to his piece in the Atlantic about advertising as the Net’s original sin, and the focus on his “confession” that he wrote the code for the Net’s first popup ad.
But I think I actually disagree with one of his key points. In other words, I’m very likely wrong. Nevertheless…
Ethan explains why the Net has come to rely on advertising money:
We had a failure of imagination. And the millions of smart young programmers and businesspeople spending their lives trying to get us to click on ads are also failing to imagine something better. We’re all starting from the same assumptions: everything on the internet is free, we pay with our attention, and our attention is worth more if advertisers know more about who we are and what we do, we start business with money from venture capitalists who need businesses to grow explosively if they’re going to make money.
He recommends that we question our assumptions so we can come up with more imaginative solutions.
I agree with Ethan’s statement of the problem, and admire his ability to put it forward with such urgency. But it seems to me that the problem is less a failure of imagination than the sheer power of incumbent systems. Is access to the Net in exactly the wrong hands because of a failure to imagine a better way, or because of the structural corruption of capitalism? Similarly, why are we failing to slow global warming in any appreciable way? (Remember when Pres. Reagan took down the solar panels Pres. Carter had installed on the White House?) Why are elections still disproportionately determined by the wealthy? In each of these cases, imagination has lost to entrenched systems. We had innovative ways of accessing the Net, we’ve had many great ideas for slowing global warming, we have had highly imaginative attempts to get big money out of politics, and they all failed to one degree or another. Thuggish systems steal great ideas’ lunch money. Over and over and over.
Ethan of course recognizes this. But he ties these failures to failures of the imagination when one could just as well conclude that imagination is no match for corrupt systems — especially since we’ve now gone through a period when imagination was unleashed with a force never before seen, and yet the fundamental systems haven’t budged. This seems to be Larry Lessig’s conclusion, since he moved from Creative Commons — an imaginative, disruptive approach — to a super PAC that plays on the existing field, but plays for the Good Guys ‘n’ Gals.
Likewise, one could suggest that the solution — if there is one — is not more imagination, but more organizing. More imagination will only work if the medium still is pliable. Experience suggests it never was as pliable as some of us thought.
But the truth is that I really don’t know. I don’t fully believe the depressing “bad thugs beat good ideas” line I’ve just adumbrated. I certainly agree that it’s turning out to be much harder to overturn the old systems than I’d thought twenty or even five years ago. But I also think that we’ve come much further than we often realize. I take it as part of my job to remind people of that, which is why I am almost always on the chirpier side of these issues. And I certainly think that good ideas can be insanely disruptive, starting with the Net and the Web, and including Skype, eBay, Open Source, maps and GPS, etc.
So, while I don’t want to pin the failure of the Net on our failure of imagination, I also still have hope that bold acts of imagination can make progress, that our ability to iterate at scale can create social formations that are new in the world, and that this may be a multi-generational fight.
I therefore come out of Ethan’s post with questions: (1) What about this age made it possible even to think that imagination could disrupt our most entrenched systems? (2) What makes some ideas effectively disruptive, and why do other equally imaginative good ideas fail? And what about unimaginative ideas that make a real difference? The Montgomery bus boycott was not particularly imaginative, but it sure packed a wallop. (3) What can we do to make it easier for great acts of imagination to become real?
For me, #1 has to do with the Internet. (Color me technodeterminist.) I don’t have anything worthwhile to say about #2. And I still have hope that the answer to #3 has something to do with the ability of billions of people to make common cause — and, more powerfully, to iterate together — over the Net. Obviously #3 also needs regulatory reform to make sure the Internet remains at least a partially open ecosystem.
So, I find myself in deep sympathy with the context of what Ethan describes so well and so urgently. But I don’t find the rhetoric of imagination convincing.
This week there were two out-of-the-park posts by Berkman folk: Ethan Zuckerman on advertising as the Net’s original sin, and Zeynep Tufekci on the power of the open Internet as demonstrated by coverage of the riots in Ferguson. Each provides a view on whether the Net is a failed promise. Each is brilliant and brilliantly written.
Zeynep on Ferguson
Zeynep, who has written with wisdom and insight on the role of social media in the Turkish protests (e.g., here and here), looks at how Twitter brought the Ferguson police riots onto the national agenda and how well Twitter “covered” them. But those events didn’t make a dent in Facebook’s presentation of news. Why? she asks.
Twitter is an open platform where anyone can post whatever they want. It therefore reflects our interests — although no medium is a mere reflection. FB, on the other hand, uses algorithms to determine what it thinks our interests are … except that its algorithms are actually tuned to get us to click more so that FB can show us more ads. (Zeynep made that point about an early and errant draft of my CNN.com commentary on the FB mood experiment. Thanks, Zeynep!) She uses this to make an important point about the Net’s value as a medium whose agenda is not set by commercial interests. She talks about this as “Net Neutrality,” extending it from its usual application to the access providers (Comcast, Verizon and their small handful of buddies) to those providing important platforms such as Facebook.
She concludes (but please read it all!):
How the internet is run, governed and filtered is a human rights issue.
And despite a lot of dismal developments, this fight is far from over, and its enemy is cynicism and dismissal of this reality.
Don’t let anyone tell you otherwise.
What happens to #Ferguson affects what happens to Ferguson.
Yup yup yup. This post is required reading for all of the cynics who would impress us with their wake-up-and-smell-the-shitty-coffee pessimism.
Ethan on Ads
Ethan cites a talk by Maciej Ceglowski for the insight that “we’ve ended up with surveillance as the default, if not sole, internet business model.” Says Ethan,
I have come to believe that advertising is the original sin of the web. The fallen state of our Internet is a direct, if unintentional, consequence of choosing advertising as the default model to support online content and services.
Since Internet ads are more effective as a business model than as an actual business, companies are driven ever more frantically to gather customer data in order to hold out the hope of making their ads more effective. And there went out privacy. (This is a very rough paraphrase of Ethan’s argument.)
Ethan pays more than lip service to the benefits — promised and delivered — of the ad-supported Web. But he points to four rather devastating drawbacks, include the distortions caused by algorithmic filtering that Zeynep warns us about. Then he discusses what we can do about it.
I’m not going to try to summarize any further. You need to read this piece. And you will enjoy it. For example, betcha can’t guess who wrote the code for the world’s first pop-up ads. Answer: Ethan.
Also recommended: Jeff Jarvis’ response and Mathew Ingram’s response to both. I myself have little hope that advertising can be made significantly better, where “better” means being unreservedly in the interests of “consumers” and sufficiently valuable to the advertisers. I’m of course not confident about this, and maybe tomorrow someone will come up with the solution, but my thinking is based on the assumption that the open Web is always going to be a better way for us to discover what we care about because the native building material of the Web is in fact what we find mutually interesting.
Read both these articles. They are important contributions to understanding the Web We Want.
Categories: echo chambers
• net neutrality
• open access
• social media
Tagged with: advertising
• net neutrality
• social media
Date: August 15th, 2014 dw
The debate over whether municipalities should be allowed to provide Internet access has been heating up. Twenty states ban it. Tom Wheeler, the chair of the FCC, has said he wants to “preempt” those laws. Congress is maneuvering to extend the ban nationwide.
Jim Baller, who has been writing about the laws, policies, and economics of network deployment for decades, has found an eerie resonance between this contemporary debate and an earlier one. Here’s a scan of the table of contents of a 1906 (yes, 1906) issue of Moody’s that features a symposium on “Municipal Ownership and Operation.”
The Moody’s articles are obviously not talking about the Internet. They’re talking about the electric grid.
In a 1994 (yes, 1994) article published just as the Clinton administration (yes, Clinton) was developing principles for the deployment of the “information superhighway,” Jim wrote that if we want the far-reaching benefits foreseen by the National Telecommunications and Information Administration (and they were amazingly prescient (but why can’t I find the report online??)), then we ought to learn four things from the deployment of the electric grid in the 1880s and 1890s:
First, the history of the electric power industry teaches that one cannot expect private profit-maximizing firms to provide “universal service” or anything like it in the early years (or decades) of their operations, when the allure of the most profitable markets is most compelling.
Second, the history of the electric power industry teaches that opening the doors to anyone willing to provide critical public services can be counterproductive and that it is essential to watch carefully the growth of private firms that enter the field. If such growth is left unchecked, the firms may become so large and complex that government institutions can no longer control or even understand them. Until government eventually catches up, the public may suffer incalculable injury.
Third, the history of the electric power industry teaches that monopolists will use all means available to influence the opinions of lawmakers and the public in their favor and will sometimes have frightening success.
Fourth, and most important, the history of the electric power industry teaches that the presence or threat of competition from the public sector is one of the best and surest ways to secure quality service and reasonable prices from private enterprises involved in the delivery of critical public services.
Learn from history? Repeat it? Or intervene as citizens to get the history we want? I’ll take door number 3, please.
Here’s a fantastic 11-minute video from Vi Hart that explains Net Neutrality and more.
Categories: net neutrality
Tagged with: explainers
• net neutrality
Date: May 8th, 2014 dw
I just posted at Medium.com about why it’s important to remember the difference between the Net and the Web. Here’s the beginning:
A note to NPR and other media that have been reporting on “the 25th anniversary of the Internet”: NO, IT’S NOT. It’s the 25th anniversary of the Web. The Internet is way older than that. And the difference matters.
The Internet is a set of protocols — agreements — about how information will be sliced up, sent over whatever media the inter-networked networks use, and reassembled when it gets there. The World Wide Web uses the Internet to move information around. The Internet by itself doesn’t know or care about Web pages, browsers, or the hyperlinks we’ve come to love. Rather, the Internet enables things like the World Wide Web, email, Skype, and much much more to be specified and made real. By analogy, the Internet is like an operating system, and the Web, Skype, and email are like applications that run on top of it.
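The slicing-and-reassembly idea can be sketched in a few lines of Python. This is a toy model, not a real protocol implementation: it just numbers the chunks so they can arrive in any order and still be put back together, which is the core trick the Internet’s protocols perform underneath everything the Web does.

```python
# Toy model of packet switching: slice a message into numbered packets,
# deliver them out of order, and reassemble by sequence number.
def to_packets(message, size=4):
    """Split a message into (sequence_number, chunk) pairs."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort by sequence number and rejoin the chunks."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("Hello, World Wide Web!")
packets.reverse()  # simulate out-of-order arrival
print(reassemble(packets))  # Hello, World Wide Web!
```

The Web, email, and Skype all ride on top of this kind of machinery without having to care how the slicing and routing actually happen.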
This is not a technical quibble. The difference between the Internet and the Web matters more than ever for at least two reasons.
Continued at Medium.com…
Categories: net neutrality
Tagged with: internet
Date: March 15th, 2014 dw
William McGeveran [twitter:BillMcGev] has written an article for the University of Minnesota Law School that suggests how to make “frictionless sharing” well-behaved. He defines frictionless sharing as disclosing “individuals’ activities automatically, rather than waiting for them to authorize a particular disclosure.” For example:
…mainstream news websites, including the Washington Post, offer “social reading” applications (“apps”) in Facebook. After a one- time authorization, these apps send routine messages through Facebook to users’ friends identifying articles the users view.
Bill’s article considers the pros and cons:
Social media confers considerable advantages on individuals, their friends, and, of course, intermediaries like Spotify and Facebook. But many implementations of frictionless architecture have gone too far, potentially invading privacy and drowning useful information in a tide of meaningless spam.
Bill is not trying to build walls. “The key to online disclosures … turns out to be the correct amount of friction, not its elimination.” To assess what constitutes “the correct amount” he offers an heuristic, which I am happy to call McGeveran’s Law of Friction: “It should not be easier to ‘share’ an action online than to do it.” (Bill does not suggest naming the law after him! He is a modest fellow.)
One of the problems with the unintentional sharing of information is “misclosures,” a term he attributes to Kelly Caine.
Frictionless sharing makes misclosures more likely because it removes practical obscurity on which people have implicitly relied when assessing the likely audience that would find out about their activities. In other words, frictionless sharing can wrench individuals’ actions from one context to another, undermining their privacy expectations in the process.
Not only does this reveal, say, that you’ve been watching Yoga for Health: Depression and Gastrointestinal Problems (to use an example from Sen. Franken that Bill cites), it reveals that fact to your most intimate friends and family. (In my case, the relevant example would be The Amazing Race, by far the worst TV I watch, but I only do it when I’m looking for background noise while doing something else. I swear!) Worse, says Bill, “preference falsification” — our desire to have our known preferences support our social image — can alter our tastes, leading to more conformity and less diversity in our media diets.
Bill points to other problems with making social sharing frictionless, including reducing the quality of information that scrolls past us, turning what could be a useful set of recommendations from friends into little more than spam: “…friends who choose to look at an article because I glanced at it for 15 seconds probably do not discover hidden gems as a result.”
Bill’s aim is to protect the value of intentionally shared information; he is not a hoarder. McGeveran’s Law thus tries to add in enough friction that sharing is intentional, but not so much that it gets in the way of that intention. For example, he asks us to imagine Netflix presenting the user with two buttons: “Play” and “Play and Share.” Sharing thus would require exactly as much work as playing, thus satisfying McGeveran’s Law. But having only a “Play” button that then automatically shares the fact that you just watched Dumb and Dumberer distinctly fails the Law because it does not “secure genuine consent.” As Bill points out, his Law of Friction is tied to the technology in use, and thus is flexible enough to be useful even as the technology and its user interfaces change.
I like it.
Tagged with: privacy
• programming the social
Date: January 12th, 2014 dw
Noam Chomsky and Barton Gellman were interviewed at the Engaging Big Data conference put on by MIT’s Senseable City Lab on Nov. 15. When Prof. Chomsky was asked what we can do about government surveillance, he reiterated his earlier call for us to understand the NSA surveillance scandal within an historical context that shows that governments always use technology for their own worst purposes. According to my liveblogging (= inaccurate, paraphrased) notes, Prof. Chomsky said:
Governments have been doing this for a century, using the best technology they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe this is universal. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications. [Emphasis added, of course.]
I was glad that Barton Gellman — hardly an NSA apologist — called Prof. Chomsky on his lumping of the NSA with the Stasi, for there is simply no comparison between the freedom we have in the US and the thuggish repression omnipresent in East Germany. But I was still bothered, albeit by a much smaller point. I have no serious quarrel with Prof. Chomsky’s points that government incursions on rights are nothing new, and that governments generally (always?) believe they are acting for the best of purposes. I am a little bit hung-up, however, on his equivocating on “information.”
Prof. Chomsky is of course right in his implied definition of information. (He is Noam Chomsky, after all, and knows a little more about the topic than I do.) Modern information is often described as a measure of surprise. A string of 100 alternating ones and zeroes conveys less information than a string of 100 bits that are less predictable, for if you can predict with certainty what the next bit will be, then you don’t learn anything from that bit; it carries no information. Information theory lets us quantify how much information is conveyed by streams of varying predictability.
So, when U.S. security folks say they are spying on us for our own security, are they saying literally nothing? Is that claim without meaning? Only in the technical sense of information. It is, in fact, quite meaningful, even if quite predictable, in the ordinary sense of the term “information.”
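The “measure of surprise” idea can be made concrete with Shannon’s entropy formula, H = −Σ p·log₂(p). A minimal sketch in Python (the three distributions below are just illustrative, not drawn from the post):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A perfectly predictable next bit carries no information at all...
print(entropy([1.0]))       # 0.0 bits
# ...a fair coin flip carries exactly one bit per outcome...
print(entropy([0.5, 0.5]))  # 1.0 bit
# ...and a mostly predictable source falls in between.
print(entropy([0.9, 0.1]))  # ~0.47 bits
```

This is the sense in which a fully predictable utterance carries zero information — which, as argued above, is a different question from whether it carries meaning.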
First, Prof. Chomsky’s point that governments do bad things while thinking they’re doing good is an important reminder to examine our own assumptions. Even the bad guys think they’re the good guys.
Second, I disagree with Prof. Chomsky’s generalization that governments always justify surveillance in the name of security. For example, governments sometimes record traffic (including the movement of identifiable cars through toll stations) with the justification that the information will be used to ease congestion. Tracking the position of mobile phones has been justified as necessary for providing swift EMT responses. Governments require us to fill out detailed reports on our personal finances every year on the grounds that they need to tax us fairly. Our government hires a fleet of people every ten years to visit us where we live in order to compile a census. These are all forms of surveillance, but in none of these cases is security given as the justification. And if you want to say that these other forms don’t count, I suspect it’s because it’s not surveillance done in the name of security…which is my point.
Third, governments rarely cite security as the justification without specifying what the population is being secured against; as Prof. Chomsky agrees, that’s an inherent part of the fear-mongering required to get us to accept being spied upon. So governments proclaim over and over what threatens our security: Spies in our midst? Civil unrest? Traitorous classes of people? Illegal aliens? Muggers and murderers? Terrorists? Thus, the security claim isn’t made on its own. It’s made with specific threats in mind, which makes the claim less predictable — and thus more informational — than Prof. Chomsky says.
So, I disagree with Prof. Chomsky’s argument that a government that justifies spying on the grounds of security is literally saying something without meaning. Even if it were entirely predictable that governments will always respond “Because security” when asked to justify surveillance — and my second point disputes that — we wouldn’t treat the response as meaningless but as requiring a follow-up question. And even if the government just kept repeating the word “Security” in response to all our questions, that very act would carry meaning as well, like a doctor who won’t tell you what a shot is for beyond saying “It’s to keep you healthy.” The lack of meaning in the Information Theory sense doesn’t carry into the realm in which people and their public officials engage in discourse.
Here’s an analogy. Prof. Chomsky’s argument is saying, “When a government justifies creating medical programs for health, what they’re saying is meaningless. They always say that! The Nazis said the same thing when they were sterilizing ‘inferiors,’ and Medieval physicians engaged in barbarous [barber-ous, actually – heyo!] practices in the name of health.” Such reasoning would rule out a discussion of whether current government-sponsored medical programs actually promote health. But that is just the sort of conversation we need to have now about the NSA.
Prof. Chomsky’s repeated appeals to history in this interview cover up exactly what we need to be discussing. Yes, both the NSA and the Stasi claimed security as their justification for spying. But far from that claim being meaningless, it calls for a careful analysis of the claim: the nature and severity of the risk, the most effective tactics to ameliorate that threat, the consequences of those tactics on broader rights and goods — all considerations that comparisons to the Stasi and Genghis Khan obscure. History counts, but not as a way to write off security considerations as meaningless by invoking a technical definition of “information.”
Tagged with: information
Date: November 17th, 2013 dw
The sociologist Saskia Sassen is giving a plenary talk at Engaging Data 2013. [I had a little trouble hearing some of it. Sorry. And in the press of time I haven’t had a chance to vet this for even obvious typos, etc.]
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
1. The term Big Data is ambiguous. “Big Data” implies we’re in a technical zone. It becomes a “technical problem,” as when morally challenging technologies are developed by scientists who think they are just dealing with a technical issue. Big Data comes with a neutral charge. “Surveillance” brings in the state, the logics of power, how citizens are affected.
Until recently, citizens could not relate to a map that came out in 2010 showing how much surveillance there is in the US. It was published by the Washington Post, but it didn’t register. 1,271 govt orgs and 1,931 private companies work on programs related to counterterrorism, homeland security and intelligence. There are more than 1 million people with top-secret clearance, and maybe a third are private contractors. In DC and environs, 33 building complexes are under construction or have been built for top-secret intelligence since 9/11. Together they are 22x the size of Congress. Inside these environments, the govt regulates everything. By 2010, DC had 4,000 corporate office buildings that handle classified info, all subject to govt regulation. “We’re dealing with a massive material apparatus.” We should not be distracted by the small individual devices.
Cisco lost 28% of its sales, in part as a result of being tainted by the NSA’s taking of its data. This is alienating citizens and foreign govts. How do we stop this? We’re dealing with a kind of assemblage of technical capabilities, tech firms that sell the notion that for security we all have to be surveilled, and people. How do we get a handle on this? I ask: Are there spaces where we can forget about them? Our messy, nice complex cities are such spaces. All that data cannot be analyzed. (She notes that she did a panel that included the brother of a Muslim who has been indefinitely detained, so now her name is associated with him.)
3. How can I activate large, diverse spaces in cities? How can we activate local knowledges? We can “outsource the neighborhood.” The language of “neighborhood” brings me pleasure, she says.
If you think of institutions, they are codified, and they notice when there are violations. Every neighborhood has knowledge about the city that is different from the knowledge at the center. The homeless know more about rats than the center. Make open access networks available to them into a reverse wiki so that local knowledge can find a place. Leak that knowledge into those codified systems. That’s the beginning of activating a city. From this you’d get a Big Data set, capturing the particularities of each neighborhood. [A knowledge network. I agree! :)]
The next step is activism, a movement. In my fantasy, at one end it’s big city life and at the other it’s neighborhood residents enabled to feel that their knowledge matters.
Q: If local data is being aggregated, could that become Big Data that’s used against the neighborhoods?
A: Yes, that’s why we need neighborhood activism. The politicizing of the neighborhoods shapes the way the knowledge is used.
Q: Disempowered neighborhoods would be even less able to contribute this type of knowledge.
A: The problem is to value them. The neighborhood has knowledge at ground level. That’s a first step of enabling a devalued subject. The effect of digital networks on formal knowledge creates an informal network. Velocity itself has the effect of informalizing knowledge. I’ve compared environmental activists and financial traders. The environmentalists pick up knowledge on the ground. So, the neighborhoods may be powerless, but they have knowledge. Digital interactive open access makes it possible to bring together those bits of knowledge.
Q: Those who control the pipes seem to control the power. How does Big Data avoid the world being dominated by brainy people?
A: The brainy people at, say, Goldman Sachs are part of a larger institution. These institutions have so much power that they don’t know how to govern it. The US govt has been the most powerful in the world, with the result that it doesn’t know how to govern its own power. It has engaged in disastrous wars. So “brainy people” running the world through the Ciscos, etc., I’m not sure. I’m talking about a different idea of Big Data sets: distributed knowledges. E.g., Forest Watch uses indigenous people who can’t write, but they can tell before the trained biologists when there is something wrong in the ecosystem. There’s lots of data embedded in lots of places.
[She’s aggregating questions] Q1: Marginalized neighborhoods live being surveilled: stop and frisk, background checks, etc. Why did it take tapping Angela Merkel’s telephone to bring awareness? Q2: How do you convince policy makers to incorporate citizen data? Q3: There are strong disincentives to being out of the mainstream, so how can we incentivize difference?
A: How do we get the experts to use the knowledge? For me that’s not the most important aim. More important is activating the residents. What matters is that they become part of a conversation. A: About difference: Neighborhoods are pretty average places, unlike forest watchers. And even they’re not part of the knowledge-making circuit. We should bring them in. A: The participation of the neighborhoods isn’t just a utility for the central govt but is a first step toward mobilizing people who have been reduced to thinking that they don’t count. I think this is one of the most effective ways to contest the huge apparatus with the 10,000 buildings.
Tagged with: 2b2k
• big data
Date: November 15th, 2013 dw
I’m at the Engaging Data 2013 conference where Noam Chomsky and Pulitzer Prize winner (twice!) Barton Gellman are going to talk about Big Data in the Snowden Age, moderated by Ludwig Siegele of the Economist. (Gellman is one of the three people Snowden entrusted with his documents.) The conference aims at having us rethink how we use Big Data and how it’s used.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
LS: Prof. Chomsky, what’s your next book about?
NC: Philosophy of mind and language. I’ve been writing articles that are pretty skeptical about Big Data. [Please read the orange disclaimer: I’m paraphrasing and making errors of every sort.]
LS: You’ve said that Big Data is for people who want to do the easy stuff. But shouldn’t you be thrilled as a linguist?
NC: When I got to MIT in 1955, I was hired to work on a machine translation program. But I refused to work on it. “The only way to deal with machine translation at the current stage of understanding was by brute force, which after 30-40 years is how it’s being done.” A principled understanding based on human cognition is far off. Machine translation is useful but you learn precisely nothing about human thought, cognition, language, anything else from it. I use the Internet. Glad to have it. It’s easier to push some buttons on your desk than to walk across the street to use the library. But the transition from no libraries to libraries was vastly greater than the transition from libraries to Internet. [Cool idea and great phrase! But I think I disagree. It depends.] We can find lots of data; the problem is understanding it. And a lot of data around us go through a filter so it doesn’t reach us. E.g., the foreign press reports that Wikileaks released a chapter about the secret TPP (Trans Pacific Partnership). It was front page news in Australia and Europe. You can learn about it on the Net but it’s not news. The chapter was on Intellectual Property rights, which means higher prices for less access to pharmaceuticals, and rams through what SOPA tried to do, restricting use of the Net and access to data.
LS: For you Big Data is useless?
NC: Big data is very useful. If you want to find out about biology, e.g. But why no news about TPP? As Sam Huntington said, power remains strongest in the dark. [approximate] We should be aware of the long history of surveillance.
LS: Bart, as a journalist what do you make of Big Data?
BG: It’s extraordinarily valuable, especially in combination with shoe-leather, person-to-person reporting. E.g., a colleague used traditional reporting skills to get the entire data set of applicants for presidential pardons. Took a sample. More reporting. Used standard analytics techniques to find that white people are 4x more likely to get pardons, and that campaign contributors are also more likely. It would be similarly valuable in urban planning [which is Senseable City Labs’ remit]. But all this leads to more surveillance. E.g., I could make the case that if I had full data about everyone’s calls, I could do some significant reporting, but that wouldn’t justify it. We’ve failed to have the debate we need because of the claim of secrecy by the institutions in power. We become more transparent to the gov’t and to commercial entities while they become more opaque to us.
LS: Does the availability of Big Data and the Internet automatically mean we’ll get surveillance? Were you surprised by the Snowden revelations?
NC: I was surprised at the scale, but it’s been going on for 100 years. We need to read history. E.g., the counter-insurgency “pacification” of the Philippines by the US. See the book by McCoy [maybe this]. The operation used the most sophisticated tech at the time to get info about the population in order to control and undermine them. That tech was immediately used by the US and Britain to control their own populations, e.g., Woodrow Wilson’s Red Scare. Any system of power — the state, Google, Amazon — will use the best available tech to control, dominate, and maximize its power. And they’ll want to do it in secret. Assange, Snowden and Manning, and Ellsberg before them, are doing the duty of citizens.
BG: I’m surprised how far you can get into this discussion without assuming bad faith on the part of the government. For the most part what’s happening is that these security institutions genuinely believe most of the time that what they’re doing is protecting us from big threats that we don’t understand. The opposition comes when they don’t want you to know what they’re doing because they’re afraid you’d call it off if you knew. Keith Alexander said that he wishes that he could bring all Americans into this huddle, but then all the bad guys would know. True, but he’s also worried that we won’t like the plays he’s calling.
LS: Bruce Schneier says that the NSA is copying what Google and Yahoo, etc. are doing. If the tech leads to snooping, what can we do about it?
NC: Govts have been doing this for a century, using the best tech they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe these are universals. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications.
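[Aside: Chomsky’s claim that the security appeal “literally carries no information” is true in Shannon’s technical sense: the self-information of a message with probability p is −log₂ p bits, which is zero when the message is fully predictable. A throwaway Python sketch of that point — my illustration, not anything from the panel:]

```python
import math

def self_information(p: float) -> float:
    """Shannon self-information, in bits, of an event with probability p."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log2(p)

# A surprising message is informative; a fully predictable one is not.
print(self_information(0.5))   # a fair coin flip: 1 bit
print(self_information(1.0))   # "they appealed to security": certain, so 0 bits
```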
BG: The capacity to rationalize may be universal, but you’ll take the conversation off track if you compare what’s happening here to the Stasi. The Stasi were blackmailing people, jailing them, preventing dissent. As a journalist I’d be very happy to find that our govt is spying on NGOs or using this power for corrupt self-enriching purposes.
NC: I completely agree with that, but that’s not the point: The same appeal is made in the most monstrous of circumstances. The freedom we’ve won sharply restricts state power to control and dominate, but they’ll do whatever they can, and they’ll use the same appeals that monstrous systems do.
LS: Aren’t we all complicit? We use the same tech. E.g., Prof. Chomsky, you’re the father of natural language processing, which is used by the NSA.
NC: We’re more complicit because we let them do it. In this country we’re very free, so we have more responsibility to try to control our govt. If we do not expose the plea of security and separate out the parts that might be valid from the vast amount that’s not valid, then we’re complicit, because we have the opportunity and the freedom.
LS: Does it bug you that the NSA uses your research?
NC: To some extent, but you can’t control that. Systems of power will use whatever is available to them. E.g., they use the Internet, much of which was developed right here at MIT by scientists who wanted to communicate freely. You can’t prevent the powers from using it for bad goals.
BG: Yes, if you use a free online service, you’re the product. But if you use a for-pay service, you’re still the product. My phone tracks me and my social network. I’m paying Verizon about $1,000/year for the service, and VZ is now collecting and selling my info. The NSA couldn’t do its job as well if the commercial entities weren’t collecting and selling personal data. The NSA has been tapping into the links between Google’s data centers. Google is racing to fix this, but a cynical way of putting it is that Google is saying “No one gets to spy on our customers except us.”
LS: Is there a way to solve this?
BG: I have great faith that transparency will enable the development of good policy. The more we know, the more we can design policies to keep power in check. Before this, you couldn’t shop for privacy. Now a free market for privacy is developing as the providers tell us more about what they’re doing. Transparency allows legislation and regulation to be debated. The House Republicans came within 8 votes of prohibiting call data collection, which would have been unthinkable before Snowden. And there’s hope in the judiciary.
NC: We can do much more than transparency. We can make use of the available info to prevent surveillance. E.g., we can demand the defeat of TPP. And now hardware in computers is being designed to detect your every keystroke, leading some Americans to be wary of Chinese-made computers, but the US manufacturers are probably doing it better. And manufacturers have for years been trying to design fly-sized drones to collect info; they’ll be around soon. Drones are a perfect device for terrorists. We can learn about this and do something about it. We don’t have to wait until it’s exposed by Wikileaks. It’s right there in mainstream journals.
LS: Are you calling for a political movement?
NC: Yes. We’re going to need mass action.
BG: A few months ago I noticed a small gray box with an EPA logo on it outside my apartment in NYC. It monitors energy usage, which is useful for preventing brownouts. But it measures down to the apartment level, which could be useful to police trying to establish your personal patterns. There’s no legislation or judicial review of the use of this data. We can’t turn back the clock. We can try to draw boundaries, and then have sufficient openness so that we can tell if they’ve crossed those boundaries.
LS: Bart, how do you manage the flow of info from Snowden?
BG: Snowden does not manage the release of the data. He gave it to three journalists and asked us to use our best judgment — he asked us to correct for his bias about what the most important stories are — and to avoid direct damage to security. The documents are difficult. They’re often incomplete and can be hard to interpret.
Q: What would be a first step in forming a popular movement?
NC: Same as always. E.g., the women’s movement began in the 1960s (at least in the modern movement) with consciousness-raising groups.
Q: Where do we draw the line between transparency and privacy, given that we have real enemies?
BG: First you have to acknowledge that there is a line. There are dangerous people who want to do dangerous things, and some of these tools are helpful in preventing that. I’ve been looking for stories that elucidate big policy decisions without giving away specifics that would harm legitimate action.
Q: Have you changed the tools you use?
BG: Yes. I keep notes encrypted. I’ve learned to use the tools for anonymous communication. But I can’t go off the grid and be a journalist, so I’ve accepted certain trade-offs. I’m working much less efficiently than I used to. E.g., I sometimes use computers that have never touched the Net.
Q: In the women’s movement, at least 50% of the population stood to benefit. But probably a large majority of today’s population would exchange their freedom for convenience.
NC: The trade-off is presented as being for security. But if you read the documents, the security issue is how to keep the govt secure from its citizens. E.g., Ellsberg kept a volume of the Pentagon Papers secret to avoid affecting the Vietnam negotiations, although I thought the volume really would only have embarrassed the govt. Security is in fact not a high priority for govts. The US govt is now involved in the greatest global terrorist campaign that has ever been carried out: the drone campaign. Large regions of the world are now being terrorized. If you don’t know whether the guy across the street is about to be blown away, along with everyone around him, you’re terrorized. Every time you kill an Al Qaeda terrorist, you create 40 more. It’s just not a concern to the govt. In 1950, the US had incomparable security; there was only one potential threat: the creation of ICBMs with nuclear warheads. We could have entered into a treaty with Russia to ban them. See McGeorge Bundy’s history. Bundy says he was unable to find a single paper, even a draft, suggesting that we do something to try to ban this threat of total instantaneous destruction. E.g., Reagan tested Russian nuclear defenses in ways that could have led to horrible consequences. Those are the real security threats. And it’s true not just of the United States.
The FCC’s Open Internet Advisory Committee’s 2013 Annual Report has been posted. The OIAC is a civilian group, headed by Jonathan Zittrain [twitter:zittrain] . The report is rich, but I want to point to one part that I found especially interesting: the section on “specialized services.”
Specialized services are interesting because when the FCC adopted the Open Internet Order (its “Net Neutrality” policy), it permitted the carriers to use their Internet-delivery infrastructure to provide some specific type of content or service alongside the Internet. As Harold Feld put it in 2009, in theory the introduction of “managed services”
allows services like telemedicine to get dedicated capacity without resorting to “tiering” that is anathema to network neutrality. In reality, it is a great new way for incumbents to privilege their own VOIP and video services over the traffic of others.
The danger is that the providers will circumvent the requirement that they not discriminate in favor of their own content (or in favor of content from companies that pay them) by splintering off that content and calling it a specialized service. (For better explanations, check Technoverse, Ars Technica, and Commissioner Copps’ statement.)
So, a lot comes down to the definition of a “specialized service.” This Annual Report undertakes the challenge. The summary begins on page 9, and the full section begins on p. 66.
I won’t pretend to have the expertise to evaluate the definitions. But I do like the principles that guided the group:
Regulation should not create a perverse incentive for operators to move away from a converged IP infrastructure
A service should not be able to escape regulatory burden or acquire a burden by moving to IP
The Specialized Services group was led by David Clark, and manifests a concern for what Jonathan Zittrain calls “generativity”: it’s not enough to measure the number of bits going through a line to a person’s house; we also have to make sure that the user is able to do more with those bits than simply consume them.
I’m happy to see the Committee address the difficult issue of specialized services, and to do so with the clear intent of (a) not letting access to the open Internet be sacrificed, and (b) not allowing specialized services to be an end run around an open Internet.
Note: Jonathan Zittrain is my boss’ boss at the Harvard Law Library. I’d known him through the Berkman Center for ten years before that.
Categories: net neutrality
Tagged with: fcc • net neutrality
Date: August 21st, 2013 dw