Secretary of State John Kerry gave a speech in Seoul yesterday about the Internet, setting out five principles of cybersecurity.
The talk is quite enthusiastic and progressive about the Net. Sort of. For example, he says, “[t]he United States considers the promotion of an open and secure internet to be a key component of our foreign policy,” but he says this in support of his idea that it’s crucial to govern the Internet. On the third hand, the governance he has in mind is designed to keep the Net open to all people and all ideas. On the fourth hand, predictably, we don’t know how much structural freedom he’s willing to give up to stop the very Worst People on Earth: those who share content they do not own.
Overall, it’s a speech that we can be pretty proud of.
Here’s why he thinks the Net is important:
…to begin with, America believes – as I know you do – that the internet should be open and accessible to everyone. We believe it should be interoperable, so it can connect seamlessly across international borders. We believe people are entitled to the same rights of free expression online as they possess offline. We believe countries should work together to deter and respond effectively to online threats. And we believe digital policy should seek to fulfill the technology’s potential as a vehicle for global stability and sustained economic development; as an innovative way to enhance the transparency of governments and hold governments accountable; and also as a means for social empowerment that is also the most democratic form of public expression ever invented.
At its best, the internet is an equal-opportunity platform from which the voice of a student can have as much reach as that of a billionaire; a chief executive may be able to be out-debated by an entry-level employee – and there’s nothing wrong with that.
Great, although why he needed to add a Seinfeldian “Not that there’s anything wrong with that” is a bit concerning.
He then goes on to say that everyone’s human rights extend to online behavior, which is an important position, although it falls short of Hillary Clinton’s claim while Secretary of State that there is a universal “freedom to connect.”
He then in an odd way absolves the Internet from blame for the disruption it seems to cause:
The internet is, among many other things, an instrument of freedom. It’s a tool people resort to in response to the absence and failure or abuse of government… Anyone who blames the internet for the disorder or turmoil in today’s world is just not using their head to connect the dots correctly. And banning the internet in a misguided attempt to impose order will never succeed in quashing the universal desire for freedom.
This separates him from those who think that the Net actually gives people an idea of freedom, encourages them to speak their minds, or is anything except a passive medium. But that’s fine since in this section he’s explaining why dictators shouldn’t shut down the Net. So we can just keep the “inspires an ambition for political freedom” part quiet for now.
“The remedy for the speech that we do not like is more speech,” he says, always a good trope. But he follows it up with an emphasis on bottom-up conversation, which is refreshing: “It’s the credible voices of real people that must not only be enabled, but they need to be amplified.”
To make the point that the Net empowers all sectors of society, and thus it would be disastrous if it were disrupted globally, he suggests that we watch The Day the Earth Stood Still, which makes me think Secretary Kerry has not watched either version of that movie lately. Klaatu barada nikto, Mr. Kerry.
To enable international commerce, he opposes data localization standards, in the course of which he uses “google” as a verb. Time to up your campaign contributions, Bing.
Kerry pre-announces an international initiative to address the digital divide, “in combination with partner countries, development banks, engineers, and industry leaders.” Details to follow.
Kerry tries to position the NSA’s data collection as an enlightened policy:
Further, unlike many, we have taken steps to respect and safeguard the privacy of the citizens of other countries and to use the information that we do collect solely to address the very specific threat to the United States and to our allies. We don’t use security concerns as an excuse to suppress criticisms of our policies or to give a competitive advantage to an American company and any commercial interests at all.
You have our word on that. So, we’re good? Moving on.
Kerry acknowledges that the Telecom Act of 1996 is obsolete, noting that “Barely anybody in 1996 was talking about data, and data transformation, and data management. It was all about telephony – the telephone.”
Finally, he gets to governance:
So this brings me to another issue that should concern us all, and that is governance – because even a technology founded on freedom needs rules to be able to flourish and work properly. We understand that. Unlike many models of government that are basically top-down, the internet allows all stakeholders – the private sector, civil society, academics, engineers, and governments – to all have seats at the table. And this multi-stakeholder approach is embodied in a myriad of institutions that each day address internet issues and help digital technology to be able to function.
“Stakeholders” get a “seat at the table”? It’s our goddamned table. And it’s more like a blanket on the ground than polished rare wood in a board room. Here’s an idea for you, World Leaders: How about if you take your stakes and get off our blanket?
Well, that felt good. Back to governing the Internet into the ground. And to be fair, Kerry seems aware of the dangers of top-down control, even if he doesn’t appreciate the benefits of bottom-up self-organization:
That’s why we have to be wary of those who claim that the system is broken or who advocate replacing it with a more centralized arrangement – where governments would have a monopoly on the decision-making. That’s dangerous. Now, I don’t know what you think, but I am confident that if we were to ask any large group of internet users anywhere in the world what their preferences are, the option “leave everything to the government” would be at the absolute bottom of the list.
Kerry now enunciates his five principles.
First, no country should conduct or knowingly support online activity that intentionally damages or impedes the use of another country’s critical infrastructure.
Second, no country should seek either to prevent emergency teams from responding to a cybersecurity incident, or allow its own teams to cause harm.
Third, no country should conduct or support cyber-enabled theft of intellectual property, trade secrets, or other confidential business information for commercial gain.
Fourth, every country should mitigate malicious cyber activity emanating from its soil, and they should do so in a transparent, accountable and cooperative way.
And fifth, every country should do what it can to help states that are victimized by a cyberattack.
Two particular points:
First, #2 establishes Internet repair teams as the medical support people in the modern battleground: you don’t fire on them.
Second, #3 gets my goat. Earlier in the talk, Sect’y Kerry said: “We understand that freedom of expression is not a license to incite imminent violence. It’s not a license to commit fraud. It’s not a license to indulge in libel, or sexually exploit children.” But the one crime that gets called out in his five principles is violating copyright or patent laws. And it’s not even aimed at other governments doing so, for it explicitly limits the prohibition to acts committed “for commercial gain.” Why the hell is protecting “IP” more important than preventing cross-border libel, doxxing or other privacy violations, organizing human trafficking, or censorship?
Oh, right. Disney. Hollywood. A completely corrupt electoral process. Got it.
Now, it’s easy to be snarky and dismissive about this speech — or any speech — by a Secretary of State about the Internet, but just consider how bad it could have been. Imagine a speech by a Secretary of State in an administration that sees the Internet primarily as a threat to security, to morals, to business as usual. There’s actually a lot to like in this talk, given its assumptions that the Net needs governments to govern it and that it’s ok to spy on everyone so long as we don’t do Bad Things with that information that we gather.
So, before you vote Republican, re-read Hillary Clinton’s two speeches [2010 2011] on Internet freedom.
Tagged with: copyleft
Date: May 19th, 2015 dw
Bruce is one of the most visible, articulate, and smartest voices on behalf of preserving our privacy. (His new book, Data and Goliath, is both very readable and very well documented.) At an event at West Point, he met Admiral Mike Rogers, Director of the NSA. Bruce did an extensive liveblog of Rogers’ keynote.
There was no visible explosion, forcing physicists to rethink their understanding of matter and anti-matter.
Tim Hwang started a little memefest by suggesting that that photo was announcing a new movie. Contributions by the likes of Tim, Nathan Mathias, Sam Klein, and Ryan Budish include:
Security Chasers : The Chastening
US Confidential: What you don’t know you don’t know can kill you
Selma and Louise: Deep Cover
Tango and Hooch: The Spookening
Open and Shut: The Legend Begins
Tagged with: nsa
Date: May 14th, 2015 dw
Yet another brilliant post by Ethan. (I think I’m going to turn that into a keyboard macro. I’ll just have to type ^EthanTalk and that opening sentence will get filled in.) It’s a reflection on the reaction to his piece in the Atlantic about advertising as the Net’s original sin, and the focus on his “confession” that he wrote the code for the Net’s first popup ad.
But I think I actually disagree with one of his key points. In other words, I’m very likely wrong. Nevertheless…
Ethan explains why the Net has come to rely on advertising money:
We had a failure of imagination. And the millions of smart young programmers and businesspeople spending their lives trying to get us to click on ads are also failing to imagine something better. We’re all starting from the same assumptions: everything on the internet is free, we pay with our attention, and our attention is worth more if advertisers know more about who we are and what we do, we start business with money from venture capitalists who need businesses to grow explosively if they’re going to make money.
He recommends that we question our assumptions so we can come up with more imaginative solutions.
I agree with Ethan’s statement of the problem, and admire his ability to put it forward with such urgency. But it seems to me that the problem is less a failure of imagination than the success of the power of incumbent systems. Is access to the Net in exactly the wrong hands because of the failure of someone to imagine a better way, or because of the structural corruption of capitalism? Similarly, why are we failing to slow global warming in an appreciable way? (Remember when Pres. Reagan took down the solar panels Pres. Carter had installed on the White House?) Why are elections still disproportionately determined by the wealthy? In each of these cases, imagination has lost to entrenched systems. We had innovative ways of accessing the Net, we’ve had many great ideas for slowing global warming, we have had highly imaginative attempts to get big money out of politics, and they all failed to one degree or another. Thuggish systems steal great ideas’ lunch money. Over and over and over.
Ethan of course recognizes this. But he ties these failures to failures of the imagination when one could just as well conclude that imagination is no match for corrupt systems — especially since we’ve now gone through a period when imagination was unleashed with a force never before seen, and yet the fundamental systems haven’t budged. This seems to be Larry Lessig’s conclusion, since he moved from Creative Commons — an imaginative, disruptive approach — to a super PAC that plays on the existing field, but plays for the Good Guys ‘n’ Gals.
Likewise, one could suggest that the solution — if there is one — is not more imagination, but more organizing. More imagination will only work if the medium still is pliable. Experience suggests it never was as pliable as some of us thought.
But the truth is that I really don’t know. I don’t fully believe the depressing “bad thugs beat good ideas” line I’ve just adumbrated. I certainly agree that it’s turning out to be much harder to overturn the old systems than I’d thought twenty or even five years ago. But I also think that we’ve come much further than we often realize. I take it as part of my job to remind people of that, which is why I am almost always on the chirpier side of these issues. And I certainly think that good ideas can be insanely disruptive, starting with the Net and the Web, and including Skype, eBay, Open Source, maps and GPS, etc.
So, while I don’t want to pin the failure of the Net on our failure of imagination, I also still have hope that bold acts of imagination can make progress, that our ability to iterate at scale can create social formations that are new in the world, and that this may be a multi-generational fight.
I therefore come out of Ethan’s post with questions: (1) What about this age made it possible even to think that imagination could disrupt our most entrenched systems? (2) What makes some ideas effectively disruptive, and why do other equally imaginative good ideas fail? And what about unimaginative ideas that make a real difference? The Montgomery bus boycott was not particularly imaginative, but it sure packed a wallop. (3) What can we do to make it easier for great acts of imagination to become real?
For me, #1 has to do with the Internet. (Color me technodeterminist.) I don’t have anything worthwhile to say about #2. And I still have hope that the answer to #3 has something to do with the ability of billions of people to make common cause — and, more powerfully, to iterate together — over the Net. Obviously #3 also needs regulatory reform to make sure the Internet remains at least a partially open ecosystem.
So, I find myself in deep sympathy with the context of what Ethan describes so well and so urgently. But I don’t find the rhetoric of imagination convincing.
This week there were two out-of-the-park posts by Berkman folk: Ethan Zuckerman on advertising as the Net’s original sin, and Zeynep Tufekci on the power of the open Internet as demonstrated by coverage of the riots in Ferguson. Each provides a view on whether the Net is a failed promise. Each is brilliant and brilliantly written.
Zeynep on Ferguson
Zeynep, who has written with wisdom and insight on the role of social media in the Turkish protests (e.g., here and here), looks at how Twitter brought the Ferguson police riots onto the national agenda and how well Twitter “covered” them. But those events didn’t make a dent in Facebook’s presentation of news. Why? she asks.
Twitter is an open platform where anyone can post whatever they want. It therefore reflects our interests — although no medium is a mere reflection. FB, on the other hand, uses algorithms to determine what it thinks our interests are … except that its algorithms are actually tuned to get us to click more so that FB can show us more ads. (Zeynep made that point about an early and errant draft of my CNN.com commentary on the FB mood experiment. Thanks, Zeynep!) She uses this to make an important point about the Net’s value as a medium the agenda of which is not set by commercial interests. She talks about this as “Net Neutrality,” extending it from its usual application to the access providers (Comcast, Verizon and their small handful of buddies) to those providing important platforms such as Facebook.
She concludes (but please read it all!):
How the internet is run, governed and filtered is a human rights issue.
And despite a lot of dismal developments, this fight is far from over, and its enemy is cynicism and dismissal of this reality.
Don’t let anyone tell you otherwise.
What happens to #Ferguson affects what happens to Ferguson.
Yup yup yup. This post is required reading for all of the cynics who would impress us with their wake-up-and-smell-the-shitty-coffee pessimism.
Ethan on Ads
Ethan cites a talk by Maciej Ceglowski for the insight that “we’ve ended up with surveillance as the default, if not sole, internet business model.” Says Ethan,
I have come to believe that advertising is the original sin of the web. The fallen state of our Internet is a direct, if unintentional, consequence of choosing advertising as the default model to support online content and services.
Since Internet ads are more effective as a business model than as an actual business, companies are driven ever more frantically to gather customer data in order to hold out the hope of making their ads more effective. And there went out privacy. (This is a very rough paraphrase of Ethan’s argument.)
Ethan pays more than lip service to the benefits — promised and delivered — of the ad-supported Web. But he points to four rather devastating drawbacks, including the distortions caused by algorithmic filtering that Zeynep warns us about. Then he discusses what we can do about it.
I’m not going to try to summarize any further. You need to read this piece. And you will enjoy it. For example, betcha can’t guess who wrote the code for the world’s first pop-up ads. Answer: Ethan.
Also recommended: Jeff Jarvis’ response and Mathew Ingram’s response to both. I myself have little hope that advertising can be made significantly better, where “better” means being unreservedly in the interests of “consumers” and sufficiently valuable to the advertisers. I’m of course not confident about this, and maybe tomorrow someone will come up with the solution, but my thinking is based on the assumption that the open Web is always going to be a better way for us to discover what we care about because the native building material of the Web is in fact what we find mutually interesting.
Read both these articles. They are important contributions to understanding the Web We Want.
Categories: echo chambers, net neutrality, open access, social media
Tagged with: advertising, net neutrality, social media
Date: August 15th, 2014 dw
The debate over whether municipalities should be allowed to provide Internet access has been heating up. Twenty states ban it. Tom Wheeler, the chair of the FCC, has said he wants to “preempt” those laws. Congress is maneuvering to extend the ban nationwide.
Jim Baller, who has been writing about the laws, policies, and economics of network deployment for decades, has found an eerie resonance of this contemporary debate. Here’s a scan of the table of contents of a 1906 (yes, 1906) issue of Moody’s that features a symposium on “Municipal Ownership and Operation.”
The Moody’s articles are obviously not talking about the Internet. They’re talking about the electric grid.
In a 1994 (yes, 1994) article published just as the Clinton administration (yes, Clinton) was developing principles for the deployment of the “information superhighway,” Jim wrote that if we want the far-reaching benefits foreseen by the National Telecommunications and Information Administration (and they were amazingly prescient (but why can’t I find the report online??)), then we ought to learn four things from the deployment of the electric grid in the 1880s and 1890s:
First, the history of the electric power industry teaches that one cannot expect private profit-maximizing firms to provide “universal service” or anything like it in the early years (or decades) of their operations, when the allure of the most profitable markets is most compelling.
Second, the history of the electric power industry teaches that opening the doors to anyone willing to provide critical public services can be counterproductive and that it is essential to watch carefully the growth of private firms that enter the field. If such growth is left unchecked, the firms may become so large and complex that government institutions can no longer control or even understand them. Until government eventually catches up, the public may suffer incalculable injury.
Third, the history of the electric power industry teaches that monopolists will use all means available to influence the opinions of lawmakers and the public in their favor and will sometimes have frightening success.
Fourth, and most important, the history of the electric power industry teaches that the presence or threat of competition from the public sector is one of the best and surest ways to secure quality service and reasonable prices from private enterprises involved in the delivery of critical public services.
Learn from history? Repeat it? Or intervene as citizens to get the history we want? I’ll take door number 3, please.
Here’s a fantastic 11-minute video from Vi Hart that explains Net Neutrality and more.
Categories: net neutrality
Tagged with: explainers, net neutrality
Date: May 8th, 2014 dw
I just posted at Medium.com about why it’s important to remember the difference between the Net and the Web. Here’s the beginning:
A note to NPR and other media that have been reporting on “the 25th anniversary of the Internet”: NO, IT’S NOT. It’s the 25th anniversary of the Web. The Internet is way older than that. And the difference matters.
The Internet is a set of protocols — agreements — about how information will be sliced up, sent over whatever media the inter-networked networks use, and reassembled when it gets there. The World Wide Web uses the Internet to move information around. The Internet by itself doesn’t know or care about Web pages, browsers, or the hyperlinks we’ve come to love. Rather, the Internet enables things like the World Wide Web, email, Skype, and much much more to be specified and made real. By analogy, the Internet is like an operating system, and the Web, Skype, and email are like applications that run on top of it.
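The slicing-and-reassembling idea can be sketched in a few lines of toy Python. This is my own illustration, not anything from the Medium piece, and the function names are made up; real IP and TCP are vastly more involved. The point is only that numbered chunks can travel in any order and still be put back together at the far end:

```python
import random

def packetize(message: str, size: int = 8):
    """Slice a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the message no matter what order the packets arrived in."""
    return "".join(chunk for _, chunk in sorted(packets))

msg = "The Internet moves data; the Web is one application on top of it."
packets = packetize(msg)
random.shuffle(packets)          # simulate out-of-order delivery
assert reassemble(packets) == msg
```

Everything that rides on the Internet — the Web, email, Skype — is, in this toy picture, just a different convention for what goes inside the chunks.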
This is not a technical quibble. The difference between the Internet and the Web matters more than ever for at least two reasons.
Continued at Medium.com…
Categories: net neutrality
Tagged with: internet
Date: March 15th, 2014 dw
William McGeveran [twitter:BillMcGev] has written an article for the University of Minnesota Law School that suggests how to make “frictionless sharing” well-behaved. He defines frictionless sharing as “disclosing individuals’ activities automatically, rather than waiting for them to authorize a particular disclosure.” For example:
…mainstream news websites, including the Washington Post, offer “social reading” applications (“apps”) in Facebook. After a one-time authorization, these apps send routine messages through Facebook to users’ friends identifying articles the users view.
Bill’s article considers the pros and cons:
Social media confers considerable advantages on individuals, their friends, and, of course, intermediaries like Spotify and Facebook. But many implementations of frictionless architecture have gone too far, potentially invading privacy and drowning useful information in a tide of meaningless spam.
Bill is not trying to build walls. “The key to online disclosures … turns out to be the correct amount of friction, not its elimination.” To assess what constitutes “the correct amount” he offers a heuristic, which I am happy to call McGeveran’s Law of Friction: “It should not be easier to ‘share’ an action online than to do it.” (Bill does not suggest naming the law after him! He is a modest fellow.)
One of the problems with the unintentional sharing of information are “misclosures,” a term he attributes to Kelly Caine.
Frictionless sharing makes misclosures more likely because it removes practical obscurity on which people have implicitly relied when assessing the likely audience that would find out about their activities. In other words, frictionless sharing can wrench individuals’ actions from one context to another, undermining their privacy expectations in the process.
Not only does this reveal, say, that you’ve been watching Yoga for Health: Depression and Gastrointestinal Problems (to use an example from Sen. Franken that Bill cites), it reveals that fact to your most intimate friends and family. (In my case, the relevant example would be The Amazing Race, by far the worst TV I watch, but I only do it when I’m looking for background noise while doing something else. I swear!) Worse, says Bill, “preference falsification” — our desire to have our known preferences support our social image — can alter our tastes, leading to more conformity and less diversity in our media diets.
Bill points to other problems with making social sharing frictionless, including reducing the quality of information that scrolls past us, turning what could be a useful set of recommendations from friends into little more than spam: “…friends who choose to look at an article because I glanced at it for 15 seconds probably do not discover hidden gems as a result.”
Bill’s aim is to protect the value of intentionally shared information; he is not a hoarder. McGeveran’s Law thus tries to add in enough friction that sharing is intentional, but not so much that it gets in the way of that intention. For example, he asks us to imagine Netflix presenting the user with two buttons: “Play” and “Play and Share.” Sharing thus would require exactly as much work as playing, thus satisfying McGeveran’s Law. But having only a “Play” button that then automatically shares the fact that you just watched Dumb and Dumberer distinctly fails the Law because it does not “secure genuine consent.” As Bill points out, his Law of Friction is tied to the technology in use, and thus is flexible enough to be useful even as the technology and its user interfaces change.
I like it.
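Bill’s two-button example is easy to sketch as code. This is a toy of my own devising, with hypothetical names and no relation to any real Netflix interface; it just shows the shape of a design that satisfies McGeveran’s Law, where sharing takes exactly one deliberate action, the same as playing:

```python
shared_log = []   # what gets disclosed to friends

def play(title: str) -> str:
    """Plays the title. Discloses nothing."""
    return f"now playing: {title}"

def play_and_share(title: str, viewer: str) -> str:
    """The explicit second button: sharing happens only here."""
    shared_log.append((viewer, title))
    return play(title)

play("Dumb and Dumberer")        # watched quietly; shared_log untouched
play_and_share("The Day the Earth Stood Still", "dw")
assert shared_log == [("dw", "The Day the Earth Stood Still")]
```

The failing design would fold the `shared_log.append` into `play` itself, so the disclosure piggybacks on an action the user took for a different reason — which is exactly the genuine-consent problem Bill identifies.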
Tagged with: privacy, programming the social
Date: January 12th, 2014 dw
Noam Chomsky and Barton Gellman were interviewed at the Engaging Big Data conference put on by MIT’s Senseable City Lab on Nov. 15. When Prof. Chomsky was asked what we can do about government surveillance, he reiterated his earlier call for us to understand the NSA surveillance scandal within an historical context that shows that governments always use technology for their own worst purposes. According to my liveblogging (= inaccurate, paraphrased) notes, Prof. Chomsky said:
Governments have been doing this for a century, using the best technology they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe this is universal. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications. [Emphasis added, of course.]
I was glad that Barton Gellman — hardly an NSA apologist — called Prof. Chomsky on his lumping of the NSA with the Stasi, for there is simply no comparison between the freedom we have in the US and the thuggish repression omnipresent in East Germany. But I was still bothered, albeit by a much smaller point. I have no serious quarrel with Prof. Chomsky’s points that government incursions on rights are nothing new, and that governments generally (always?) believe they are acting for the best of purposes. I am a little bit hung-up, however, on his equivocating on “information.”
Prof. Chomsky is of course right in his implied definition of information. (He is Noam Chomsky, after all, and knows a little more about the topic than I do.) Modern information is often described as a measure of surprise. A string of 100 alternating ones and zeroes conveys less information than a string of 100 bits that are less predictable, for if you can predict with certainty what the next bit will be, then you don’t learn anything from that bit; it carries no information. Information theory lets us quantify how much information is conveyed by streams of varying predictability.
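As a rough illustration of that technical sense of “information” (my own sketch, not anything from the interview), here is Shannon entropy estimated from symbol frequencies. One caveat worth flagging: this zeroth-order estimate scores the alternating string at a full bit per symbol, because frequency counts alone cannot see that alternation is perfectly predictable; capturing that kind of sequential predictability requires modeling the order of symbols, not just their frequencies.

```python
from collections import Counter
from math import log2

def entropy_per_symbol(stream: str) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over symbol frequencies,
    in bits per symbol."""
    counts = Counter(stream)
    n = len(stream)
    return -sum((c / n) * log2(c / n) for c in counts.values())

h_const = entropy_per_symbol("1" * 100)   # fully predictable: 0 bits/symbol
h_coin = entropy_per_symbol("1101001100101101")  # mixed ones and zeroes
h_alt = entropy_per_symbol("10" * 50)     # 1 bit/symbol by frequency alone
assert h_const == 0.0 and abs(h_alt - 1.0) < 1e-9
```

If the next bit is certain, it teaches you nothing, which is the formal core of Prof. Chomsky’s quip that a perfectly predictable utterance “carries no information.”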
So, when U.S. security folks say they are spying on us for our own security, are they saying literally nothing? Is that claim without meaning? Only in the technical sense of information. It is, in fact, quite meaningful, even if quite predictable, in the ordinary sense of the term “information.”
First, Prof. Chomsky’s point that governments do bad things while thinking they’re doing good is an important reminder to examine our own assumptions. Even the bad guys think they’re the good guys.
Second, I disagree with Prof. Chomsky’s generalization that governments always justify surveillance in the name of security. For example, governments sometimes record traffic (including the movement of identifiable cars through toll stations) with the justification that the information will be used to ease congestion. Tracking the position of mobile phones has been justified as necessary for providing swift EMT responses. Governments require us to fill out detailed reports on our personal finances every year on the grounds that they need to tax us fairly. Our government hires a fleet of people every ten years to visit us where we live in order to compile a census. These are all forms of surveillance, but in none of these cases is security given as the justification. And if you want to say that these other forms don’t count, I suspect it’s because it’s not surveillance done in the name of security…which is my point.
Third, governments rarely cite security as the justification without specifying what the population is being secured against; as Prof. Chomsky agrees, that’s an inherent part of the fear-mongering required to get us to accept being spied upon. So governments proclaim over and over what threatens our security: Spies in our midst? Civil unrest? Traitorous classes of people? Illegal aliens? Muggers and murderers? Terrorists? Thus, the security claim isn’t made on its own. It’s made with specific threats in mind, which makes the claim less predictable — and thus more informational — than Prof. Chomsky says.
So, I disagree with Prof. Chomsky’s argument that a government that justifies spying on the grounds of security is literally saying something without meaning. Even if it were entirely predictable that governments will always respond “Because security” when asked to justify surveillance — and my second point disputes that — we wouldn’t treat the response as meaningless but as requiring a follow-up question. And even if the government just kept repeating the word “Security” in response to all our questions, that very act would carry meaning as well, like a doctor who won’t tell you what a shot is for beyond saying “It’s to keep you healthy.” The lack of meaning in the Information Theory sense doesn’t carry into the realm in which people and their public officials engage in discourse.
Here’s an analogy. Prof. Chomsky’s argument is saying, “When a government justifies creating medical programs for health, what they’re saying is meaningless. They always say that! The Nazis said the same thing when they were sterilizing ‘inferiors,’ and Medieval physicians engaged in barbarous [barber-ous, actually – heyo!] practices in the name of health.” Such reasoning would rule out a discussion of whether current government-sponsored medical programs actually promote health. But that is just the sort of conversation we need to have now about the NSA.
Prof. Chomsky’s repeated appeals to history in this interview cover up exactly what we need to be discussing. Yes, both the NSA and the Stasi claimed security as their justification for spying. But far from that claim being meaningless, it calls for a careful analysis: the nature and severity of the risk, the most effective tactics to ameliorate that threat, the consequences of those tactics on broader rights and goods — all considerations that comparisons to the Stasi and Genghis Khan obscure. History counts, but not as a way to write off security considerations as meaningless by invoking a technical definition of “information.”
Tagged with: information
Date: November 17th, 2013 dw
The sociologist Saskia Sassen is giving a plenary talk at Engaging Data 2013. [I had a little trouble hearing some of it. Sorry. And in the press of time I haven’t had a chance to vet this for even obvious typos, etc.]
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
1. The term Big Data is ambiguous. “Big Data” implies we’re in a technical zone. It becomes a “technical problem,” as when morally challenging technologies are developed by scientists who think they are just dealing with a technical issue. Big Data comes with a neutral charge. “Surveillance” brings in the state, the logics of power, and how citizens are affected.
Until recently, citizens could not relate to a map that came out in 2010 showing how much surveillance there is in the US. It was published by the Washington Post, but it didn’t register. 1,271 govt orgs and 1,931 private companies work on programs related to counterterrorism, homeland security and intelligence. There are more than 1 million people with top-secret clearance, and maybe a third are private contractors. In DC and environs, 33 building complexes are under construction or have been built for top-secret intelligence since 9/11. Together they are 22x the size of Congress. Inside these environments, the govt regulates everything. By 2010, DC had 4,000 corporate office buildings that handle classified info, all subject to govt regulation. “We’re dealing with a massive material apparatus.” We should not be distracted by the small individual devices.
2. Cisco lost 28% of its sales, in part as a result of its being tainted by the NSA’s taking of its data. This is alienating citizens and foreign govts. How do we stop this? We’re dealing with a kind of assemblage of technical capabilities, tech firms that sell the notion that for security we all have to be surveilled, and people. How do we get a handle on this? I ask: Are there spaces where we can forget about them? Our messy, nice complex cities are such spaces. All that data cannot be analyzed. (She notes that she did a panel that included the brother of a Muslim who has been indefinitely detained, so now her name is associated with him.)
3. How can I activate large, diverse spaces in cities? How can we activate local knowledges? We can “open-source the neighborhood.” The language of “neighborhood” brings me pleasure, she says.
If you think of institutions, they are codified, and they notice when there are violations. Every neighborhood has knowledge about the city that is different from the knowledge at the center. The homeless know more about rats than the center does. Make open access networks available to them, as a reverse wiki, so that local knowledge can find a place. Leak that knowledge into those codified systems. That’s the beginning of activating a city. From this you’d get a Big Data set, capturing the particularities of each neighborhood. [A knowledge network. I agree! :)]
The next step is activism, a movement. In my fantasy, at one end it’s big city life and at the other it’s neighborhood residents enabled to feel that their knowledge matters.
Q: If local data is being aggregated, could that become Big Data that’s used against the neighborhoods?
A: Yes, that’s why we need neighborhood activism. The politicizing of the neighborhoods shapes the way the knowledge is used.
Q: Disempowered neighborhoods would be even less able to contribute this type of knowledge.
A: The problem is to value them. The neighborhood has knowledge at ground level. That’s a first step of enabling a devalued subject. The effect of digital networks on formal knowledge creates an informal network. Velocity itself has the effect of informalizing knowledge. I’ve compared environmental activists and financial traders. The environmentalists pick up knowledge on the ground. So, the neighborhoods may be powerless, but they have knowledge. Digital interactive open access makes it possible to bring together those bits of knowledge.
Q: Those who control the pipes seem to control the power. How does Big Data avoid the world being dominated by brainy people?
A: The brainy people at, say, Goldman Sachs are part of a larger institution. These institutions have so much power that they don’t know how to govern it. The US govt has been the most powerful in the world, with the result that it doesn’t know how to govern its own power. It has engaged in disastrous wars. So “brainy people” running the world through the Ciscos, etc. — I’m not sure. I’m talking about a different idea of Big Data sets: distributed knowledges. E.g., Forest Watch uses indigenous people who can’t write, but they can tell before the trained biologists when there is something wrong in the ecosystem. There’s lots of data embedded in lots of places.
[She’s aggregating questions] Q1: Marginalized neighborhoods live being surveilled: stop and frisk, background checks, etc. Why did it take tapping Angela Merkel’s telephone to bring awareness? Q2: How do you convince policy makers to incorporate citizen data? Q3: There are strong disincentives to being out of the mainstream, so how can we incentivize difference?
A: How do we get the experts to use the knowledge? For me that’s not the most important aim. More important is activating the residents. What matters is that they become part of a conversation. A: About difference: Neighborhoods are pretty average places, unlike forest watchers. And even the forest watchers aren’t part of the knowledge-making circuit. We should bring them in. A: The participation of the neighborhoods isn’t just a utility for the central govt but is a first step toward mobilizing people who have been reduced to thinking that they don’t count. I think this is one of the most effective ways to contest the huge apparatus with the 10,000 buildings.
Tagged with: 2b2k
• big data
Date: November 15th, 2013 dw