Joho the Blog - November 2013

November 27, 2013

Pronouns were a mistake that we can fix

I think half the questions I ask a certain set of people are of the sort “Wait, which ‘he’?” or “Sheila or Marg?” All pronouns do is introduce ambiguity, error, and irked moods. We’d be better off without them, by which I mean without pronouns.

Worse, in some languages pronouns force us to make decisions about the gender of inanimate objects and even of abstractions. How does that help? Why don’t we also pretend that every object has a race, an eye color, and a favorite fruit? Assigning everything hit points would actually make more sense.

There are two arguments in favor of pronouns. First, some people’s names are long. Might I suggest that if we got rid of pronouns, people would soon start taking shorter names? Or we’d come up with a convention to shorten names in unambiguous ways. Perhaps “y” would be appended to the first syllable, as in “First Lady Michy said to President Oby, ‘Bary, I think you ought to meet with ex-Governor Schwarzy,’” all without any implied disrespect.

Second, plural pronouns can be useful when the group doesn’t have a known name or an obvious common descriptor. For example, “Two men, a woman, and a cocker spaniel drove up to an off-duty waiter and a former action movie star, and said, ‘Hey, why don’t you get in our car?,’ which they did.” The ‘they’ in that sentence does some useful work. I will allow it.

First person plural is an interesting challenge. I think I would allow “we” only for an indefinite group, as in the title of this post. Otherwise, use the name or descriptor of the group you have in mind. Think about how clarifying it would be to actually have to specify who you’re pretending to speak for.

I will allow the use of the first person singular since it is less ambiguous than using your own name: “Give that to David” isn’t helpful when I’m in a room with David Winer, David Cameron, and Michelangelo’s Statue of David. Second person singular pronouns should not be allowed, however, since there can be ambiguity about whom one is addressing. Second person plural I will allow on the same grounds as third person plural. Plus, by disallowing the second person singular, I have solved the ambiguity caused by using the exact same frigging word for second person singular and plural. What were we thinking?

Y’all are welcome.


November 24, 2013

How to write

I spent some time this morning happily browsing advice from famous writers on how to write, thanks to Maria Popova’s [twitter:BrainPickings] own writings on those writers writing about writing. Here’s Maria’s latest, which is about Anne Lamott’s Bird by Bird, an excellent (and excellently written!) piece that also contains links to famous writers on said topic.

Some of these pieces were familiar, some not, but all convinced me of one thing: writers should re-label their advice on how to write as “How I Write.” I find myself irked by every one of them into looking for counter-examples, even though I personally agree with much of what they say, and in many instances find their comments remarkably insightful.

Still, I want to push back when, for example, Susan Sontag says:

Your job is to see people as they really are, and to do this, you have to know who you are in the most compassionate possible sense. Then you can recognize others.

Yet you can’t throw a cat into a room full of writers without hitting someone wildly self-deceptive and unknowing. For example, Sontag’s own writing about writing ranges from breathtakingly perceptive to provocative to transparently self-aggrandizing.

Likewise, Elmore Leonard’s brilliant 10 rules of writing are clearly not rules for how to write, but rules for how to write like Elmore Leonard. (His ten rules are themselves a great example of his own style.) For instance, there’s #4:

Never use an adverb to modify the verb “said”


I even find myself pushing back against one of his rules that I greatly admire:

“If it sounds like writing … rewrite it.”

I love that…except that what do we do with Bernini? His Apollo and Daphne statue — the one where Daphne’s fingers sprout translucent leaves — is so realistic and yet so marble that one cannot look at it without thinking, “Holy crap! That’s marble!!!” (By the way, I just violated Leonard’s rule #5: “Keep your exclamation points under control.” He’s right about that.) Likewise, are we sure that no poetry is allowed to sound like writing?

Meanwhile, David Ogilvy — the model for Don Draper in pitch-mode, and a writer I admire greatly — is stylistically in sync with Elmore Leonard, but disagrees with both Leonard’s and Sontag’s rules. (Note: That was a highly imperfect sentence. Welcome to my blog.) Agreeing with Leonard, Ogilvy demands simplicity and the avoidance of pretentious, abstract terms. But his second rule says:

Write the way you talk. Naturally.

What do you say to that, Elmore? If you write the way you talk, will it sound like writing? And, David, suppose you don’t talk so good?

And Ogilvy’s eighth rule says:

If it is something important, get a colleague to improve it.

I’m not sure that Sontag’s insistence that writing requires something like personal authenticity allows for editing by colleagues. Why can’t “Hire yourself the best goddamn editor you can find” be an important Rule for Writers? And before you assume that such a needy writer must be a pathetic schlub who on her/his own is writing schlock, keep in mind that The New Yorker has a tradition of featuring truly superb writers in part because of the strength of its editors.

Maria Popova’s essays on writers advising writers (which, let me reiterate, I admire and enjoy) include some pieces of advice that are incontestable, but in the bad sense that they verge on being tautologies. For example, Lamott says:

Perfectionism is the voice of the oppressor, the enemy of the people. It will keep you cramped and insane your whole life, and it is the main obstacle between you and a shitty first draft.

That’s certainly true if perfectionism means a paralyzing perfectionism, i.e., the sort of perfectionism that keeps you cramped and insane, and that prevents you from doing a shitty first draft. (You have to love Lamott’s rule-violating use of “shitty.”) But there is also a type of perfectionism that makes an author worry over every broken rhythm and soft imprecision, and that ultimately results in lapidary works. Also, I’d venture that for most authors, the real obstacle to getting to that shitty first draft is not perfectionism but the fact that they’re just too damn tired when they get home from work.

The thing is, I agree with Lamott about perfectionism. It’s one reason I like blogging. I’m in favor of filling in the spaces between writing and speaking, between publishing and drafting. Even so, I find myself so insistently pushing back against advice from writers that it makes me wonder why. Maybe…

…Maybe it’s because I don’t think there’s such a thing as “writing” except in its most literal sense: putting marks on a rectangular surface. Beyond that, there is nothing that holds the concept of writing together.

This still makes it better than “communication,” an abstraction that gets wrong what it is an abstraction from. Still, communication provides a useful analogy. To give advice on how to communicate well, one has to decide ahead of time what type of communication one is referring to. Wooing? Convincing a jury? Praying? Writing a murder mystery? Asking for change from strangers? Muttering imprecations at the fact of dusk? Yelling “Fie! Her!!” in a crowded theater? Even basic rules like “Speak clearly” assume that one is communicating orally and that one is not Marlon Brando auditioning for a part. And even within any one domain or task of communication, the best practices are really about maintaining a form of rhetoric, not about communicating well.

There are plenty of tips about how to write the thing one wants to write. These tips can be very helpful. For example, I have a friend who swears by Write Or Die to help her get her shitty first draft down on paper. (No, my friend, your first drafts really aren’t shitty. I was using a technique I recommend that everyone use because I use it: the callback.) That tip works for her, but not for me. Still, I’m in favor of tips! But tips are “How I write” or “How I’ve heard some other people write,” not “How to write.”

How to write? I dunno. Lots of ways, I guess.


November 20, 2013

[liveblog][2b2k] David Eagleman on the brain as networks

I’m at re comm 13, an odd conference in Kitzbühel, Austria: 2.5 days of talks to 140 real estate executives, but the talks are about anything except real estate. David Eagleman, a neuroscientist at Baylor and a well-known author, is giving a talk. (Last night we had one of those compressed conversations that I can’t wait to be able to continue.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

How do we know your thinking is in your brain? If you damage your finger, you don’t change, but damage to your brain can change basic facets of your life. “The brain is the densest representation of who you are.” We’re the only species trying to figure out our own programming language. We’ve discovered the most complicated device in the universe: our own brains. Ten billion neurons. Every single neuron contains the entire human genome and thousands of proteins doing complicated computations. Each neuron is connected to tens of thousands of its neighbors, meaning there are 100s of trillions of connections. These numbers “bankrupt the language.”

Almost all of the operations of the brain are happening at a level invisible to us. Taking a drink of water requires a “lightning storm” of activity at the neural level. This leads us to a concept of the unconscious. The conscious part of you is the smallest bit of what’s happening in the brain. It’s like a stowaway on a transatlantic journey that’s taking credit for the entire trip. When you think of something, your brain’s been working on it for hours or days. “It wasn’t really you that thought of it.”

About the unconscious: Psychologists gave photos of women to men and asked them to evaluate how attractive the women were. Some of the photos were of the same women, but with dilated eyes. The men rated those as more attractive, but none of them noticed the dilation. Dilated eyes are a sign of sexual readiness in women. The men made their choices with no idea why.

More examples: In the US, if your name is Dennis or Denise, you’re more likely to become a dentist. These dentists have a conscious narrative about why they became dentists that misses the trick their brain has played on them. Likewise, people are statistically more likely to marry someone whose first name begins with the same first letter as theirs. And, if you are holding a warm mug of coffee, you’ll describe the relationship with your mother as warmer than if you’re holding an iced cup. There is an enormous gap between what you’re doing and what your conscious mind is doing.

“We should be thankful for that gap.” There’s so much going on under the hood, that we need to be shielded from the details. The conscious mind gets in trouble when it starts paying attention to what it’s doing. E.g., try signing your name with both hands in opposite directions simultaneously: it’s easy until you think about it. Likewise, if you now think about how you steer when making a lane change, you’re likely to enact it wrong. (You actually turn left and then turn right to an equal measure.)

Know thyself, sure. But neuroscience teaches us that you are many things. The brain is not a computer with a single output. It has many networks that are always competing. The brain is like a parliament that debates an action. When deciding between two sodas, one network might care about the price, another about the experience, another about the social aspect (cool or lame), etc. They battle. David looks at three of those networks:

1. How does the brain make decisions about valuation? E.g., people will walk 10 mins to save 10 € on a 20 € pen but not on a 557 € suit. Also, we have trouble making comparisons of worth among disparate items unless they are in a shared context. E.g., Williams Sonoma had a bread baking machine for $275 that did not sell. Once they added a second one for $370, it started selling. In real estate, if a customer is trying to decide between two homes, one modern and one traditional, if you want them to buy the modern one, show them another modern one. That gives them the context by which they can decide to buy it.

Everything is associated with everything else in the brain. (It’s an associative network.) Coffee used to be $0.50. When Starbucks started, they had to unanchor it from the old model so they made the coffee houses arty and renamed the sizes. Having lost the context for comparison, the price of Starbucks coffee began to seem reasonable.

2. Emotional experience is a big part of decision making. If you’re in a bad-smelling room, you’ll make harsher moral decisions. The trolley dilemma: 5 people have been tied to the tracks. A trolley is approaching rapidly. You can switch the trolley to a track with only one person tied to it. Everyone would switch the trolley. But now instead, you can push a fat man onto the tracks to stop the trolley. Few would. In the second scenario, touching someone engages the emotional system. The first scenario is just a math problem. The logic and emotional systems are always fighting it out. The Greeks viewed the self as someone steering a chariot drawn by the white horse of reason and the black horse of passion. [From Plato’s Phaedrus]

3. A lot of the machinery of the brain deals with other brains. We use the same circuitry to think about people and corporations. When a company betrays us, our brain responds the way it would if a friend betrayed us. Traditional economics says customer interactions are short-term, but the brain takes a much longer-range view. Breaches of trust travel fast. (David plays “United Breaks Guitars.”) Smart companies use social media to make you believe that the company is your friend.

The battle among these three networks drives decisions. “Know thyselves.”

This is unsettling. The self is not at the center. It’s like when Galileo repositioned us in the universe. That seemed like a dethroning of man, but the upside is that we discovered the Cosmos is much bigger, more subtle, and more magnificent than we thought. As we sail into the inner cosmos of the brain, we find the brain is much more subtle and magnificent than we ever considered.

“We’ve found the most wondrous thing in the universe, and it’s us.”

Q: Won’t this let us be manipulated?

A: Neural science is just catching up with what advertisers have known for 100 years.

Q: What about free will?

A: My lab and others have done experiments, and there’s no single experiment in neuroscience that proves that we do or do not have free will. But if we have free will, it’s a very small player in the system. We have genetics and experiences, and they make brains very different from one another. I argue for a legal system that recognizes a difference between people who may have committed the same crime. There are many different types of brains.


November 17, 2013

Noam Chomsky, security, and equivocal information

Noam Chomsky and Barton Gellman were interviewed at the Engaging Big Data conference put on by MIT’s Senseable City Lab on Nov. 15. When Prof. Chomsky was asked what we can do about government surveillance, he reiterated his earlier call for us to understand the NSA surveillance scandal within an historical context that shows that governments always use technology for their own worst purposes. According to my liveblogging (= inaccurate, paraphrased) notes, Prof. Chomsky said:

Governments have been doing this for a century, using the best technology they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe this is universal. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications. [Emphasis added, of course.]

I was glad that Barton Gellman — hardly an NSA apologist — called Prof. Chomsky on his lumping of the NSA with the Stasi, for there is simply no comparison between the freedom we have in the US and the thuggish repression omnipresent in East Germany. But I was still bothered, albeit by a much smaller point. I have no serious quarrel with Prof. Chomsky’s points that government incursions on rights are nothing new, and that governments generally (always?) believe they are acting for the best of purposes. I am a little bit hung up, however, on his equivocating on “information.”

Prof. Chomsky is of course right in his implied definition of information. (He is Noam Chomsky, after all, and knows a little more about the topic than I do.) Modern information is often described as a measure of surprise. A string of 100 alternating ones and zeroes conveys less information than a string of 100 bits that are less predictable, for if you can predict with certainty what the next bit will be, then you don’t learn anything from that bit; it carries no information. Information theory lets us quantify how much information is conveyed by streams of varying predictability.
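The “measure of surprise” idea can be made concrete with a toy calculation. Here’s a minimal sketch (my own illustrative code, not anything from the interview) that estimates how much information each bit of a string carries given the bit that came before it: the alternating string is perfectly predictable, so each new bit conveys nothing, while fair coin flips each carry close to one full bit.

```python
import math
import random
from collections import Counter

def conditional_entropy(bits: str) -> float:
    """Estimate H(X_t | X_{t-1}) in bits per symbol from observed
    pair frequencies: H = -sum over pairs of p(a,b) * log2(p(b|a))."""
    pairs = Counter(zip(bits, bits[1:]))   # counts of each (prev, next) pair
    prev = Counter(bits[:-1])              # counts of each preceding symbol
    n = len(bits) - 1
    h = 0.0
    for (a, b), count in pairs.items():
        p_ab = count / n                   # joint probability of the pair
        p_b_given_a = count / prev[a]      # how predictable 'b' is after 'a'
        h -= p_ab * math.log2(p_b_given_a)
    return h

alternating = "01" * 50                    # 100 perfectly predictable bits
random.seed(0)
noisy = "".join(random.choice("01") for _ in range(100))  # 100 coin flips

print(conditional_entropy(alternating))    # 0.0 — each bit is certain, zero information
print(conditional_entropy(noisy))          # close to 1.0 — each bit is a surprise
```

This is exactly the technical sense in which a completely predictable utterance (“Because security”) carries zero bits; the rest of the post argues that zero bits in this sense is not the same as zero meaning.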

So, when U.S. security folks say they are spying on us for our own security, are they saying literally nothing? Is that claim without meaning? Only in the technical sense of information. It is, in fact, quite meaningful, even if quite predictable, in the ordinary sense of the term “information.”

First, Prof. Chomsky’s point that governments do bad things while thinking they’re doing good is an important reminder to examine our own assumptions. Even the bad guys think they’re the good guys.

Second, I disagree with Prof. Chomsky’s generalization that governments always justify surveillance in the name of security. For example, governments sometimes record traffic (including the movement of identifiable cars through toll stations) with the justification that the information will be used to ease congestion. Tracking the position of mobile phones has been justified as necessary for providing swift EMT responses. Governments require us to fill out detailed reports on our personal finances every year on the grounds that they need to tax us fairly. Our government hires a fleet of people every ten years to visit us where we live in order to compile a census. These are all forms of surveillance, but in none of these cases is security given as the justification. And if you want to say that these other forms don’t count, I suspect it’s because it’s not surveillance done in the name of security…which is my point.

Third, governments rarely cite security as the justification without specifying what the population is being secured against; as Prof. Chomsky agrees, that’s an inherent part of the fear-mongering required to get us to accept being spied upon. So governments proclaim over and over what threatens our security: Spies in our midst? Civil unrest? Traitorous classes of people? Illegal aliens? Muggers and murderers? Terrorists? Thus, the security claim isn’t made on its own. It’s made with specific threats in mind, which makes the claim less predictable — and thus more informational — than Prof. Chomsky says.

So, I disagree with Prof. Chomsky’s argument that a government that justifies spying on the grounds of security is literally saying something without meaning. Even if it were entirely predictable that governments will always respond “Because security” when asked to justify surveillance — and my second point disputes that — we wouldn’t treat the response as meaningless but as requiring a follow-up question. And even if the government just kept repeating the word “Security” in response to all our questions, that very act would carry meaning as well, like a doctor who won’t tell you what a shot is for beyond saying “It’s to keep you healthy.” The lack of meaning in the Information Theory sense doesn’t carry into the realm in which people and their public officials engage in discourse.

Here’s an analogy. Prof. Chomsky’s argument is saying, “When a government justifies creating medical programs for health, what they’re saying is meaningless. They always say that! The Nazis said the same thing when they were sterilizing ‘inferiors,’ and Medieval physicians engaged in barbarous [barber-ous, actually – heyo!] practices in the name of health.” Such reasoning would rule out a discussion of whether current government-sponsored medical programs actually promote health. But that is just the sort of conversation we need to have now about the NSA.

Prof. Chomsky’s repeated appeals to history in this interview cover up exactly what we need to be discussing. Yes, both the NSA and the Stasi claimed security as their justification for spying. But far from that claim being meaningless, it calls for a careful analysis of the claim: the nature and severity of the risk, the most effective tactics to ameliorate that threat, the consequences of those tactics on broader rights and goods — all considerations that comparisons to the Stasi and Genghis Khan obscure. History counts, but not as a way to write off security considerations as meaningless by invoking a technical definition of “information.”


November 15, 2013

[liveblog][2b2k] Saskia Sassen

The sociologist Saskia Sassen is giving a plenary talk at Engaging Data 2013. [I had a little trouble hearing some of it. Sorry. And in the press of time I haven’t had a chance to vet this for even obvious typos, etc.]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

1. The term Big Data is ambiguous. “Big Data” implies we’re in a technical zone. It becomes a “technical problem,” as when morally challenging technologies are developed by scientists who think they are just dealing with a technical issue. Big Data comes with a neutral charge. “Surveillance” brings in the state, the logics of power, how citizens are affected.

Until recently, citizens could not relate to a map, published by the Washington Post in 2010, that shows how much surveillance there is in the US; it didn’t register. 1,271 govt orgs and 1,931 private companies work on programs related to counterterrorism, homeland security and intelligence. There are more than 1 million people with top-secret clearance, and maybe a third are private contractors. In DC and environs, 33 building complexes are under construction or have been built for top-secret intelligence since 9/11. Together they are 22x the size of Congress. Inside these environments, the govt regulates everything. By 2010, DC had 4,000 corporate office buildings that handle classified info, all subject to govt regulation. “We’re dealing with a massive material apparatus.” We should not be distracted by the small individual devices.

2. Cisco lost 28% of its sales, in part as a result of its being tainted by the NSA taking of its data. This is alienating citizens and foreign govts. How do we stop this? We’re dealing with a kind of assemblage of technical capabilities, tech firms that sell the notion that for security we all have to be surveilled, and people. How do we get a handle on this? I ask: Are there spaces where we can forget about them? Our messy, nice complex cities are such spaces. All that data cannot be analyzed. (She notes that she did a panel that included the brother of a Muslim who has been indefinitely detained, so now her name is associated with him.)

3. How can I activate large, diverse spaces in cities? How can we activate local knowledges? We can “outsource the neighborhood.” The language of “neighborhood” brings me pleasure, she says.

If you think of institutions, they are codified, and they notice when there are violations. Every neighborhood has knowledge about the city that is different from the knowledge at the center. The homeless know more about rats than the center does. Make open-access networks available to neighborhoods, as a reverse wiki, so that local knowledge can find a place. Leak that knowledge into those codified systems. That’s the beginning of activating a city. From this you’d get a Big Data set, capturing the particularities of each neighborhood. [A knowledge network. I agree! :)]

The next step is activism, a movement. In my fantasy, at one end it’s big city life and at the other it’s neighborhood residents enabled to feel that their knowledge matters.


Q: If local data is being aggregated, could that become Big Data that’s used against the neighborhoods?

A: Yes, that’s why we need neighborhood activism. The politicizing of the neighborhoods shapes the way the knowledge is used.

Q: Disempowered neighborhoods would be even less able to contribute this type of knowledge.

A: The problem is to value them. The neighborhood has knowledge at ground level. That’s a first step of enabling a devalued subject. The effect of digital networks on formal knowledge creates an informal network. Velocity itself has the effect of informalizing knowledge. I’ve compared environmental activists and financial traders. The environmentalists pick up knowledge on the ground. So, the neighborhoods may be powerless, but they have knowledge. Digital interactive open access makes it possible to bring together those bits of knowledge.

Q: Those who control the pipes seem to control the power. How does Big Data avoid the world being dominated by brainy people?

A: The brainy people at, say, Goldman Sachs are part of a larger institution. These institutions have so much power that they don’t know how to govern it. The US govt has been the most powerful in the world, with the result that it doesn’t know how to govern its own power. It has engaged in disastrous wars. So “brainy people” running the world through the Ciscos, etc. — I’m not sure. I’m talking about a different idea of Big Data sets: distributed knowledges. E.g., Forest Watch uses indigenous people who can’t write, but they can tell before the trained biologists when there is something wrong in the ecosystem. There’s lots of data embedded in lots of places.

[She’s aggregating questions] Q1: Marginalized neighborhoods live being surveilled: stop and frisk, background checks, etc. Why did it take tapping Angela Merkel’s telephone to bring awareness? Q2: How do you convince policy makers to incorporate citizen data? Q3: There are strong disincentives to being out of the mainstream, so how can we incentivize difference?

A: How do we get the experts to use the knowledge? For me that’s not the most important aim. More important is activating the residents. What matters is that they become part of a conversation. A: About difference: Neighborhoods are pretty average places, unlike forest watchers. And even they’re not part of the knowledge-making circuit. We should bring them in. A: The participation of the neighborhoods isn’t just a utility for the central govt but is a first step toward mobilizing people who have been reduced to thinking that they don’t count. I think it is one of the most effective ways to contest the huge apparatus with the 10,000 buildings.


[liveblog] Noam Chomsky and Bart Gellman at Engaging Data

I’m at the Engaging Data 2013 conference where Noam Chomsky and Pulitzer Prize winner (twice!) Barton Gellman are going to talk about Big Data in the Snowden Age, moderated by Ludwig Siegele of the Economist. (Gellman is one of the three people to whom Snowden entrusted his documents.) The conference aims at having us rethink how we use Big Data and how it’s used.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

LS: Prof. Chomsky, what’s your next book about?

NC: Philosophy of mind and language. I’ve been writing articles that are pretty skeptical about Big Data. [Please read the orange disclaimer: I’m paraphrasing and making errors of every sort.]

LS: You’ve said that Big Data is for people who want to do the easy stuff. But shouldn’t you be thrilled as a linguist?

NC: When I got to MIT in 1955, I was hired to work on a machine translation program. But I refused to work on it. “The only way to deal with machine translation at the current stage of understanding was by brute force, which after 30-40 years is how it’s being done.” A principled understanding based on human cognition is far off. Machine translation is useful, but you learn precisely nothing about human thought, cognition, language, or anything else from it. I use the Internet. Glad to have it. It’s easier to push some buttons on your desk than to walk across the street to use the library. But the transition from no libraries to libraries was vastly greater than the transition from libraries to the Internet. [Cool idea and great phrase! But I think I disagree. It depends.] We can find lots of data; the problem is understanding it. And a lot of the data around us goes through a filter so it doesn’t reach us. E.g., the foreign press reports that Wikileaks released a chapter about the secret TPP (Trans Pacific Partnership). It was front page news in Australia and Europe. You can learn about it on the Net but it’s not news here. The chapter was on Intellectual Property rights, which means higher prices for less access to pharmaceuticals, and rams through what SOPA tried to do, restricting use of the Net and access to data.

LS: For you Big Data is useless?

NC: Big data is very useful. If you want to find out about biology, e.g. But why no news about TPP? As Sam Huntington said, power remains strongest in the dark. [approximate] We should be aware of the long history of surveillance.

LS: Bart, as a journalist what do you make of Big Data?

BG: It’s extraordinarily valuable, especially in combination with shoe-leather, person-to-person reporting. E.g., a colleague used traditional reporting skills to get the entire data set of applicants for presidential pardons. Took a sample. More reporting. Used standard analytics techniques to find that white people are 4x more likely to get pardons, and that campaign contributors are also more likely. The same would likely apply in urban planning [which is Senseable City Lab’s remit]. But all this leads to more surveillance. E.g., I could make the case that if I had full data about everyone’s calls, I could do some significant reporting, but that wouldn’t justify it. We’ve failed to have the debate we need because of the claim of secrecy by the institutions in power. We become more transparent to the gov’t and to commercial entities while they become more opaque to us.

LS: Does the availability of Big Data and the Internet automatically mean we’ll get surveillance? Were you surprised by the Snowden revelations?

NC: I was surprised at the scale, but it’s been going on for 100 years. We need to read history. E.g., the counter-insurgency “pacification” of the Philippines by the US. See the book by McCoy [maybe this]. The operation used the most sophisticated tech at the time to get info about the population in order to control and undermine them. That tech was immediately used by the US and Britain to control their own populations, e.g., Woodrow Wilson’s Red Scare. Any system of power — the state, Google, Amazon — will use the best available tech to control, dominate, and maximize its power. And they’ll want to do it in secret. Assange, Snowden, and Manning, and Ellsberg before them, are doing the duty of citizens.

BG: I’m surprised how far you can get into this discussion without assuming bad faith on the part of the government. For the most part what’s happening is that these security institutions genuinely believe most of the time that what they’re doing is protecting us from big threats that we don’t understand. The opposition comes when they don’t want you to know what they’re doing because they’re afraid you’d call it off if you knew. Keith Alexander said that he wishes that he could bring all Americans into this huddle, but then all the bad guys would know. True, but he’s also worried that we won’t like the plays he’s calling.

LS: Bruce Schneier says that the NSA is copying what Google and Yahoo, etc. are doing. If the tech leads to snooping, what can we do about it?

NC: Govts have been doing this for a century, using the best tech they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe these are universals. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications.

BG: The capacity to rationalize may be universal, but you’ll take the conversation off track if you compare what’s happening here to the Stasi. The Stasi were blackmailing people, jailing them, preventing dissent. As a journalist I’d be very happy to find that our govt is spying on NGOs or using this power for corrupt self-enriching purposes.

NC: I completely agree with that, but that’s not the point: The same appeal is made in the most monstrous of circumstances. The freedom we’ve won sharply restricts state power to control and dominate, but they’ll do whatever they can, and they’ll use the same appeals that monstrous systems do.

LS: Aren’t we all complicit? We use the same tech. E.g., Prof. Chomsky, you’re the father of natural language processing, which is used by the NSA.

NC: We’re more complicit because we let them do it. In this country we’re very free, so we have more responsibility to try to control our govt. If we do not expose the plea of security and separate out the parts that might be valid from the vast amount that’s not valid, then we’re complicit, because we have the opportunity and the freedom.

LS: Does it bug you that the NSA uses your research?

NC: To some extent, but you can’t control that. Systems of power will use whatever is available to them. E.g., they use the Internet, much of which was developed right here at MIT by scientists who wanted to communicate freely. You can’t prevent the powers from using it for bad goals.

BG: Yes, if you use a free online service, you’re the product. But if you use a for-pay service, you’re still the product. My phone tracks me and my social network. I’m paying Verizon about $1,000/year for the service, and VZ is now collecting and selling my info. The NSA couldn’t do its job as well if the commercial entities weren’t collecting and selling personal data. The NSA has been tapping into the links between their data centers. Google is racing to fix this, but a cynical way of putting this is that Google is saying “No one gets to spy on our customers except us.”

LS: Is there a way to solve this?

BG: I have great faith that transparency will enable the development of good policy. The more we know, the more we can design policies to keep power in its place. Before this, you couldn’t shop for privacy. Now a free market for privacy is developing as the providers are telling us more about what they’re doing. Transparency allows legislation and regulation to be debated. The House Republicans came within 8 votes of prohibiting call-data collection, which would have been unthinkable before Snowden. And there’s hope in the judiciary.

NC: We can do much more than transparency. We can make use of the available info to prevent surveillance. E.g., we can demand the defeat of TPP. And now hardware in computers is being designed to detect your every keystroke, leading some Americans to be wary of Chinese-made computers, but the US manufacturers are probably doing it better. And manufacturers for years have been trying to design fly-sized drones to collect info; that’ll be around soon. Drones are a perfect device for terrorists. We can learn about this and do something about it. We don’t have to wait until it’s exposed by Wikileaks. It’s right there in mainstream journals.

LS: Are you calling for a political movement?

NC: Yes. We’re going to need mass action.

BG: A few months ago I noticed a small gray box with an EPA logo on it outside my apartment in NYC. It monitors energy usage, useful to preventing brown outs. But it measures down to the apartment level, which could be useful to the police trying to establish your personal patterns. There’s no legislation or judicial review of the use of this data. We can’t turn back the clock. We can try to draw boundaries, and then have sufficient openness so that we can tell if they’ve crossed those boundaries.

LS: Bart, how do you manage the flow of info from Snowden?

BG: Snowden does not manage the release of the data. He gave it to three journalists and asked us to use our best judgment — he asked us to correct for his bias about what the most important stories are — and to avoid direct damage to security. The documents are difficult. They’re often incomplete and can be hard to interpret.


Q: What would be a first step in forming a popular movement?

NC: Same as always. E.g., the women’s movement began in the 1960s (at least in the modern movement) with consciousness-raising groups.

Q: Where do we draw the line between transparency and privacy, given that we have real enemies?

BG: First you have to acknowledge that there is a line. There are dangerous people who want to do dangerous things, and some of these tools are helpful in preventing that. I’ve been looking for stories that elucidate big policy decisions without giving away specifics that would harm legitimate action.

Q: Have you changed the tools you use?

BG: Yes. I keep notes encrypted. I’ve learned to use the tools for anonymous communication. But I can’t go off the grid and be a journalist, so I’ve accepted certain trade-offs. I’m working much less efficiently than I used to. E.g., I sometimes use computers that have never touched the Net.

Q: In the women’s movement, at least 50% of the population stood to benefit. But probably a large majority of today’s population would exchange their freedom for convenience.

NC: The trade-off is presented as being for security. But if you read the documents, the security issue is how to keep the govt secure from its citizens. E.g., Ellsberg kept a volume of the Pentagon Papers secret to avoid affecting the Vietnam negotiations, although I thought the volume really only would have embarrassed the govt. Security is in fact not a high priority for govts. The US govt is now involved in the greatest global terrorist campaign that has ever been carried out: the drone campaign. Large regions of the world are now being terrorized. If you don’t know if the guy across the street is about to be blown away, along with everyone around, you’re terrorized. Every time you kill an Al Qaeda terrorist, you create 40 more. It’s just not a concern to the govt. In 1950, the US had incomparable security; there was only one potential threat: the creation of ICBM’s with nuclear warheads. We could have entered into a treaty with Russia to ban them. See McGeorge Bundy’s history. It says that he was unable to find a single paper, even a draft, suggesting that we do something to try to ban this threat of total instantaneous destruction. E.g., Reagan tested Russian nuclear defenses that could have led to horrible consequences. Those are the real security threats. And it’s true not just of the United States.

1 Comment »

[2b2k] Big Data and the Commons

I’m at the Engaging Big Data 2013 conference put on by Senseable City Lab at MIT. After the morning’s opener by Noam Chomsky (!), I’m leading one of 12 concurrent sessions. I’m supposed to talk for 15-20 mins and then lead a discussion. Here’s a summary of what I’m planning on saying:

Overall point: To look at the end state of the knowledge network/Commons we want to get to

Big Data started as an Info Age concept: magnify the storage and put it on a network. But you can see how the Net is affecting it:

First, there are a set of values that are being transformed:
– From accuracy to scale
– From control to innovation
– From ownership to collaboration
– From order to meaning

Second, the Net is transforming knowledge, which is changing the role of Big Data
– From filtered to scaled
– From settled to unsettled and under discussion
– From orderly to messy
– From done in private to done in public
– From a set of stopping points to endless links

If that’s roughly the case, then we can see a larger Net effect. The old Info Age hope (naive, yes, but it still shows up at times) was that we’d be able to create models that ultimately interoperate and provide an ever-increasing and ever-more detailed integrated model of the world. But in the new Commons, we recognize that not only won’t we ever derive a single model, but that there is tremendous strength in the diversity of models. This Commons then is enabled if:

  • All have access to all
  • There can be social engagement to further enrich our understanding
  • The conversations default to public

So, what can we do to get there? Maybe:

  • Build platforms and services
  • Support Open Access (and, as Lewis Hyde says, “beat the bounds” of the Commons regularly)
  • Support Linked Open Data

Questions if the discussion needs kickstarting:

  • What Big Data policies would help the Commons to flourish?
  • How can we improve the diversity of those who access and contribute to the Commons?
  • What are the personal and institutional hesitations that are hindering the further development of the Commons?
  • What role can and should Big Data play in knowledge-focused discussions? With participants who are not mathematically or statistically inclined?
  • Does anyone have experience with Linked Data? Tell us about it.


I just checked the agenda, which of course I should have done earlier, and discovered that of the 12 sessions today, 11 are being led by men. Had I done that homework, I would not have accepted their invitation.


November 14, 2013

[2b2k] No more magic knowledge

I gave a talk at the EdTechTeacher iPad Summit this morning, and felt compelled to throw in an Angry Old Man slide about why iPads annoy me, especially as education devices. Here’s my List of Grievances:

  • Apple censors apps

  • iPads are designed for consumers. [This is false for these educators, however. They are using iPad apps to enable creativity.]

  • They are closed systems and thus lock users in

  • Apps generally don’t link out

That last point was the one that meant the most in the context of the talk, since I was stressing the social obligation we all have to add to the Commons of ideas, data, knowledge, arguments, discussion, etc.

I was sorry I brought the whole thing up, though. None of the points I raised is new, and this particular audience is using iPads in creative ways, to engage students, to let them explore in depth, to create, and to make learning mobile.

Nevertheless, as I was talking, I threw in one more: you can’t View Source the way you can in a browser. That is, browsers let you see the code beneath the surface. This capability means you can learn how to re-create what you like on pages you visit…although that’s true only to some extent these days. Nevertheless, the HTML code is right there for you. But not with apps.

Even though very few of us ever do peek beneath the hood — why would we? — the fact that we know there’s an openable hood changes things. It tells us that what we see on screen, no matter how slick, is the product of human hands. And that is the first lesson I’d like students to learn about knowledge: it often looks like something that’s handed to us finished and perfect, but it’s always something that we built together. And it’s all the cooler because of that.

There is no magic, just us humans as we move through history trying to make every mistake possible.


November 13, 2013

Protecting library privacy with a hard opt-in

Marshall Breeding gave a talk today to the Harvard Library system as part of its Discoverability Day. Marshall is an expert in discovery systems, i.e., technology that enables library users to find what they need and what they didn’t know they needed, across every medium and metadata boundary.

It’s a stupendously difficult problem, not least because the various providers of the metadata about non-catalog items — journal articles, etc. — don’t cooperate. On top of that, there’s a demand for “single searchbox solutions,” so that you can not only search everything the Googley way, but the results that come back will magically sort themselves in the order of what’s most useful to you. To bring us closer to that result, Marshall said that systems are beginning to use personal profiles and usage data. The personal profile lets the search engine know that you’re an astronomer, so that when you search for “mercury” you’re probably not looking for information about the chemical, the outboard motor company, or Queen. The usage data will let the engine sort based on what your community has voted on with its checkouts, recommendations, etc.

Marshall was careful to stipulate that using profiles or usage data will require user consent. I’m very interested in this because the Library Innovation Lab where I work has created an online library browser — StackLife — that sorts results based on a variety of measures of Harvard community usage. StackLife computes a “stackscore” based on a simple calculation of the number of checkouts by faculty, grad students or undergrads, how many copies are in Harvard’s 73 libraries, and potentially other metrics such as how often it’s put on reserve or called back early. The stackscores are based on 10-year aggregates without any personal identifiers, and with no knowledge of which books were checked out together. And our Awesome Box project, now in more than 40 libraries, provides a returns box into which users can deposit books that they thought were “awesome,” generating particularly delicious user-based (but completely anonymized) data.

Marshall is right: usage data is insanely useful for a community, and I’d love for us to be able to get our hands on more of it. But, I got into a Twitter discussion about the danger of re-identification with Mark Ockerbloom [twitter:jmarkockerbloom] and John Wilbanks [twitter:wilbanks], two people I greatly respect, and I agree that a simple opt-in isn’t enough, because people may not fully recognize the possibility that their info may be made public. So, I had an idea.

Suppose you are not allowed to do a “soft” opt-in, by which I mean an opt-in that requires you to read some terms and tick a box that permits the sharing of information about what you check out from the library. Instead, you would be clearly told that you are opting in to publishing your check-outs. Not to letting your checkouts be made public if someone figures out how to get them, or even to making your checkouts public to anyone who asks for them. No, you’d be agreeing to having a public page with your name on it that lists your checkouts. This is a service a lot of people want anyway, but the point would be to make it completely clear to you that ticking the checkbox means that, yes, your checkouts are so visible that they get their own page. And if you want to agree to the “soft” opt-in but don’t want that public page posted, you can’t.

Presumably the library checkout system would allow you to exempt particular checkouts, but by default they all get posted. That would, I think, drive home what the legal language expressed in the “soft” version really entails.


Here are a couple of articles by Marshall Breeding: 1. Infotoday 2. Digital Shift

1 Comment »

November 9, 2013

Aaron Swartz and the future of libraries

I was unable to go to our local Aaron Swartz Hackathon, one of twenty around the world, because I’d committed (very happily) to give the after dinner talk at the University of Rhode Island Graduate Library and Information Studies 50th anniversary gala last night.

The event brought together an amazing set of people, including Senator Jack Reed, the current and most recent presidents of the American Library Association, Joan Ress Reeves, 50 particularly distinguished alumni (out of the three thousand (!) who have been graduated), and many, many more. These are heroes of libraries. (My cousin’s daughter, Alison Courchesne, also got an award. Yay, Alison!)

Although I’d worked hard on my talk, I decided to open it differently. I won’t try to reproduce what I actually said because the adrenalin of speaking in front of a crowd, especially one as awesome as last night’s, wipes out whatever short term memory remains. But it went very roughly something like this:

It’s awesome to be in a room with teachers, professors, researchers, a provost, deans, and librarians: people who work to make the world better…not to mention the three thousand alumni who are too busy do-ing to be able to be here tonight.

But it makes me remember another do-er: Aaron Swartz, the champion of open access, open data, open metadata, open government, open everything. Maybe I’m thinking about Aaron tonight because today is his birthday.

When we talk about the future of libraries, I usually promote the idea of libraries as platforms — platforms that make openly available everything that libraries know: all the data, all the metadata, what the community is making of what they get from the library (privacy accommodated, of course), all the guidance and wisdom of librarians, and all the content, especially if we can ever fix the insane copyright laws. Everything. All accessible to anyone who wants to write an application that puts it to use.

And the reason is that in my heart I don’t think librarians are going to invent the future of libraries. It’s too big a job for any one group. It will take the world to invent the future of libraries. It will take 14-year-olds like Aaron to invent the future of libraries. We need to supply them with platforms that enable them.

I should add that I co-direct a Library Innovation Lab where we do work that I’m very proud of. So, of course libraries will participate in the invention of their future. But it’ll take the world — a world that contains people with the brilliance and commitment of an Aaron Swartz — to invent that future fully.


Here are wise words delivered at an Aaron Hackathon last night by Carl Malamud: Hacking Authority. For me, Carl is reminding us that the concept of hacking over-promises when the changes threaten large institutions that represent long-held values and assumptions. Change often requires the persistence and patience that Aaron exhibited, even as he hacked.


Next Page »