Joho the Blog » liveblog

April 9, 2014

[shorenstein] Andy Revkin on communicating climate science

I’m at a talk by Andrew Revkin of the NY Times’ Dot Earth blog at the Shorenstein Center. [Alex Jones mentions in his introduction that Andy is a singer-songwriter who played with Pete Seeger. Awesome!]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Andy says he’s been a science reporter for 31 years. His first magazine article was about the dangers of the anti-pot herbicide paraquat. (The article won an award for investigative journalism). It had all the elements — bad guys, victims, drama — typical of “Woe is me. Shame on you” environmental reporting. His story on global warming in 1988 has “virtually the same cast of characters” that you see in today’s coverage. “And public attitudes are about the same…Essentially the landscape hasn’t changed.” Over time, however, he has learned how complex climate science is.

In 2010, his blog moved from NYT’s reporting to editorial, so now he is freer to express his opinions. He wants to talk with us today about the sort of “media conversation” that occurs now, but didn’t when he started as a journalist. We now have a cloud of people who follow a journalist, ready to correct them. “You can say this is terrible. It’s hard to separate noise from signal. And that’s correct.” “It can be noisy, but it’s better than the old model, because the old model wasn’t always right.” Andy points to the NYT coverage on the build up to the invasion of Iraq. But this also means that now readers have to do a lot of the work themselves.

He left the NYT in his mid-fifties because he saw that access to info more often than not doesn’t change you, but instead reinforces your positions. So at Pace U he studies how and why people understand ecological issues. “What is it about us that makes us neglect long-term imperatives?” This works better in a blog in a conversation drawing upon other people’s expertise than an article. “I’m a shitty columnist,” he says. People read columns to reinforce their beliefs, although maybe you’ll read George Will to refresh your animus :) “This makes me not a great spokesperson for a position.” Most positions are one-sided, whereas Andy is interested in the processes by which we come to our understanding.

Q: [alex jones] People seem stupider about the environment than they were 20 years ago. They’re more confused.

A: In 1991 there was a survey of museum goers who thought that global warming was about the ozone hole, not about greenhouse gases. A 2009 study showed that on a scale of 1-6 of alarm, most Americans were at 5 (“concerned,” not yet “alarmed”). Yet, Andy points out, the Cap and Trade bill failed. Likewise, the vast majority support rebates on solar panels and fuel-efficient vehicles. They support requiring 45 mpg fuel efficiency across vehicle fleets, even at a $1K price premium. He also points to some Gallup data that showed that more than half of the respondents worry a great deal or a fair amount, but that number hasn’t changed since Gallup began asking the question in 1989. [link] Furthermore, global warming doesn’t show up as one of the issues they worry about.

The people we need to motivate are innovators. We’ll have 9B on the planet soon, and 2B who can’t make reasonable energy choices.

Q: Are we heading toward a climate tipping point?

A: There isn’t evidence that tipping points in climate are real, and if they are, we can’t really predict them. [link]

Q: The permafrost isn’t going to melt?

A: No, it is melting. But we don’t know if it will be catastrophic.

Andy points to a photo of despair at a climate conference. But then there’s Scott H. DeLisi who represents a shift in how we relate to communities: Facebook, Twitter, Google Hangouts. Inside Climate News won the Pulitzer last year. “That says there are new models that may work. Can they sustain their funding?” Andy’s not sure.

“Journalism is a shrinking wedge of a growing pie of ways to tell stories.”

“Escape from the Nerd Loop”: people talking to one another about how to communicate science issues. Andy loves Twitter. The hashtag is as big an invention as photovoltaics, he says. He references Chris Messina, its inventor, and points to how useful it is for separating and gathering strands of information, including at NASA’s Asteroid Watch. Andy also points to descriptions by a climate scientist who went to the Arctic [or Antarctic?] that he curated, and to a singing scientist.

Q: I’m a communications student. There was a guy named Marshall McLuhan, maybe you haven’t heard of him. Is the medium the message?

A: There are different tools for different jobs. I could tell you the volume of the atmosphere, but Adam Nieman, a science illustrator, used this way to show it to you.

Q: Why is it so hard to get out of catastrophism and into thinking about solutions?

A: Journalism usually focuses on the downside. If there’s no “Woe is me” element, it tends not to make it onto the front page. At Pace U. we travel each spring and do a film about a sustainable resource farming question. The first was on shrimp-farming in Belize. It’s got thousands of views but it’s not on the nightly news. How do we shift our norms in the media?

[david ropiek] Inherent human psychology: we pay more attention to risks. People who want to move the public dial inherently are attracted to the more attention-getting headlines, like “You’re going to die.”

A: Yes. And polls show that what people say about global warming depends on the weather outside that day.

Q: A report recently drew the connection between climate change and other big problems facing us: poverty, war, etc. What did you think of it?

A: It was good. But is it going to change things? The Extremes report likewise. The city that was most affected by the recent typhoon had tripled its population, mainly with poor people. Andy values Jesse Ausubel, who says that most politics is people pulling on disconnected levers.

Q: Any reflections on the disconnect between breezy IPCC executive summaries and the depth of the actual scientific report?

A: There have been demands for IPCC to write clearer summaries. Its charter has it focused on the down sides.

Q: How can we use open data and community tools to make better decisions about climate change? Will the data Obama opened up last month help?

A: The forces of stasis can congregate on that data and raise questions about it based on tiny inconsistencies. So I’m not sure it will change things. But I’m all for transparency. It’s an incredibly powerful tool, like when the US Embassy was doing its own Twitter feed on Beijing air quality. We have this wonderful potential now; Greenpeace (who Andy often criticizes) did on-the-ground truthing about companies deforesting orangutan habitats in Indonesia. Then they did a great campaign to show who’s using the palm oil: buying a KitKat bar contributes to the deforesting of Borneo. You can do this ground-truthing now.

Q: In the past 6 months there seems to have been a jump in climate change coverage. No?

A: I don’t think there’s more coverage.

Q: India and Pakistan couldn’t agree on water control in part because the politicians talked about scarcity while the people talked in terms of their traditional animosities. How can we find the right vocabularies?

A: If the conversation is about reducing vulnerabilities and energy efficiency, you can get more consensus than talking about global warming.

Q: How about using data visualizations instead of words?

A: I love visualizations. They spill out from journalism. How much it matters is another question. Ezra Klein just did a piece that says that information doesn’t matter.

Q: Can we talk about your “Years of Living Dangerously” piece? [Couldn't hear the rest of the question].

A: My blog is edited by the op-ed desk, and I don’t always understand their decisions. Journalism migrates toward controversy. The Times has a feature “Room for Debate,” and I keep proposing “Room for Agreement” [link], where you’d see what people who disagree about an issue can agree on.

Q: [me] Should we still be engaging with deniers? With whom should we be talking?

A: Yes, we should engage. We taxpayers subsidize second mortgages on houses in wildfire zones in Colorado. Why? So firefighters have to put themselves at risk? [link] That’s an issue that people agree on across the spectrum. When it comes to deniers, we have to ask: what exactly are you denying? Particular data? Scientific method? Physics? I’ve come to the conclusion that even if we had perfect information, we still wouldn’t galvanize the action we need.

[Andy ends by singing a song about liberated carbon. That's not something you see every day at the Shorenstein Center.]

[UPDATE (the next day): I added some more links.]


December 3, 2013

[berkman] Jérôme Hergeux on the motives of Wikipedians

Jérôme Hergeux is giving a Berkman lunch talk on “Cooperation in a peer production economy: experimental evidence from Wikipedia.” He lists as co-authors: Yann Algan, Yochai Benkler, and Mayo Fuster-Morell.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jérôme explains the broader research agenda behind the paper. People are collaborating on the Web, sometimes on projects that compete with or replace major products from proprietary businesses and institutions. Standard economic theory doesn’t have a good way of making sense of this with its usual assumptions of behavior guided by perfect rationality and self-interest. Instead, Jérôme will look at Wikipedia where people are not paid and their contributions have no signaling value on the labor market. (Jérôme quotes Kizor: “The problem with Wikipedia is that it only works in practice. In theory it can never work.”)

Instead we should think of contributing to Wikipedia as a Public Goods dilemma: contributing has a personal cost and not enough countervailing personal benefit, but it has a social benefit higher than the individual cost. The literature has mainly focused on the “prosocial preferences” that lead people to take the actions/interests of others into account, which leads them to overcome the Public Goods dilemma.

There are three classes of models commonly used by economists to explain prosocial behavior:

First, the altruism motive. Second, reciprocity: you respond in kind to kind actions of others. Third, “social image”: contributing to the public good signals something that brings you other utility. (He cites Napoleon: “Give me enough medals and I will win you any war.”)

His research’s method: Elicit the social prefs of a representative sample of Wikipedia contributors via an online experiment, and use those preferences to predict subjects’ field contributions to the Wikipedia project.

To check the reciprocity motive, they ran a simple public goods game. Four people in a group. Each has $10. Each has to decide how much to invest in a public project. You get some money back, but the group gets more. You can condition your contribution on the contributions of the other group members. This enables the researchers to measure how much the reciprocity motive matters to you. [I know I’m not getting this right. Hard to keep up. Sorry.] They also used a standard online trust game: You get some money from a partner, and can respond in kind.
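[To make the payoff structure concrete, here’s a minimal sketch of a standard linear public goods game of this kind. The $10 endowment and four-person group come from the talk; the 0.4 marginal per-capita return is an assumed parameter for illustration, not a figure from the actual experiment.]

```python
# Minimal sketch of a standard linear public goods game (endowment and group
# size from the talk; the marginal per-capita return of 0.4 is an assumption).

ENDOWMENT = 10
GROUP_SIZE = 4
MPCR = 0.4  # assumed: each $1 invested returns $0.40 to every group member

def payoffs(contributions):
    """Return each player's payoff given everyone's contribution to the public project."""
    assert len(contributions) == GROUP_SIZE
    pot_return = MPCR * sum(contributions)          # every member receives this
    return [ENDOWMENT - c + pot_return for c in contributions]

# Contributing costs the individual (you keep $0.60 less per $1 invested)
# but benefits the group (the four members together gain $1.60 per $1 invested).
print(payoffs([10, 10, 10, 10]))  # [16.0, 16.0, 16.0, 16.0] -- full cooperation beats...
print(payoffs([0, 0, 0, 0]))      # [10.0, 10.0, 10.0, 10.0] -- ...universal free-riding,
print(payoffs([0, 10, 10, 10]))   # [22.0, 12.0, 12.0, 12.0] -- but free-riding pays individually
```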

Q: Do these tests correlate with real world behavior?

A: That’s the point of this paper. This is the first comprehensive test of all three motives.

For studying altruism, the dictator game is the standard. The dictator can give as much as s/he wants to the other person. The dictator has no reason to transfer the money. This thus measures altruism. But people might contribute to Wikipedia out of altruism just to their own Wikipedia in-group, not general altruism (“directed altruism”). So they ran another game to measure in-group altruism.

Social image is hard to measure experimentally, so they relied on observational data. “Consider as ‘social signalers’ subjects who have a Wikipedia user page whose size is bigger than the median in the sample.” You can be a quite engaged contributor to Wikipedia and not have a personal user page. But a bigger page means more concern with social image. Second, they looked at Barnstars data. Barnstars are a “social rewarding practice” that’s mainly restricted to heavy contributors: contribute well to a Wikipedia article and you might be given a barnstar. These show up on Talk pages. About half of the people move them to their user page where they are more visible. If you move one of those awards manually to your user page, Jérôme will count you as a social signaler, i.e., someone who cares about his/her image.
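[A rough sketch of how that classification might be operationalized. The field names and data layout are hypothetical; only the two rules — user-page length above the sample median, or a barnstar manually moved to the user page — come from the talk.]

```python
# Hypothetical sketch of the "social signaler" rule described above.
from statistics import median

def label_social_signalers(subjects):
    """Mark each subject dict with a boolean 'social_signaler' flag."""
    med_page_size = median(s["user_page_size"] for s in subjects)
    for s in subjects:
        s["social_signaler"] = (
            s["user_page_size"] > med_page_size
            or s.get("moved_barnstar_to_user_page", False)
        )
    return subjects
```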

He talks about some of the practical issues they faced in doing this experiment online. They illustrated the working of each game by using some simple Flash animations. And they provided calculators so you could see the effect of your decisions before you make them.

The subject pool came from registered Wikipedia users, and looked at the number of edits each user has made. (The number of contributions at Wikipedia follows a strong power law distribution.) 200,000 people register a Wikipedia account each month (2011), but only 2% make ten contributions in their first month, and only 10% make one contribution or more within the next year. So, they recruited the cohort of new Wikipedia contributors (190,000 subjects), the group of engaged Wikipedia contributors (at least 300 edits) (18,989 subjects), and Wikipedia administrators (1,388 subjects). To recruit people, they teamed up with the Wikimedia Foundation to put a banner up on a Wikipedia page if the user met the criteria as a subject. The banner asked the reader to help with research. If readers click through, they go to the experiment page, where they are paid in real money if they complete the 25-minute experiment within eight hours.

The demographics of the experiment’s subjects (1,099) matched quite closely the overall demographics of those subject pools. (The pool had 9% women, and the experiment had 8%).

Jérôme shows the regression tables and explains them. Holding the demographics steady, what is the relation between the three motives and the number of contributions? For the altruistic motive, there is no predictive power. Reciprocity in both games (public goods and trust) is a highly significant predictor. This tells us that reciprocal preference can lead you from being a non-contributor to being an engaged contributor; once you’re an engaged contributor, it doesn’t predict how far you’re going to go. Social image is correlated with the number of contributions; 81% of people who have received barnstars are super-contributors. Being a social signaler is associated with a 130% rise in the number of contributions you make. By both user-page length and barnstars, social image motivates more contributions even among super-contributors.
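[For readers who want to picture what “holding the demographics steady” looks like, here’s a sketch of the sort of regression being described. The variable names, data file, and functional form (OLS on log edit counts) are assumptions for illustration, not the paper’s actual specification.]

```python
# Hypothetical sketch: regress contributions on the measured motives,
# controlling for demographics. Column names and file are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("wikipedia_subjects.csv")  # hypothetical: one row per experiment subject

model = smf.ols(
    "log_edits ~ altruism + reciprocity_pg + reciprocity_trust + social_signaler"
    " + age + C(gender) + C(education)",
    data=df,
).fit()
print(model.summary())  # coefficients on the motives, demographics held constant
```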

Reciprocity incentivizes contributions only for those who are not concerned about their social image. So, reciprocity and social image are both at play among the contributors, but among separate groups. I.e., if you’re motivated by reciprocity, you are likely not motivated by social image, and vice versa.

Now Jérôme focuses on Wikipedia administrators. Altruism has no predictive value. But Wikipedia participation is negatively associated with reciprocity; perhaps this is because admins have to have thick skins to deal with disruptive users. For social image, the user page has significant relevance for admins, but not barnstars. Social image is less strong among admins than among other contributors.

Jérôme now explores his “thick skin hypothesis” to explain the admin results. In the trust game, look at how much the trustor decides to give to the stranger/partner. Jérôme’s hypothesis: Among admins, those who decide to perform more of their policing role will be less trusting of strangers. There’s a negative correlation among admins between the results from the trust game and their contributions. The more time they say they spend on admin edits, the less trusting they are of strangers in the tests. That sort of makes sense, says Jérôme. These admins are doing a valuable job for which they have self-selected, but it requires dealing with irritating people.

Q&A

Q: Maybe an admin is above others and is thus not being reciprocated by the group.

A: Perfectly reasonable explanation, and it is not ruled out by the data.

Q: Did you come into this with an idea of what might motivate the Wikipedians?

A: These are the three theories that are prevalent. We wanted to see how well they map onto actual field behavior.

Q: Maybe the causation goes the other way: working in Wikipedia is making people more concerned about social image or reciprocity?

A: The correlations could go in either direction. But we want to know if those explanations actually match what people do in the field.

Q: Heather Ford looks at why articles are deleted for non-Western topics. She found the notability criteria change for people not close to the topics. Maybe the motives change depending on how close you are to the event.

A: Sounds fascinating.

Q: Admins have an inherent bias in that they focus on the small percentage of contributors who are annoying jerks. If you spend your time working with jerks, it affects your sense of trust.

A: Good point. I don’t have the data to answer it.

Q: [me] If I’m a journalist I’m likely to take away the wrong conclusions from this talk, so I want to make sure I’m understanding. For example, I might conclude that Wikipedia admins are not motivated by altruism, whereas the right conclusion is (isn’t it?) that the standard altruism test doesn’t really measure altruism. Why not ask for self-reports to see?

A: Economists are skeptical about self-reports. If the reciprocity game predicts a correlation, that’s significant.

Yochai Benkler: Altruism has a special meaning among economists. It refers to any motivation other than “What’s in it for me?” [Because I asked the question, I didn’t do a good job recording the answers. Sorry.]

Q: Aren’t admins control freaks?

A: I wouldn’t say that. But control is not a pro-social motive, and I wanted to start with the theories that are current.

Q: You use the number of words someone writes on a user page as a sign of caring about social image, but this is in a context where people are there to write. And you’re correlating that to how much they write as editors and contributors. Maybe people at Wikipedia like to write. And maybe they write in those two different places for different reasons. Also, what do you do with these findings? Economists like to figure out which levers we pull if we’re not getting enough contributors.

Q: This sort of data seems to work well for large platforms with lots of users. What’s the scope of the methods you’re using? Only the top 100 web sites in the world?

A: I’d like to run this on all the peer production platforms in the world. Wikipedia is unusual if only because it’s been so successful. We’re already working on another project with 1,000 contributors at SourceForge especially to look at the effects of money, since about half of Open Source contributions are for money.


Fascinating talk. But it makes me want to be very dumb about it, because, well, I have no choice. So, here goes.

We can take this research as telling us something about Wikipedians’ motivations, about whether economists have picked the right three prosocial motivations, or about whether the standard tests of those motivations actually correlate to real-world motivations. I thought the point had to do with the last two alternatives and not so much the first. But I may have gotten it wrong.

So, suppose that instead of talking about altruism, reciprocity, and social image we talk about the correlation between the six tests the researchers used and Wikipedia contributions. We would then have learned that Test #1 is a good predictor of the contribution levels of beginner Wikipedians, Test #2 predicts contributions by admins, Test #3 has a negative correlation with contributions by engaged Wikipedians, etc. But that would be of no interest, since we have (ex hypothesi) not made any assumptions about what the tests are testing for. Rather, the correlation would be a provocation to more research: why the heck does playing one of these odd little games correlate to Wikipedian productivity? It’d be like finding out that Wikipedian productivity is correlated to being a middle child or to wearing rings on both hands. How fascinating!… because these correlations have no implied explanatory power.

Now let’s plug back in the English terms that indicate some form of motivation. So now we can say that Test #3 shows that scoring high in altruism (in the game) does not correlate with being a Wikipedia admin. From this we can either conclude that Wikipedia admins are not motivated by altruism, or that the game fails to predict the existing altruism among Wikipedia admins. Is there anything else we can conclude without doing some independent study of what motivates Wikipedia admins? Because it flies in the face of both common sense and my own experience of Wikipedia admins; I’m pretty convinced one reason they work so hard is so everyone can have a free, reliable, neutral encyclopedia. So my strong inclination – admittedly based on anecdote and “common sense” (= “I believe what I believe!”) – is to conclude that any behavioral test that misses altruism as a component of the motivation of someone who spends thousands of hours working for free on an open encyclopedia…well, there’s something hinky about that behavioral test.

Even if the altruism tests correlate well with people engaged in activities we unproblematically associate with altruism – volunteering in a soup kitchen, giving away much of one’s income – I’d still not conclude from the lack of correlation with Wikipedia admins that those admins are not motivated by altruism, among other motivations. It just doesn’t correlate with the sort of altruism the game tests for. Just ask those admins if they’d put in the same amount of time creating a commercial encyclopedia.

So, I come out of Jérôme’s truly fascinating talk feeling like I’ve learned more about the reliability of the tests than about the motivations of Wikipedians. Based on Jérôme’s and Yochai’s responses, I think that’s what I’m supposed to have learned, but the paper also seems to be putting forward interesting conclusions (e.g., admins are not trusting types) that rely upon the tests not just correlating with the quantity of edits, but also being reliable measures of altruism, self-image, and reciprocity as motives. I assume (and thus may be wrong) that’s why Jérôme offered an hypothesis to explain the lack-of-trust result, rather than discounting the finding that admins lack trust (to oversimplify it).

(Two concluding comments: 1. Yochai’s The Penguin and the Leviathan uses behavioral tests like these, as well as case studies and observation, to make the case that we are a cooperative species. Excellent, enjoyable book. (Here’s a podcast interview I did with him about it.) 2. I’m truly sorry to be this ignorant.)


November 20, 2013

[liveblog][2b2k] David Eagleman on the brain as networks

I’m at re comm 13, an odd conference in Kitzbühel, Austria: 2.5 days of talks to 140 real estate executives, but the talks are about anything except real estate. David Eagleman, a neuroscientist at Baylor and a well-known author, is giving a talk. (Last night we had one of those compressed conversations that I can’t wait to be able to continue.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

How do we know your thinking is in your brain? If you damage your finger, you don’t change, but damage to your brain can change basic facets of your life. “The brain is the densest representation of who you are.” We’re the only species trying to figure out our own programming language. We’ve discovered the most complicated device in the universe: our own brains. Ten billion neurons. Every single neuron contains the entire human genome and thousands of proteins doing complicated computations. Each neuron is connected to tens of thousands of its neighbors, meaning there are hundreds of trillions of connections. These numbers “bankrupt the language.”

Almost all of the operations of the brain are happening at a level invisible to us. Taking a drink of water requires a “lightning storm” of activity at the neural level. This leads us to a concept of the unconscious. The conscious part of you is the smallest bit of what’s happening in the brain. It’s like a stowaway on a transatlantic journey that’s taking credit for the entire trip. When you think of something, your brain’s been working on it for hours or days. “It wasn’t really you that thought of it.”

About the unconscious: Psychologists gave photos of women to men and asked them to evaluate how attractive they are. Some of the photos were the same women, but with dilated eyes. The men rated them as being more attractive but none of them noticed the dilation. Dilated eyes are a sign of sexual readiness in women. Men made their choices with no idea of why.

More examples: In the US, if your name is Dennis or Denise, you’re more likely to become a dentist. These dentists have a conscious narrative about why they became dentists that misses the trick their brain has played on them. Likewise, people are statistically more likely to marry someone whose first name begins with the same first letter as theirs. And if you are holding a warm mug of coffee, you’ll describe the relationship with your mother as warmer than if you’re holding an iced cup. There is an enormous gap between what you’re doing and what your conscious mind is doing.

“We should be thankful for that gap.” There’s so much going on under the hood, that we need to be shielded from the details. The conscious mind gets in trouble when it starts paying attention to what it’s doing. E.g., try signing your name with both hands in opposite directions simultaneously: it’s easy until you think about it. Likewise, if you now think about how you steer when making a lane change, you’re likely to enact it wrong. (You actually turn left and then turn right to an equal measure.)

Know thyself, sure. But neuroscience teaches us that you are many things. The brain is not a computer with a single output. It has many networks that are always competing. The brain is like a parliament that debates an action. When deciding between two sodas, one network might care about the price, another about the experience, another about the social aspect (cool or lame), etc. They battle. David looks at three of those networks:

1. How does the brain make decisions about valuation? E.g., people will walk 10 mins to save 10 € on a 20 € pen but not on a 557 € suit. Also, we have trouble making comparisons of worth among disparate items unless they are in a shared context. E.g., Williams Sonoma had a bread baking machine for $275 that did not sell. Once they added a second one for $370, it started selling. In real estate, if a customer is trying to decide between two homes, one modern and one traditional, if you want them to buy the modern one, show them another modern one. That gives them the context by which they can decide to buy it.

Everything is associated with everything else in the brain. (It’s an associative network.) Coffee used to be $0.50. When Starbucks started, they had to unanchor it from the old model so they made the coffee houses arty and renamed the sizes. Having lost the context for comparison, the price of Starbucks coffee began to seem reasonable.

2. Emotional experience is a big part of decision making. If you’re in a bad-smelling room, you’ll make harsher moral decisions. The trolley dilemma: 5 people have been tied to the tracks. A trolley is approaching rapidly. You can switch the trolley to a track with only one person tied to it. Everyone would switch the trolley. But now instead, you can push a fat man onto the tracks to stop the trolley. Few would. In the second scenario, touching someone engages the emotional system. The first scenario is just a math problem. The logic and emotional systems are always fighting it out. The Greeks viewed the self as someone steering a chariot drawn by the white horse of reason and the black horse of passion. [From Plato's Phaedrus]

3. A lot of the machinery of the brain deals with other brains. We use the same circuitry to think about people and corporations. When a company betrays us, our brain responds the way it would if a friend betrayed us. Traditional economics says customer interactions are short-term, but the brain takes a much longer-range view. Breaches of trust travel fast. (David plays “United Breaks Guitars.”) Smart companies use social media that make you believe that the company is your friend.

The battle among these three networks drives decisions. “Know thyselves.”

This is unsettling. The self is not at the center. It’s like when Galileo repositioned us in the universe. This seemed like a dethroning of man. The upside is that we’ve discovered the Cosmos is much bigger, more subtle, and more magnificent than we thought. As we sail into the inner cosmos of the brain, we find the brain is much more subtle and magnificent than we ever considered.

“We’ve found the most wondrous thing in the universe, and it’s us.”

Q: Won’t this let us be manipulated?

A: Neural science is just catching up with what advertisers have known for 100 years.

Q: What about free will?

A: My labs and others have done experiments, and there’s no single experiment in neuroscience that proves that we do or do not have free will. But if we have free will, it’s a very small player in the system. We have genetics and experiences, and they make brains very different from one another. I argue for a legal system that recognizes a difference between people who may have committed the same crime. There are many different types of brains.


November 15, 2013

[liveblog][2b2k] Saskia Sassen

The sociologist Saskia Sassen is giving a plenary talk at Engaging Data 2013. [I had a little trouble hearing some of it. Sorry. And in the press of time I haven't had a chance to vet this for even obvious typos, etc.]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

1. The term Big Data is ambiguous. “Big Data” implies we’re in a technical zone. It becomes a “technical problem,” as when morally challenging technologies are developed by scientists who think they are just dealing with a technical issue. Big Data comes with a neutral charge. “Surveillance” brings in the state, the logics of power, how citizens are affected.

Until recently, citizens could not relate to a map that came out in 2010 that shows how much surveillance there is in the US. It was published by the Washington Post, but it didn’t register. 1,271 govt orgs and 1,931 private companies work on programs related to counterterrorism, homeland security and intelligence. There are more than 1 million people with top-secret clearance, and maybe a third are private contractors. In DC and environs, 33 building complexes are under construction or have been built for top-secret intelligence since 9/11. Together they are 22x the size of Congress. Inside these environments, the govt regulates everything. By 2010, DC had 4,000 corporate office buildings that handle classified info, all subject to govt regulation. “We’re dealing with a massive material apparatus.” We should not be distracted by the small individual devices.

Cisco lost 28% of its sales, in part as a result of its being tainted by the NSA taking of its data. This is alienating citizens and foreign govts. How do we stop this? We’re dealing with a kind of assemblage of technical capabilities, tech firms that sell the notion that for security we all have to be surveilled, and people. How do we get a handle on this? I ask: Are there spaces where we can forget about them? Our messy, nice complex cities are such spaces. All that data cannot be analyzed. (She notes that she did a panel that included the brother of a Muslim who has been indefinitely detained, so now her name is associated with him.)

3. How can I activate large, diverse spaces in cities? How can we activate local knowledges? We can “outsource the neighborhood.” The language of “neighborhood” brings me pleasure, she says.

If you think of institutions, they are codified, and they notice when there are violations. Every neighborhood has knowledge about the city that is different from the knowledge at the center. The homeless know more about rats than the center does. Make open access networks available to them as a reverse wiki so that local knowledge can find a place. Leak that knowledge into those codified systems. That’s the beginning of activating a city. From this you’d get a Big Data set, capturing the particularities of each neighborhood. [A knowledge network. I agree! :)]

The next step is activism, a movement. In my fantasy, at one end it’s big city life and at the other it’s neighborhood residents enabled to feel that their knowledge matters.

Q&A

Q: If local data is being aggregated, could that become Big Data that’s used against the neighborhoods?

A: Yes, that’s why we need neighborhood activism. The politicizing of the neighborhoods shapes the way the knowledge is used.

Q: Disempowered neighborhoods would be even less able to contribute this type of knowledge.

A: The problem is to value them. The neighborhood has knowledge at ground level. That’s a first step of enabling a devalued subject. The effect of digital networks on formal knowledge creates an informal network. Velocity itself has the effect of informalizing knowledge. I’ve compared environmental activists and financial traders. The environmentalists pick up knowledge on the ground. So, the neighborhoods may be powerless, but they have knowledge. Digital interactive open access makes it possible to bring together those bits of knowledge.

Q: Those who control the pipes seem to control the power. How does Big Data avoid the world being dominated by brainy people?

A: The brainy people at, say, Goldman Sachs are part of a larger institution. These institutions have so much power that they don’t know how to govern it. The US govt has been the most powerful in the world, with the result that it doesn’t know how to govern its own power. It has engaged in disastrous wars. So “brainy people” running the world through the Ciscos, etc., I’m not sure. I’m talking about a different idea of Big Data sets: distributed knowledges. E.g., Forest Watch uses indigenous people who can’t write, but they can tell before the trained biologists when there is something wrong in the ecosystem. There’s lots of data embedded in lots of places.

[She's aggregating questions] Q1: Marginalized neighborhoods live being surveilled: stop and frisk, background checks, etc. Why did it take tapping Angela Merkel’s telephone to bring awareness? Q2: How do you convince policy makers to incorporate citizen data? Q3: There are strong disincentives to being out of the mainstream, so how can we incentivize difference?

A: How do we get the experts to use the knowledge? For me that’s not the most important aim. More important is activating the residents. What matters is that they become part of a conversation. A: About difference: Neighborhoods are pretty average places, unlike forest watchers. And even they’re not part of the knowledge-making circuit. We should bring them in. A: The participation of the neighborhoods isn’t just a utility for the central govt but is a first step toward mobilizing people who have been reduced to thinking that they don’t count. I think this is one of the most effective ways to contest the huge apparatus with the 10,000 buildings.


[liveblog] Noam Chomsky and Bart Gellman at Engaging Data

I’m at the Engaging Data 2013 conference, where Noam Chomsky and Pulitzer Prize winner (twice!) Barton Gellman are going to talk about Big Data in the Snowden Age, moderated by Ludwig Siegele of the Economist. (Gellman is one of the three people Snowden entrusted his documents to.) The conference aims at having us rethink how we use Big Data and how it’s used.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

LS: Prof. Chomsky, what’s your next book about?

NC: Philosophy of mind and language. I’ve been writing articles that are pretty skeptical about Big Data. [Please read the orange disclaimer: I'm paraphrasing and making errors of every sort.]

LS: You’ve said that Big Data is for people who want to do the easy stuff. But shouldn’t you be thrilled as a linguist?

NC: When I got to MIT in 1955, I was hired to work on a machine translation program. But I refused to work on it. “The only way to deal with machine translation at the current stage of understanding was by brute force, which after 30-40 years is how it’s being done.” A principled understanding based on human cognition is far off. Machine translation is useful, but you learn precisely nothing about human thought, cognition, language, or anything else from it. I use the Internet. Glad to have it. It’s easier to push some buttons on your desk than to walk across the street to use the library. But the transition from no libraries to libraries was vastly greater than the transition from libraries to the Internet. [Cool idea and great phrase! But I think I disagree. It depends.] We can find lots of data; the problem is understanding it. And a lot of data around us goes through a filter so it doesn’t reach us. E.g., the foreign press reports that Wikileaks released a chapter about the secret TPP (Trans Pacific Partnership). It was front page news in Australia and Europe. You can learn about it on the Net but it’s not news. The chapter was on Intellectual Property rights, which means higher prices for less access to pharmaceuticals, and rams through what SOPA tried to do, restricting use of the Net and access to data.

LS: For you Big Data is useless?

NC: Big data is very useful. If you want to find out about biology, e.g. But why no news about TPP? As Sam Huntington said, power remains strongest in the dark. [approximate] We should be aware of the long history of surveillance.

LS: Bart, as a journalist what do you make of Big Data?

BG: It’s extraordinarily valuable, especially in combination with shoe-leather, person-to-person reporting. E.g., a colleague used traditional reporting skills to get the entire data set of applicants for presidential pardons. Took a sample. More reporting. Used standard analytics techniques to find that white people are 4x more likely to get pardons, and that campaign contributors are also more likely. It would likely be useful in urban planning [which is Senseable City Lab's remit]. But all this leads to more surveillance. E.g., I could make the case that if I had full data about everyone’s calls, I could do some significant reporting, but that wouldn’t justify it. We’ve failed to have the debate we need because of the claim of secrecy by the institutions in power. We become more transparent to the gov’t and to commercial entities while they become more opaque to us.

LS: Does the availability of Big Data and the Internet automatically mean we’ll get surveillance? Were you surprised by the Snowden revelations?

NC: I was surprised at the scale, but it’s been going on for 100 years. We need to read history. E.g., the counter-insurgency “pacification” of the Philippines by the US. See the book by McCoy [maybe this]. The operation used the most sophisticated tech at the time to get info about the population to control and undermine them. That tech was immediately used by the US and Britain to control their own populations, e.g., Woodrow Wilson’s Red Scare. Any system of power — the state, Google, Amazon — will use the best available tech to control, dominate, and maximize their power. And they’ll want to do it in secret. Assange, Snowden and Manning, and Ellsberg before them, are doing the duty of citizens.

BG: I’m surprised how far you can get into this discussion without assuming bad faith on the part of the government. For the most part what’s happening is that these security institutions genuinely believe most of the time that what they’re doing is protecting us from big threats that we don’t understand. The opposition comes when they don’t want you to know what they’re doing because they’re afraid you’d call it off if you knew. Keith Alexander said that he wishes that he could bring all Americans into this huddle, but then all the bad guys would know. True, but he’s also worried that we won’t like the plays he’s calling.

LS: Bruce Schneier says that the NSA is copying what Google and Yahoo, etc. are doing. If the tech leads to snooping, what can we do about it?

NC: Govts have been doing this for a century, using the best tech they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe these are universals. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications.

BG: The capacity to rationalize may be universal, but you’ll take the conversation off track if you compare what’s happening here to the Stasi. The Stasi were blackmailing people, jailing them, preventing dissent. As a journalist I’d be very happy to find that our govt is spying on NGOs or using this power for corrupt self-enriching purposes.

NC: I completely agree with that, but that’s not the point: The same appeal is made in the most monstrous of circumstances. The freedom we’ve won sharply restricts state power to control and dominate, but they’ll do whatever they can, and they’ll use the same appeals that monstrous systems do.

LS: Aren’t we all complicit? We use the same tech. E.g., Prof. Chomsky, you’re the father of natural language processing, which is used by the NSA.

NC: We’re more complicit because we let them do it. In this country we’re very free, so we have more responsibility to try to control our govt. If we do not expose the plea of security and separate out the parts that might be valid from the vast amount that’s not valid, then we’re complicit because we have the opportunity and the freedom.

LS: Does it bug you that the NSA uses your research?

NC: To some extent, but you can’t control that. Systems of power will use whatever is available to them. E.g., they use the Internet, much of which was developed right here at MIT by scientists who wanted to communicate freely. You can’t prevent the powers from using it for bad goals.

BG: Yes, if you use a free online service, you’re the product. But if you use a for-pay service, you’re still the product. My phone tracks me and my social network. I’m paying Verizon about $1,000/year for the service, and VZ is now collecting and selling my info. The NSA couldn’t do its job as well if the commercial entities weren’t collecting and selling personal data. The NSA has been tapping into the links between their data centers. Google is racing to fix this, but a cynical way of putting this is that Google is saying “No one gets to spy on our customers except us.”

LS: Is there a way to solve this?

BG: I have great faith that transparency will enable the development of good policy. The more we know, the more we can design policies to keep power in its place. Before this, you couldn’t shop for privacy. Now a free market for privacy is developing as the providers are now telling us more about what they’re doing. Transparency allows legislation and regulation to be debated. The House Repubs came within 8 votes of prohibiting call data collection, which would have been unthinkable before Snowden. And there’s hope in the judiciary.

NC: We can do much more than transparency. We can make use of the available info to prevent surveillance. E.g., we can demand the defeat of TPP. And now hardware in computers is being designed to detect your every keystroke, leading some Americans to be wary of Chinese-made computers, but the US manufacturers are probably doing it better. And manufacturers for years have been trying to design fly-sized drones to collect info; they’ll be around soon. Drones are a perfect device for terrorists. We can learn about this and do something about it. We don’t have to wait until it’s exposed by Wikileaks. It’s right there in mainstream journals.

LS: Are you calling for a political movement?

NC: Yes. We’re going to need mass action.

BG: A few months ago I noticed a small gray box with an EPA logo on it outside my apartment in NYC. It monitors energy usage, useful for preventing brownouts. But it measures down to the apartment level, which could be useful to the police trying to establish your personal patterns. There’s no legislation or judicial review of the use of this data. We can’t turn back the clock. We can try to draw boundaries, and then have sufficient openness so that we can tell if they’ve crossed those boundaries.

LS: Bart, how do you manage the flow of info from Snowden?

BG: Snowden does not manage the release of the data. He gave it to three journalists and asked us to use our best judgment — he asked us to correct for his bias about what the most important stories are — and to avoid direct damage to security. The documents are difficult. They’re often incomplete and can be hard to interpret.

Q&A

Q: What would be a first step in forming a popular movement?

NC: Same as always. E.g., the women’s movement began in the 1960s (at least in the modern movement) with consciousness-raising groups.

Q: Where do we draw the line between transparency and privacy, given that we have real enemies?

BG: First you have to acknowledge that there is a line. There are dangerous people who want to do dangerous things, and some of these tools are helpful in preventing that. I’ve been looking for stories that elucidate big policy decisions without giving away specifics that would harm legitimate action.

Q: Have you changed the tools you use?

BG: Yes. I keep notes encrypted. I’ve learned to use the tools for anonymous communication. But I can’t go off the grid and be a journalist, so I’ve accepted certain trade-offs. I’m working much less efficiently than I used to. E.g., I sometimes use computers that have never touched the Net.

Q: In the women’s movement, at least 50% of the population stood to benefit. But probably a large majority of today’s population would exchange their freedom for convenience.

NC: The trade-off is presented as being for security. But if you read the documents, the security issue is how to keep the govt secure from its citizens. E.g., Ellsberg kept a volume of the Pentagon Papers secret to avoid affecting the Vietnam negotiations, although I thought the volume really only would have embarrassed the govt. Security is in fact not a high priority for govts. The US govt is now involved in the greatest global terrorist campaign that has ever been carried out: the drone campaign. Large regions of the world are now being terrorized. If you don’t know if the guy across the street is about to be blown away, along with everyone around, you’re terrorized. Every time you kill an Al Qaeda terrorist, you create 40 more. It’s just not a concern to the govt. In 1950, the US had incomparable security; there was only one potential threat: the creation of ICBM’s with nuclear warheads. We could have entered into a treaty with Russia to ban them. See McGeorge Bundy’s history. It says that he was unable to find a single paper, even a draft, suggesting that we do something to try to ban this threat of total instantaneous destruction. E.g., Reagan tested Russian nuclear defenses that could have led to horrible consequences. Those are the real security threats. And it’s true not just of the United States.


October 25, 2013

[dplafest] Advanced Research and the DPLA

I’m at a DPLAfest session. Jean Bauer (Digital Humanities Librarian, Brown U.), Jim Egan (English Prof, Brown), Kathryn Shaughnessy (Assoc. Prof, University Libraries, St. John’s U), and David Smith (Ass’t Prof, CS, Northeastern).

Rather than liveblogging in this blog, I contributed to the collaboratively-written Google Doc designated for the session notes. It’s here.


[dplafest] Dan Cohen opens DPLA meeting

Dan Cohen has some announcements in his welcome to the DPLAfest.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The collection now has 5M items. These come from partner hubs (large institutions) and service hubs (aggregations of smaller providers). Three new hubs have joined, bringing the total to nine, from NY, North Carolina, and Texas. Dan stresses the diversity of contributors.

The DPLA sends visitors back to the contributing organizations. E.g., Minnesota Reflections is up 55% in visitors and 62% in unique visitors over the year since it joined the DPLA.

He also announces the DPLA Bookshelf, which is a contribution from the Harvard Library Innovation Lab that I co-direct. It’s an embedded version of the Stacklife browser, which you can see by going to DP.LA and searching for a book. (You can use the Harvard version here.)

Dan announces a $1M grant from the Bill & Melinda Gates Foundation, to help local libraries curate material in the DPLA and start scanning in local collections. Also, an anonymous donor gave $450,000. [I don't want to say who it was, but, well, you're welcome.] Dan Cohen suggests we become a sponsor at http://www.dp.la/donate. T-shirts and, yes, tote bags.

There have been 1.7M uses of the DPLA API as of September 2013. Examples of work already done:

Dan talks about DPLA Local, an idea that would enable local communities to use the services the DPLA provides.

Dan says that all of the sessions have Google Docs already set up for collaborative note-taking [an approach I'm very fond of].


June 20, 2013

[lodlam] Topics for Day 2

Here are the sessions people are proposing for the second day of the LODLAM conference in Montreal:


  • Getty Vocabulary goes open


  • Linked data on mobiles, wearable devices


  • Do cool things with the data sets that you have on your laptop – let’s build stuff!


  • Your tools and solutions


  • NLP for linked open data for libraries, archives, and museums. Data extraction, taxonomy alignment, context extraction, etc.


  • World War I in LOD


  • LOD and accessibility & assistive devices


  • The Pundit software package


  • the KARMA mapping tool


  • Tools and techniques for generating concordances between people


  • Why Schema.org?


  • Copying and synching linked data


  • FRBR and other standards [couldn't hear]


  • How to create a new generation of LOD professionals. Getting students involved in projects.


  • The future of LODLAM


  • Normalizing data models and licensing models


The official list is here.


June 19, 2013

[lodlam] Convert to RDF with KARMA

KARMA, from the University of Southern California, takes data from a wide variety of sources and maps it to your ontologies and generates linked data. It is open source and free. [I have not even re-read this post. Running to the next session.]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

They are demo-ing using a folder full of OWL ontology files. [OWL files contain the rules that define ontologies.] KARMA runs in your browser. The mapping format is R2RML, which is designed for relational databases, but they've extended it to handle more types of databases. You can import from a database, files, or a service. For the demo, they're using CSV files from a Smithsonian database that consist of display names, IDs representing unique people, and a variant or married name. They want to map it to the Europeana ontology. KARMA shows the imported CSV and lets you (for example) create a URI for every person's name in the table. You can use Python to transform the variant names into a standard name ontology, e.g. transforming "married name" into aac-ont:married (American Art Consortium). You can model the data and it learns it. E.g., it asks if you want to map the original's ConstituentID to saam-ont:constituentID or saam-ont:objectId. (It recognizes that the ID is all numerals.) There's an advanced option that lets you map it to, for example, a URI for aac-ont:Person1.

He clicks on the "display name" and KARMA suggests that it's a SKOS altLabel, or a FOAF name, etc. If there are no useful suggestions, you can pick one that's close and then edit it. You can browse the ontologies in the folders you've configured it to load. You can have synonyms ("a FOAF person can be a SKOS person"). [There's yet more functionality, but this is where I topped out.]

You can save this as a process that can be run in batch mode.
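[For a sense of what this kind of mapping produces, here’s a hand-rolled sketch, using rdflib, of the CSV-to-RDF transformation that KARMA automates. The column names, placeholder namespace URIs, and property choices echo the demo described above but are assumptions here; KARMA itself works through a browser UI and generates R2RML mappings rather than this Python.]

```python
# Hypothetical sketch of mapping a CSV of people to RDF, analogous to the
# KARMA demo above. Column names and namespace URIs are placeholders.
import csv
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

SAAM = Namespace("http://example.org/saam-ont/")   # placeholder namespaces
AAC = Namespace("http://example.org/aac-ont/")

g = Graph()
g.bind("foaf", FOAF)

with open("constituents.csv", newline="") as f:    # hypothetical Smithsonian export
    for row in csv.DictReader(f):
        person = URIRef(f"http://example.org/person/{row['ConstituentID']}")
        g.add((person, RDF.type, FOAF.Person))
        g.add((person, SAAM.constituentId, Literal(row["ConstituentID"])))
        g.add((person, FOAF.name, Literal(row["DisplayName"])))
        if row.get("MarriedName"):                 # variant/married name, if present
            g.add((person, AAC.married, Literal(row["MarriedName"])))

print(g.serialize(format="turtle"))                # emit the linked data as Turtle
```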


June 2, 2013

[2b2k] Knowledge in its natural state

I gave a 20 minute talk at the Wired Next Fest in Milan on June 1, 2013. Because I needed to keep the talk to its allotted time and because it was being simultaneously translated into Italian, I wrote it out and gave a copy to the translators. Inevitably, I veered from the script a bit, but not all that much. What follows is the script with the veerings that I can remember. The paragraph breaks track the slide changes.

(I began by thanking the festival, and my progressive Italian publisher, Codice Edizioni. Codice are pragmatic idealists and have been fantastic to work with.)

Knowledge seems to fit so perfectly into books. But to marvel at how well Knowledge fits into books…

… is to marvel at how well each rock fits into its hole in the ground. Knowledge fits books because we’ve shaped knowledge around books and paper.

And knowledge has taken on the properties of books and paper. Like books, knowledge is ordered and orderly. It is bounded, just as books stretch from cover to cover. It is the product of an individual mind that then is filtered. It is kept private and we’re not responsible for it until it’s published. Once published, it cannot be undone. It creates a privileged class of experts, like the privileged books that are chosen to be published and then chosen to be in a library.

Released from the bounds of paper, knowledge takes on the shape of its new medium, the Internet. It takes on the properties of its new medium just as it had taken on the properties of its old paper medium. It’s my argument today that networked knowledge assumes a more natural shape. Here are some of the properties of new, networked knowledge.

1. First, because it’s a network, it’s linked.

2. These links have no natural stopping point for your travels. If anything, the network gives you temptations to continue, not stopping points.

3. And, like the Net, it’s too big for any one head. Michael Nielsen, the author of Reinventing Discovery, uses the discovery of the Higgs boson as an example. That discovery required gigantic networks of equipment and vast networks of people. There is no one person who understands everything about the system that proved that that particle exists. That knowledge lives in the system, in the network.

4. Like the Net, networked knowledge is in perpetual disagreement. There is nothing about which everyone agrees. We like to believe this is a temporary state, but after thousands of years of recorded history, we can now see for sure that we are never going to agree about anything. The hope for networked knowledge is that we’re learning to disagree more fruitfully, in a linked environment.

5. And, as the Internet makes very clear, we are fallible creatures. We get everything wrong. So networked knowledge becomes more credible when it acknowledges fallibility. This is very different from the old paper-based authorities who saw fallibility as a challenge to their authority.

6. Finally, knowledge is taking on the humor of the Internet. We’re on the Internet voluntarily and, freed of the constrictions of paper, it turns out that we like being with one another. Even when the topic is serious, like this topic at Reddit [a discussion of a physics headline], within a few comments we’re making jokes. And then going back to the serious topic. Paper squeezed the humor out of knowledge. But that’s unnatural.

These properties of networked knowledge are also properties of the Network. But they’re also properties that are more human and more natural than the properties of traditional knowledge.

But there’s one problem:

There is no such thing as natural knowledge. Knowledge is a construct. Our medium may have changed, but we haven’t, at least so it seems. And so we’re not free to reinvent knowledge any way we’d like. Significant problems based on human tendencies are emerging. I’ll point to four quick problem areas.

First, we see the old patterns of concentration of power reemerge on the Net. Some sites have an enormous number of viewers, but the vast majority of sites have very few. [Slide shows Clay Shirky’s power law distribution chart, and a photo of Clay]

Albert-László Barabási has shown that this type of clustering is typical of networks even in nature, and it is certainly true of the Internet.

Second, on the Internet, without paper to anchor it, knowledge often loses its context. A tweet…

Slips free into the wild…

It gets retweeted and perhaps loses its author

And then gets retweeted and loses its meaning. And now it circulates as fact. [My example was a tweet about the government not allowing us to sell body parts morphing into a tweet about the government selling body parts. I made it up.]

Third, the Internet provides an incentive to overstate.

Fourth, even though the Net contains lots of different sorts of people and ideas and thus should be making us more open in our beliefs…

… we tend to hang out with people who are like us. It’s a natural human thing to prefer people “like us,” or “people we’re comfortable with.” And this leads to confirmation bias — our existing beliefs get reinforced — and possibly to polarization, in which our beliefs become more extreme.

This is known as the echo chamber problem, and it’s a real problem. I personally think it’s been overstated, but it is definitely there.

So there are four problems with networked knowledge. Not one of them is new. Each has an analog from before the Net.

  1. The loss of context has always been with us. Most of what we believe we believe because we believe it, not because of evidence. At its best we call it, in English, common sense. But history has shown us that common sense can include absurdities and lead to great injustices.

  2. Yes, the Net is not a flat, totally equal place. But it is far less centralized than the old media were, where only a handful of people were allowed to broadcast their ideas and to choose which ideas were broadcast.

  3. Certainly the Internet tends towards overstatement. But we have had mass media that were built on running overstated headlines. This newspaper [Weekly World News] is a humor paper, but it’s hard to distinguish from serious broadcast news.

  4. And speaking of Fox, yes, on the Internet we can simply stick with ideas that we already agree with, and get more confirmed in our beliefs. But that too is nothing new. The old media actually were able to put us into even more tightly controlled echo chambers. We are more likely to run into opposing ideas — and even just to recognize that there are opposing ideas — on the Net than in a rightwing or leftwing newspaper.

It’s not simply that all the old problems with knowledge have reemerged. Rather, they’ve re-emerged in an environment that offers new and sometimes quite substantial ways around them.

  1. For example, if something loses its context, we can search for that context. And links often add context.

  2. And, yes, the Net forms hubs, but as Clay Shirky and Chris Anderson have pointed out, the Net also lets a long tail form, so that voices that in the past simply could not have been heard, now can be. And the activity in that long tail surpasses the attention paid to the head of the tail.

  3. Yes, we often tend to overstate things on the Net, but we also have a set of quite powerful tools for pushing back. We review our reviews. We have sites like the well-regarded American site, Snopes.com, that will tell you if some Internet rumor is true. Snopes is highly reliable. Then we have all of the ways we talk with one another on the Net, evaluating the truth of what we’ve read there.

  4. And, the echo chamber is a real danger, but we also have on the Net the occasional fulfillment of our old ideal of being able to have honest, respectful conversations with people with whom we fundamentally disagree. These examples are from Reddit, but there are others.

So, yes, there are problems of knowledge that persist even when our technology of knowledge changes. That’s because these are not technical problems so much as human problems…

…and thus require human solutions. And the fundamental solution is that we need to become more self-aware about knowledge.

Our old technology — paper — gave us an idea of knowledge that said that knowledge comes from experts who are filtered, printed, and then it’s settled, because that’s how books work. Our new technology shows us we are complicit in knowing. In order to let knowledge get as big as our new medium allows, we have to recognize that knowledge comes from all of us (including experts), it is to be linked, shared, discussed, argued about, made fun of, and is never finished and done. It is thoroughly ours – something we build together, not a product manufactured by unknown experts and delivered to us as if it were more than merely human.

The required human solution therefore is to accept our human responsibility for knowledge, to embrace and improve the technology that gives knowledge to us — for example, by embracing Open Access and the culture of linking and of the Net, and to be explicit about these values.

Becoming explicit is vital because our old medium of knowledge did its best to hide the human qualities of knowledge. Our new medium makes that responsibility inescapable. With the crumbling of the paper authorities, it becomes more urgent than ever that we assume personal and social responsibility for what we know.

Knowing is an unnatural act. If we can remember that — remember the human role in knowing — we now have the tools and connections that will enable even everyday knowledge to scale to a dimension envisioned in the past only by the mad and the God-inspired.

Thank you.

