
April 12, 2014

[2b2k] Protein Folding, 30 years ago

Simply in terms of nostalgia, this 1985 video called “Knowledge Engineering: Artificial Intelligence Research at the Stanford Heuristic Programming Project” from the Stanford archives is charming right down to its Tron-like digital soundtrack.

But it’s also really interesting if you care about the way we’ve thought about knowledge. The Stanford Heuristic Programming Project under Edward Feigenbaum did groundbreaking work in how computers represent knowledge, emphasizing the content and not just the rules. (Here is a 1980 article about the Project and its projects.)

And then at the 8:50 mark, it expresses optimism that an expert system would be able to represent not only every atom of proteins but how they fold.

Little could anyone have predicted that, even 30 years later, protein folding would still be done better by the human brain than by computers, and that humans playing a game — Fold.It — would produce useful results.

It’s certainly the case that we have expert systems all over the place now, from Google Maps to the Nest thermostat. But we also see another type of expert system that was essentially unpredictable in 1985. One might think that the domain of computer programming would be susceptible to being represented in an expert system because it is governed by a finite set of perfectly knowable rules, unlike the fields the Stanford project was investigating. And there are of course expert systems for programming. But where do the experts actually go when they have a problem? To StackOverflow where other human beings can make suggestions and iterate on their solutions. One could argue that at this point StackOverflow is the most successful “expert system” for computer programming in that it is the computer-based place most likely to give you an answer to a question. But it does not look much like what the Stanford project had in mind, for how could even Edward Feigenbaum have predicted what human beings can and would do if connected at scale?

(Here’s an excellent interview with Feigenbaum.)


April 9, 2014

[shorenstein] Andy Revkin on communicating climate science

I’m at a talk by Andrew Revkin of the NY Times’ Dot Earth blog at the Shorenstein Center. [Alex Jones mentions in his introduction that Andy is a singer-songwriter who played with Pete Seeger. Awesome!]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Andy says he’s been a science reporter for 31 years. His first magazine article was about the dangers of the anti-pot herbicide paraquat. (The article won an award for investigative journalism). It had all the elements — bad guys, victims, drama — typical of “Woe is me. Shame on you” environmental reporting. His story on global warming in 1988 has “virtually the same cast of characters” that you see in today’s coverage. “And public attitudes are about the same…Essentially the landscape hasn’t changed.” Over time, however, he has learned how complex climate science is.

In 2010, his blog moved from NYT’s reporting to editorial, so now he is freer to express his opinions. He wants to talk with us today about the sort of “media conversation” that occurs now, but didn’t when he started as a journalist. We now have a cloud of people who follow a journalist, ready to correct them. “You can say this is terrible. It’s hard to separate noise from signal. And that’s correct.” “It can be noisy, but it’s better than the old model, because the old model wasn’t always right.” Andy points to the NYT coverage on the build-up to the invasion of Iraq. But this also means that now readers have to do a lot of the work themselves.

He left the NYT in his mid-fifties because he saw that access to info more often than not doesn’t change you, but instead reinforces your positions. So at Pace U he studies how and why people understand ecological issues. “What is it about us that makes us neglect long-term imperatives?” This works better in a blog, in a conversation drawing upon other people’s expertise, than in an article. “I’m a shitty columnist,” he says. People read columns to reinforce their beliefs, although maybe you’ll read George Will to refresh your animus :) “This makes me not a great spokesperson for a position.” Most positions are one-sided, whereas Andy is interested in the processes by which we come to our understanding.

Q: [alex jones] People seem stupider about the environment than they were 20 years ago. They’re more confused.

A: In 1991 there was a survey of museum goers who thought that global warming was about the ozone hole, not about greenhouse gases. A 2009 study showed that on a scale of 1-6 of alarm, most Americans were at 5 (“concerned,” not yet “alarmed”). Yet, Andy points out, the Cap and Trade bill failed. Likewise, the vast majority support rebates on solar panels and fuel-efficient vehicles. They support requiring 45 mpg fuel efficiency across vehicle fleets, even at a $1K price premium. He also points to some Gallup data that showed that more than half of the respondents worry a great deal or a fair amount, but that number hasn’t changed since Gallup began asking the question in 1989. [link] Furthermore, global warming doesn’t show up as one of the issues they worry about.

The people we need to motivate are innovators. We’ll have 9B on the planet soon, and 2B who can’t make reasonable energy choices.

Q: Are we heading toward a climate tipping point?

A: There isn’t evidence that tipping points in climate are real, and if they are, we can’t really predict them. [link]

Q: The permafrost isn’t going to melt?

A: No, it is melting. But we don’t know if it will be catastrophic.

Andy points to a photo of despair at a climate conference. But then there’s Scott H. DeLisi who represents a shift in how we relate to communities: Facebook, Twitter, Google Hangouts. Inside Climate News won the Pulitzer last year. “That says there are new models that may work. Can they sustain their funding?” Andy’s not sure.

“Journalism is a shrinking wedge of a growing pie of ways to tell stories.”

“Escape from the Nerd Loop”: people talking to one another about how to communicate science issues. Andy loves Twitter. The hashtag is as big an invention as photovoltaics, he says. He references Chris Messina, its inventor, and points to how useful it is for separating and gathering strands of information, including at NASA’s Asteroid Watch. Andy also points to descriptions by a climate scientist who went to the Arctic [or Antarctic?] that he curated, and to a singing scientist.

Q: I’m a communications student. There was a guy named Marshall McLuhan, maybe you haven’t heard of him. Is the medium the message?

A: There are different tools for different jobs. I could tell you the volume of the atmosphere, but Adam Nieman, a science illustrator, used this way to show it to you.

Q: Why is it so hard to get out of catastrophism and into thinking about solutions?

A: Journalism usually focuses on the down side. If there’s no “Woe is me” element, it tends not to make it onto the front page. At Pace U. we travel each spring and do a film about a sustainable resource farming question. The first was on shrimp-farming in Belize. It’s got thousands of views but it’s not on the nightly news. How do we shift our norms in the media?

[david ropeik] Inherent human psychology: we pay more attention to risks. People who want to move the public dial inherently are attracted to the more attention-getting headlines, like “You’re going to die.”

A: Yes. And polls show that what people say about global warming depends on the weather outside that day.

Q: A report recently drew the connection between climate change and other big problems facing us: poverty, war, etc. What did you think of it?

A: It was good. But is it going to change things? The Extremes report likewise. The city that was most affected by the recent typhoon had tripled its population, mainly with poor people. Andy values Jesse Ausubel, who says that most politics is people pulling on disconnected levers.

Q: Any reflections on the disconnect between breezy IPCC executive summaries and the depth of the actual scientific report?

A: There have been demands for IPCC to write clearer summaries. Its charter has it focused on the down sides.

Q: How can we use open data and community tools to make better decisions about climate change? Will the data Obama opened up last month help?

A: The forces of stasis can congregate on that data and raise questions about it based on tiny inconsistencies. So I’m not sure it will change things. But I’m all for transparency. It’s an incredibly powerful tool, like when the US Embassy was doing its own Twitter feed on Beijing air quality. We have this wonderful potential now; Greenpeace (who Andy often criticizes) did on-the-ground truthing about companies deforesting orangutan habitats in Indonesia. Then they did a great campaign to show who’s using the palm oil: Buying a KitKat bar contributes to the deforesting of Borneo. You can do this ground-truthing now.

Q: In the past 6 months there seems to have been a jump in climate change coverage. No?

A: I don’t think there’s more coverage.

Q: India and Pakistan couldn’t agree on water control in part because the politicians talked about scarcity while the people talked in terms of their traditional animosities. How can we find the right vocabularies?

A: If the conversation is about reducing vulnerabilities and energy efficiency, you can get more consensus than talking about global warming.

Q: How about using data visualizations instead of words?

A: I love visualizations. They spill out from journalism. How much it matters is another question. Ezra Klein just did a piece that says that information doesn’t matter.

Q: Can we talk about your “Years of Living Dangerously” piece? [Couldn't hear the rest of the question].

A: My blog is edited by the op-ed desk, and I don’t always understand their decisions. Journalism migrates toward controversy. The Times has a feature “Room for Debate,” and I keep proposing “Room for Agreement” [link], where you’d see what people who disagree about an issue can agree on.

Q: [me] Should we still be engaging with deniers? With whom should we be talking?

A: Yes, we should engage. We taxpayers subsidize second mortgages on houses in wildfire zones in Colorado. Why? So firefighters have to put themselves at risk? [link] That’s an issue that people agree on across the spectrum. When it comes to deniers, we have to ask: what exactly are you denying? Particular data? The scientific method? Physics? I’ve come to the conclusion that even if we had perfect information, we still wouldn’t galvanize the action we need.

[Andy ends by singing a song about liberated carbon. That's not something you see every day at the Shorenstein Center.]

[UPDATE (the next day): I added some more links.]


January 2, 2014

[2b2k] Social Science in the Age of Too Big to Know

Gary King [twitter:kinggarry], Director of Harvard’s Institute for Quantitative Social Science, has published an article (Open Access!) on the current status of this branch of science. Here’s the abstract:

The social sciences are undergoing a dramatic transformation from studying problems to solving them; from making do with a small number of sparse data sets to analyzing increasing quantities of diverse, highly informative data; from isolated scholars toiling away on their own to larger scale, collaborative, interdisciplinary, lab-style research teams; and from a purely academic pursuit focused inward to having a major impact on public policy, commerce and industry, other academic fields, and some of the major problems that affect individuals and societies. In the midst of all this productive chaos, we have been building the Institute for Quantitative Social Science at Harvard, a new type of center intended to help foster and respond to these broader developments. We offer here some suggestions from our experiences for the increasing number of other universities that have begun to build similar institutions and for how we might work together to advance social science more generally.

In the article, Gary argues that Big Data requires Big Collaboration to be understood:

Social scientists are now transitioning from working primarily on their own, alone in their offices (a style that dates back to when the offices were in monasteries) to working in highly collaborative, interdisciplinary, larger scale, lab-style research teams. The knowledge and skills necessary to access and use these new data sources and methods often do not exist within any one of the traditionally defined social science disciplines and are too complicated for any one scholar to accomplish alone.

He begins by giving three excellent examples of how quantitative social science is opening up new possibilities for research.

1. Latanya Sweeney [twitter:LatanyaSweeney] found “clear evidence of racial discrimination” in the ads served up by newspaper websites.

2. A study of all 187M registered voters in the US showed that a third of those listed as “inactive” in fact cast ballots, “and the problem is not politically neutral.”

3. A study of 11M social media posts from China showed that the Chinese government is not censoring speech but is censoring “attempts at collective action, whether for or against the government…”

Studies such as these “depended on IQSS infrastructure, including access to experts in statistics, the social sciences, engineering, computer science, and American and Chinese area studies.”

Gary also points to “the coming end of the quantitative-qualitative divide” in the social sciences, as new techniques enable massive amounts of qualitative data to be quantified, enriching purely quantitative data and extracting additional information from the qualitative reports.

Instead of quantitative researchers trying to build fully automated methods and qualitative researchers trying to make do with traditional human-only methods, now both are heading toward using or developing computer-assisted methods that empower both groups.

We are seeing a redefinition of social science, he argues:

We instead use the term “social science” more generally to refer to areas of scholarship dedicated to understanding, or improving the well-being of, human populations, using data at the level of (or informative about) individual people or groups of people.

This definition covers the traditional social science departments in faculties of schools of arts and science, but it also includes most research conducted at schools of public policy, business, and education. Social science is referred to by other names in other areas but the definition is wider than use of the term. It includes what law school faculty call “empirical research,” and many aspects of research in other areas, such as health policy at schools of medicine. It also includes research conducted by faculty in schools of public health, although they have different names for these activities, such as epidemiology, demography, and outcomes research.

The rest of the article reflects on pragmatic issues, including what this means for the sorts of social science centers to build, since community is “by far the most important component leading to success…” “If academic research became part of the X-games, our competitive event would be ‘extreme cooperation’.”


November 20, 2013

[liveblog][2b2k] David Eagleman on the brain as networks

I’m at re comm 13, an odd conference in Kitzbühel, Austria: 2.5 days of talks to 140 real estate executives, but the talks are about anything except real estate. David Eagleman, a neuroscientist at Baylor and a well-known author, is giving a talk. (Last night we had one of those compressed conversations that I can’t wait to be able to continue.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

How do we know your thinking is in your brain? If you damage your finger, you don’t change, but damage to your brain can change basic facets of your life. “The brain is the densest representation of who you are.” We’re the only species trying to figure out our own programming language. We’ve discovered the most complicated device in the universe: our own brains. Ten billion neurons. Every single neuron contains the entire human genome and thousands of proteins doing complicated computations. Each neuron is connected to tens of thousands of its neighbors, meaning there are 100s of trillions of connections. These numbers “bankrupt the language.”

Almost all of the operations of the brain are happening at a level invisible to us. Taking a drink of water requires a “lightning storm” of activity at the neural level. This leads us to a concept of the unconscious. The conscious part of you is the smallest bit of what’s happening in the brain. It’s like a stowaway on a transatlantic journey that’s taking credit for the entire trip. When you think of something, your brain’s been working on it for hours or days. “It wasn’t really you that thought of it.”

About the unconscious: Psychologists gave photos of women to men and asked them to evaluate how attractive they are. Some of the photos were the same women, but with dilated eyes. The men rated them as being more attractive but none of them noticed the dilation. Dilated eyes are a sign of sexual readiness in women. Men made their choices with no idea of why.

More examples: In the US, if your name is Dennis or Denise, you’re more likely to become a dentist. These dentists have a conscious narrative about why they became dentists that misses the trick their brain has played on them. Likewise, people are statistically more likely to marry someone whose first name begins with the same first letter as theirs. And, if you are holding a warm mug of coffee, you’ll describe the relationship with your mother as warmer than if you’re holding an iced cup. There is an enormous gap between what you’re doing and what your conscious mind is doing.

“We should be thankful for that gap.” There’s so much going on under the hood, that we need to be shielded from the details. The conscious mind gets in trouble when it starts paying attention to what it’s doing. E.g., try signing your name with both hands in opposite directions simultaneously: it’s easy until you think about it. Likewise, if you now think about how you steer when making a lane change, you’re likely to enact it wrong. (You actually turn left and then turn right to an equal measure.)

Know thyself, sure. But neuroscience teaches us that you are many things. The brain is not a computer with a single output. It has many networks that are always competing. The brain is like a parliament that debates an action. When deciding between two sodas, one network might care about the price, another about the experience, another about the social aspect (cool or lame), etc. They battle. David looks at three of those networks:

1. How does the brain make decisions about valuation? E.g., people will walk 10 mins to save 10 € on a 20 € pen but not on a 557 € suit. Also, we have trouble making comparisons of worth among disparate items unless they are in a shared context. E.g., Williams Sonoma had a bread-baking machine for $275 that did not sell. Once they added a second one for $370, the $275 machine started selling. In real estate, if a customer is trying to decide between two homes, one modern and one traditional, and you want them to buy the modern one, show them another modern one. That gives them the context by which they can decide to buy it.

Everything is associated with everything else in the brain. (It’s an associative network.) Coffee used to be $0.50. When Starbucks started, they had to unanchor it from the old model so they made the coffee houses arty and renamed the sizes. Having lost the context for comparison, the price of Starbucks coffee began to seem reasonable.

2. Emotional experience is a big part of decision making. If you’re in a bad-smelling room, you’ll make harsher moral decisions. The trolley dilemma: 5 people have been tied to the tracks. A trolley is approaching rapidly. You can switch the trolley to a track with only one person tied to it. Everyone would switch the trolley. But now instead, you can push a fat man onto the tracks to stop the trolley. Few would. In the second scenario, touching someone engages the emotional system. The first scenario is just a math problem. The logic and emotional systems are always fighting it out. The Greeks viewed the self as someone steering a chariot drawn by the white horse of reason and the black horse of passion. [From Plato's Phaedrus]

3. A lot of the machinery of the brain deals with other brains. We use the same circuitry to think about people and/or corporations. When a company betrays us, our brain responds the way it would if a friend betrayed us. Traditional economics says customer interactions are short-term but the brain takes a much longer-range view. Breaches of trust travel fast. (David plays “United Breaks Guitars.”) Smart companies use social media that make you believe that the company is your friend.

The battle among these three networks drives decisions. “Know thyselves.”

This is unsettling. The self is not at the center. It’s like when Galileo repositioned us in the universe. This seemed like a dethroning of man. The upside is that we’ve discovered the Cosmos is much bigger, more subtle, and more magnificent than we thought. As we sail into the inner cosmos of the brain, the brain is much more subtle and magnificent than we ever considered.

“We’ve found the most wondrous thing in the universe, and it’s us.”

Q: Won’t this let us be manipulated?

A: Neuroscience is just catching up with what advertisers have known for 100 years.

Q: What about free will?

A: My labs and others have done experiments, and there’s no single experiment in neuroscience that proves that we do or do not have free will. But if we have free will, it’s a very small player in the system. We have genetics and experiences, and they make brains very different from one another. I argue for a legal system that recognizes a difference between people who may have committed the same crime. There are many different types of brains.


September 27, 2013

[2b2k] Popular Science incompetently manages its comments, gives up

Popular Science has announced that it’s shutting down comments on its articles. The post by Suzanne LeBarre says this is because “trolls and spambots” have overwhelmed the useful comments. But what I hear instead is: “We don’t know how to run a comment board, so shut up.”

Suzanne cites research that suggests that negative comments on an article reduce the credibility of the article, even if those negative comments are entirely unfounded. Thus, the trolls don’t just ruin the conversation, they hurt the cause of science.

Ok, let’s accept that. Scientific American cited the same research but came to a different decision. Rather than shut down its comments, it decided to moderate them using some sensible rules designed to encourage useful conversation. Their idea of a “useful conversation” is likely quite similar to Popular Science’s: not only no spam, but the discourse must be within the norms of science. So, it doesn’t matter how loudly Jesus told you that there is no climate change going on, your message is going to be removed if it doesn’t argue for your views within the evidentiary rules of science.

You may not like this restriction at Scientific American. Tough. You have lots of other places where you can talk about Jesus’ beliefs about climate change. I posted at length about the Scientific American decision at the time, and especially about why this makes clear the problems with the “echo chamber” meme, but I fundamentally agree with it.

If comments aren’t working on your site, then it’s your fault. Fix your site.

[Tip o' the hat to Joshua Beckerman for pointing out the PopSci post.]


September 11, 2013

Spot the octopus!

Science Friday has posted a brief, phenomenal video about how octopuses and other cephalopods manage to camouflage themselves incredibly quickly. It explains the skin’s mechanism (which is mind-blowing in itself), but leaves open how they manage this even though they’re color blind. (Hat tip to Joe Mahoney.)


July 28, 2013

The shockingly short history of the history of technology

In 1960, the academic journal Technology and Culture devoted its entire Autumn edition [1] to essays about a single work, the fifth and final volume of which had come out in 1958: A History of Technology, edited by Charles Singer, E. J. Holmyard, A. R. Hall, and Trevor I. Williams. Essay after essay implies or outright states something I found quite remarkable: A History of Technology is the first history of technology.

You’d think the essays would have some clever twist explaining why all those other things that claimed to be histories were not, perhaps because they didn’t get the concept of “technology” right in some modern way. But, no, the statements are pretty untwisty. The journal’s editor matter-of-factly claims that the history of technology is a “new discipline.”[2] Robert Woodbury takes the work’s publication as the beginning of the discipline as well, although he thinks it pales next to the foundational work of the history of science [3], a field the journal’s essays generally take as the history of technology’s older sibling, if not its parent. Indeed, fourteen years later, in 1974, Robert Multhauf wrote an article for that same journal, called “Some Observations on the State of the History of Technology,”[4] that suggested that the discipline was only then coming into its own. Why, some universities have even recognized that there is such a thing as an historian of technology!

The essay by Lewis Mumford, whom one might have mistaken for a prior historian of technology, marks the volumes as a first history of technology, pans them as a history of technology, and acknowledges prior attempts that border on being histories of technology. [5] His main objection to A History of Technology — and he is far from alone in this among the essays — is that the volumes don’t do the job of synthesizing the events recounted, failing to put them into the history of ideas, culture, and economics that explains both how technology took the turns that it did and what those turns meant for human life. At least, Mumford says, these five volumes do a better job than the works of three nineteenth-century British writers who wrote something like histories of technology: Andrew Ure, Samuel Smiles, and Charles Babbage. (Yes, that Charles Babbage.) (Multhauf points also to Louis Figuier in France, and Franz Reuleaux in Germany.[6])

Mumford comes across as a little miffed in the essay he wrote about A History of Technology, but, then, Mumford often comes across as at least a little miffed. In the 1963 introduction to his 1934 work, Technics and Civilization, Mumford seems to claim the crown for himself, saying that his work was “the first to summarize the technical history of the last thousand years of Western Civilization…” [7]. And, indeed, that book does what he claims is missing from A History of Technology, looking at the non-technical factors that made the technology socially feasible, and at the social effects the technology had. It is a remarkable work of synthesis, driven by a moral fervor that borders on the rhetoric of a prophet. (Mumford sometimes crossed that border; see his 1946 anti-nuke essay, “Gentlemen: You are Mad!” [8]) Still, in 1960 Mumford treated A History of Technology as a first history of technology not only in the academic journal Technology and Culture, but also in The New Yorker, claiming that until recently the history of technology had been “ignored,” and “…no matter what the oversights or lapses in this new ‘History of Technology,’ one must be grateful that it has come into existence at all.”[9]

So, there does seem to be a rough consensus that the first history of technology appeared in 1958. That the newness of this field is shocking, at least to me, is a sign of how dominant technology as a concept — as a frame — has become in the past couple of decades.


[1] Technology and Culture. Autumn, 1960. Vol. 1, Issue 4.

[2] Melvin Kranzberg. “Charles Singer and ‘A History of Technology’” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 299-302. p. 300.

[3] Robert S. Woodbury. “The Scholarly Future of the History of Technology” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 345-348. p. 345.

[4] Robert P. Multhauf, “Some Observations on the State of the History of Technology.” Technology and Culture. Jan. 1974. Vol. 15, no. 1. pp. 1-12.

[5] Lewis Mumford. “Tools and the Man.” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 320-334.

[6] Multhauf, p. 3.

[7] Lewis Mumford. Technics and Civilization. (Harcourt Brace, 1934. New edition 1963), p. xi.

[8] Lewis Mumford. “Gentlemen: You Are Mad!” Saturday Review of Literature. March 2, 1946, pp. 5-6.

[9] Lewis Mumford. “From Erewhon to Nowhere.” The New Yorker. Oct. 8, 1960. pp. 180-197.


May 26, 2013

[2b2k] Is big data degrading the integrity of science?

Amanda Alvarez has a provocative post at GigaOm:

There’s an epidemic going on in science: experiments that no one can reproduce, studies that have to be retracted, and the emergence of a lurking data reliability iceberg. The hunger for ever more novel and high-impact results that could lead to that coveted paper in a top-tier journal like Nature or Science is not dissimilar to the clickbait headlines and obsession with pageviews we see in modern journalism.

The article’s title points especially to “dodgy data,” and the item in this list that’s by far the most interesting to me is the “data reliability iceberg,” and its tie to the rise of Big Data. Amanda writes:

…unlike in science…, in big data accuracy is not as much of an issue. As my colleague Derrick Harris points out, for big data scientists the ability to churn through huge amounts of data very quickly is actually more important than complete accuracy. One reason for this is that they’re not dealing with, say, life-saving drug treatments, but with things like targeted advertising, where you don’t have to be 100 percent accurate. Big data scientists would rather be pointed in the right general direction faster — and course-correct as they go – than have to wait to be pointed in the exact right direction. This kind of error-tolerance has insidiously crept into science, too.

But, the rest of the article contains no evidence that the last sentence’s claim is true because of the rise of Big Data. In fact, even if we accept that science is facing a crisis of reliability, the article doesn’t pin this on an “iceberg” of bad data. Rather, it seems to be a melange of bad data, faulty software, unreliable equipment, poor methodology, undue haste, and o’erweening ambition.

The last part of the article draws some of the heat out of the initial paragraphs. For example: “Some see the phenomenon not as an epidemic but as a rash, a sign that the research ecosystem is getting healthier and more transparent.” It makes the headline and the first part seem a bit overstated — not unusual for a blog post (not that I would ever do such a thing!) but at best ironic given this post’s topic.

I remain interested in Amanda’s hypothesis. Is science getting sloppier with data?


February 4, 2013

[2b2k] Are all good conversations echo chambers?

Bora Zivkovic, the blog editor at Scientific American, has a great post about bad comment threads. This is a topic that has come up every day this week, which may just be a coincidence, or perhaps is a sign that the Zeitgeist is recognizing that when it talks to itself, it sounds like an idiot.

Bora cites a not-yet-published paper that presents evidence that a nasty, polarized comment thread can cause readers who arrive with no opinion about the paper’s topic to come to highly polarized opinions about it. This is in line with off-line research Cass Sunstein cites that suggests echo chambers increase polarization, except this new research indicates that it increases polarization even on first acquaintance. (Bora considers the echo chamber idea to be busted, citing a prior post that is closely aligned with the sort of arguments I’ve been making, although I am more worried about the effects of homophily — our tendency to hang out with people who agree with us — than he is.)

Much of Bora’s post is a thoughtful yet strongly voiced argument that it is the responsibility of the blog owner to facilitate good discussions by moderating comments. He writes:

So, if I write about a wonderful dinner I had last night, and somewhere in there mention that one of the ingredients was a GMO product, but hey, it was tasty, then a comment blasting GMOs is trolling.

Really? Then why did Bora go out of his way to mention that it was a GMO product? He seems to me to be trolling for a response. Now, I think Bora just picked a bad example in this case, but it does show that the concept of “off-topic” contains a boatload of norms and assumptions. And Bora should be fine with this, since his piece begins by encouraging bloggers to claim their conversation space as their own, rather than treating it as a public space governed by the First Amendment. It’s up to the blogger to do what’s necessary to enable the type of conversations that the blogger wants. All of which I agree with.

Nevertheless, Bora’s particular concept of being on-topic highlights a perpetual problem of conversation and knowledge. He makes a very strong case — nicely argued — for why he nukes climate-change denials from his comment thread. Read his post, but the boiled down version is: (a) These comments are without worth because they do not cite real evidence and most of them are astroturf anyway. (b) They create a polarized environment that has the bad effect of raising unjustified doubts in the minds of readers of the post (as per the research he mentions at the beginning of his post). (c) They prevent conversation from advancing thought because they stall the conversation at first principles. Sounds right to me. And I agree with his subsequent denial of the echo chamber effect as well:

The commenting threads are not a place to showcase the whole spectrum of opinions, no matter how outrageous some of them are, but to educate your readers, and to, in turn, get educated by your readers who always know something you don’t.

But this is why the echo chamber idea is so slippery. Conversation consists of the iteration of small differences upon a vast ground of agreement. A discussion of a scientific topic among readers of Scientific American has value insofar as they can assume that, say, evolution is an established theory, that assertions need to be backed by facts of a certain evidentiary sort (e.g., “God told me” doesn’t count), that some assertions are outside of the scope of discussion (“Evolution is good/evil”), etc. These are criteria of a successful conversation, but they are also the marks of an echo chamber. The good Scientific American conversation that Bora curates looks like an echo chamber to the climate change deniers and the creationists. If one looks only at the structure of the conversation, disregarding all the content and norms, the two conversations are indistinguishable.

But now I have to be really clear about what I’m not saying. I am not saying that there’s no difference between creationists and evolutionary biologists, or that they are equally true. I am not saying that both conversations follow the same rules of evidence. I am certainly not saying that their rules of evidence are equally likely to lead to scientific truths. I am not even saying that Bora needs to throw open the doors of his comments. I’m saying something much more modest than that: To each side, the other’s conversation looks like a bunch of people who are reinforcing one another in their wrong beliefs by repeating those beliefs as if they were obviously right. Even the conversation I deeply believe is furthering our understanding — the evolutionary biologists, if you haven’t guessed where I stand on this issue — has the structure of an echo chamber.

This seems to me to have two implications.

First, it should keep us alert to the issue that Bora’s post tries to resolve. He encourages us to exclude views challenging settled science because including ignorant trolls leads casual visitors to think that the issues discussed are still in play. But climate change denial and creationist sites also want to promote good conversations (by their lights), and thus Bora is apparently recommending that those sites also should exclude those who are challenging the settled beliefs that form the enabling ground of conversation — even though in this case it would mean removing comments from all those science-y folks who keep “trolling” them. It seems to me that this leads to a polarized culture in which the echo chamber problem gets worse. Now, I continue to believe that Bora is basically right in his recommendation. I just am not as happy about it as he seems to be. Perhaps Bora is in practice agreeing with Too Big to Know’s recommendation that we recognize that knowledge is fragmented and is not going to bring us all together.

Second, the fact that we cannot structurally distinguish a good conversation from a bad echo chamber I think indicates that we don’t have a good theory of conversation. The echo chamber fear grows in the space that a theory of conversation should inhabit.

I don’t have a theory of conversation in my hip pocket to give you. But I presume that such a theory would include the notion, evident in Bora’s post, that conversations have aims, and that when a conversation is open to the entire world (a radically new phenomenon…thank you WWW!) those aims should be explicitly stated. Likewise for the norms of the conversation. I’m also pretty sure that conversations are never only about what they say they’re about because they are always embedded in complex social environments. And because conversations iterate on differences on a vast ground of similarity, conversations rarely are about changing people’s minds about those grounds. Also, I personally would be suspicious of any theory of conversation that began by viewing conversations as composed fundamentally of messages that are encoded by the sender and decoded by the recipient; that is, I’m not at all convinced that we can get a theory of conversation out of an information-based theory of communication.

But I dunno. I’m confused by this entire topic. Nothing that a good conversation wouldn’t cure.


January 27, 2013

Alfred Russel Wallace’s letters go online, with a very buried CC license that maybe doesn’t apply anyway

The letters of Alfred Russel Wallace, co-discoverer of the theory of evolution by natural selection, are now online. As the Alfred Russel Wallace Correspondence Project explains, the collection consists of 4,000 letters gathered from about 100 different institutions, with about half in the British Natural History Museum and British Library.

The Correspondence Project has, admirably, been releasing the scans without waiting for transcription; more faster is better! Predictably annoyingly, the letters, written by a man who died ten years before the Perpetual Copyright date of 1923, seem to be (but are they?) carefully obstructed by copyright: The Natural History Museum, which houses the collection, asserts copyright over “data held in the Wallace Letters Online database (including letter summaries)” [pdf oddly unreadable in Mac Preview]. Beyond the summaries, exactly what data is this referring to? Not sure. Don’t know.

But that isn’t the full story anyway, for the NHM sends us to the Wallace Fund for more information about the copyright. That page tells us that the unpublished letters are copyrighted until 2039, with this very helpful footnote:

Unless the work was published with the permission of his Literary Estate before 1 August 1989, in which case the work will be in copyright for 70 years after Wallace’s death, unless he died more than 20 years before the work’s publication, in which case copyright would expire 50 years after publication.

Oh.

Eventually it gets to some good news:

Authors wishing to publish such works would ordinarily need to obtain permission from the copyright holder before doing so. However, on July 31st 2011, in an attempt to facilitate the scholarly study of ARW’s writings, the co-executors of ARW’s Literary Estate agreed to allow third parties to publish ARW’s copyright works non-commercially without first having to ask the Literary Estate for permission, under the terms and conditions of Creative Commons license “Attribution-NonCommercial-ShareAlike 3.0 Unported”

So, are the letters published on the NHM site actually available under a Creative Commons non-commercial license? The Wallace Fund that aggregated them seems to think so. The NHM that published them maybe thinks not.

Because copyright is just so magical.

 


TWO HOURS LATER: Please see the first comment, from George Beccaloni, Director of the Wallace Correspondence Project. Thanks, George.

He explains that the transcribed text is available under a Creative Commons non-commercial license, but the digitized images are not. Plus some further complications, such as the content of the database being under copyright, although it is not clear from the site what data that is.

Since the aim of CC is to make it easier for people to re-use material, may I suggest (in the friendliest of fashions) that this be prominently clarified on the sites themselves?

