Joho the Blog » science

October 18, 2015

The Martian

My wife and I just saw The Martian. Loved it. It was as good a movie as could possibly be made out of a book that’s about sciencing the shit out of problems.

The book was the most fun I’ve had in a long time. So I was ready to be disappointed by the movie. Nope.

Compared to, say, Gravity? Gravity’s choreography was awesome, and the very ending of it worked for me. (No spoilers here!) But it had irksome moments and themes, especially Sandra Bullock’s backstory. (No spoilers!)

The Martian was much less pretentious, IMO. It’s about science as problem-solving. Eng Fi, if you will. But the theme that emerges from this is:

Also, Let’s go the fuck to Mars!

(I still think Interstellar is a better movie, although it’s nowhere near as much fun. But I’m not entirely reasonable about Interstellar.)

August 18, 2015

Newton’s non-clockwork universe

The New Atlantis has just published five essays exploring “The Unknown Newton”. It is — bless its heart! — open access. Here’s the table of contents:

Rob Iliffe provides an overview of Newton’s religious thought, including his radically unorthodox theology.

William R. Newman examines the scientific ambitions in Newton’s alchemical labors, which are often written off as deviations from science.

Stephen D. Snobelen — who in the course of writing his essay discovered Newton’s personal, dog-eared copy of a book that had been lost — provides an in-depth look at the connection between Newton’s interpretation of biblical prophecy and his cosmological views.

Andrew Janiak explains how Newton reconciled the apparent tensions between the Bible and the new view of the world described by physics.

Finally, Sarah Dry describes the curious fate of Newton’s unpublished papers, showing what they mean for our understanding of the man and why they remained hidden for so long.

Stephen Snobelen’s article, “Cosmos and Apocalypse,” begins with a paper in the John Locke collection at the Bodleian: Newton’s hand-drawn timeline of the events in Revelation. Snobelen argues that we’ve read too much of the Enlightenment back into Newton.

In particular, the concept of the universe as a pure clockwork that forever operates according to mechanical laws comes from Laplace, not Newton, says Snobelen. He refers to David Kubrin’s 1967 paper “Newton and the Cyclical Cosmos”; it is not open access. (Sign up for free with JSTOR and you get constrained access to its many riches.) Kubrin’s paper is a great piece of work. He makes the case — convincingly to an amateur like me — that Newton and many of his cohorts feared that a perfectly clockwork universe that did not need Divine intervention to operate would be seen as also not needing God to start it up. Newton instead thought that without God’s intervention, the universe would wind down. He hypothesized that comets — newly discovered — were God’s way of refreshing the Universe.

The second half of the Kubrin article is about the extent to which Newton’s late cosmogony was shaped by his Biblical commitments. Most of Snobelen’s article is about the 2004 discovery of a new document that confirms this and adds that God’s intervention heads the universe in a particular direction:

In sum, Newton’s universe winds down, but God also renews it and ensures that it is going somewhere. The analogy of the clockwork universe so often applied to Newton in popular science publications, some of them even written by scientists and scholars, turns out to be wholly unfitting for his biblically informed cosmology.

Snobelen attributes this to Newton’s recognition that the universe consists of forces all acting on one another at the same time:

Newton realized that universal gravity signaled the end of Kepler’s stable orbits along perfect ellipses. These regular geometric forms might work in theory and in a two-body system, but not in the real cosmos where many more bodies are involved.

To maintain the order represented by perfect ellipses required nudges and corrections that only a Deity could accomplish.
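
To make the two-body vs. many-body point concrete, here is a tiny sketch (my own illustration, not anything from Newton, Kubrin, or Snobelen; every name and number in it is made up for the example) of how the net gravitational pull on a planet is a sum over every other body. With only the Sun in the list there is a single term and the orbit is a closed Kepler ellipse; add Jupiter and the extra term continually perturbs the path, which is the drift Newton thought required divine correction.

```typescript
// Toy illustration of many-body gravity (illustrative values only).
interface Body { name: string; mass: number; pos: [number, number, number]; }

const G = 6.674e-11; // gravitational constant, m^3 kg^-1 s^-2

// Net gravitational acceleration on bodies[i] from every other body:
// a_i = sum over j != i of G * m_j * (r_j - r_i) / |r_j - r_i|^3
function acceleration(bodies: Body[], i: number): [number, number, number] {
  const a: [number, number, number] = [0, 0, 0];
  bodies.forEach((b, j) => {
    if (j === i) return;
    const d = [0, 1, 2].map(k => b.pos[k] - bodies[i].pos[k]);
    const r = Math.hypot(d[0], d[1], d[2]);
    const s = (G * b.mass) / (r * r * r);
    for (const k of [0, 1, 2]) a[k] += s * d[k];
  });
  return a;
}

// With only the Sun, Earth's acceleration has one term (a closed ellipse results);
// adding Jupiter introduces a second term that slowly distorts the orbit.
const bodies: Body[] = [
  { name: "Sun",     mass: 1.989e30, pos: [0, 0, 0] },
  { name: "Earth",   mass: 5.972e24, pos: [1.496e11, 0, 0] },
  { name: "Jupiter", mass: 1.898e27, pos: [0, 7.785e11, 0] },
];
console.log(acceleration(bodies, 1)); // Earth's net acceleration, m/s^2
```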

Snobelen points out that the idea of the universe as a clockwork was more Leibniz’s idea than Newton’s. Newton rejected it. Leibniz got God into the universe through a far odder idea than as the Pitcher of Comets: souls (“monads”) experience inhabiting a shared space in which causality obtains only because God coordinates a string of experiences in perfect sync across all the monads.

“Newton’s so-called clockwork universe is hardly timeless, regular, and machine-like,” writes Snobelen. “[I]nstead, it acts more like an organism that is subject to ongoing growth, decay, and renewal.” I’m not sold on the “organism” metaphor based on Snobelen’s evidence, but that tiny point aside, this is a fascinating article.


May 28, 2015

I’m a winner! A limerick winner!

After many years of intermittent entries, I have at long last won the monthly mini-Annals of Improbable Research Limerick Competition. Woohoo! Ish.

AIR presents research that one might find celebrated at the Ig Nobels. In fact, AIR is the creator of the Ig Nobels. AIR’s monthly mini version is free and amusing.

The limerick had to be about: “Preoperative and postoperative gait analyses of patients undergoing great toe-to-thumb transfer,” from the Journal of Hand Surgery, vol. 12, no. 1, 1987, pp 66-69. Rich comic material, obviously.

“Your gait will be fine, understand,
If we sew a toe onto your hand.
   If we did the reverse
   It might be much worse,”
Said the doc in remarks made off hand.

This month’s article for your limericking is: “Improving Phrap-Based Assembly of the Rat Using ‘Reliable’ Overlaps.”

I shall see you on the five-line field of battle!


December 27, 2014

Oculus Thrift

I just received Google’s Oculus Rift emulator. Given that it’s made of cardboard, it’s all kinds of awesome.

Google Cardboard is a poke in Facebook’s eyes. FB bought Oculus Rift, the virtual reality headset, for $2B. Oculus hasn’t yet shipped a product, but its prototypes are mind-melting. My wife and I tried one last year at an Israeli educational tech lab, and we literally had to have people’s hands on our shoulders so we wouldn’t get so disoriented that we’d swoon. The Lab had us on a virtual roller coaster, with the ability to turn our heads to look around. It didn’t matter that it was an early, low-resolution prototype. Swoon.

Oculus is rumored to be priced at around $350 when it ships, and they will sell tons at that price. Basically, anyone who tries one will be a customer or will wish s/he had the money to be a customer. Will it be confined to game players? Not a chance on earth.

So, in the midst of all this justifiable hype about the Oculus Rift, Google announced Cardboard: detailed plans for how to cut out and assemble a holder for your mobile phone that positions it in front of your eyes. The Cardboard software divides the screen in two and creates a parallaxed view for each eye so you think you’re seeing in 3D. It uses your mobile phone’s motion sensors to track the movement of your head as you survey your synthetic domain.
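
To make the mechanics of that paragraph concrete, here is a minimal sketch of Cardboard-style stereo rendering. It is my own illustration, not Google’s Cardboard SDK or its actual code; the function names, the 63 mm interpupillary distance, and the screen size are all assumptions for the example. It shows the two pieces described above: splitting the phone screen into left/right viewports, and offsetting a virtual camera for each eye, with the head’s yaw supplied by the phone’s orientation sensor.

```typescript
// Minimal illustration of Cardboard-style stereo rendering (not the real SDK).
// Idea 1: split the screen into a left and a right viewport.
// Idea 2: render the scene twice, from two camera positions separated by the
// interpupillary distance (IPD) and rotated by the head yaw reported by the phone.

interface Viewport { x: number; y: number; width: number; height: number; }
interface Vec3 { x: number; y: number; z: number; }

// Split a landscape screen down the middle: one half per eye.
function eyeViewports(screenWidth: number, screenHeight: number): [Viewport, Viewport] {
  const half = screenWidth / 2;
  return [
    { x: 0,    y: 0, width: half, height: screenHeight }, // left eye
    { x: half, y: 0, width: half, height: screenHeight }, // right eye
  ];
}

// Offset each eye's camera from the head position along the head's "right"
// axis, which depends on the current yaw (rotation about the vertical axis).
function eyePosition(head: Vec3, yawRadians: number, ipdMeters: number, eye: "left" | "right"): Vec3 {
  const sign = eye === "left" ? -0.5 : 0.5;
  const rightX = Math.cos(yawRadians);   // head's right vector, x component
  const rightZ = -Math.sin(yawRadians);  // head's right vector, z component
  return {
    x: head.x + sign * ipdMeters * rightX,
    y: head.y,
    z: head.z + sign * ipdMeters * rightZ,
  };
}

// Usage: a 1920x1080 phone, head 1.6 m up, yaw 30 degrees, ~63 mm IPD.
const [leftVp, rightVp] = eyeViewports(1920, 1080);
const yaw = (30 * Math.PI) / 180;
const leftCam  = eyePosition({ x: 0, y: 1.6, z: 0 }, yaw, 0.063, "left");
const rightCam = eyePosition({ x: 0, y: 1.6, z: 0 }, yaw, 0.063, "right");
console.log(leftVp, rightVp, leftCam, rightCam);
```

Each half of the screen then renders the same scene from its eye’s camera, and the slight horizontal difference between the two views is what your brain reads as depth.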

I took a look at the plans for building the holder and gave up. For $15 I instead ordered one from Unofficial Cardboard.

When it arrived this morning, I took it out of its shipping container (made out of cardboard, of course), slipped in my HTC mobile phone, clicked on the Google Cardboard software, chose a demo, and was literally — in the virtual sense — flying over the earth in any direction I looked, watching a cartoon set in a forest that I was in, or choosing YouTube music videos by turning to look at them on a circular wall.

Obviously I’m sold on the concept. But I’m also sold on the pure cheekiness of Google’s replicating the core functionality of the Oculus Rift by using existing technology plus a piece of cardboard.

(And, yeah, I’m a little proud of the headline.)


November 24, 2014

[siu] Panel: Capturing the research lifecycle

It’s the first panel of the morning at Shaking It Up. Six men from six companies give brief overviews of their products. The session is led by Courtney Soderberg from the Center for Open Science, which sounds great. [Six panelists means that I won’t be able to keep up. Or keep straight who is who, since there are no name plates. So, I’ll just distinguish them by referring to them as “Another White Guy,” ‘k?]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Riffyn: “Manufacturing-grade quality in the R&D process.” This can easily double R&D productivity “because you stop missing those false negatives.” It starts with design

GitHub: “GitHub is a place where people do software development together.” 10M people. 15M software repositories. He points to Zenodo, a repository for research outputs. Open source communities are better at collaborating than most academic research communities are. The principles of open source can be applied to private projects as well. A key principle: everything has a URL. Also, the processes should be “lock-free” so they can be done in parallel and the decision about branching can be made later.

Texas Advanced Computing Center: Agave is a Science-as-a-Service platform that provides lots of services as well as APIs. “It’s Salesforce for science.”

CERN is partnering with GitHub. “GitHub meets Zenodo.” But it also exports the software into INSPIRE, which links the paper with the software. [This might be the INSPIRE he’s referring to. Sorry. I know I should know this.]

Overleaf was inspired by Etherpad, the collaborative editor. But Etherpad doesn’t do figures or equations. Overleaf does that and much more.

Publiscize helps researchers translate their work into terms that a broader audience can understand. He sees three audiences: intradisciplinary, interdisciplinary, and the public. The site helps scientists create a version readable by the public, and helps them disseminate it through social networks.


Some white guys provided answers I couldn’t quite hear to questions I couldn’t hear. They all seem to favor openness, standards, users owning their own data, and interoperability.

[They turned on the PA, so now I can hear. Yay. I missed the first couple of questions.]

GitHub: Libraries have uploaded 100,000 open access books, all for free. “Expect the unexpected. That happens a lot.” “Academics have been among the most abusive of our platform…in the best possible way.”

Zenodo: The most unusual users are the ones who want to install a copy at their local institutions. “We’re happy to help them fork off Zenodo.”

Q: Where do you see physical libraries fitting in?

AWG: We keep track of some people’s libraries.

AWG: People sometimes accidentally delete their entire company’s repos. We can get it back for you easily if you do.

AWG: Zenodo works with Chris Erdmann at Harvard Library.

AWG: We work with FigShare and others.

AWG: We can provide standard templates for Overleaf so, for example, your grad students’ theses can be managed easily.

AWG: We don’t do anything particular with libraries, but libraries are great.

Courtney: We’re working with ARL on a shared notification system.

Q: Mr. GitHub (Arfon Smith), you said in your comments that reproducibility is a workflow issue?

GitHub: You get reproducibility as a by-product of using tools like the ones represented on this panel. [The other panelists agree. Reproducibility should be just part of the infrastructure that you don’t have to think about.]


[siu] Geoff Bilder on getting the scholarly cyberinfrastructure right

I’m at “Shaking It Up: How to thrive in — and change — the research ecosystem,” an event co-sponsored by Digital Science, Microsoft, Harvard, and MIT. (I think, based on little, that Digital Science is the primary instigator.) I’m late to the opening talk, by Geoff Bilder [twitter:gbilder], dir. of strategic initiatives at CrossRef. He’s also deeply involved in Orcid, an authority-base that provides a stable identity reference for scholars. He refers to Orcid’s principles as the basis of this talk.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Geoff Bilder

Geoff is going through what he thinks is required for organizations contributing to a scholarly cyberinfrastructure. I missed the first few.

It should transcend disciplines and other boundaries.

An organization needs a living will: what will happen to it when it ends? That means there should be formal incentives to fulfill the mission and wind down.

Sustainability: time-limited funds should be used only for time-limited activities. You need other sources for sustaining fundamental operations. The goal should be to generate surplus so the organization isn’t brittle and can respond to new opportunities. There should be a contingency fund sufficient to keep it going for 12 months. This builds trust in the organization.

The revenues ought to be based on services, not on data. You certainly shouldn’t raise money by doing things that are against your mission.

But, he says, people are still wary about establishing a single organization that is central and worldwide. So people need the insurance of forkability. Make sure the data is open (within the limits of privacy) and is available in practical ways. “If we turn evil, you can take the code and the data and start up your own system. If you can bring the community with you, you will win.” It also helps to have a patent non-assertion so no one can tie it up.

He presents a version of Maslow’s hierarchy of needs for a scholarly cyberinfrastructure: tools, safety, esteem, self-actualization.

He ends by pointing to Building 20, MIT’s temporary building for WW II researchers. It produced lots of great results but little infrastructure. “We have to stop asking researchers how to fund infrastructure.” They aren’t particularly good at it. We need to get people who are good at it and are eager to fund a research infrastructure independent of funding individual research projects.


October 27, 2014

[liveblog] Christine Borgman

Christine Borgman, chair of Info Studies at UCLA, and author of the essential Scholarship in the Digital Age, is giving a talk on The Knowledge Infrastructure of Astronomy. Her new book is Big Data, Little Data, No Data: Scholarship in the Networked World, but you’ll have to wait until January. (And please note that precisely because this is a well-organized talk with clearly marked sections, it comes across as choppy in these notes.)

NOTE: Live-blogging. Getting things wrong. Missing points.Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Her new book draws on 15 yrs of studying various disciplines and 7-8 years focusing on astronomy as a discipline. It’s framed around the change to more data-intensive research across the sciences and humanities, plus the policy push for open access to content and to data. (The team site.)

They’ve been looking at four groups:

The world thinks that astronomy and genomics have figured out how to do data-intensive science, she says. But scientists in these groups know that it’s not that straightforward. Christine’s group is trying to learn from these groups and help them learn from one another.

Knowledge Infrastructures are “string and baling wire.” Pieces pulled together. The new layered on top of the old.

The first English scientific journal began almost 350 yrs ago. (Philosophical Transactions of the Royal Society.) We no longer think of the research object as a journal but as a set of articles, objects, and data. People don’t have a simple answer to what is their data. The raw files? The tables of data? When they’re told to share their data, they’re not sure what data is meant. “Even in astronomy we don’t have a single, crisp idea of what are our data.”

It’s very hard to find and organize all the archives of data. Even establishing a chronology is difficult. E.g., “Yes, that project has that date stamp but it’s really a transfer from a prior project twenty years older than that.” It’s hard to map the pieces.

Seamless Astronomy: ADS All Sky Survey, mapping data onto the sky. Also, they’re trying to integrate various link mappings, e.g., Chandra, NED, Simbad, WorldWide Telescope, VizieR, Aladin. But mapping these collections doesn’t tell you why they’re being linked, what they have in common, or what their differences are. What kind of science is being accomplished by making those relationships? Christine hopes her project will help explain this, although not everyone will agree with the explanations.

Her group wants to draw some maps and models: “A Christmas Tree of Links!” She shows a variety of maps, possible ways of organizing the field. E.g., one from 5 yrs ago clusters services, repositories, archives and publishers. Another scheme: Publications, Objects, Observations; the connection between pubs (citations) and observations is the most loosely coupled. “The trend we’re seeing is that astronomy is making considerable progress in tying together the observations, publications, and data.” “Within astronomy, you’ve built many more pieces of your infrastructure than any other field we’ve looked at.”

She calls out Chris Erdmann [sitting immediately in front of me] as a leader in trying to get data curation and custodianship taken up by libraries. Others are worrying about bit-rot and other issues.

Astronomy is committed to open access, but the resource commitments are uneven.

Strengths of astronomy:

  • Collaboration and openness.

  • International coordination.

  • Long term value of data.

  • Agreed standards.

  • Shared resources.

Gaps of astronomy:

  • Investment in data stewardship: varies by mission and by type of research. E.g., space-based missions get more investment than the ground-based ones. (An audience member says that that’s because the space research was so expensive that there was more insistence on making the data public and usable. A lively discussion ensues…)

  • The access to data varies.

  • Curation of tools and technologies.

  • International coordination. Should we curate existing data? But you don’t get funding for using existing data. So, invest in getting new data from new instruments??

Christine ends with some provocative questions about openness. What does it mean exactly? What does it get us?


Q: As soon as you move out of the Solar System to celestial astronomy, all the standards change.

A: When it takes ten years to build an instrument, it forces you to make early decisions about standards. But when you’re deploying sensors in lakes, you don’t always note that this is #127, the one Eric put the tinfoil on top of because it wasn’t working well. Or people use Google Docs and don’t even label the rows and columns because all the readers know what they mean. That makes going back to it much harder. “Making it useful for yourself is hard enough.” It’s harder still to make it useful for someone in 5 yrs, and harder still to make it useful for an unknown scientist in another country speaking another language and maybe from another discipline.

Q: You have to put a data management plan into every proposal, but you can’t make it a budget item… [There is a lively discussion of which funders reasonably fund this]

Q: Why does Europe fund ground-based data better than the US does?

A: [audience] Because of Riccardo Giacconi.

A: [Christine] We need to better fund the invisible workforce that makes science work. We’re trying to cast a light on this invisible infrastructure.


April 12, 2014

[2b2k] Protein Folding, 30 years ago

Simply in terms of nostalgia, this 1985 video called “Knowledge Engineering: Artificial Intelligence Research at the Stanford Heuristic Programming Project” from the Stanford archives is charming right down to its Tron-like digital soundtrack.

But it’s also really interesting if you care about the way we’ve thought about knowledge. The Stanford Heuristic Programming Project under Edward Feigenbaum did groundbreaking work in how computers represent knowledge, emphasizing the content and not just the rules. (Here is a 1980 article about the Project and its projects.)

And then at the 8:50 mark, it expresses optimism that an expert system would be able to represent not only every atom of proteins but how they fold.

Little could it have been predicted that even 30 years later protein folding would be better handled by the human brain than by computers, and that humans playing a game — Fold.It — would produce useful results.

It’s certainly the case that we have expert systems all over the place now, from Google Maps to the Nest thermostat. But we also see another type of expert system that was essentially unpredictable in 1985. One might think that the domain of computer programming would be susceptible to being represented in an expert system because it is governed by a finite set of perfectly knowable rules, unlike the fields the Stanford project was investigating. And there are of course expert systems for programming. But where do the experts actually go when they have a problem? To StackOverflow where other human beings can make suggestions and iterate on their solutions. One could argue that at this point StackOverflow is the most successful “expert system” for computer programming in that it is the computer-based place most likely to give you an answer to a question. But it does not look much like what the Stanford project had in mind, for how could even Edward Feigenbaum have predicted what human beings can and would do if connected at scale?

(Here’s an excellent interview with Feigenbaum.)


April 9, 2014

[shorenstein] Andy Revkin on communicating climate science

I’m at a talk by Andrew Revkin of the NY Times’ Dot Earth blog at the Shorenstein Center. [Alex Jones mentions in his introduction that Andy is a singer-songwriter who played with Pete Seeger. Awesome!]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Andy says he’s been a science reporter for 31 years. His first magazine article was about the dangers of the anti-pot herbicide paraquat. (The article won an award for investigative journalism). It had all the elements — bad guys, victims, drama — typical of “Woe is me. Shame on you” environmental reporting. His story on global warming in 1988 has “virtually the same cast of characters” that you see in today’s coverage. “And public attitudes are about the same…Essentially the landscape hasn’t changed.” Over time, however, he has learned how complex climate science is.

In 2010, his blog moved from NYT’s reporting to editorial, so now he is freer to express his opinions. He wants to talk with us today about the sort of “media conversation” that occurs now, but didn’t when he started as a journalist. We now have a cloud of people who follow a journalist, ready to correct them. “You can say this is terrible. It’s hard to separate noise from signal. And that’s correct.” “It can be noisy, but it’s better than the old model, because the old model wasn’t always right.” Andy points to the NYT coverage on the build up to the invasion of Iraq. But this also means that now readers have to do a lot of the work themselves.

He left the NYT in his mid-fifties because he saw that access to info more often than not doesn’t change you, but instead reinforces your positions. So at Pace U he studies how and why people understand ecological issues. “What is it about us that makes us neglect long-term imperatives?” This works better in a blog conversation drawing upon other people’s expertise than in an article. “I’m a shitty columnist,” he says. People read columns to reinforce their beliefs, although maybe you’ll read George Will to refresh your animus :) “This makes me not a great spokesperson for a position.” Most positions are one-sided, whereas Andy is interested in the processes by which we come to our understanding.

Q: [alex jones] People seem stupider about the environment than they were 20 years ago. They’re more confused.

A: In 1991 there was a survey of museum goers who thought that global warming was about the ozone hole, not about greenhouse gases. A 2009 study showed that on a scale of 1-6 of alarm, most Americans were at 5 (“concerned,” not yet “alarmed”). Yet, Andy points out, the Cap and Trade bill failed. Likewise, the vast majority support rebates on solar panels and fuel-efficient vehicles. They support requiring 45 mpg fuel efficiency across vehicle fleets, even at a $1K price premium. He also points to some Gallup data that showed that more than half of the respondents worry a great deal or a fair amount, but that number hasn’t changed since Gallup began asking the question in 1989. [link] Furthermore, global warming doesn’t show up as one of the issues they worry about.

The people we need to motivate are innovators. We’ll have 9B on the planet soon, and 2B who can’t make reasonable energy choices.

Q: Are we heading toward a climate tipping point?

A: There isn’t evidence that tipping points in climate are real, and if they are, we can’t really predict them. [link]

Q: The permafrost isn’t going to melt?

A: No, it is melting. But we don’t know if it will be catastrophic.

Andy points to a photo of despair at a climate conference. But then there’s Scott H. DeLisi who represents a shift in how we relate to communities: Facebook, Twitter, Google Hangouts. Inside Climate News won the Pulitzer last year. “That says there are new models that may work. Can they sustain their funding?” Andy’s not sure.

“Journalism is a shrinking wedge of a growing pie of ways to tell stories.”

“Escape from the Nerd Loop”: people talking to one another about how to communicate science issues. Andy loves Twitter. The hashtag is as big an invention as photovoltaics, he says. He references Chris Messina, its inventor, and points to how useful it is for separating and gathering strands of information, including at NASA’s Asteroid Watch. Andy also points to descriptions by a climate scientist who went to the Arctic [or Antarctic?] that he curated, and to a singing scientist.

Q: I’m a communications student. There was a guy named Marshall McLuhan, maybe you haven’t heard of him. Is the medium the message?

A: There are different tools for different jobs. I could tell you the volume of the atmosphere, but Adam Nieman, a science illustrator, used this way to show it to you.

Q: Why is it so hard to get out of catastrophism and into thinking about solutions?

A: Journalism usually focuses on the down side. If there’s no “Woe is me” element, it tends not to make it onto the front page. At Pace U. we travel each spring and do a film about a sustainable resource farming question. The first was on shrimp-farming in Belize. It’s got thousands of views but it’s not on the nightly news. How do we shift our norms in the media?

[david ropiek] Inherent human psychology: we pay more attention to risks. People who want to move the public dial inherently are attracted to the more attention-getting headlines, like “You’re going to die.”

A: Yes. And polls show that what people say about global warming depends on the weather outside that day.

Q: A report recently drew the connection between climate change and other big problems facing us: poverty, war, etc. What did you think of it?

A: It was good. But is it going to change things? The Extremes report likewise. The city that was most affected by the recent typhoon had tripled its population, mainly with poor people. Andy values Jesse Ausubel, who says that most politics is people pulling on disconnected levers.

Q: Any reflections on the disconnect between breezy IPCC executive summaries and the depth of the actual scientific report?

A: There have been demands for IPCC to write clearer summaries. Its charter has it focused on the down sides.

Q: How can we use open data and community tools to make better decisions about climate change? Will the data Obama opened up last month help?

A: The forces of stasis can congregate on that data and raise questions about it based on tiny inconsistencies. So I’m not sure it will change things. But I’m all for transparency. It’s an incredibly powerful tool, like when the US Embassy was doing its own Twitter feed on Beijing air quality. We have this wonderful potential now; Greenpeace (who Andy often criticizes) did on-the-ground truthing about companies deforesting orangutan habitats in Indonesia. Then they did a great campaign to show who’s using the palm oil: buying a KitKat bar contributes to the deforesting of Borneo. You can do this ground-truthing now.

Q: In the past 6 months there seems to have been a jump in climate change coverage. No?

A: I don’t think there’s more coverage.

Q: India and Pakistan couldn’t agree on water control in part because the politicians talked about scarcity while the people talked in terms of their traditional animosities. How can we find the right vocabularies?

A: If the conversation is about reducing vulnerabilities and energy efficiency, you can get more consensus than talking about global warming.

Q: How about using data visualizations instead of words?

A: I love visualizations. They spill out from journalism. How much it matters is another question. Ezra Klein just did a piece that says that information doesn’t matter.

Q: Can we talk about your “Years of Living Dangerously” piece? [Couldn’t hear the rest of the question].

A: My blog is edited by the op-ed desk, and I don’t always understand their decisions. Journalism migrates toward controversy. The Times has a feature “Room for Debate,” and I keep proposing “Room for Agreement” [link], where you’d see what people who disagree about an issue can agree on.

Q: [me] Should we still be engaging with deniers? With whom should we be talking?

A: Yes, we should engage. We taxpayers subsidize second mortgages on houses in wildfire zones in Colorado. Why? So firefighters have to put themselves at risk? [link] That’s an issue that people agree on across the spectrum. When it comes to deniers, we have to ask: what exactly are you denying? Particular data? The scientific method? Physics? I’ve come to the conclusion that even if we had perfect information, we still wouldn’t galvanize the action we need.

[Andy ends by singing a song about liberated carbon. That’s not something you see every day at the Shorenstein Center.]

[UPDATE (the next day): I added some more links.]

