Joho the Blog » science

November 24, 2014

[siu] Panel: Capturing the research lifecycle

It’s the first panel of the morning at Shaking It Up. Six men from six companies give brief overviews of their products. The session is led by Courtney Soderberg from the Center for Open Science, which sounds great. [Six panelists means that I won’t be able to keep up. Or keep straight who is who, since there are no name plates. So, I’ll just distinguish them by referring to them as “Another White Guy,” ‘k?]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Riffyn: “Manufacturing-grade quality in the R&D process.” This can easily double R&D productivity “because you stop missing those false negatives.” It starts with design

Github: “GitHub is a place where people do software development together.” 10M people. 15M software repositories. He points to Zenodo, a repository for research outputs. Open source communities are better at collaborating than most academic research communities are. The principles of open source can be applied to private projects as well. A key principle: everything has a URL. Also, the processes should be “lock-free” so they can be done in parallel and the decision about branching can be made later.

Texas Advanced Computing Center: Agave is a Science-as-a-Service platform. It’s a platform that provides lots of services as well as APIs. “It’s SalesForce for science.”

CERN is partnering with GitHub. “GitHub meets Zenodo.” But it also exports the software into INSPIRE, which links the paper with the software. [This might be the INSPIRE he’s referring to. Sorry. I know I should know this.]

Overleaf was inspired by Etherpad, the collaborative editor. But Etherpad doesn’t do figures or equations. Overleaf does that and much more.
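[As a quick illustration (mine, not the panel’s), here is the kind of minimal LaTeX source Overleaf compiles in the browser, with an equation and an included figure; the figure file name is only a placeholder.]

```latex
\documentclass{article}
\usepackage{graphicx} % needed for \includegraphics
\begin{document}

Mass--energy equivalence:
\begin{equation}
  E = mc^2
\end{equation}

\begin{figure}[h]
  \centering
  % placeholder file name; substitute any image uploaded to the project
  \includegraphics[width=0.6\linewidth]{results-plot.pdf}
  \caption{A figure with a caption, which a plain Etherpad-style pad can't render.}
\end{figure}

\end{document}
```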

Publiscize helps researchers translate their work into terms that a broader audience can understand. He sees three audiences: intradisciplinary, interdisciplinary, and the public. The site helps scientists create a version readable by the public, and helps them disseminate it through social networks.

Q&A

Some white guys provided answers I couldn’t quite hear to questions I couldn’t hear. They all seem to favor openness, standards, users owning their own data, and interoperability.

[They turned on the PA, so now I can hear. Yay. I missed the first couple of questions.]

Github: Libraries have uploaded 100,000 open access books, all for free. “Expect the unexpected. That happens a lot.” “Academics have been among the most abusive of our platform…in the best possible way.”

Zenodo: The most unusual users are the ones who want to install a copy at their local institutions. “We’re happy to help them fork off Zenodo.”

Q: Where do you see physical libraries fitting in?

AWG: We keep track of some people’s libraries.

AWG: People sometimes accidentally delete their entire company’s repos. We can get it back for you easily if you do.

AWG: Zenodo works with Chris Erdmann at Harvard Library.

AWG: We work with FigShare and others.

AWG: We can provide standard templates for Overleaf so, for example, your grad students’ theses can be managed easily.

AWG: We don’t do anything particular with libraries, but libraries are great.

Courtney: We’re working with ARL on a shared notification system.

Q: Mr. GitHub (Arfon Smith), you said in your comments that reproducibility is a workflow issue?

GitHub: You get reproducibility as a by-product of using tools like the ones represented on this panel. [The other panelists agree. Reproducibility should be just part of the infrastructure that you don’t have to think about.]

5 Comments »

[siu] Geoff Bilder on getting the scholarly cyberinfrastructure right

I’m at “Shaking It Up: How to thrive in — and change — the research ecosystem,” an event co-sponsored by Digital Science, Microsoft, Harvard, and MIT. (I think, based on little, that Digital Science is the primary instigator.) I’m late to the opening talk, by Geoff Bilder [twitter:gbilder], dir. of strategic initiatives at CrossRef. He’s also deeply involved in Orcid, an authority-base that provides a stable identity reference for scholars. He refers to Orcid’s principles as the basis of this talk.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Geoff Bilder

Geoff is going through what he thinks is required for organizations contributing to a scholarly cyberinfrastructure. I missed the first few.


It should transcend disciplines and other boundaries.


An organization needs a living will: what will happen to it when it ends? That means there should be formal incentives to fulfill the mission and wind down.


Sustainability: time-limited funds should be used only for time-limited activities. You need other sources for sustaining fundamental operations. The goal should be to generate surplus so the organization isn’t brittle and can respond to new opportunities. There should be a contingency fund sufficient to keep it going for 12 months. This builds trust in the organization.

The revenues ought to be based on services, not on data. You certainly shouldn’t raise money by doing things that are against your mission.


But, he says, people are still wary about establishing a single organization that is central and worldwide. So people need the insurance of forkability. Make sure the data is open (within the limits of privacy) and is available in practical ways. “If we turn evil, you can take the code and the data and start up your own system. If you can bring the community with you, you will win.” It also helps to have a patent non-assertion so no one can tie it up.


He presents a version of Maslow’s hierarchy of needs for a scholarly cyberinfrastructure: tools, safety, esteem, self-actualization.


He ends by pointing to Building 20, MIT’s temporary building for WW II researchers. It produced lots of great results but little infrastructure. “We have to stop asking researchers how to fund infrastructure.” They aren’t particularly good at it. We need to get people who are good at it and are eager to fund a research infrastructure independent of funding individual research projects.

3 Comments »

October 27, 2014

[liveblog] Christine Borgman

Christine Borgman, chair of Info Studies at UCLA, and author of the essential Scholarship in the Digital Age, is giving a talk on The Knowledge Infrastructure of Astronomy. Her new book is Big Data, Little Data, No Data: Scholarship in the Networked World, but you’ll have to wait until January. (And please note that precisely because this is a well-organized talk with clearly marked sections, it comes across as choppy in these notes.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Her new book draws on 15 yrs of studying various disciplines and 7-8 years focusing on astronomy as a discipline. It’s framed around the change to more data-intensive research across the sciences and humanities, plus the policy push for open access to content and to data. (The team site.)

They’ve been looking at four groups:

The world thinks that astronomy and genomics have figured out how to do data-intensive science, she says. But scientists in these groups know that it’s not that straightforward. Christine’s group is trying to learn from these groups and help them learn from one another.

Knowledge Infrastructures are “string and baling wire.” Pieces pulled together. The new layered on top of the old.

The first English scientific journal began almost 350 yrs ago. (Philosophical Transactions of the Royal Society.) We no longer think of the research object as a journal but as a set of articles, objects, and data. People don’t have a simple answer to what is their data. The raw files? The tables of data? When they’re told to share their data, they’re not sure what data is meant. “Even in astronomy we don’t have a single, crisp idea of what are our data.”

It’s very hard to find and organize all the archives of data. Even establishing a chronology is difficult. E.g., “Yes, that project has that date stamp but it’s really a transfer from a prior project twenty years older than that.” It’s hard to map the pieces.

Seamless Astronomy: ADS All Sky Survey, mapping data onto the sky. Also, they’re trying to integrate various link mappings, e.g., Chandra, NED, Simbad, WorldWide Telescope, arXiv.org, VizieR, Aladin. But mapping these collections doesn’t tell you why they’re being linked, what they have in common, or what are their differences. What kind of science is being accomplished by making those relationships? Christine hopes her project will help explain this, although not everyone will agree with the explanations.

Her group wants to draw some maps and models: “A Christmas Tree of Links!” She shows a variety of maps, possible ways of organizing the field. E.g., one from 5 yrs ago clusters services, repositories, archives and publishers. Another scheme: Publications, Objects, Observations; the connection between pubs (citations) and observations is the most loosely coupled. “The trend we’re seeing is that astronomy is making considerable progress in tying together the observations, publications, and data.” “Within astronomy, you’ve built many more pieces of your infrastructure than any other field we’ve looked at.”

She calls out Chris Erdmann [sitting immediately in front of me] as a leader in trying to get data curation and custodianship taken up by libraries. Others are worrying about bit-rot and other issues.

Astronomy is committed to open access, but the resource commitments are uneven.

Strengths of astronomy:

  • Collaboration and openness.

  • International coordination.

  • Long term value of data.

  • Agreed standards.

  • Shared resources.

Gaps of astronomy:


  • Investment in data stewardship: varies by mission and by type of research. E.g., space-based missions get more investment than the ground-based ones. (An audience member says that that’s because the space research was so expensive that there was more insistence on making the data public and usable. A lively discussion ensues…)


  • The access to data varies.


  • Curation of tools and technologies


  • International coordination. Should we curate existing data? But you don’t get funding for using existing data. So, invest in getting new data from new instruments?


Christine ends with some provocative questions about openness. What does it mean exactly? What does it get us?


Q&A


Q: As soon as you move out of the Solar System to celestial astronomy, all the standards change.


A: When it takes ten years to build an instrument, it forces you to make early decisions about standards. But when you’re deploying sensors in lakes, you don’t always note that this is #127 that Eric put the tinfoil on top of because it wasn’t working well. Or people use Google Docs and don’t even label the rows and columns because all the readers know what they mean. That makes going back to it much harder. “Making it useful for yourself is hard enough.” It’s harder still to make it useful for someone in 5 yrs, and harder still to make it useful for an unknown scientist in another country speaking another language and maybe from another discipline.


Q: You have to put a data management plan into every proposal, but you can’t make it a budget item… [There is a lively discussion of which funders reasonably fund this]


Q: Why does Europe fund ground-based data better than the US does?


A: [audience] Because of Riccardo Giacconi.

A: [Christine] We need to better fund the invisible workforce that makes science work. We’re trying to cast a light on this invisible infrastructure.

1 Comment »

April 12, 2014

[2b2k] Protein Folding, 30 years ago

Simply in terms of nostalgia, this 1985 video called “Knowledge Engineering: Artificial Intelligence Research at the Stanford Heuristic Programming Project” from the Stanford archives is charming right down to its Tron-like digital soundtrack.

But it’s also really interesting if you care about the way we’ve thought about knowledge. The Stanford Heuristic Programming Project under Edward Feigenbaum did groundbreaking work in how computers represent knowledge, emphasizing the content and not just the rules. (Here is a 1980 article about the Project and its projects.)

And then at the 8:50 mark, it expresses optimism that an expert system would be able to represent not only every atom of proteins but how they fold.

Little could it have been predicted that, even 30 years later, protein folding would still be handled better by human brains than by computers, and that humans playing a game — Fold.It — would produce useful results.

It’s certainly the case that we have expert systems all over the place now, from Google Maps to the Nest thermostat. But we also see another type of expert system that was essentially unpredictable in 1985. One might think that the domain of computer programming would be susceptible to being represented in an expert system because it is governed by a finite set of perfectly knowable rules, unlike the fields the Stanford project was investigating. And there are of course expert systems for programming. But where do the experts actually go when they have a problem? To StackOverflow where other human beings can make suggestions and iterate on their solutions. One could argue that at this point StackOverflow is the most successful “expert system” for computer programming in that it is the computer-based place most likely to give you an answer to a question. But it does not look much like what the Stanford project had in mind, for how could even Edward Feigenbaum have predicted what human beings can and would do if connected at scale?

(Here’s an excellent interview with Feigenbaum.)

Be the first to comment »

April 9, 2014

[shorenstein] Andy Revkin on communicating climate science

I’m at a talk by Andrew Revkin of the NY Times’ Dot Earth blog at the Shorenstein Center. [Alex Jones mentions in his introduction that Andy is a singer-songwriter who played with Pete Seeger. Awesome!]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Andy says he’s been a science reporter for 31 years. His first magazine article was about the dangers of the anti-pot herbicide paraquat. (The article won an award for investigative journalism). It had all the elements — bad guys, victims, drama — typical of “Woe is me. Shame on you” environmental reporting. His story on global warming in 1988 has “virtually the same cast of characters” that you see in today’s coverage. “And public attitudes are about the same…Essentially the landscape hasn’t changed.” Over time, however, he has learned how complex climate science is.

In 2010, his blog moved from NYT’s reporting to editorial, so now he is freer to express his opinions. He wants to talk with us today about the sort of “media conversation” that occurs now, but didn’t when he started as a journalist. We now have a cloud of people who follow a journalist, ready to correct them. “You can say this is terrible. It’s hard to separate noise from signal. And that’s correct.” “It can be noisy, but it’s better than the old model, because the old model wasn’t always right.” Andy points to the NYT coverage on the build up to the invasion of Iraq. But this also means that now readers have to do a lot of the work themselves.

He left the NYT in his mid-fifties because he saw that access to info more often than not doesn’t change you, but instead reinforces your positions. So at Pace U he studies how and why people understand ecological issues. “What is it about us that makes us neglect long-term imperatives?” This works better in a blog, in a conversation drawing upon other people’s expertise, than in an article. “I’m a shitty columnist,” he says. People read columns to reinforce their beliefs, although maybe you’ll read George Will to refresh your animus :) “This makes me not a great spokesperson for a position.” Most positions are one-sided, whereas Andy is interested in the processes by which we come to our understanding.

Q: [alex jones] People seem stupider about the environment than they were 20 years ago. They’re more confused.

A: In 1991 there was a survey of museum goers who thought that global warming was about the ozone hole, not about greenhouse gases. A 2009 study showed that on a scale of 1-6 of alarm, most Americans were at 5 (“concerned,” not yet “alarmed”). Yet, Andy points out, the Cap and Trade bill failed. Likewise, the vast majority support rebates on solar panels and fuel-efficient vehicles. They support requiring 45 mpg fuel efficiency across vehicle fleets, even at a $1K price premium. He also points to some Gallup data that showed that more than half of the respondents worry a great deal or a fair amount, but that number hasn’t changed since Gallup began asking the question in 1989. [link] Furthermore, global warming doesn’t show up as one of the issues they worry about.

The people we need to motivate are innovators. We’ll have 9B on the planet soon, and 2B who can’t make reasonable energy choices.

Q: Are we heading toward a climate tipping point?

A: There isn’t evidence that tipping points in climate are real, and if they are, we can’t really predict them. [link]

Q: The permafrost isn’t going to melt?

A: No, it is melting. But we don’t know if it will be catastrophic.

Andy points to a photo of despair at a climate conference. But then there’s Scott H. DeLisi who represents a shift in how we relate to communities: Facebook, Twitter, Google Hangouts. Inside Climate News won the Pulitzer last year. “That says there are new models that may work. Can they sustain their funding?” Andy’s not sure.

“Journalism is a shrinking wedge of a growing pie of ways to tell stories.”

“Escape from the Nerd Loop”: people talking to one another about how to communicate science issues. Andy loves Twitter. The hashtag is as big an invention as photovoltaics, he says. He references Chris Messina, its inventor, and points to how useful it is for separating and gathering strands of information, including at NASA’s Asteroid Watch. Andy also points to descriptions by a climate scientist who went to the Arctic [or Antarctic?] that he curated, and to a singing scientist.

Q: I’m a communications student. There was a guy named Marshall McLuhan, maybe you haven’t heard of him. Is the medium the message?

A: There are different tools for different jobs. I could tell you the volume of the atmosphere, but Adam Nieman, a science illustrator, used this way to show it to you.

Q: Why is it so hard to get out of catastrophism and into thinking about solutions?

A: Journalism usually focuses on the down side. If there’s no “Woe is me” element, it tends not to make it onto the front page. At Pace U. we travel each spring and do a film about a sustainable resource farming question. The first was on shrimp-farming in Belize. It’s got thousands of views but it’s not on the nightly news. How do we shift our norms in the media?

[david ropeik] Inherent human psychology: we pay more attention to risks. People who want to move the public dial inherently are attracted to the more attention-getting headlines, like “You’re going to die.”

A: Yes. And polls show that what people say about global warming depends on the weather outside that day.

Q: A report recently drew the connection between climate change and other big problems facing us: poverty, war, etc. What did you think of it?

A: It was good. But is it going to change things? The Extremes report likewise. The city that was most affected by the recent typhoon had tripled its population, mainly with poor people. Andy values Jesse Ausubel who says that most politics is people pulling on disconnected levers.

Q: Any reflections on the disconnect between breezy IPCC executive summaries and the depth of the actual scientific report?

A: There have been demands for IPCC to write clearer summaries. Its charter has it focused on the down sides.

Q: How can we use open data and community tools to make better decisions about climate change? Will the data Obama opened up last month help?

A: The forces of stasis can congregate on that data and raise questions about it based on tiny inconsistencies. So I’m not sure it will change things. But I’m all for transparency. It’s an incredibly powerful tool, like when the US Embassy was doing its own twitter feed on Beijing air quality. We have this wonderful potential now; Greenpeace (who Andy often criticizes) did on-the-ground truthing about companies deforesting orangutan habitats in Indonesia. Then they did a great campaign to show who’s using the palm oil: Buying a Kit Kat bar contributes to the deforesting of Borneo. You can do this ground-truthing now.

Q: In the past 6 months there seems to have been a jump in climate change coverage. No?

A: I don’t think there’s more coverage.

Q: India and Pakistan couldn’t agree on water control in part because the politicians talked about scarcity while the people talked in terms of their traditional animosities. How can we find the right vocabularies?

A: If the conversation is about reducing vulnerabilities and energy efficiency, you can get more consensus than talking about global warming.

Q: How about using data visualizations instead of words?

A: I love visualizations. They spill out from journalism. How much it matters is another question. Ezra Klein just did a piece that says that information doesn’t matter.

Q: Can we talk about your “Years of Living Dangerously” piece? [Couldn’t hear the rest of the question].

A: My blog is edited by the op-ed desk, and I don’t always understand their decisions. Journalism migrates toward controversy. The Times has a feature “Room for Debate,” and I keep proposing “Room for Agreement” [link], where you’d see what people who disagree about an issue can agree on.

Q: [me] Should we still be engaging with deniers? With whom should we be talking?

A: Yes, we should engage. We taxpayers subsidize second mortgages on houses in wild fire zones in Colorado. Why? So firefighters have to put themselves at risk? [link] That’s an issue that people agree on across the spectrum. When it comes to deniers, we have to ask what exactly they’re denying. Particular data? Scientific method? Physics? I’ve come to the conclusion that even if we had perfect information, we still wouldn’t galvanize the action we need.

[Andy ends by singing a song about liberated carbon. That’s not something you see every day at the Shorenstein Center.]

[UPDATE (the next day): I added some more links.]

2 Comments »

January 2, 2014

[2b2k] Social Science in the Age of Too Big to Know

Gary King [twitter:kinggarry], Director of Harvard’s Institute for Quantitative Social Science, has published an article (Open Access!) on the current status of this branch of science. Here’s the abstract:

The social sciences are undergoing a dramatic transformation from studying problems to solving them; from making do with a small number of sparse data sets to analyzing increasing quantities of diverse, highly informative data; from isolated scholars toiling away on their own to larger scale, collaborative, interdisciplinary, lab-style research teams; and from a purely academic pursuit focused inward to having a major impact on public policy, commerce and industry, other academic fields, and some of the major problems that affect individuals and societies. In the midst of all this productive chaos, we have been building the Institute for Quantitative Social Science at Harvard, a new type of center intended to help foster and respond to these broader developments. We offer here some suggestions from our experiences for the increasing number of other universities that have begun to build similar institutions and for how we might work together to advance social science more generally.

In the article, Gary argues that Big Data requires Big Collaboration to be understood:

Social scientists are now transitioning from working primarily on their own, alone in their offices (a style that dates back to when the offices were in monasteries) to working in highly collaborative, interdisciplinary, larger scale, lab-style research teams. The knowledge and skills necessary to access and use these new data sources and methods often do not exist within any one of the traditionally defined social science disciplines and are too complicated for any one scholar to accomplish alone.

He begins by giving three excellent examples of how quantitative social science is opening up new possibilities for research.

1. Latanya Sweeney [twitter:LatanyaSweeney] found “clear evidence of racial discrimination” in the ads served up by newspaper websites.

2. A study of all 187M registered voters in the US showed that a third of those listed as “inactive” in fact cast ballots, “and the problem is not politically neutral.”

3. A study of 11M social media posts from China showed that the Chinese government is not censoring speech but is censoring “attempts at collective action, whether for or against the government…”

Studies such as these “depended on IQSS infrastructure, including access to experts in statistics, the social sciences, engineering, computer science, and American and Chinese area studies. ”

Gary also points to “the coming end of the quantitative-qualitative divide” in the social sciences, as new techniques enable massive amounts of qualitative data to be quantified, enriching purely quantitative data and extracting additional information from the qualitative reports.

Instead of quantitative researchers trying to build fully automated methods and qualitative researchers trying to make do with traditional human-only methods, now both are heading toward using or developing computer-assisted methods that empower both groups.
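As a purely illustrative sketch of what “computer-assisted” can mean at its simplest (the responses and the coding scheme below are invented for the example, not taken from King’s article), a few lines of code can turn free-text answers into category counts that a quantitative analysis can then work with:

```python
# Toy example: quantify qualitative survey responses by tallying
# keyword hits against a small, hand-built coding scheme.
from collections import Counter

responses = [  # invented free-text answers
    "The new policy made my commute longer and more expensive.",
    "Commute is fine, but child care costs are the real problem.",
    "Costs keep rising; the policy did nothing about housing.",
]

# Hypothetical coding scheme mapping keywords to analytic categories.
codes = {
    "commute": "transportation",
    "child care": "care work",
    "housing": "housing",
    "cost": "affordability",
    "expensive": "affordability",
}

counts = Counter()
for text in responses:
    lowered = text.lower()
    for keyword, category in codes.items():
        if keyword in lowered:  # simple substring match; good enough for a sketch
            counts[category] += 1

print(counts)  # keyword hits per category
```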

We are seeing a redefinition of social science, he argues:

We instead use the term “social science” more generally to refer to areas of scholarship dedicated to understanding, or improving the well-being of, human populations, using data at the level of (or informative about) individual people or groups of people.

This definition covers the traditional social science departments in faculties of schools of arts and science, but it also includes most research conducted at schools of public policy, business, and education. Social science is referred to by other names in other areas but the definition is wider than use of the term. It includes what law school faculty call “empirical research,” and many aspects of research in other areas, such as health policy at schools of medicine. It also includes research conducted by faculty in schools of public health, although they have different names for these activities, such as epidemiology, demography, and outcomes research.

The rest of the article reflects on pragmatic issues, including what this means for the sorts of social science centers to build, since community is “by far the most important component leading to success…” “If academic research became part of the X-games, our competitive event would be ‘extreme cooperation.’”

1 Comment »

November 20, 2013

[liveblog][2b2k] David Eagleman on the brain as networks

I’m at re comm 13, an odd conference in Kitzbühel, Austria: 2.5 days of talks to 140 real estate executives, but the talks are about anything except real estate. David Eagleman, a neuroscientist at Baylor and a well-known author, is giving a talk. (Last night we had one of those compressed conversations that I can’t wait to be able to continue.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

How do we know your thinking is in your brain? If you damage your finger, you don’t change, but damage to your brain can change basic facets of your life. “The brain is the densest representation of who you are.” We’re the only species trying to figure out our own programming language. We’ve discovered the most complicated device in the universe: our own brains. Ten billion neurons. Every single neuron contains the entire human genome and thousands of proteins doing complicated computations. Each neuron is connected to tens of thousands of its neighbors, meaning there are 100s of trillions of connections. These numbers “bankrupt the language.”
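[A quick order-of-magnitude check of the figures as he gives them (the talk’s numbers, not a precise anatomical count):]

```latex
\[
  10^{10}\ \mbox{neurons} \times 10^{4}\ \mbox{connections per neuron}
  \approx 10^{14}\ \mbox{connections, i.e., hundreds of trillions}
\]
```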

Almost all of the operations of the brain are happening at a level invisible to us. Taking a drink of water requires a “lightning storm” of activity at the neural level. This leads us to a concept of the unconscious. The conscious part of you is the smallest bit of what’s happening in the brain. It’s like a stowaway on a transatlantic journey that’s taking credit for the entire trip. When you think of something, your brain’s been working on it for hours or days. “It wasn’t really you that thought of it.”

About the unconscious: Psychologists gave photos of women to men and asked them to evaluate how attractive they are. Some of the photos were the same women, but with dilated eyes. The men rated them as being more attractive but none of them noticed the dilation. Dilated eyes are a sign of sexual readiness in women. Men made their choices with no idea of why.

More examples: In the US, if your name is Dennis or Denise, you’re more likely to become a dentist. These dentists have a conscious narrative about why they became dentists that misses the trick their brain has played on them. Likewise, people are statistically more likely to marry someone whose first name begins with the same first letter as theirs. And, if you are holding a warm mug of coffee, you’ll describe the relationship with your mother as warmer than if you’re holding an iced cup. There is an enormous gap between what you’re doing and what your conscious mind is doing.

“We should be thankful for that gap.” There’s so much going on under the hood, that we need to be shielded from the details. The conscious mind gets in trouble when it starts paying attention to what it’s doing. E.g., try signing your name with both hands in opposite directions simultaneously: it’s easy until you think about it. Likewise, if you now think about how you steer when making a lane change, you’re likely to enact it wrong. (You actually turn left and then turn right to an equal measure.)

Know thyself, sure. But neuroscience teaches us that you are many things. The brain is not a computer with a single output. It has many networks that are always competing. The brain is like a parliament that debates an action. When deciding between two sodas, one network might care about the price, another about the experience, another about the social aspect (cool or lame), etc. They battle. David looks at three of those networks:

1. How does the brain make decisions about valuation? E.g., people will walk 10 mins to save 10 € on a 20 € pen but not on a 557 € suit. Also, we have trouble making comparisons of worth among disparate items unless they are in a shared context. E.g., Williams Sonoma had a bread baking machine for $275 that did not sell. Once they added a second one for $370, it started selling. In real estate, if a customer is trying to decide between two homes, one modern and one traditional, if you want them to buy the modern one, show them another modern one. That gives them the context by which they can decide to buy it.
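[The pen/suit example is the familiar relative-versus-absolute effect; a quick calculation with the talk’s numbers shows why the identical 10 € saving feels so different:]

```latex
\[
  \frac{10}{20} = 50\% \ \mbox{of the pen's price}
  \qquad
  \frac{10}{557} \approx 1.8\% \ \mbox{of the suit's price}
\]
```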

Everything is associated with everything else in the brain. (It’s an associative network.) Coffee used to be $0.50. When Starbucks started, they had to unanchor it from the old model so they made the coffee houses arty and renamed the sizes. Having lost the context for comparison, the price of Starbucks coffee began to seem reasonable.

2. Emotional experience is a big part of decision making. If you’re in a bad-smelling room, you’ll make harsher moral decisions. The trolley dilemma: 5 people have been tied to the tracks. A trolley is approaching rapidly. You can switch the trolley to a track with only one person tied to it. Everyone would switch the trolley. But now instead, you can push a fat man onto the tracks to stop the trolley. Few would. In the second scenario, touching someone engages the emotional system. The first scenario is just a math problem. The logic and emotional systems are always fighting it out. The Greeks viewed the self as someone steering a chariot drawn by the white horse of reason and the black horse of passion. [From Plato’s Phaedrus]

3. A lot of the machinery of the brain deals with other brains. We use the same circuitry to think about people and corporations. When a company betrays us, our brain responds the way it would if a friend betrayed us. Traditional economics says customer interactions are short-term but the brain takes a much longer-range view. Breaches of trust travel fast. (David plays “United Breaks Guitars.”) Smart companies use social media that make you believe that the company is your friend.

The battle among these three networks drives decisions. “Know thyselves.”

This is unsettling. The self is not at the center. It’s like when Galileo repositioned us in the universe. This seemed like a dethroning of man. The upside is that we’ve discovered the Cosmos is much bigger, more subtle, and more magnificent than we thought. As we sail into the inner cosmos of the brain, the brain is much more subtle and magnificent than we ever considered.

“We’ve found the most wondrous thing in the universe, and it’s us.”

Q: Won’t this let us be manipulated?

A: Neural science is just catching up with what advertisers have known for 100 years.

Q: What about free will?

A: My labs and others have done experiments, and there’s no single experiment in neuroscience that proves that we do or do not have free will. But if we have free will, it’s a very small player in the system. We have genetics and experiences, and they make brains very different from one another. I argue for a legal system that recognizes a difference between people who may have committed the same crime. There are many different types of brains.

Be the first to comment »

September 27, 2013

[2b2k] Popular Science incompetently manages its comments, gives up

Popular Science has announced that it’s shutting down comments on its articles. The post by Suzanne LeBarre says this is because “trolls and spambots” have overwhelmed the useful comments. But what I hear instead is: “We don’t know how to run a comment board, so shut up.”

Suzanne cites research that suggests that negative comments on an article reduce the credibility of the article, even if those negative comments are entirely unfounded. Thus, the trolls don’t just ruin the conversation, they hurt the cause of science.

Ok, let’s accept that. Scientific American cited the same research but came to a different decision. Rather than shut down its comments, it decided to moderate them using some sensible rules designed to encourage useful conversation. Their idea of a “useful conversation” is likely quite similar to Popular Science’s: not only no spam, but the discourse must be within the norms of science. So, it doesn’t matter how loudly Jesus told you that there is no climate change going on, your message is going to be removed if it doesn’t argue for your views within the evidentiary rules of science.

You may not like this restriction at Scientific American. Tough. You have lots of other places you can talk about Jesus’ beliefs about climate change. I posted at length about the Scientific American decision at the time, and especially about why this makes clear problems with the “echo chamber” meme, but I fundamentally agree with it.

If comments aren’t working on your site, then it’s your fault. Fix your site.

[Tip o’ the hat to Joshua Beckerman for pointing out the PopSci post.]

Be the first to comment »

September 11, 2013

Spot the octopus!

Science Friday has posted a brief, phenomenal video about how octopuses and other cephalopods manage to camouflage themselves incredibly quickly. It explains the skin’s mechanism (which is mind-blowing in itself), but leaves open how they manage this even though they’re color blind. (Hat tip to Joe Mahoney.)

Be the first to comment »

July 28, 2013

The shockingly short history of the history of technology

In 1960, the academic journal Technology and Culture devoted its entire Autumn edition [1] to essays about a single work, the fifth and final volume of which had come out in 1958: A History of Technology, edited by Charles Singer, E. J. Holmyard, A. R. Hall, and Trevor I. Williams. Essay after essay implies or outright states something I found quite remarkable: A History of Technology is the first history of technology.

You’d think the essays would have some clever twist explaining why all those other things that claimed to be histories were not, perhaps because they didn’t get the concept of “technology” right in some modern way. But, no, the statements are pretty untwisty. The journal’s editor matter-of-factly claims that the history of technology is a “new discipline.”[2] Robert Woodbury takes the work’s publication as the beginning of the discipline as well, although he thinks it pales next to the foundational work of the history of science [3], a field the journal’s essays generally take as the history of technology’s older sibling, if not its parent. Indeed, fourteen years later, in 1974, Robert Multhauf wrote an article for that same journal, called “Some Observations on the State of the History of Technology,”[4] that suggested that the discipline was only then coming into its own. Why, some universities have even recognized that there is such a thing as an historian of science!

The essay by Lewis Mumford, whom one might have mistaken for a prior historian of technology, marks the volumes as a first history of technology, pans them as a history of technology, and acknowledges prior attempts that border on being histories of technology. [5] His main objection to A History of Technology — and he is far from alone in this among the essays — is that the volumes don’t do the job of synthesizing the events recounted, failing to put them into the history of ideas, culture, and economics that explain both how technology took the turns that it did and what those turns meant for human life. At least, Mumford says, these five volumes do a better job than the works of three nineteenth-century British writers who wrote something like histories of technology: Andrew Ure, Samuel Smiles, and Charles Babbage. (Yes, that Charles Babbage.) (Multhauf points also to Louis Figuier in France, and Franz Reuleaux in Germany.[6])

Mumford comes across as a little miffed in the essay he wrote about A History of Technology, but, then, Mumford often comes across as at least a little miffed. In the 1963 introduction to his 1934 work, Technics and Civilization, Mumford seems to claim the crown for himself, saying that his work was “the first to summarize the technical history of the last thousand years of Western Civilization…” [7]. And, indeed, that book does what he claims is missing from A History of Technology, looking at the non-technical factors that made the technology socially feasible, and at the social effects the technology had. It is a remarkable work of synthesis, driven by a moral fervor that borders on the rhetoric of a prophet. (Mumford sometimes crossed that border; see his 1946 anti-nuke essay, “Gentlemen: You are Mad!” [8]) Still, in 1960 Mumford treated A History of Technology as a first history of technology not only in the academic journal Technology and Culture, but also in The New Yorker, claiming that until recently the history of technology had been “ignored,” and “…no matter what the oversights or lapses in this new ‘History of Technology,’ one must be grateful that it has come into existence at all.”[9]

So, there does seem to be a rough consensus that the first history of technology appeared in 1958. That the newness of this field is shocking, at least to me, is a sign of how dominant technology as a concept — as a frame — has become in the past couple of decades.


[1] Technology and Culture. Autumn, 1960. Vol. 1, Issue 4.

[2] Melvin Kranzberg. “Charles Singer and ‘A History of Technology.’” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 299-302, at p. 300.

[3] Robert S. Woodbury. “The Scholarly Future of the History of Technology.” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 345-8, at p. 345.

[4] Robert P. Multhauf. “Some Observations on the State of the History of Technology.” Technology and Culture. Jan., 1974. Vol. 15, no. 1. pp. 1-12.

[5] Lewis Mumford. “Tools and the Man.” Technology and Culture. Autumn, 1960. Vol. 1, Issue 4. pp. 320-334.

[6] Multhauf, p. 3.

[7] Lewis Mumford. Technics and Civilization. (Harcourt Brace, 1934. New edition 1963), p. xi.

[8] Lewis Mumford. “Gentlemen: You Are Mad!” Saturday Review of Literature. March 2, 1946, pp. 5-6.

[9] Lewis Mumford. “From Erewhon to Nowhere.” The New Yorker. Oct. 8, 1960. pp. 180-197.

2 Comments »
