On Friday, I had the tremendous honor of being awarded a Doctor of Letters degree from Simmons College, and giving the Commencement address at the Simmons graduate students’ ceremony.
Simmons is an inspiring place, and not only for its deep commitment to educating women. Being honored this way — especially along with Ruth Ellen Fitch, Madeleine M. Joullié, and Billie Jean King — made me very, very happy.
Thank you so much to Simmons College, President Drinan, and the Board of Trustees for this honor, which means the world to me. I’m just awed by it. Also, Professor Candy Schwartz and Dean Eileen Abels, a special thank you. And this honor is extra-special meaningful because my father-in-law, Marvin Geller, is here today, and his sister, Jeannie Geller Mason, was, as she called herself, a Simmons girl, class of 1940. Afterwards, Marvin will be happy to sing you the old “We are the girls of Simmons C” college song if you ask.
So, first, to the parents: I have been in your seat, and I know how proud – and maybe relieved – you are. So, congratulations to you. And to the students, it’s such an honor to be here with you to celebrate your being graduated from Simmons College, a school that takes seriously the privilege of helping its students not only to become educated experts, but to lead the next cohort in their disciplines and professions.
Now, as I say this, I know that some of you may be shaking your inner heads, because a commencement speaker is telling you about how bright your futures are, but maybe you have a little uncertainty about what will happen in your profession and with your career. That's not only natural, it's reasonable. But some of you — I don't know how many — may be feeling, beyond that, an uncertainty about your own abilities. You're being officially certified with an advanced degree in your field, but you may not quite feel the sense of mastery you expected.
In other words, you feel the way I do now. And the way I did in 1979 when I got my doctorate in philosophy. I knew well enough the work of the guy I wrote my dissertation on, but I looked out at the field and knew just how little I knew about so much of it. And I looked at other graduates, and especially at the scholars and experts who had been teaching us, and I thought to myself, “They know so much more than I do.” I could fake it pretty well, but actually not all that well.
So, I want to reassure those of you who feel the way that I did and do, I want to reassure you that that feeling of not really knowing what you should, that feeling may stay with you forever. In fact, I hope it does — for your sake, for your profession, and for all of us.
But before explaining, I need to let you in on the secret: You do know enough. It’s like Gloria Steinem’s response, when she was forty, to people saying that she didn’t look forty. Steinem replied, “This is what forty looks like.” And this is what being a certified expert in your field feels like. Simmons knows what deserving your degree means, and its standards are quite high. So, congratulations. You truly earned this and deserve it.
But here's why it's good to get comfortable with always having a little lack of confidence. First, if you admit no self-doubt, you lose your impulse to learn. Second, you become a smug know-it-all, and no one likes you. Third, and even worse, you become a soldier in the army of ignorance. Your body language tells everyone else that their questions are a sign of weakness, which shuts down what should have been a space for learning.
The one skill I’ve truly mastered is asking stupid questions. And I don’t mean questions that I pretend are stupid but then, like Socrates, they show the folly of all those around me. No, they’re just dumb questions. Things I really should know by now. And quite often it turns out that I’m not the only one in the room with those questions. I’ve learned far more by being in over my head than by knowing what I’m talking about. And, as I’ll get to, we happen to be in the greatest time for being in over our heads in all of human history.
Let me give you just one quick example. In 1986 I became a marketing writer at a local tech startup called Interleaf that made text-and-graphic word processors. In 1986 that was a big deal, and what Interleaf was doing was waaaay over my head. So, I hung out with the engineers, and I asked the dumbest questions. What’s a font family? How can the spellchecker look up words as fast as you type them? When you fill a shape with say, purple, how does the purple know where to stop? Really basic. But because it was clear that I was a marketing guy who was genuinely interested in what the engineers were doing, they gave me a lot of time and an amazing education. Those were eight happy years being in over my head.
I’m still way over my head in the world of libraries, which are incredibly deep institutions. Compared to “normal” information technology, the data libraries deal with is amazingly profound and human. And librarians have been very generous in helping me learn just a small portion of what they know. Again, this is in part because they know my dumb questions are spurred by a genuine desire to understand what they’re doing, down to the details.
In fact, going down to the details is one very good way to make sure that you are continually over your head. We will never run out of details. The world's just like that: there's no natural end to how closely you can look at a thing. And one thing I've learned is that everything is interesting if looked at at the appropriate level of detail.
Now, it used to be that you’d have to seek out places to plunge in over your head. But now, in the age of the Internets, all we have to do is stand still and the flood waters rise over our heads. We usually call this “information overload,” and we’re told to fear it. But I think that’s based on an old idea we need to get rid of.
Here’s what I mean. So, you know Flickr, the photo sharing site? If you go there and search for photos tagged “vista,” you’ll get two million photos, more vistas than you could look at if you made it your full time job.
If you go to Google and search for apple pie recipes, you'll get over 1.3 million of them. Want to try them all out to find the best one? Not gonna happen.
If you go to Google Images and search for “cute cats,” you’ll get over seven million photos of the most adorable kittens ever, as well as some ads and porn, of course, because Internet.
So that's two million vista photos. 1.3 million apple pie recipes. 7.6 million cute cat photos. We're constantly warned about information overload, yet we never hear a single word about the dangers of Vista Overload, Apple Pie Overload, or Cute Kitten Overload. How have the media missed these overloads! It's a scandal!
I think there’s actually a pretty clear reason why we pay no attention to these overloads. We only feel overloaded by that which we feel responsible for mastering. There’s no expectation that we’ll master vista photos, apple pie recipes, or photos of cute cats, so we feel no overload. But with information it’s different because we used to have so much less of it that back then mastery seemed possible. For example, in the old days if you watched the daily half hour broadcast news or spent twenty minutes with a newspaper, you had done your civic duty: you had kept up with The News. Now we can see before our eyes what an illusion that sense of mastery was. There’s too much happening on our diverse and too-interesting planet to master it, and we can see it all happening within our browsers.
The concept of Information Overload comes from that prior age, before we accepted what the Internet makes so clear: There is too, too much to know. As we accept that, the idea of mastery will lose its grip. We'll stop feeling overloaded even though we're confronted with exactly the same amount of information.
Now, I want to be careful because we’re here to congratulate you on having mastered your discipline. And grad school is a place where mastery still applies: in order to have a discipline — one that can talk with itself — institutions have to agree on a master-able set of ideas, knowledge, and skills that are required for your field. And that makes complete sense.
But, especially as the Internet becomes our dominant medium of ideas, knowledge, culture, and entertainment, we are all learning just how much there is that we don’t know and will never know.
And it's not just the quantity of information that makes true mastery impossible in the Age of the Internet. It's also what it's doing to the domains we want to master — the topics and disciplines. In the Encyclopedia Britannica — remember that? — an article on a topic extends from the first word to the last, maybe with a few suggested "See also's" at the end. The article's job is to cover the topic in that one stretch of text. Wikipedia has a different idea. At Wikipedia, the articles are often relatively short, but they typically have dozens or even hundreds of links. So rather than trying to get everything about, say, Shakespeare into a couple of thousand words, Wikipedia lets you click on links to other articles about what it mentions — to Stratford-on-Avon, or iambic pentameter, or the history of women in the theater. Shakespeare at Wikipedia, in other words, is a web of linked articles. Shakespeare on the Web is a web. And it seems to me that that webby structure is actually a more accurate reflection of the shape of knowledge: it's an endless series of connected ideas and facts, limited by interest, not an article that starts here and ends there. In fact, I'd say that Shakespeare himself was a web, and so am I, and so are you.
But if topics and disciplines are webs, then they don’t have natural and clear edges. Where does the Shakespeare web end? Who decides if the article about, say, women in the theater is part of the Shakespeare web or not? These webs don’t have clearcut edges. But that means that we also can’t be nearly as clear about what it means to master Shakespeare. There’s always more. The very shape of the Web means we’re always in over our heads.
And just one more thing about these messy webs. They’re full of disagreement, contradiction, argument, differences in perspective. Just a few minutes on the Web reveals a fundamental truth: We don’t agree about anything. And we never will. My proof of that broad statement is all of human history. How do you master a field, even if you could define its edges, when the field doesn’t agree with itself?
So, the concept of mastery is tough in this Internet Age. But that’s just a more accurate reflection of the way it always was even if we couldn’t see it because we just didn’t have enough room to include every voice and every idea and every contradiction, and we didn’t have a way to link them so that you can go from one to another with the smallest possible motion of your hand: the shallow click of a mouse button.
The Internet has therefore revealed the truth of what the less confident among us already suspected: We’re all in over our heads. Forever. This isn’t a temporary swim in the deep end of the pool. Being in over our heads is the human condition.
The other side of this is that the world is far bigger, more complex, and more unfathomably interesting than our little brains can manage. If we can accept that, then we can happily be in over our heads forever…always a little worried that we really are supposed to know more than we do, but also, I hope, always willing to say that out loud. It’s the condition for learning from one another…
…And if the Internet has shown us how overwhelmed we are, it’s also teaching us how much we can learn from one another. In public. Acknowledging that we’re just humans, in a sea of endless possibility, within which we can flourish only in our shared and collaborative ignorance.
So, I know you’re prepared because I know the quality of the Simmons faculty, the vision of its leadership, and the dedication of its staff. I know the excellence of the education you’ve participated in. You’re ready to lead in your field. May that field always be about this high over your head — the depth at which learning occurs, curiosity is never satisfied, and we rely on one another’s knowledge, insight, and love.
too big to know
Tagged with: 2b2k
Date: May 11th, 2014 dw
The New Republic continues to favor articles debunking claims that the Internet is bringing about profound changes. This time it’s an article on the digital humanities, titled “The Pseudo-Revolution,” by Adam Kirsch, a senior editor there. [This seems to be the article. Tip of the hat to Jose Afonso Furtado.]
I am not an expert in the digital humanities, but it’s clear to the people in the field who I know that the meaning of the term is not yet settled. Indeed, the nature and extent of the discipline is itself a main object of study of those in the discipline. This means the field tends to attract those who think that the rise of the digital is significant enough to warrant differentiating the digital humanities from the pre-digital humanities. The revolutionary tone that bothers Adam so much is a natural if not inevitable consequence of the sociology of how disciplines are established. That of course doesn’t mean he’s wrong to critique it.
But Adam is exercised not just by revolutionary tone but by what he perceives as an attempt to establish claims through the vehemence of one’s assertions. That is indeed something to watch out for. But I think it also betrays a tin-eared reading by Adam. Those assertions are being made in a context the authors I think properly assume readers understand: the digital humanities is not a done deal. The case has to be made for it as a discipline. At this stage, that means making provocative claims, proposing radical reinterpretations, and challenging traditional values. While I agree that this can lead to thoughtless triumphalist assumptions by the digital humanists, it also needs to be understood within its context. Adam calls it “ideological,” and I can see why. But making bold and even over-bold claims is how discourses at this stage proceed. You challenge the incumbents, and then you challenge your cohort to see how far you can go. That’s how the territory is explored. This discourse absolutely needs the incumbents to push back. In fact, the discourse is shaped by the assumption that the environment is adversarial and the beatings will arrive in short order. In this case, though, I think Adam has cherry-picked the most extreme and least plausible provocations in order to argue against the entire field, rather than against its overreaching. We can agree about some of the examples and some of the linguistic extensions, but that doesn’t dismiss the entire effort the way Adam seems to think it does.
It’s good to have Adam’s challenge. Because his is a long and thoughtful article, I’ll discuss the thematic problems with it that I think are the most important.
First, I believe he's too eager to make his case, which is the same criticism he makes of the digital humanists. For example, when talking about the use of algorithmic tools, he talks at length about Franco Moretti's work, focusing on the essay "Style, Inc.: Reflections on 7,000 Titles." Moretti used a computer to look for patterns in the titles of 7,000 novels published between 1740 and 1850, and discovered that they tended to get much shorter over time. "…Moretti shows that what changed was the function of the title itself." As the market for novels got more crowded, the typical title went from being a summary of the contents to a "catchy, attention-grabbing advertisement for the book." In addition, says Adam, Moretti discovered that sensationalistic novels tended to begin with "The" while "pioneering feminist novels" tended to begin with "A." Moretti tenders an explanation, writing "What the article 'says' is that we are encountering all these figures for the first time."
Adam concludes that while Moretti’s research is “as good a case for the usefulness of digital tools in the humanities as one can find” in any of the books under review, “its findings are not very exciting.” And, he says, you have to know which questions to ask the data, which requires being well-grounded in the humanities.
That you need to be well-grounded in the humanities to make meaningful use of digital tools is an important point. But here he seems to me to be arguing against a straw man. I have not encountered any digital humanists who suggest that we engage with our history and culture only algorithmically. I don’t profess expertise in the state of the digital humanities, so perhaps I’m wrong. But the digital humanists I know personally (including my friend Jeffrey Schnapp, a co-author of a book, Digital_Humanities, that Adam reviews) are in fact quite learned lovers of culture and history. If there is indeed an important branch of digital humanities that says we should entirely replace the study of the humanities with algorithms, then Adam’s criticism is trenchant…but I’d still want to hear from less extreme proponents of the field. In fact, in my limited experience, digital humanists are not trying to make the humanities safe for robots. They’re trying to increase our human engagement with and understanding of the humanities.
As to the point that algorithmic research can only "illustrate a truism rather than discovering a truth" — a criticism he levels even more fiercely at the Ngram research described in the book Uncharted — it seems to me that Adam is missing an important point. If computers can now establish quantitatively the truth of what we have assumed to be true, that is no small thing. For example, the Ngram work has established not only that Jewish sources were dropped from German books during the Nazi era, but also the timing and extent of the erasure. This not only helps make the humanities more evidence-based — remember that Adam criticizes the digital humanists for their argument-by-assertion — but also opens the possibility of algorithmically discovering correlations that overturn assumptions or surprise us. One might argue that we therefore need to explore these new techniques more thoroughly, rather than dismissing them as adding nothing. (Indeed, the NY Times review of Uncharted discusses surprising discoveries made via Ngram research.)
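The kind of quantitative claim Ngram research makes — that mentions of something rose or fell during a period, and by how much — comes down to comparing a term's yearly counts against the size of the corpus in each year. Here is a minimal sketch of that computation in Python; the numbers are invented for illustration and are not real Google Books data:

```python
# Invented illustrative data: yearly occurrence counts for a single term,
# and total word counts for the corpus in those years. Real Ngram corpora
# are vastly larger; the shape of the computation is the same.
term_counts = {1930: 120, 1935: 115, 1940: 30, 1945: 10, 1950: 95}
total_counts = {1930: 1_000_000, 1935: 1_000_000, 1940: 1_000_000,
                1945: 1_000_000, 1950: 1_000_000}

def relative_frequency(term, totals):
    """Per-year frequency of a term, normalized by corpus size,
    so years in which more text was published don't dominate."""
    return {year: term[year] / totals[year] for year in sorted(term)}

freqs = relative_frequency(term_counts, total_counts)
for year, f in sorted(freqs.items()):
    print(year, f"{f:.6f}")
```

Normalizing by total corpus size is the step that turns raw counts into evidence: the drop in the middle years is visible as a falling frequency, with its timing and extent directly readable from the data.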
Perhaps the biggest problem I have with Adam’s critique I’ve also had with some digital humanists. Adam thinks of the digital humanities as being about the digitizing of sources. He then dismisses that digitizing as useful but hardly revolutionary: “The translation of books into digital files, accessible on the Internet around the world, can be seen as just another practical tool…which facilitates but does not change the actual humanistic work of thinking and writing.”
First, that underplays the potential significance of making the works of culture and scholarship globally available.
Second, if you’re going to minimize the digitizing of books as merely the translation of ink into pixels, you miss what I think is the most important and transformative aspect of the digital humanities: the networking of knowledge and scholarship. Adam in fact acknowledges the networking of scholarship in a twisty couple of paragraphs. He quotes the following from the book Digital_Humanities:
The myth of the humanities as the terrain of the solitary genius…— a philosophical text, a definitive historical study, a paradigm-shifting work of literary criticism — is, of course, a myth. Genius does exist, but knowledge has always been produced and accessed in ways that are fundamentally distributed…
Adam responds by name-checking some paradigm-shifting works, and snidely adds “you can go to the library and check them out…” He then says that there’s no contradiction between paradigm-shifting works existing and the fact that “Scholarship is always a conversation…” I believe he is here completely agreeing with the passage he thinks he’s criticizing: genius is real; paradigm-shifting works exist; these works are not created by geniuses in isolation.
Then he adds what for me is a telling conclusion: “It’s not immediately clear why things should change just because the book is read on a screen rather than on a page.” Yes, that transposition doesn’t suggest changes any more worthy of research than the introduction of mass market paperbacks in the 1940s [source]. But if scholarship is a conversation, might moving those scholarly conversations themselves onto a global network raise some revolutionary possibilities, since that global network allows every connected person to read the scholarship and its objects, lets everyone comment, provides no natural mechanism for promoting any works or comments over any others, inherently assumes a hyperlinked rather than sequential structure of what’s written, makes it easier to share than to sequester works, is equally useful for non-literary media, makes it easier to transclude than to include so that works no longer have to rely on briefly summarizing the other works they talk about, makes differences and disagreements much more visible and easily navigable, enables multiple and simultaneous ordering of assembled works, makes it easier to include everything than to curate collections, preserves and perpetuates errors, is becoming ubiquitously available to those who can afford connection, turns the Digital Divide into a gradient while simultaneously increasing the damage done by being on the wrong side of that gradient, is reducing the ability of a discipline to patrol its edges, and a whole lot more.
It seems to me reasonable to think that it is worth exploring whether these new affordances, limitations, relationships and metaphors might transform the humanities in some fundamental ways. Digital humanities too often is taken simply as, and sometimes takes itself as, the application of computing tools to the humanities. But it should be (and for many, is) broad enough to encompass the implications of the networking of works, ideas and people.
I understand that Adam and others are trying to preserve the humanities from being abandoned and belittled; someone ought to be defending the traditional in the face of the latest. That is a vitally important role, for, as a field struggling to establish itself, the digital humanities is prone to overstating its case. (I have been known to do so myself.) But in my understanding, that defense assumes that digital humanists want to replace all traditional methods of study with computer algorithms. Does anyone?
Adam’s article is a brisk challenge, but in my opinion he argues too hard against his foe. The article becomes ideological, just as he claims the explanations, justifications and explorations offered by the digital humanists are.
More significantly, focusing only on the digitizing of works, while ignoring the networking of their ideas and the people discussing those ideas, glosses over the locus of the most important changes occurring within the humanities. Insofar as the digital humanities focuses on digitization instead of networking, I intend this as a criticism of that nascent discipline even more than as a criticism of Adam's article.
I’m at a talk by Andrew Revkin of the NY Times’ Dot Earth blog at the Shorenstein Center. [Alex Jones mentions in his introduction that Andy is a singer-songwriter who played with Pete Seeger. Awesome!]
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Andy says he’s been a science reporter for 31 years. His first magazine article was about the dangers of the anti-pot herbicide paraquat. (The article won an award for investigative journalism). It had all the elements — bad guys, victims, drama — typical of “Woe is me. Shame on you” environmental reporting. His story on global warming in 1988 has “virtually the same cast of characters” that you see in today’s coverage. “And public attitudes are about the same…Essentially the landscape hasn’t changed.” Over time, however, he has learned how complex climate science is.
In 2010, his blog moved from NYT’s reporting to editorial, so now he is freer to express his opinions. He wants to talk with us today about the sort of “media conversation” that occurs now, but didn’t when he started as a journalist. We now have a cloud of people who follow a journalist, ready to correct them. “You can say this is terrible. It’s hard to separate noise from signal. And that’s correct.” “It can be noisy, but it’s better than the old model, because the old model wasn’t always right.” Andy points to the NYT coverage on the build up to the invasion of Iraq. But this also means that now readers have to do a lot of the work themselves.
He left the NYT in his mid-fifties because he saw that access to info more often than not doesn’t change you, but instead reinforces your positions. So at Pace U he studies how and why people understand ecological issues. “What is it about us that makes us neglect long-term imperatives?” This works better in a blog in a conversation drawing upon other people’s expertise than an article. “I’m a shitty columnist,” he says. People read columns to reinforce their beliefs, although maybe you’ll read George Will to refresh your animus :) “This makes me not a great spokesperson for a position.” Most positions are one-sided, whereas Andy is interested in the processes by which we come to our understanding.
Q: [alex jones] People seem stupider about the environment than they were 20 years ago. They’re more confused.
A: In 1991 there was a survey of museum goers who thought that global warming was about the ozone hole, not about greenhouse gases. A 2009 study showed that on a scale of 1-6 of alarm, most Americans were at 5 ("concerned," not yet "alarmed"). Yet, Andy points out, the Cap and Trade bill failed. Likewise, the vast majority support rebates on solar panels and fuel-efficient vehicles. They support requiring 45mpg fuel efficiency across vehicle fleets, even at a $1K price premium. He also points to some Gallup data that showed that more than half of the respondents worry a great deal or a fair amount, but that number hasn't changed since Gallup began asking the question, in 1989. [link] Furthermore, global warming doesn't show up as one of the issues they worry about.
The people we need to motivate are innovators. We’ll have 9B on the planet soon, and 2B who can’t make reasonable energy choices.
Q: Are we heading toward a climate tipping point?
A: There isn’t evidence that tipping points in climate are real and if they are, we can’t really predict them. [link]
Q: The permafrost isn’t going to melt?
A: No, it is melting. But we don’t know if it will be catastrophic.
Andy points to a photo of despair at a climate conference. But then there’s Scott H. DeLisi who represents a shift in how we relate to communities: Facebook, Twitter, Google Hangouts. Inside Climate News won the Pulitzer last year. “That says there are new models that may work. Can they sustain their funding?” Andy’s not sure.
"Journalism is a shrinking wedge of a growing pie of ways to tell stories."
“Escape from the Nerd Loop”: people talking to one another about how to communicate science issues. Andy loves Twitter. The hashtag is as big an invention as photovoltaics, he says. He references Chris Messina, its inventor, and points to how useful it is for separating and gathering strands of information, including at NASA’s Asteroid Watch. Andy also points to descriptions by a climate scientist who went to the Arctic [or Antarctic?] that he curated, and to a singing scientist.
Q: I’m a communications student. There was a guy named Marshall McLuhan, maybe you haven’t heard of him. Is the medium the message?
A: There are different tools for different jobs. I could tell you the volume of the atmosphere, but Adam Nieman, a science illustrator, used this way to show it to you.
Q: Why is it so hard to get out of catastrophism and into thinking about solutions?
A: Journalism usually focuses on the down side. If there's no "Woe is me" element, it tends not to make it onto the front page. At Pace U. we travel each spring and do a film about a sustainable resource farming question. The first was on shrimp farming in Belize. It's got thousands of views, but it's not on the nightly news. How do we shift our norms in the media?
[david ropiek] Inherent human psychology: we pay more attention to risks. People who want to move the public dial inherently are attracted to the more attention-getting headlines, like “You’re going to die.”
A: Yes. And polls show that what people say about global warming depends on the weather outside that day.
Q: A report recently drew the connection between climate change and other big problems facing us: poverty, war, etc. What did you think of it?
A: It was good. But is it going to change things? Likewise with the Extremes report. The city that was most affected by the recent typhoon had tripled its population, mainly with poor people. Andy values Jesse Ausubel, who says that most politics is people pulling on disconnected levers.
Q: Any reflections on the disconnect between breezy IPCC executive summaries and the depth of the actual scientific report?
A: There have been demands for IPCC to write clearer summaries. Its charter has it focused on the down sides.
Q: How can we use open data and community tools to make better decisions about climate change? Will the data Obama opened up last month help?
A: The forces of stasis can congregate on that data and raise questions about it based on tiny inconsistencies. So I'm not sure it will change things. But I'm all for transparency. It's an incredibly powerful tool, like when the US Embassy was doing its own Twitter feed on Beijing air quality. We have this wonderful potential now; Greenpeace (who Andy often criticizes) did on-the-ground truthing about companies deforesting orangutan habitats in Indonesia. Then they did a great campaign to show who's using the palm oil: buying a KitKat bar contributes to the deforesting of Borneo. You can do this ground-truthing now.
Q: In the past 6 months there seems to have been a jump in climate change coverage. No?
A: I don’t think there’s more coverage.
Q: India and Pakistan couldn’t agree on water control in part because the politicians talked about scarcity while the people talked in terms of their traditional animosities. How can we find the right vocabularies?
A: If the conversation is about reducing vulnerabilities and energy efficiency, you can get more consensus than talking about global warming.
Q: How about using data visualizations instead of words?
A: I love visualizations. They spill out from journalism. How much it matters is another question. Ezra Klein just did a piece that says that information doesn’t matter.
Q: Can we talk about your “Years of Living Dangerously” piece? [Couldn’t hear the rest of the question].
A: My blog is edited by the op-ed desk, and I don’t always understand their decisions. Journalism migrates toward controversy. The Times has a feature “Room for Debate,” and I keep proposing “Room for Agreement” [link], where you’d see what people who disagree about an issue can agree on.
Q: [me] Should we still be engaging with deniers? With whom should we be talking?
A: Yes, we should engage. We taxpayers subsidize second mortgages on houses in wildfire zones in Colorado. Why? So firefighters have to put themselves at risk? [link] That's an issue that people agree on across the spectrum. When it comes to deniers, we have to ask: what exactly are you denying? Particular data? The scientific method? Physics? I've come to the conclusion that even if we had perfect information, we still wouldn't galvanize the action we need.
[Andy ends by singing a song about liberated carbon. That’s not something you see every day at the Shorenstein Center.]
[UPDATE (the next day): I added some more links.]
In August, I blogged about a mangled quotation supposedly from Mark Twain posted on an interstitial page at Forbes.com. When I tweeted about the post, it was (thanks to John Overholt [twitter:JohnOverholt]) noticed by Quote Investigator [twitter:QuoteResearch], who over the course of a few hours tweeted the results of his investigation. Yes, it was mangled. No, it was not Twain. It was probably Christian Bovee. Quote Investigator, who goes by the pen name Garson O’Toole, has now posted on his site at greater length about this investigation.
It’s been clear from the beginning of the Web that it gives us access to experts on topics we never even thought of. As the Web has become more social, and as conversations have become scaled up, these crazy-smart experts are no longer nestling at home. They’re showing up like genies summoned by the incantation of particular words. We see this at Twitter, Reddit, and other sites with large populations and open-circle conversations.
This is a great thing, especially if the conversational space is engineered to give prominence to the contributions of drive-by experts. We want to take advantage of the fact that if enough people are in a conversation, one of them will be an expert.
Tagged with: 2b2k
Date: October 27th, 2013 dw
I gave a webcast talk at Library2.013 titled “Lessons from Reddit.” It’s available as an mp4 for streaming or downloading here. (You might want to start about 3 minutes in, in order to save 3 minutes of your life.)
It was a bit discursive. I had a few topics I knew I wanted to talk about, but I just talked. Here are the topics (with start times), as drawn from the lowest-value slide deck ever:
Why this topic? 3:00
What is Reddit? 5:10
Conversations are engineered 11:17
We are constantly surprised by scale 23:25
We don’t have interests. Interests have us. 30:25
The virtue of echo chambers 36:40
Both Facebook and Apple have announced the use of tags. Yay!
Tags have continued to percolate through the ecosystem after their most auspicious introduction in Delicious.com. (Note the phrase “most auspicious”; tags have always been with us.) It’s great to see them increase both because they are a great way to get use out of the craziness while preserving it in its original form for others, and because there is great value in scaling tags, as Flickr has shown.
So, yay for tags. And yay for the crazy.
Tagged with: eim
Date: June 13th, 2013 dw
I don’t care about expensive electric sports cars, but I’m fascinated by the dustup between Elon Musk and the New York Times.
On Sunday, the Times ran an article by John Broder on driving the Tesla S, an all-electric car made by Musk’s company, Tesla. The article was titled “Stalled Out on Tesla’s Electric Highway,” which captured the point quite concisely.
Musk on Wednesday in a post on the Tesla site contested Broder’s account, and revealed that every car Tesla lends to a reviewer has its telemetry recorders set to 11. Thus, Musk had the data that proved that Broder was driving in a way that could have no conceivable purpose except to make the Tesla S perform below spec: Broder drove faster than he claimed, drove circles in a parking lot for a while, and didn’t recharge the car to full capacity.
Boom! Broder was caught red-handed, and it was data that brung him down. The only two questions left were why did Broder set out to tank the Tesla, and would it take hours or days for him to be fired?
Rebecca Greenfield at Atlantic Wire took a close look at the data — at least at the charts and maps that express the data — and evaluated how well they support each of Musk’s claims. Overall, not so much. The car’s logs do seem to contradict Broder’s claim to have used cruise control. But the mystery of why Broder drove in circles in a parking lot seems to have a reasonable explanation: he was trying to find exactly where the charging station was in the service center.
But we’re not done. Commenters on the Atlantic piece have both taken it to task and provided some explanatory hypotheses. Greenfield has interpolated some of the more helpful ones, as well as updating her piece with testimony from the tow-truck driver, and more.
But we’re still not done. Margaret Sullivan [twitter:sulliview], the NYT “public editor” — a new take on what in the 1960s we started calling “ombudspeople” (although actually in the ’60s we called them “ombudsmen”) — has jumped into the fray with a blog post that I admire. She’s acting like a responsible adult by withholding judgment, and she’s acting like a responsible webby adult by talking to us even before all the results are in, acknowledging what she doesn’t know. She’s also been using social media to discuss the topic, and even to try to get Musk to return her calls.
Now, this whole affair is both typical and remarkable:
It’s a confusing mix of assertions and hypotheses, many of which are dependent on what one would like the narrative to be. You’re up for some Big Newspaper Schadenfreude? Then John Broder was out to do dirt to Tesla for some reason your own narrative can supply. You want to believe that old dinosaurs like the NYT are behind the curve in grasping the power of ubiquitous data? Yup, you can do that narrative, too. You think Elon Musk is a thin-skinned capitalist who’s willing to destroy a man’s reputation in order to protect the Tesla brand? Yup. Or substitute “idealist” or “world-saving environmentally-aware genius,” and, yup, you can have that narrative too.
Not all of these narratives are equally supported by the data, of course — assuming you trust the data, which you may not if your narrative is strong enough. Data signals but never captures intention: Was Broder driving around the parking lot to run down the battery or to find a charging station? Nevertheless, the data do tell us how many miles Broder drove (apparently just about the amount that he said) and do nail down (except under the most bizarre conspiracy theories) the actual route. Responsible adults like you and me are going to accept the data and try to form the story that “makes the most sense” around them, a story that likely is going to avoid attributing evil motives to John Broder and evil conspiratorial actions by the NYT.
But the data are not going to settle the hash. In fact, we already have the relevant numbers (er, probably) and yet we’re still arguing. Musk produced the numbers thinking that they’d bring us to accept his account. Greenfield went through those numbers and gave us a different account. The commenters on Greenfield’s post are arguing yet more, sometimes casting new light on what the data mean. We’re not even close to done with this, because it turns out that facts mean less than we’d thought and do a far worse job of settling matters than we’d hoped.
That’s depressing. As always, I am not saying there are no facts, nor that they don’t matter. I’m just reporting empirically that facts don’t settle arguments the way we were told they would. Yet there is something profoundly wonderful and even hopeful about this case that is so typical and so remarkable.
Margaret Sullivan’s job is difficult in the best of circumstances. But before the Web, it must have been so much more terrifying. She would have been the single point of inquiry as the Times tried to assess a situation in which it has deep, strong vested interests. She would have interviewed Broder and Musk. She would have tried to find someone at the NYT or externally to go over the data Musk supplied. She would have pronounced as fairly as she could. But it would have all been on her. That’s bad not just for the person who occupies that position; it’s a bad way to get at the truth. But it was the best we could do. In fact, most of the purpose of the public editor/ombudsperson position before the Web was simply to reassure us that the Times does not think it’s above reproach.
Now every day we can see just how inadequate any single investigator is for any issue that involves human intentions, especially when money and reputations are at stake. We know this for sure because we can see what an inquiry looks like when it’s done in public and at scale. Of course lots of people who don’t even know that they’re grinding axes say all sorts of mean and stupid things on the Web. But there are also conversations that bring to bear specialized expertise and unusual perspectives, that let us turn the matter over in our hands, hold it up to the light, shake it to hear the peculiar rattle it makes, roll it on the floor to gauge its wobble, sniff at it, and run it through sophisticated equipment perhaps used for other purposes. We do this in public — I applaud Sullivan’s call for Musk to open source the data — and in response to one another.
Our old idea was that the thoroughness of an investigation would lead us to a conclusion. Sadly, it often does not. We are likely to disagree about what went on in Broder’s review, and how well the Tesla S actually performed. But we are smarter in our differences than we ever could be when truth was a lonelier affair. The intelligence isn’t in a single conclusion that we all come to — if only — but in the linked network of views from everywhere.
There is a frustrating beauty in the way that knowledge scales.
Tagged with: 2b2k
Date: February 14th, 2013 dw
I picked up a copy of Bernard Knox’s 1994 Backing into the Future because somewhere I saw it referenced about the weird fact that the ancient Greeks thought that the future was behind them. Knox presents evidence from The Odyssey and Oedipus the King to back this up, so to speak. But that’s literally on the first page of the book. The rest of it consists of brilliant and brilliantly written essays about ancient life and scholarship. Totally enjoyable.
True, he undoes one of my favorite factoids: that Greeks in Homer’s time did not have a concept of the body as an overall unity, but rather only had words for particular parts of the body. This notion comes most forcefully from Bruno Snell in The Discovery of the Mind, although I first read about it — and was convinced — by a Paul Feyerabend essay. In his essay “What Did Achilles Look Like?,” Knox convincingly argues that the Greeks had both a word and a concept for the body as a unity. In fact, they may have had three. Knox then points to Homeric uses that seem to indicate, yeah, Homer was talking about a unitary body. E.g., “from the bath he [Odysseus] stepped, in body [demas] like the immortals,” and Poseidon “takes on the likeness of Calchas, in bodily form,” etc. [p. 52] I don’t read Greek, so I’ll believe whatever the last expert tells me, and Knox is the last expert I’ve read on this topic.
In a later chapter, Knox comes back to Bernard Williams’s criticism, in Shame and Necessity, of the “Homeric Greeks had no concept of a unitary body” idea, and also discusses another wrong thing that I had been taught. It turns out that the Greeks did have a concept of intention, decision-making, and will. Williams argues that they may not have had distinct words for these things, but Homer “and his characters make distinctions that can only be understood in terms of” those concepts. Further, Williams writes that Homer has
no word that means, simply, “decide.” But he has the notion…All that Homer seems to have left out is the idea of another mental action that is supposed necessarily to lie between coming to a conclusion and acting on it: and he did well in leaving it out, since there is no such action, and the idea of it is the invention of bad philosophy. [p. 228]
Wow. Seems pretty right to me. What does the act of “making a decision” add to the description of how we move from conclusion to action?
Knox also has a long appreciation of Martha Nussbaum’s The Fragility of Goodness (1986) which makes me want to go out and get that book immediately, although I suspect that Knox is making it considerably more accessible than the original. But it sounds breathtakingly brilliant.
Knox’s essay on Nussbaum, “How Should We Live,” is itself rich with ideas, but one piece particularly struck me. In Book 6 of the Nicomachean Ethics, Aristotle dismisses one of Socrates’ claims (that no one knowingly does evil) by saying that such a belief is “manifestly in contradiction with the phainomena.” I’ve always heard the word “phainomena” translated in (as Knox says) Baconian terms, as if Aristotle were anticipating modern science’s focus on the facts and careful observation. We generally translate phainomena as “appearances” and contrast it with reality. The task of the scientist and the philosopher is to let us see past our assumptions to reveal the thing as it shows itself (appears) free of our anticipations and interpretations, so we can then use those unprejudiced appearances as a guide to truths about reality.
But Nussbaum takes the word differently, and Knox is convinced. Phainomena are “the ordinary beliefs and sayings” of the many and the sayings of the wise about things. Aristotle’s method consisted of straightening out whatever confusions and contradictions are in this body of beliefs and sayings, and then showing that at least the majority of those beliefs are true. This is a complete inversion of what I’d always thought. Rather than “attending to appearances” meaning dropping one’s assumptions to reveal the thing in its untouched state, it actually means taking those assumptions — of the many and of the wise — as containing truth. It is a confirming activity, not a penetrating and overturning one. Nussbaum says for Aristotle (and in contrast to Plato), “Theory must remain committed to the ways human beings live, act, see.” (Note that it’s entirely possible I’m getting Aristotle, Nussbaum, and Knox wrong. A trifecta of misunderstanding!)
Nussbaum’s book sounds amazing, and I know I should have read it, oh, 20 years ago, but it came out the year I left the philosophy biz. And Knox’s book is just wonderful. If you ever doubted why we need scholars and experts — why would you think such a thing? — this book is a completely enjoyable reminder.
I’m not sure how I came into possession of a copy of The Indexer, a publication by the Society of Indexers, but I thoroughly enjoyed it despite not being a professional indexer. Or, more exactly, because I’m not a professional indexer. It brings me joy to watch experts operate at levels far above me.
The issue of The Indexer I happen to have — Vol. 30, No. 1, March 2012 — focuses on digital trends, with several articles on the Semantic Web and XML-based indexes as well as several on broad trends in digital reading and digital books, and on graphical visualizations of digital indexes. All good.
I also enjoyed a recurring feature: Indexes reviewed. This aggregates snippets of book reviews that mention the quality of the indexes. Among the positive reviews, the Sunday Telegraph thinks that for the book My Dear Hugh, “the indexer had a better understanding of the book than the editor himself.” That’s certainly going on someone’s résumé!
I’m not sure why I enjoy works of expertise in fields I know little about. It’s true that I know a little about indexing because I’ve written about the organization of digital information, and even a bit about indexing itself. And I have a lot of interest in the questions about the future of digital books that happen to be discussed in this particular issue of The Indexer. That enables me to make more sense of the journal than might otherwise be the case. But even so, what I enjoy most are the discussions of topics that exhibit the professionals’ deep involvement in their craft.
But I think what I enjoy most of all is the discovery that something as seemingly simple as generating an index turns out to be indefinitely deep. There are endless technical issues, but also fathomless questions of principle. There’s even indexer humor. For example, one of the index reviews notes that Craig Brown’s The Lost Diaries “gives references with deadpan precision (‘Greer, Germaine: condemns Queen, 13-14…condemns pineapple, 70…condemns fat, thin and medium sized women, 93…condemns kangaroos,122’).”
As I’ve said before, everything is interesting if observed at the right level of detail.
From TheHeart.org, an article by Lisa Nainggolan:
Gothenburg, Sweden – Further support for the concept of the obesity paradox has come from a large study of patients with acute coronary syndrome (ACS) in the Swedish Coronary Angiography and Angioplasty Registry (SCAAR) . Those who were deemed overweight or obese by body-mass index (BMI) had a lower risk of death after PCI [percutaneous coronary intervention, aka angioplasty] than normal-weight or underweight participants up to three years after hospitalization, report Dr Oskar Angerås (University of Gothenburg, Sweden) and colleagues in their paper, published online September 5, 2012 in the European Heart Journal.
Can confirm. My grandmother in the 1930s was instructed to make sure she fed her husband lots and lots of butter to lubricate his heart after a heart attack. This proved to work extraordinarily well, at least until his next heart attack.
I refer once again to the classic 1999 The Onion headline: Eggs Good for You This Week.
Tagged with: 2b2k
Date: September 10th, 2012 dw