I don’t care about expensive electric sports cars, but I’m fascinated by the dustup between Elon Musk and the New York Times.
On Sunday, the Times ran an article by John Broder on driving the Tesla S, an all-electric car made by Musk’s company, Tesla. The article was titled “Stalled Out on Tesla’s Electric Highway,” which captured the point quite concisely.
On Wednesday, in a post on the Tesla site, Musk contested Broder’s account and revealed that every car Tesla lends to a reviewer has its telemetry recorders set to 11. Thus, Musk had the data that proved Broder was driving in a way that could have no conceivable purpose except to make the Tesla S perform below spec: Broder drove faster than he claimed, drove circles in a parking lot for a while, and didn’t recharge the car to full capacity.
Boom! Broder was caught red-handed, and it was data that brung him down. The only two questions left were why did Broder set out to tank the Tesla, and would it take hours or days for him to be fired?
Except…
Rebecca Greenfield at Atlantic Wire took a close look at the data — at least at the charts and maps that express the data — and evaluated how well they support each of Musk’s claims. Overall, not so much. The car’s logs do seem to contradict Broder’s claim to have used cruise control. But the mystery of why Broder drove in circles in a parking lot seems to have a reasonable explanation: he was trying to find exactly where the charging station was in the service center.
But we’re not done. Commenters on the Atlantic piece have both taken it to task and offered some explanatory hypotheses. Greenfield has interpolated some of the more helpful ones, and she has updated her piece with testimony from the tow-truck driver, and more.
But we’re still not done. Margaret Sullivan [twitter:sulliview], the NYT “public editor” — a new take on what in the 1960s we started calling “ombudspeople” (although actually in the ’60s we called them “ombudsmen”) — has jumped into the fray with a blog post that I admire. She’s acting like a responsible adult by withholding judgment, and she’s acting like a responsible webby adult by talking to us even before all the results are in, acknowledging what she doesn’t know. She’s also been using social media to discuss the topic, and even to try to get Musk to return her calls.
Now, this whole affair is both typical and remarkable:
It’s a confusing mix of assertions and hypotheses, many of which are dependent on what one would like the narrative to be. You’re up for some Big Newspaper Schadenfreude? Then John Broder was out to do dirt to Tesla for some reason your own narrative can supply. You want to believe that old dinosaurs like the NYT are behind the curve in grasping the power of ubiquitous data? Yup, you can do that narrative, too. You think Elon Musk is a thin-skinned capitalist who’s willing to destroy a man’s reputation in order to protect the Tesla brand? Yup. Or substitute “idealist” or “world-saving environmentally-aware genius,” and, yup, you can have that narrative too.
Not all of these narratives are equally supported by the data, of course — assuming you trust the data, which you may not if your narrative is strong enough. Data signals but never captures intention: Was Broder driving around the parking lot to run down the battery or to find a charging station? Nevertheless, the data do tell us how many miles Broder drove (apparently just about the amount that he said) and do nail down (except under the most bizarre conspiracy theories) the actual route. Responsible adults like you and me are going to accept the data and try to form the story that “makes the most sense” around them, a story that likely is going to avoid attributing evil motives to John Broder and evil conspiratorial actions by the NYT.
But the data are not going to settle the hash. In fact, we already have the relevant numbers (er, probably) and yet we’re still arguing. Musk produced the numbers thinking that they’d bring us to accept his account. Greenfield went through those numbers and gave us a different account. The commenters on Greenfield’s post are arguing yet more, sometimes casting new light on what the data mean. We’re not even close to done with this, because it turns out that facts mean less than we’d thought and do a far worse job of settling matters than we’d hoped.
That’s depressing. As always, I am not saying there are no facts, nor that they don’t matter. I’m just reporting empirically that facts don’t settle arguments the way we were told they would. Yet there is something profoundly wonderful and even hopeful about this case that is so typical and so remarkable.
Margaret Sullivan’s job is difficult in the best of circumstances. But before the Web, it must have been so much more terrifying. She would have been the single point of inquiry as the Times tried to assess a situation in which it has deep, strong vested interests. She would have interviewed Broder and Musk. She would have tried to find someone at the NYT or externally to go over the data Musk supplied. She would have pronounced as fairly as she could. But it would have all been on her. That’s bad not just for the person who occupies that position; it’s a bad way to get at the truth. But it was the best we could do. In fact, most of the purpose of the public editor/ombudsperson position before the Web was simply to reassure us that the Times does not think it’s above reproach.
Now every day we can see just how inadequate any single investigator is for any issue that involves human intentions, especially when money and reputations are at stake. We know this for sure because we can see what an inquiry looks like when it’s done in public and at scale. Of course lots of people who don’t even know that they’re grinding axes say all sorts of mean and stupid things on the Web. But there are also conversations that bring to bear specialized expertise and unusual perspectives, that let us turn the matter over in our hands, hold it up to the light, shake it to hear the peculiar rattle it makes, roll it on the floor to gauge its wobble, sniff at it, and run it through sophisticated equipment perhaps used for other purposes. We do this in public — I applaud Sullivan’s call for Musk to open source the data — and in response to one another.
Our old idea was that the thoroughness of an investigation would lead us to a conclusion. Sadly, it often does not. We are likely to disagree about what went on in Broder’s review, and how well the Tesla S actually performed. But we are smarter in our differences than we ever could be when truth was a lonelier affair. The intelligence isn’t in a single conclusion that we all come to — if only — but in the linked network of views from everywhere.
There is a frustrating beauty in the way that knowledge scales.
Of course Aaron was a legendary prodigy of a hacker in the sense of someone who can build anything out of anything. But that’s not what the media mean when they call him a hacker. They’re talking about his downloading of millions of scholarly articles from JSTOR, and there’s a slight chance they’re also thinking about his making available millions of pages of federal legal material as part of the RECAP project.
Neither the JSTOR nor RECAP downloads were cases of hacking in the sense of forcing your way into a system by getting around technical barriers. Framing Aaron’s narrative — his life as those who didn’t know him will remember it — as that of a hacker is a convenient untruth.
As Alex Stamos makes clear, there were no technical, legal, or contractual barriers preventing Aaron from downloading as many articles in the JSTOR repository as he wanted, other than the possibility that Aaron was trespassing, and even that is questionable. (The MIT closet he “broke into” to gain better access to the network apparently was unlocked.) Alex writes:
Aaron did not “hack” the JSTOR website for all reasonable definitions of “hack”. Aaron wrote a handful of basic python scripts that first discovered the URLs of journal articles and then used curl to request them. Aaron did not use parameter tampering, break a CAPTCHA, or do anything more complicated than call a basic command line tool that downloads a file in the same manner as right-clicking and choosing “Save As” from your favorite browser.
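To make concrete just how ordinary this is: a script of the kind Stamos describes amounts to a URL-discovery step plus a download loop. Here is a minimal, hypothetical sketch. It is not Aaron’s actual code, it uses Python’s standard library where Stamos says Aaron shelled out to curl, and the site and URL pattern are invented for illustration:

```python
# Hypothetical sketch of the kind of script Stamos describes: no parameter
# tampering, no CAPTCHA-breaking, just plain HTTP GETs, the same requests
# a browser's "Save As" would make. The site and URL pattern are invented.
import re
import time
import urllib.request

INDEX_URL = "https://journal.example.org/contents"  # invented placeholder

# Step 1: discover article URLs by scanning an index page for links.
index_html = urllib.request.urlopen(INDEX_URL).read().decode("utf-8")
article_paths = re.findall(r'href="(/article/\d+\.pdf)"', index_html)

# Step 2: fetch each article with an ordinary GET request and save it.
for path in article_paths:
    url = "https://journal.example.org" + path
    filename = path.rsplit("/", 1)[-1]
    with open(filename, "wb") as f:
        f.write(urllib.request.urlopen(url).read())
    time.sleep(1)  # pause between requests
```

Nothing in a sketch like this circumvents a technical barrier, which is Stamos’s point.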
Clearly, this is not what JSTOR had in mind, but it is also something its contract permitted and its technology did nothing to prevent. As Brewster Kahle wrote yesterday:
When I was at MIT, if someone went to hack the system, say by downloading databases to play with them, might be called a hero, get a degree, and start a company — but they called the cops on him. Cops. MIT used to protect us when we transgressed the traditional.
As for RECAP, the material Aaron made available was all in the public domain.
Aaron was not a hacker. He was a builder:
Aaron helped build the RSS standard that enabled a rush of information and ideas — what we blandly call “content” — to be distributed, encountered, and re-distributed. [source]
Aaron did the initial architecture of CreativeCommons.org, promoting a license that removes the friction from the reuse of copyrighted materials. [source]
Aaron did the initial architecture of the Open Library, a source of and about books open to the world. [source]
Aaron played an important role in spurring the grassroots movement that stopped SOPA, a law that would have increased the power of the Hollywood-DC alliance to shut down Web sites. [source]
Aaron contributed to the success of Reddit, a site now at the heart of the Net’s circulatory system for many millions of us.
Aaron contributed to Markdown, a much simpler way of writing HTML Web pages. (I use it for most of my writing.) [source]
Aaron created Infogami, software that made it easy for end-users to create Web sites that feature collaboration and self-expression. (Reddit bought Infogami.) [source]
Aaron wrote web.py, which he described as “a free software web application library for Python. It makes it easier to develop web apps in Python by handling a lot of the Web-related stuff for you. Reddit was built using it, for example.” (In that interview you’ll hear Aaron also talk about his disgust at the level of misogyny in the tech world.) [source]
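To give a concrete sense of the “Web-related stuff” that web.py handles for you, here is roughly the library’s canonical hello-world app: you supply a URL-to-class mapping and a GET handler, and the library supplies the routing, request parsing, and a development server. (This sketch follows web.py’s public documentation, not Reddit’s code.)

```python
# Roughly web.py's canonical hello-world app.
import web

urls = ("/", "hello")  # map the root URL to the class named "hello"

class hello:
    def GET(self):
        return "Hello, world!"

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()  # serves on port 8080 by default
```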
The mainstream media know that their non-technical audience will hear the term “hacker” in its black hat sense. We need to work against this, not only for the sake of Aaron’s memory, but so that his work is celebrated, encouraged, and continued.
An article published in Science on Thursday, securely locked behind a paywall, paints a mixed picture of science in the age of social media. In “Science, New Media, and the Public,” Dominique Brossard and Dietram A. Scheufele urge action so that science will be judged on its merits as it moves through the Web. That’s a worthy goal, and it’s an excellent article. Still, I read it with a sense that something was askew. I think ultimately it’s something like an old vs. new media disconnect.
The authors begin by noting research that suggests that “online science sources may be helping to narrow knowledge gaps” across educational levels[1]. But all is not rosy. Scientists are going to have “to rethink the interface between the science community and the public.” They point to three reasons.
First, the rise of online media has reduced the amount of time and space given to science coverage by traditional media [2].
Second, the algorithmic prioritizing of stories takes editorial control out of the hands of humans who might make better decisions. The authors point to research that “shows that there are often clear discrepancies between what people search for online, which specific areas are suggested to them by search engines, and what people ultimately find.” The results provided by search engines “may all be linked in a self-reinforcing informational spiral…”[3] This leads them to ask an important question:
Is the World Wide Web opening up a new world of easily accessible scientific information to lay audiences with just a few clicks? Or are we moving toward an online science communication environment in which knowledge gain and opinion formation are increasingly shaped by how search engines present results, direct traffic, and ultimately narrow our informational choices? Critical discussions about these developments have mostly been restricted to the political arena…
Third, we are debating science differently because the Web is social. As an example they point to the fact that “science stories usually…are embedded in a host of cues about their accuracy, importance, or popularity,” from tweets to Facebook “Likes.” “Such cues may add meaning beyond what the author of the original story intended to convey.” The authors cite a recent conference [4] where the tone of online comments turned out to affect how people took the content. For example, an uncivil tone “polarized the views….”
They conclude by saying that we’re just beginning to understand how these Web-based “audience-media interactions” work, but that the opportunity and risk are great, so more research is greatly needed:
Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate.
I agree with so much of this article, including its call for action, yet it felt odd to me that scientists would be surprised to learn that the Web does not convey scientific information in a balanced and impartial way. You are only surprised by this if you think that the Web is a medium. A medium is that through which content passes. A good medium doesn’t corrupt the content; it conveys signal with a minimum of noise.
But unlike any medium since speech, the Web isn’t a passive channel for the transmission of messages. Messages only move through the Web because we, the people on the Web, find them interesting. For example, I’m moving (infinitesimally, granted) this article by Brossard and Scheufele through the Web because I think some of my friends and readers will find it interesting. If someone who reads this post then tweets about it or about the original article, it will have moved a bit further, but only because someone cared about it. In short, we are the medium, and we don’t move stuff that we think is uninteresting and unimportant. We may move something because it’s so wrong, because we have a clever comment to make about it, or even because we misunderstand it, but without our insertion of ourselves in the form of our interests, it is inert.
So, the “dynamics of online communication systems” are indeed going to have “a stronger impact on public views about science” than the scientific research itself does because those dynamics are what let the research have any impact beyond the scientific community. If scientific research is going to reach beyond those who have a professional interest in it, it necessarily will be tagged with “meaning beyond what the author of the original story intended to convey.” Those meanings are what we make of the message we’re conveying. And what we make of knowledge is the energy that propels it through the new system.
We therefore cannot hope to peel the peer-to-peer commentary from research as it circulates broadly on the Net, not that the Brossard and Scheufele article suggests that. Perhaps the best we can do is educate our children better, and encourage more scientists to dive into the social froth as the place where their research is having its broadest effect.
Notes, copied straight from the article:
[1] M. A. Cacciatore, D. A. Scheufele, E. A. Corley, Public Underst. Sci. 10.1177/0963662512447606 (2012).
[2] C. Russell, in Science and the Media, D. Kennedy, G. Overholser, Eds. (American Academy of Arts and Sciences, Cambridge, MA, 2010), pp. 13–43.
[3] P. Ladwig et al., Mater. Today 13, 52 (2010).
[4] P. Ladwig, A. Anderson, abstract, Annual Conference of the Association for Education in Journalism and Mass Communication, St. Louis, MO, August 2011; www.aejmc.com/home/2011/06/ctec-2011-abstracts.
Amusingly, at 10am this morning, I was giving my talk here at the Aspen Ideas Festival about knowledge in the age of the internet. I’d asked someone to interrupt when the news came through. So at 10:05, someone said: “The court overturned the individual mandate!” And someone else said, “No, they upheld it.” It turns out that CNN got it wrong, but a blogger got it right. Pretty much made one of my points right then.
Anyway, pretty amazing outcome.
And, please, let’s NOT all go out and get sick! Stay well and healthy, my friends.
Secret Service scandal eclipses Obama trip

That’s the headline in USAToday. It’s typical of the news coverage of the Secret Service scandal before the President arrived in Colombia.
Let me fix that for you:
Media’s decision to focus on the Secret Service scandal eclipses Obama trip
The eclipse has only to do with how the media have chosen to cover the trip. And with headlines like the one in USAToday, the circle is complete: the media reporting on the media’s coverage as if they were actually reporting an event.
Mathew’s point is that linking is a good journalistic practice, even if the author of the second article independently confirmed the information in the first, as happened in this case. Mathew thinks it’s a matter of trust: if the repeater gets caught at it, it would indeed erode trust. Of course, they probably won’t get caught, and even if you did read the WSJ article after reading the TechCrunch post, you’d probably assume that the news was coming from a common source.
I think there’s another reason why reporters ought to link to their, um, inspirations: Links are a public good. They create a web that is increasingly rich, useful, diverse, and trustworthy. We should all feel an obligation to be caretakers of and contributors to this new linked public.
And there’s a further reason. In addition to building this new infrastructure of curiosity, linking is a small act of generosity that sends people away from your site to some other that you think shows the world in a way worth considering. Linking is a public service that reminds us how deeply we are social and public creatures.
Which I think helps explain why newspapers often are not generous with their links. A paper like the WSJ believes its value — as well as its self-esteem — comes from being the place you go for news. It covers the stories worth covering, and the stories tell you what you need to know. It is thus a stopping point in the ecology of information. And that’s the operational definition of authority: the last place you visit when you’re looking for an answer. If you are satisfied with the answer, you stop your pursuit of it. Take the links out and you think you look like more of an authority. To this mindset, links are a sign of weakness.
This made more sense when knowledge was paper-based, because in practical terms that’s pretty much how it worked: You got your news rolled up and thrown onto your porch once a day, and if you wanted more information about an article in it, you were pretty much SOL. Paper masked just how indebted the media were to one another. The media have always been an ecology of knowledge, but paper enabled them to pretend otherwise, and to base much of their economic value on that pretense.
Until newspapers are as heavily linked as GigaOm, TechCrunch, and Wikipedia, until newspapers revel in pointing away from themselves, they are depending on a value that was always unreal and now is unsustainable.
I think in a sense it’s true that the golden age of blogging is over, but that’s a good thing. And not because of anything bad about blogging. On the contrary…
Blogging began when your choices were (roughly) to dive into the never-ending, transient conversational streams of the Internet, or create a page with such great effort that you didn’t want to go back and change it, and few could bother to create a different page in order to comment on yours. Blogs let us post whenever we had something to say, and came with commenting built in. The Net was already conversational; blogs let us make static posts — articles, home pages — conversational.
Thanks to that, we now take for granted that posts will be conversational. The golden age ended because when a rare metal is everywhere, it’s no longer rare. And in this case, that’s a great thing.
Yes, that metaphor sucks. An ecosystem is a better one. Since the Web began, we’ve been filling in the environmental niches. We now have many more ways to talk with one another. Blogs continue to be an incredibly important player in this ecosystem; think of how rapidly knowledge and ideas have become part of our new public thanks to blogs. But the point of an ecosystem metaphor is that the goodness comes from the complexity and diversity of participants and their relations. I therefore do not mourn the passing of the golden age of any particular modality of conversation, so long as that means other modalities have joined in the happy fray.
Mathew Ingram has a provocative post at Gigaom defending HuffingtonPost and its ilk from the charge that they over-aggregate news to the point of thievery. I’m not completely convinced by Mathew’s argument, but that’s because I’m not completely convinced by any argument about this.
It’s a very confusing issue if you think of it from the point of view of who owns what. So, take the best of cases, in which HuffPo aggregates from several sources and attributes the reportage appropriately. It’s important to take a best case since we’ll all agree that if HuffPo lifts an article in toto without attribution, it’s simple plagiarism. But that doesn’t tell us if the best cases are also plagiarisms. To make it juicier, assume that in one of these best cases, HuffPo relies heavily on one particular source article. It’s still not a slam-dunk case of theft because in this example HuffPo is doing what we teach every schoolchild to do: If you use a source, attribute it.
But, HuffPo isn’t a schoolchild. It’s a business. It’s making money from those aggregations. Ok, but we are fine in general with people selling works that aggregate and attribute. Non-fiction publishing houses that routinely sell books that have lots of footnotes are not thieves. And, as Mathew points out, HuffPo (in its best cases) is adding value to the sources it aggregates.
But, HuffPo’s policy even in its best case can enable it to serve as a substitute for the newspapers it’s aggregating. It thus may be harming the sources it’s using.
And here we get to what I think is the most important question. If you think about the issue in terms of theft, you’re thrown into a moral morass where the metaphors don’t work reliably. Worse, you may well mix in legal considerations that are not only hard to apply, but that we may not want to apply given the new-ness (itself arguable) of the situation.
But, I find that I am somewhat less conflicted about this if I think about it in terms of what direction we’d like to nudge our world. For example, when it comes to copyright I find it helpful to keep in mind that a world full of music and musicians is better than a world in which music is rationed. When it comes to news aggregation, many of us will agree that a world in which news is aggregated and linked widely through the ecosystem is better than one in which you—yes, you, since a rule against HuffPo aggregating sources wouldn’t apply just to HuffPo—have to refrain from citing a source for fear that you’ll cross some arbitrary limit. We are a healthier society if we are aggregating, re-aggregating, contextualizing, re-using, evaluating, and linking to as many sources as we want.
Now, beginning by thinking where we want the world to be —which, by the way, is what this country’s Founders did when they put copyright into the Constitution in the first place: “to promote the progress of science and useful arts”—is useful but limited, because to get the desired situation in which we can aggregate with abandon, we need the original journalistic sources to survive. If HuffPo and its ilk genuinely are substituting for newspapers economically, then it seems we can’t get to where we want without limiting the right to aggregate.
And that’s why I’m conflicted. I don’t believe that even if all rights to aggregate were removed (which no one is proposing), newspapers would bounce back. At this point, I’d guess that the Net generation is interested in news mainly insofar as it’s woven together and woven into the larger fabric. Traditional reportage is becoming valued more as an ingredient than a finished product. It’s the aggregators—the HuffingtonPosts of the world, but also the millions of bloggers, tweeters and retweeters, Facebook likers and Google plus-ers, redditors and slashdotters, BoingBoings and Ars Technicas—who are spreading the news by adding value to it. News now only moves if we’re interested enough in it to pass it along. So, I don’t know how to solve journalism’s deep problems with its business models, but I can’t imagine that limiting the circulation of ideas will help, since in this case, the circulatory flow is what’s keeping the heart beating.
Terry Heaton provides some broad context in a provocative post about the coming year of media turmoil. He writes in an email:
2012 is a dangerous year for all mass media, because decay in our core competency will again be hidden by record revenues (in some cases) due to what promises to be a huge political year. Despite advances in communications’ methods, politicians fall back on the tried and true during elections, and that means big money for an industry that’s struggling. The money will distract us from the real issues, and before you know it, 2013 will be here. It’s time to do something completely different.
The actual post is about the media issues the political year will distract us from.
Erik Martin, the general manager of Reddit, explains what’s so special about the discussion site. I’m particularly interested in the nature of authority on the site, and its introduction of new journalistic rhetorical forms.