Glenn Greenwald mounts a mighty and effective defense against the charge leveled by Mark Ames at Pando.com that Greenwald and Laura Poitras are “monopolizing” and “privatizing” the 50,000-200,000 NSA documents entrusted to them by Edward Snowden.
Unlike Greenwald, I do think “it’s a question worth asking,” as Ames puts it — rather weaselly, since his post really is an attempt to supply an answer. It’s worth asking because of the new news venture funded by Pierre Omidyar that has hired Greenwald and Poitras. Greenwald argues (among other things) that the deal has nothing to do with profiting from their access to the Snowden papers; in fact, he says, by the time the venture gets off the ground, there may not be any NSA secrets left to reveal. But one can imagine a situation in which a newspaper hires a journalist with unique access to some highly newsworthy information in order to acquire and control that information. In this case, we have contrary evidence: Greenwald and Poitras have demonstrated their courage and commitment.
Greenwald’s defense overall is, first, that he and Poitras (Bart Gellman plays a lesser role in the article) have not attempted to monopolize the papers so far. On the contrary, they’ve been generous and conscientious in spreading the revelations to papers around the world. Second, getting paid for doing this is how journalism works.
To be fair, Ames’ criticism isn’t simply that Greenwald is making money, but that Omidyar can’t be trusted. I disagree, albeit without pretending to have any particular insight into Omidyar’s (or anyone’s) soul. (I generally have appreciated Omidyar’s work, but so what?) We do have reason to trust Greenwald, however. It’s inconceivable to me that Greenwald would let the new venture sit on NSA revelations for bad reasons.
But I personally am most interested in why these accusations have traction at all.
Before the Web, the charge that Greenwald is monopolizing the information wouldn’t even have made sense because there wasn’t an alternative. Yes, he might have turned the entire cache over to The Guardian or the New York Times, but then would those newspapers look like monopolists? No, they’d look like journalists, like stewards. Now there are options. Snowden could have posted the cache openly on a Web site. He could have created a torrent so that the documents would circulate forever. He could have given them to Wikileaks to curate. He could have sent them to 100 newspapers simultaneously. He could have posted them in encrypted form and given the key to the Dalai Lama or Jon Stewart. There is no end of options.
But Snowden didn’t. Snowden wanted the information curated, and redacted when appropriate. He trusted his hand-picked journalists more than any newspaper to figure out what “appropriate” means. We might disagree with his choice of method or of journalists, but we can understand it. The cache needs editing, contextualization, and redaction so that we understand it, and so that the legitimate secrets of states are preserved. (Are there legitimate state secrets? Let me explain: Yes.) Therefore, it needs stewardship.
Not so incidentally, the fact that we understand without a hiccup why Snowden entrusted individual journalists with the information, rather than giving it to even the most prestigious of newspapers, is another convincing sign of the collapse of our institutions.
It’s only because we have so many other options that entrusting the cache to journalists committed to stewarding it into the public sphere could ever be called “monopolizing” it. The word shouldn’t make any sense to us in this environment, yet it is having enough traction that Greenwald reluctantly wrote a long post defending himself. That the three recipients of the Snowden cache have been publishing it in newspapers all over the world makes them much less “monopolists” than traditional reporters are. Greenwald only needed to defend himself from this ridiculous charge because we now have a medium that can do what was never before possible: immediately and directly publish sets of information of any size. And we have a culture (with which I happily and proudly associate myself) that says openness is the default. But defaults were made to be broken. That’s why they’re defaults and not laws of nature or morality.
Likewise, when Ames criticizes Greenwald for profiting from these secrets because he gets paid as a journalist (which is separate from the criticism that working for Omidyar endangers the info — a charge I find non-credible), the charge makes even the slightest sense only because of the Web’s culture of Free, which, again, I am greatly enthusiastic about. As an institution of democracy, one might hope that newspapers would be as free as books in the public library — which is to say, the costs are hidden from the user — but it’s obvious what the problems are with government-funded news media. So, journalists get paid by the companies that hire them, and this by itself could only ever look like a criticism in an environment where Free is the default. We now have that environment, even if enabling journalism is one of the places where Free just doesn’t do the entire job.
That the charge that Glenn Greenwald is monopolizing or privatizing the Snowden information is even comprehensible to us is evidence of just how thoroughly the Web is changing our defaults and our concepts. Many of our core models are broken. We are confused. These charges are further proof, as if we needed it.
Tagged with: 2b2k
Date: December 1st, 2013 dw
Yesterday I participated as a color commentator in a 90-minute debate between Clive Thompson [twitter:pomeranian99] and Steve Easterbrook [twitter:smeasterbrook], put on by the CBC’s Q program. The topic was “Does the Net Make Us Smart or Stupid?” It airs today, and you can hear it here.
It was a really good discussion between Clive and Steve, without any of the trumped up argumentativeness that too often mars this type of public conversation. It was, of course, too short, but with a topic like this, we want it to bust its bounds, don’t we?
My participation was minimal, but that’s why we have blogs, right? So, here are two points I would have liked to pursue further.
First, if we’re going to ask if the Net makes us smart or stupid, we have to ask who we’re talking about. More exactly, who in what roles? So, I’d say that the Net’s made me stupider in that I spend more of my time chasing down trivialities. I know more about Miley Cyrus than I would have in the old days. Now I find that I’m interested in the Miley Phenomenon — the media’s treatment, the role of celebrity, the sexualization of everything, etc. — whereas before I would never have felt it worth a trip to the library or the purchase of an issue of Tiger Beat or whatever. (Let me be clear: I’m not that interested. But that’s the point: it’s all now just a click away.)
On the other hand, if you ask if the Net has made scholars and experts smarter, I think the answer has to be an almost unmitigated yes. Find me a scholar or expert who would turn off the Net when pursuing her topic. All discussions of whether the Net makes us smarter I think should begin by considering those who are in the business of being smart, as we all are at some points during the day.
Now, that’s not really as clear a distinction as I’d like. It’s possible to argue that the Net’s made experts stupider because it’s enabled people to become instant “experts” on topics. (Hat tip to Visiona-ary [twitter:0penCV] who independently raised this on Twitter.) We can delude ourselves into thinking we’re experts because we’ve skimmed the Wikipedia article or read an undergrad’s C- post about it. But is it really a bad thing that we can now get a quick gulp of knowledge in a field that we haven’t studied and probably never will study in depth? Only if we don’t recognize that we are just skimmers. At that point we find ourselves seriously arguing with a physicist about information’s behavior at the event horizon of a black hole as if we actually knew what we were talking about. Or, worse, we find ourselves disregarding our physician’s advice because we read something on the Internet. Humility is 95% of knowledge.
Here’s a place where learning some of the skills of journalists would be helpful for us all. (See Dan Gillmor‘s MediActive for more on this.) After all, the primary skill of a particular class of journalists is their ability to speak for experts in a field in which the journalist is not her/himself expert. Journalists, however, know how to figure out whom to consult, and don’t confuse themselves with experts. Modern media literacy means learning some of the skills and all of the humility of good journalists.
Second, Clive Thompson made the excellent and hugely important point that knowledge is now becoming public. In the radio show, I tried to elaborate on that in a way that I’m confident Clive already agrees with by saying that it’s not just public, it’s social, and not just social, but networked. Jian Ghomeshi, the host, raised the question of misinformation on the Net by pointing to Reddit‘s misidentification of one of the Boston bombers. He even played a touching and troubling clip by the innocent person’s brother talking about the permanent damage this did to the family. The worry is that now, every time you look up “Sunil Tripathi” on the Web, you’ll see him misidentified as a suspect in the bombing.
I responded ineffectively by pointing to Judith Miller’s year of misreporting for the NY Times that helped move us into a war, to make the point that all media are error prone. Clive did a better job by citing a researcher who fact checked an entire issue of a newspaper and uncovered a plethora of errors (mainly small, I assume) that were never corrected and that are preserved forever in the digital edition of that paper.
But I didn’t get a chance to say the thing that I think matters more. So, go ahead and google “Sunil Tripathi”. You will have to work at finding anything that identifies him as the Boston Bomber. Instead, the results are about his being wrongly identified, and about his suicide (which apparently occurred before the false accusations were made).
None of this excuses the exuberantly irresponsible way a subreddit (i.e., a topic-based discussion) at Reddit accused him. And it’s easy to imagine a case in which such a horrible mistake could have driven someone to suicide. But that’s not my point. My point here is twofold.
First, the idea that false ideas once published on the Net continue forever uncorrected is not always borne out. If we’re taking as our example ideas that are clearly wrong and are important, the corrections will usually be more obvious and available to us than in the prior media ecology. (That doesn’t relieve us of the responsibility of getting facts right in the first place.)
Second, this is why I keep insisting that knowledge now lives in networks the way it used to live in books or newspapers. You get the truth not in any single chunk but in the web of chunks that are arguing, correcting, and arguing about the corrections. This, however, means that knowledge is an argument, or a conversation, or is more like the webs of contention that characterize the field of living scholarship. There was an advantage to the old ecosystem in which there was a known path to authoritative opinions, but there were problems with that old system as well.
That’s why it irks me to take any one failure, such as the attempt to crowdsource the identification of the Boston murderers, as a trump card in the argument the Net makes us stupider. To do so is to confuse the Net with an aggregation of public utterances. That misses the transformative character of the networking of knowledge. The Net’s essential character is that it’s a network, that it’s connected. We therefore have to look at the network that arose around those tragically wrong accusations.
So, search for Sunil Tripathi at Reddit.com and you will find a list of discussions at Reddit about how wrong the accusation was, how ill-suited Reddit is for such investigations, and how the ethos and culture of Reddit led to the confident condemning of an innocent person. That network of discussion — which obviously extends far beyond Reddit’s borders — is the real phenomenon…”real” in the sense that the accusations themselves arose from a network and were very quickly absorbed into a web of correction, introspection, and contextualization.
The network is the primary unit of knowledge now. For better and for worse.
Tagged with: 2b2k
Date: October 23rd, 2013 dw
I’m at a Riptide forum at Harvard’s Shorenstein Center on the “digital disruption of the news.” The place is packed. Digital Riptide consists of 60 interviews. The panel discussion is with Tim Armstrong, AOL; Caroline Little, Newspaper Association of America; Arthur Sulzberger Jr., The New York Times. It’s moderated by Martin Nisenholtz.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
[I came in a few minutes late...]
Caroline Little: Audiences aren’t the problem. They’re growing. Revenue is the challenge.
Arthur Sulzberg: “We’re losing our first teeth and growing our new teeth. It’s painful. But what’s coming will be bigger” in reach and impact.
Q [Martin Nisenholtz]: Tim, you paid $315M for The Huffington Post. Jeff Bezos paid $250M for The Washington Post. Did Bezos get a better deal than you?
Tim Armstrong: No, Huffpo is worth more than people thought. It’s gone from 0 to 100M video views, for example. It’s got 100% digital DNA. I once owned a Boston newspaper, but saw Mosaic at MIT and knew I had to get out of that business. “I think I got a great deal with Huffpo and I think Jeff got a good deal on the WaPo, depending on what he does with it.”
Q: Caroline, you ran the WaPo digital division. Was it inevitable that they’d sell to Bezos or could they have done something to change that future?
CL: It wasn’t inevitable. Now newspapers really have to understand their audiences. Taking it private will remove some pressure. The Grahams were trying to put the WaPo in the best possible place for the future, and that took courage.
Q: What is the nature of authority in a world where there are tens of thousands of verticalized publications on every conceivable topic?
AS: Authority is still about accuracy, breadth, calling out your own mistakes, and having experienced people on the ground, not parachuting them in. Few news organizations have bureaus around the world or even the country. The joy of the digital era is the speed and the reach, and the ability to take in points of view very quickly and bring them into a story line. It’s a remarkable opportunity. The downside is clear. Suddenly everyone is looking at the photo of the Boston bomber. Everyone knows it’s him. But it’s not. That kind of accuracy is critical. [I think AS thinks his characterization of authority bolsters the newspaper's case, but I don't think it does. I trust the specialized bloggers/sites on particular topics more than I trust the newspaper. E.g., I get far more in depth and more authoritative coverage of telecom policy from blogs and mailing lists than I do from the NYT. As for Reddit's misidentification of the Boston bomber: Yes, it was a dreadful mistake. But it was done transparently in public and was immediately corrected. <cough>Judith Miller</cough>.]
Q: Tim, you were interviewed and spoke enthusiastically about Patch. Since then you’ve downsized it. What’s so hard about local journalism?
Tim: We rolled it out to 900 of the best GDP [I think] communities in the US. It was about the authoritative nature of local journalism. Patch’s expansion was rapid. This summer we realized there are 500 Patches that work, and 400 with traffic but we’re not part of that business model. Patch will continue to go on post-partnerships because there’s such an acute need for local info. We’ll probably do partnerships. AOL will own some, and traditional media companies will own some.
Caroline: 85% of all media coverage of stories starts with newspapers.
Q: There’s never been a large, truly international paper. What’s the model for going global with a US news brand?
AS: The International Herald Tribune is owned by the NYT. We’re re-branding it in October as the International NYT. This is just a step. We’re fixing things; e.g., we didn’t used to let you subscribe using Euro currency. But: I met with a Chinese general. She began by talking in an angry way. We had just begun to charge for Web access, and every morning she’d go to the NYT site but it wouldn’t accept her credit card. We fixed it, but the point is that a Chinese general was going to the NYT site the first thing in the morning. What an opportunity!
Q: Young people don’t seem to be willing to pay for media on the Web. As they mature, will they be willing to pay for a digital subscription to the NYT?
AS: More and more young people, and all people, are willing to pay for an experience they value on the Web. Thank you, Steve Jobs! But it’s not as if 14 yr olds were ever buying newspapers.
Q: Tim, you create some content, and you do deals to provide access to your audience. How do you decide what you’ll cover and what others will?
TA: Our theory is that people care about a limited amount of things. As they get older, they spend more time with things that matter. We want to be the most human-based company in terms of what people care about. E.g., we’re running TechCrunch Disrupt now. 3,000 people. Just about all the major figures in the tech space. This is an influencer space that we have a major major major amount of mindshare in. The second generation of our strategy is to build out massive content partnerships. And a giant B2B strategy; we service about 40K other publishers with video, ads and content-sharing. Our content theory: let’s invest in the most important areas of journalism and content, build B2B, and have relationships with people in the most important areas of their lives. HuffPo is an influencer. It’s a global info source. It’s a trusted brand. E.g., see our coverage of the selection of the new Pope.
Q: Will reporters inevitably have less time to research a story?
AS: The pace has certainly picked up. But we’re still engaged in long form journalism. E.g., Snowfall.
TA: 30-40% of the traffic is on mobiles. Mobile adds consumption, typically about 30%. This changes how news works: If you don’t have a brand like the NYT, you’ll mix low appeal with high appeal.
CL: Newsrooms are smaller, but they’d send 15 people to cover the Olympics. Do you really need that? Many newspapers now are investing in longer-form reporting.
Q: What makes global journalism work? The size of the city?
CL: Trust and authenticity. People still trust newspapers.
Q: What do you look for in journalists’ skills?
AS: We didn’t focus fast enough on hiring engineers to build systems and tools.
Q: TV broadcasters are using time efficiently. E.g., Netflix released an entire series at once. How are you experimenting with how to bundle and release investigative reporting?
AS: We’re always experimenting. E.g., We printed Snowfall as a full section, but the experience on the Web was so immensely powerful because of the video, the sound, etc.
TA: We look at how you disrupt readers’ behavior. I always say you can’t beat ESPN Sports Center by being 5% better. You have to be 75% better. Also you need distribution partnerships; that disrupts when your content is released.
Q: [couldn't hear]
TA: If you look at how people use phones and tablets, the fact is that the average TV is 22 mins of content and 8 mins of commercial. When you watch how people use content, you’ll see trusted brands and faster content. [I had trouble hearing this.] People want to be told what they want to see. The future will be very curated and disruptively time-based.
Q: How do you view social media? Does the capacity to share overcome the limitations of 140 characters?
CL: I think Twitter is like a caption to a photograph. If it’s engaging, you’ll go find more about it.
AS: Twitter is a powerful tool, both in and out.
TA: Twitter is best known for short content, but they’re going to be changing that.
CL: Twitter is great for sources.
AS: This isn’t new. People said you can’t trust what people would say over a telephone wire. Telegrams were thought by some to be the death of newspapers. Twitter is a tool that we’re all getting better and better at using.
Q: From Twitter: How do you choose your political angle?
TA: We have a news chooser that lets you customize the news. HuffPo started more with a political angle. Now there are lots of forums set up for people with different political views. But we [AOL? HuffPo?] have our own voice.
AS: The Net is bringing us back to the written word. Radio, TV, telephone all took us away from that. We’re learning that using any single method will fail.
Q: Tim, you said “not just everyone can be a journalist.” How about bloggers who steal content?
TA: Anyone can be a journalist if they want to be. But consumers are smart. They know who’s stealing information and who isn’t. What you see happening in the blogging community are people taking advantage of situations to be disruptive to gain audience. I would not undercut the ability of people building blogs on specific topics disrupting newspapers.
Q: 5B people may be coming on line in the next decades. How does that change the target audience for online journalism?
The possibilities of our growth and the value of the quality of the info we can provide are immense.
TA: As the developing world comes online, they’ll come online with higher bandwidth.
Tagged with: blogs
Date: September 10th, 2013 dw
I don’t care about expensive electric sports cars, but I’m fascinated by the dustup between Elon Musk and the New York Times.
On Sunday, the Times ran an article by John Broder on driving the Tesla S, an all-electric car made by Musk’s company, Tesla. The article was titled “Stalled Out on Tesla’s Electric Highway,” which captured the point quite concisely.
Musk on Wednesday in a post on the Tesla site contested Broder’s account, and revealed that every car Tesla lends to a reviewer has its telemetry recorders set to 11. Thus, Musk had the data that proved that Broder was driving in a way that could have no conceivable purpose except to make the Tesla S perform below spec: Broder drove faster than he claimed, drove circles in a parking lot for a while, and didn’t recharge the car to full capacity.
Boom! Broder was caught red-handed, and it was data that brung him down. The only two questions left were why did Broder set out to tank the Tesla, and would it take hours or days for him to be fired?
Rebecca Greenfield at Atlantic Wire took a close look at the data — at least at the charts and maps that express the data — and evaluated how well they support each of Musk’s claims. Overall, not so much. The car’s logs do seem to contradict Broder’s claim to have used cruise control. But the mystery of why Broder drove in circles in a parking lot seems to have a reasonable explanation: he was trying to find exactly where the charging station was in the service center.
But we’re not done. Commenters on the Atlantic piece have both taken it to task and provided some explanatory hypotheses. Greenfield has interpolated some of the more helpful ones, as well as updating her piece with testimony from the tow-truck driver, and more.
But we’re still not done. Margaret Sullivan [twitter:sulliview], the NYT “public editor” — a new take on what in the 1960s we started calling “ombudspeople” (although actually in the ’60s we called them “ombudsmen”) — has jumped into the fray with a blog post that I admire. She’s acting like a responsible adult by withholding judgment, and she’s acting like a responsible webby adult by talking to us even before all the results are in, acknowledging what she doesn’t know. She’s also been using social media to discuss the topic, and even to try to get Musk to return her calls.
Now, this whole affair is both typical and remarkable:
It’s a confusing mix of assertions and hypotheses, many of which are dependent on what one would like the narrative to be. You’re up for some Big Newspaper Schadenfreude? Then John Broder was out to do dirt to Tesla for some reason your own narrative can supply. You want to believe that old dinosaurs like the NYT are behind the curve in grasping the power of ubiquitous data? Yup, you can do that narrative, too. You think Elon Musk is a thin-skinned capitalist who’s willing to destroy a man’s reputation in order to protect the Tesla brand? Yup. Or substitute “idealist” or “world-saving environmentally-aware genius,” and, yup, you can have that narrative too.
Not all of these narratives are equally supported by the data, of course — assuming you trust the data, which you may not if your narrative is strong enough. Data signals but never captures intention: Was Broder driving around the parking lot to run down the battery or to find a charging station? Nevertheless, the data do tell us how many miles Broder drove (apparently just about the amount that he said) and do nail down (except under the most bizarre conspiracy theories) the actual route. Responsible adults like you and me are going to accept the data and try to form the story that “makes the most sense” around them, a story that likely is going to avoid attributing evil motives to John Broder and evil conspiratorial actions by the NYT.
But the data are not going to settle the hash. In fact, we already have the relevant numbers (er, probably) and yet we’re still arguing. Musk produced the numbers thinking that they’d bring us to accept his account. Greenfield went through those numbers and gave us a different account. The commenters on Greenfield’s post are arguing yet more, sometimes casting new light on what the data mean. We’re not even close to done with this, because it turns out that facts mean less than we’d thought and do a far worse job of settling matters than we’d hoped.
That’s depressing. As always, I am not saying there are no facts, nor that they don’t matter. I’m just reporting empirically that facts don’t settle arguments the way we were told they would. Yet there is something profoundly wonderful and even hopeful about this case that is so typical and so remarkable.
Margaret Sullivan’s job is difficult in the best of circumstances. But before the Web, it must have been so much more terrifying. She would have been the single point of inquiry as the Times tried to assess a situation in which it has deep, strong vested interests. She would have interviewed Broder and Musk. She would have tried to find someone at the NYT or externally to go over the data Musk supplied. She would have pronounced as fairly as she could. But it would have all been on her. That’s bad not just for the person who occupies that position, it’s a bad way to get at the truth. But it was the best we could do. In fact, most of the purpose of the public editor/ombudsperson position before the Web was simply to reassure us that the Times does not think it’s above reproach.
Now every day we can see just how inadequate any single investigator is for any issue that involves human intentions, especially when money and reputations are at stake. We know this for sure because we can see what an inquiry looks like when it’s done in public and at scale. Of course lots of people who don’t even know that they’re grinding axes say all sorts of mean and stupid things on the Web. But there are also conversations that bring to bear specialized expertise and unusual perspectives, that let us turn the matter over in our hands, hold it up to the light, shake it to hear the peculiar rattle it makes, roll it on the floor to gauge its wobble, sniff at it, and run it through sophisticated equipment perhaps used for other purposes. We do this in public — I applaud Sullivan’s call for Musk to open source the data — and in response to one another.
Our old idea was that the thoroughness of an investigation would lead us to a conclusion. Sadly, it often does not. We are likely to disagree about what went on in Broder’s review, and how well the Tesla S actually performed. But we are smarter in our differences than we ever could be when truth was a lonelier affair. The intelligence isn’t in a single conclusion that we all come to — if only — but in the linked network of views from everywhere.
There is a frustrating beauty in the way that knowledge scales.
Tagged with: 2b2k
Date: February 14th, 2013 dw
Of course Aaron was a legendary prodigy of a hacker in the sense of someone who can build anything out of anything. But that’s not what the media mean when they call him a hacker. They’re talking about his downloading of millions of scholarly articles from JSTOR, and there’s a slight chance they’re also thinking about his making available millions of pages of federal legal material as part of the RECAP project.
Neither the JSTOR nor RECAP downloads were cases of hacking in the sense of forcing your way into a system by getting around technical barriers. Framing Aaron’s narrative — his life as those who didn’t know him will remember it — as that of a hacker is a convenient untruth.
As Alex Stamos makes clear, there were no technical, legal, or contractual barriers preventing Aaron from downloading as many articles in the JSTOR repository as he wanted, other than the possibility that Aaron was trespassing, and even that is questionable. (The MIT closet he “broke into” to gain better access to the network apparently was unlocked.) Alex writes:
Aaron did not “hack” the JSTOR website for all reasonable definitions of “hack”. Aaron wrote a handful of basic python scripts that first discovered the URLs of journal articles and then used curl to request them. Aaron did not use parameter tampering, break a CAPTCHA, or do anything more complicated than call a basic command line tool that downloads a file in the same manner as right-clicking and choosing “Save As” from your favorite browser.
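The two-step pattern Stamos describes — enumerate article URLs, then hand each one to curl — can be sketched in a few lines. This is my illustration, not Aaron’s actual code; the URL scheme and ID range are hypothetical placeholders, not JSTOR’s real ones.

```python
# Illustrative sketch only: enumerate candidate article URLs, then
# build the curl command that would save each one to a file.

def article_urls(base="https://example.org/stable/", ids=range(1, 4)):
    """Yield candidate article URLs from a simple numeric ID scheme."""
    for i in ids:
        yield f"{base}{i}"

def curl_command(url, outfile):
    """Build the curl invocation that saves one URL to a file,
    the command-line equivalent of right-click and "Save As"."""
    return ["curl", "-s", "-o", outfile, url]

commands = [curl_command(u, f"article_{n}.pdf")
            for n, u in enumerate(article_urls(), start=1)]
for cmd in commands:
    print(" ".join(cmd))
    # subprocess.run(cmd) would actually fetch; omitted here.
```

The point of the sketch is how little machinery is involved: a loop over URLs and a stock download tool, nothing that circumvents a technical barrier.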
Clearly, this is not what JSTOR had in mind, but it is also something its contract permitted and its technology did nothing to prevent. As Brewster Kahle wrote yesterday:
When I was at MIT, if someone went to hack the system, say by downloading databases to play with them, might be called a hero, get a degree, and start a company– but they called the cops on him. Cops. MIT used to protect us when we transgressed the traditional.
As for RECAP, the material Aaron made available was all in the public domain.
Aaron was not a hacker. He was a builder:
Aaron helped build the RSS standard that enabled a rush of information and ideas — what we blandly call “content” — to be distributed, encountered, and re-distributed. [source]
Aaron did the initial architecture of CreativeCommons.org, promoting a license that removes the friction from the reuse of copyrighted materials. [source]
Aaron did the initial architecture of the Open Library, a source of and about books open to the world. [source]
Aaron played an important role in spurring the grassroots movement that stopped SOPA, a law that would have increased the power of the Hollywood-DC alliance to shut down Web sites. [source]
Aaron contributed to the success of Reddit, a site now at the heart of the Net’s circulatory system for many millions of us.
Aaron contributed to Markdown, a much simpler way of writing HTML Web pages. (I use it for most of my writing.) [source]
Aaron created Infogami, software that made it easy for end-users to create Web sites that feature collaboration and self-expression. (Reddit bought Infogami.) [source]
Aaron wrote web.py, which he described as “a free software web application library for Python. It makes it easier to develop web apps in Python by handling a lot of the Web-related stuff for you. Reddit was built using it, for example.” (In that interview you’ll hear Aaron also talk about his disgust at the level of misogyny in the tech world.) [source]
Aaron founded Demand Progress and helped found the Progressive Change Campaign Committee, pioneering grassroots political groups. [source]
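To give a sense of the Markdown item above, here are a few lines of Markdown next to the HTML they spare you from typing (a small illustrative sample, not the full syntax):

```markdown
# A heading                    instead of  <h1>A heading</h1>
*emphasis*                     instead of  <em>emphasis</em>
[a link](http://example.com)   instead of  <a href="http://example.com">a link</a>
```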
The mainstream media know that their non-technical audience will hear the term “hacker” in its black-hat sense. We need to work against this, not only for the sake of Aaron’s memory, but so that his work is celebrated, encouraged, and continued.
Aaron Swartz was not a hacker. He was a builder.
Tagged with: aaron swartz
Date: January 13th, 2013 dw
An article published in Science on Thursday, securely locked behind a paywall, paints a mixed picture of science in the age of social media. In “Science, New Media, and the Public,” Dominique Brossard and Dietram A. Scheufele urge action so that science will be judged on its merits as it moves through the Web. That’s a worthy goal, and it’s an excellent article. Still, I read it with a sense that something was askew. I think ultimately it’s something like an old vs. new media disconnect.
The authors begin by noting research that suggests that “online science sources may be helping to narrow knowledge gaps” across educational levels. But all is not rosy. Scientists are going to have “to rethink the interface between the science community and the public.” They point to three reasons.
First, the rise of online media has reduced the amount of time and space given to science coverage by traditional media.
Second, the algorithmic prioritizing of stories takes editorial control out of the hands of humans who might make better decisions. The authors point to research that “shows that there are often clear discrepancies between what people search for online, which specific areas are suggested to them by search engines, and what people ultimately find.” The results provided by search engines “may all be linked in a self-reinforcing informational spiral…” This leads them to ask an important question:
Is the World Wide Web opening up a new world of easily accessible scientific information to lay audiences with just a few clicks? Or are we moving toward an online science communication environment in which knowledge gain and opinion formation are increasingly shaped by how search engines present results, direct traffic, and ultimately narrow our informational choices? Critical discussions about these developments have mostly been restricted to the political arena…
Third, we are debating science differently because the Web is social. As an example they point to the fact that “science stories usually…are embedded in a host of cues about their accuracy, importance, or popularity,” from tweets to Facebook “Likes.” “Such cues may add meaning beyond what the author of the original story intended to convey.” The authors cite a recent conference paper in which the tone of online comments turned out to affect how people took the content. For example, an uncivil tone “polarized the views….”
They conclude by saying that we’re just beginning to understand how these Web-based “audience-media interactions” work, but that the opportunity and risk are great, so more research is greatly needed:
Without applied research on how to best communicate science online, we risk creating a future where the dynamics of online communication systems have a stronger impact on public views about science than the specific research that we as scientists are trying to communicate.
I agree with so much of this article, including its call for action, yet it felt odd to me that scientists would be surprised to learn that the Web does not convey scientific information in a balanced and impartial way. You are surprised by this only if you think that the Web is a medium. A medium is that through which content passes. A good medium doesn’t corrupt the content; it conveys signal with a minimum of noise.
But unlike any medium since speech, the Web isn’t a passive channel for the transmission of messages. Messages only move through the Web because we, the people on the Web, find them interesting. For example, I’m moving (infinitesimally, granted) this article by Brossard and Scheufele through the Web because I think some of my friends and readers will find it interesting. If someone who reads this post then tweets about it or about the original article, it will have moved a bit further, but only because someone cared about it. In short, we are the medium, and we don’t move stuff that we think is uninteresting and unimportant. We may move something because it’s so wrong, because we have a clever comment to make about it, or even because we misunderstand it, but without our insertion of ourselves in the form of our interests, it is inert.
So, the “dynamics of online communication systems” are indeed going to have “a stronger impact on public views about science” than the scientific research itself does because those dynamics are what let the research have any impact beyond the scientific community. If scientific research is going to reach beyond those who have a professional interest in it, it necessarily will be tagged with “meaning beyond what the author of the original story intended to convey.” Those meanings are what we make of the message we’re conveying. And what we make of knowledge is the energy that propels it through the new system.
We therefore cannot hope to peel the peer-to-peer commentary off research as it circulates broadly on the Net, not that the Brossard and Scheufele article suggests we should. Perhaps the best we can do is educate our children better, and encourage more scientists to dive into the social froth as the place where their research is having its broadest effect.
Notes, copied straight from the article:
M. A. Cacciatore, D. A. Scheufele, E. A. Corley, Public Underst. Sci.; 10.1177/0963662512447606 (2012).
C. Russell, in Science and the Media, D. Kennedy, G. Overholser, Eds. (American Academy of Arts and Sciences, Cambridge, MA, 2010), pp. 13–43.
P. Ladwig et al., Mater. Today 13, 52 (2010).
P. Ladwig, A. Anderson, abstract, Annual Conference of the Association for Education in Journalism and Mass Communication, St. Louis, MO, August 2011; www.aejmc.com/home/2011/06/ctec-2011-abstracts.
Tagged with: 2b2k, social media, too big to know
Date: January 5th, 2013 dw
Holy cow! I did not see that coming.
Amusingly, at 10am this morning, I was giving my talk here at the Aspen Ideas Festival about knowledge in the age of the internet. I’d asked someone to interrupt when the news came through. So at 10:05, someone said: “The court overturned the individual mandate!” And someone else said, “No, they upheld it.” It turns out that CNN got it wrong, but a blogger got it right. Pretty much made one of my points right then.
Anyway, pretty amazing outcome.
And, please, let’s NOT all go out and get sick! Stay well and healthy, my friends.
Tagged with: aspen
Date: June 28th, 2012 dw
Secret Service scandal eclipses Obama trip
That’s the headline in USAToday. It’s typical of the news coverage of the Secret Service scandal before the President arrived in Colombia.
Let me fix that for you:
Media’s decision to focus on the Secret Service scandal eclipses Obama trip
The eclipse has to do only with how the media have chosen to cover the trip. And with headlines like the one in USAToday, the circle is complete: the media reporting on the media’s coverage as if they were actually reporting an event.
Tagged with: journalism
Date: April 16th, 2012 dw
Mathew Ingram at GigaOm has posted the Twitter stream that followed upon his tweet criticizing the Wall Street Journal for running an article based on a post by TechCrunch’s MG Siegler, who responded in an angry post.
Mathew’s point is that linking is a good journalistic practice, even if the author of the second article independently confirmed the information in the first, as happened in this case. Mathew thinks it’s a matter of trust, and if the repeater gets caught at it, it would indeed erode trust. Of course, they probably won’t get caught, and even if you did read the WSJ article after reading the TechCrunch post, you’d probably assume that the news was coming from a common source.
I think there’s another reason why reporters ought to link to their, um, inspirations: Links are a public good. They create a web that is increasingly rich, useful, diverse, and trustworthy. We should all feel an obligation to be caretakers of and contributors to this new linked public.
And there’s a further reason. In addition to building this new infrastructure of curiosity, linking is a small act of generosity that sends people away from your site to some other that you think shows the world in a way worth considering. Linking is a public service that reminds us how deeply we are social and public creatures.
Which I think helps explain why newspapers often are not generous with their links. A paper like the WSJ believes its value — as well as its self-esteem — comes from being the place you go for news. It covers the stories worth covering, and the stories tell you what you need to know. It is thus a stopping point in the ecology of information. And that’s the operational definition of authority: The last place you visit when you’re looking for an answer. If you are satisfied with the answer, you stop your pursuit of it. Take the links out and you think you look like more of an authority. To this mindset, links are a sign of weakness.
This made more sense when knowledge was paper-based, because in practical terms that’s pretty much how it worked: You got your news rolled up and thrown onto your porch once a day, and if you wanted more information about an article in it, you were pretty much SOL. Paper masked just how indebted the media were to one another. The media have always been an ecology of knowledge, but paper enabled them to pretend otherwise, and to base much of their economic value on that pretense.
Until newspapers are as heavily linked as GigaOm, TechCrunch, and Wikipedia, until newspapers revel in pointing away from themselves, they are depending on a value that was always unreal and now is unsustainable.
Tagged with: 2b2k, too big to know
Date: February 26th, 2012 dw
Brian Solis has responded to Jeremiah Owyang’s provocative post declaring the end of the golden age of blogging. Here’s the comment I posted on Brian’s site:
I think in a sense it’s true that the golden age of blogging is over, but that’s a good thing. And not because of anything bad about blogging. On the contrary…
Blogging began when your choices were (roughly) to dive into the never-ending, transient conversational streams of the Internet, or create a page with such great effort that you didn’t want to go back and change it, and few could bother to create a different page in order to comment on yours. Blogs let us post whenever we had something to say, and came with commenting built in. The Net was already conversational; blogs let us make static posts — articles, home pages — conversational.
Thanks to that, we now take for granted that posts will be conversational. The golden age ended because when a rare metal is everywhere, it’s no longer rare. And in this case, that’s a great thing.
Yes, that metaphor sucks. An ecosystem is a better one. Since the Web began, we’ve been filling in the environmental niches. We now have many more ways to talk with one another. Blogs continue to be an incredibly important player in this ecosystem; think of how rapidly knowledge and ideas have become part of our new public thanks to blogs. But the point of an ecosystem metaphor is that the goodness comes from the complexity and diversity of participants and their relations. I therefore do not mourn the passing of the golden age of any particular modality of conversation, so long as that means other modalities have joined in the happy fray.
Blogging isn’t golden! Long live blogging! :)
Tagged with: blogging, social media
Date: December 28th, 2011 dw