In August, I blogged about a mangled quotation supposedly from Mark Twain posted on an interstitial page at Forbes.com. When I tweeted about the post, it was (thanks to John Overholt [twitter:JohnOverholt]) noticed by Quote Investigator [twitter:QuoteResearch], who over the course of a few hours tweeted the results of his investigation. Yes, it was mangled. No, it was not Twain. It was probably Christian Bovee. Quote Investigator, who goes by the pen name Garson O’Toole, has now posted on his site at greater length about this investigation.
It’s been clear from the beginning of the Web that it gives us access to experts on topics we never even thought of. As the Web has become more social, and as conversations have become scaled up, these crazy-smart experts are no longer nestling at home. They’re showing up like genies summoned by the incantation of particular words. We see this at Twitter, Reddit, and other sites with large populations and open-circle conversations.
This is a great thing, especially if the conversational space is engineered to give prominence to the contributions of drive-by experts. We want to take advantage of the fact that if enough people are in a conversation, one of them will be an expert.
Tagged with: 2b2k
Date: October 27th, 2013 dw
Yesterday I participated as a color commentator in a 90-minute debate between Clive Thompson [twitter:pomeranian99] and Steve Easterbrook [twitter:smeasterbrook], put on by the CBC’s Q program. The topic was “Does the Net Make Us Smart or Stupid?” It airs today, and you can hear it here.
It was a really good discussion between Clive and Steve, without any of the trumped up argumentativeness that too often mars this type of public conversation. It was, of course, too short, but with a topic like this, we want it to bust its bounds, don’t we?
My participation was minimal, but that’s why we have blogs, right? So, here are two points I would have liked to pursue further.
First, if we’re going to ask if the Net makes us smart or stupid, we have to ask who we’re talking about. More exactly, who in what roles? So, I’d say that the Net’s made me stupider in that I spend more of my time chasing down trivialities. I know more about Miley Cyrus than I would have in the old days. Now I find that I’m interested in the Miley Phenomenon — the media’s treatment, the role of celebrity, the sexualization of everything, etc. — whereas before I would never have felt it worth a trip to the library or the purchase of an issue of Tiger Beat or whatever. (Let me be clear: I’m not that interested. But that’s the point: it’s all now just a click away.)
On the other hand, if you ask if the Net has made scholars and experts smarter, I think the answer has to be an almost unmitigated yes. Find me a scholar or expert who would turn off the Net when pursuing her topic. All discussions of whether the Net makes us smarter I think should begin by considering those who are in the business of being smart, as we all are at some points during the day.
Now, that’s not really as clear a distinction as I’d like. It’s possible to argue that the Net’s made experts stupider because it’s enabled people to become instant “experts” on topics. (Hat tip to Visiona-ary [twitter:0penCV] who independently raised this on Twitter.) We can delude ourselves into thinking we’re experts because we’ve skimmed the Wikipedia article or read an undergrad’s C- post about it. But is it really a bad thing that we can now get a quick gulp of knowledge in a field that we haven’t studied and probably never will study in depth? Only if we don’t recognize that we are just skimmers. At that point we find ourselves seriously arguing with a physicist about information’s behavior at the event horizon of a black hole as if we actually knew what we were talking about. Or, worse, we find ourselves disregarding our physician’s advice because we read something on the Internet. Humility is 95% of knowledge.
Here’s a place where learning some of the skills of journalists would be helpful for us all. (See Dan Gillmor’s MediActive for more on this.) After all, the primary skill of a particular class of journalists is their ability to speak for experts in a field in which the journalist is not her/himself expert. Journalists, however, know how to figure out whom to consult, and don’t mistake themselves for the experts. Modern media literacy means learning some of the skills and all of the humility of good journalists.
Second, Clive Thompson made the excellent and hugely important point that knowledge is now becoming public. In the radio show, I tried to elaborate on that in a way that I’m confident Clive already agrees with by saying that it’s not just public, it’s social, and not just social, but networked. Jian Ghomeshi, the host, raised the question of misinformation on the Net by pointing to Reddit’s misidentification of one of the Boston bombers. He even played a touching and troubling clip of the innocent person’s brother talking about the permanent damage this did to the family. Now, every time you look up “Sunil Tripathi” on the Web, you’ll see him misidentified as a suspect in the bombing.
I responded ineffectively by pointing to Judith Miller’s year of misreporting for the NY Times that helped move us into a war, to make the point that all media are error prone. Clive did a better job by citing a researcher who fact checked an entire issue of a newspaper and uncovered a plethora of errors (mainly small, I assume) that were never corrected and that are preserved forever in the digital edition of that paper.
But I didn’t get a chance to say the thing that I think matters more. So, go ahead and google “Sunil Tripathi”. You will have to work at finding anything that identifies him as the Boston Bomber. Instead, the results are about his being wrongly identified, and about his suicide (which apparently occurred before the false accusations were made).
None of this excuses the exuberantly irresponsible way a subreddit (i.e., a topic-based discussion) at Reddit accused him. And it’s easy to imagine a case in which such a horrible mistake could have driven someone to suicide. But that’s not my point. My point here is twofold.
First, the idea that false ideas once published on the Net continue forever uncorrected is not always the case. If we’re taking as our example ideas that are clearly wrong and are important, the corrections will usually be more obvious and available to us than in the prior media ecology. (That doesn’t relieve us of the responsibility of getting facts right in the first place.)
Second, this is why I keep insisting that knowledge now lives in networks the way it used to live in books or newspapers. You get the truth not in any single chunk but in the web of chunks that are arguing, correcting, and arguing about the corrections. This, however, means that knowledge is an argument, or a conversation, or is more like the webs of contention that characterize the field of living scholarship. There was an advantage to the old ecosystem in which there was a known path to authoritative opinions, but there were problems with that old system as well.
That’s why it irks me to take any one failure, such as the attempt to crowdsource the identification of the Boston murderers, as a trump card in the argument the Net makes us stupider. To do so is to confuse the Net with an aggregation of public utterances. That misses the transformative character of the networking of knowledge. The Net’s essential character is that it’s a network, that it’s connected. We therefore have to look at the network that arose around those tragically wrong accusations.
So, search for Sunil Tripathi at Reddit.com and you will find a list of discussions at Reddit about how wrong the accusation was, how ill-suited Reddit is for such investigations, and how the ethos and culture of Reddit led to the confident condemning of an innocent person. That network of discussion — which obviously extends far beyond Reddit’s borders — is the real phenomenon…”real” in the sense that the accusations themselves arose from a network and were very quickly absorbed into a web of correction, introspection, and contextualization.
The network is the primary unit of knowledge now. For better and for worse.
Tagged with: 2b2k
Date: October 23rd, 2013 dw
I’m not sure how I came into possession of a copy of The Indexer, a publication by the Society of Indexers, but I thoroughly enjoyed it despite not being a professional indexer. Or, more exactly, because I’m not a professional indexer. It brings me joy to watch experts operate at levels far above me.
The issue of The Indexer I happen to have — Vol. 30, No. 1, March 2012 — focuses on digital trends, with several articles on the Semantic Web and XML-based indexes as well as several on broad trends in digital reading and digital books, and on graphical visualizations of digital indexes. All good.
I also enjoyed a recurring feature: Indexes reviewed. This aggregates snippets of book reviews that mention the quality of the indexes. Among the positive reviews, the Sunday Telegraph thinks that for the book My Dear Hugh, “the indexer had a better understanding of the book than the editor himself.” That’s certainly going on someone’s resumé!
I’m not sure why I enjoy works of expertise in fields I know little about. It’s true that I know a little about indexing because I’ve written about the organization of digital information, and even a little about indexing. And I have a lot of interest in the questions about the future of digital books that happen to be discussed in this particular issue of The Indexer. That enables me to make more sense of the journal than might otherwise be the case. But even so, what I enjoy most are the discussions of topics that exhibit the professionals’ deep involvement in their craft.
But I think what I enjoy most of all is the discovery that something as seemingly simple as generating an index turns out to be indefinitely deep. There are endless technical issues, but also fathomless questions of principle. There’s even indexer humor. For example, one of the index reviews notes that Craig Brown’s The Lost Diaries “gives references with deadpan precision (‘Greer, Germaine: condemns Queen, 13-14…condemns pineapple, 70…condemns fat, thin and medium sized women, 93…condemns kangaroos,122′).”
As I’ve said before, everything is interesting if observed at the right level of detail.
From TheHeart.org, an article by Lisa Nainggolan:
Gothenburg, Sweden – Further support for the concept of the obesity paradox has come from a large study of patients with acute coronary syndrome (ACS) in the Swedish Coronary Angiography and Angioplasty Registry (SCAAR) . Those who were deemed overweight or obese by body-mass index (BMI) had a lower risk of death after PCI [percutaneous coronary intervention, aka angioplasty] than normal-weight or underweight participants up to three years after hospitalization, report Dr Oskar Angerås (University of Gothenburg, Sweden) and colleagues in their paper, published online September 5, 2012 in the European Heart Journal.
Can confirm. My grandmother in the 1930s was instructed to make sure she fed her husband lots and lots of butter to lubricate his heart after a heart attack. This proved to work extraordinarily well, at least until his next heart attack.
I refer once again to the classic 1999 The Onion headline: Eggs Good for You This Week.
Categories: too big to know
Tagged with: 2b2k
Date: September 10th, 2012 dw
Douglas L. Wilson has a lovely article that tries to make sense of what we know about Lincoln’s love of Shakespeare. He argues that one fact about the performance of Shakespeare at the time illuminates comments Lincoln made to actors and friends. (No spoilers here, my friends!)
BTW, we learn early on in the article that Lincoln thought Hamlet’s “To be or not to be” soliloquy was outdone by the one in which Claudius wonders whether forgiveness is possible for his murder of his brother.
Oh, my offence is rank. It smells to heaven.
It hath the primal eldest curse upon ’t,
A brother’s murder. Pray can I not.
Though inclination be as sharp as will,
My stronger guilt defeats my strong intent,
And, like a man to double business bound,
I stand in pause where I shall first begin,
And both neglect. What if this cursèd hand
Were thicker than itself with brother’s blood?
Is there not rain enough in the sweet heavens
To wash it white as snow? Whereto serves mercy
But to confront the visage of offence?
And what’s in prayer but this twofold force,
To be forestallèd ere we come to fall
Or pardoned being down? Then I’ll look up.
My fault is past. But oh, what form of prayer
Can serve my turn, “Forgive me my foul murder”?
That cannot be, since I am still possessed
Of those effects for which I did the murder:
My crown, mine own ambition, and my queen.
May one be pardoned and retain th’ offense?
In the corrupted currents of this world
Offense’s gilded hand may shove by justice,
And oft ’tis seen the wicked prize itself
Buys out the law. But ’tis not so above.
There is no shuffling. There the action lies
In his true nature, and we ourselves compelled,
Even to the teeth and forehead of our faults,
To give in evidence. What then? What rests?
Try what repentance can. What can it not?
Yet what can it when one can not repent?
O wretched state! O bosom black as death!
O limèd soul that, struggling to be free,
Art more engaged! Help, angels. Make assay.
Bow, stubborn knees, and, heart with strings of steel,
Be soft as sinews of the newborn babe.
All may be well. (kneels)
[SparkNotes’ translation is here.]
I’ll stay away from the cheap psychologizing about Lincoln’s interest in the forgivability of unforgivable crimes during a war waged at least in part against slavery. Instead I’ll offer cheap psychologizing about the theme of the doubleness of self — with the attendant heightened perception of one’s self as always at issue — that seems to go through Lincoln’s favorite passages.
Finally, I might note that articles like this one show the value of experts, something we dare not lose in the networking of knowledge (in case anyone was wondering).
Categories: too big to know
Tagged with: 2b2k
Date: January 15th, 2012 dw
From the ExpertNet site:
The United States General Services Administration (GSA) and the White House Open Government Initiative are soliciting your feedback on a concept for next generation citizen consultation, namely a government-wide software tool and process to elicit expert public participation (working title “ExpertNet”). ExpertNet could:
Enable government officials to circulate notice of opportunities to participate in public consultations to members of the public with expertise on a topic.
Provide those volunteer experts with a mechanism to provide useful, relevant, and manageable feedback back to government officials.
The proposed concept is intended to be complementary to two of the ways the Federal government currently obtains expertise to inform decision-making, namely by convening Federal Advisory Committees and announcing public comment opportunities in the Federal Register.
Take a look at the example in the editable part of the wiki. (And, yes, I did say that parts of the wiki are editable. Thank you for trusting us, my government!)
The only thing I object to in this brilliant idea is that it comes too late for inclusion as an example in my book. Why, those dirty government dogs!
(via Craig Newmark)
Categories: too big to know
Tagged with: 2b2k
• open government
Date: December 16th, 2010 dw
Pew Internet surveyed a bunch o’ experts about where we will be in The Cloud in 2020. The survey was more intended to elicit verbal responses than to come up with reliable numbers, but overall the experts seem to agree that we’ll be computing with a hybrid of desktop and cloud services. That seems a safe bet, especially since given enough bandwidth, all services are local. (Hasn’t distance always been the time it takes to connect?)
Several of the experts push back against the term “cloud,” Gary Bachula because it’s a “bad metaphor for broadening understanding of the concept,” and Susan Crawford because its ubiquity will mean that we “won’t need a word for it.”
Many worry about the power this will put in the hands of the Big Cloud Providers, with Robert Ackland arguing that “we need the cloud to be built using free and open source software.”
Several believe that there will be some prominent act of terrorism or incompetence in The Cloud that will drive people back to their desktops: “Expect a major news event involving a cloud catastrophe security breach or lost data to drive a reversion of these critical resources back to dedicated computing,” says Nathaniel James, or “a huge blow up with terrorism,” predicts R. Ray Wang. Most agree it will be “both/and,” not “either/or.”
Many think that we’re not recognizing the depth of the change. For example, Fred Hapgood is among those who predict the death, transformation, or marginalization of the PC: “By 2020 the computational hardware that we see around us in our daily lives will all be peripherals – tablets, goggles, earphones, keyboards, 3D printers, and the like. The computation driving the peripherals will go on in any of a million different places…” Says Garth Graham: “By 2020, a ‘general-purpose PC’ and a ‘smart phone’ will have converged into a range of general-purpose interactive connection devices, and ‘things’ will have acquired agency by becoming smart.” “The PC is just a phase,” says Rebecca MacKinnon.
Some of the commenters point to the global digital divide, although they don’t agree on which side will be most cloud-based. Gary Arlen says that because of the U.S.’s desktop-based infrastructure, we won’t move into the cloud as rapidly as will less-developed nations. Seliti Arata, however, says, “Business models will provide premium services and applications on the cloud for monetization. However most of the world population will continue to use pirated software on their desktops and alternative/free cloud services.”
As for me, I don’t have predictions because the future is too furious. For example, the speed and availability of broadband access in this country is unpredictable and is by itself determinative, not to mention the Internet-seeking asteroid that is currently streaking toward the Earth. It’s safe to say, however, (= here comes something that in 5 years I’ll feel foolish for having said) that we’re going to move more and more into the cloud. The only thing I’d add to The Experts is that this will have network effects like crazy — effects due to the scale of data and social connections being managed under one roofless roof (with, we hope, lots of openness as well as security).
The title of this post is one of my favorite headlines from The Onion.
So, yesterday we’re told that maybe taking a baby aspirin every day is more harmful than helpful, except for those with certain heart-disease risk factors. (My doctor has me on ’em. I’m going to keep taking them.)
Today, an article in the Boston Globe reports on a study that says saturated fats don’t clog arteries the way we’ve been told for generations. (In the 1930s, when my grandfather had a heart attack, my grandmother was told to make sure he ate lots and lots of butter, to keep anything from sticking to his arteries.)
So, what will they take back tomorrow? Germ theory? Gravity? Heliocentrism? Bring back phlogiston!
Marginal Revolution has a terrific post about giving advice.
I find myself conflicted about the topic. Although I am an occasional consultant and adviser, I don’t think of myself as giving advice. Sometimes I know stuff (or think that I do) or have opinions that I’ll offer if asked (or, in a blog post, unasked): “I had bad luck with this vendor,” or “Don’t over-specify it at launch, so it can be more emergent,” or “I use Kayak. It’s faster and it doesn’t clutter itself up trying to sell me stuff.” I suppose those all count as advice, but they don’t feel like very interesting cases of advice: The first is a datum with an implied conclusion, the second is a bromide, and the third is a personal preference with a justification statement. I think of “giving advice” as something loftier, larger, and more coherent.
And that larger sense of advice seems to me not to be a self-contained activity, but a process and a social interaction. Giving advice generally (?) means helping someone try on futures. “What would it be like if …?” Maybe at the end of that there’s a recommendation, but that recommendation is the least interesting part of the advice, because it only says, “And here’s what I think.” That’s why I’ve never been able to come up with a “Seven Steps to Miscellaneousness or Cluetrainhood” that would have helped my books.
Anyway, I liked Marginal Revolution’s post, and especially liked the recognition that giving advice is a social activity, not merely a transfer of purported knowledge.
[NOTE: These posts tagged "2b2k" ("Too Big to Know") are about the process of writing a book. They therefore talk about the ideas in the book rather incidentally.]
It’s not quite right to say that I’ve finished a first draft of chapter one. More accurately: I’ve stopped typing and have gone back to the beginning. It needs so much work that it doesn’t even constitute a draft.
I read it to our son last night as he trotted on the elliptical trainer in the basement. He thought it was better than I do, but that’s why we have families. He also offered useful comments: Opening with a recitation of factoids about the growth of info has been done (although he professed to find it amusing); I say three or four times too often that the basics of knowledge are changing; it wasn’t entirely clear how the idea of information overload has gone from a psychological syndrome to a cultural challenge. All too true.
Hearing it out loud helps a lot; I always read drafts of chapters to my wife. I realized, for example, that the long (too long) section on the history of facts adopts an off-putting academic tone. That doesn’t worry me, because adjusting the tone is a normal part of re-writing, although it does require the painful removal of “good stuff” that actually isn’t very interesting. I remain quite concerned about the overall structure, and, worse, whether the chapter is clear in its readerly aims.
So, I’m going to put in a new opening. Although the technique is overdone and predictable, I will probably start with some very quick examples intended to show that knowledge is becoming networked. Then I will tighten the section on information overload, which aims at suggesting that knowledge overload results in a change in the nature of knowledge (in a way that info overload did not change the nature of info). Then, into the reduced section on the history of facts, which aims to challenge our notion that knowledge is a building that depends on having a firm foundation. (I also want to shake the reader by the shoulders and say that the idea of knowledge is not as obvious and eternal as we’ve thought.)
Also, I changed the title of Chapter One yesterday, from “Undoing Knowledge” to “The Great Unnailing.”
And, this morning, while on the ol’ elliptical, I read a review of Amartya Sen’s The Idea of Justice, which, because of its discussion of the inevitability of disagreements, seems like it might be relevant. A few paces on, it also seemed to me that a suitable ending for the book might be a brief section that asks: If we didn’t have a concept of knowledge, would we now invent one? Is that concept still useful? I mean something inchoate by this, for clearly it is useful to distinguish between reliable and unreliable ideas. But that’s always a matter of degree. Would we separate out a special class of specially reliable information, and, more to the point, would we think of it as a realm of truth, a mirror of nature, or our highest calling? I think not. But I don’t know if this is an idea with which to open the book, close the book, or ignore.
Categories: too big to know
Tagged with: 2b2k
• information overload
Date: December 27th, 2009 dw