I got an email yesterday from someone who was irritated that she couldn’t get a quick read on who I am, what this blog is about, how non-credible I am, etc. She did eventually find the “Disclosure” button that does a bunch of that work, but she would have preferred that I follow the Web convention of displaying an “About me” link. I replied that I’m a little shy, and that I don’t want to use credentials to influence credibility (although I of course do that all the time — I don’t scratch “Harvard” off my business card, for example).
Well, she’s right. I’m putting an About Me link, which will lead to the Disclosure page. It’s a convenience for the reader and it’s a convention for Web pages. And profile pages do tell us something about the person, even if it’s not always what the person thinks.
I’m also going to try to be less shy about posting links to things I’ve written elsewhere on the Web. For example, here’s a list of the columns I’ve written for KMWorld, and when the new one comes out, I’m going to blog it. That’s not too pushy, is it? After all, having a blog is already an assertion that you think someone somewhere might want to read what you wrote.
I may also blog links to interviews with me as they show up. I rarely do that because it seems like bragging. But, if I were a reader of this blog — and I probably wouldn’t be — I might also want quick pointers to places where the writer of this blog is not fully in control of what he says. So, I’m going to try to force myself to blog those links.
Now, if you don’t mind my taking a step out of my awkward little me-centric universe: May you all have happy, healthy, and productive new years!
Tagged with: blogs
Date: December 31st, 2009 dw
Let the record show that during the early 1990s, I was pushing for calling the coming decade The Aughties. According to an amusing piece by Rebecca Mead in The New Yorker, that’s one of the leading contenders, although the article explains that it’s actually a corruption of the word. I didn’t know! Forgive me!
So, I still don’t know what we should call these past ten years. The Aughties? The Oh-Ohs? The Balloon Years? The Netcade? Millennium 3.0?
And, yes, you do detect a bit of desperation in my tone. That’s because unless we find a name that actually works, I’m going to continue in my head to think of these as The Bush Years oh lord oh no. You don’t want a decade whose name requires you to take the Lord’s name in vain immediately after saying it.
Anyway, here’s hoping our Twenty-Teens are better than this decade has been!
Tagged with: aughties
Date: December 30th, 2009 dw
We just saw Anna Deavere Smith’s one-person, one-act show, Let Me Down Easy. I liked it, but probably less than anyone else in the theater, given the immediate standing ovation she was given.
I feel bad saying anything negative since the show is incredibly well-intentioned and ADS is hugely talented. In it, she presents monologues in the voices of about 20 different people, based on interviews with them. These are named, real people who span ages, genders, races, and countries. Impressive. The topic is health, health care, having a body, and death. Some of the monologues are moving, some are funny. I loved the one by a New Orleans nurse. But I felt manipulated by others. And the main thing that kept me from leaping to my feet at the end was the fact that only occasionally did ADS get me to forget that I was watching an actor — an immensely talented actor — imitating someone else.
It was admirable and enjoyable. For me, it was 3.5 stars out of 5.
We also went to the Bauhaus exhibit at the Museum of Modern Art. I loved it. It covers everything from architecture to painting to furniture to font design. (Great fonts!) And it does a good job conveying the movement’s political and economic principles. I only wish there were some examples of pre-Bauhaus design, because it’s become so much the standard style of contemporary design that it can be hard to remember how radical it was at the time: from heavily ornamented wooden cabinets to simple cabinets with glass doors, designed for practicality of use and manufacture. Bauhaus so won.
It’s a great, rich exhibit. And when you’re done, you still have five floors to visit!
Tagged with: bauhaus
Date: December 30th, 2009 dw
I spent most of today tracking down some information about the history of information overload, so I thought I’d blog it in case someone else is looking into this. Also, I may well be getting it wrong, in which case please correct me. (The following is sketchy because it’s just notes ‘n’ pointers.)
I started with Alvin Toffler’s explanation of info overload in the 1970 edition of Future Shock. He introduces the concept carefully, expressing it as the next syndrome up from sensory overload.
So, I tried to find the origins of the phrase “sensory overload.” The earliest reference I could find (after getting some help from the Twitterverse – thanks, Ed Summers! – which pointed me to a citation in the OED) was in coverage of a June, 1958 talk at a conference held at Harvard Medical School. The article in Science (vol 129, p. 222) lists some of the papers, including:
2) “Are there common factors in sensory deprivation, sensory distortion and sensory overload?” by Donald B. Lindsley.
I have not gone through Lindsley’s work to find his first use of the term, and a quick Googling didn’t give me an easy answer to this question.
The concept of sensory overload, as opposed to the term, goes back a ways. Lots of people point to Georg Simmel’s The Metropolis and Mental Life, which he wrote in 1903, although it didn’t have its major effect until a translation was published in English in 1950. That article looks at (“speculates about” actually seems like a more apt phrase) how the sensory over-stimulation common in cities will affect the mental state of the inhabitants. Simmel claims that it makes urban dwellers more reserved, more blasé, and more intellect-centered. The over-stimulation Simmel refers to, by the way, is not actually an increase in sensation but an increase in the changes in sensations: a constant roar does not overstimulate us as much as constant changes in noise. (Note that Charles Babbage in his dotage was driven close to insane by the sound of street musicians outside his London apartment.)
The term “sensory overload” seems to have started entering common parlance in the mid to late 1960s. An article in The Nation in 1966 introduces the phrase as if it were unfamiliar to readers: “Recent experimentation, however, has confirmed the significance of the problem of sensory overload; that is, of an inability to absorb more than a certain amount of experience in a given time.” [Robert Theobald, "Should Men Compete with Machines", The Nation, Vol 202, No. 19, 4/19/1966] In 1968, in testimony to a Senate panel on drug experience, a witness used the term and again had to explain what it means [semi-link]. So, we can put the phrase’s rise into ordinary usage right at the beginning of the popular career of psychedelic drugs.
Toffler explains information overload as being just like sensory overload, except it results from too much information. Here he clearly seems to be thinking about information in its ordinary sense: facts, figures, ideas, etc. Yet he explains it by using terms from information science, which thinks about information not as facts and ideas but as strings of bits: info overload occurs when the info exceeds our “channel capacity,” Toffler says.
At this point, info overload was thought of as a type of psychological syndrome affecting our ability to make rational choices. Toffler even warns that our sanity hinges on avoiding it.
In 1974, papers emerged applying this to marketing. What if consumers were given too much information about products? Research showed they would be unable to decide among them, or might make irrational decisions. From today’s perspective, the amount of information that constituted overload seems ludicrously low. In one experiment, consumers were given 16 fields of information for products. (See Jacoby, Jacob. “Perspectives on Information Overload.” The Journal of Consumer Research, March 1984, pp. 432-435, at p. 432.) And one suspects that marketers were happy to find a rationalization for keeping consumers less well informed.
But, what’s most interesting to me is how information overload has gone from a psychological syndrome to a mere description of our environment. Few of us worry that we’re going to become gibbering idiots because we’ve been overstimulated with information. When we worry about info overload these days, it’s because we’re afraid we won’t be able to get enough of it.
[NOTE: These posts tagged "2b2k" (Too Big to Know) are about the process of writing a book. They therefore talk about the ideas in the book rather incidentally.]
It’s not quite right to say that I’ve finished a first draft of chapter one. More accurately: I’ve stopped typing and have gone back to the beginning. It needs so much work that it doesn’t even constitute a draft.
I read it to our son last night as he trotted on the elliptical trainer in the basement. He thinks it’s better than I do, but that’s why we have families. He also offered useful comments: Opening with a recitation of factoids about the growth of info has been done (although he professed to find it amusing); I say three or four times too often that the basics of knowledge are changing; it wasn’t entirely clear how the idea of information overload has gone from a psychological syndrome to a cultural challenge. All too true.
Hearing it out loud helps a lot; I always read drafts of chapters to my wife. I realized, for example, that the long (too long) section on the history of facts adopts an off-putting academic tone. That doesn’t worry me, because adjusting the tone is a normal part of re-writing, although it does require the painful removal of “good stuff” that actually isn’t very interesting. I remain quite concerned about the overall structure, and, worse, whether the chapter is clear in its readerly aims.
So, I’m going to put in a new opening. Although the technique is overdone and predictable, I will probably start with some very quick examples intended to show that knowledge is becoming networked. Then I will tighten the section on information overload, which aims at suggesting that knowledge overload results in a change in the nature of knowledge (in a way that info overload did not change the nature of info). Then, into the reduced section on the history of facts, which aims to challenge our notion that knowledge is a building that depends on having a firm foundation. (I also want to shake the reader by the shoulders and say that the idea of knowledge is not as obvious and eternal as we’ve thought.)
Also, I changed the title of Chapter One yesterday, from “Undoing Knowledge” to “The Great Unnailing.”
And, this morning, while on the ol’ elliptical, I read a review of Amartya Sen’s The Idea of Justice, which, because of its discussion of the inevitability of disagreements, seems like it might be relevant. A few paces on, it also seemed to me that a suitable ending for the book might be a brief section that asks: If we didn’t have a concept of knowledge, would we now invent one? Is that concept still useful? I mean something inchoate by this, for clearly it is useful to distinguish between reliable and unreliable ideas. But that’s always a matter of degree. Would we separate out a special class of specially reliable information, and, more to the point, would we think of it as a realm of truth, a mirror of nature, or our highest calling? I think not. But I don’t know if this is an idea with which to open the book, close the book, or ignore.
Categories: too big to know
Tagged with: 2b2k
• information overload
Date: December 27th, 2009 dw
Jos Schuurmans usefully coins “Amplification is the new circulation.” And then he usefully worries about how to handle the fact that with each amplification, the link to the source becomes more tenuous.
The problem is that the amplification metaphor only captures part of the phenomenon. Yes, a post from a low-traffic site that gets re-broadcast by a big honking site has had its signal amplified. But the amplification happens by being passed through more hands, with each transfer potentially introducing noise, as in the archetypical game of “telephone” or “gossip.” On the other hand, because this is not mere signal-passing, each transfer can also introduce more meaning; the signal/noise framing doesn’t actually work very well here.
Retweeting is a good example and a possibly better metaphor: Noise gets introduced as people drop words and paraphrase the original, and as the context loses meaning because the original tweeter is now a dozen links away. But, as people pare down the original tweet, the signal may get stronger, and as they add their own take and introduce it into their own context, the original tweet can gain meaning.
But, Jos is particularly worried about the loss of source. As the original idea gets handed around, the link to its source may well break or be dropped. “TMZ says Brittany Murphy dead http://bit.ly/6biEQg” becomes “TMZ says Brittany Murphy is dead” becomes “Brittany Murphy dead!!!!!!!!!” and then maybe even “Brittany dead!!!,” and “Britney Spears is dead!!!” Sources almost inevitably will be dropped as messages are passed because we are passing the message for what it says, not because of the metadata about its authenticity.
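The degradation Jos worries about can be seen in a toy model. The following sketch is purely illustrative (the hop functions and the chain itself are invented for this example, not a model of Twitter’s actual mechanics): each retweeter transforms the message, and once any one hop drops the source link, no downstream hop can restore it.

```python
import re

# Matches the kind of shortened URL that carries the source.
URL = re.compile(r"https?://\S+")

def strip_source(msg):
    """A retweeter paraphrases and drops the link."""
    return URL.sub("", msg).strip()

def shorten(msg):
    """A retweeter trims trailing words to fit, keeping at least three."""
    words = msg.split()
    return " ".join(words[: max(3, len(words) - 2)])

def chain(original, hops):
    """Pass the message through each hop in order, recording every version."""
    versions = [original]
    for hop in hops:
        versions.append(hop(versions[-1]))
    return versions

versions = chain(
    "TMZ says Brittany Murphy dead http://bit.ly/6biEQg",
    [strip_source, shorten, shorten],
)
# Only the first version still carries the source; once dropped, it's gone:
has_source = ["bit.ly" in v for v in versions]
```

Running this gives `has_source == [True, False, False, False]`: the source survives only until the first hop that omits it, which is exactly the asymmetry that makes reinsertion of sources (Part Two below) a deliberate act rather than a default.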
So, what do we do? I have a three part plan.
Part one: Continue to innovate. For example, there’s probably already some service that is following the tracks of retweets, so that if you want to see where a RT began, you can. Of course, any such service will be imperfect. But all of the Internet’s strengths come from its imperfection.
Part Two: Try to be responsible. When it matters, include the source. This will also be a highly imperfect solution.
Part Three: Cheer up. Yes, it sucks that amplification results in source loss. But, it’s way better than it was before the Internet when all sorts of bullcrap was passed around without any practical way of checking it out. The Net amplifies bullcrap but also makes it incredibly easy to check it out, whether it’s a computer virus warning passed along by your sweet elderly aunt or a rumor about the spread of a real virus. Also, see Part Two: Try to be responsible. Check out rumors before committing to them. When amplifying, reintroduce lost sources.
As Jos says, amplification is the new circulation. And the new circulation tends towards source loss. It also increases both noise and meaning. And it occurs in a system with astounding tools — e.g., your favorite search engine — for the reinsertion of source.
Is it better or worse? Yes, definitely.
Liu Xiaobo was sentenced to eleven years in prison today for speaking out against the Chinese government.
The Guardian article begins this way:
One of China’s most prominent human rights activists was condemned today to 11 years in prison, prompting a furious backlash from domestic bloggers and international civil society groups.
Picture me on this quiet Christmas morning finishing a cup of coffee, listening to a set of tracks I just downloaded from Amazon, my family doing their early slow bustle, criticizing a country a full diameter away from me, and you’ve got the picture of a snug, smug American blogger. Fury? Not sure where to locate it in that picture.
It’s obviously not the same for the Chinese bloggers supporting Liu Xiaobo. This post costs me nothing, but their posts put them at risk. I cannot even imagine what it’s like to press the Publish button having to worry about anything more than losing some reputation points. “What will my pals think?” is a lot different than “Will this start the gears of imprisonment?” That unimaginable gap is our freedom of speech.
The flip side of my ability to blog free of risk is powerlessness. So, I condemn the Chinese government. Let’s say many bloggers do. And then what happens? The Chinese government quakes in its boots because the blogosphere has given it a good scolding?
On the other hand, powerless compared to what? Fifteen years ago, my condemnation would have gotten as far as the person sitting across from me. Or maybe I would have written an outraged letter to the Chinese government. (Actually, I’m sure I wouldn’t have since I never have.) Now at least there’s a chance â€” but just a chance â€” that the Chinese bloggers will know that many other bloggers are with them. And this is part of the difference: The mighty are deaf to our words, but our allies and friends may not be.
So, why am I posting about Liu Xiaobo? For a jumble of reasons, as is always the case for us humans. To make myself feel like I’m doing something even if I’m not. To align myself with someone I admire, in part so I’ll be perceived as someone who cares. To contribute a couple more hops to the networked spread of news about Liu Xiaobo. So those at risk can feel the slight weight of one more post comforting them — and to be comforted myself that perhaps our words can connect us for a moment before they evaporate as words almost always do.
Tagged with: blogging
• free speech
• human rights
• Liu Xiaobo
Date: December 25th, 2009 dw
Brett Glass runs a Wireless ISP (WISP) in Laramie, Wyoming that spreads across some wide open spaces. To compete, he argues, he needs the government to regulate the right aspects and to keep its hands off everything else. (He believes Net Neutrality is unnecessary and will hurt the ability of small ISPs to compete.)
I interviewed him when he came to Berkman to give a talk. (My liveblogging of his talk is here.)
There are more such interviews at Broadband Strategy Week.
Avatar is a big, visually beautiful movie whose march to dumbville is relieved by only a couple of bright ideas.
One of the bright ideas is the one you enter the theater knowing [SPOILER ALERT, IF YOU HAVE AVOIDED ALL $150,000,000 SPENT ON MARKETING THIS MOVIE]: A human can mentally inhabit an alien’s body. After that, it’s pretty much all downhill, making it the world’s most expensive computer graphics demo. Cool graphics, though!
It’s actually not a very imaginative movie. The landscapes are standard issue alternate-world stuff, albeit filled in with eye-gobbling detail. Worse, the plot and characters are straight out of a thousand other movies. There’s Mel Gibson doing his Braveheart exhortation (right down to the blue skin). There’s Star Wars’ weirdly anti-technology message. And, yes, as my wife pointed out, most of all there’s FernGully‘s sentimental environmentalism. And these are not coy, arch Tarantino-esque references. They’re James Cameron thinking he’s touching our hearts and our minds. It’s pap. (For the record: So was Titanic.)
The racist tinge is the inverse of the old godawful racism that sees indigenous people as “savages” and “primitives.” Instead, Cameron sees them as wise, mother-earth-worshipping perfection. That’s a lot better, but you watch Avatar’s forest folks and see too many embarrassing resonances with stereotypes of native Americans, with occasional guest stereotypes making cameo appearances. (On the other hand, James Cameron’s most fully realized person in any of his movies was a cyborg, and #2 was a ship, so maybe we shouldn’t expect too much.)
It’s not a bad movie. The graphics were enough to carry me along for 2.5 hours. But it takes every opportunity to be predictable and sorta dumb. You leave wondering how many better movies could have been made if its $500M budget had been divided among 500 young filmmakers.
Tagged with: avatar
Date: December 23rd, 2009 dw
Now at Broadband Strategy Week, my interview with the Broadband Strategy Initiative’s Steven Rosenberg:
Steve Rosenberg, Manager of Infrastructure for the FCC’s Omnibus Broadband Initiative, talks about understanding the gaps in broadband coverage, and what it would take financially to close those gaps. He oversees the creation of the model.
Rough question summary:
1:18 Q: You map this by geography, and what else?
1:58 Q: It sounds like the first recourse when you discover a gap is to see if the current infrastructure can cover the gap?
3:14 Q: Let me put this in the most cynical way possible. We could recast this as you saying that you’re identifying the infrastructure providers who have failed to cover the gaps, and then rewarding them by enabling them to do that which they did not find economically viable or socially important enough to do. It sounds like this works against introducing new forms of infrastructure.
6:44 Q: You say the current infrastructure providers haven’t failed, it just wasn’t profitable. The cynical response is that it wasn’t profitable enough, so they red-lined…They somehow first managed to provide access to communities that could pay the most. So the social aim of providing broadband access was sacrificed…
9:24 Q: Does your data correlate access to density and not to socio-economic properties?
10:33 Q: Are you seeing any other clustering of data around the gaps…?
12:00 Q: Your mapping is separate from the somewhat controversial project that the NTIA is doing, right?
13:50 Q: Where is your data coming from?
15:25 Q: Since some of the controversy revolves around the reliability of mapping data provided by the infrastructure providers, what sort of commercial data sources are you using, and how wary are you about the data coming from the infrastructure providers themselves?
18:36 Q: Another way to mitigate the dangers would be to make the data public before the report comes out.
19:47 Q: Posting relatively dirty data, announced as not fully reliable, might generate such interesting contrasting analyses that, in an open-government sort of way, they might affect your own analysis. Any way to bug you on this?
21:40 Q: But you then run the risk of people going to the now-published data and coming back with different results than you did…
22:40 Q: The open gov’t response is that publishing the data in the rawest form possible enables people to do their own models based upon their assumptions. It would enable the broad community of all people, commercial to non-commercial, to make their own models and raise assumptions that you might have missed…
23:40 Q: Let’s talk about the financial modeling you’re doing…
26:50 Q: It sounds incredibly difficult. How many people do you have working on this?
27:50 Q: Will this model have value after the report is done and outside of the FCC?
Tagged with: broadband
Date: December 23rd, 2009 dw