I am the lucky fellow who got to have dinner with James Bridle last night. I am a big fan of his brilliance and humor. And of James himself, of course.
I ran into him at the NEXT conference I was at in Berlin. His in fact was the only session I managed to get to. (My schedule got very busy all of a sudden.) And his talk was, well, brilliant. And funny. Two points stick out in particular. First, he talked about “code/spaces,” a notion from a book by Martin Dodge and Rob Kitchin. A code/space is an architectural space that shapes itself around the information processing that happens within it. For example, an airport terminal is designed around the computing processes that happen within it; the physical space doesn’t work without the information processes. James is in general fascinated by the Cartesian pineal glands where the physical and the digital meet. (I am too, but I haven’t pursued it with James’ vigor or anything close to his literary-aesthetic sense.)
Second, James compared software development to fan fiction: People generally base their new ideas on twists on existing applications. Then he urged us to take it to the next level by thinking about software in terms of slash fiction: bringing together two different applications so that they can have hot monkey love, or at least form an innovative hybrid.
Then, at dinner, James told me about one of his earliest projects: a matchbox computer that learns to play “noughts and crosses” (i.e., tic-tac-toe). He explains this in a talk dedicated to upping the game when we use the word “awesome.” I promise you: This is an awesome talk. It’s all written out and well illustrated. Trust me. Awesome.
Tagged with: awesome
Date: May 10th, 2012 dw
I spent most of today tracking down some information about the history of information overload, so I thought I’d blog it in case someone else is looking into this. Also, I may well be getting it wrong, in which case please correct me. (The following is sketchy because it’s just notes ‘n’ pointers.)
I started with Alvin Toffler’s explanation of info overload in the 1970 edition of Future Shock. He introduces the concept carefully, expressing it as the next syndrome up from sensory overload.
So, I tried to find the origins of the phrase “sensory overload.” The earliest reference I could find (after getting some help from the Twitterverse – thanks, Ed Summers! – which pointed me to a citation in the OED) was in coverage of a June, 1958 talk at a conference held at Harvard Medical School. The article in Science (vol 129, p. 222) lists some of the papers, including:
2) “Are there common factors in sensory deprivation, sensory distortion and sensory overload?” by Donald B. Lindsley.
I have not gone through Lindsley’s work to find his first use of the term, and a quick Googling didn’t give me an easy answer to this question.
The concept of sensory overload, as opposed to the term, goes back a ways. Lots of people point to Georg Simmel’s The Metropolis and Mental Life, which he wrote in 1903, although it didn’t have its major effect until a translation was published in English in 1950. That article looks at (“speculates about” actually seems like a more apt phrase) how the sensory over-stimulation common in cities will affect the mental state of the inhabitants. Simmel claims that it makes urban dwellers more reserved, more blasé, and more intellect-centered. The over-stimulation Simmel refers to, by the way, is not actually an increase in sensation but an increase in the changes in sensations: a constant roar does not overstimulate us as much as constant changes in noise. (Note that Charles Babbage in his dotage was driven nearly insane by the sound of street musicians outside his London apartment.)
The term “sensory overload” seems to have started entering common parlance in the mid to late 1960s. An article in The Nation in 1966 introduces the phrase as if it were unfamiliar to readers: “Recent experimentation, however, has confirmed the significance of the problem of sensory overload; that is, of an inability to absorb more than a certain amount of experience in a given time.” [Robert Theobald, “Should Men Compete with Machines,” The Nation, Vol. 202, No. 19, 4/19/1966] In 1968, in testimony to a Senate panel on drug experience, a witness used the term and again had to explain what it means [semi-link]. So, we can put the phrase’s rise into ordinary usage right at the beginning of the popular career of psychedelic drugs.
Toffler explains information overload as being just like sensory overload, except it results from too much information. Here he clearly seems to be thinking about information in its ordinary sense: facts, figures, ideas, etc. Yet he explains it by using terms from information science, which thinks about information not as facts and ideas but as strings of bits: info overload occurs when the info exceeds our “channel capacity,” Toffler says.
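Since Toffler leans on it, it may help to see what “channel capacity” actually means in information science: the maximum rate at which a channel can carry information without error. Here is a minimal illustrative sketch of the Shannon-Hartley formula (the telephone-line numbers are my own illustrative assumptions, not anything Toffler computed):

```python
import math

def channel_capacity(bandwidth_hz, signal_to_noise):
    """Shannon-Hartley channel capacity, in bits per second:
    C = B * log2(1 + S/N), where S/N is a linear power ratio."""
    return bandwidth_hz * math.log2(1 + signal_to_noise)

# A hypothetical voice-grade phone line: ~3100 Hz of bandwidth,
# 30 dB signal-to-noise ratio (a power ratio of 1000).
c = channel_capacity(3100, 1000)
print(round(c))  # about 30,898 bits per second
```

The point of the metaphor: once the incoming information rate exceeds the channel’s capacity, errors are unavoidable, which is the picture Toffler transfers from wires to minds.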
At this point, info overload was thought of as a type of psychological syndrome affecting our ability to make rational choices. Toffler even warns that our sanity hinges on avoiding it.
In 1974, papers emerged applying this to marketing. What would happen if consumers were given too much information about products? Research showed they would be unable to decide among them, or might make irrational decisions. From today’s perspective, the amount of information that constituted overload seems ludicrously low. In one experiment, consumers were given 16 fields of information for products. (See Jacoby, Jacob. “Perspectives on Information Overload.” The Journal of Consumer Research, March 1984, pp. 432-435, at p. 432.) And one suspects that marketers were happy to find a rationalization for keeping consumers less well informed.
But, what’s most interesting to me is how information overload has gone from a psychological syndrome to a mere description of our environment. Few of us worry that we’re going to become gibbering idiots because we’ve been overstimulated with information. When we worry about info overload these days, it’s because we’re afraid we won’t be able to get enough of it.
Jos Schuurmans usefully coins “Amplification is the new circulation.” And then he usefully worries about how to handle the fact that with each amplification, the link to the source becomes more tenuous.
The problem is that the amplification metaphor only captures part of the phenomenon. Yes, a post from a low-traffic site that gets re-broadcast by a big honking site has had its signal amplified. But the amplification happens by being passed through more hands, with each transfer potentially introducing noise, as in the archetypical game of “telephone” or “gossip.” On the other hand, because this is not mere signal-passing, each transfer can also introduce more meaning; the signal/noise framing doesn’t actually work very well here.
Retweeting is a good example and a possibly better metaphor: Noise gets introduced as people drop words and paraphrase the original, and as the context loses meaning because the original tweeter is now a dozen links away. But, as people pare down the original tweet, the signal may get stronger, and as they add their own take and introduce it into their own context, the original tweet can gain meaning.
But, Jos is particularly worried about the loss of source. As the original idea gets handed around, the link to its source may well break or be dropped. “TMZ says Brittany Murphy dead http://bit.ly/6biEQg” becomes “TMZ says Brittany Murphy is dead” becomes “Brittany Murphy dead!!!!!!!!!” and then maybe even “Brittany dead!!!,” and “Britney Spears is dead!!!” Sources almost inevitably will be dropped as messages are passed because we are passing the message for what it says, not because of the metadata about its authenticity.
So, what do we do? I have a three part plan.
Part one: Continue to innovate. For example, there’s probably already some service that is following the tracks of retweets, so that if you want to see where a RT began, you can. Of course, any such service will be imperfect. But all of the Internet’s strengths come from its imperfection.
Part Two: Try to be responsible. When it matters, include the source. This will also be a highly imperfect solution.
Part Three: Cheer up. Yes, it sucks that amplification results in source loss. But, it’s way better than it was before the Internet when all sorts of bullcrap was passed around without any practical way of checking it out. The Net amplifies bullcrap but also makes it incredibly easy to check it out, whether it’s a computer virus warning passed along by your sweet elderly aunt or a rumor about the spread of a real virus. Also, see Part Two: Try to be responsible. Check out rumors before committing to them. When amplifying, reintroduce lost sources.
As Jos says, amplification is the new circulation. And the new circulation tends towards source loss. It also increases both noise and meaning. And it occurs in a system with astounding tools (e.g., your favorite search engine) for the reinsertion of source.
Is it better or worse? Yes, definitely.
I’ve been honored with one of Ethan Zuckerman’s incredible liveblog postings. I gave a 45 min talk at the Berkman Center yesterday. I spoke quickly, waved my hands a lot, and spewed. [Rough draft here.] Even so, Ethan was able to commit an amazing act of streaming journalism, with very few places where I would even quibble with his summary and analysis.
He posted it immediately after I spoke, which I point out because if you read it you would never guess it was an unedited draft. It’s too thoughtful and well-written for that. This is Ethan writing on the fly, not merely typing or transcribing. Amazing.
Independent of all that, I am very fortunate to be able to call Ethan a close friend.
[Later that day: Here's the video of the webcast.]
Draft of my talk on the end of information at the Berkman Center. [NOV 11: Here's the video of the webcast done on Nov 9. Ethan Zuckerman's extensive and amazing live blogging of the talk is here.]
I have been working for weeks on a talk I’m giving at a Tuesday lunch at the Berkman Center, where “work on” means erasing more than I’ve written. I’ve done more complete rewrites than I can count, mainly because I can’t figure out what the point of the talk is. I started out knowing what the point was, but as I actually wrote it, I knew less and less. So, here’s a rough outline of the current sorry state of the talk.
I. Information has been the dominant metaphor
This is the easy part. From cradle to grave, we’ve reconceived of ourselves and our world as information. But, except for the technical definition, we don’t know what it is (and most of how we’ve reconceived of ourselves has nothing to do with the technical def, and most of us don’t know the technical def anyway).
II. A discontinuous history
“Info” has two ordinary senses that precede its take-over by Claude Shannon in 1948: It’s something you’re about to learn, and it’s the content of tables. Shannon then introduced his technical definition, which only a tiny percentage of the population understands. Nevertheless, info became the dominant paradigm. So, what enabled it to take over our culture? Two notes: 1. I am explicitly not going to talk about its utility or its politics of control and mastery, both of which are obviously crucial to the answer. 2. I am going to contrast the Info Age with the Link Age (or whatever we’re going to call the new epoch).
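For readers curious about the technical definition the outline keeps gesturing at: Shannon measures information as entropy, the average unpredictability of a message source, not as facts or ideas. A minimal sketch (my own illustration, not part of the talk):

```python
import math
from collections import Counter

def entropy_bits(message):
    """Shannon entropy of a message's symbol distribution,
    in bits per symbol: H = sum over symbols of p * log2(1/p)."""
    counts = Counter(message)
    n = len(message)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(entropy_bits("aaaa"))  # 0.0 -- a fully predictable source carries no information
print(entropy_bits("abab"))  # 1.0 -- one bit per symbol
```

Note how far this is from the ordinary senses: on Shannon’s definition, a page of random gibberish carries more information than a page of lucid prose, because it is less predictable.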
Enabler 1: Information scales
Info scales sufficiently to enable large corporations to manage themselves. But its scaling strategy is to exclude everything that doesn’t fit its rows and columns. E.g., the personnel database contains only a tiny bit about what employees know about one another. In the Age of Links, we include everything. Links create a world of abundance. The irony is that while the Info Age’s strategy was to exclude bad and useless info, in the Age of Links we’re better able to manage the abundance of crap than the abundance of good stuff.
Enabler 2: Info is a resource
It’s a resource in that it’s useful to us. We can retrieve stuff from it, using the criteria of precision and recall: Did our query get only the right stuff and all the right stuff? In the Link Age of Abundance, however, getting all the right stuff is a disaster. (Which is why we invented two new criteria: relevance and interestingness.)
Furthermore, info is a resource from which we fetch nuggets of value. The Web, though, is a place that we enter and navigate. The irony is that in the Age of Info, we thought about entering an info space as becoming Jeff Bridges in Tron. Or we thought that if the info space engulfed us, it would be a cold world of men with clipboards, as in movies such as Desk Set. In the Link Age, the place we enter is fully social, and is becoming completely integrated with real-world space.
Enabler 3: Bits apply to everything
We sometimes talk about atoms vs. bits because anything can be turned into a bit. Bits are thus coextensive with the universe. But, bits can represent anything in the world because they are so fundamentally unlike the world. Every other measurement measures some property of the world (height, weight, shoe size, whatever), but bits measure pure difference. The world that bits model always shows itself in particular ways, in particular properties. Bits are thus profoundly unnatural; they exist only because we take them as bits. They are thus very much unlike atoms.
Further, bits reduce everything to the simplest of differences: yes/no, 1/0. Links, on the other hand, are put in place to find and tease out differences that are complex enough to require language and to be worth pointing out.
Enabler 4: Information explains communication
Although Shannon expressly was not trying to explain human communication, his diagram matches our basic view of communication as the movement of code through a conduit. (Paul Edwards is good on this, as on many other issues.) Plus, Shannon’s popularizer, Warren Weaver, expressly said the theory applies to people speaking, pipers piping, dancers dancing, and just about every other form of communication. Still, we have to ask why we think of communication as the process of moving symbols through conduits when so much else is required, and so much more is implied, by even the simplest of human conversations. Part of the answer is, I think, our Cartesian metaphysics that thinks that we experience representations of the world, and thus can only communicate by shipping messages to others that affect their representations of the world. The world itself has dropped out of this equation: We only have heads and conduits between them.
This basic picture of communication of content moving through a medium to a receiver treats communication as an obstacle to be overcome, for noise keeps banging on the conduit. This is how the world looks if you come out of an experience where communication was difficult, as was the case for the early info scientists, some of whom had worked on how to improve communications on a noisy battlefield. (Paul Edwards again: The Closed World is excellent.) But hyperlinks are neither content nor medium; more exactly, they’re both. Like a path, a hyperlink assumes an existing world, a shared ground. (Links are a very special sort of path, though, because they are generative of their world.)
Enabler 5: Information lets us understand the world
Models let us find what is essential and common among all that which they model. But they deny the abundance of the world and the fact that the world doesn’t behave the way we want. The contingent does show up in the Info Age view of the world. It shows up as noise. In the Link Age, we succeed by making the world noisy: creating a path among ideas that differ. (This is not noise in Info Theory’s sense.) Of course, we rightfully worry that amidst this differential linkage we will only seek that which is familiar and reassuring. The success of the Link Age depends upon it remaining as noisy and full of difference as possible, the opposite of how the Info Age measured success.
So, as I write this out, I can see some sections that don’t really add up. For example, Enabler 3’s discussion is pretty incoherent. But that’s why I’m writing this out now.
I have one day left to get something presentable out of this, since I am out all day on Monday. And I’m jetlagged and pretty exhausted now. Ack.
Tagged with: infohist
Date: November 7th, 2009 dw
James Surowiecki has a piece in the New Yorker that finally got me to understand why Obama is including a tax rebate in his stimulus package. It’s not the mere pandering to the Republicans that I thought it was. It actually sounds pretty smart.
And while you’re there, you might as well read Atul Gawande’s argument for building our health care system on what we have, rather than sweeping it all away and beginning fresh.
Then finish it all off with the dessert wine of Mariana Cook’s 1996 interview with Barack and Michelle Obama, in which the future president expresses love’s swing of mystery and familiarity. Just in case you weren’t gushy enough about the two of them.
Categories: Uncategorized Tagged with: economics
Date: January 24th, 2009 dw
Can anyone point to a good history of the brain that goes through the various ways we’ve thought about it, particularly in the West, from Aristotle thinking it was designed to cool the blood, up through our modern idea that it’s an information processor?
Categories: Uncategorized Tagged with: brain
Date: January 12th, 2009 dw
How far back does the “atoms vs. bits” idea go? Did anyone talk about it before Nicholas Negroponte in “Being Digital”?
Some specifications of what I’m looking for:
- It has to be actual bits, i.e., binary units of information. So, no fair tracing it back to Plato’s Cave.
- I’m not asking about Negroponte’s particular idea in “Being Digital,” which contrasted economies built on atoms with ones built on bits. I’m actually interested in the sense that there are two semi-equivalent realities, one built of atoms and one built of bits.
- I’m not looking for “it from bit” physicists who say the universe is made of information. I’m looking for the idea that bits are different from atoms, but deserve to be on a roughly equal footing…at least to the extent that the phrase “atoms vs. bits” makes sense the way “atoms vs. weekends” does not.
Any pointers, corrections, or exasperated sighs are gratefully accepted.
Categories: Uncategorized Tagged with: bits
Date: January 6th, 2009 dw
As we exit the Information Age, we can begin to see how our idea of information has shaped our view of who we are.
The future from 1978: What a 1978 anthology predicts about the future of the computer tells us a lot about the remarkable turn matters have taken.
A software idea: Text from audio: Anyone care to write software that would make it much easier to edit spoken audio?
Bogus Contest: Name that software!
Last Thursday, I had a discussion with Charlie Nesson and Aaron Shaw at the Berkman Center about the first article in this issue. You can see some clips of the conversation here:
[Note: In this I misspeak and say info is noise; I meant to say that noise is info. I just noticed my error. Oops.]
Tagged with: infohist
Date: October 21st, 2008 dw
A stray and obvious thought?
If you look at the issue of privacy at social networking sites in terms of information, as outside observers such as parents and governments frequently do, you come up with proposals to enable users to control their information.
But sites like Facebook aren’t about information. They’re about self, others, and the connections among them. Likewise Flickr isn’t about info; it’s about sharing photos.
If the issue gets phrased in terms of info, then the field tilts towards assuming privacy as the good and publicness as the threat, with control over info as the bulwark. But, within the participant’s frame, publicness is taken as the good and privacy as fear-based or selfish.
This is a case where an information-based view misses the phenomenon and can lead to bad policy decisions.
Also, our kids will think we’re dorks.
Categories: Uncategorized Tagged with: digital culture • digital rights • social networks
Date: September 24th, 2008 dw