info overload Archives - Joho the Blog

December 31, 2011

[2b2k] What information overload looks like



August 6, 2010

The history of noise

I interview Kate Crawford for RadioBerkman about the history of noise and its changed nature in the age of social media. (Noise 2.0?) (By the way, Kate co-wrote the music that intros and outros the interview.)


January 31, 2010

[2b2k] Clay Shirky, info overload, and when filters increase the size of what’s filtered

Clay Shirky’s masterful talk at the Web 2.0 Expo in NYC last September — “It’s not information overload. It’s filter failure” — makes crucial points and makes them beautifully. [Clay explains in greater detail in this two-part CJR interview: 1 2]

So I’ve been writing about information overload in the context of our traditional strategy for knowing. Clay traces information overload to the 15th century, but others have taken it back earlier than that, and there’s even a quotation from Seneca (4 BCE) that can be pressed into service: “What is the point of having countless books and libraries whose titles the owner could scarcely read through in his whole lifetime? That mass of books burdens the student without instructing…” I’m sure Clay would agree that if we take “information overload” to mean the sense that there’s too much for any one individual to know, we can push the date back even further.

The little research I’ve done on the origins of the phrase “information overload” supports Clay’s closing point: Info overload isn’t a problem so much as the water we fishes swim in. When Alvin Toffler popularized the term in 1970’s Future Shock, he talked about it as a psychological syndrome that could lead to madness (on a par with sensory overload, which is where the term came from). By the time we hit the late 1980s and early 1990s, people were writing about info overload not as a psychological syndrome but as a cultural fact we have to deal with. The question became not how we can avoid over-stimulating our informational organs but how we can manage to find the right information in the torrent. So, I think Clay is absolutely spot on.

I do want to push on one of the edges of Clay’s idea, though. Knowledge traditionally has responded to the fact that what-is-to-be-known outstrips our puny brains with the strategy of reducing the size of what has to be known. We divide the world into manageable topics, or we skim the surface. We build canons of what needs to be known. We keep the circle of knowledge quite small, at least relative to all the pretenders to knowledge. All of this of course reflects the limitations of the paper medium we traditionally used for the preservation and communication of knowledge.

The hypothesis of “Too Big to Know” is that in the face of the new technology and the exponentially exponential amount of information it makes available to us, knowledge is adopting a new strategy. Rather than merely filtering — “merely” because we will of course continue to filter — we are also including as much as possible. The new sort of filtering that we do is not always and not merely reductive.

A traditional filter in its strongest sense removes materials: It filters out the penny dreadful novels so that they don’t make it onto the shelves of your local library, or it filters out the crazy letters written in crayon so they don’t make it into your local newspaper. Filtering now does not remove materials. Everything is still a few clicks away. The new filtering reduces the number of clicks for some pages, while leaving everything else the same number of clicks away. Granted, that is an overly optimistic way of putting it: Being the millionth result listed by a Google search makes it many millions of times harder to find that page than the ones that make it onto Google’s front page. Nevertheless, it’s still much, much easier to access that millionth-listed page than it is to access a book that didn’t make it through the publishing system’s editorial filters.

But there’s another crucial sense in which the new filtering technology is not purely reductive. Filters now are often simultaneously additive. For example, blogs act as filters, recommending other pages. But blogs don’t merely sift through the Web and present you with what they find, the way someone curating a collection of books puts the books on a shelf. Blogs contextualize the places they point to, sometimes at great length. That contextualization is a type of filter that adds a great deal of rich information. Further, in many instances, we can see why the filter was applied the way it was. For blogs and other human-written pieces, this is often explained in the contextualization. At Wikipedia, it takes place on the “Talk” pages, where people explain why they have removed some pieces of information and added others. And the point of the Top 100 lists and Top Ten lists that are so popular these days is to generate reams and reams of online controversy.
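
To put the contrast in rough programmer’s terms — a toy sketch of my own, with made-up page data and function names, not anything from Clay’s talk or the book — a traditional filter drops items entirely, while the new sort of filter keeps everything, merely reorders it, and attaches commentary that is itself more information:

```python
# Toy illustration (hypothetical data and names): reductive vs. additive filtering.

pages = [
    {"title": "Penny dreadful novel", "relevance": 0.2},
    {"title": "Clay Shirky on filter failure", "relevance": 0.9},
    {"title": "History of noise interview", "relevance": 0.7},
]

def traditional_filter(items, threshold=0.5):
    """Reductive: items below the threshold simply disappear."""
    return [p for p in items if p["relevance"] >= threshold]

def new_filter(items):
    """Additive: nothing is removed; everything is re-ranked and annotated."""
    ranked = sorted(items, key=lambda p: p["relevance"], reverse=True)
    return [
        {**p, "comment": f"Ranked #{i + 1}: why we point to it, at whatever length"}
        for i, p in enumerate(ranked)
    ]

print(len(traditional_filter(pages)))  # 2 -- one page is gone entirely
print(len(new_filter(pages)))          # 3 -- everything remains, just fewer clicks away for some
```

The counts at the end make the point: the reductive filter shrinks the pile, while the ranking-and-annotating one leaves the pile intact and actually makes it a bit bigger.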

Thus, many of our new filters reflect the basic change in our knowledge strategy. We are moving from managing the perpetual overload Clay talks about by reducing the amount we have to deal with, to reducing it in ways that simultaneously add to the overload. Merely filtering is not enough, and filtering is no longer a merely reductive activity. The filters themselves are information that is then discussed, shared, and argued about. When we swim through information overload, we’re not swimming in little buckets that result from filters; we are swimming in a sea made bigger by the loquacious filters that are guiding us.

[Note: Amanda Lynn at fatcow has provided a Belorussian translation. Thanks!]
