Joho the Blog » infohistory

August 9, 2014

Tim Berners-Lee’s amazingly astute 1992 article on this crazy Web thing he started

Dan Brickley points to this incredibly prescient article by Tim Berners-Lee from 1992. The World Wide Web he gets the bulk of the credit for inventing was thriving at CERN where he worked. Scientists were linking to one another’s articles without making anyone type in a squirrely Internet address. Why, over a thousand articles were hyperlinked.

And on this slim basis, Tim outlines the fundamental challenges we’re now living through. Much of the world has yet to catch up with insights he derived from the slightest of experience.

May the rest of us have even a sliver of his genius and a heaping plateful of his generosity.


November 17, 2013

Noam Chomsky, security, and equivocal information

Noam Chomsky and Barton Gellman were interviewed at the Engaging Big Data conference put on by MIT’s Senseable City Lab on Nov. 15. When Prof. Chomsky was asked what we can do about government surveillance, he reiterated his earlier call for us to understand the NSA surveillance scandal within an historical context that shows that governments always use technology for their own worst purposes. According to my liveblogging (= inaccurate, paraphrased) notes, Prof. Chomsky said:

Governments have been doing this for a century, using the best technology they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe this is universal. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications. [Emphasis added, of course.]

I was glad that Barton Gellman — hardly an NSA apologist — called Prof. Chomsky on his lumping of the NSA with the Stasi, for there is simply no comparison between the freedom we have in the US and the thuggish repression omnipresent in East Germany. But I was still bothered, albeit by a much smaller point. I have no serious quarrel with Prof. Chomsky’s points that government incursions on rights are nothing new, and that governments generally (always?) believe they are acting for the best of purposes. I am a little bit hung-up, however, on his equivocating on “information.”

Prof. Chomsky is of course right in his implied definition of information. (He is Noam Chomsky, after all, and knows a little more about the topic than I do.) Modern information theory often describes information as a measure of surprise. A string of 100 alternating ones and zeroes conveys less information than a string of 100 bits that are less predictable, for if you can predict with certainty what the next bit will be, then you don’t learn anything from that bit; it carries no information. Information theory lets us quantify how much information is conveyed by streams of varying predictability.
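To see the point in miniature, here is a small sketch (mine, not anything from the interview) that estimates how surprising each successive bit is, given the bit that came before it. For the alternating string the next bit is fully determined, so it carries essentially no information; for a random string, each bit carries about one bit of information:

    import math
    import random
    from collections import Counter

    def conditional_entropy(bits):
        """Empirical estimate of H(next bit | previous bit), in bits per symbol."""
        pairs = Counter(zip(bits, bits[1:]))
        prev_counts = Counter(bits[:-1])
        total = len(bits) - 1
        h = 0.0
        for (prev, nxt), n in pairs.items():
            p_pair = n / total                          # how often this two-bit pattern occurs
            p_next_given_prev = n / prev_counts[prev]   # how predictable the next bit is
            h -= p_pair * math.log2(p_next_given_prev)
        return h

    alternating = "01" * 50                                           # 100 bits, perfectly predictable
    unpredictable = "".join(random.choice("01") for _ in range(100))  # 100 bits of coin flips

    print(conditional_entropy(alternating))    # ~0.0 bits per symbol: no surprise, no information
    print(conditional_entropy(unpredictable))  # ~1.0 bits per symbol: maximal surprise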

So, when U.S. security folks say they are spying on us for our own security, are they saying literally nothing? Is that claim without meaning? Only in the technical sense of information. It is, in fact, quite meaningful, even if quite predictable, in the ordinary sense of the term “information.”

First, Prof. Chomsky’s point that governments do bad things while thinking they’re doing good is an important reminder to examine our own assumptions. Even the bad guys think they’re the good guys.

Second, I disagree with Prof. Chomsky’s generalization that governments always justify surveillance in the name of security. For example, governments sometimes record traffic (including the movement of identifiable cars through toll stations) with the justification that the information will be used to ease congestion. Tracking the position of mobile phones has been justified as necessary for providing swift EMT responses. Governments require us to fill out detailed reports on our personal finances every year on the grounds that they need to tax us fairly. Our government hires a fleet of people every ten years to visit us where we live in order to compile a census. These are all forms of surveillance, but in none of these cases is security given as the justification. And if you want to say that these other forms don’t count, I suspect it’s because it’s not surveillance done in the name of security…which is my point.

Third, governments rarely cite security as the justification without specifying what the population is being secured against; as Prof. Chomsky agrees, that’s an inherent part of the fear-mongering required to get us to accept being spied upon. So governments proclaim over and over what threatens our security: Spies in our midst? Civil unrest? Traitorous classes of people? Illegal aliens? Muggers and murderers? Terrorists? Thus, the security claim isn’t made on its own. It’s made with specific threats in mind, which makes the claim less predictable — and thus more informational — than Prof. Chomsky says.

So, I disagree with Prof. Chomsky’s argument that a government that justifies spying on the grounds of security is literally saying something without meaning. Even if it were entirely predictable that governments will always respond “Because security” when asked to justify surveillance — and my second point disputes that — we wouldn’t treat the response as meaningless but as requiring a follow-up question. And even if the government just kept repeating the word “Security” in response to all our questions, that very act would carry meaning as well, like a doctor who won’t tell you what a shot is for beyond saying “It’s to keep you healthy.” The lack of meaning in the Information Theory sense doesn’t carry into the realm in which people and their public officials engage in discourse.

Here’s an analogy. Prof. Chomsky’s argument is saying, “When a government justifies creating medical programs for health, what they’re saying is meaningless. They always say that! The Nazis said the same thing when they were sterilizing ‘inferiors,’ and Medieval physicians engaged in barbarous [barber-ous, actually - heyo!] practices in the name of health.” Such reasoning would rule out a discussion of whether current government-sponsored medical programs actually promote health. But that is just the sort of conversation we need to have now about the NSA.

Prof. Chomsky’s repeated appeals to history in this interview cover up exactly what we need to be discussing. Yes, both the NSA and the Stasi claimed security as their justification for spying. But far from that claim being meaningless, it calls for a careful analysis of the claim: the nature and severity of the risk, the most effective tactics to ameliorate that threat, the consequences of those tactics on broader rights and goods — all considerations that comparisons to the Stasi and Genghis Khan obscure. History counts, but not as a way to write off security considerations as meaningless by invoking a technical definition of “information.”


July 28, 2013

The shockingly short history of the history of technology

In 1960, the academic journal Technology and Culture devoted its entire Autumn edition [1] to essays about a single work, the fifth and final volume of which had come out in 1958: A History of Technology, edited by Charles Singer, E. J. Holmyard, A. R. Hall, and Trevor I. Williams. Essay after essay implies or outright states something I found quite remarkable: A History of Technology is the first history of technology.

You’d think the essays would have some clever twist explaining why all those other things that claimed to be histories were not, perhaps because they didn’t get the concept of “technology” right in some modern way. But, no, the statements are pretty untwisty. The journal’s editor matter-of-factly claims that the history of technology is a “new discipline.”[2] Robert Woodbury takes the work’s publication as the beginning of the discipline as well, although he thinks it pales next to the foundational work of the history of science [3], a field the journal’s essays generally take as the history of technology’s older sibling, if not its parent. Indeed, fourteen years later, in 1974, Robert Multhauf wrote an article for that same journal, called “Some Observations on the State of the History of Technology,”[4] that suggested that the discipline was only then coming into its own. Why, some universities have even recognized that there is such a thing as an historian of science!

The essay by Lewis Mumford, whom one might have mistaken for a prior historian of technology, marks the volumes as a first history of technology, pans them as a history of technology, and acknowledges prior attempts that border on being histories of technology. [5] His main objection to A History of Technology — and he is far from alone in this among the essays — is that the volumes don’t do the job of synthesizing the events recounted, failing to put them into the history of ideas, culture, and economics that explain both how technology took the turns that it did and what those turns meant for human life. At least, Mumford says, these five volumes do a better job than the works of three nineteenth-century British writers who produced something like histories of technology: Andrew Ure, Samuel Smiles, and Charles Babbage. (Yes, that Charles Babbage.) (Multhauf points also to Louis Figuier in France, and Franz Reuleaux in Germany.[6])

Mumford comes across as a little miffed in the essay he wrote about A History of Technology, but, then, Mumford often comes across as at least a little miffed. In the 1963 introduction to his 1934 work, Technics and Civilization, Mumford seems to claim the crown for himself, saying that his work was “the first to summarize the technical history of the last thousand years of Western Civilization…” [7]. And, indeed, that book does what he claims is missing from A History of Technology, looking at the non-technical factors that made the technology socially feasible, and at the social effects the technology had. It is a remarkable work of synthesis, driven by a moral fervor that borders on the rhetoric of a prophet. (Mumford sometimes crossed that border; see his 1946 anti-nuke essay, “Gentlemen: You are Mad!” [8]) Still, in 1960 Mumford treated A History of Technology as a first history of technology not only in the academic journal Technology and Culture, but also in The New Yorker, claiming that until recently the history of technology had been “ignored,” and that “…no matter what the oversights or lapses in this new ‘History of Technology,’ one must be grateful that it has come into existence at all.”[9]

So, there does seem to be a rough consensus that the first history of technology appeared in 1958. That the newness of this field is shocking, at least to me, is a sign of how dominant technology as a concept — as a frame — has become in the past couple of decades.


[1] Technology and Culture. Autumn 1960. Vol. 1, Issue 4.

[2] Melvin Kranzberg. “Charles Singer and ‘A History of Technology’.” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 299-302, at p. 300.

[3] Robert S. Woodbury. “The Scholarly Future of the History of Technology.” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 345-348, at p. 345.

[4] Robert P. Multhauf. “Some Observations on the State of the History of Technology.” Technology and Culture. January 1974. Vol. 15, No. 1. pp. 1-12.

[5] Lewis Mumford. “Tools and the Man.” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 320-334.

[6] Multhauf, p. 3.

[7] Lewis Mumford. Technics and Civilization. (Harcourt Brace, 1934; new edition 1963), p. xi.

[8] Lewis Mumford. “Gentlemen: You Are Mad!” Saturday Review of Literature. March 2, 1946. pp. 5-6.

[9] Lewis Mumford. “From Erewhon to Nowhere.” The New Yorker. Oct. 8, 1960. pp. 180-197.


April 7, 2013

The medium is the message is the transmitter is the receiver

Al Jazeera asked me to contribute a one-minute video for an episode of Listening Post about how McLuhan looks in the Age of the Internet. They ultimately rejected it. I can see why; it’s pretty geeky. Also, it’s not very interesting.

So, what the heck, here it is:


August 10, 2012

HyperCard@25

On August 11, 1987, Apple genius Bill Atkinson — and this was before every kid in a clean shirt was a candidate for Apple Genius — held a press conference at MacWorld to unveil HyperCard. I was there.

[Image: example of a HyperCard stack]

I watched Atkinson’s presentation as a PR/marketing guy at a software company, and as an early user of the initial Mac (although my personal machines were CP/M, MS-DOS, and then Windows). As a PR guy, I was awestruck by the skill of the presentation. I remember Atkinson dynamically resizing a bit-mapped graphic and casually mentioning that figuring out on the fly which bits to keep and which to throw out was no small feat. And at the other end of the scale, the range of apps — each beautifully designed, of course — was fantastic.

HyperCard was pitched as a way to hyperlink together a series of graphic cards or screens. The cards had the sort of detail that bitmapped graphics afforded, and that Apple knew how to deliver. Because the cards were bitmapped, they tended to highlight their uniqueness, like the pages of a highly designed magazine, or etchings in a gallery.

Atkinson also pitched HyperCard as a development environment that made some hard things easy. That part of the pitch left me unconvinced. Atkinson emphasized the object-orientation of HyperTalk — I remember him talking about message-passing and inheritance — but it seemed to me as a listener that building a HyperStack (as HyperCard apps were called) was going to be beyond typical end users. (I created a Stack for my company a few months later, with some basic interactivity. Fun.)

Apple killed off HyperCard in 2004, but it remains more than a fond memory to many of us. In fact, some — including Bill Atkinson — have noted how close it was to being a browser before there was a Web. A couple of months ago, Matthew Lasar at Ars Technica wrote:

In an angst-filled 2002 interview, Bill Atkinson confessed to his Big Mistake. If only he had figured out that stacks could be linked through cyberspace, and not just installed on a particular desktop, things would have been different.

“I missed the mark with HyperCard,” Atkinson lamented. “I grew up in a box-centric culture at Apple. If I’d grown up in a network-centric culture, like Sun, HyperCard might have been the first Web browser. My blind spot at Apple prevented me from making HyperCard the first Web browser.”

First of all, we should all give Bill A a big group hug. HyperCard was an awesome act of imagination. Thank you!

But I don’t quite see HyperCard as the precursor to the Web. I think instead it anticipated something later.

HyperCard + network = Awesome, but HyperCard + network != Web browser. The genius of Tim Berners-Lee was not that he built a browser that made the Internet far more usable. TBL’s real genius was that he wrote protocols and standards by which hyperlinked information could be displayed and shared. The HTML standard itself was at best serviceable; it was a specification using an already-existing standard, SGML, that let you specify the elements and structure of particular document types. TBL went against the trend by making an SGML specification that was so simple that it was derided by the SGML cowboys. That was very smart. Wise, even. But we have the Web today because TBL didn’t start by inventing a browser. He instead said that if you want to have some text be hyperlinked, surround it with a particular mark-up (“<a href=‘http://pathname.com/page.html’></a>”). And, if you want to write a browser, make sure that it knows how to interpret that markup (and support the protocols). The Web took off because it wasn’t an application, but a specification for how applications could share hyperlinked information. Anyone who wanted to could write applications for displaying hyperlinked documents. And people did.
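To make that concrete, here is a minimal sketch (mine, in present-day Python, not anything TBL wrote) of what “knowing how to interpret that markup” amounts to. Any program that honors the spec can find the hyperlinks and decide how to display or follow them, which is exactly why the browser could be anybody’s:

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """A toy 'browser' that understands just one piece of the spec:
        the <a href="..."> markup that marks a run of text as a hyperlink."""

        def __init__(self):
            super().__init__()
            self.in_anchor = False
            self.href = None
            self.text = []
            self.links = []  # (link text, target) pairs

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.in_anchor = True
                self.href = dict(attrs).get("href")
                self.text = []

        def handle_data(self, data):
            if self.in_anchor:
                self.text.append(data)

        def handle_endtag(self, tag):
            if tag == "a" and self.in_anchor:
                self.links.append(("".join(self.text), self.href))
                self.in_anchor = False

    page = 'See <a href="http://pathname.com/page.html">the next page</a> for more.'
    parser = LinkExtractor()
    parser.feed(page)
    print(parser.links)  # [('the next page', 'http://pathname.com/page.html')]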

There’s another way HyperCard was not Web minus Network. The Web works off of a word-processing metaphor. HyperCard works off of a page-layout/graphics metaphor. HTML as first written gave authors precious little control over the presentation of the contents: the browser decided where the line endings were, how to wrap text around graphics, and what a first level heading would look like compared to a second level heading. Over time, HTML (especially with CSS) has afforded authors a much higher degree of control over presentation, but the architecture of the Web still reflects the profound split between content and layout. This is how word-processors work, and it’s how SGML worked. HyperCard, on the other hand, comes out of a bitmapped graphics mentality in which the creator gets pinpoint control over the placement of every dot. You can get stunningly beautiful cards this way, but the Web has gained some tremendous advantages because it went the other way.

Let me be clearer. In the old days, WordPerfect was unstructured and Microsoft Word was structured. With WordPerfect, to make a line of text into a subhead you’d insert a marker telling WordPerfect to begin putting the text into boldface, and another marker telling it to stop. You might put in other markers telling it to put the same run of text into a larger font and to underline it. With Word, you’d make a line into a subhead by putting your text caret into it and selecting “subhead” from a pick list. If you wanted to turn all the subheads red, you’d edit the properties of “subhead” and Word would apply the change to everything marked as a subhead. HTML is like Word, not WordPerfect. From this basic decision, HTML has gained tremendous advantages:

  • The structure of pages is important semantic information. A Web with structured pages is smarter than one with bitmaps.

  • Separating content from layout enables the dynamic re-laying out that has become increasingly important as we view pages on more types of devices. If you disagree, tell me how much fun it is to read a full-page pdf on your mobile phone.

  • Structured documents enable many of the benefits of object orientation: Define it once and have all instances update; attach methods to these structures, enable inheritance, etc.

(It’s disappointing to me that the Google Docs document programming environment doesn’t take advantage of this. The last time I looked, you couldn’t attach methods to objects. It’s WordPerfect all over again.)
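To make the Word/WordPerfect contrast concrete, here is a minimal sketch (purely illustrative, mine rather than any real word processor’s format) of the difference between baking presentation into each run of text and naming the structural role of each element. Change the “subhead” style once and every subhead updates:

    # WordPerfect-style: presentation is baked into each run of text.
    # To turn every subhead red, you must find and edit each run by hand.
    unstructured_doc = [
        ("[bold][14pt]", "Getting Started", "[/14pt][/bold]"),
        ("", "Plug it in before turning it on.", ""),
        ("[bold][14pt]", "Troubleshooting", "[/14pt][/bold]"),
    ]

    # Word/HTML-style: each element names its role; presentation lives in
    # a single style definition that all instances share.
    styles = {"subhead": {"bold": True, "size": 14}, "body": {}}
    structured_doc = [
        ("subhead", "Getting Started"),
        ("body", "Plug it in before turning it on."),
        ("subhead", "Troubleshooting"),
    ]

    # Define it once, and all instances update:
    styles["subhead"]["color"] = "red"

    for role, text in structured_doc:
        print(text, styles[role])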

I should mention that the software company I worked at, Interleaf, created electronic documents that separated content from presentation (with SGML as its model), that treated document elements as objects, and that enabled them to be extended with event-aware methods. These documents worked together over local area networks. So, I think there’s a case to be made that Interleaf’s “active documents” were actually closer to presaging the Web than HyperCard was, although Interleaf made the same mistake of writing an app — and an expensive, proprietary one to boot — rather than a specification. It was great technology, but the act of genius that gave us the Web was about the power of specifications and an architecture independent of the technology that implements it.

HyperCard was a groundbreaking, beautiful, and even thrilling app. Ahead of its time for sure. But the time it was ahead of seems to me to be not so much the Age of the Web as the Age of the App. I don’t know why there isn’t now an app development environment that gives us what HyperCard did. Apparently HyperCard is still ahead of its time.

 


[A few minutes later] Someone’s pointed me to Infinite Canvas as a sort of HyperCard for iPhone…

[An hour later:] A friend suggests using the hashtag #HyperCard25th.


July 5, 2012

The origins of information

“You can ring me here tonight from Finland. If you’ve got the film, just say the deal’s come off.”

“And if not?”

“Say the deal’s off.”

“It sounds rather alike,” Avery objected. “If the line’s bad, I mean. ‘Off’ and ‘Come off’.”

“Then say they’re not interested. Say something negative. You know what I mean.”

— John le Carré, The Looking Glass War (Pan Books, London, 1965), p. 34.

As Paul Edwards explains in his wonderful The Closed World, information theory grew in part out of work done during WWII to develop a vocabulary of words sufficiently distinctive that they could be differentiated over the uproar of battle. The demand for discernible difference (= information) has led us to see language as code, to denature qualities, and to count that which gets in the way of clarity as noise.
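The le Carré exchange above is the same problem those wartime vocabularies were built to solve: over a noisy channel, the signals you need to tell apart must not resemble each other. A toy illustration (mine, using nothing but Python’s standard library, with difflib’s similarity measure standing in for a real channel model):

    import difflib

    def similarity(a, b):
        """Rough string similarity in [0, 1]; higher means easier to confuse."""
        return difflib.SequenceMatcher(None, a, b).ratio()

    # Avery's worry: the two agreed signals are nearly alike on a bad line.
    print(similarity("the deal's come off", "the deal's off"))
    # ~0.85: dangerously easy to mishear one as the other

    # The fix: pick a signal that is hard to mistake for the first.
    print(similarity("the deal's come off", "they're not interested"))
    # much lower: the difference survives a noisy line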

I’m not kicking. I’m just reminding us that information had an origin and a purpose.


May 13, 2012

[2b2k] The Net as paradigm

Edward Burman recently sent me a very interesting email in response to my article about the 50th anniversary of Thomas Kuhn’s The Structure of Scientific Revolutions. So I bought his 2003 book Shift!: The Unfolding Internet – Hype, Hope and History (hint: If you buy it from Amazon, check the non-Amazon sellers listed there) which arrived while I was away this week. The book is not very long — 50,000 words or so — but it’s dense with ideas. For example, Edward argues in passing that the Net exploits already-existing trends toward globalization, rather than leading the way to it; he even has a couple of pages on Heidegger’s thinking about the nature of communication. It’s a rich book.

Shift! applies The Structure of Scientific Revolutions to the Internet revolution, wondering what the Internet paradigm will be. The chapters that go through the history of failed attempts to understand the Net — the “pre-paradigms” — are fascinating. Much of Edward’s analysis of business’ inability to grasp the Net mirrors cluetrain‘s themes. (In fact, I had the authorial d-bag reaction of wishing he had referenced Cluetrain…until I realized that Edward probably had the same reaction to my later books which mirror ideas in Shift!) The book is strong in its presentation of Kuhn’s ideas, and has a deep sense of our cultural and philosophical history.

All that would be enough to bring me to recommend the book. But Edward admirably jumps in with a prediction about what the Internet paradigm will be:

This…brings us to the new paradigm, which will condition our private and business lives as the twenty-first century evolves. It is a simple paradigm, and may be expressed in synthetic form in three simple words: ubiquitous invisible connectivity. That is to say, when the technologies, software and devices which enable global connectivity in real time become so ubiquitous that we are completely unaware of their presence…We are simply connected. [p. 170]

It’s unfair to leave it there since the book then elaborates on this idea in very useful ways. For example, he talks about the concept of “e-business” as being a pre-paradigm, and the actual paradigm being “The network itself becomes the company,” which includes an erosion of hierarchy by networks. But because I’ve just written about Kuhn, I found myself particularly interested in the book’s overall argument that Kuhn gives us a way to understand the Internet. Is there an Internet paradigm shift?

There are two ways to take this.

First, is there a paradigm by which we will come to understand the Internet? Edward argues yes, we are rapidly settling into the paradigmatic understanding of the Net. In fact, he guesses that “the present revolution [will] be completed and the new paradigm of being [will] be in force” in “roughly five to eight years” [p. 175]. He sagely points to three main areas where he thinks there will be sufficient development to enable the new paradigm to take root: the rise of the mobile Internet, the development of productivity tools that “facilitate improvements in the supply chain” and marketing, and “the increased deployment of what have been termed social applications, involving education and the political sphere of national and local government.” [pp. 175-176] Not bad for 2003!

But I’d point to two ways, important to his argument, in which things have not turned out as Edward thought. First, the 5-8 years after the book came out were marked by a continuing series of disruptive Internet developments, including general-purpose social networks, Wikipedia, e-books, crowdsourcing, YouTube, open access, open courseware, Khan Academy, etc. etc. I hope it’s obvious that I’m not criticizing Edward for not being prescient enough. The book is pretty much as smart as you can get about these things. My point is that the disruptions just keep coming. The Net is not yet settling down. So we have to ask: Is the Net going to enable continuous disruption and self-transformation? If so, will it be captured by a paradigm? (Or, as M. Night Shyamalan might put it, is disruption the paradigm?)

Second, after listing the three areas of development over the next 5-8 years, the book makes a claim central to the basic formulation of the new paradigm Edward sees emerging: “And, vitally, for thorough implementation [of the paradigm] the three strands must be invisible to the user: ubiquitous and invisible connectivity.” [p. 176] If the invisibility of the paradigm is required for its acceptance, then we are no closer to that event, for the Internet remains perhaps the single most evident aspect of our culture. No other cultural object is mentioned as many times in a single day’s newspaper. The Internet, and the three components the book points to, are more evident to us than ever. (The exception might be innovations in logistics and supply chain management; I’d say Internet marketing remains highly conspicuous.) We’ve never had a technology that so enabled innovation and creativity, but there may well come a time when we stop focusing so much cultural attention on the Internet. We are not close yet.

Even then, we may not end up with a single paradigm of the Internet. It’s really not clear to me that the attendees at ROFLcon have the same Net paradigm as less Internet-besotted youths. Maybe over time we will all settle into a single Internet paradigm, but maybe we won’t. And we might not because the forces that bring about Kuhnian paradigms are not at play when it comes to the Internet. Kuhnian paradigms triumph because disciplines come to us through institutions that accept some practices and ideas as good science; through textbooks that codify those ideas and practices; and through communities of professionals who train and certify the new scientists. The Net lacks all of that. Our understanding of the Net may thus be as diverse as our cultures and sub-cultures, rather than being as uniform and enforced as, say, genetics’ understanding of DNA is.

Second, is the Internet affecting what we might call the general paradigm of our age? Personally, I think the answer is yes, but I wouldn’t use Kuhn to explain this. I think what’s happening — and Edward agrees — is that we are reinterpreting our world through the lens of the Internet. We did this when clocks were invented and the world started to look like a mechanical clockwork. We did this when steam engines made society and then human motivation look like the action of pressures, governors, and ventings. We did this when telegraphs and then telephones made communication look like the encoding of messages passed through a medium. We understand our world through our technologies. I find (for example) Lewis Mumford more helpful here than Kuhn.

Now, it is certainly the case that reinterpreting our world in light of the Net requires us to interpret the Net in the first place. But I’m not convinced we need a Kuhnian paradigm for this. We just need a set of properties we think are central, and I think Edward and I agree that these properties include the abundant and loose connections, the lack of centralized control, the global reach, the ability of everyone (just about) to contribute, the messiness, the scale. That’s why you don’t have to agree about what constitutes a Kuhnian paradigm to find Shift! fascinating, for it helps illuminate the key question: How are the properties of the Internet becoming the properties we see in — or notice as missing from — the world outside the Internet?

Good book.


May 10, 2012

Awesome James Bridle

I am the lucky fellow who got to have dinner with James Bridle last night. I am a big fan of his brilliance and humor. And of James himself, of course.

I ran into him at the NEXT conference I was at in Berlin. His in fact was the only session I managed to get to. (My schedule got very busy all of a sudden.) And his talk was, well, brilliant. And funny. Two points stick out in particular. First, he talked about “code/spaces,” a notion from a book by Martin Dodge and Rob Kitchin. A code/space is an architectural space that shapes itself around the information processing that happens within it. For example, an airport terminal is designed around the computing processes that happen within it; the physical space doesn’t work without the information processes. James is in general fascinated by the Cartesian pineal glands where the physical and the digital meet. (I am too, but I haven’t pursued it with James’ vigor or anything close to his literary-aesthetic sense.)

Second, James compared software development to fan fiction: People generally base their new ideas on twists on existing applications. Then he urged us to take it to the next level by thinking about software in terms of slash fiction: bringing together two different applications so that they can have hot monkey love, or at least form an innovative hybrid.

Then, at dinner, James told me about one of his earliest projects: a matchbox computer that learns to play “noughts and crosses” (i.e., tic-tac-toe). He explains this in a talk dedicated to upping the game when we use the word “awesome.” I promise you: This is an awesome talk. It’s all written out and well illustrated. Trust me. Awesome.


March 23, 2012

VisiCalc: The first killer app’s unveiling

VisiCalc was the first killer app. It became the reason people bought a personal computer.

You can read a paper presented in 1979 by one of its creators, Bob Frankston, in which he explains what it does and why it’s better than learning BASIC. Yes, VisiCalc’s competitor was a programming language. (Here’s Bob reading part I of his paper.)

Bob of course acknowledges Dan Bricklin as VisiCalc’s original designer. (Here’s Dan re-reading the original announcement.) Bob and Dan founded the company Software Arts, which developed VisiCalc. Since then, both have spent their time in commercial and public-spirited efforts trying to make the Internet a better place for us all.

[Image: VisiCalc screenshot, from Wikipedia]


March 8, 2012

[2b2k] No, now that you mention it, we’re not overloaded with information

On a podcast today, Mitch Joel asked me something I don’t think anyone else has: Are we experiencing information overload? Everyone else assumes that we are. Including me. I found myself answering no, we are not. There is of course a reasonable and valid reason to say that we are. But I think there’s also an important way in which we are not. So, here goes:

There are more things to see in the world than any one human could ever see. Some of those sights are awe-inspiring. Some are life-changing. Some would bring you peace. Some would spark new ideas. But you are never going to see them all. You can’t. There are too many sights to see. So, are you suffering from Sight Overload?

There are more meals than you could ever eat. Some are sooo delicious, but you can’t live long enough to taste them all. Are you suffering from Taste Overload?

Or, you’re taking a dip in the ocean. The water extends to the horizon. Are you suffering from Water Overload? Or are you just having a nice swim?

That’s where I think we are with information overload. Of course there’s more than we could ever encounter or make sense of. Of course. But it’s not Information Overload any more than the atmosphere is Air Overload.

It only seems that way if you think you can master information, or if you think there is some defined set of information you can and must have, or if you find yourself repeating the mantra of delivering the right information to the right people at the right time, as if there were any such thing.

Information overload is so 1990s.

 


[The next day: See my follow-on post]

