Joho the Blog » infohistory

August 10, 2012

HyperCard@25

On August 11, 1987, Apple genius Bill Atkinson — and this was before every kid in a clean shirt was a candidate for Apple Genius — held a press conference at MacWorld to unveil HyperCard. I was there.

[Image: example of a HyperCard stack]

I watched Atkinson’s presentation as a PR/marketing guy at a software company, and as an early user of the initial Mac (although my personal machines were CP/M, MS-DOS, and then Windows). As a PR guy, I was awestruck by the skill of the presentation. I remember Atkinson dynamically resizing a bit-mapped graphic and casually mentioning that figuring out on the fly which bits to keep and which to throw out was no small feat. And at the other end of the scale, the range of apps — each beautifully designed, of course — was fantastic.

HyperCard was pitched as a way to hyperlink together a series of graphic cards or screens. The cards had the sort of detail that bitmapped graphics afforded, and that Apple knew how to deliver. Because the cards were bitmapped, they tended to highlight their uniqueness, like the pages of a highly designed magazine, or etchings in a gallery.

Atkinson also pitched HyperCard as a development environment that made some hard things easy. That part of the pitch left me unconvinced. Atkinson emphasized the object-orientation of HyperTalk — I remember him talking about message-passing and inheritance — but it seemed to me as a listener that building a HyperStack (as HyperCard apps were called) was going to be beyond typical end users. (I created a Stack for my company a few months later, with some basic interactivity. Fun.)

Apple killed off HyperCard in 2004, but it remains more than a fond memory to many of us. In fact, some — including Bill Atkinson — have noted how close it was to being a browser before there was a Web. A couple of months ago, Matthew Lasar at Ars Technica wrote:

In an angst-filled 2002 interview, Bill Atkinson confessed to his Big Mistake. If only he had figured out that stacks could be linked through cyberspace, and not just installed on a particular desktop, things would have been different.

“I missed the mark with HyperCard,” Atkinson lamented. “I grew up in a box-centric culture at Apple. If I’d grown up in a network-centric culture, like Sun, HyperCard might have been the first Web browser. My blind spot at Apple prevented me from making HyperCard the first Web browser.”

First of all, we should all give Bill A a big group hug. HyperCard was an awesome act of imagination. Thank you!

But I don’t quite see HyperCard as the precursor to the Web. I think instead it anticipated something later.

HyperCard + network = Awesome, but HyperCard + network != Web browser. The genius of Tim Berners-Lee was not that he built a browser that made the Internet far more usable. TBL’s real genius was that he wrote the protocols and standards by which hyperlinked information could be displayed and shared. The HTML standard itself was at best serviceable: it was a specification built on an already-existing standard, SGML, which let you define the elements and structure of particular document types. TBL went against the trend by making an SGML specification so simple that it was derided by the SGML cowboys. That was very smart. Wise, even. But we have the Web today because TBL didn’t start by inventing a browser. He instead said that if you want some text to be hyperlinked, surround it with a particular markup (“<a href='http://pathname.com/page.html'>…</a>”). And if you want to write a browser, make sure it knows how to interpret that markup (and support the protocols). The Web took off because it wasn’t an application but a specification for how applications could share hyperlinked information. Anyone who wanted to could write an application for displaying hyperlinked documents. And people did.
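
To make the division of labor concrete, here is a minimal sketch of the kind of document the specification describes (the title, text, and URL are placeholders of my own, not anything TBL wrote):

    <html>
      <head>
        <title>A perfectly ordinary page</title>
      </head>
      <body>
        <h1>Hyperlinked text</h1>
        <p>Any browser that follows the spec knows to render
           <a href="http://pathname.com/page.html">this phrase</a>
           as a link and to fetch its target over HTTP when clicked.</p>
      </body>
    </html>

Nothing in that file says which browser should display it, or how. The markup is a promise about structure; the applications compete on everything else.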

There’s another way HyperCard was not Web minus Network. The Web works off of a word-processing metaphor. HyperCard works off of a page-layout/graphics metaphor. HTML as first written gave authors precious little control over the presentation of the contents: the browser decided where the line endings were, how to wrap text around graphics, and what a first level heading would look like compared to a second level heading. Over time, HTML (especially with CSS) has afforded authors a much higher degree of control over presentation, but the architecture of the Web still reflects the profound split between content and layout. This is how word-processors work, and it’s how SGML worked. HyperCard, on the other hand, comes out of a bitmapped graphics mentality in which the creator gets pinpoint control over the placement of every dot. You can get stunningly beautiful cards this way, but the Web has gained some tremendous advantages because it went the other way.

Let me be clearer. In the old days, WordPerfect was unstructured and Microsoft Word was structured. With WordPerfect, to make a line of text into a subhead you’d insert a marker telling WordPerfect to begin putting the text into boldface, and another marker telling it to stop. You might put in other markers telling it to put the same run of text into a larger font and to underline it. With Word, you’d make a line into a subhead by putting your text caret into it and selecting “subhead” from a pick list. If you wanted to turn all the subheads red, you’d edit the properties of “subhead” and Word would apply the change to everything marked as a subhead. HTML is like Word, not WordPerfect. From this basic decision, HTML has gained tremendous advantages:

  • The structure of pages is important semantic information. A Web with structured pages is smarter than one with bitmaps.

  • Separating content from layout enables the dynamic re-laying out that has become increasingly important as we view pages on more types of devices. If you disagree, tell me how much fun it is to read a full-page pdf on your mobile phone.

  • Structured documents enable many of the benefits of object orientation: Define it once and have all instances update; attach methods to these structures, enable inheritance, etc.
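
Here’s a minimal sketch of what that buys you in practice (the class name, colors, and text are my own illustration, not anything from the spec):

    <style>
      /* Edit this one rule and every subhead on the page changes. */
      h2.subhead { color: red; font-size: 1.3em; }
    </style>

    <h2 class="subhead">First subhead</h2>
    <p>Some body text…</p>
    <h2 class="subhead">Another subhead</h2>
    <p>More body text…</p>

The markup records only what each element is; the stylesheet decides what a subhead looks like, and a phone-sized browser is free to lay the very same structure out differently.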

(It’s disappointing to me that Google Docs’ document programming environment doesn’t take advantage of this. The last time I looked, you couldn’t attach methods to objects. It’s WordPerfect all over again.)

I should mention that the software company I worked at, Interleaf, created electronic documents that separated content from presentation (with SGML as its model), that treated document elements as objects, and that enabled them to be extended with event-aware methods. These documents worked together over local area networks. So, I think there’s a case to be made that Interleaf’s “active documents” were actually closer to presaging the Web than HyperCard was, although Interleaf made the same mistake of writing an app — and an expensive, proprietary one to boot — rather than a specification. It was great technology, but the act of genius that gave us the Web was about the power of specifications and an architecture independent of the technology that implements it.

HyperCard was a groundbreaking, beautiful, and even thrilling app. Ahead of its time for sure. But the time it was ahead of seems to me to be not so much the Age of the Web as the Age of the App. I don’t know why there isn’t now an app development environment that gives us what HyperCard did. Apparently HyperCard is still ahead of its time.

 


[A few minutes later] Someone’s pointed me to Infinite Canvas as a sort of HyperCard for iPhone…

[An hour later:] A friend suggests using the hashtag #HyperCard25th.

9 Comments »

July 5, 2012

The origins of information

“You can ring me here tonight from Finland. If you’ve got the film, just say the deal’s come off.”

“And if not?”

“Say the deal’s off.”

“It sounds rather alike,” Avery objected. “If the line’s bad, I mean. ‘Off’ and ‘Come off’.”

“Then say they’re not interested. Say something negative. You know what I mean.”

— John le Carré, The Looking Glass War (Pan Books, London, 1965), p. 34.

As Paul Edwards explains in his wonderful The Closed World, information theory grew in part out of work done during WWII to develop a vocabulary of words sufficiently distinctive that they could be differentiated over the uproar of battle. The demand for discernible difference (= information) has led us to see language as code, to denature qualities, and to count that which gets in the way of clarity as noise.

I’m not kicking. I’m just reminding us that information had an origin and a purpose.

Comments Off on The origins of information

May 13, 2012

[2b2k] The Net as paradigm

Edward Burman recently sent me a very interesting email in response to my article about the 50th anniversary of Thomas Kuhn’s The Structure of Scientific Revolutions. So I bought his 2003 book Shift!: The Unfolding Internet – Hype, Hope and History (hint: If you buy it from Amazon, check the non-Amazon sellers listed there) which arrived while I was away this week. The book is not very long — 50,000 words or so — but it’s dense with ideas. For example, Edward argues in passing that the Net exploits already-existing trends toward globalization, rather than leading the way to it; he even has a couple of pages on Heidegger’s thinking about the nature of communication. It’s a rich book.

Shift! applies The Structure of Scientific Revolutions to the Internet revolution, wondering what the Internet paradigm will be. The chapters that go through the history of failed attempts to understand the Net — the “pre-paradigms” — are fascinating. Much of Edward’s analysis of business’ inability to grasp the Net mirrors cluetrain‘s themes. (In fact, I had the authorial d-bag reaction of wishing he had referenced Cluetrain…until I realized that Edward probably had the same reaction to my later books which mirror ideas in Shift!) The book is strong in its presentation of Kuhn’s ideas, and has a deep sense of our cultural and philosophical history.

All that would be enough to bring me to recommend the book. But Edward admirably jumps in with a prediction about what the Internet paradigm will be:

“This…brings us to the new paradigm, which will condition our private and business lives as the twenty-first century evolves. It is a simple paradigm, and may be expressed in synthetic form in three simple words: ubiquitous invisible connectivity. That is to say, when the technologies, software and devices which enable global connectivity in real time become so ubiquitous that we are completely unaware of their presence…We are simply connected.” [p. 170]

It’s unfair to leave it there since the book then elaborates on this idea in very useful ways. For example, he talks about the concept of “e-business” as being a pre-paradigm, and the actual paradigm being “The network itself becomes the company,” which includes an erosion of hierarchy by networks. But because I’ve just written about Kuhn, I found myself particularly interested in the book’s overall argument that Kuhn gives us a way to understand the Internet. Is there an Internet paradigm shift?

There are two ways to take this.

First, is there a paradigm by which we will come to understand the Internet? Edward argues yes, we are rapidly settling into the paradigmatic understanding of the Net. In fact, he guesses that “the present revolution [will] be completed and the new paradigm of being [will] be in force” in “roughly five to eight years” [p. 175]. He sagely points to three main areas where he thinks there will be sufficient development to enable the new paradigm to take root: the rise of the mobile Internet, the development of productivity tools that “facilitate improvements in the supply chain” and marketing, and “the increased deployment of what have been termed social applications, involving education and the political sphere of national and local government.” [pp. 175-176] Not bad for 2003!

But I’d point to two ways, important to his argument, in which things have not turned out as Edward thought. First, the 5-8 years after the book came out were marked by a continuing series of disruptive Internet developments, including general purpose social networks, Wikipedia, e-books, crowdsourcing, YouTube, open access, open courseware, Khan Academy, etc. etc. I hope it’s obvious that I’m not criticizing Edward for not being prescient enough. The book is pretty much as smart as you can get about these things. My point is that the disruptions just keep coming. The Net is not yet settling down. So we have to ask: Is the Net going to enable continuous disruption and self-transformation? If so, will it be captured by a paradigm? (Or, as M. Night Shyamalan might put it, is disruption the paradigm?)

Second, after listing the three areas of development over the next 5-8 years, the book makes a claim central to the basic formulation of the new paradigm Edward sees emerging: “And, vitally, for thorough implementation [of the paradigm] the three strands must be invisible to the user: ubiquitous and invisible connectivity.” [p. 176] If the invisibility of the paradigm is required for its acceptance, then we are no closer to that event, for the Internet remains perhaps the single most evident aspect of our culture. No other cultural object is mentioned as many times in a single day’s newspaper. The Internet, and the three components the book points to, are more evident to us than ever. (The exception might be innovations in logistics and supply chain management; I’d say Internet marketing remains highly conspicuous.) We’ve never had a technology that so enabled innovation and creativity, but there may well come a time when we stop focusing so much cultural attention on the Internet. We are not close yet.

Even then, we may not end up with a single paradigm of the Internet. It’s really not clear to me that the attendees at ROFLcon have the same Net paradigm as less Internet-besotted youths. Maybe over time we will all settle into a single Internet paradigm, but maybe we won’t. And we might not because the forces that bring about Kuhnian paradigms are not at play when it comes to the Internet. Kuhnian paradigms triumph because disciplines come to us through institutions that accept some practices and ideas as good science; through textbooks that codify those ideas and practices; and through communities of professionals who train and certify the new scientists. The Net lacks all of that. Our understanding of the Net may thus be as diverse as our cultures and sub-cultures, rather than being as uniform and enforced as, say, genetics’ understanding of DNA is.

Second, is the Internet affecting what we might call the general paradigm of our age? Personally, I think the answer is yes, but I wouldn’t use Kuhn to explain this. I think what’s happening — and Edward agrees — is that we are reinterpreting our world through the lens of the Internet. We did this when clocks were invented and the world started to look like a mechanical clockwork. We did this when steam engines made society and then human motivation look like the action of pressures, governors, and ventings. We did this when telegraphs and then telephones made communication look like the encoding of messages passed through a medium. We understand our world through our technologies. I find (for example) Lewis Mumford more helpful here than Kuhn.

Now, it is certainly the case that reinterpreting our world in light of the Net requires us to interpret the Net in the first place. But I’m not convinced we need a Kuhnian paradigm for this. We just need a set of properties we think are central, and I think Edward and I agree that these properties include the abundant and loose connections, the lack of centralized control, the global reach, the ability of everyone (just about) to contribute, the messiness, the scale. That’s why you don’t have to agree about what constitutes a Kuhnian paradigm to find Shift! fascinating, for it helps illuminate the key question: How are the properties of the Internet becoming the properties we see in — or notice as missing from — the world outside the Internet?

Good book.

3 Comments »

May 10, 2012

Awesome James Bridle

I am the lucky fellow who got to have dinner with James Bridle last night. I am a big fan of his brilliance and humor. And of James himself, of course.

I ran into him at the NEXT conference I was at in Berlin. His in fact was the only session I managed to get to. (My schedule got very busy all of a sudden.) And his talk was, well, brilliant. And funny. Two points stick out in particular. First, he talked about “code/spaces,” a notion from a book by Martin Dodge and Rob Kitchin. A code/space is an architectural space that shapes itself around the information processing that happens within it. For example, an airport terminal is designed around the computing processes that happen within it; the physical space doesn’t work without the information processes. James is in general fascinated by the Cartesian pineal glands where the physical and the digital meet. (I am too, but I haven’t pursued it with James’ vigor or anything close to his literary-aesthetic sense.)

Second, James compared software development to fan fiction: People generally base their new ideas on twists on existing applications. Then he urged us to take it to the next level by thinking about software in terms of slash fiction: bringing together two different applications so that they can have hot monkey love, or at least form an innovative hybrid.

Then, at dinner, James told me about one of his earliest projects: a matchbox computer that learns to play “noughts and crosses” (i.e., tic-tac-toe). He explains this in a talk dedicated to upping the game when we use the word “awesome.” I promise you: This is an awesome talk. It’s all written out and well illustrated. Trust me. Awesome.

3 Comments »

March 23, 2012

VisiCalc: The first killer app’s unveiling

VisiCalc was the first killer app. It became the reason people bought a personal computer.

You can read a paper presented in 1979 by one of its creators, Bob Frankston, in which he explains what it does and why it’s better than learning BASIC. Yes, VisiCalc’s competitor was a programming language. (Here’s Bob reading part I of his paper.)

Bob of course acknowledges Dan Bricklin as VisiCalc’s original designer. (Here’s Dan re-reading the original announcement.) Bob and Dan founded the company Software Arts, which developed VisiCalc. Since then, both have spent their time in commercial and public-spirited efforts trying to make the Internet a better place for us all.

[Image: VisiCalc screenshot, from Wikipedia]

3 Comments »

March 8, 2012

[2b2k] No, now that you mention it, we’re not overloaded with information

On a podcast today, Mitch Joel asked me something I don’t think anyone else has: Are we experiencing information overload? Everyone else assumes that we are. Including me. I found myself answering no, we are not. There is of course a reasonable and valid reason to say that we are. But I think there’s also an important way in which we are not. So, here goes:

There are more things to see in the world than any one human could ever see. Some of those sights are awe-inspiring. Some are life-changing. Some would bring you peace. Some would spark new ideas. But you are never going to see them all. You can’t. There are too many sights to see. So, are you suffering from Sight Overload?

There are more meals than you could ever eat. Some are sooo delicious, but you can’t live long enough to taste them all. Are you suffering from Taste Overload?

Or, you’re taking a dip in the ocean. The water extends to the horizon. Are you suffering from Water Overload? Or are you just having a nice swim?

That’s where I think we are with information overload. Of course there’s more than we could ever encounter or make sense of. Of course. But it’s not Information Overload any more than the atmosphere is Air Overload.

It only seems that way if you think you can master information, or if you think there is some defined set of information you can and must have, or if you find yourself repeating the mantra of delivering the right information to the right people at the right time, as if there were any such thing.

Information overload is so 1990s.

 


[The next day: See my follow-on post]

17 Comments »

February 16, 2012

Information is the opposite of information

The ordinary language use of “information” in some ways is the opposite of the technical sense given the term by Claude Shannon — the sense that kicked off the Information Age.

Shannon’s information is a measure of surprise: the more unexpected the next letter a player lays down in Scrabble, the more information it conveys.
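
To put a rough number on the Scrabble intuition (the formula is Shannon’s standard surprisal measure; my letter frequencies are approximate): an outcome with probability p carries -log2(p) bits of information. An E, which makes up roughly 12% of English letters, carries about 3 bits; a Z, at well under 0.1%, carries more than 10. The rarer the letter, the more it tells you.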

The ordinary language use of the term (well, one of them) is to refer to something you are about to learn or have just learned: “I have some information for you, sir! The British have taken Trenton.” The more surprising the news is, the more important the information is. So, so far ordinary language “information” seems a lot like Shannon’s “information.”

But we use the term primarily to refer to news that’s not all that important to us personally. So, you probably wouldn’t say, “I got some information today: I’m dying.” If you did, you’d be taken as purposefully downplaying its significance, as in a French existentialist drama in which all of life is equally depressing. When we’re waiting to hear about something that really matters to us, we’re more likely to say we’re waiting for news.

Indeed, if the information is too surprising, we don’t call it “information” in ordinary parlance. For example, if you asked someone for your doctor’s address, what you learned you might well refer to as “information.” But if you learned that your doctor’s office is in a dirigible constantly circling the earth, you probably wouldn’t refer to that as information. “I got some information today. My doctor’s office is in a dirigible,” sounds odd. More likely: “You’ll never guess what I found out today: My doctor’s office is in a dirigible! I mean, WTF, dude!” The term “information” is out of place if the information is too surprising.

And in that way the ordinary language use of the term is the opposite of its technical meaning.

3 Comments »

September 11, 2011

With a little twist of Heidegger

I’m giving a talk in Berlin in a week. My hosts want me to talk about the evolution of media, but suggested that I might want to weave some Heidegger in, which is not a request you often get. It’s a brief talk, but what I’ve written is organized around four pairs, all based on Shannon’s original drawing of a signal moving through a channel:

  • The medium and bits as idealized abstractions.

  • The medium and messages: how McLuhan reacts against information theory’s idea of a medium, and the sense in which on the Internet we are the medium.

  • Medium and communication: why we think of communication as something that occurs through a medium, rather than as a way in which we share the world.

  • Medium and noise: why the world appears, in its most brutal facticity, in Shannon’s diagram as noise, and how the richness of the Web (which consists of connections intentionally made) is in fact signal that taken together can be noise. (I know I am not using these terms rigorously.)

At the end, I’ll summarize the four contrasts:

Bits without character vs. A world that always shows itself as something

The medium as a vacuum vs. We are the medium that moves messages because we care about them

Communication as the reproduction of a representation in the listener’s head vs. Turning to a shared world together

World as noise vs. Links as a context of connection

Not by coincidence, each of these is a major Heideggerian theme: Being-as or meaning, care, truth, and world.

And if it’s not obvious, I do not think that Heidegger’s writings on technology have anything much to do with the Internet. He was criticizing the technology of the 1950s that scared him: mainframes and broadcast. He probably would have hated the Net also, but he was a snobby little fascist prick.

7 Comments »

February 13, 2011

The size of an update

I enjoyed this explanation of how Google updates Chrome faster than ever by cleverly updating only the elements that have changed. The problem is that software in executable form usually uses spots in memory that are hard-coded into it: instead of saying “Take the number_of_miles_traveled and divide it by number_of_gallons_used…”, it says “Take the number stored at memory address #1876023…” (I’m obviously simplifying.) If you insert or delete code from the program, the memory addresses will probably change, so that the program is now looking in the wrong spot for the number of miles traveled, and for instructions about what to do next. You can only hope that the crash will be fast and will happen in the presence of those who love you.

So, I enjoyed the Chrome article for a few reasons.

First, it was written clearly enough that even I could follow it, pretty much.

Second, the technique they use is not only clever, it bounces between levels of abstraction. The compiled code that runs on your computer generally is at a low level of abstraction: What the programmer thinks of as a symbol (a variable) such as number_of_miles_traveled gets turned into a memory address. The Chrome update system reintroduces a useful level of abstraction.
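
A toy illustration of why that matters (my own numbers, not Courgette’s actual format): suppose version 1 of a program says “jump to address 1,876,023” in a hundred different places, and version 2 inserts four bytes of new code near the top of the file. Every one of those hundred addresses shifts to 1,876,027, so a raw byte-level diff touches a hundred scattered spots to express what is really a single change. If the updater first translates the absolute addresses back into symbolic labels, diffs that representation, and then recomputes the concrete addresses on your machine, the patch only has to carry the small amount of genuinely new code.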

Third, I like what this says about the nature of information. I don’t think Courgette (the update system) counts as a compression algorithm, because it does not enable fewer bits to encode more information, but it does enable fewer bits to have more effect. Or maybe it does count as compression if we consider Chrome to be not a piece of software that runs on client computers but to be a system of clients connected to a central server that is spread out across both space and time. In either case, information is weird.

1 Comment »

January 31, 2011

We are the medium

I know many others have made this point, but I think it’s worth saying again: We are the medium.

I don’t mean this in the sense that we are the new news media, as when Dan Gillmor talks about “We the Media.” I cherish Dan’s work (read his latest: Mediactive), but I mean “We are the medium” more in McLuhan’s “The medium is the message” sense.

McLuhan was reacting against information science’s view of a medium as that through which a signal (or message) passes.

[Image: Shannon’s communication diagram]

Information science purposefully abstracted itself from every and any particular medium, aiming at theories that held whether you were talking about tin can telephones or an inter-planetary Web. McLuhan’s pushback was: But the particularities of a medium do count. They affect the message. In fact, the medium is the message!

I mean by “We are the medium” something I think we all understand, although the old way of thinking keeps intruding. “We are the medium” means that, quite literally, we are the ones through whom information, messages, news, ideas, videos, and links of every sort move — and they move through this “channel” because we decide to move them. Someone sends me a link to a funny video. I tweet about it. You see it. You send a Facebook message to your friends. One of them (presumably an ancient) emails it to more friends. The video moves through us. Without us, the transport medium — the Internet — is a hyperlinked collection of inert bits. We are the medium.

Which makes McLuhan’s aphorism more true than ever. In tweeting about the video, I am also tweeting about myself: “This is the sort of thing I find funny. Don’t I have a great sense of humor? And I was clever enough to find it. And I care enough about you — and about my reputation — to send it out to you.” That’s 51 characters over the Twitter limit, but it’s clearly embedded in my tweet.

Although there are a thousand ways “We are the medium” is wrong, I think what’s right about it matters:

  • Because we are the medium, one-way announcements, such as a tweet to thousands of followers, still have a conversational element. We may not be able to tweet back and expect an answer, but we can pass it around, which is a conversational act.

  • Because we are the medium, news is no longer mere information. In forwarding the item about the Egyptian protestor or about the Navy dealing well with a gay widower, I am also saying something about myself. That’s why we are those formerly known as the audience: not just because we can engage in acts of journalism without a newspaper behind us, but because in becoming the medium through which news travels, something of us travels with every retweet.

  • Because we are the medium, fame on the Net is not simply being known by many because your image was transmitted many times. Rather, if you’re famous on the Internet, it’s because we put ourselves on the line by forwarding your image, your video, your idea, your remix. We are the medium that made you famous.

It is easy to slip back into the old paradigm in which there is a human sender, a message, a medium through which it travels, and a human recipient. It’s easy because that’s an accurate abstraction that is sometimes useful. It’s easy because the Internet is also used for traditional communication. But what is distinctive and revolutionary about the Internet is the failure of the old diagram to capture what so often is essential: We are not users of the medium, and we are not outside of the medium listening to its messages. Rather, we are the medium.

42 Comments »
