history Archives - Joho the Blog

June 18, 2018

Filming the first boxing match

Joseph Fagan, an author, writer, TV show host, and the Official Historian of West Orange Township, has given me permission to post his account of how the first filming of a boxing match tested the legal waters. It’s a fascinating early example of finding analogies in order to figure out how to apply old laws to new technology — and also of how the technological limitations of a medium can affect content.

First filmed boxing match tested the legal waters in WO [West Orange, NJ]

By Joseph Fagan

On June 14, 1894, one hundred and twenty-four years ago today, a boxing match was first captured on film. The event took place at Edison’s Black Maria studio, giving the world’s first movie studio, in West Orange, the distinction of being the site of the first filmed boxing match in history. It was a staged six-round fight between two lightweight boxers, Michael Leonard and Jack Cushing. The filming of this fight at the Black Maria may have violated prize fighting laws, but “the technology seemed to surpass the law in a way no one could have predicted.”

Although boxing was still illegal in New Jersey in 1894, the sport was growing in popularity. The New Jersey penal code had been amended in 1835 to specifically outlaw prize fighting. The art of pugilism, as it was also known, was banned in the United States at the time. It was illegal to organize, participate in, or attend a boxing match. But the law was somewhat unclear on the legality of photographing a boxing match. By the time Edison’s moving picture technology had emerged, the law had not yet adopted any provisions for the filming of a boxing match.

An assumption was made that since it was legal to look at a still photograph of a boxing match, it was by extension legal to look at a motion picture of a boxing match as well. The New Jersey legislature could not have anticipated prize fighting films in 1835, when photography was still in its infancy and largely experimental.

By the late 1880s the concept of moving images as entertainment was not a new one, and not uniquely that of Edison. In 1893 he built the world’s first motion picture studio in West Orange, known as the Black Maria. The films produced at this studio were not film as we know it today but short films made specifically for use in Edison’s invention, the kinetoscope. This emerging technology not only commercialized moving pictures but also made history as it tested the known boundaries of New Jersey law regarding prize fighting.

The first kinetoscope parlor opened in New York City on April 14, 1894, in a converted shoe store. This date marks the birth of commercial film exhibition in the United States. Customers could view the films in a kinetoscope, which sat on the floor and was a combination peep show and slot machine. Kinetoscope parlors soon increased in popularity and opened around the country. A constant flow of new film subjects had to be produced at the West Orange studio to keep the new invention popular. Vaudeville performers, dancers, and magicians were among the first forms of entertainment to be filmed at the Black Maria studio.

The filming of the Leonard-Cushing Fight demonstrated the potential illegality of the events at the Black Maria, but there is no record of a grand jury investigation of the fight. The ring was specially designed to fit in the Black Maria and was only 12 feet square. The fight consisted of six one-minute rounds between Leonard and Cushing. One minute was the longest the film in the camera would last, so “the kinetoscope itself was the time keeper.” In between rounds the camera had to be reloaded, which took seven minutes. The fight was essentially six separate bouts, each titled by round number. In the background five fans can be seen looking into the ring. The referee hardly moves as the two fighters swing roundhouse blows at each other. Michael Leonard wore white trunks and Jack Cushing wore black trunks. Although a couple of punches seem to land, both fighters maintained upright stances during the fight. Customers in kinetoscope parlors who watched the final round saw Leonard score a knockdown, and he was therefore considered the winner.

The first boxing match was filmed and produced by William Kennedy Dickson, working for Edison. It remains unclear whether Edison was actually at the fight; he is reported to have been 40 miles away in Ogdensburg, NJ, overseeing his mining operations. In my opinion, very little happened at his West Orange complex without his knowledge or approval. Edison’s confidence is perhaps best understood in a 1903 quote. M. A. Rosanoff joined Edison’s staff and asked what rules he needed to observe. Edison replied, “There are no rules here… we are trying to accomplish something.”

In the face of legal uncertainties regarding New Jersey law in 1894 plausible deniability may have helped Edison as he drifted into uncharted legal waters. No one was ever charged with a crime for filming the first prize fight in history at the Black Maria in West Orange. It simply set the course for future changes until the prohibition against prize fighting in New Jersey was eventually abolished in 1924.

Posted under a Creative Commons Attribution Non Commercial license: CC-BY-NC, Joseph Fagan

Joseph Fagan can be reached at [email protected]


May 10, 2018

When Edison chose not to invent speech-to-text tech

In 1911, the former mayor of Kingston, Jamaica, wrote a letter [pdf] to Thomas Alva Edison declaring that “The days of sitting down and writing one’s thoughts are now over” … at least if Edison were to agree to take his invention of the voice recorder just one step further and invent a device that transcribes voice recordings into text. It was, alas, an idea too audacious for its time.

Here’s the text of Philip Cohen Stern’s letter:

Dear Sir :-

Your world wide reputation has induced me to trouble you with the following :-

As by talking in the in the Gramaphone [sic] we can have our own voices recorded why can this not in some way act upon a typewriter and reproduce the speech in typewriting

Under the present condition we dictate our matter to a shorthand writer who then has to typewrite it. What a labour saving device it would be if we could talk direct to the typewriter itself! The convenience of it would be enormous. It frequently occurs that a man’s best thoughts occur to him after his business hours and afetr [sic] his stenographer and typist have left and if he had such an instrument he would be independent of their presence.

The days of sitting down and writing out one’s thoughts are now over. It is not alone that there is always the danger in the process of striking out and repairing as we go along, but I am afraid most business-men have lost the art by the constant use of stenographer and their thoughts won’t run into their fingers. I remember the time very well when I could not think without a pen in my hand, now the reverse is the case and if I walk about and dictate the result is not only quicker in time but better in matter; and it occurred to me that such an instrument as I have described is possible and that if it be possible there is no man on earth but you who could do it

If my idea is worthless I hope you will pardon me for trespassing on your time and not denounce me too much for my stupidity. If it is not, I think it is a machine that would be of general utility not only in the commercial world but also for Public Speakers etc.

I am unfortunately not an engineer only a lawyer. If you care about wasting a few lines on me, drop a line to Philip Stern, Barrister-at-Law at above address, marking “Personal” or “Private” on the letter.

Yours very truly,
[signed] Philip Stern.

At the top, Edison has written:

The problem you speak of would be enormously difficult I cannot at present time imagine how it could be done.

The scan of the letter lives at Rutgers’ Thomas A. Edison Papers Digital Edition site: “Letter from Philip Cohen Stern to Thomas Alva Edison, June 5th, 1911,” Edison Papers Digital Edition, accessed May 6, 2018, http://edison.rutgers.edu/digital/items/show/57054. Thanks to Rutgers for mounting the collection and making it public. And a special thanks to Lewis Brett Smiler, the extremely helpful person who pointed out Stern’s letter to my sister-in-law, Meredith Sue Willis, as a result of a talk she gave recently on The Novelist in the Digital Age.

By the way, here’s Philip Stern’s obituary.


February 11, 2018

The brain is not a computer and the world is not information

Robert Epstein argues in Aeon against the dominant assumption that the brain is a computer, that it processes information, stores and retrieves memories, etc. That we assume so comes from what I think of as the informationalizing of everything.

The strongest part of his argument is that computers operate on symbolic information, but brains do not. There is no evidence (that I know of, but I’m no expert. On anything) that the brain decomposes visual images into pixels and those pixels into on-offs in a code that represents colors.

In the second half, Epstein tries to prove that the brain isn’t a computer through some simple experiments, such as drawing a dollar bill from memory and then while looking at it. Someone committed to the idea that the brain is a computer would probably just conclude that the brain isn’t a very good computer. But judge for yourself. There’s more to it than I’m presenting here.

Back to Epstein’s first point…

It is of the essence of information that it is independent of its medium: you can encode it into voltage levels of transistors, magnetized dust on tape, or holes in punch cards, and it’s the same information. Therefore, if the brain is a computer processing information, a representation of the brain’s informational states in another medium should also be conscious. Epstein doesn’t make the following argument, but I will (and I believe I am cribbing it from someone else, but I don’t remember who).

Because information is independent of its medium, we could encode it in dust particles swirling clockwise or counter-clockwise; clockwise is an on, and counter is an off. In fact, imagine there’s a dust cloud somewhere in the universe that has 86 billion motes, the number of neurons in the human brain. Imagine the direction of those motes exactly matches the on-offs of your neurons when you first spied the love of your life across the room. Imagine those spins shift but happen to match how your neural states shifted over the next ten seconds of your life. That dust cloud is thus perfectly representing the informational state of your brain as you fell in love. It is therefore experiencing your feelings and thinking your thoughts.

That by itself is absurd. But perhaps you say it is just hard to imagine. Ok, then let’s change it. Same dust cloud. Same spins. But this time we say that clockwise is an off, and the other is an on. Now that dust cloud no longer represents your brain states. It therefore both is experiencing your thoughts and feelings and is not experiencing them at the same time. Aristotle would tell us that that is logically impossible: a thing cannot simultaneously be something and its opposite.
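Here’s the convention-dependence as a toy sketch (my illustration, not Epstein’s; the spin values are made up): the same physical sequence of spins decodes to opposite bit strings depending on which reading convention you pick, even though nothing about the dust changes.

# Toy version of the dust-cloud thought experiment: the same physical
# "spins" decode to opposite bit strings under two arbitrary conventions.
spins = ["CW", "CCW", "CCW", "CW", "CW", "CCW"]   # hypothetical mote directions

convention_a = {"CW": 1, "CCW": 0}   # clockwise = on
convention_b = {"CW": 0, "CCW": 1}   # clockwise = off

bits_a = [convention_a[s] for s in spins]   # [1, 0, 0, 1, 1, 0]
bits_b = [convention_b[s] for s in spins]   # [0, 1, 1, 0, 0, 1]

# Nothing in the dust changed; only our way of reading it did.
print(bits_a)
print(bits_b)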

Anyway…

Toward the end of the article, Epstein gets to a crucial point that I was very glad to see him bring up: Thinking is not a brain activity, but the activity of a body engaged in the world. (He cites Anthony Chemero’s Radical Embodied Cognitive Science (2009) which I have not read. I’d trace it back further to Andy Clark, David Chalmers, Eleanor Rosch, Heidegger…). Reducing it to a brain function, and further stripping the brain of its materiality to focus on its “processing” of “information” is reductive without being clarifying.

I came into this debate many years ago already made skeptical of the most recent claims about the causes of consciousness by having some awareness of the series of failed metaphors we have used over the past couple of thousand years. Epstein puts this well, citing another book I have not read (and have consequently just ordered):

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

Maybe this time our tech-based metaphor has happened to get it right. But history says we should assume not. We should be very alert to the disanalogies, which Epstein helps us with.

Getting this right, or at least not getting it wrong, matters. The most pressing problem with the informationalizing of thought is not that it applies a metaphor, or even that the metaphor is inapt. Rather it’s that this metaphor leads us to a seriously diminished understanding of what it means to be a living, caring creature.

I think.

 

Hat tip to @JenniferSertl for pointing out the Aeon article.


August 22, 2016

Why do so many baby words start with B?

What’s wrong with English? So many of the words for things in a baby’s environment start with B that when she says “buh” — or, as our grandchild prefers, “bep” — you don’t know if she is talking about a banana, bunny, boat, bread, bath, bubble, ball, bum, burp, bird, belly, or bathysphere.

This is not how you design a language for easy learning. You don’t hear soldiers speaking into their walkie talkies about being at position “Buh buh buh buh.” No, they say something like, “Bravo Victor Mike November.” Those words were picked precisely because they are so hard to mistake for one another. Now that’s how you design a language! (It’s also possible that research at Harvard during WWII that led to the development of the NATO phonetic alphabet influenced the development of Information Theory what with that theory’s differentiating of signal from noise.)

This problem in English probably helps explain why we spend so much time teaching our children how to say animal sounds: animals have the common sense not to sound like one another. That may also be why some of the sounds we teach our children have little to do with the noises animals actually make: Dogs don’t actually say “Woof,” but that sound is hard to confuse with the threadbare imitation we can manage of the sound a tiger makes.

Being a baby is tough. You’ve got little flabby fingers that can’t do anything you want except hold onto a measly Cheerio and even then they can’t tell the difference between your mouth and your nose. Plus you can’t get anywhere except by hitching a ride with an adult whose path is as senseless as a three-legged drunk’s. Then when you want nothing more than a bite of buttery brie, the stupid freaking adult brings you a big blue blanket and then gets annoyed when you kick it off.

The least we could do for our babies is give them some words that don’t sound like every other word they care about.


October 3, 2014

The modern technology with the worst signal-to-noise ratio is …

…the car alarm.

When one goes off, the community’s reaction is not “Catch the thief!” but “Find the car owner so s/he can turn off the @#$%ing car alarm.” At least in the communities I’ve lived in. (Note: I am a privileged white man.)

The signal-to-noise ratio sucks for car alarms in every direction. First, it is a signal to the car owner that is blasted to an entire neighborhood that’s trying to do something else. Second, it’s almost always a false alarm. (See note above.) Third, because it’s almost always a false alarm, it’s an astoundingly ineffective true alarm. The signal becomes noise.

Is there any modern technology with a worse signal-to-noise ratio?


August 9, 2014

Tim Berners-Lee’s amazingly astute 1992 article on this crazy Web thing he started

Dan Brickley points to this incredibly prescient article by Tim Berners-Lee from 1992. The World Wide Web he gets the bulk of the credit for inventing was thriving at CERN where he worked. Scientists were linking to one another’s articles without making anyone type in a squirrely Internet address. Why, over a thousand articles were hyperlinked.

And on this slim basis, Tim outlines the fundamental challenges we’re now living through. Much of the world has yet to catch up with insights he derived from the slightest of experience.

May the rest of us have even a sliver of his genius and a heaping plateful of his generosity.


November 17, 2013

Noam Chomsky, security, and equivocal information

Noam Chomsky and Barton Gellman were interviewed at the Engaging Big Data conference put on by MIT’s Senseable City Lab on Nov. 15. When Prof. Chomsky was asked what we can do about government surveillance, he reiterated his earlier call for us to understand the NSA surveillance scandal within an historical context that shows that governments always use technology for their own worst purposes. According to my liveblogging (= inaccurate, paraphrased) notes, Prof. Chomsky said:

Governments have been doing this for a century, using the best technology they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe this is universal. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications. [Emphasis added, of course.]

I was glad that Barton Gellman — hardly an NSA apologist — called Prof. Chomsky on his lumping of the NSA with the Stasi, for there is simply no comparison between the freedom we have in the US and the thuggish repression omnipresent in East Germany. But I was still bothered, albeit by a much smaller point. I have no serious quarrel with Prof. Chomsky’s points that government incursions on rights are nothing new, and that governments generally (always?) believe they are acting for the best of purposes. I am a little bit hung-up, however, on his equivocating on “information.”

Prof. Chomsky is of course right in his implied definition of information. (He is Noam Chomsky, after all, and knows a little more about the topic than I do.) Modern information is often described as a measure of surprise. A string of 100 alternating ones and zeroes conveys less information than a string of 100 bits that are less predictable, for if you can predict with certainty what the next bit will be, then you don’t learn anything from that bit; it carries no information. Information theory lets us quantify how much information is conveyed by streams of varying predictability.
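To make the “measure of surprise” point concrete, here’s a small sketch (mine, not anything from the interview) that estimates how surprising each new bit is once you’ve seen the bit before it. The perfectly alternating string carries essentially no information per bit; the random string carries about one bit per bit.

import math
import random
from collections import Counter

def conditional_entropy(bits):
    """H(next bit | previous bit), in bits: how much you learn from each
    new bit once you've already seen the one before it."""
    pair_counts = Counter(zip(bits, bits[1:]))   # counts of (previous, next) pairs
    prev_counts = Counter(bits[:-1])             # counts of each previous bit
    n = len(bits) - 1
    h = 0.0
    for (prev, _next), c in pair_counts.items():
        p_pair = c / n                            # P(previous, next)
        p_next_given_prev = c / prev_counts[prev]
        h -= p_pair * math.log2(p_next_given_prev)
    return h

alternating = "01" * 50                                           # 100 perfectly predictable bits
unpredictable = "".join(random.choice("01") for _ in range(100))  # 100 coin flips

print(conditional_entropy(alternating))    # 0.0 -- each bit is fully predictable, so it carries no information
print(conditional_entropy(unpredictable))  # close to 1.0 bit per bit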

So, when U.S. security folks say they are spying on us for our own security, are they saying literally nothing? Is that claim without meaning? Only in the technical sense of information. It is, in fact, quite meaningful, even if quite predictable, in the ordinary sense of the term “information.”

First, Prof. Chomsky’s point that governments do bad things while thinking they’re doing good is an important reminder to examine our own assumptions. Even the bad guys think they’re the good guys.

Second, I disagree with Prof. Chomsky’s generalization that governments always justify surveillance in the name of security. For example, governments sometimes record traffic (including the movement of identifiable cars through toll stations) with the justification that the information will be used to ease congestion. Tracking the position of mobile phones has been justified as necessary for providing swift EMT responses. Governments require us to fill out detailed reports on our personal finances every year on the grounds that they need to tax us fairly. Our government hires a fleet of people every ten years to visit us where we live in order to compile a census. These are all forms of surveillance, but in none of these cases is security given as the justification. And if you want to say that these other forms don’t count, I suspect it’s because it’s not surveillance done in the name of security…which is my point.

Third, governments rarely cite security as the justification without specifying what the population is being secured against; as Prof. Chomsky agrees, that’s an inherent part of the fear-mongering required to get us to accept being spied upon. So governments proclaim over and over what threatens our security: Spies in our midst? Civil unrest? Traitorous classes of people? Illegal aliens? Muggers and murderers? Terrorists? Thus, the security claim isn’t made on its own. It’s made with specific threats in mind, which makes the claim less predictable — and thus more informational — than Prof. Chomsky says.

So, I disagree with Prof. Chomsky’s argument that a government that justifies spying on the grounds of security is literally saying something without meaning. Even if it were entirely predictable that governments will always respond “Because security” when asked to justify surveillance — and my second point disputes that — we wouldn’t treat the response as meaningless but as requiring a follow-up question. And even if the government just kept repeating the word “Security” in response to all our questions, that very act would carry meaning as well, like a doctor who won’t tell you what a shot is for beyond saying “It’s to keep you healthy.” The lack of meaning in the Information Theory sense doesn’t carry into the realm in which people and their public officials engage in discourse.

Here’s an analogy. Prof. Chomsky’s argument is saying, “When a government justifies creating medical programs for health, what they’re saying is meaningless. They always say that! The Nazis said the same thing when they were sterilizing ‘inferiors,’ and Medieval physicians engaged in barbarous [barber-ous, actually – heyo!] practices in the name of health.” Such reasoning would rule out a discussion of whether current government-sponsored medical programs actually promote health. But that is just the sort of conversation we need to have now about the NSA.

Prof. Chomsky’s repeated appeals to history in this interview cover up exactly what we need to be discussing. Yes, both the NSA and the Stasi claimed security as their justification for spying. But far from that claim being meaningless, it calls for a careful analysis of the claim: the nature and severity of the risk, the most effective tactics to ameliorate that threat, the consequences of those tactics on broader rights and goods — all considerations that comparisons to the Stasi and Genghis Khan obscure. History counts, but not as a way to write off security considerations as meaningless by invoking a technical definition of “information.”


July 28, 2013

The shockingly short history of the history of technology

In 1960, the academic journal Technology and Culture devoted its entire Autumn edition [1] to essays about a single work, the fifth and final volume of which had come out in 1958: A History of Technology, edited by Charles Singer, E. J. Holmyard, A. R. Hall, and Trevor I. Williams. Essay after essay implies or outright states something I found quite remarkable: A History of Technology is the first history of technology.

You’d think the essays would have some clever twist explaining why all those other things that claimed to be histories were not, perhaps because they didn’t get the concept of “technology” right in some modern way. But, no, the statements are pretty untwisty. The journal’s editor matter-of-factly claims that the history of technology is a “new discipline.”[2] Robert Woodbury takes the work’s publication as the beginning of the discipline as well, although he thinks it pales next to the foundational work of the history of science [3], a field the journal’s essays generally take as the history of technology’s older sibling, if not its parent. Indeed, fourteen years later, in 1974, Robert Multhauf wrote an article for that same journal, called “Some Observations on the State of the History of Technology,”[4] that suggested that the discipline was only then coming into its own. Why, some universities have even recognized that there is such a thing as an historian of technology!

The essay by Lewis Mumford, whom one might have mistaken for a prior historian of technology, marks the volumes as a first history of technology, pans them as a history of technology, and acknowledges prior attempts that border on being histories of technology. [5] His main objection to A History of Technology — and he is far from alone in this among the essays — is that the volumes don’t do the job of synthesizing the events recounted, failing to put them into the history of ideas, culture, and economics that explains both how technology took the turns that it did and what those turns meant for human life. At least, Mumford says, these five volumes do a better job than the works of three nineteenth-century British writers who wrote something like histories of technology: Andrew Ure, Samuel Smiles, and Charles Babbage. (Yes, that Charles Babbage.) (Multhauf points also to Louis Figuier in France and Franz Reuleaux in Germany.[6])

Mumford comes across as a little miffed in the essay he wrote about A History of Technology, but, then, Mumford often comes across as at least a little miffed. In the 1963 introduction to his 1934 work, Technics and Civilization, Mumford seems to claim the crown for himself, saying that his work was “the first to summarize the technical history of the last thousand years of Western Civilization…” [7]. And, indeed, that book does what he claims is missing from A History of Technology, looking at the non-technical factors that made the technology socially feasible, and at the social effects the technology had. It is a remarkable work of synthesis, driven by a moral fervor that borders on the rhetoric of a prophet. (Mumford sometimes crossed that border; see his 1946 anti-nuke essay, “Gentlemen: You are Mad!” [8]) Still, in 1960 Mumford treated A History of Technology as a first history of technology not only in the academic journal Technology and Culture, but also in The New Yorker, claiming that until recently the history of technology had been “ignored,” and that “…no matter what the oversights or lapses in this new ‘History of Technology,’ one must be grateful that it has come into existence at all.”[9]

So, there does seem to be a rough consensus that the first history of technology appeared in 1958. That the newness of this field is shocking, at least to me, is a sign of how dominant technology as a concept — as a frame — has become in the past couple of decades.


[1] Technology and Culture. Autumn 1960. Vol. 1, Issue 4.

[2] Melvin Kranzberg. “Charles Singer and ‘A History of Technology.'” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 299-302; p. 300.

[3] Robert S. Woodbury. “The Scholarly Future of the History of Technology.” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 345-348; p. 345.

[4] Robert P. Multhauf. “Some Observations on the State of the History of Technology.” Technology and Culture. January 1974. Vol. 15, No. 1. pp. 1-12.

[5] Lewis Mumford. “Tools and the Man.” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 320-334.

[6] Multhauf, p. 3.

[7] Lewis Mumford. Technics and Civilization. (Harcourt Brace, 1934. New edition 1963), p. xi.

[8] Lewis Mumford. “Gentlemen: You Are Mad!” Saturday Review of Literature. March 2, 1946, pp. 5-6.

[9] Lewis Mumford. “From Erewhon to Nowhere.” The New Yorker. Oct. 8, 1960. pp. 180-197.


April 7, 2013

The medium is the message is the transmitter is the receiver

Al Jazeera asked me to contribute a one-minute video for an episode of Listening Post about how McLuhan looks in the Age of the Internet. They ultimately rejected it. I can see why; it’s pretty geeky. Also, it’s not very interesting.

So, what the heck, here it is:


August 10, 2012

HyperCard@25

On August 11, 1987, Apple genius Bill Atkinson — and this was before every kid in a clean shirt was a candidate for Apple Genius — held a press conference at MacWorld to unveil HyperCard. I was there.

[Image: example of a HyperCard stack]

I watched Atkinson’s presentation as a PR/marketing guy at a software company, and as an early user of the initial Mac (although my personal machines were CP/M, MS-DOS, and then Windows). As a PR guy, I was awestruck by the skill of the presentation. I remember Atkinson dynamically resizing a bit-mapped graphic and casually mentioning that figuring out on the fly which bits to keep and which to throw out was no small feat. And at the other end of the scale, the range of apps — each beautifully designed, of course — was fantastic.

HyperCard was pitched as a way to hyperlink together a series of graphic cards or screens. The cards had the sort of detail that bitmapped graphics afforded, and that Apple knew how to deliver. Because the cards were bitmapped, they tended to highlight their uniqueness, like the pages of a highly designed magazine, or etchings in a gallery.

Atkinson also pitched HyperCard as a development environment that made some hard things easy. That part of the pitch left me unconvinced. Atkinson emphasized the object-orientation of HyperTalk — I remember him talking about message-passing and inheritance — but it seemed to me as a listener that building a HyperStack (as HyperCard apps were called) was going to be beyond typical end users. (I created a Stack for my company a few months later, with some basic interactivity. Fun.)

Apple killed off HyperCard in 2004, but it remains more than a fond memory to many of us. In fact, some — including Bill Atkinson — have noted how close it was to being a browser before there was a Web. A couple of months ago, Matthew Lasar at Ars Technica wrote:

In an angst-filled 2002 interview, Bill Atkinson confessed to his Big Mistake. If only he had figured out that stacks could be linked through cyberspace, and not just installed on a particular desktop, things would have been different.

“I missed the mark with HyperCard,” Atkinson lamented. “I grew up in a box-centric culture at Apple. If I’d grown up in a network-centric culture, like Sun, HyperCard might have been the first Web browser. My blind spot at Apple prevented me from making HyperCard the first Web browser.”

First of all, we should all give Bill A a big group hug. HyperCard was an awesome act of imagination. Thank you!

But I don’t quite see HyperCard as the precursor to the Web. I think instead it anticipated something later.

HyperCard + network = Awesome, but HyperCard + network != Web browser. The genius of Tim Berners-Lee was not that he built a browser that made the Internet far more usable. TBL’s real genius was that he wrote protocols and standards by which hyperlinked information could be displayed and shared. The HTML standard itself was at best serviceable; it was a specification using an already-existing standard, SGML, that let you specify the elements and structure of particular document types. TBL went against the trend by making an SGML specification that was so simple that it was derided by the SGML cowboys. That was very smart. Wise, even. But we have the Web today because TBL didn’t start by inventing a browser. He instead said that if you want to have some text be hyperlinked, surround it with a particular mark-up ("<a href='http://pathname.com/page.html'></a>"). And, if you want to write a browser, make sure that it knows how to interpret that markup (and support the protocols). The Web took off because it wasn’t an application, but a specification for how applications could share hyperlinked information. Anyone who wanted to could write applications for displaying hyperlinked documents. And people did.

There’s another way HyperCard was not Web minus Network. The Web works off of a word-processing metaphor. HyperCard works off of a page-layout/graphics metaphor. HTML as first written gave authors precious little control over the presentation of the contents: the browser decided where the line endings were, how to wrap text around graphics, and what a first level heading would look like compared to a second level heading. Over time, HTML (especially with CSS) has afforded authors a much higher degree of control over presentation, but the architecture of the Web still reflects the profound split between content and layout. This is how word-processors work, and it’s how SGML worked. HyperCard, on the other hand, comes out of a bitmapped graphics mentality in which the creator gets pinpoint control over the placement of every dot. You can get stunningly beautiful cards this way, but the Web has gained some tremendous advantages because it went the other way.

Let me be clearer. In the old days, WordPerfect was unstructured and Microsoft Word was structured. With WordPerfect, to make a line of text into a subhead you’d insert a marker telling WordPerfect to begin putting the text into boldface, and another marker telling it to stop. You might put in other markers telling it to put the same run of text into a larger font and to underline it. With Word, you’d make a line into a subhead by putting your text caret into it and selecting “subhead” from a pick list. If you wanted to turn all the subheads red, you’d edit the properties of “subhead” and Word would apply the change to everything marked as a subhead. HTML is like Word, not WordPerfect. From this basic decision, HTML has gained tremendous advantages:

  • The structure of pages is important semantic information. A Web with structured pages is smarter than one with bitmaps.

  • Separating content from layout enables the dynamic re-laying out that has become increasingly important as we view pages on more types of devices. If you disagree, tell me how much fun it is to read a full-page pdf on your mobile phone.

  • Structured documents enable many of the benefits of object orientation: Define it once and have all instances update; attach methods to these structures, enable inheritance, etc.

(It’s disappointing to me that the Google Docs document programming environment doesn’t take advantage of this. The last time I looked, you couldn’t attach methods to objects. It’s WordPerfect all over again.)
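To make the WordPerfect-vs-Word (and HTML) contrast above concrete, here’s a toy sketch — mine, not Word’s, HTML’s, or Interleaf’s actual machinery: content carries only semantic names, presentation lives in a separate stylesheet, and “turn all the subheads red” is a single edit rather than a hunt through the text for formatting markers.

# A toy "structured document": each element carries a semantic role,
# and presentation lives in a separate stylesheet (the Word/HTML approach).
document = [
    ("subhead", "Why structure matters"),
    ("body",    "Separating content from layout lets presentation change in one place."),
    ("subhead", "Another section"),
]

stylesheet = {
    "subhead": {"bold": True,  "size": 14, "color": "black"},
    "body":    {"bold": False, "size": 11, "color": "black"},
}

# "Turn all the subheads red" is one edit to the stylesheet,
# not a search for every bold-on/bold-off marker in the text.
stylesheet["subhead"]["color"] = "red"

for role, text in document:
    print(role, stylesheet[role], "->", text)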

I should mention that the software company I worked at, Interleaf, created electronic documents that separated content from presentation (with SGML as its model), that treated document elements as objects, and that enabled them to be extended with event-aware methods. These documents worked together over local area networks. So, I think there’s a case to be made that Interleaf’s “active documents” were actually closer to presaging the Web than HyperCard was, although Interleaf made the same mistake of writing an app — and an expensive, proprietary one to boot — rather than a specification. It was great technology, but the act of genius that gave us the Web was about the power of specifications and an architecture independent of the technology that implements it.

HyperCard was a groundbreaking, beautiful, and even thrilling app. Ahead of its time for sure. But the time it was ahead of seems to me to be not so much the Age of the Web as the Age of the App. I don’t know why there isn’t now an app development environment that gives us what HyperCard did. Apparently HyperCard is still ahead of its time.

 


[A few minutes later] Someone’s pointed me to Infinite Canvas as a sort of HyperCard for iPhone…

[An hour later:] A friend suggests using the hashtag #HyperCard25th.

