
March 24, 2019

Automating our hardest things: Machine Learning writes

In 1948, when Claude Shannon was inventing information science [pdf] (and, I’d say, information itself), he took as an explanatory example a simple algorithm for predicting the next element of a sentence. For example, treating each letter as equiprobable, he came up with sentences such as:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.

If you instead use the average frequency of each letter, you come up with sentences that seem more language-like:

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL.

At least that one has a reasonable number of vowels.

If you then consider the frequency of letters following other letters—U follows a Q far more frequently than X does—you are practically writing nonsense Latin:

ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.

Looking not at pairs of letters but at triplets, Shannon got:

IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.

Then Shannon changed his units from letters to words, looking at which words tend to follow which, and got:

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

Pretty good! But still gibberish.
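
Shannon worked out these approximations by hand from frequency tables, but the procedure is easy to mechanize. Here is a minimal sketch in Python (my own illustration, not Shannon’s program) of the word-level version: pick each next word in proportion to how often it follows the previous word in whatever text you train it on.

```python
# A minimal sketch (my illustration, not Shannon's code) of a word-level
# approximation: sample each next word according to how often it follows
# the previous word in the training text.
import random
from collections import defaultdict

def build_follower_table(text):
    """For each word, collect the words that follow it, with repetition,
    so random.choice() samples in proportion to observed frequency."""
    words = text.split()
    followers = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=25):
    word = start
    output = [word]
    for _ in range(length - 1):
        options = followers.get(word)
        # Dead end: jump to a random word and keep going.
        word = random.choice(options) if options else random.choice(list(followers))
        output.append(word)
    return " ".join(output)

# Replace this toy placeholder with any large body of English text.
corpus = "the cat sat on the mat and the dog sat on the log and the cat saw the dog"
table = build_follower_table(corpus)
print(generate(table, start="the"))
```

Feed it a big enough corpus and its output starts to resemble Shannon’s word-level samples: locally plausible, globally meaningless.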

Now jump ahead seventy years and try to figure out which pieces of the following story were written by humans and which were generated by a computer:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.

The answer: The first paragraph was written by a human being. The rest was generated by a machine learning system trained on a huge body of text. You can read about it in a fascinating article (pdf of the research paper) by its creators at OpenAI. (Those creators are: Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.)

There are two key differences between this approach and Shannon’s.

First, the new approach analyzed a very large body of documents from the Web. It ingested 45 million pages linked in Reddit comments that got more than three upvotes. After removing duplicates and some other cleanup, the data set was reduced to 8 million Web pages. That is a lot of pages. Of course the use of Reddit, or any one site, can bias the dataset. But one of the aims was to compare this new, huge, dataset to the results from existing sets of text-based data. For that reason, the developers also removed Wikipedia pages from the mix since so many existing datasets rely on those pages, which would smudge the comparisons.
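
The selection step itself is conceptually simple. Here is a schematic sketch (my own, not OpenAI’s actual WebText pipeline), assuming a hypothetical input of (url, upvotes) pairs scraped from Reddit, of the kind of filtering described above.

```python
# A schematic sketch, not OpenAI's WebText code: keep outbound links from
# Reddit posts with more than three upvotes, drop duplicates, and exclude
# Wikipedia so comparisons with existing Wikipedia-based datasets stay clean.
def filter_links(links):
    """links: iterable of (url, upvotes) pairs (a hypothetical input format)."""
    seen = set()
    kept = []
    for url, upvotes in links:
        if upvotes <= 3:                # require more than three upvotes
            continue
        if "wikipedia.org" in url:      # many benchmark datasets already use Wikipedia
            continue
        if url in seen:                 # remove duplicates
            continue
        seen.add(url)
        kept.append(url)
    return kept

print(filter_links([
    ("https://example.com/story", 12),
    ("https://en.wikipedia.org/wiki/Unicorn", 50),  # excluded: Wikipedia
    ("https://example.com/story", 12),              # excluded: duplicate
    ("https://example.com/obscure", 2),             # excluded: too few upvotes
]))
```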

(By the way, a quick Google search for any page from before December 2018 mentioning both “Jorge Pérez” and “University of La Paz” turned up nothing. The AI is constructing, not copy-pasting.)

The second distinction from Shannon’s method: the developers used machine learning (ML) to create a neural network, rather than relying on a table of frequencies of short sequences of words. ML creates a far, far more complex model that can assess the probability of the next word based on the entire context of its prior uses.
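
To get a feel for what sampling from such a model looks like, here is a minimal sketch (my example, not the authors’ code) that draws a continuation from the small GPT-2 model that was publicly released, assuming the Hugging Face transformers package; the prompt and sampling settings are my own choices, not necessarily the paper’s exact configuration.

```python
# A minimal sketch, not OpenAI's code: sample a continuation from the small,
# publicly released GPT-2 model via the Hugging Face "transformers" package.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, scientist discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=100,
        do_sample=True,                       # sample rather than always take the top word
        top_k=40,                             # restrict sampling to the 40 most likely words
        pad_token_id=tokenizer.eos_token_id,  # avoid the missing-pad-token warning
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Each generated word is chosen from a probability distribution the network computes from everything that came before it, not from a lookup table of short sequences.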

The results can be astounding. While the developers freely acknowledge that the examples they feature are somewhat cherry-picked, they say:

When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50% of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.

There are obviously things to worry about as this technology advances. For example, fake news could become the Earth’s most abundant resource. For fear of its abuse, its developers are not releasing the full dataset or model weights. Good!

Nevertheless, the possibilities for research are amazing. And, perhaps most important in the long term, one by one the human capabilities that we take as unique and distinctive are being shown to be replicable without an engine powered by a miracle.

That may be a false conclusion. Human speech does not consist simply of the utterances we make but of the complex intentional and social systems in which those utterances are more than just flavored wind. But ML intends nothing and appreciates nothing. Nothing matters to ML. Nevertheless, knowing that sufficient silicon can duplicate the human miracle should shake our confidence in our species’ special place in the order of things.

(FWIW, my personal theology says that when human specialness is taken as conferring special privilege, any blow to it is a good thing. When that specialness is taken as placing special obligations on us, then at its very worst it’s a helpful illusion.)


June 18, 2018

Filming the first boxing match

Joseph Fagan, an author, writer, TV show host, and the Official Historian of West Orange Township, has given me permission to post his recounting of the legal questions surrounding the first filming of a boxing match. It’s a fascinating early example of finding analogies in order to figure out how to apply old laws to new technology — and also of how the technological limitations of a medium can affect content.

First filmed boxing match tested the legal waters in WO [West Orange, NJ]

By Joseph Fagan

On June 14, 1894, one hundred and twenty-four years ago today, a boxing match was first captured on film. The event took place at Edison’s Black Maria studio, giving the world’s first movie studio, in West Orange, the distinction of being the site of the first filmed boxing match in history. It was a staged six-round fight between two lightweight boxers, Michael Leonard and Jack Cushing. The filming of this fight at the Black Maria may have violated prize fighting laws, but the technology seemed to surpass the law in a way no one could have predicted.

Although boxing was still illegal in New Jersey in 1894, the sport was growing in popularity. The New Jersey penal code had been amended in 1835 to specifically outlaw prize fighting. The art of pugilism, as it was also known, was banned in the United States at the time. It was illegal to organize, participate in, or attend a boxing match. But the law was somewhat unclear on the legality of photographing a boxing match. By the time Edison’s moving picture technology had emerged, the law had not yet adopted any provisions for the filming of a boxing match.

An assumption was made that since it was legal to look at a still photograph of a boxing match, it was by extension legal to look at a motion picture of one as well. The New Jersey legislature could not have anticipated prize fighting films in 1835, when photography was still in its infancy and almost entirely experimental.

By the late 1880s, the concept of moving images as entertainment was not a new one, and not uniquely Edison’s. In 1893 he built the world’s first motion picture studio in West Orange, known as the Black Maria. The films produced at this studio were not film as we know it today but short films made specifically for use in Edison’s invention, the kinetoscope. This emerging technology not only commercialized moving pictures but also made history as it tested the known boundaries of New Jersey law regarding prize fighting.

The first kinetoscope parlor opened in New York City on April 14, 1894, in a converted shoe store. This date marks the birth of commercial film exhibition in the United States. Customers could view the films in a kinetoscope, which sat on the floor and was a combination peep show and slot machine. Kinetoscope parlors soon increased in popularity and opened around the country. A constant flow of new film subjects was needed from the West Orange studio to keep the new invention popular. Many vaudeville performers, dancers, and magicians became the first forms of entertainment to be filmed at the Black Maria studio.

The filming of the Leonard-Cushing fight demonstrated the potential illegality of the events at the Black Maria, but there is no record of a grand jury investigation of the fight. The ring was specially designed to fit in the Black Maria and was only 12 feet square. The fight consisted of six one-minute rounds between Leonard and Cushing. One minute was the longest the film in the camera would last, so the kinetoscope itself was the time keeper. In between rounds the camera had to be reloaded, which took seven minutes. The fight was essentially six separate bouts, each titled by round number. In the background five fans can be seen looking into the ring. The referee hardly moves as the two fighters swing roundhouse blows at each other. Michael Leonard wore white trunks and Jack Cushing wore black trunks. Although a couple of punches seem to land, both fighters maintained upright stances during the fight. Customers in kinetoscope parlors who watched the final round saw Leonard score a knockdown, and he was therefore considered the winner.

The first boxing match film was made and produced by William Kennedy Dickson, working for Edison. It remains unclear if Edison was actually at the fight; he is reported to have been 40 miles away in Ogdensburg, NJ, overseeing his mining operations. In my opinion, I doubt that much happened at his West Orange complex without his knowledge or approval. Edison’s confidence is perhaps best understood in a 1903 quote. M. A. Rosanoff joined Edison’s staff and asked what rules he needed to observe. Edison replied, “There are no rules here… we are trying to accomplish something.”

In the face of legal uncertainties regarding New Jersey law in 1894, plausible deniability may have helped Edison as he drifted into uncharted legal waters. No one was ever charged with a crime for filming the first prize fight in history at the Black Maria in West Orange. It simply set the course for future changes, until the prohibition against prize fighting in New Jersey was eventually abolished in 1924.

Posted under a Creative Commons Attribution Non Commercial license: CC-BY-NC, Joseph Fagan

Joseph Fagan can be reached at JosephFagan@WestOrangeHistory.com


May 10, 2018

When Edison chose not to invent speech-to-text tech

In 1911, the former mayor of Kingston, Jamaica, wrote a letter [pdf] to Thomas Alva Edison declaring that “The days of sitting down and writing one’s thoughts are now over” … at least if Edison were to agree to take his invention of the voice recorder just one step further and invent a device that transcribes voice recordings into text. It was, alas, an idea too audacious for its time.

Here’s the text of Philip Cohen Stern’s letter:

Dear Sir :-

Your world wide reputation has induced me to trouble you with the following :-

As by talking in the in the Gramaphone [sic] we can have our own voices recorded why can this not in some way act upon a typewriter and reproduce the speech in typewriting

Under the present condition we dictate our matter to a shorthand writer who then has to typewrite it. What a labour saving device it would be if we could talk direct to the typewriter itself! The convenience of it would be enormous. It frequently occurs that a man’s best thoughts occur to him after his business hours and afetr [sic] his stenographer and typist have left and if he had such an instrument he would be independent of their presence.

The days of sitting down and writing out one’s thoughts are now over. It is not alone that there is always the danger in the process of striking out and repairing as we go along, but I am afraid most business-men have lost the art by the constant use of stenographer and their thoughts won’t run into their fingers. I remember the time very well when I could not think without a pen in my hand, now the reverse is the case and if I walk about and dictate the result is not only quicker in time but better in matter; and it occurred to me that such an instrument as I have described is possible and that if it be possible there is no man on earth but you who could do it

If my idea is worthless I hope you will pardon me for trespassing on your time and not denounce me too much for my stupidity. If it is not, I think it is a machine that would be of general utility not only in the commercial world but also for Public Speakers etc.

I am unfortunately not an engineer only a lawyer. If you care about wasting a few lines on me, drop a line to Philip Stern, Barrister-at-Law at above address, marking “Personal” or “Private” on the letter.

Yours very truly,
[signed] Philip Stern.

At the top, Edison has written:

The problem you speak of would be enormously difficult I cannot at present time imagine how it could be done.

The scan of the letter lives at Rutgers’ Thomas A. Edison Papers Digital Edition site: “Letter from Philip Cohen Stern to Thomas Alva Edison, June 5th, 1911,” Edison Papers Digital Edition, accessed May 6, 2018, http://edison.rutgers.edu/digital/items/show/57054. Thanks to Rutgers for mounting the collection and making it public. And a special thanks to Lewis Brett Smiler, the extremely helpful person who pointed out Stern’s letter to my sister-in-law, Meredith Sue Willis, as a result of a talk she gave recently on The Novelist in the Digital Age.

By the way, here’s Philip Stern’s obituary.


February 11, 2018

The brain is not a computer and the world is not information

Robert Epstein argues in Aeon against the dominant assumption that the brain is a computer, that it processes information, stores and retrieves memories, etc. That we assume so comes from what I think of as the informationalizing of everything.

The strongest part of his argument is that computers operate on symbolic information, but brains do not. There is no evidence (that I know of, but I’m no expert. On anything) that the brain decomposes visual images into pixels and those pixels into on-offs in a code that represents colors.

In the second half, Epstein tries to prove that the brain isn’t a computer through some simple experiments, such as drawing a dollar bill from memory and then while looking at it. Someone committed to the idea that the brain is a computer would probably conclude that the brain just isn’t a very good computer. But judge for yourself. There’s more to it than I’m presenting here.

Back to Epstein’s first point…

It is of the essence of information that it is independent of its medium: you can encode it into voltage levels of transistors, magnetized dust on tape, or holes in punch cards, and it’s the same information. So if thinking and feeling are just a matter of the brain’s informational states, then a representation of a brain’s states in another medium should also be conscious. Epstein doesn’t make the following argument, but I will (and I believe I am cribbing it from someone else, but I don’t remember who).

Because information is independent of its medium, we could encode it in dust particles swirling clockwise or counter-clockwise; clockwise is an on, and counter-clockwise is an off. In fact, imagine there’s a dust cloud somewhere in the universe that has 86 billion motes, the number of neurons in the human brain. Imagine the direction of those motes exactly matches the on-offs of your neurons when you first spied the love of your life across the room. Imagine those spins shift but happen to match how your neural states shifted over the next ten seconds of your life. That dust cloud is thus perfectly representing the informational state of your brain as you fell in love. It is therefore experiencing your feelings and thinking your thoughts.

That by itself is absurd. But perhaps you say it is just hard to imagine. OK, then let’s change it. Same dust cloud. Same spins. But this time we say that clockwise is an off, and counter-clockwise is an on. Now that dust cloud no longer represents your brain states. It is therefore both experiencing your thoughts and feelings and not experiencing them, at the same time. Aristotle would tell us that that is logically impossible: a thing cannot simultaneously be something and its opposite.

Anyway…

Toward the end of the article, Epstein gets to a crucial point that I was very glad to see him bring up: thinking is not a brain activity but the activity of a body engaged in the world. (He cites Anthony Chemero’s Radical Embodied Cognitive Science (2009), which I have not read. I’d trace the idea back further, to Andy Clark, David Chalmers, Eleanor Rosch, Heidegger…). Reducing thinking to a brain function, and further stripping the brain of its materiality to focus on its “processing” of “information,” is reductive without being clarifying.

I came into this debate many years ago already skeptical of the most recent claims about the causes of consciousness, thanks to some awareness of the series of failed metaphors we have used over the past couple of thousand years. Epstein puts this well, citing another book I have not read (and thus another book I’ve just ordered):

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

Maybe this time our tech-based metaphor has happened to get it right. But history says we should assume not. We should be very alert to the disanalogies, which Epstein helps us with.

Getting this right, or at least not getting it wrong, matters. The most pressing problem with the informationalizing of thought is not that it applies a metaphor, or even that the metaphor is inapt. Rather it’s that this metaphor leads us to a seriously diminished understanding of what it means to be a living, caring creature.

I think.

 

Hat tip to @JenniferSertl for pointing out the Aeon article.


August 22, 2016

Why do so many baby words start with B?

What’s wrong with English? So many of the words for things in a baby’s environment start with B that when she says “buh” — or, as our grandchild prefers, “bep” — you don’t know if she is talking about a banana, bunny, boat, bread, bath, bubble, ball, bum, burp, bird, belly, or bathysphere.

This is not how you design a language for easy learning. You don’t hear soldiers speaking into their walkie-talkies about being at position “Buh buh buh buh.” No, they say something like, “Bravo Victor Mike November.” Those words were picked precisely because they are so hard to mistake for one another. Now that’s how you design a language! (It’s also possible that the research at Harvard during WWII that led to the development of the NATO phonetic alphabet influenced the development of Information Theory, what with that theory’s differentiating of signal from noise.)

This problem in English probably helps explain why we spend so much time teaching our children how to say animal sounds: animals have the common sense not to sound like one another. That may also be why some of the sounds we teach our children have little to do with the noises animals actually make: dogs don’t actually say “Woof,” but that sound is hard to confuse with the threadbare imitation we can manage of the sound a tiger makes.

Being a baby is tough. You’ve got little flabby fingers that can’t do anything you want except hold onto a measly Cheerio and even then they can’t tell the difference between your mouth and your nose. Plus you can’t get anywhere except by hitching a ride with an adult whose path is as senseless as a three-legged drunk’s. Then when you want nothing more than a bite of buttery brie, the stupid freaking adult brings you a big blue blanket and then gets annoyed when you kick it off.

The least we could do for our babies is give them some words that don’t sound like every other word they care about.


October 3, 2014

The modern technology with the worst signal-to-noise ratio is …

…the car alarm.

When one goes off, the community’s reaction is not “Catch the thief!” but “Find the car owner so s/he can turn off the @!@#ing car alarm.” At least in the communities I’ve lived in. (Note: I am a privileged white man.)

The signal-to-noise ratio sucks for car alarms in every direction. First, it is a signal to the car owner that is blasted to an entire neighborhood that’s trying to do something else. Second, it’s almost always a false alarm. (See note above.) Third, because it’s almost always a false alarm, it’s an astoundingly ineffective true alarm. The signal becomes noise.

Is there any modern technology with a worse signal-to-noise ratio?


August 9, 2014

Tim Berners-Lee’s amazingly astute 1992 article on this crazy Web thing he started

Dan Brickley points to this incredibly prescient article by Tim Berners-Lee from 1992. The World Wide Web he gets the bulk of the credit for inventing was thriving at CERN, where he worked. Scientists were linking to one another’s articles without making anyone type in a squirrely Internet address. Why, over a thousand articles were hyperlinked.

And on this slim basis, Tim outlines the fundamental challenges we’re now living through. Much of the world has yet to catch up with insights he derived from the slightest of experience.

May the rest of us have even a sliver of his genius and a heaping plateful of his generosity.


November 17, 2013

Noam Chomsky, security, and equivocal information

Noam Chomsky and Barton Gellman were interviewed at the Engaging Big Data conference put on by MIT’s Senseable City Lab on Nov. 15. When Prof. Chomsky was asked what we can do about government surveillance, he reiterated his earlier call for us to understand the NSA surveillance scandal within an historical context that shows that governments always use technology for their own worst purposes. According to my liveblogging (= inaccurate, paraphrased) notes, Prof. Chomsky said:

Governments have been doing this for a century, using the best technology they had. I’m sure Gen. Alexander believes what he’s saying, but if you interviewed the Stasi, they would have said the same thing. Russian archives show that these monstrous thugs were talking very passionately to one another about defending democracy in Eastern Europe from the fascist threat coming from the West. Forty years ago, RAND released Japanese docs about the invasion of China, showing that the Japanese had heavenly intentions. They believed everything they were saying. I believe this is universal. We’d probably find it for Genghis Khan as well. I have yet to find any system of power that thought it was doing the wrong thing. They justify what they’re doing for the noblest of objectives, and they believe it. The CEOs of corporations as well. People find ways of justifying things. That’s why you should be extremely cautious when you hear an appeal to security. It literally carries no information, even in the technical sense: it’s completely predictable and thus carries no info. I don’t doubt that the US security folks believe it, but it is without meaning. The Nazis had their own internal justifications. [Emphasis added, of course.]

I was glad that Barton Gellman — hardly an NSA apologist — called Prof. Chomsky on his lumping of the NSA with the Stasi, for there is simply no comparison between the freedom we have in the US and the thuggish repression omnipresent in East Germany. But I was still bothered, albeit by a much smaller point. I have no serious quarrel with Prof. Chomsky’s points that government incursions on rights are nothing new, and that governments generally (always?) believe they are acting for the best of purposes. I am a little bit hung up, however, on his equivocating on “information.”

Prof. Chomsky is of course right in his implied definition of information. (He is Noam Chomsky, after all, and knows a little more about the topic than I do.) Modern information is often described as a measure of surprise. A string of 100 alternating ones and zeroes conveys less information than a string of 100 bits that are less predictable, for if you can predict with certainty what the next bit will be, then you don’t learn anything from that bit; it carries no information. Information theory lets us quantify how much information is conveyed by streams of varying predictability.
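
One quick way to see the point is to measure how surprising each bit is once you know the bit that came before it. A small sketch (my own example, nothing from the interview):

```python
# A rough sketch (my own example) of information as surprise: conditioned on
# the previous bit, a perfectly alternating string carries no information,
# while a random string still carries about one bit per symbol.
import math
import random
from collections import Counter, defaultdict

def conditional_entropy(bits):
    """Estimate H(next bit | previous bit), in bits per symbol, from pair frequencies."""
    pair_counts = defaultdict(Counter)
    for prev, nxt in zip(bits, bits[1:]):
        pair_counts[prev][nxt] += 1
    total_pairs = len(bits) - 1
    h = 0.0
    for prev, nexts in pair_counts.items():
        prev_total = sum(nexts.values())
        p_prev = prev_total / total_pairs
        for count in nexts.values():
            p_next = count / prev_total
            h -= p_prev * p_next * math.log2(p_next)
    return h

alternating = "01" * 50                                         # completely predictable
random_bits = "".join(random.choice("01") for _ in range(100))  # unpredictable

print(conditional_entropy(alternating))   # 0.0: the next bit is never a surprise
print(conditional_entropy(random_bits))   # roughly 1.0 bit per symbol
```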

So, when U.S. security folks say they are spying on us for our own security, are they saying literally nothing? Is that claim without meaning? Only in the technical sense of information. It is, in fact, quite meaningful, even if quite predictable, in the ordinary sense of the term “information.”

First, Prof. Chomsky’s point that governments do bad things while thinking they’re doing good is an important reminder to examine our own assumptions. Even the bad guys think they’re the good guys.

Second, I disagree with Prof. Chomsky’s generalization that governments always justify surveillance in the name of security. For example, governments sometimes record traffic (including the movement of identifiable cars through toll stations) with the justification that the information will be used to ease congestion. Tracking the position of mobile phones has been justified as necessary for providing swift EMT responses. Governments require us to fill out detailed reports on our personal finances every year on the grounds that they need to tax us fairly. Our government hires a fleet of people every ten years to visit us where we live in order to compile a census. These are all forms of surveillance, but in none of these cases is security given as the justification. And if you want to say that these other forms don’t count, I suspect it’s because it’s not surveillance done in the name of security…which is my point.

Third, governments rarely cite security as the justification without specifying what the population is being secured against; as Prof. Chomsky agrees, that’s an inherent part of the fear-mongering required to get us to accept being spied upon. So governments proclaim over and over what threatens our security: Spies in our midst? Civil unrest? Traitorous classes of people? Illegal aliens? Muggers and murderers? Terrorists? Thus, the security claim isn’t made on its own. It’s made with specific threats in mind, which makes the claim less predictable — and thus more informational — than Prof. Chomsky says.

So, I disagree with Prof. Chomsky’s argument that a government that justifies spying on the grounds of security is literally saying something without meaning. Even if it were entirely predictable that governments will always respond “Because security” when asked to justify surveillance — and my second point disputes that — we wouldn’t treat the response as meaningless but as requiring a follow-up question. And even if the government just kept repeating the word “Security” in response to all our questions, that very act would carry meaning as well, like a doctor who won’t tell you what a shot is for beyond saying “It’s to keep you healthy.” The lack of meaning in the Information Theory sense doesn’t carry into the realm in which people and their public officials engage in discourse.

Here’s an analogy. Prof. Chomsky’s argument is saying, “When a government justifies creating medical programs for health, what they’re saying is meaningless. They always say that! The Nazis said the same thing when they were sterilizing ‘inferiors,’ and Medieval physicians engaged in barbarous [barber-ous, actually – heyo!] practices in the name of health.” Such reasoning would rule out a discussion of whether current government-sponsored medical programs actually promote health. But that is just the sort of conversation we need to have now about the NSA.

Prof. Chomsky’s repeated appeals to history in this interview cover up exactly what we need to be discussing. Yes, both the NSA and the Stasi claimed security as their justification for spying. But far from that claim being meaningless, it calls for a careful analysis of the claim: the nature and severity of the risk, the most effective tactics to ameliorate that threat, the consequences of those tactics on broader rights and goods — all considerations that comparisons to the Stasi and Genghis Khan obscure. History counts, but not as a way to write off security considerations as meaningless by invoking a technical definition of “information.”


July 28, 2013

The shockingly short history of the history of technology

In 1960, the academic journal Technology and Culture devoted its entire Autumn edition [1] to essays about a single work, the fifth and final volume of which had come out in 1958: A History of Technology, edited by Charles Singer, E. J. Holmyard, A. R. Hall, and Trevor I. Williams. Essay after essay implies or outright states something I found quite remarkable: A History of Technology is the first history of technology.

You’d think the essays would have some clever twist explaining why all those other things that claimed to be histories were not, perhaps because they didn’t get the concept of “technology” right in some modern way. But, no, the statements are pretty untwisty. The journal’s editor matter-of-factly claims that the history of technology is a “new discipline.”[2] Robert Woodbury takes the work’s publication as the beginning of the discipline as well, although he thinks it pales next to the foundational work of the history of science [3], a field the journal’s essays generally take as the history of technology’s older sibling, if not its parent. Indeed, fourteen years later, in 1974, Robert Multhauf wrote an article for that same journal, called “Some Observations on the State of the History of Technology,”[4] that suggested that the discipline was only then coming into its own. Why, some universities have even recognized that there is such a thing as an historian of technology!

The essay by Lewis Mumford, whom one might have mistaken for a prior historian of technology, marks the volumes as a first history of technology, pans them as a history of technology, and acknowledges prior attempts that border on being histories of technology. [5] His main objection to A History of Technology — and he is far from alone in this among the essays — is that the volumes don’t do the job of synthesizing the events recounted, failing to put them into the history of ideas, culture, and economics that would explain both how technology took the turns that it did and what those turns meant for human life. At least, Mumford says, these five volumes do a better job than the works of three nineteenth-century British writers who wrote something like histories of technology: Andrew Ure, Samuel Smiles, and Charles Babbage. (Yes, that Charles Babbage.) (Multhauf points also to Louis Figuier in France and Franz Reuleaux in Germany.[6])

Mumford comes across as a little miffed in the essay he wrote about A History of Technology, but, then, Mumford often comes across as at least a little miffed. In the 1963 introduction to his 1934 work, Technics and Civilization, Mumford seems to claim the crown for himself, saying that his work was “the first to summarize the technical history of the last thousand years of Western Civilization…” [7]. And, indeed, that book does what he claims is missing from A History of Technology, looking at the non-technical factors that made the technology socially feasible, and at the social effects the technology had. It is a remarkable work of synthesis, driven by a moral fervor that borders on the rhetoric of a prophet. (Mumford sometimes crossed that border; see his 1946 anti-nuke essay, “Gentlemen: You Are Mad!” [8]) Still, in 1960 Mumford treated A History of Technology as a first history of technology not only in the academic journal Technology and Culture, but also in The New Yorker, claiming that until recently the history of technology had been “ignored,” and that “…no matter what the oversights or lapses in this new ‘History of Technology,’ one must be grateful that it has come into existence at all.”[9]

So, there does seem to be a rough consensus that the first history of technology appeared in 1958. That the newness of this field is shocking, at least to me, is a sign of how dominant technology as a concept — as a frame — has become in the past couple of decades.


[1] Technology and Culture. Autumn 1960. Vol. 1, Issue 4.

[2] Melvin Kranzberg. “Charles Singer and ‘A History of Technology’.” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 299-302, at p. 300.

[3] Robert S. Woodbury. “The Scholarly Future of the History of Technology.” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 345-348, at p. 345.

[4] Robert P. Multhauf. “Some Observations on the State of the History of Technology.” Technology and Culture. Jan. 1974. Vol. 15, No. 1. pp. 1-12.

[5] Lewis Mumford. “Tools and the Man.” Technology and Culture. Autumn 1960. Vol. 1, Issue 4. pp. 320-334.

[6] Multhauf, p. 3.

[7] Lewis Mumford. Technics and Civilization. (Harcourt Brace, 1934; new edition 1963), p. xi.

[8] Lewis Mumford. “Gentlemen: You Are Mad!” Saturday Review of Literature. March 2, 1946. pp. 5-6.

[9] Lewis Mumford. “From Erewhon to Nowhere.” The New Yorker. Oct. 8, 1960. pp. 180-197.


April 7, 2013

The medium is the message is the transmitter is the receiver

Al Jazeera asked me to contribute a one-minute video for an episode of Listening Post about how McLuhan looks in the Age of the Internet. They ultimately rejected it. I can see why; it’s pretty geeky. Also, it’s not very interesting.

So, what the heck, here it is:

