
March 27, 2023

chatGPT and spad boonerisms

For whatever weird reason, I often spoonerize words after hearing them. For example, I’ll hear “tapping noise” and think “napping toys.” Or “pop corn” echoes as “cop porn.” I disturbed myself yesterday, however, by hearing “a juice box” and thinking “abuse jocks.”

So, I checked in with chatGPT and found that it’s got the concept in theory but not in practice:

PROMPT: What are some interesting but unexpected spoonerisms?
ChatGPT: Spoonerisms are a type of error in which the initial sounds or letters of two or more words are swapped to create a new phrase. Here are some unexpected spoonerisms that you might find interesting:
"Belly Jeans" instead of "Jelly Beans"
"Scooby Snacks" instead of "Ruby Slippers"
"Nimble Thimble" instead of "Thimble Nibble"
"Bumble Beeple" instead of "People Bumping"
"Picky Sockets" instead of "Sticky Pockets"
"Satan's Fingers" instead of "Santa's Fingers"
"Bike of Bricks" instead of "Brick of Bikes"
"Belly Liar" instead of "Lily Bayer"
"Cozy Nook" instead of "Nozy Cook"
"Pumpkin Piehole" instead of "Pumpkin Pole"

Why do we humans win this pathetic John Henry (that steel-drivin’ man) contest against GPT? Possibly because GPT doesn’t know how words are spelled (because it replaces words with tokens) and thus has to rely on context to detect spoonerisms? Any other, better ideas about why it sucks at spoonerizing?
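
As a letter-level operation, by contrast, spoonerizing is trivial. Here’s a minimal Python sketch of my own (nothing to do with GPT’s internals) that swaps the initial consonant clusters of two words:

    import re

    def spoonerize(phrase):
        # Swap the initial consonant clusters of the first and last words.
        words = phrase.split()
        if len(words) < 2:
            return phrase

        def split_onset(word):
            # Everything up to the first vowel, then the rest.
            m = re.match(r"([^aeiouAEIOU]*)(.*)", word)
            return m.group(1), m.group(2)

        (o1, r1), (o2, r2) = split_onset(words[0]), split_onset(words[-1])
        words[0], words[-1] = o2 + r1, o1 + r2
        return " ".join(words)

    print(spoonerize("tapping noise"))  # -> "napping toise"
    print(spoonerize("juice box"))      # -> "buice jox"

Even this toy gets only the spelling right: it turns “tapping noise” into “napping toise”, while the ear hears “napping toys”. Spoonerisms live at the level of sound, and GPT’s tokens are a step removed even from spelling, let alone phonetics.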


December 4, 2022

Computers inside computers inside computers…

First there was the person who built a computer inside of Minecraft and programmed it to play Minecraft. 

Now Frederic Besse has built a usable Linux terminal in ChatGPT — usable in that it can perform system operations on a virtual computer that has also been invoked in (by? with?) ChatGPT. For example, you can tell the terminal to create a file and where to store it in a file system that did not exist until you asked, and under most definitions of “exist” doesn’t exist anywhere.
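
The setup, as I understand it, is a prompt that asks the model to role-play. Something along these lines (my paraphrase, not Besse’s exact prompt) is enough to conjure a working fictional shell:

    I want you to act as a Linux terminal. I will type commands and you
    will reply with what the terminal should show, and nothing else.

    > mkdir -p /home/dw/notes
    > echo "there is no spoon" > /home/dw/notes/reality.txt
    > cat /home/dw/notes/reality.txt
    there is no spoon

The directory, the file, and its contents are all invented on demand, and yet the “filesystem” stays consistent from command to command.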

I feel like I need to get a bigger mind in order for it to be sufficiently blown.

(PS: I could do without the casual anthropomorphizing in the GPT article.)


November 15, 2021

Dust Rising: Machine learning and the ontology of the real

Aeon.co has posted an article I worked on for a couple of years. It’s only 2,200 words, but they were hard words to find because the ideas were, and are, hard for me. I have little sense of whether I got either the words or the ideas right.

The article argues, roughly, that the sorts of generalizations that machine learning models embody are very different from the sorts of generalizations the West has taken as the truths that matter. ML’s generalizations are often tied to far more specific configurations of data; as a result, we often cannot understand them, and often cannot apply them to particular cases except by running the ML model.

This may be leading us to locate the really real not in the eternal (as the West has traditionally done) but at least as much in the fleeting patterns of dust that result from everything affecting everything else all the time and everywhere.

Three notes:

1. Nigel Warburton, the philosophy editor at Aeon, was very helpful, as was Timo Hannay in talking through the ideas, as were about a dozen other people who read drafts. None of them agreed entirely with the article.

2. Aeon for some reason deleted a crucial footnote that said that my views do not necessarily represent the views of Google, while keeping the fact that I am a part-time, temporary writer-in-residence there. To be clear: my views do not necessarily represent Google’s.

3. My original title for it was “Dust Rising”, but then it became “Trains, Car Wrecks, and Machine Learning’s Ontology”, which I still like, although I admit that “ontology” may not be as big a draw as I think it is.


February 28, 2021

The Uncanny Stepford Valley

You’ve probably heard about MyHeritage.com‘s DeepNostalgia service that animates photos of faces. I’ve just posted at Psychology Today about the new type of uncanniness it induces, even though I think the animations of the individual photos pretty well escape the Uncanny Valley.

Here’s a sample from the MyHeritage site:

And here’s a thread of artworks and famous photos animated using DeepNostalgia that I reference in my post:

https://t.co/MDFSu3J0H1 has created some sort of animate your old photos application and I’m of course using it to feed my history addiction.
I apologise in advance to all the ancestors I’m about to offend.

Very fake history.

I’m sorry Queenie. pic.twitter.com/2np437yXyt

— Fake History Hunter (@fakehistoryhunt) February 28, 2021

More at Psychology Today …


March 28, 2020

Computer Ethics 1985

I was going through a shelf of books I haven’t visited in a couple of decades and found a book I used in 1986 when I taught Introduction to Computer Science in my last year as a philosophy professor. (It’s a long story.) Ethical Issues in the Use of Computers was a handy anthology, edited by Deborah G. Johnson and John W. Snapper (Wadsworth, 1985).

So what were the ethical issues posed by digital tech back then?

The first obvious point is that back then ethics were ethics: codes of conduct promulgated by professional societies. So, Part 1 consists of eight essays on “Codes of Conduct for the Computer Professions.” All but two of the articles present the codes for various computing associations. The two stray sheep are “The Quest for a Code of Professional Ethics: An Intellectual and Moral Confusion” (John Ladd) and “What Should Professional Societies do About Ethics?” (Fay H. Sawyier).

Part 2 covers “Issues of Responsibility”, with most of the articles concerning themselves with liability issues. The last article, by James Moor, ventures wider, asking “Are There Decisions Computers Should Not Make?” About midway through, he writes:

“Therefore, the issue is not whether there are some limitations to computer decision-making but how well computer decision making compares with human decision making.” (p. 123)

While saluting artificial intelligence researchers for their enthusiasm, Moor says “…at this time the results of their labors do not establish that computers will one day match or exceed human levels of ability for most kinds of intellectual activities.” Was Moor right? It depends. First define basically everything.

Moor concedes that Hubert Dreyfus’ argument (in What Computers Can’t Do) that understanding requires a contextual whole has some power, but points to effective expert systems. Overall, he leaves open the question whether computers will ever match or exceed human cognitive abilities.

After talking about how to judge computer decisions, and forcefully raising Joseph Weizenbaum’s objection that computers are alien to human life and thus should not be allowed to make decisions about that life, Moor lays out some guidelines, concluding that we need to be pragmatic about when and how we will let computers make decisions:

“First, what is the nature of the computer’s competency and how has it been demonstrated? Secondly given our basic goals and values why is it better to use a computer decision maker in a particular situation than a human decision maker?”

We are still asking these questions.

Part 3 is on “Privacy and Security.” Four of the seven articles can be considered general introductions to the concept of privacy. Apparently privacy was not as commonly discussed back then.

Part 4, “Computers and Power,” suddenly becomes more socially aware. It includes an excerpt from Weizenbaum’s Computer Power and Human Reason, as well as articles on “Computers and Social Power” and “Peering into the Poverty Gap.”

Part 5 is about the burning issue of the day: “Software as Property.” One entry is the Third Circuit Court of Appeals finding in Apple v. Franklin Computer. Franklin’s Ace computer contained operating system code that had been copied from Apple. The Court knew this because, in addition to the programs being line-by-line copies, Franklin had failed to remove the name of an Apple engineer that the engineer had embedded in the program. Franklin acknowledged the copying but argued that operating system code could not be copyrighted.

That seems so long ago, doesn’t it?


Because this post mentions Joseph Weizenbaum, here’s the beginning of a blog post from 2010:

I just came across a 1985 printout of notes I took when I interviewed Prof. Joseph Weizenbaum in his MIT office for an article that I think never got published. (At least Google and I have no memory of it.) I’ve scanned it in; it’s a horrible dot-matrix printout of an unproofed semi-transcript, with some chicken scratches of my own added. I probably tape recorded the thing and then typed it up, for my own use, on my KayPro.

In it, he talks about AI and ethics in terms much more like those we hear today. He was concerned about its use by the military, especially for autonomous weapons, and raised issues about the possible misuse of visual recognition systems. Weizenbaum was both of his time and way ahead of it.


July 27, 2019

How we’re meaningless now: Projections vs. simulations

Back when I was a lad, we experienced the absurdity of life by watching as ordinary things in the world shed their meanings the way the Nazi who opens the chest in Raiders of the Lost Ark loses his skin: it just melts away.

In this experience of meaninglessness, though, what’s revealed is not some other layer beneath the surface, but the fact that all meaning is just something we make up and project over things that are indifferent to whatever we care to drape over them.

If you don’t happen to have a holy ark handy, you can experience this meaninglessness writ small by saying the word “ketchup” over and over until it becomes not a word but a sound. The magazine “Forbes” also works well for this exercise. Or, if you are a Nobel Prize-winning writer and surprisingly consistently wrong philosopher like Jean-Paul Sartre, perhaps a chestnut tree will reveal itself to you as utterly alien and resistant to the meaning we keep trying to throw onto it.

That was meaninglessness in the 1950s and on. Today we still manage to find our everyday world meaningless, but now we don’t see ourselves projecting meanings outwards but instead imagine ourselves to be in a computer simulation. Why? Because we pretty consistently understand ourselves in terms of our dominant tech, and these days gamers’ video cards render nearly photorealistic scenes, virtual reality is creating vivid spatial illusions for us, and AI is demonstrating the capacity of computers to simulate the hidden logic of real domains.

So now the source of the illusory meaning that we had taken for granted reveals itself not to be us projecting the world out from our skull holes but to be super-programmers who have created our experience of the world without bothering to create an actual world.

That’s a big difference. Projecting meaning only makes sense when there’s a world to project onto. The experience of meaninglessness as simulation takes that world away.

The meaninglessness we experience assigns the absurdity not to the arbitrariness that has led us to see the world one way instead of another, but to an Other whom we cannot see, imagine, or guess at. We envision, perhaps, children outside of our time and space playing a video game (“Sims Cosmos”), or alien computer scientists running a test to see what happens using the rules they’ve specified this time. For a moment we perhaps marvel at how life-like are the images we see as we walk down a street or along a forest path, how completely the programmers have captured the feeling of a spring rain on our head and shoulders but cleverly wasted no cycles simulating any special feeling on the soles of our feet. The whole enterprise – life, the universe, and everything – is wiped out the way a computer screen goes blank when the power is turned off.

In the spirit of the age, the sense of meaninglessness that comes from the sense we’re in a simulation is not despair, for it makes no difference. Everything is different but nothing has changed. The tree still rustles. The spring rain still smells of new earth. It is the essence of the simulation that it is full of meaning. That’s what’s being simulated. It’s all mind without any matter, unlike the old revelation that the world is all matter without meaning. The new meaninglessness is absurd absurdity, not tragic absurdity. We speculate about The Simulation without it costing a thing. The new absurdity is a toy of thought, not a problem for life.

I am not pining for my years suffering from attacks of Old School Anxiety. It was depressing and paralyzing. Our new way of finding the world meaningless is playful and does not turn every joy to ashes. It has its own dangers: it can release one from any sense of responsibility – “Dude, sorry to have killed your cat, but it was just a simulation” – and it can sap some of the sense of genuineness out of one’s emotions. But not for long because, hey, it’s a heck of a realistic simulation.

But to be clear, I reject both attempts to undermine the meaningfulness of our experience. I was drawn to philosophical phenomenology precisely because it was a way to pay attention to the world and our experience, rather than finding ways to diminish them both.

Both types of meaninglessness, however, think they are opening our eyes to the hollowness of life, when in fact they are privileging a moment of deprivation as a revelation of truth, as if the uncertainty and situatedness of meaning is a sign that it is illusory rather than it being the ground of every truth and illusion itself.


July 10, 2019

Learning AI by doing: My new series of posts

The first in a series of six posts about my experiences learning how to train a machine learning system has just been posted here. There’s no code and no math in it. Instead it focuses on the tasks and choices involved in building one of these applications. How do you figure out what sort of data to provide? How do you get that data into the system? How can you tell when the system has been trained? What types of controls do the developers have over the outcomes? What sorts of ways can I go wrong? (Given that the title of the series is “The Adventures of a TensorFlow.js n00b”, the answer to that last question is: every way.)

I was guided through this project by Yannick Assogba, a developer in the machine learning research group — People + AI Research — that I’m embedded in at Google as a writer-in-residence. Yannick is a natural-born teacher, and is preternaturally patient.

The series is quite frank. I make every stupid mistake possible. And for your Schadenfreude, five more posts in this series are on their way…


June 23, 2019

Everyday Chaos coverage, etc.

I just posted a new page at the Everyday Chaos web site. It lists media coverage, talks, and other ways into the book.

Take a look!


April 16, 2019

First chapter of Everyday Chaos on Medium…and more!

Well, actually less. And more. Allow me to explain:

The first half of the first chapter of Everyday Chaos is now available at Medium. (An Editor’s Choice, no less!)

You can also read the first half of the chapter on how our model of models is changing at the Everyday Chaos site (Direct link: pdf).

At that site you’ll also find a fifteen minute video (Direct link: video) in which I attempt to explain why I wrote the book and what it’s about.

Or, you can just skip right to the pre-order button (Direct link: Amazon or IndieBound) :)


March 24, 2019

Automating our hardest things: Machine Learning writes

In 1948, when Claude Shannon was inventing information science [pdf] (and, I’d say, information itself), he took as an explanatory example a simple algorithm for predicting the next element of a sentence. Treating each letter as equiprobable, for example, he came up with sentences such as:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.

If you instead use the average frequency of each letter, you come up with sentences that seem more language-like:

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL.

At least that one has a reasonable number of vowels.

If you then consider the frequency of letters following other letters—U follows a Q far more frequently than X does—you are practically writing nonsense Latin:

ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.

Looking not at pairs of letters but at triplets, Shannon got:

IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.

Then Shannon changed his units from triplets of letters to triplets of words, and got:

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

Pretty good! But still gibberish.
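
Shannon’s word-triplet method amounts to what we would now call an n-gram or Markov-chain model: a lookup table mapping each pair of words to the words observed to follow them. Here is a minimal Python sketch (mine, not Shannon’s, and the corpus filename is hypothetical):

    import random
    from collections import defaultdict

    def train_trigrams(words):
        # Map each adjacent pair of words to the words seen following it;
        # repeated entries stand in for frequency.
        followers = defaultdict(list)
        for a, b, c in zip(words, words[1:], words[2:]):
            followers[(a, b)].append(c)
        return followers

    def generate(followers, length=25):
        # Start from a random pair, then repeatedly sample a successor.
        pair = random.choice(list(followers))
        out = list(pair)
        for _ in range(length):
            options = followers.get(pair)
            if not options:
                break
            nxt = random.choice(options)
            out.append(nxt)
            pair = (pair[1], nxt)
        return " ".join(out)

    words = open("corpus.txt").read().split()  # hypothetical corpus file
    print(generate(train_trigrams(words)))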

Now jump ahead seventy years and try to figure out which pieces of the following story were written by humans and which were generated by a computer:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.

The answer: The first paragraph was written by a human being. The rest was generated by a machine learning system trained on a huge body of text. You can read about it in a fascinating article (pdf of the research paper) by its creators at OpenAI. (Those creators are: Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.)

There are two key differences between this approach and Shannon’s.

First, the new approach analyzed a very large body of documents from the Web. It ingested 45 million pages linked in Reddit comments that got more than three upvotes. After removing duplicates and some other cleanup, the dataset was reduced to 8 million Web pages. That is a lot of pages. Of course the use of Reddit, or any one site, can bias the dataset. But one of the aims was to compare this new, huge dataset to the results from existing sets of text-based data. For that reason, the developers also removed Wikipedia pages from the mix, since so many existing datasets rely on those pages, which would smudge the comparisons.

(By the way, a quick Google search for any page from before December 2018 mentioning both “Jorge Pérez” and “University of La Paz” turned up nothing. The AI is constructing, not copy-pasting.)

The second distinction from Shannon’s method: the developers used machine learning (ML) to create a neural network, rather than relying on a table of frequencies of words in triplet sequences. ML creates a far, far more complex model that can assess the probability of the next word based on the entire context of its prior uses.
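
A small version of the model was publicly released, and sampling from it takes only a few lines. Here’s a sketch using the Hugging Face transformers library (my choice of tooling; the parameters are illustrative, not tuned):

    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "In a shocking finding, scientist discovered a herd of unicorns"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Each new token is sampled conditioned on everything before it --
    # the entire context, not just the previous two words.
    out = model.generate(**inputs, max_length=80, do_sample=True, top_k=40)
    print(tokenizer.decode(out[0], skip_special_tokens=True))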

The results can be astounding. While the developers freely acknowledge that the examples they feature are somewhat cherry-picked, they say:

When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50% of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.

There are obviously things to worry about as this technology advances. For example, fake news could become the Earth’s most abundant resource. For fear of its abuse, its developers are not releasing the full dataset or model weights. Good!

Nevertheless, the possibilities for research are amazing. And, perhaps most important in the long term, one by one the human capabilities that we take as unique and distinctive are being shown to be replicable without an engine powered by a miracle.

That may be a false conclusion. Human speech does not consist simply of the utterances we make but the complex intentional and social systems in which those utterances are more than just flavored wind. But ML intends nothing and appreciates nothing. Nothing matters to ML. Nevertheless, knowing that sufficient silicon can duplicate the human miracle should shake our confidence in our species’ special place in the order of things.

(FWIW, my personal theology says that when human specialness is taken as conferring special privilege, any blow to it is a good thing. When that specialness is taken as placing special obligations on us, then at its very worst it’s a helpful illusion.)



