March 10, 2023
Curiosity
How interesting the world is depends on how well it’s written.
February 22, 2023
Topic: The Supreme Court is hearing a case about whether section 230 exempts Google from responsibility for what it algorithmically recommends.
A thought from one of my favorite philosophers, Richard Rorty:
[Photo of Richard Rorty: Rortiana, CC BY-SA 4.0, via Wikimedia Commons]
“…revolutionary achievements in the arts, in the sciences, and in moral and political thought typically occur when somebody realizes that two or more of our vocabularies are interfering with each other, and proceeds to invent a new vocabulary to replace both… The gradual trial and error creation of a new, third, vocabulary… is not a discovery about how old vocabularies fit together… Such creations are not the result of successfully fitting together pieces of a puzzle. They are not discoveries of a reality behind the appearances, of an undistorted view of the whole picture with which to replace myopic views of its parts. The proper analogy is with the invention of new tools to take the place of old tools.” — Richard Rorty, Contingency, Irony, and Solidarity, 1989, p. 12
We’ve had to invent a new vocabulary to talk about these things: Is a website like a place or a magazine? Are text messages like phone calls or telegrams? Are personal bloggers journalists [a question from 2004]?
The issue isn’t which Procrustean bed we want to force these new entries into, but what sort of world we want to live in with them. It’s about values, not principles or definitions. IMUOETC (“In my uncertain opinion expressed too confidently”)
Procrustes’ bed. And this is better than being too tall for it :(
A point all of this misses: “Yo, we’re talking about law here, which by its nature requires us to bring entities and events under established categories.”
January 14, 2023
I typed my doctoral dissertation in 1978 on my last electric typewriter, a sturdy IBM Model B.
My soon-to-be wife was writing hers out longhand, which I was then typing up.
Then one day we took a chapter to a local typist who was using a Xerox word processor, which was priced too high for grad students or for most offices. When I saw her correcting text, and cutting and pasting, my eyes bulged out like a Tex Avery wolf’s.
As soon as KayPro IIs were available, I bought one from my cousin, who had recently opened a computer store.
The moment I received it and turned it on, I got curious about how the characters made it to the screen, and became a writer about tech. In fact, I became a frequent contributor to the Pro-Files KayPro magazine, writing ‘splainers about the details of how these contraptions worked.
I typed my wife’s dissertation on it — which was my justification for buying it — and the day when its power really hit her was when I used WordStar’s block move command to instantly swap sections 1 and 4 as her thesis advisor had suggested; she had unthinkingly assumed it meant I’d be retyping the entire chapter.
People noticed the deeper implications early on. E.g., Michael Heim, a fellow philosophy prof (which I had been, too), wrote a prescient book, Electric Language, in the early 1990s (I think) about the metaphysical implications of typing into an utterly malleable medium. David Levy wrote Scrolling Forward about the nature of documents in the Age of the PC. People like Frode Hegland are still writing about this and innovating in the text manipulation space.
A small observation I used to like to make around 1990 about the transformation that had already snuck into our culture: Before word processors, a document was a one-of-a-kind piece of writing like a passport, a deed, or an historic map used by Napoleon; a document was tied to its material embodiment. Then the word processing folks needed a way to talk about anything you could write using bits, thus severing “documents” from their embodiment. Everything became a document as everything became a copy.
In any case, word processing profoundly changed not only how I write but how I think, since I think by writing. Having a fluid medium lowers the cost of trying out ideas, and it also makes it easy for me to change the structure of my thoughts. Since thinking is generally about connecting ideas, and since those connections almost always assume a structure that changes their meaning — not just a linear scroll of one-liners — word processing is a crucial piece of “scaffolding” (in Clark and Chalmers’ sense) for me, and I suspect for most people.
In fact, I’ve come to recognize I am not a writer so much as a re-writer of my own words.
December 12, 2022
To say that facts are social constructions doesn’t mean everything put forward as a fact is a fact. Nor does it mean that facts don’t express truths or that facts are not to be trusted. Nor does it mean that there’s some unconstructed fact behind facts. Social constructionists don’t want to leave us in a world in which it’s ok to say “No, it’s not raining” in the middle of a storm or to claim “Water boiled at 40C for me this morning under normal circumstances.”
Rather the critique, as I understand it, is that the fact-based disciplines we choose to pursue, the roles they play, who gets to participate, the forms of discourse and of proof, the equipment invented and the ways the materials are handled (the late Bruno Latour was brilliant on this point, among others), the commitment to an objective and consistent methodology (see Paul Feyerabend), all are the result of history, culture, economics, and social forces. Science itself is a social construct (as per Thomas Kuhn’s The Structure of Scientific Revolutions [me on that book]). (Added bonus: Here’s Richard Rorty’s review of Ian Hacking’s excellent book, The Social Construction of What?)
Facts as facts pretty clearly seem (to me) to be social constructions. As such, they have a history…
Facts as we understand them became a thing in western culture when Francis Bacon early in the 17th century started explicitly using them to ground theories, which was a different way of constructing scientific truths; prior to this, science was built on deductions, not facts. (Pardon my generalizations.)
You can see the movement from deductive truth to fact-based empirical evidence across the many editions of Thomas Malthus‘ 1798 book, An Essay on the Principle of Population, which predicted global famine based on a mathematical formula but then became filled with facts and research from around the world. It went from a slim deductive volume to six volumes thick with facts and stats. Social construction added pounds to Malthus’ work.
This happened because statistics arrived in Britain, by way of Germany, in the early 19th century. Statistical facts became important at that time not only because they enabled the inductive grounding of theories (as per Bacon and Malthus), but because they could rebut people’s personal interests. In particular, they became an important way to break the sort of class-based assumptions that made it seem to be ok to clean rich people’s chimneys by shoving little boys up them. Against those assumptions were posed facts that showed it was in fact bad for the boys.
Compiling “blue books” of fact-based research became a standard part of the legislative process in England in the first half of the 19th century. By mid-century, the use of facts was so prevalent that in 1854 Dickens bemoaned society’s reliance on them in Hard Times on the grounds that facts kill imagination…yet another opposite to facts, and another social construction.
As the 19th century ended, we got our first fact-finding commissions that were established in order to peacefully resolve international disputes. (Narrator: They rarely did.) This was again using facts as the boulder that stubs your toe of self-interest (please forget I ever wrote that phrase), but now those interests were cross-national and not as easily resolvable as when you pit the interests of lace-cuffed lords against the interests of children crawling through upper-class chimneys.
In the following century we got (i.e., we constructed) an idea of science and human knowledge that focused on assembling facts as if they were bricks out of which one could build a firm foundation. This led to some moaning (in a famous 1963 letter to the editor) that science was turning into a mere “brickyard” of unassembled facts.
I’m not a historian, and this is the best I can recall from a rabbit hole of specific curiosity I fell into about 10 years ago when writing Too Big to Know. But the point is that the idea of the social construction of science and facts doesn’t mean that all facts — including “alternative facts” — are equal. Water really does boil at 100C. Rather it’s the idea, role, use, importance, and control of facts that’s socially constructed.
December 11, 2022
“Quine … had his 1927 Remington portable modified to handle symbolic logic. Among the characters that he sacrificed was the question mark. ‘Well, you see, I deal in certainties,’ he explained.” [1]
This is from an article by Richard Polt about Heidegger’s philosophical argument against typewriters, written in light of the discovery of Heidegger’s own typewriter; apparently his assistant used it to transcribe Heidegger’s handwritten text.
Polt brings a modern sensibility to his Heidegger scholarship. The article itself uses Heideggerian jargon to describe elements of the story of the discovery and authentication of the typewriter; he is poking gentle fun at that jargon. At least I’m pretty sure he is; humor is a rare element in Heideggerian scholarship. But I’m on a mailing list with Richard and over the years have found him to be open-minded and kind, as well as being a top-notch scholar of Heidegger.
Polt is also a certified typewriter nerd.
[1] Polt’s article footnotes this as follows: Willard Van Orman Quine profile in Beacon Hill Paper, May 15, 1996, p. 11, quoted at http://www.wvquine.org/wvq-newspaper.html. See Mel Andrews, “Quine’s Remington Portable no. 2,” ETCetera: Journal of the Early Typewriter Collectors’ Association 131 (Winter 2020/2021), 19–20.
December 4, 2022
First there was the person who built a computer inside of Minecraft and programmed it to play Minecraft.
Now Frederic Besse has built a usable Linux terminal in ChatGPT — usable in that it can perform system operations on a virtual computer that’s also been invoked in (by? with?) ChatGPT. For example, you can tell the terminal to create a file and where to store it in a file system that did not exist until you asked, and that under most definitions of “exist” doesn’t exist anywhere.
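For concreteness, here is the kind of exchange involved, sketched as real shell commands (the paths and file contents are my own invention, not Besse’s). The remarkable part is that ChatGPT, asked to pretend to be a terminal, produces plausible output for commands like these against a file system that exists only in the conversation:

```shell
# The sort of state-changing operations you could ask ChatGPT's
# imaginary terminal to perform -- shown here against a real shell.
mkdir -p /tmp/gpt-demo/notes                       # create a directory tree
echo "a file that now exists" > /tmp/gpt-demo/notes/hello.txt
cat /tmp/gpt-demo/notes/hello.txt                  # read the file back
ls /tmp/gpt-demo/notes                             # list the directory
```

In the real shell these commands change actual disk state; in ChatGPT’s pretend terminal the “state” persists only as a consistent story the model keeps telling across the conversation.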
I feel like I need to get a bigger mind in order for it to be sufficiently blown.
(PS: I could do without the casual anthropomorphizing in the GPT article.)
May 31, 2022
Ludwig Wittgenstein said “If a lion could talk, we couldn’t understand him.” (Philosophical Investigations, Part 2)
But lions already speak, and we do understand them: When one roars at us, we generally know exactly what it means.
If a lion could say more than that, presumably (= I dunno) it would be about the biological needs we share with all living creatures for evolutionary reasons: hunger, threat, opportunity, reproduction, and — only in higher species — “Hey, look at that, not me!” (= sociality).
But that rests on a pyramid version of language in which the foundation consists of a vocabulary born of biological necessity. That well might be the case (= I dunno), but by now our language’s evolutionary vocabulary is no longer bound to its evolutionary value.
If a lion could speak, it would speak about what matters to it, for that seems (= I dunno) essential to language. If so, we might be able to understand it … or at least understand it better than what clouds, rust, and the surface of a pond would say if they could speak.
I dunno.
January 31, 2022
Notes for a post:
Plato said (Phaedrus, 265e) that we should “carve nature at its joints,” which assumes of course that nature has joints, i.e., that it comes divided in natural and (for the Greeks) rational ways. (“Rational” here means something like in ways that we can discover, and that divide up the things neatly, without overlap.)
For Aristotle, at least in the natural world those joints consist of the categories that make a thing what it is, and that make things knowable as those things.
To know a thing was to see how it’s different from other things, particularly (as per Aristotle) from other things that they share important similarities with: humans are the rational animals because we share essential properties with other animals, but are different from them in our rationality.
The overall order of the universe was knowable and formed a hierarchy (e.g. beings -> animals -> vertebrates -> upright -> rational) that makes the differences essential. It’s also quite efficient since anything clustered under a concept, no matter how many levels down, inherits the properties of the higher level concepts.
We no longer believe that there is a single, perfect, economical order of things. We want to be able to categorize under many categories, to draw as many similarities and differences as we need for our current project. We see this in our general preference for search over browsing through hierarchies, in the continued use of tags as a way of cutting across categories, and in the rise of knowledge graphs and high-dimensional language models that connect everything every way they can, even if the connections are very weak.
Why do we care about weak connections? 1. Because they are still connections. 2. The Internet’s economy of abundance has disinclined us to throw out any information. 3. Our new technologies (esp. machine learning) can make hay (and sometimes errors) out of rich combinations of connections including those that are weak.
If Plato believed that to understand the world we need to divide it properly — carve it at its joints — knowledge graphs and machine learning assume that knowledge consists of joining things as many different ways as we can.
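The contrast can be sketched in code (a toy illustration; the class and tag names are my own, not anyone’s actual ontology). A strict hierarchy gives each thing exactly one place, with properties inherited down the chain; tags let one thing sit under as many categories as the current project needs, so even weak shared connections become visible:

```python
# An Aristotelian-style hierarchy: each concept inherits the properties
# of the concepts above it, and each thing sits in exactly one place.
class Being: pass
class Animal(Being):
    mortal = True
class Vertebrate(Animal):
    has_spine = True
class Human(Vertebrate):
    rational = True

# The efficiency the post mentions: a Human is mortal because Animal is,
# without restating it at every level of the hierarchy.
assert Human.mortal and Human.has_spine and Human.rational

# Tags, by contrast, cut across any single hierarchy: one item can be
# categorized as many different ways as we like.
tags = {
    "dolphin": {"mammal", "swimmer", "social", "echolocator"},
    "bat":     {"mammal", "flyer", "social", "echolocator"},
    "drone":   {"flyer", "machine"},
}

def connected(a, b):
    """Two items are connected if they share even one tag -- a weak
    connection, but still a connection."""
    return bool(tags[a] & tags[b])

print(connected("bat", "drone"))      # share "flyer" -> True
print(connected("dolphin", "drone"))  # no shared tag -> False
```

No tree could group a bat with a drone, but the shared “flyer” tag joins them; a knowledge graph generalizes this by recording many such typed links at once.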
November 15, 2021
Aeon.co has posted an article I worked on for a couple of years. It’s only 2,200 words, but they were hard words to find because the ideas were, and are, hard for me. I have little sense of whether I got either the words or the ideas right.
The article argues, roughly, that the sorts of generalizations that machine learning models embody are very different from the sort of generalizations the West has taken as the truths that matter. ML’s generalizations often are tied to far more specific configurations of data and thus are often not understandable by us, and often cannot be applied to particular cases except by running the ML model.
This may be leading us to locate the really real not in the eternal (as the West has traditionally done) but at least as much in the fleeting patterns of dust that result from everything affecting everything else all the time and everywhere.
Three notes:
2. Aeon for some reason deleted a crucial footnote that said that my views do not necessarily represent the views of Google, while keeping the fact that I am a part-time, temporary writer-in-residence there. To be clear: My views do not necessarily represent Google’s.
3. My original title for it was “Dust Rising”, but then it became “Trains, Car Wrecks, and Machine Learning’s Ontology”, which I still like, although I admit that “ontology” may not be as big a draw as I think it is.
July 11, 2021
Although I am an agnostic, I used to think of myself as a functional atheist: I saw no compelling reason to believe in God (and thus am an agnostic), but I lived my life as if there is certainly no God.
Now I see that I got that backwards. I firmly remain an agnostic, but it turns out there are ways in which I have always experienced the world as if it were a divine creation. I don’t believe my experience is actually evidence either way, but I find it interesting that my agnostic belief has long masked my belief-like experience…
— Continued at Psychology Today