Joho the Blog » [2b2k] Agre on minds and hands
[2b2k] Agre on minds and hands

I recently published a column at KMWorld pointing out some of the benefits of having one’s thoughts share a context with people who build things. Today I came across an article by Jethro Masis titled “Making AI Philosophical Again: On Philip E. Agre’s Legacy.” Jethro points to a 1997 work by the greatly missed Philip Agre that says it so much better:

“…what truly founds computational work is the practitioner’s evolving sense of what can be built and what cannot” (1997, p. 11). The motto of computational practitioners is simple: if you cannot build it, you do not understand it. It must be built and we must accordingly understand the constituting mechanisms underlying its workings. This is why, on Agre’s account, computer scientists “mistrust anything unless they can nail down all four corners of it; they would, by and large, rather get it precise and wrong than vague and right” (Computation and Human Experience, 1997, p. 13).

(I’m pretty sure I read Computation and Human Experience many years ago. Ah, the Great Forgetting of one in his mid-60s.)

Jethro’s article overall takes up Agre’s point that “The technical and critical modes of research should come together in this newly expanded form of critical technical consciousness,” and applies it to Heidegger’s idea of Zuhandenheit: how things show themselves to us as useful to our plans and projects; for Heidegger, that is the normal, everyday way most things present themselves to us. This leads Jethro to take us through Agre’s criticisms of AI modeling, including its failure to represent context except as vorhanden [pdf] (Heidegger’s term for how things look when they are torn out of the context of our lived purposes), and the need to thoroughly rethink the idea of consciousness as consisting of representations of an external world. Agre wants to work out “on a technical level” how this can apply to AI. Fascinating.


Here’s another bit of brilliance from Agre:

For Agre, this is particularly problematic because “as long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem.” (p. 260 of CHE)
