We’re pretty convinced that the future lies ahead of us. But according to Bernard Knox, the ancient Greeks were not. In Backing into the Future he writes:
…the Greek word opiso, which literally means ‘behind’ or ‘back’, refers not to the past but to the future. The early Greek imagination envisaged the past and the present as in front of us–we can see them. The future, invisible, is behind us. Only a few very wise men can see what is behind them. (p. 11)
G.J. Whitrow in Time in History quotes George Steiner in After Babel to make the same point about the ancient Hebrews:
…the future is preponderantly thought to lie before us, while in Hebrew future events are always expressed as coming after us. (p. 14)
Whitrow doesn’t note that Steiner’s quote (which Steiner puts in quotes) comes from Thorlief Borman’s Hebrew Thought Compared with Greek. Borman writes:
…we Indo-Germanic peoples think of time as a line on which we ourselves stand at a point called now; then we have the future lying before us, and the past stretches out behind us. The [ancient] Israelites use the same expressions ‘before’ and ‘after’ but with opposite meanings. qedham means ‘what is before’ (Ps. 139.5) therefore, ‘remote antiquity’, past. ‘ahar means ‘back’, ‘behind’, and of the time ‘after; aharith means ‘hindermost side’, and then ‘end of an age’, future… (p. 149)
This is bewildering, and not just because Borman’s writing is hard to parse.
He continues on to note that we modern Westerners also sometimes switch the direction of future and past. In particular, when we “appreciate time as the transcendental design of history,” we
think of ourselves as living men who are on a journey from the cradle to the grave and who stand in living association with humanity which is also journeying ceaselessly forward. Then the generation of the past are our progenitors, at least our forebears, who have existed before us because they have gone on before us, and we follow after them. In that case we call the past foretime. According to this mode of thinking, the future generation are our descendants, at least our successors, who therefore come after us. (p. 149. Emphasis in the original.)
Yes, I find this incredibly difficult to wrap my brain around. I think the trick is the ambiguity of “before us.” The future lies before us, but our forebears were also before us.
Borman tries to encapsulate our contradictory ways of thinking about the future as follows: “the future lies before us but comes after us.” The problem in understanding this is that we hear “before us” as “ahead of us.” The word “before” means “ahead” when it comes to space.
Borman’s explanation of the ancient Hebrew way of thinking is related to Knox’s explanation of the Greek idiom:
From the psychological viewpoint it is absurd to say that we have the future before us and the past behind us, as though the future were visible to us and the past occluded. Quite the reverse is true. What our forebears have accomplished lies before us as their completed works; the house we see, the meadows and fields, the cultural and political system are congealed expressions of the deeds of our fathers. The same is true of everything they have done, lived, or suffered; it lies before us as completed facts… The present and the future are, on the contrary, still in the process of coming and becoming. (p. 150)
The nature of becoming is different for the Greeks and Hebrews, so the darkness of the future has different meanings. But both result in the future lying behind us.
Tagged with: future
Date: January 2nd, 2016 dw
At Digital Trends I take another look at a question that is now gaining some currency: How should autonomous cars be programmed when all the choices are bad and someone has to die in order to maximize the number of lives that are saved?
The question gets knottier the more you look at it. In two regards especially:
First, it makes sense to look at this through a utilitarian lens, but when you do, you have to be open to the possibility that it’s morally better to kill a 64-year-old who’s at the end of his productive career (hey, don’t look at me that way!) rather than a young parent, or a promising scientist or musician. We consider age and health when doing triage for organ replacements. Should our cars do it for us when deciding who dies?
Second, the real question is who gets to decide this? The developers at Google who are programming the cars? And suppose the Google software disagrees with the prioritization of the Tesla self-driving cars? Who wins? Or, do we want to have a cross-manufacturer agreement about whose life to sacrifice if someone has to die in an accident? A global agreement about the value of lives?
Yeah, sure. What could go wrong with that? /s
Tagged with: philosophy
Date: October 28th, 2015 dw
Bertrand Russell on knowledge for the Encyclopaedia Britannica:
[A]t first sight it might be thought that knowledge might be defined as belief which is in agreement with the facts. The trouble is that no one knows what a belief is, no one knows what a fact is, and no one knows what sort of agreement between them would make a belief true.
But that wonderful quote is misleading if left there. In fact it introduces Russell’s careful exploration and explanation of those terms. Crucially: “We are thus driven to the view that, if a belief is to be something causally important, it must be defined as a characteristic of behaviour.”
Tagged with: 2b2k
Date: June 1st, 2015 dw
It had to be back in 1993 that I had dual cards at Interleaf. But it was only a couple of days ago that I came across them.
Yes, for a couple of years I was both VP of Strategic Marketing and Chief Philosophical Officer at Interleaf.
The duties of the former were more rigorously defined than those of the latter. It was mainly just a goofy card, but it did reflect a bit of my role there. I got to think about the nature of documents, knowledge, etc., and then write and speak about it.
Goofy for sure. But I think in some small ways it helped the company. Interleaf had amazingly innovative software, decades ahead of its time, in large part because the developers had stripped documents down to their elements, and were thinking in new ways about how they could go back together. Awesome engineers, awesome software.
And I got to try to explain why this was important even beyond what the software enabled you to do.
Should every company have a CPO? I remember writing about that at the end of my time there. If I find it, I’ll post it. But I won’t and so I won’t.
Tagged with: innovation
Date: January 12th, 2015 dw
I recently published a column at KMWorld pointing out some of the benefits of having one’s thoughts share a context with people who build things. Today I came across an article by Jethro Masis titled “Making AI Philosophical Again: On Philip E. Agre’s Legacy.” Jethro points to a 1997 work by the greatly missed Philip Agre that says it so much better:
“…what truly founds computational work is the practitioner’s evolving sense of what can be built and what cannot” (1997, p. 11). The motto of computational practitioners is simple: if you cannot build it, you do not understand it. It must be built and we must accordingly understand the constituting mechanisms underlying its workings. This is why, on Agre’s account, computer scientists “mistrust anything unless they can nail down all four corners of it; they would, by and large, rather get it precise and wrong than vague and right” (Computation and Human Experience, 1997, p. 13).
(I’m pretty sure I read Computation and Human Experience many years ago. Ah, the Great Forgetting of one in his mid-60s.)
Jethro’s article overall attempts to adopt Agre’s point that “The technical and critical modes of research should come together in this newly expanded form of critical technical consciousness,” and to apply this to Heidegger’s idea of Zuhandenheit: how things show themselves to us as useful to our plans and projects; for Heidegger, that is the normal, everyday way most things present themselves to us. This leads Jethro to take us through Agre’s criticisms of AI modeling, its failure to represent context except as vorhanden [pdf], (Heidegger’s term for how things look when they are torn out of the context of our lived purposes), and the need to thoroughly rethink the idea of consciousness as consisting of representations of an external world. Agre wants to work out “on a technical level” how this can apply to AI. Fascinating.
Here’s another bit of brilliance from Agre:
For Agre, this is particularly problematic because “as long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem.” (p. 260 of CHE)
Tagged with: agre, too big to know
Date: December 7th, 2014 dw
Google self-driving cars are presumably programmed to protect their passengers. So, when a traffic situation gets nasty, the car you’re in will take all the defensive actions it can to keep you safe.
But what will robot cars be programmed to do when there’s lots of them on the roads, and they’re networked with one another?
We know what we as individuals would like. My car should take as its Prime Directive: “Prevent my passengers from coming to harm.” But when the cars are networked, their Prime Directive well might be: “Minimize the amount of harm to humans overall.” And such a directive can lead a particular car to sacrifice its humans in order to keep the total carnage down. Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.
It’s easy to imagine cases. For example, a human unexpectedly darts into a busy street. The self-driving cars around it rapidly communicate and algorithmically devise a plan that saves the pedestrian at the price of causing two cars to engage in a Force 1 fender-bender and three cars to endure Force 2 minor collisions…but only if the car I happen to be in intentionally drives itself into a concrete piling, with a 95% chance of killing me. All other plans result in worse outcomes, where “worse” refers to some scale that weighs monetary damages, human injuries, and human deaths.
Or, a broken run-off pipe creates a dangerous pool of water on the highway during a flash storm. The self-driving cars agree that unless my car accelerates and rams into a concrete piling, all other joint action results in a tractor trailer jack-knifing, causing lots of death and destruction. Not to mention The Angelic Children’s Choir school bus that would be in harm’s way. So, the swarm of robotic cars makes the right decision and intentionally kills me.
In short, the networking of robotic cars will change the basic moral principles that guide their behavior. Non-networked cars are presumably programmed to be morally-blind individualists trying to save their passengers without thinking about others, but networked cars will probably be programmed to support some form of utilitarianism that tries to minimize the collective damage. And that’s probably what we’d want. Isn’t it?
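The utilitarian plan selection sketched above can be made concrete. What follows is a toy illustration, not any vendor’s actual algorithm: each candidate joint plan is scored by a weighted sum of expected property damage, minor injuries, and deaths, and the swarm picks the plan with the lowest total. The plan names and the weights are entirely hypothetical assumptions, and, as the next paragraph argues, the weights are exactly where the moral controversy lives.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    """A candidate joint maneuver and its expected outcomes."""
    name: str
    property_damage: float  # expected dollars
    minor_injuries: float   # expected count
    deaths: float           # expected count

# Hypothetical weights. Choosing these numbers IS the moral decision.
WEIGHTS = {"property_damage": 1.0, "minor_injuries": 50_000.0, "deaths": 5_000_000.0}

def harm(plan: Plan) -> float:
    """Total weighted harm of a plan under the assumed value scale."""
    return (WEIGHTS["property_damage"] * plan.property_damage
            + WEIGHTS["minor_injuries"] * plan.minor_injuries
            + WEIGHTS["deaths"] * plan.deaths)

def choose_plan(plans: list[Plan]) -> Plan:
    # The networked Prime Directive: minimize harm to humans overall.
    return min(plans, key=harm)

plans = [
    Plan("swerve, sacrifice one passenger", property_damage=30_000,
         minor_injuries=0, deaths=0.95),
    Plan("brake, hit pedestrian", property_damage=5_000,
         minor_injuries=2, deaths=1.0),
]
print(choose_plan(plans).name)  # the swarm sacrifices the passenger
```

Note that nothing in the code settles whether a 95% chance of one passenger’s death should outrank a pedestrian’s; swap in different weights and the “right” plan flips.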
But one of the problems with utilitarianism is that there turns out to be little agreement about what counts as a value and how much it counts. Is saving a pedestrian more important than saving a passenger? Is it always right to try to preserve human life, no matter how unlikely it is that the action will succeed and no matter how many other injuries it is likely to result in? Should the car act as if its passenger has seat-belted him/herself in because passengers should do so? Should the cars be more willing to sacrifice the geriatric than the young, on the grounds that the young have more of a lifespan to lose? And won’t someone please think about the kids, those cute choir kids?
We’re not good at making these decisions, or even at having rational conversations about them. Usually we don’t have to, or so we tell ourselves. For example, many of the rules that apply to us in public spaces, including roads, optimize for fairness: everyone waits at the same stop lights, and you don’t get to speed unless something is relevantly different about your trip: you are chasing a bad guy or are driving someone who urgently needs medical care.
But when we are better able to control the circumstances, fairness isn’t always the best rule, especially in times of distress. Unfortunately, we don’t have a lot of consensus around the values that would enable us to make joint decisions. We fall back to fairness, or pretend that we can have it all. Or we leave it to experts, as with the rules that determine who gets organ transplants. It turns out we don’t even agree about whether it’s morally right to risk soldiers’ lives to rescue a captured comrade.
Fortunately, we don’t have to make these hard moral decisions. The people programming our robot cars will do it for us.
Imagine a time when the roadways are full of self-driving cars and trucks. There are some good reasons to think that that time is coming, and coming way sooner than we’d imagined.
Imagine that Google remains in the lead, and the bulk of the cars carry their brand. And assume that these cars are in networked communication with one another.
Can we assume that Google will support Networked Road Neutrality, so that all cars are subject to the same rules, and there is no discrimination based on contents, origin, destination, or purpose of the trip?
Or would Google let you pay a premium to take the “fast lane”? (For reasons of network optimization the fast lane probably wouldn’t actually be a designated lane but well might look much more like how frequencies are dynamically assigned in an age of “smart radios.”) We presumably would be ok with letting emergency vehicles go faster than the rest of the swarm, but how about letting the rich go faster by programming the robot cars to give way when a car with its “Move aside!” bit is on?
Let’s say Google supports a strict version of Networked Road Neutrality. But let’s assume that Google won’t be the only player in this field. Suppose Comcast starts to make cars, and programs them to get ahead of the cars that choose to play by the rules. Would Google cars take action to block the Comcast cars from switching lanes to gain a speed advantage — perhaps forming a cordon around them? Would that be legal? Would selling a virtual fast lane on a public roadway be legal in the first place? And who gets to decide? The FCC?
One thing is sure: It’ll be a golden age for lobbyists.
I’ve been meaning to try Medium.com, a magazine-bloggy place that encourages carefully constructed posts by providing an elegant writing environment. It’s hard to believe, but it’s even better looking than Joho the Blog. And, unlike HuffPo, there are precious few stories about side boobs. So I posted a piece there, and might do so again.
The piece is about why we seem to keep insisting that the Internet is panopticon when it clearly is not. So, if you care about panopticons, you might find it interesting. Here’s a bit from the beginning:
A panopticon was Jeremy Bentham’s (1748-1832) idea about how to design a prison or other institution where people need to be watched. It was to be a circular building with a watchers’ station in the middle containing a guard who could see everyone, but who could not himself/herself be seen. Even though everyone couldn’t be seen at the same time, prisoners would never know when they were being watched. That’d keep ’em in line.
There is indeed a point of comparison between a panopticon and the Internet: you generally can’t tell when your public stuff is being seen (although your server logs could tell you). But that’s not even close to what a panopticon is.
…So why did the comparison seem so apt?
Tagged with: philosophy, social media
Date: February 16th, 2014 dw
I’ve posted [pdf] a terrible scan that I made of a talk given by Joseph P. Fell in Sept. 1970. “What is philosophy?” was presented to a general university audience, and in Prof. Fell’s way, it is both clear and deep.
Prof. Fell was my most influential teacher when I was at Bucknell, and, well, ever. He was and is more interested in understanding than in being right, and certainly more than in being perceived as right. This enables him to model a philosophizing that is both rigorous and gentle.
Although I’ve told him more than once how much he has affected my life, he is too humble to believe it. So I’m telling you all instead.
Tagged with: gratitude
Date: February 11th, 2014 dw
The history of Western philosophy usually has a presumed shape: there’s a known series of Great Men (yup, men) who in conversation with their predecessors came up with a coherent set of ideas. You can list them in chronological order, and cluster them into schools of thought with their own internal coherence: the neo-Platonists, the Idealists, etc. Sometimes, the schools and not the philosophers are the primary objects in the sequence, but the topology is basically the same. There are the Big Ideas and the lesser excursions, the major figures and the supporting players.
Of course the details of the canon are always in dispute in every way: who is included, who is major, who belongs in which schools, who influenced whom. A great deal of scholarly work is given over to just such arguments. But there is some truth to this structure itself: philosophers traditionally have been shaped by their tradition, and some have had more influence than others. There are also elements of a feedback loop here: you need to choose which philosophers you’ll teach in philosophy courses, so you act responsibly by first focusing on the majors, and by so doing you confirm for the next generation that the ones you’ve chosen are the majors.
But I wonder if in one or two hundred years philosophers (by which I mean the PT-3000 line of Cogbots™) will mark our era as the end of the line — the end of the linear sequence of philosophers. Rather than a sequence of recognized philosophers in conversation with their past and with one another, we now have a network of ideas being passed around, degraded by noise and enhanced by pluralistic appropriation, but without owners — at least without owners who can hold onto their ideas long enough to be identified with them in some stable form. This happens not simply because networks are chatty. It happens not simply because the transmission of ideas on the Internet occurs through a p2p handoff in which each of the p’s re-expresses the idea. It happens also because the discussion is no longer confined to a handful of extensively trained experts with strict ideas about what is proper in such discussions, and who share a nano-culture that supersedes the values and norms of their broader local cultures.
If philosophy survives as anything more than the history of thought, perhaps we will not be able to outline its grand movements by pointing to a handful of thinkers but will point to the webs through which ideas passed, or, more exactly, the ideas around which webs are formed. Because no idea passes through the Web unchanged, it will be impossible to pretend that there are “ideas-in-themselves” — nothing like, say, Idealism which has a core definition albeit with a history of significant variations. There is no idea that is not incarnate, and no incarnation that is not itself a web of variations in conversation with itself.
I would spell this out for you far more precisely, but I don’t know what I’m talking about, beyond an intuition that the tracks end at the trampled field in which we now live.
Tagged with: 2b2k, too big to know
Date: December 28th, 2013 dw
After yesterday’s Supreme Court decisions, I’m just so happy about the progress we’re making.
It seems like progress to me because of the narrative line I have for the stretch of history I happen to have lived through since my birth in 1950: We keep widening the circle of sympathy, acceptance, and rights so that our social systems more closely approximate the truly relevant distinctions among us. I’ve seen the default position on the rights of African Americans switch, then the default position on the rights of women, and now the default position on sexual “preferences.” I of course know that none of these social changes is complete, but to base a judgment on race, gender, or sexuality now requires special arguments, whereas sixty years ago, those factors were assumed to be obviously relevant to virtually all of life.
According to this narrative, it’s instructive to remember that the Supreme Court overruled state laws banning racial intermarriage only in 1967. That’s amazing to me. When I was 17, outlawing “miscegeny” seemed to some segment of the population to be not just reasonable but required. It was still a debatable issue. Holy cow! How can you remember that and not think that we’re going to struggle to explain to the next generation that in 2013 there were people who actually thought banning same sex marriage was not just defensible but required?
So, I imagine a conversation (and, yes, I know I’m making it up) with someone angry about yesterday’s decisions. Arguing over which differences are relevant is often a productive way to proceed. You say that women’s upper body strength is less than men’s, so women shouldn’t be firefighters, but we can agree that if a woman can pass the strength tests, then she should be hired. Or maybe we argue about how important upper body strength is for that particular role. You say that women are too timid, and I say that we can find that out by hiring some, but at least we agree that firefighters need to be courageous. A lot of our moral arguments about social issues are like that. They are about what are the relevant differences.
But in this case it’s really, really hard. I say that gender is irrelevant to love, and all that matters to a marriage is love. You say same sex marriage is unnatural, that it’s forbidden by God, and that lust is a temptation to be resisted no matter what its object. Behind these ideas (at least in this reconstruction of an imaginary argument) is an assumption that physical differences created by God must entail different potentials which in turn entail different moral obligations. Why else would God have created those physical distinctions? The relevance of the distinctions is etched in stone. Thus the argument over relevant differences can’t get anywhere. We don’t even agree about the characteristics of the role (e.g., upper body strength and courage count for firefighters) so that we can then discuss what differences are relevant to those characteristics. We don’t have enough agreement to be able to disagree fruitfully.
I therefore feel bad for those who see yesterday’s rulings as one more step toward a permissive, depraved society. I wish I could explain why my joy feels based on acceptance, not permissiveness, and not on depravity but on love.
By the way, my spellchecker flags “miscegeny” as a misspelled word, a real sign of progress.