“The arc of the moral universe is long but bends towards justice.”
That saying was of course made famous by Martin Luther King, who put it between quotation marks to indicate that it was not original with him. Had King’s own arc not been cut short by a white racist with a gun, it might have been MLK, at the age of 86, who addressed us on Friday in Charleston. As it is, our President did him proud.
The always awesome Quote Investigator tells us that the quotation in fact came from Theodore Parker in 1857; Parker was a Unitarian minister, Transcendentalist, and abolitionist. The entire sermon (“Of Justice and the Conscience,” pp. 66-102) is worth reading, but here’s the relevant snippet:
Look at the facts of the world. You see a continual and progressive triumph of the right. I do not pretend to understand the moral universe, the arc is a long one, my eye reaches but little ways. I cannot calculate the curve and complete the figure by the experience of sight; I can divine it by conscience. But from what I see I am sure it bends towards justice.
The sermon points out that the wicked often suffer in ways that the outside world can’t perceive. But Parker is realistic enough to recognize that “we do not see that justice is always done on earth,” (p. 89) and he proceeds to remind his congregation of some of the overwhelming evils present in the world, including: “Three million slaves earn the enjoyment of Americans, who curse them in the name of Christ.” (p. 90) Neither does Parker let us rest in the comfortable thought that justice reigns in the next world. We need a “conscious development of the moral element in man, and a corresponding expansion of justice in human affairs…” (p. 90).
But, is Parker right? Does the arc of the moral universe bend toward justice, or towards injustice, or toward neither, or toward entropy? Why shouldn’t we think we construct that arc out of our wishes and happy thoughts?
Parker’s support for his claim is not what sight shows him but what is visible to his conscience. But what did conscience mean to him?
In 1850 Parker delivered a sermon called “The Function and Place of Conscience in Relation to the Laws.” He begins by explaining the term: “It is the function of conscience to discover to men the moral law of God.” He puts it on a level with our other faculties, part of the reaction against the reduction of consciousness to what comes through our sense organs. Transcendentalists were influenced by Kant who argued that sense perception wouldn’t add up to experience if we didn’t come into the world with a pre-existing ability to organize perceptions in time, space, causality, etc. In addition, affirms Parker, we have a faculty — conscience — that lets us understand things in terms of their moral qualities. That faculty is as fallible as the others, but it is “adequate to the purpose God meant for it”; otherwise God would have failed to outfit us adequately for the task He has set us, which would be on Him.
For Parker, conscience (knowledge of what is right) is at least as important as intellect (knowledge of the world). In “Of Justice and Conscience,” he bemoans that “We have statistical societies for interest” but “no moral societies for justice.” (p. 92) “There is no college for conscience.” (p. 93). (Statistics as a concept and a field had entered British culture at the beginning of the 19th century. By the 1850s it had become a dominant way of evaluating legislative remedies there. See Too Big to Know for a discussion of this. Yeah, I just product placed my own book.)
The faculty of justice (conscience) is at least as important as the faculty of intellect, for conscience drives action. In “The Function and Place of Conscience,” he writes:
Nothing can absolve me from this duty, neither the fact that it is uncomfortable or unpopular, nor that it conflicts with my desires, my passions, my immediate interests, and my plans in life. Such is the place of conscience amongst the other faculties of my nature.
Indeed, the heart of this sermon is the injunction to rise to the demands inherent in our being children of God, and to reject any conflicting demands by government, business, or society.
Much of this sermon could be quoted by those who refuse as businesspeople or government employees to serve same-sex couples, although Parker is talking about returning fugitive slaves to their owners, not decorating cakes:
This statute [the Fugitive Slave Act] is not to be laid to the charge of the slaveholders of the South alone; its most effective supporters are northern men; Boston is more to be blamed for it than Charleston or Savannah, for nearly a thousand persons of this city and neighborhood, most of them men of influence through money if by no other means, addressed a letter of thanks to the distinguished man who had volunteered to support that infamous bill telling him that he had “convinced the understanding and touched the conscience of the nation.”
That “distinguished man” was, shockingly, Daniel Webster. Webster had been an eloquent and fierce abolitionist. But in 1850, he argued just as fiercely in support of the Fugitive Slave Act in order to preserve the union. Parker wrote an impassioned account of this in his 1853 Life of Daniel Webster.
Parker’s sermon exhorts his congregants, in a passage well worth reading, to resist the law. “[I]t is the natural duty of citizens to rescue every fugitive slave from the hands of the marshal who essays to return him to bondage; to do it peaceably if they can, forcibly if they must, but by all means to do it.”
So, conscience trumps the other faculties by bringing us to act on behalf of justice. But the moral law that conscience lets us perceive is different from the laws of nature. Parker writes in “Of Justice” that there is no gap between the natural laws and their fulfillment. This is so much the case that we learn those laws by observing nature’s regularities. But the moral law “unlike attraction [i.e., gravity] … does not work free from all hindrance.” (p. 69). The moral law requires fulfillment by humans. We are imperfect, so there is a gap between the moral law and the realm over which it rules.
Parker continues: Even if we could learn the law of right through observation and experience — just as we learn the laws of nature — those laws would feel arbitrary. In any case, because history is still unfolding, we can’t learn our moral lessons from it, for our justice has not yet been actualized in history. (p. 73) Man has “an ideal of nature which shames his actual of history.” (p. 73) So, “God has given us a moral faculty, the conscience…” (p. 72) to see what we have not yet made real.
Intellect is not enough. Only conscience can see the universe’s incomplete moral arc.
So, does the arc of the moral universe bend toward justice?
Our intellect sets off warning flares. History is too complex to have a shape. The shape we perceive of course looks like progress because we always think that what we think is the right thing to think, so we think we’re thinking better than did those who came before us. And, my intellect says quite correctly, yeah, sure you’d think that, Mr. Privileged White Guy.
At the moment of despair — when even in Boston citizens are signing letters in favor of returning people back to their enslavement — “The arc of the moral universe is long but bends toward justice” brings hope. No, it says, you’re not going to get what you deserve, but your children might, or their children after them. It is a hard, hard hope.
But is it true?
I will postulate what Theodore Parker did not: Neither our intellect nor conscience can know what the universe’s arc will actually be. Even thinking it has any shape requires an act of imagination that bears an unfathomable cost of forgetting.
But, I believe that Parker was right that conscience — our sense of right and wrong — informs our intellect. Hope is to moral perception as light is to vision: You cannot perceive the world within its moral space without believing there is a point to action. And we can’t perceive outside of that moral space, for it is within the moral space that the universe and what we do in it matters. Even science — crucial science — is pursued as a moral activity, as something that matters beyond itself. If nothing you do can have any effect on what matters beyond your own interests, then moral behavior is pointless and self-indulgent. Hope is moral action’s light.
So, of course I don’t know if the arc of the moral universe bends towards justice. But if there is a moral universe, modest hopes bend its history.
Tagged with: 2b2k
Date: June 27th, 2015 dw
Atlantic.com has just posted an article of mine that re-examines the “Argument from Architecture” that has been at the bottom of much of what I’ve written over the past twenty years. That argument says, roughly, that the Internet’s architecture embodies particular values that are inevitably transmitted to its users. (Yes, the article discusses what “inevitably” means in this context.) But has the Net been so paved by Facebook, apps, commercialism, etc., that we don’t experience that architecture any more?
I remember a 1971 National Lampoon article that gave away the endings of a hundred books and movies. Wikipedia and others think that article might have been the first use of the term “spoiler.” But “SPOILER ALERT” has only become a common signpost because of what the Internet has done to time, and in particular, to simultaneity.
In the old days of one-to-many, broadcast media, the events that shaped culture happened once and usually happened on schedule. So, it would make sense to bring up what was on the news broadcast last night, or to chuckle over that hilarious scene in this week’s Beverly Hillbillies. Now we watch on our own schedules, having common moments mainly around sports events and breaking news — games or tragedies. Perhaps this has contributed to our culture’s addiction to extremes.
We need SPOILER ALERT signposts because we watch when we want but the Net is so huge and unconstrained and cheap that it operates like a push medium — for the opposite reason that traditional broadcast was a push medium. Trying to avoid finding out what happened on Game of Thrones this week is like trying to avoid getting run over when crossing a highway, except that even seeing the approaching cars counts as getting run over.
Game of Thrones spoiler
This change in temporality shows up in the phrase “real time.” We only distinguish one type of time as “real” because it is no longer the default. The default is asynchronous because that’s how most of our communications occur online. Real time increasingly feels like a deprivation. It requires you to drop what you’re doing to participate or you’re going to lose out. And that feels sub-optimal, or even unfair.
Without the requirement of simultaneity, we are more free to follow our interests. And that turns out to fragment our culture. Or liberate it. Or enrich it. Or all of the above.
Tagged with: culture
Date: June 19th, 2015 dw
We used to have an obligation to at least try to be sympathetic. Now that’s ratcheted up to having to be empathetic. We should lower the bar.
Sympathy means feeling bad for someone while empathy means actually feeling the same feelings.
If that’s what those words still mean, empathy is more than we usually need and more than we can often accomplish.
You’re hungry? I can be sympathetic about your hunger, but I can’t feel your hunger.
There are child soldiers? I can perhaps understand some of the situation that lets such a thing happen, and I can be shocked and sad that it does, but I don’t think I can feel what those children feel.
You have been sexually assaulted? I can be deeply sympathetic and supportive, but I don’t think I can actually feel what you felt or even what you are feeling now. For example, if you are now overwhelmingly anxious about being in some ordinary situations — walking to your car, entering an unlit room — you will have all my sympathy and support, but I will not experience the trembling you feel in your knees or the tension expressed by your shallow breaths.
Empathy is hard. It often takes the magic of an artist to get us to feel what a character is feeling. (Q: If I am feeling what a non-existent character is feeling, is that even empathy?)
Empathy is hard. Empathy is rare. Empathy is often exactly what is not required: If you are afraid, you probably don’t need another frightened person. You need someone sympathetic who can help you deal with your fears.
Sympathy is getting a bad rap, as if it means just patting someone on the shoulder and saying “There there.” That’s not what sympathy ever was. Sympathy means you are affected by another person’s feelings, not that you feel those very feelings. If I am sad and worried that you are so depressed, I am affected by your feelings, but I am not myself depressed.
Empathy can be a pure mirror of someone else’s feelings. But sympathy requires more than just feeling. If I see you crying, to be sympathetic I have to know something about you and especially about what has caused you to cry. Are you crying because you’ve been hurt? Because you broke up with someone you loved? Because you just saw a sad movie? Because you didn’t get into a school or onto a particular team? Because you’re sympathizing with someone else? In order to sympathize more fully, I need to know.
That is, in sympathy you turn not just to feelings but to the world. You see what the sufferer sees from her/his point of view, or as close to that point of view as you can. What you see is not a matter of indifference to you. You are moved by what is moving the other. How you are moved is different in type and extent — you are not fearful in the face of the other’s fears, you are not as wracked by grief as is the mourner — but you are moved.
Sympathy lets the world matter to you as it matters to someone else. In sympathy, the mattering culminates from heart, mind, and caring about the other. It is perhaps the best thing we do.
Most importantly, through sympathy are we moved to helpful action, whether that is indeed a pat on the shoulder or requires a far larger commitment. Sympathy does that to us. For us.
Empathy can get in the way of the supportive action that sympathy demands. If a friend is heartbroken because a relationship ended, you may bring to bear a different view of the world and hold out other feelings as possibilities. Hope perhaps. A different perspective. A pint of Ben and Jerry’s. The gap in feelings between you and your friend enables the sympathetic action your friend needs.
If our aim is to act in the world to try to reduce pain, fear, and sadness, then asking for empathy is often to ask for too much. Sympathy more than suffices.
Tagged with: sympathy
Date: January 31st, 2015 dw
Tagged with: humor
Date: January 27th, 2015 dw
It had to be back in 1993 that I had dual cards at Interleaf. But it was only a couple of days ago that I came across them.
Yes, for a couple of years I was both VP of Strategic Marketing and Chief Philosophical Officer at Interleaf.
The duties of the former were more rigorously defined than those of the latter. It was mainly just a goofy card, but it did reflect a bit of my role there. I got to think about the nature of documents, knowledge, etc., and then write and speak about it.
Goofy for sure. But I think in some small ways it helped the company. Interleaf had amazingly innovative software, decades ahead of its time, in large part because the developers had stripped documents down to their elements, and were thinking in new ways about how they could go back together. Awesome engineers, awesome software.
And I got to try to explain why this was important even beyond what the software enabled you to do.
Should every company have a CPO? I remember writing about that at the end of my time there. If I find it, I’ll post it. But I won’t and so I won’t.
Tagged with: innovation
Date: January 12th, 2015 dw
I recently published a column at KMWorld pointing out some of the benefits of having one’s thoughts share a context with people who build things. Today I came across an article by Jethro Masis titled “Making AI Philosophical Again: On Philip E. Agre’s Legacy.” Jethro points to a 1997 work by the greatly missed Philip Agre that says it so much better:
…what truly founds computational work is the practitioner’s evolving sense of what can be built and what cannot” (1997, p. 11). The motto of computational practitioners is simple: if you cannot build it, you do not understand it. It must be built and we must accordingly understand the constituting mechanisms underlying its workings. This is why, on Agre’s account, computer scientists “mistrust anything unless they can nail down all four corners of it; they would, by and large, rather get it precise and wrong than vague and right” (Computation and Human Experience, 1997, p. 13).
(I’m pretty sure I read Computation and Human Experience many years ago. Ah, the Great Forgetting of one in his mid-60s.)
Jethro’s article overall attempts to adopt Agre’s point that “The technical and critical modes of research should come together in this newly expanded form of critical technical consciousness,” and to apply this to Heidegger’s idea of Zuhandenheit: how things show themselves to us as useful to our plans and projects; for Heidegger, that is the normal, everyday way most things present themselves to us. This leads Jethro to take us through Agre’s criticisms of AI modeling, its failure to represent context except as vorhanden [pdf], (Heidegger’s term for how things look when they are torn out of the context of our lived purposes), and the need to thoroughly rethink the idea of consciousness as consisting of representations of an external world. Agre wants to work out “on a technical level” how this can apply to AI. Fascinating.
Here’s another bit of brilliance from Agre:
For Agre, this is particularly problematic because “as long as an underlying metaphor system goes unrecognized, all manifestations of trouble in technical work will be interpreted as technical difficulties and not as symptoms of a deeper, substantive problem.” (p. 260 of CHE)
Tagged with: agre
Date: December 7th, 2014 dw
Here’s the opening of my latest column at KMWorld:
A couple of weeks ago, I joined other former students of Joseph P. Fell at Bucknell University for a weekend honoring him. Although he is a philosophy professor, the takeaway for many of us was a reminder that while hands are useless without minds to guide them, minds need hands more deeply than we usually think.
Philosophy is not the only discipline that needs this reminder. Almost anyone—it’s important to maintain the exceptions—who is trying to understand a topic would do well by holding something in her hands, or, better, building something with them…
A year ago, Harold Feld posted one of the most powerful ways of framing our excessive zeal for copyright that I have ever read. I was welling up even before he brought Aaron Swartz into the context.
Harold’s post is within a standard Jewish genre: the d’var Torah, an explanation of a point in the portion of the Torah being read that week. As is expected of the genre, he draws upon a long, self-reflective history of interpretation. I urge you to read it because of the light it sheds on our culture of copyright, but it’s also worth noticing the form of the discussion.
The content: In the Jewish tradition, Sodom’s sin wasn’t sexual but rather an excessive possessiveness leading to a fanatical unwillingness to share. Harold cites from a collection of traditional commentary, The Ethics of Our Fathers:
“There are four types of moral character. One who says: ‘what is mine is mine and what is yours is yours.’ This is an average person. Some say it is the Way of Sodom. The one who says: ‘what is mine is yours and what is yours is mine,’ is ignorant of the world. ‘What is mine is yours and what is yours is yours’ is the righteous. ‘What is mine is mine and what is yours is mine’ is the wicked.”
In a PowerPoint, it’d be a 2×2 chart. Harold’s point will be that the “what is mine is mine and what is yours is yours” of the average person becomes wicked when enforced without compassion or flexibility. Harold evokes the traditional Jewish examples of Sodom’s wickedness and compares them to what’s become our dominant “average” assumptions about how copyright ought to work.
I am purposefully not explaining any further. Read Harold’s piece.
The form: I find the space of explanation within which this d’var Torah — and most others that I’ve heard — operates to be fascinating. At the heart of Harold’s essay is a text accepted by believers as having been given by God, yet the explanation is accomplished by reference to a history of human interpretations that disagree with one another, with guidance by a set of values (e.g., sharing is good) that persevere in a community thanks to that community’s insistent adherence to its tradition. The result is that an agnostic atheist like me (I’m only pretty sure there is no God) can find truth and wisdom in the interpretation of a text I take as being ungrounded in a divine act.
But forget all that. Read Harold’s post, bubbelah.
Google self-driving cars are presumably programmed to protect their passengers. So, when a traffic situation gets nasty, the car you’re in will take all the defensive actions it can to keep you safe.
But what will robot cars be programmed to do when there’s lots of them on the roads, and they’re networked with one another?
We know what we as individuals would like. My car should take as its Prime Directive: “Prevent my passengers from coming to harm.” But when the cars are networked, their Prime Directive well might be: “Minimize the amount of harm to humans overall.” And such a directive can lead a particular car to sacrifice its humans in order to keep the total carnage down. Asimov’s Three Laws of Robotics don’t provide enough guidance when the robots are in constant and instantaneous contact and have fragile human beings inside of them.
It’s easy to imagine cases. For example, a human unexpectedly darts into a busy street. The self-driving cars around it rapidly communicate and algorithmically devise a plan that saves the pedestrian at the price of causing two cars to engage in a Force 1 fender-bender and three cars to endure Force 2 minor collisions…but only if the car I happen to be in intentionally drives itself into a concrete piling, with a 95% chance of killing me. All other plans result in worse outcomes, where “worse” refers to some scale that weighs monetary damages, human injuries, and human deaths.
Or, a broken run-off pipe creates a dangerous pool of water on the highway during a flash storm. The self-driving cars agree that unless my car accelerates and rams into a concrete piling, all other joint action results in a tractor trailer jackknifing, causing lots of death and destruction. Not to mention The Angelic Children’s Choir school bus that would be in harm’s way. So, the swarm of robotic cars makes the right decision and intentionally kills me.
In short, the networking of robotic cars will change the basic moral principles that guide their behavior. Non-networked cars are presumably programmed to be morally-blind individualists trying to save their passengers without thinking about others, but networked cars will probably be programmed to support some form of utilitarianism that tries to minimize the collective damage. And that’s probably what we’d want. Isn’t it?
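The utilitarian rule the networked cars would follow can be sketched as a simple cost comparison. Here is a minimal, purely hypothetical sketch: the plans, their outcome estimates, and above all the weights are invented for illustration, and the weights are exactly the contested value judgments discussed above.

```python
# Hypothetical sketch: a networked fleet picks the joint plan that
# minimizes a weighted "total harm" score. Everything here (plan names,
# outcome numbers, weights) is invented for illustration.

from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    expected_deaths: float    # probability-weighted fatalities
    expected_injuries: float  # probability-weighted human injuries
    property_damage: float    # dollars

def harm(plan: Plan, w_death: float = 1e7, w_injury: float = 1e5) -> float:
    """Collapse a plan's outcomes into one scalar harm score.

    The weights encode the contested values: how many dollars a
    statistical death or injury is "worth". Change the weights and
    the fleet's choice of who gets sacrificed can change too.
    """
    return (plan.expected_deaths * w_death
            + plan.expected_injuries * w_injury
            + plan.property_damage)

def choose(plans: list[Plan]) -> Plan:
    """The fleet's utilitarian rule: pick the minimum-harm joint plan."""
    return min(plans, key=harm)

plans = [
    Plan("swerve into piling (sacrifice one passenger)", 0.95, 0.0, 80_000),
    Plan("brake in formation (hit the pedestrian)", 1.0, 2.0, 30_000),
    Plan("scatter (multi-car pileup)", 0.2, 8.0, 500_000),
]

print(choose(plans).name)  # with these weights: the pileup wins
```

Raising `w_injury` from 1e5 to 1e6 flips the answer to sacrificing the passenger, which is the whole point: the “right” decision lives entirely in the weights, and the weights are where the disagreement lies.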
But one of the problems with utilitarianism is that there turns out to be little agreement about what counts as a value and how much it counts. Is saving a pedestrian more important than saving a passenger? Is it always right to try to preserve human life, no matter how unlikely it is that the action will succeed and no matter how many other injuries it is likely to result in? Should the car act as if its passenger has seat-belted him/herself in because passengers should do so? Should the cars be more willing to sacrifice the geriatric than the young, on the grounds that the young have more of a lifespan to lose? And won’t someone please think about the kids — those cute choir kids?
We’re not good at making these decisions, or even at having rational conversations about them. Usually we don’t have to, or so we tell ourselves. For example, many of the rules that apply to us in public spaces, including roads, optimize for fairness: everyone waits at the same stop lights, and you don’t get to speed unless something is relevantly different about your trip: you are chasing a bad guy or are driving someone who urgently needs medical care.
But when we are better able to control the circumstances, fairness isn’t always the best rule, especially in times of distress. Unfortunately, we don’t have a lot of consensus around the values that would enable us to make joint decisions. We fall back to fairness, or pretend that we can have it all. Or we leave it to experts, as with the rules that determine who gets organ transplants. It turns out we don’t even agree about whether it’s morally right to risk soldiers’ lives to rescue a captured comrade.
Fortunately, we don’t have to make these hard moral decisions. The people programming our robot cars will do it for us.
Imagine a time when the roadways are full of self-driving cars and trucks. There are some good reasons to think that that time is coming, and coming way sooner than we’d imagined.
Imagine that Google remains in the lead, and the bulk of the cars carry their brand. And assume that these cars are in networked communication with one another.
Can we assume that Google will support Networked Road Neutrality, so that all cars are subject to the same rules, and there is no discrimination based on contents, origin, destination, or purpose of the trip?
Or would Google let you pay a premium to take the “fast lane”? (For reasons of network optimization the fast lane probably wouldn’t actually be a designated lane but well might look much more like how frequencies are dynamically assigned in an age of “smart radios.”) We presumably would be ok with letting emergency vehicles go faster than the rest of the swarm, but how about letting the rich go faster by programming the robot cars to give way when a car with its “Move aside!” bit is on?
Let’s say Google supports a strict version of Networked Road Neutrality. But let’s assume that Google won’t be the only player in this field. Suppose Comcast starts to make cars, and programs them to get ahead of the cars that choose to play by the rules. Would Google cars take action to block the Comcast cars from switching lanes to gain a speed advantage — perhaps forming a cordon around them? Would that be legal? Would selling a virtual fast lane on a public roadway be legal in the first place? And who gets to decide? The FCC?
One thing is sure: It’ll be a golden age for lobbyists.