Ethics Archives - Joho the Blog

May 18, 2017

Indistinguishable from prejudice

“Any sufficiently advanced technology is indistinguishable from magic,” Arthur C. Clarke famously said.

It is also the case that any sufficiently advanced technology is indistinguishable from prejudice.

Especially if that technology is machine learning. ML creates algorithms to categorize stuff based upon data sets that we feed it. Say “These million messages are spam, and these million are not,” and ML will take a stab at figuring out the distinguishing characteristics of spam and not-spam, perhaps assigning particular words particular weights as indicators, or finding relationships among particular IP addresses, times of day, lengths of messages, etc.
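
To make the word-weighting idea concrete, here is a minimal sketch of that sort of classifier, using scikit-learn’s logistic regression over a bag of words. The toy messages are invented; the point is only that training leaves each word with a weight pushing toward or away from “spam.”

```python
# A minimal sketch of the classifier described above: train on labeled
# messages, then inspect the per-word weights the model has learned.
# The tiny toy data set is invented, purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = ["win a free prize now", "meeting moved to 3pm",
            "free money click here", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)

model = LogisticRegression()
model.fit(X, labels)

# Each word ends up with a weight: positive pushes toward "spam", negative away.
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:10s} {weight:+.3f}")
```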

Now complicate the data and the request, run this through an artificial neural network, and you have Deep Learning that will come up with models that may be beyond human understanding. Ask DL why it made a particular move in a game of Go or why it recommended increasing police patrols on the corner of Elm and Maple, and it may not be able to give an answer that human brains can comprehend.

We know from experience that machine learning can re-express human biases built into the data we feed it. Cathy O’Neil’s Weapons of Math Destruction contains plenty of evidence of this. We know it can happen not only inadvertently but subtly. With Deep Learning, we can be left entirely uncertain about whether and how this is happening. We can certainly adjust DL so that it gives fairer results when we can tell that it’s going astray, as when it only recommends white men for jobs or produces a freshman class with 1% African Americans. But when the results aren’t that measurable, we can be using results based on bias and not know it. For example, is anyone running the metrics on how many books by people of color Amazon recommends? And if we use DL to evaluate complex tax law changes, can we tell if it’s based on data that reflects racial prejudices?[1]
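
For what it’s worth, the audit the recommendation example calls for is not technically hard; the hard part is agreeing on the baseline to compare against. A hypothetical sketch, assuming we had a log of recommendations labeled by author group (the dataframe and its column name are invented):

```python
# A sketch of the kind of audit the paragraph asks about: given a log of
# recommendations, measure what share went to each group, then compare it
# to a chosen baseline. The dataframe and the "author_group" column are
# hypothetical, purely for illustration.
import pandas as pd

recs = pd.DataFrame({
    "author_group": ["white", "white", "poc", "white", "poc", "white"],
})

share = recs["author_group"].value_counts(normalize=True)
print(share)

# The baseline is the contested part: catalog composition? population? sales?
baseline = pd.Series({"white": 0.5, "poc": 0.5})
print((share - baseline).rename("disparity"))
```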

So this is not to say that we shouldn’t use machine learning or deep learning. That would remove hugely powerful tools. And of course we should and will do everything we can to keep our own prejudices from seeping into our machines’ algorithms. But it does mean that when we are dealing with literally inexplicable results, we may well not be able to tell if those results are based on biases.

In short: Any sufficiently advanced technology is indistinguishable from prejudice.[2]

[1] We may not care, if the result is a law that achieves the social goals we want, including equal and fair treatment of taxpayers regardless of race.

[2] Please note that that does not mean that advanced technology is prejudiced. We just may not be able to tell.


May 15, 2017

[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center’s and MIT Media Lab’s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes. Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.
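
A minimal sketch of the trade-off Doshi-Velez describes, assuming a standard scikit-learn dataset stands in for the real problem: if a shallow decision tree scores about as well as a black-box ensemble, the tree can simply be printed and audited.

```python
# A sketch of Doshi-Velez's scenario: two classifiers with similar accuracy,
# only one of which can be read directly. The data set and tree depth are
# illustrative stand-ins for a real problem.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0)
readable = DecisionTreeClassifier(max_depth=3, random_state=0)

for name, model in [("random forest", black_box), ("depth-3 tree", readable)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:15s} accuracy ~ {acc:.3f}")

# If the scores are close, the shallow tree can simply be printed and audited.
readable.fit(X, y)
print(export_text(readable, feature_names=list(data.feature_names)))
```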

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines and hints that help us solve problems that neither humans nor machines could solve on their own. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”
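
The following is not BayesDB’s actual interface, just a hand-rolled illustration of the style of answer such a partner system gives: a distribution over what might be true rather than a single number. All of the figures are made up.

```python
# Not BayesDB's actual interface -- a minimal illustration of answering
# with a distribution rather than a point estimate. Suppose a nutrition
# program helped 42 of 60 villages in one region; what range of success
# rates is plausible elsewhere? (All figures are invented.)
import numpy as np

successes, trials = 42, 60
rng = np.random.default_rng(0)

# Beta posterior over the success rate, starting from a flat Beta(1, 1) prior.
samples = rng.beta(1 + successes, 1 + trials - successes, size=100_000)

low, high = np.percentile(samples, [5, 95])
print(f"Estimated success rate: {samples.mean():.2f} "
      f"(90% credible interval {low:.2f}-{high:.2f})")
# A partner system reports the interval and leaves cost, context, and
# everything the model doesn't capture to the human.
```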

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves”) where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, MIT grad student with a newly-minted Ph.D. (congrats, Nathan!) and BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. What are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where does that raise questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how to answer them, and how to organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others?

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use training data that represents the real world, and to make sure a representative cross-section of humanity is working on these systems. So here’s my question: we find co-design works well, bringing in the affected populations to talk with the system designers?
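
As a rough sketch of the kind of check this implies (the label file and its columns are hypothetical), the composition of a face data set can be tabulated before anyone trains on it:

```python
# A sketch of the kind of audit Buolamwini's work implies: before training a
# computer-vision model, tabulate the demographic make-up of the training set.
# The label file and its columns are hypothetical.
import pandas as pd

faces = pd.read_csv("face_labels.csv")  # assumed columns: image_id, gender, skin_tone
composition = faces.groupby(["gender", "skin_tone"]).size() / len(faces)
print(composition)

# Flag any group that falls below a chosen floor, e.g. 10% of the data.
floor = 0.10
print("Underrepresented groups:")
print(composition[composition < floor])
```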

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


October 11, 2016

[liveblog] Bas Nieland, Datatrics, on predicting customer behavior

At the PAPIs conference Bas Nieland, CEO and Co-Founder of Datatrics, is talking about how to predict the color of shoes your customer is going to buy. The company tries to “make data science marketeer-proof for marketing teams of all sizes.” It tries to create 360-degree customer profiles by bringing together info from all the data silos.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

They use some machine learning to create these profiles. The profile includes the buying phase, the best time to present choices to a user, and the type of persuasion that will get them to take the desired action. [Yes, this makes me uncomfortable.]

It is structured around a core API that talks to MongoDB and MySQL. They provide “workbenches” that work with the customer’s data systems. They use BigML to operate on this data.

The outcome is a set of models that can be used to make recommendations. They use visualizations so that marketeers can understand them. But the marketeers couldn’t figure out how to use even simplified visualizations. So they created visual decision trees. But still the marketeers couldn’t figure those out. So they turn the model output into simple declarative phrases: which audience to contact, in which channel, with what content, and when. E.g.:

“To increase sales, contact your customers in the buying phase with high engagement through FB with content about jeans on sale on Thursday, around 10 o’clock.”

They predict the increase in sales for each action, and quantify in dollars the size of the opportunity. They also classify responses by customer type and phase.
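
A small sketch of that last step, with invented action records and lift figures: rank candidate actions by predicted sales lift and render the winner as a plain sentence of the kind quoted above.

```python
# A sketch of the "declarative phrase" step described above: rank candidate
# actions by predicted sales lift and render the best one as a plain sentence.
# The action records and lift figures are invented for illustration.
actions = [
    {"segment": "buying phase, high engagement", "channel": "Facebook",
     "content": "jeans on sale", "when": "Thursday around 10:00", "lift_usd": 12400},
    {"segment": "orientation phase", "channel": "email",
     "content": "new arrivals", "when": "Sunday evening", "lift_usd": 3100},
]

best = max(actions, key=lambda a: a["lift_usd"])
print(f"To increase sales, contact customers in the {best['segment']} "
      f"through {best['channel']} with content about {best['content']} "
      f"on {best['when']} (estimated opportunity: ${best['lift_usd']:,}).")
```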

For a hotel chain, they connected 16,000 variables and 21M data points, which BigML reduced to 75 variables and turned into a predictive model that ended up getting the chain more customer conversions. E.g., if the model says someone is in the orientation phase, the Web site shows photos of recommended hotels. If in the decision phase, the user sees persuasive messages, e.g., “18 people have looked at this room today.” The messages themselves are chosen based on the customer’s profile.

Coming up: Chatbot integration. It’s a “real conversation” [with a bot with a photo of an attractive white woman who is supposedly doing the chatting].

Take-aways: Start simple. Make ML very easy to understand. Make it actionable.

Q&A

Me: Is there a way built in for a customer to let your model know that it’s gotten her wrong? E.g., stop sending me pregnancy ads because I lost the baby.

Bas: No.

Me: Is that on the roadmap?

Bas: Yes. But not on a schedule. [I’m not proud of myself for this hostile question. I have been turning into an asshole over the past few years.]


July 13, 2016

Making the place better

I was supposed to give an opening talk at the 9th annual Ethics & Publishing conference put on by George Washington University. Unfortunately, a family emergency kept me from going, so I sent a very homemade video of the presentation that I recorded at my desk with my monitor raised to head height.

The theme of my talk was a change in how we make the place better — “the place” being where we live — in the networked age. It’s part of what I’ve been thinking about as I prepare to write a book about the change in our paradigm of the future. So, these are thoughts-in-progress. And I know I could have stuck the landing better. In any case, here it is.


July 22, 2013

Paid content needs REALLY BIG metadata

HBR.com has just put up a post of mine about some new guidelines for “paid content.” The guidelines come from the PR and marketing communications company Edelman, which creates and places paid content for its clients. (Please read the disclosure that takes up all of paragraph 4 of my post. Short version: Edelman paid for a day of consulting on the guidelines. And, no, that didn’t include me agreeing to write about the guidelines.)

I just read the current issue of Wired (Aug.) and was hit by a particularly good example. This issue has a two-page spread on pp. 34-35 that features an infographic that is stylistically indistinguishable from another infographic on p. 55. The fact that the two-pager is paid content is flagged only by a small Shell logo in the upper left and the words “Wired promotion” in gray text half the height of the “article’s” subhead. It’s just not enough.

Worse, once you figure out that it’s an ad, you start to react to legitimate articles with suspicion. Is the article on the very next page (p. 36) titled “Nerf aims for girls but hits boys too” also paid content? How about the interview with the stars of the new comedy “The World’s End”? And then there’s the article on p. 46 that seems to be nothing but a plug for coins from Kitco. The only reason to think it’s not an ad in disguise is that it mentions a second coin company, Metallium. That’s pretty subtle metadata. Even so, it crossed my mind that maybe the two companies pitched in to pay for the article.

That’s exactly the sort of thought a journal doesn’t want crossing its readers’ minds. The failure to plainly distinguish paid content from unpaid content can subvert the reader’s trust. While I understand the perilous straits of many publications, if they’re going to accept paid content (and that seems like a done deal), then this month’s Wired gives a good illustration of why it’s in their own interest to mark their paid content clearly, using a standardized set of terms, just as the Edelman guidelines suggest.

(And, yes, I am aware of the irony – at best – that my taking money from Edelman raises just the sort of trust issues that I’m decrying in poorly-marked paid content.)


January 26, 2010

[berkman] Julie Cohen on networked selves

Julie Cohen is giving a Berkman lunch on “configuring the networked self.” She’s working on a book that “explores the effects of expanding copyright, pervasive surveillance, and the increasingly opaque design of network architectures in the emerging networked information society.” She’s going to talk about a chapter that “argues that ‘access to knowledge’ is a necessary but insufficient condition for human flourishing, and adds two additional conditions.” (Quotes are from the Berkman site.) [NOTE: Ethan Zuckerman’s far superior livebloggage is here.]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The book is motivated by two observations of the discourse around the Net, law, and policy in the U.S.

1. We make grandiose announcements about designing infrastructures that enable free speech and free markets, but at the end of the day, many of the results are antithetical to the interests of the individuals in that space by limiting what they can do with the materials they encounter.

2. There’s a disconnect between the copyright debate and the privacy debate. The free culture debate is about openness, but that can make it hard to reconcile privacy claims. We discuss these issues within a political framework with assumptions about autonomous choice made by disembodied individuals…a worldview that doesn’t have much to do with reality, she says. It would be better to focus on the information flows among embodied, real people who experience the network as mediated by devices and interfaces. The liberal theory framework doesn’t give us good tools. E.g., it treats individuals as separate from culture.

Julie says lots of people are asking these questions. They just happen not to be in legal studies. One purpose of her book is to unpack postmodern literature to see how situated, embodied users of networks experience technology, and to see how that affects information law and policy. Her normative framework is informed by Martha Nussbaum’s ideas about human flourishing: How can information law and policy help human flourishing by providing access to information and knowledge? Intellectual property laws should take this into account, she says. But, she says, this has been situated within the liberal tradition, which leads to indeterminate results. You lend it content by looking at the postmodern literature that tells us important things about the relationship between self and culture, self and community, etc. By knowing how those relationships work, you can give content to human flourishing, which informs which laws and policies we need.

[I’m having trouble hearing her. She’s given two “political reference points,” but I couldn’t hear either. :(]

[I think one of them is everyday practice.] Everyday practice is not linear, often not animated by overarching strategies.

The third political reference point is play. Play is an important concept, but the discussion of intentional play needs to be expanded to include “the play of circumstances.” Life puts random stuff in your way. That type of play is often the actual source of creativity. We should be seeking to foster play in our information policy; it is a structural condition of human flourishing.

Access to knowledge isn’t enough to supply a base for human flourishing because it doesn’t get you everything you need, e.g., right to re-use works. We also need operational transparency: We need to know how these digital architectures work. We need to know how the collected data will be used. And we also need semantic discontinuity: Formal incompleteness in legal and technical infrastructures. E.g., wrt copyright to reuse works you shouldn’t have to invoke a legal defense such as fair use; there should be space left over for play. E.g., in privacy, rigid arbitrary rules against transacting and aggregating personal data so that there is space left over for people to play with identity. E.g., in architecture, question the norm that seamless interoperability makes life better, because it means that data about you moves around without your having the ability to stop it. E.g., interoperability among social networks changes the nature of social networks. We need some discontinuity for flourishing.

Q: People need the freedom to have multiple personas. We need more open territory.
A: Yes. The common pushback is that if you restrict the flow of info in any way, we’ll slide down the slippery slope of censorship. But that’s not true and it gets in the way of the conversation we need to have.

Q: [charlie nesson] How do you create this space of playfulness when it comes to copyright?
A: In part, look at the copyright law of 1909. It’s reviled by copyright holders, but there’s lots of good in it. It set up categories that determined if you could get the rights, and the rights were much more narrowly defined. We should define rights to reproduction and adaptation that give certain significant rights to copyright holders, but that quite clearly and unambiguously reserve lots to users, with reference to the possible market effect that is used by courts to defend the owners’ rights.
Q: [charlie] But you run up against the pocketbooks of the copyright holders…
A: Yes, there’s a limit to what a scholar can do. Getting there is no mean feat, but it begins with a discourse about the value of play and that everyone benefits from it, not just crazy youtube posters, even the content creators.

JPalfrey asks CNesson what he thinks. Charlie says that having to assert fair use, to fend off lawsuits, is wrong. Fair use ought to be the presumption.

Q: [csandvig] Fascinating. The literature that lawyers denigrate as pomo makes me think of a book by an anthropologist and sociologist called “The Internet: An Ethnographic Approach.” It’s about embodied, local, enculturated understanding of the Net. Their book was about Trinidad, arguing that if you’re in Trinidad, the Net is one thing, and if you’re not, it’s another thing. And, they say, we need many of these cultural understandings. But it hasn’t happened. Can you say more about the lit you referred to?
A: Within mainstream US legal and policy scholarship, there’s no recognition of this. They’re focused on overcoming the digital divide. That’s fine, but it would be better not to have a broadband policy that thinks it’s the same in all cultures. [Note: I’m paraphrasing, as I am throughout this post. Just a reminder.]

A: [I missed salil’s question; sorry] We could build a system of randomized incompatibilities, but there’s value in having them emerge otherwise than by design, and there’s value to not fixing some of the ones that exist in the world. The challenge is how to design gaps.
Q: The gaps you have in mind are not ones that can be designed the way a computer scientist might…
A: Yes. Open source forks, but that’s at war with the idea that everything should be able to speak to everything else. It’d

Q: [me] I used to be a technodeterminist; I recognize the profound importance of cultural understandings/experience. So, the Internet is different in Trinidad than in Beijing or Cambridge. Nevertheless, I find myself thinking that some experiences of the Net are important and cross-cultural, e.g., that ideas are linked, there’s lots to see, people disagree, people like me can publish, etc.
A: You can say general things about the Net if you go to a high enough level of abstraction. You’re only a technodeterminist if you think there’s only way to get there, only one set of rules that get you there. Is that what you mean?
Q: Not quite. I’m asking if there’s a residue of important characteristics of the experience of the Net that cuts across all cultures. “Ideas are linked” or “I can contribute” may be abstractions, but they’re also important and can be culturally transformative, so the lessons we learn from the Net aren’t unactionably general.
A: Liberalism creeps back in. It’s a crappy descriptive tool, but a good aspirational one. The free spread of a corpus of existing knowledge…imagine a universal digital library with open access. That would be a universal good. I’m not saying I have a neutral prescription upon which any vision of human flourishing would work. I’m looking for critical subjectivity.

A: Network space changes based on what networks can do. 200 yrs ago, you wouldn’t have said Paris is closer to NY than Williamsburg VA, but today you might because lots of people go NY – Paris.

Q: [doc] You use geographic metaphors. Much of the understanding of the Net is based on plumbing metaphors.
A: The privacy issues make it clear it’s a geography, not a plumbing system. [Except for leaks :) ]

[Missed a couple of questions]

A: Any good educator will have opinions about how certain things are best reserved for closed environments, e.g., in-class discussions, what sorts of drafts to share with which other people, etc. There’s a value to questioning the assumption that everything ought to be open and shared.

Q: [wseltzer] Why is it so clear that the Net isn’t plumbing? We make bulges in the pipe as spaces where we can be more private…
A: I suppose it depends on your POV. If you run a data aggregation biz, it will look like that. But if you ask someone who owns such a biz how s/he feels about privacy in her/his own life, that person will have opinions at odds with his/her professional existence.

Q: [jpalfrey] You’re saying that much of what we take as apple pie is in conflict, but that if we had the right toolset, we could make progress…
A: There isn’t a single unifying framework that can make it all make sense. You need the discontinuities to manage that. Disputes arise, but we have a way to muddle along. One of my favorite books: How We Became Posthuman. She writes about the Macy conferences out of which cybernetics came, including the idea that info is info no matter how it’s embodied. I think that’s wrong. We’re analog in important ways.


August 17, 2009

meta-meta-spam

I received this today:

FOR IMMEDIATE RELEASE

TWITTER ATTEMPTS TO SHUT DOWN USOCIAL

Twitter has recently moved to shut down web promotions company uSocial.net, by claiming the advertising agency is “spamming”.

According to uSocial CEO Leon Hill, Twitter recently sent accusations via a brand-management organisation that uSocial are using Twitter for spam purposes. Despite this, uSocial say the claims are false.

“The definition of spam is using electronic messaging to send unsolicited communication and as we don’t use Twitter for this, the claims are false.” Said Hill.

uSocial believe the claims are due to a service the company sells which allows clients to purchase packages of followers to increase their viewership on the site.

“The people at Twitter who are sending these claims are just flailing around trying to look for any excuse they can, though it’s going to take much more than this if they want us to pack up shop.” Said Hill. “We’re not going away that easily.”

The service in question can be viewed on uSocial’s site by going to http://usocial.net/twitter_marketing.

Based upon this press release, uSocial is correct: It is not a spammer. Rather, it enables spammers. And then they spammed me to tell me about it.

uSocial also helps companies game sites such as Digg.com by purchasing votes. uSocial is thus explicitly a force out to corrupt human trust. So, screw ’em.

(The uSocial site is down at the moment. Check this post by Eric Lander to read about the site.)



March 4, 2009

Three from the Boston Globe: Conflict, amusement, and maddening missing of the point

Part One

The big page two story of today’s Boston Globe is an article by Lori Montgomery of the Washington Post. It begins:

Two of the administration’s top economic officials defended President Obama’s $3.6 trillion budget plan yesterday, arguing that the proposal would finance a historic investment in critical economic priorities while restoring balance to a tax code tipped in favor of the wealthy.

The first nine paragraphs are about the fierce conflict. Only in paragraph ten do we get the most important news:

Despite those and a few other contentious issues, Obama’s budget request was generally well-received yesterday, as lawmakers took their first opportunity to comment on an agenda that many have described as the most ambitious and transformative since the dawn of the Reagan era. Democratic budget leaders said they are likely to endorse most of Obama’s proposals sometime in April in the form of a nonbinding budget resolution.

If it bleeds, it leads. Sigh.

 


Part Two

Because I generally disagree on policy with the Globe’s conservative columnist Jeff Jacoby, I try to give him the benefit of the doubt in his reasoning. But this morning, he’s driven me officially nuts. Well done, sir!

Jacoby devotes his column to the philosopher Peter Singer. First, he lauds Singer for “his commitment to charity.” But the bulk of the column is given over to Singer’s controversial — too mild a word — argument for permitting infanticide under careful legal conditions.

Actually, I’ve misspoken. Jacoby doesn’t mention Singer’s argument. He only gives the conclusion. Jacoby’s own conclusion is that Singer’s stance shows what happens “if morality is merely a matter of opinion and preference — if there is no overarching ethical code that supersedes any value system we can contrive for ourselves…”

In fact, Singer’s most objectionable conclusions come from rigorously applying standards of morality against opinion and preference. For example, if we say it’s our superior intelligence that gives us certain rights, then we should be willing to accord those rights to other creatures that turn out to have the same intelligence, even in preference to humans who lack that intelligence by accidents of birth or personal history. Or, as Singer says in the conclusion of the brief column in Foreign Policy that Jacoby cites:

…a new ethic will … recognize that the concept of a person is distinct from that of a member of the species Homo sapiens, and that it is personhood, not species membership, that is most significant in determining when it is wrong to end a life. We will understand that even if the life of a human organism begins at conception, the life of a person—that is, at a minimum, a being with some level of self-awareness—does not begin so early. And we will respect the right of autonomous, competent people to choose when to live and when to die.

There are lots of ways to argue with Singer’s conclusions. (I found him so convincing on animal rights in the 1970s that I’ve been a vegetarian ever since. I find him less convincing on infanticide.) But saying that Singer is merely expressing personal opinion is not to argue with him at all. In short, Jacoby is merely expressing his own opinions and preferences, and thus is guilty of exactly what he criticizes Singer for.

 


Part Three

The headline over the continuation of an article from the front page says:

Amid Maine’s extremes, teams of dogs and humans vie

It’s mildly disappointing to learn that the article is about dog sled racing in Maine, rather than about a dogs vs. humans sports event.
