
June 12, 2021

The Shopping Cart Imperative

A long-time friend and, I’ve learned, a former grocery worker, today on a mailing list posted a brief rant calling people who do not return their grocery carts to the cart corral “moral cretins.” He made exceptions for people parked in handicapped parking spots, but not those who say they cannot leave their children unattended in a car for ten seconds. “Model good behavior,” he enjoins the latter folks.

While I always return my cart (honestly, I do), I felt weirdly compelled to defend those who willfully disobey the cart injunction, even though I understand where my friend is coming from: not returning your cart is evidence of a belief that you can waltz through life without thinking about the consequences of your actions, expecting other, "lesser" humans to clean up after you.

Here’s what I wrote:

I want to rise in a weak defense of those who do not return their carts.

While some certainly are moral cretins and self-centered ass-hats, others may believe that the presence of cart wranglers in the parking lot is evidence that the store is providing a cart-return service. "That's their job," these people may be thinking.

Why then does the store give over some parking spaces to cart collection areas?  They are there for the convenience of shoppers who are taking carts. It’s up to the cart wranglers to make sure that area is always stocked.

But why then does the store have signs that say, “Please return your carts”? Obviously the “please” means that the store is asking you to volunteer to do their job for them.

Who would interpret a sign that way? Ok, probably moral cretins and self-centered ass-hats.

I’m just being a wiseguy in that last sentence. Not only do I know you non-returners are fine people who have good reasons for your behavior, I even understand that there are probably more important things to talk about.


Categories: ethics, humor, philosophy Tagged with: ethics • morality • philosophy • shopping carts Date: June 12th, 2021 dw


January 11, 2021

Parler and the failure of moral frameworks

This probably is not about what you think it is. It doesn’t take a moral stand about Parler or about its being chased off the major platforms and, in effect, off the Internet. Yet the title of this post is accurate: it’s about why moral frameworks don’t help us solve problems like those posed by Parler.

Traditional moral frameworks

The two major philosophical frameworks we use in the West to assess moral situations are consequentialism (mainly utilitarianism) and deontology. Utilitarianism assesses the morality of a choice based on the cumulative amount of happiness it will bring across the entire population (or how much it diminishes unhappiness). Deontology applies moral principles to cases, such as “It’s wrong to steal.”

Each has its advantages, but I don’t see how to apply them in a way that settles the issues about Parler. Or about most other things.

For example, from almost its very beginning (J.S. Mill, but not Bentham, as far as I remember), utilitarians have had to institute a hierarchy of pleasures in order to meet the objection that if we adopt that framework, we should morally prefer policies that promote drunkenness and sex over funding free Mozart concerts. (Just a tad of class bias showing there :) Worse, in a global space, should we declare a small culture's happiness of less worth than that of a culture with a larger population? Indeed, how do we apply utilitarianism to a single culture's access to, for example, pornography?

That last question raises a different, and common, objection to utilitarianism: suppose overall happiness is increased by ignoring the rights of others? It's hard for utilitarianism to avoid the conclusion that slavery is ok so long as the people held as slaves are greatly outnumbered by those who benefit from them. The other standard example is a contrivance in which a town's overall happiness is greatly increased by allowing a person known by the authorities to be innocent to nevertheless be hanged. That's because it turns out that most of us have a sense of deontological principles: we don't care if slavery or hanging innocent people results in an overall happier society, because it's wrong on principle.

But deontology has its own issues with being applied. The closest Immanuel Kant, the most prominent deontologist, gets to putting some particular value into his Categorical Imperative is to phrase it in terms of treating people as ends, not means, i.e., valuing autonomy. Kant argues that autonomy is central because without it we can't be moral creatures. But it's not obvious that autonomy is the highest value for humans, especially in difficult moral situations, nor is it clear how and when to limit people's autonomy. (Many of us believe we also can't be fully moral without empathy, but that's a different argument.)

The relatively new (thirty-year-old) ethics of care avoids many of the issues with both of these moral frameworks by losing primary interest in general principles or generalized happiness, and instead thinking about morality in terms of relationships with distinct and particular individuals to whom we owe some responsibility of care; it takes as its fundamental and grounding moral behavior the caring of a mother for a child. (Yes, it recognizes that fathers also care for children.) It begins with the particular, not an attempt at the general.

Applying the frameworks to Parler

So, how do any of these help us with the question of de-platforming Parler?

Utilitarians might argue that the existence of Parler as an amplifier of hate threatens to bring down the overall happiness of the world. Of course, the right-wing extremists on Parler would argue exactly the opposite, and would point to the detrimental consequences of giving the monopoly platforms this power.  I don’t see how either side convinces the other on this basis.

Deontologists might argue that the de-platforming violates the rights of the users and readers of Parler. Other deontologists might talk about the rights threatened by the consequences of the growth of fascism enabled by Parler. Or they might simply make the utilitarian argument. Again, I don't see how these frameworks lead to convincing the other side.

While there has been work done on figuring out how to apply the ethics of care to policy, it generally doesn't make big claims about settling this sort of issue. But it may be that moral frameworks should not be measured by how effectively they convert opponents, but rather by how well they help us come to our own moral beliefs about issues. In that case, I still don't see how they help much.

If forced to have an opinion about Parler (and I don't think I have one worth stating), I'd probably find a way to believe that the harmful consequences of Parler outweigh hindering the human right of the participants to hang out with people they want to talk with and to say whatever they want. My point is definitely not that you ought to believe the same thing, because I'm very uncomfortable with it myself. My point is that moral frameworks don't help us much.

And, finally, as I posted recently, I think moral questions are getting harder and harder now that we are ever more aware of more people, more opinions, and the complex dynamic networks of people, beliefs, behavior, and policies.

* * *

My old friend AKMA — so learned, wise, and kind that you could plotz — takes me to task in a very thought-provoking way. I reply in the comments.


Categories: echo chambers, ethics, everyday chaos, media, philosophy, policy, politics, social media Tagged with: ethics • free speech • morality • parler • philosophy • platforms Date: January 11th, 2021 dw


March 28, 2020

Computer Ethics 1985

I was going through a shelf of books I haven’t visited in a couple of decades and found a book I used in 1986 when I taught Introduction to Computer Science in my last year as a philosophy professor. (It’s a long story.) Ethical Issues in the Use of Computers was a handy anthology, edited by Deborah G. Johnson and John W. Snapper (Wadsworth, 1985).

So what were the ethical issues posed by digital tech back then?

The first obvious point is that back then ethics were ethics: codes of conduct promulgated by professional societies. So, Part I consists of eight essays on “Codes of Conduct for the Computer Professions.” All but two of the articles present the codes for various computing associations. The two stray sheep are “The Quest for a Code of Professional Ethics: An Intellectual and Moral Confusion” (John Ladd) and “What Should Professional Societies do About Ethics?” (Fay H. Sawyier).

Part 2 covers “Issues of Responsibility”, with most of the articles concerning themselves with liability issues. The last article, by James Moor, ventures wider, asking “Are There Decisions Computers Should Not Make?” About midway through, he writes:

“Therefore, the issue is not whether there are some limitations to computer decision-making but how well computer decision making compares with human decision making.” (p. 123)

While saluting artificial intelligence researchers for their enthusiasm, Moor says “…at this time the results of their labors do not establish that computers will one day match or exceed human levels of ability for most kinds of intellectual activities.” Was Moor right? It depends. First define basically everything.

Moor concedes that Hubert Dreyfus’ argument (What Computers Still Can’t Do) that understanding requires a contextual whole has some power, but points to effective expert systems. Overall, he leaves open the question whether computers will ever match or exceed human cognitive abilities.

After talking about how to judge computer decisions, and forcefully raising Joseph Weizenbaum’s objection that computers are alien to human life and thus should not be allowed to make decisions about that life, Moor lays out some guidelines, concluding that we need to be pragmatic about when and how we will let computers make decisions:

"First, what is the nature of the computer's competency and how has it been demonstrated? Secondly, given our basic goals and values, why is it better to use a computer decision maker in a particular situation than a human decision maker?"

We are still asking these questions.

Part 3 is on "Privacy and Security." Four of the seven articles can be considered general introductions to the concept of privacy. Apparently privacy was not as commonly discussed back then.

Part 4, “Computers and Power,” suddenly becomes more socially aware. It includes an excerpt from Weizenbaum’s Computer Power and Human Reason, as well as articles on “Computers and Social Power” and “Peering into the Poverty Gap.”

Part 5 is about the burning issue of the day: “Software as Property.” One entry is the Third Circuit Court of Appeals finding in Apple vs. Franklin Computer. Franklin’s Ace computer contained operating system code that had been copied from Apple. The Court knew this because in addition to the programs being line-by-line copies, Franklin failed to remove the name of one of the Apple engineers that the engineer had embedded in the program. Franklin acknowledged the copying but argued that operating system code could not be copyrighted.

That seems so long ago, doesn’t it?


Because this post mentions Joseph Weizenbaum, here’s the beginning of a blog post from 2010:

I just came across a 1985 printout of notes I took when I interviewed Prof. Joseph Weizenbaum in his MIT office for an article that I think never got published. (At least Google and I have no memory of it.) I’ve scanned it in; it’s a horrible dot-matrix printout of an unproofed semi-transcript, with some chicken scratches of my own added. I probably tape recorded the thing and then typed it up, for my own use, on my KayPro.

In it, he talks about AI and ethics in terms much more like those we hear today. He was concerned about its use by the military especially for autonomous weapons, and raised issues about the possible misuse of visual recognition systems. Weizenbaum was both of his time and way ahead of it.


Categories: ai, copyright, infohistory, philosophy Tagged with: ai • copyright • ethics • history • philosophy Date: March 28th, 2020 dw


December 4, 2017

Workshop: Trustworthy Algorithmic Decision-Making

I'm at a two-day inter-disciplinary workshop on "Trustworthy Algorithmic Decision-Making" put on by the National Science Foundation and Michigan State University. The 2-page whitepapers from the participants are online. (Here's mine.) I may do some live-blogging of the workshops.

Goals:

– Key problems and critical questions?

– What to tell policy-makers and others about the impact of these systems?

– Product approaches?

– What ideas, people, training, infrastructure are needed for these approaches?

Excellent diversity of backgrounds: CS, policy, law, library science, a philosopher, more. Good diversity in gender and race. As the least qualified person here, I’m greatly looking forward to the conversations.


Categories: ai, liveblog, philosophy Tagged with: 2b2k • ai • ethics • machine learning • philosophy Date: December 4th, 2017 dw


November 5, 2017

[liveblog] Stefania Druga on how kids can help teach us about AI

Stefania Druga, a graduate student in the Personal Robots research group at the MIT Media Lab, is leading a discussion focusing on how children can help us to better understand and utilize AI. She’s going to talk about some past and future research projects.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

She shows two applications of AI developed for kids. The first is Cayla, a robotic doll. "It got hacked three days after it was released in Germany" and was banned there. The second is Aristotle, which was supposed to be an Alexa for kids. A few weeks ago Mattel decided not to release it, after parents worried about their kids' privacy signed petitions.

Stefania got interested in what research was being done in this field. She found a couple of papers. One (Lovato & Piper 2015) showed that children mirror how they interact with Siri, e.g., how angry or assertive they are. Another (McReynolds et al., 2017 [pdf]) found that how children and parents interact with smart toys revealed how little parents and children know about how much info is being collected by these toys, e.g., Hello Barbie's privacy issues. It also looked at how parents and children were being incentivized to share info on social media.

Stefania's group did a pilot study, having parents and 27 kids interact with various intelligent agents, including Alexa, Julie Chatbot, Tina the T.Rex, and Google Home. Four or five children would interact with the agent at a time, with an adult moderator. Their parents were in the room.

Stefania shows a video about this project. After the kids interacted with the agent, they asked if it was smarter than the child, if it’s a friend, if it has feelings. Children anthropomorphize AIs in playful ways. Most of the older children thought the agents were more intelligent than they were, while the younger children weren’t sure. Two conclusions: Makers of these devices should pay more attention to how children interact with them, and we need more research.

What did the children think? They thought the agents were friendly and truthful. They thought two Alexa devices were separate individuals. The older children thought about these agents differently than the younger ones did. This may be because of how children start thinking about smartness as they progress through school. A question: do they think about artificial intelligence as being the same as human intelligence?

After playing with the agents, they would probe the nature of the device. “They are trying to place the ontology of the device.”

Also, they treated the devices as gender ambiguous.

The media glommed onto this pilot study. E.g., MIT Technology Review: “Growing Up with Alexa.” Or NYTimes: “Co-Parenting with Alexa.” Wired: Understanding Generation Alpha. From these articles, it seems that people are really polarized about the wisdom of introducing children to these devices.

Is this good for kids? “It’s complicated,” Stefania says. The real question is: How can children and parents leverage intelligent agents for learning, or for other good ends?

Her group did another study, this summer, that had 30 pairs of children and parents navigate a robot to solve a maze. They'd see the maze from the perspective of the robot. They also saw a video of a real mouse navigating a maze, and of another robot solving the maze by itself. Does changing the agent (themselves, mouse, robot) change their idea of intelligence? Kids and parents both did the study. Most of the kids mirrored their parents' choices. They even mirrored the words the parents used…and the value placed on those words.

What next? Her group wants to know how to use these devices for learning. They build extensions using Scratch, including for an open source project called Poppy. (She shows a very cool video of the robot playing, collaborating, painting, etc.) Kids can program it easily. Ultimately, she hopes that this might help kids see that they have agency, and that while the robot is smart at some things, people are smart at other things.

Q&A

Q: You said you also worked with the elderly. What are the chief differences?

A: Seniors and kids have a lot in common. They were especially interested in the fact that these agents can call their families. (We did this on tablets, and some of the elderly can’t use them because their skin is too dry.)

Q: Did learning that they can program the robots change their perspective on how smart the robots are?

A: The kids who got the bot through the maze did not show a change in their perspective. When they become fluent in customizing it and understanding how it computes, it might. It matters a lot to have the parents involved in flipping that paradigm.

Q: How were the parents involved in your pilot study?

A: It varied widely by parent. It was especially important to have the parents there for the younger kids because the device sometimes wouldn’t understand the question, or what sorts of things the child could ask it about.

Q: Did you look at how the participants reacted to robots that have strong or weak characteristics of humans or animals?

A: We’ve looked at whether it’s an embodied intelligent agent or not, but not at that yet. One of our colleagues is looking at questions of empathy.

Q: [me] Do the adults ask their children to thank Siri or other such agents?

A: No.

Q: [me] That suggests they’re tacitly shaping them to think that these devices are outside of our social norms?

Q: In my household, the “thank you” extinguishes itself: you do it a couple of times, and then you give it up.

A: This indicates that these systems right now are designed in a very transactional way. You have to say the wake word with every single phrase. But these devices will advance rapidly. Right now it's unnatural conversation. But with chatbots kids have a more natural conversation, and will say thank you. And kids want to teach it things, e.g., their names or favorite color. When Alexa doesn't know what the answer is, the natural thing is to tell it, but that doesn't work.

Q: Do the kids think these are friends?

A: There’s a real question around animism. Is it ok for a device to be designed to create a relationship with, say, a senior person and to convince them to take their pills? My answer is that people tend to anthropomorphize everything. Over time, kids will figure out the limitations of these tools.

Q: Kids don’t have genders for the devices? The speaking ones all have female voices. The doll is clearly a female.

A: Kids were interchanging genders because the devices are in a fluid space in the spectrum of genders. “They’re open to the fact that it’s an entirely new entity.”

Q: When you were talking about kids wanting to teach the devices things, I was thinking maybe that’s because they want the robot to know them. My question: Can you say more about what you observed with kids who had intelligent agents at home as opposed to those who do not?

A: Half already had a device at home. I’m running a workshop in Saudi Arabia with kids there. I’m very curious to see the differences. Also in Europe. We did one in Colombia among kids who had never seen an Alexa before and who wondered where the woman was. They thought there must be a phone inside. They all said good bye at the end.

Q: If the wifi goes down, does the device’s sudden stupidness concern the children? Do they think it died?

A: I haven’t tried that.

[me] Sounds like that would need to go through an IRB.

Q: I think your work is really foundational for people who want to design for kids.


Categories: ai, education, ethics, liveblog Tagged with: ai • children • education • ethics • robots Date: November 5th, 2017 dw


May 18, 2017

Indistinguishable from prejudice

“Any sufficiently advanced technology is indistinguishable from magic,” said Arthur C. Clarke famously.

It is also the case that any sufficiently advanced technology is indistinguishable from prejudice.

Especially if that technology is machine learning. ML creates algorithms to categorize stuff based upon data sets that we feed it. Say "These million messages are spam, and these million are not," and ML will take a stab at figuring out the distinguishing characteristics of spam and not-spam, perhaps assigning particular words particular weights as indicators, or finding relationships between particular IP addresses, times of day, lengths of messages, etc.
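For the code-minded, here's a minimal sketch of the kind of training that paragraph describes, using scikit-learn's naive Bayes classifier. The toy messages and labels are made up purely for illustration; a real filter would learn from millions of examples.

```python
# Minimal sketch of supervised spam classification (assumes scikit-learn).
# The tiny dataset is invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "WIN a FREE prize now!!!",      # spam
    "Cheap meds, click here",       # spam
    "Lunch at noon tomorrow?",      # not spam
    "Here are the meeting notes",   # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# The vectorizer turns each message into word counts; the classifier
# learns which words weigh toward "spam" vs. "ham".
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize meeting"]))  # the model's best guess
```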

Now complicate the data and the request, run this through an artificial neural network, and you have Deep Learning that will come up with models that may be beyond human understanding. Ask DL why it made a particular move in a game of Go or why it recommended increasing police patrols on the corner of Elm and Maple, and it may not be able to give an answer that human brains can comprehend.

We know from experience that machine learning can re-express human biases built into the data we feed it. Cathy O'Neil's Weapons of Math Destruction contains plenty of evidence of this. We know it can happen not only inadvertently but subtly. With Deep Learning, we can be left entirely uncertain about whether and how this is happening. We can certainly adjust DL so that it gives fairer results when we can tell that it's going astray, as when it only recommends white men for jobs or produces a freshman class with 1% African Americans. But when the results aren't that measurable, we can be using results based on bias and not know it. For example, is anyone running the metrics on how many books by people of color Amazon recommends? And if we use DL to evaluate complex tax law changes, can we tell if it's based on data that reflects racial prejudices?[1]
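To make "running the metrics" concrete, here's a hypothetical sketch of the simplest possible audit: tally a recommender's output by group and compare it to a reference share. The group labels, counts, and reference figures are all invented, and deciding how (or whether) such labels can be assigned at all is itself the hard part.

```python
# Hypothetical audit sketch: how is a recommender's output distributed
# across author groups, compared to a reference share? All values here
# are invented for illustration.
from collections import Counter

recommended_author_groups = ["white", "white", "poc", "white", "white", "poc"]
reference_share = {"white": 0.60, "poc": 0.40}  # e.g., share among published authors

counts = Counter(recommended_author_groups)
total = sum(counts.values())

for group, ref in reference_share.items():
    observed = counts[group] / total
    print(f"{group}: recommended {observed:.0%} vs. reference {ref:.0%} "
          f"(ratio {observed / ref:.2f})")
```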

So this is not to say that we shouldn’t use machine learning or deep learning. That would remove hugely powerful tools. And of course we should and will do everything we can to keep our own prejudices from seeping into our machines’ algorithms. But it does mean that when we are dealing with literally inexplicable results, we may well not be able to tell if those results are based on biases.

In short: Any sufficiently advanced technology is indistinguishable from prejudice.[2]

[1] We may not care, if the result is a law that achieves the social goals we want, including equal and fair treatment of taxpayers regardless of race.

[2] Please note that that does not mean that advanced technology is prejudiced. We just may not be able to tell.


Categories: philosophy, tech Tagged with: ai • deep learning • ethics • philosophy Date: May 18th, 2017 dw


May 15, 2017

[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I'm at a day-long conference/meet-up put on by the Berkman Klein Center's and MIT Media Lab's "AI for the Common Good" project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

"Should I insist on being misjudged by a human judge because that's somehow artisanal?" when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal "right to explanation" mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither we nor the machines could solve on our own. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is "augmented intelligence for public interest data science."

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn't get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching "machines that make machines." She points to the first computer-controlled machine ("Teaching Power Tools to Run Themselves"), where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That's still the case, but it looks different. Now the old jobs are being done by far fewer people. But the space in between doesn't always work so well. E.g., Apple can define an automatable workflow for milling components, but if you're a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn't much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, an MIT grad student with a newly-minted Ph.D. (congrats, Nathan!) and a BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. And, what are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they "think" they're doing? Who can do what, which raises questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how to answer them, and how to organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are "generative" in JZ's sense: systems that we can all contribute to on relatively equal terms and share with others?

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don't work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she's generating new sets of facial data that we can retest on. Overall, it'd be good to use test data that represents the real world, and to make sure a representative cross-section of humanity is working on these systems. So here's my question: we find co-design works well, so should we bring the affected populations in to talk with the system designers?

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


Categories: berkman, culture, liveblog, philosophy, science, tech Tagged with: ai • ethics • machine learning • philosophy Date: May 15th, 2017 dw


October 11, 2016

[liveblog] Bas Nieland, Datatrics, on predicting customer behavior

At the PAPis conference Bas Nieland, CEO and Co-Founder of Datatrics, is talking about how to predict the color of shoes your customer is going to buy. The company tries to "make data science marketeer-proof for marketing teams of all sizes." It tries to create 360-degree customer profiles by bringing together info from all the data silos.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

They use some machine learning to create these profiles. The profile includes the buying phase, the best time to present choices to a user, and the type of persuasion that will get them to take the desired action. [Yes, this makes me uncomfortable.]

It is structured around a core API that talks to mongoDB and MySQL. They provide “workbenches” that work with the customer’s data systems. They use BigML to operate on this data.

The outcome is a set of models that can be used to make recommendations. They use visualizations so that marketeers can understand them. But the marketeers couldn't figure out how to use even simplified visualizations. So they created visual decision trees. But still the marketeers couldn't figure it out. So they turn the data into simple declarative phrases: which audience to contact, in which channel, with what content, and when. E.g.:

"To increase sales, contact your customers in the buying phase with high engagement through FB with content about jeans on sale on Thursday, around 10 o'clock."
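I don't know what Datatrics' pipeline actually looks like under the hood, but the final step presumably amounts to templating: turning the model's structured prediction into a sentence like the one above. A hypothetical sketch, with made-up field names:

```python
# Hypothetical templating step: turn a model's structured prediction
# into a plain-language recommendation. Field names and values are
# invented; nothing here reflects Datatrics' actual code.
def recommendation_sentence(pred: dict) -> str:
    return (
        f"To increase {pred['goal']}, contact your customers in the "
        f"{pred['phase']} phase with {pred['engagement']} engagement "
        f"through {pred['channel']} with content about {pred['content']} "
        f"on {pred['day']}, around {pred['hour']} o'clock."
    )

example = {
    "goal": "sales",
    "phase": "buying",
    "engagement": "high",
    "channel": "Facebook",
    "content": "jeans on sale",
    "day": "Thursday",
    "hour": 10,
}
print(recommendation_sentence(example))
```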

They predict the increase in sales for each action, and quantify in dollars the size of the opportunity. They also classify responses by customer type and phase.

For a hotel chain, they connected 16,000 variables and 21M data points, which BigML reduced to 75 variables and used to create a predictive model that ended up getting the chain more customer conversions. E.g., if the model says someone is in the orientation phase, the Web site shows photos of recommended hotels. If in the decision phase, the user sees persuasive messages, e.g., "18 people have looked at this room today." The messages themselves are chosen based on the customer's profile.
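Again purely as a guess at what that looks like in code, here's a sketch of the phase-to-content mapping just described. The phase names come from the talk; the content strings and the predict_phase() stub are invented stand-ins for the real BigML model.

```python
# Sketch of phase-based personalization. The phase names ("orientation",
# "decision") come from the talk; everything else is a placeholder.
CONTENT_BY_PHASE = {
    "orientation": "photo gallery of recommended hotels",
    "decision": "persuasive message: '18 people have looked at this room today.'",
}

def predict_phase(visitor_profile: dict) -> str:
    # Stand-in for the predictive model built from the reduced 75 variables.
    return "decision" if visitor_profile.get("return_visits", 0) > 2 else "orientation"

def content_for(visitor_profile: dict) -> str:
    return CONTENT_BY_PHASE[predict_phase(visitor_profile)]

print(content_for({"return_visits": 3}))
```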

Coming up: Chatbot integration. It's a "real conversation" [with a bot with a photo of an attractive white woman who is supposedly doing the chatting].

Take-aways: Start simple. Make ML very easy to understand. Make it actionable.

Q&A

Me: Is there a way built in for a customer to let your model know that it's gotten her wrong? E.g., stop sending me pregnancy ads because I lost the baby.

Bas: No.

Me: Is that on the roadmap?

Bas: Yes. But not on a schedule. [I’m not proud of myself for this hostile question. I have been turning into an asshole over the past few years.]


Categories: big data, business, cluetrain, future, liveblog, marketing Tagged with: ethics • personalization Date: October 11th, 2016 dw


July 13, 2016

Making the place better

I was supposed to give an opening talk at the 9th annual Ethics & Publishing conference put on by George Washington University. Unfortunately, a family emergency kept me from going, so I sent a very homemade video of the presentation that I recorded at my desk with my monitor raised to head height.

The theme of my talk was a change in how we make the place better — “the place” being where we live — in the networked age. It’s part of what I’ve been thinking about as I prepare to write a book about the change in our paradigm of the future. So, these are thoughts-in-progress. And I know I could have stuck the landing better. In any case, here it is.


Categories: free culture, future, open access, philosophy Tagged with: ethics • publishing Date: July 13th, 2016 dw


July 22, 2013

Paid content needs REALLY BIG metadata

HBR.com has just put up a post of mine about some new guidelines for "paid content." The guidelines come from the PR and marketing communications company Edelman, which creates and places paid content for its clients. (Please read the disclosure that takes up all of paragraph 4 of my post. Short version: Edelman paid for a day of consulting on the guidelines. And, no, that didn't include me agreeing to write about the guidelines.)

I just read the current issue of Wired (Aug.) and was hit by a particularly good example. This issue has a two-page spread on pp. 34-35 featuring an infographic that is stylistically indistinguishable from another infographic on p. 55. The fact that the two-pager is paid content is flagged only by a small Shell logo in the upper left and the words "Wired promotion" in gray text half the height of the "article's" subhead. It's just not enough.

Worse, once you figure out that it’s an ad, you start to react to legitimate articles with suspicion. Is the article on the very next page (p. 36) titled “Nerf aims for girls but hits boys too” also paid content? How about the interview with the stars of the new comedy “The World’s End”? And then there’s the article on p. 46 that seems to be nothing but a plug for coins from Kitco. The only reason to think it’s not an ad in disguise is that it mentions a second coin company, Metallium. That’s pretty subtle metadata. Even so, it crossed my mind that maybe the two companies pitched in to pay for the article.

That’s exactly the sort of thought a journal doesn’t want crossing its readers’ minds. The failure to plainly distinguish paid content from unpaid content can subvert the reader’s trust. While I understand the perilous straits of many publications, if they’re going to accept paid content (and that seems like a done deal), then this month’s Wired gives a good illustration of why it’s in their own interest to mark their paid content clearly, using a standardized set of terms, just as the Edelman guidelines suggest.

(And, yes, I am aware of the irony – at best – that my taking money from Edelman raises just the sort of trust issues that I’m decrying in poorly-marked paid content.)


Categories: business, journalism, marketing Tagged with: ethics • marketing • paid content • pr Date: July 22nd, 2013 dw




This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.
