June 12, 2021

The Shopping Cart Imperative

A long-time friend and, I’ve learned, a former grocery worker, today on a mailing list posted a brief rant calling people who do not return their grocery carts to the cart corral “moral cretins.” He made exceptions for people parked in handicapped parking spots, but not those who say they cannot leave their children unattended in a car for ten seconds. “Model good behavior,” he enjoins the latter folks.

While I always return my cart — honestly, I do — I felt weirdly compelled to defend those who willfully disobey the cart injunction, even though I understand where my friend is coming from on this issue: non-cart-returning is evidence of a belief that one can just waltz through life without thinking about the consequences of one’s actions, just expecting other “lesser” humans to clean up after you.

Here’s what I wrote:

I want to rise in a weak defense of those who do not return their carts.

While some certainly are moral cretins and self-centered ass-hats, others may believe that the presence of cart wranglers in the parking lot is evidence that the store is providing a cart-return service. “That’s their job,” these people may be thinking.

Why then does the store give over some parking spaces to cart collection areas?  They are there for the convenience of shoppers who are taking carts. It’s up to the cart wranglers to make sure that area is always stocked.

But why then does the store have signs that say, “Please return your carts”? Obviously the “please” means that the store is asking you to volunteer to do their job for them.

Who would interpret a sign that way? Ok, probably moral cretins and self-centered ass-hats.

I’m just being a wiseguy in that last sentence. Not only do I know you non-returners are fine people who have good reasons for your behavior, I even understand that there are probably more important things to talk about.


Categories: ethics, humor, philosophy Tagged with: ethics • morality • philosophy • shopping carts Date: June 12th, 2021 dw

3 Comments »

January 11, 2021

Parler and the failure of moral frameworks

This probably is not about what you think it is. It doesn’t take a moral stand about Parler or about its being chased off the major platforms and, in effect, off the Internet. Yet the title of this post is accurate: it’s about why moral frameworks don’t help us solve problems like those posed by Parler.

Traditional moral frameworks

The two major philosophical frameworks we use in the West to assess moral situations are consequentialism (mainly utilitarianism) and deontology. Utilitarianism assesses the morality of a choice based on the cumulative amount of happiness it will bring across the entire population (or how much it diminishes unhappiness). Deontology applies moral principles to cases, such as “It’s wrong to steal.”

Each has its advantages, but I don’t see how to apply them in a way that settles the issues about Parler. Or about most other things.

For example, from almost its very beginning (J.S. Mill, but not Bentham, as far as I remember), utilitarians have had to institute a hierarchy of pleasures in order to meet the objection that if we adopt that framework we should morally prefer policies that promote drunkenness and sex over funding free Mozart concerts. (Just a tad of class bias showing there :) Worse, in a global space, do we declare a small culture’s happiness to be of less worth than that of a culture with a larger population? Indeed, how do we apply utilitarianism to a single culture’s access to, for example, pornography?

That last question raises a different, and common, objection to utilitarianism: suppose overall happiness is increased by ignoring the rights of others? It’s hard for utilitarianism to get over the conclusion that slavery is ok so long as the people held as slaves are greatly outnumbered by those who benefit from them. The other standard example is a contrivance in which a town’s overall happiness is greatly increased by allowing a person known by the authorities to be innocent to nevertheless be hanged. That’s because it turns out that most of us have a sense of deontological principles: We don’t care if slavery or hanging innocent people results in an overall happier society because it’s wrong on principle.

But deontology has its own issues with being applied. The closest Immanuel Kant — the most prominent deontologist — gets to putting some particular value into his Categorical Imperative is to phrase it in terms of treating people as ends, not means, i.e., valuing autonomy. Kant argues that it is central because without it we can’t be moral creatures. But it’s not obvious that that is the highest value for humans, especially in difficult moral situations, nor is it clear how and when to limit people’s autonomy. (Many of us believe we also can’t be fully moral without empathy, but that’s a different argument.)

The relatively new — 30-year-old — ethics of care avoids many of the issues with both of these moral frameworks by losing primary interest in general principles or generalized happiness, and instead thinking about morality in terms of relationships with distinct and particular individuals to whom we owe some responsibility of care; it takes as its fundamental and grounding moral behavior the caring of a mother for a child. (Yes, it recognizes that fathers also care for children.) It begins with the particular, not an attempt at the general.

Applying the frameworks to Parler

So, how do any of these help us with the question of de-platforming Parler?

Utilitarians might argue that the existence of Parler as an amplifier of hate threatens to bring down the overall happiness of the world. Of course, the right-wing extremists on Parler would argue exactly the opposite, and would point to the detrimental consequences of giving the monopoly platforms this power.  I don’t see how either side convinces the other on this basis.

Deontologists might argue that the de-platforming violates the rights of the users and readers of Parler. Other deontologists might talk about the rights threatened by the consequences of the growth of fascism enabled by Parler. Or they might simply make the utilitarian argument. Again, I don’t see how these frameworks lead to convincing the other side.

While there has been work done on figuring out how to apply the ethics of care to policy, it generally doesn’t make big claims about settling this sort of issue. But it may be that moral frameworks should not be measured by how effectively they convert opponents, but rather by how well they help us come to our own moral beliefs about issues. In that case, I still don’t see how much they help.

If forced to have an opinion about Parler — and I don’t think I have one worth stating — I’d probably find a way to believe that the harmful consequences of Parler outweigh hindering the human right of the participants to hang out with people they want to talk with and to say whatever they want. My point is definitely not that you ought to believe the same thing, because I’m very uncomfortable with it myself. My point is that moral frameworks don’t help us much.

And, finally, as I posted recently, I think moral questions are getting harder and harder now that we are ever more aware of more people, more opinions, and the complex dynamic networks of people, beliefs, behavior, and policies.

* * *

My old friend AKMA — so learned, wise, and kind that you could plotz — takes me to task in a very thought-provoking way. I reply in the comments.


Categories: echo chambers, ethics, everyday chaos, media, philosophy, policy, politics, social media Tagged with: ethics • free speech • morality • parler • philosophy • platforms Date: January 11th, 2021 dw

Be the first to comment »

August 18, 2020

America the Diverse

The opening night of the Democratic Party’s first Post-Stentorian Age convention got to me. Of course I loved Michelle Obama’s profoundly righteous talk. But what really got to me were the faces we saw. It was on purpose and it worked. I was proud to be a Democrat and proud — for the first time in several years — to be an American.

No, we are not unique in our diversity. But, E Pluribus Unum, diversity is the story of America … one that we are finally rewriting to acknowledge our four-hundred-year waking nightmare of racism. To say that we did not live up to our self-image and ideals is to mumble “I think I smell something” in a theater that has all but burned to the ground. And note the implicit racism of my unassuming “we” in that sentence.

The Democrats made a proper show of the party’s commitment to diversity, to the point that when a small group of youngsters — who turned out to be Biden’s grandchildren — recited the Pledge of Allegiance, I was shocked to see a screen with only white faces on it.

We all know it’s time to turn the Party over to people of color. More than time. Yes, we are nominating an old white man because we’re afraid in this exceptional election to stray from what we perceive as the safest possible choice. I understand that. But now the Democrats have the beginnings of a diverse bench to draw from. Good.

No more excuses. Time’s up.


Categories: ethics, politics Tagged with: morality • politics • race Date: August 18th, 2020 dw

1 Comment »

June 1, 2020

Rights vs. Dignity

Of course we need to accord people their rights and their dignity. But over time I have come to find dignity to be the more urgent demand.

Rights cover what a society will let people do. Dignity pertains to who a person is.

Rights are granted on the basis of theories. Dignity is enacted in the presence of another.

Rights are mediated by whatever institution grants the rights. Dignity is unmediated, immediate.

Rights are the same for all. Dignity is for the singular person before you.

You can grudgingly grant people their rights. The moment you grant someone their dignity, any resentment you had about doing so turns against yourself.

Grant people their dignity, and rights will follow. Grant people their rights and you may treat them like slaves who have been freed by law.

A world without dignity is not at peace.


Categories: culture, ethics, peace, politics Tagged with: culture • politics • rights Date: June 1st, 2020 dw

Be the first to comment »

June 25, 2019

Nudge gone evil

Princeton has published the ongoing results of its “Dark Patterns” project. The site says:

Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions.

So, Nudge gone evil. (And far beyond nudging.)

From the Dark Patterns page, here are some of the malicious patterns they are tracking (a toy sketch of how a few of them might be spotted follows the list):

  • Activity Notification: Influencing shopper decisions by making the product appear popular with others.
  • Confirmshaming: Steering shoppers to certain choices through shame and guilt.
  • Countdown Timer: Pressuring shoppers with a decreasing count-down timer.
  • Forced Enrollment: Requiring shoppers to agree to something in order to use basic functions of the website.
  • Hard to Cancel: Making it easy for shoppers to sign up and obstructing their ability to cancel.
  • Hidden Costs: Waiting to reveal extra costs to shoppers until just before they make a purchase.
  • Hidden Subscription: Charging a recurring fee after accepting an initial fee or trial period.
  • High Demand: Pressuring shoppers by suggesting that a product has high demand.
  • Limited Time: Telling shoppers that a deal or discount will expire soon.
  • Low-Stock Notification: Pressuring shoppers with claims that the inventory is low.
  • Pressured Selling: Pre-selecting or pressuring shoppers to accept the most expensive options.
  • Sneak into Basket: Adding extra products into shopping carts without consent or through boxes checked by default.
  • Trick Questions: Steering shoppers into certain choices with confusing language.
  • Visual Interference: Distracting shoppers away from certain information through flashy color, style, and messages.
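As a toy illustration (this is my own sketch, not the Princeton team’s crawler or methodology), here is roughly what flagging a few of these patterns by their characteristic phrasing could look like. The regexes and the find_patterns helper are invented for the example:

```python
# Toy sketch (not the Princeton crawler): flag page text that resembles a few
# of the patterns listed above using simple regexes. Pattern names follow the
# list; the trigger phrases are illustrative only.
import re

PATTERN_REGEXES = {
    "Low-Stock Notification": re.compile(r"only \d+ left in stock", re.I),
    "High Demand":            re.compile(r"\d+ (people|others) are (viewing|looking at) this", re.I),
    "Countdown Timer":        re.compile(r"(offer|deal|sale) ends in \d+:\d+", re.I),
    "Limited Time":           re.compile(r"(limited time|expires (today|soon))", re.I),
}

def find_patterns(page_text: str) -> list[str]:
    """Return the names of patterns whose trigger phrasing appears in the text."""
    return [name for name, rx in PATTERN_REGEXES.items() if rx.search(page_text)]

if __name__ == "__main__":
    sample = "Hurry! Only 2 left in stock. Offer ends in 03:59."
    print(find_patterns(sample))   # ['Low-Stock Notification', 'Countdown Timer']
```

A real study would of course need to crawl pages, handle dynamically generated text, and distinguish truthful scarcity claims from manufactured ones; the sketch is only meant to show that many of these patterns announce themselves in the copy itself.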

Home page: https://webtransparency.cs.princeton.edu/dark-patterns/

Academic paper:
https://webtransparency.cs.princeton.edu/dark-patterns/assets/dark-patterns.pdf

Nathan Mathias has put a front end to this data at the Tricky Sites site: https://trickysites.cs.princeton.edu/


Categories: ethics, marketing Tagged with: marketing • scams Date: June 25th, 2019 dw

Be the first to comment »

September 14, 2018

Five types of AI fairness

Google PAIR (People + AI Research) has just posted my attempt to explain what fairness looks like when it’s operationalized for a machine learning system. It’s pegged around five “fairness buttons” on the new Google What-If tool, a resource for developers who want to try to figure out what factors (“features” in machine learning talk) are affecting an outcome.

Note that there are far more than five ways to operationalize fairness. The point of the article is that once we are forced to decide exactly what we’re going to count as fair — exactly enough that a machine learning system can implement it — we realize just how freaking complex fairness is. OMG. I broke my brain trying to figure out how to explain some of those ideas, and it took several Google developers (especially James Wexler) and a fine mist of vegetarian broth to restore it even incompletely. Even so, my explanations are less clear than I (or you, I’m sure) would like. But at least there’s no math in them :)

I’ll write more about this at some point, but for me the big take-away is that fairness has had value as a moral concept so far because it is vague enough to allow our intuition to guide us. Machine learning is going to force us to get very specific about it. But we are not yet adept enough at it — e.g., we don’t have a vocabulary for talking about the various varieties — plus we don’t agree about them enough to be able to navigate the shoals. It’s going to be a big mess, but something we have to work through. When we do, we’ll be better at being fair.
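To make that concreteness problem concrete, here is a minimal sketch of my own; it is not code from the What-If tool, and the groups, scores, labels, and threshold are all invented. It shows two common operationalizations of fairness (equal selection rates vs. equal true-positive rates) disagreeing about the same classifier:

```python
# Minimal illustration (mine, not Google's): the same scores and a single
# shared threshold can look unfair under two different definitions at once.
# All data below is invented.
def rates(scores, labels, threshold):
    """Return (selection rate, true-positive rate) at a given score threshold."""
    preds = [s >= threshold for s in scores]
    selection = sum(preds) / len(preds)
    preds_for_actual_positives = [p for p, y in zip(preds, labels) if y == 1]
    tpr = sum(preds_for_actual_positives) / max(len(preds_for_actual_positives), 1)
    return selection, tpr

# Two made-up groups with different score distributions and base rates.
group_a = {"scores": [0.9, 0.8, 0.7, 0.4], "labels": [1, 1, 1, 0]}
group_b = {"scores": [0.9, 0.6, 0.5, 0.4], "labels": [1, 1, 0, 0]}

for name, g in [("A", group_a), ("B", group_b)]:
    sel, tpr = rates(g["scores"], g["labels"], threshold=0.65)
    print(f"group {name}: selection rate {sel:.2f}, true-positive rate {tpr:.2f}")
# group A: selection rate 0.75, true-positive rate 1.00
# group B: selection rate 0.25, true-positive rate 0.50
```

Equalizing the selection rates would mean giving the two groups different thresholds, which in turn changes each group’s true-positive rate; in general the two criteria cannot both be satisfied at once, which is exactly the sort of trade-off you have to face once fairness is made precise enough to implement.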

Now, about the fact that I am a writer-in-residence at Google. Well, yes I am, and have been for about 6 weeks. It’s a 6 month part-time experiment. My role is to try to explain some of machine learning to people who, like me, lack the technical competence to actually understand it. I’m also supposed to be reflecting in public on what the implications of machine learning might be on our ideas. I am expected to be an independent voice, an outsider on the inside.

So far, it’s been an amazing experience. I’m attached to PAIR, which has developers working on very interesting projects. They are, of course, super-smart, but they have not yet tired of me asking dumb questions that do not seem to be getting smarter over time. So, selfishly, it’s been great for me. And isn’t that all that really matters, hmmm?


Categories: ai, ethics, philosophy Tagged with: ai • machine learning • fairness Date: September 14th, 2018 dw

Be the first to comment »

July 5, 2018

Empathy at three

Yesterday afternoon, our three-year-old grandson, who I’ll call Eliza because I’ve heard people have noticed a creep on the Internet recently, played with “Amos,” a 2.5yo child he had never met before. Amos is a sweet, fun child who was eager to join in. Eliza turned three a few days ago, so there was a noticeable age difference but not a huge gap. They played for hours out on the lawn, along with Amos’ wonderfully sociable and kind 6yo sister. It gave me full-body memories of watching our children play with their cousins on the very same lawn. There aren’t a lot of stretches when I’d say I was happy without adding some type of qualifier. Yesterday earned no qualifiers.

Then, after maybe four hours of play, Amos swung a bubble wand and accidentally hit Eliza in the head with it. It’s a foot-long light plastic tube with a long slit in it, and Amos is only 2.5, so there was no damage, no mark, no blame. But, still, no one likes being beaned, especially by surprise.

Eliza started to make the quivering face of a child about to cry, but quickly realized what had happened. You could see him struggle not to cry. His mom — who was born empathetic — took him into the hammock where she was lying down and snuggled him. He spooned so she wouldn’t see that he was still stifling tears. But I could see. And his mom of course could tell. And so could Amos, who started getting upset because Eliza was.

Now, I’m Eliza’s grandparent and he and I are very close in both senses of the word. So I am undoubtedly one of the two most biased people in the world when it comes to him. On the other hand, I have the joy of knowing him well. And I am certain that Eliza was holding back the display of his emotions because he did not want to upset Amos.

I think we often overrate empathy. But not always. And what Eliza exhibited was not just empathy. It was empathy for the person who accidentally hurt him. It was empathy rising above his own contrary feelings. It was empathy in the moment, without pause, that helped the object of that empathy, Amos. It was empathy that could not be expressed as empathy.

So why did I wake up at 2:30 this morning and weep?


Categories: ethics Tagged with: children • empathy • kindness Date: July 5th, 2018 dw

1 Comment »

November 5, 2017

[liveblog] Stefania Druga on how kids can help teach us about AI

Stefania Druga, a graduate student in the Personal Robots research group at the MIT Media Lab, is leading a discussion focusing on how children can help us to better understand and utilize AI. She’s going to talk about some past and future research projects.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

She shows two applications of AI developed for kids. The first is Cayla, a robotic doll. “It got hacked three days after it was released in Germany” and was banned there. The second is Aristotle, which was supposed to be an Alexa for kids. A few weeks ago Mattel decided not to release it, after parents worried about their kids’ privacy signed petitions.

Stefania got interested in what research was being done in this field. She found a couple of papers. One (Lovato & Piper 2015) showed that children mirrored how they interact with Siri, e.g., how angry or assertive. Another (McReynolds et al., 2017 [pdf]) found that how children and parents interact with smart toys revealed how little parents and children know about how much info is being collected by these toys, e.g., Hello Barbie’s privacy concerns. It also looked at how parents and children were being incentivized to share info on social media.

Stefania’s group did a pilot study, having parents and 27 kids interact with various intelligent agents, including Alexa, Julie Chatbot, Tina the T.Rex, and Google Home. Four or five children would interact with the agent at a time, with an adult moderator. Their parents were in the room.

Stefania shows a video about this project. After the kids interacted with the agent, they asked if it was smarter than the child, if it’s a friend, if it has feelings. Children anthropomorphize AIs in playful ways. Most of the older children thought the agents were more intelligent than they were, while the younger children weren’t sure. Two conclusions: Makers of these devices should pay more attention to how children interact with them, and we need more research.

What did the children think? They thought the agents were friendly and truthful. They thought two Alexa devices were separate individuals. The older children thought about these agents differently than the younger ones did. This latter may be because of how children start thinking about smartness as they progress through school. A question: do they think about artificial intelligence as being the same as human intelligence?

After playing with the agents, they would probe the nature of the device. “They are trying to place the ontology of the device.”

Also, they treated the devices as gender ambiguous.

The media glommed onto this pilot study. E.g., MIT Technology Review: “Growing Up with Alexa.” Or NYTimes: “Co-Parenting with Alexa.” Wired: Understanding Generation Alpha. From these articles, it seems that people are really polarized about the wisdom of introducing children to these devices.

Is this good for kids? “It’s complicated,” Stefania says. The real question is: How can children and parents leverage intelligent agents for learning, or for other good ends?

Her group did another study, this summer, that had 30 pairs of children and parents navigate a robot to solve a maze. They’d see the maze from the perspective of the robot. They also saw a video of a real mouse navigating a maze, and of another robot solving the maze by itself. Does changing the agent (themselves, mouse, robot) change their idea of intelligence? Kids and parents both did the study. Most of the kids mirrored their parents’ choices. They even mirrored the words the parents used…and the value placed on those words.

What next? Her group wants to know how to use these devices for learning. They build extensions using Scratch, including for an open source project called Poppy. (She shows a very cool video of the robot playing, collaborating, painting, etc.) Kids can program it easily. Ultimately, she hopes that this might help kids see that they have agency, and that while the robot is smart at some things, people are smart at other things.

Q&A

Q: You said you also worked with the elderly. What are the chief differences?

A: Seniors and kids have a lot in common. They were especially interested in the fact that these agents can call their families. (We did this on tablets, and some of the elderly can’t use them because their skin is too dry.)

Q: Did learning that they can program the robots change their perspective on how smart the robots are?

A: The kids who got the bot through the maze did not show a change in their perspective. When they become fluent in customizing it and understanding how it computes, it might. It matters a lot to have the parents involved in flipping that paradigm.

Q: How were the parents involved in your pilot study?

A: It varied widely by parent. It was especially important to have the parents there for the younger kids because the device sometimes wouldn’t understand the question, or what sorts of things the child could ask it about.

Q: Did you look at how the participants reacted to robots that have strong or weak characteristics of humans or animals?

A: We’ve looked at whether it’s an embodied intelligent agent or not, but not at that yet. One of our colleagues is looking at questions of empathy.

Q: [me] Do the adults ask their children to thank Siri or other such agents?

A: No.

Q: [me] That suggests they’re tacitly shaping them to think that these devices are outside of our social norms?

Q: In my household, the “thank you” extinguishes itself: you do it a couple of times, and then you give it up.

A: This indicates that these systems right now are designed in a very transactional way. You have to say the wake-up call with every single phrase. But these devices will advance rapidly. Right now it’s unnatural conversation. But with chatbots kids have a more natural conversation, and will say thank you. And kids want to teach it things, e.g., their names or favorite color. When Alexa doesn’t know what the answer is, the natural thing is to tell it, but that doesn’t work.

Q: Do the kids think these are friends?

A: There’s a real question around animism. Is it ok for a device to be designed to create a relationship with, say, a senior person and to convince them to take their pills? My answer is that people tend to anthropomorphize everything. Over time, kids will figure out the limitations of these tools.

Q: Kids don’t have genders for the devices? The speaking ones all have female voices. The doll is clearly a female.

A: Kids were interchanging genders because the devices are in a fluid space in the spectrum of genders. “They’re open to the fact that it’s an entirely new entity.”

Q: When you were talking about kids wanting to teach the devices things, I was thinking maybe that’s because they want the robot to know them. My question: Can you say more about what you observed with kids who had intelligent agents at home as opposed to those who do not?

A: Half already had a device at home. I’m running a workshop in Saudi Arabia with kids there. I’m very curious to see the differences. Also in Europe. We did one in Colombia among kids who had never seen an Alexa before and who wondered where the woman was. They thought there must be a phone inside. They all said good bye at the end.

Q: If the wifi goes down, does the device’s sudden stupidness concern the children? Do they think it died?

A: I haven’t tried that.

[me] Sounds like that would need to go through an IRB.

Q: I think your work is really foundational for people who want to design for kids.


Categories: ai, education, ethics, liveblog Tagged with: ai • children • education • ethics • robots Date: November 5th, 2017 dw

Be the first to comment »

June 6, 2017

[liveblog] metaLab

Harvard metaLab is giving an informal Berkman Klein talk about their work on designing for ethical AI. Jeffrey Schnapp introduces metaLab as “an idea foundry, a knowledge-design lab, and a production studio experimenting in the networked arts and humanities.” The discussion today will be about metaLab’s various involvements in the Berkman Klein – MIT MediaLab project on ethics and governance of AI. The conference is packed with Fellows and the newly-arrived summer interns.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Matthew Battles and Jessica Yurkofsky begin by talking about Curricle, a “new platform for experimenting with shopping for courses.” How can the experience be richer, more visual, and use more of the information and data that Harvard has? They’ve come up with a UI that has three elements: traditional search, a visualization, and a list of the results.

They’ve been grappling with the ethics of putting forward new search algorithms. The design is guided by transparency, autonomy, and visualization. Transparency means that they make apparent how the search works, allowing students to assign weights to keywords. If Curricle makes recommendations, it will explain that it’s because other students like you have chosen it or because students like you have never done this, etc. Visualization shows students what’s being returned by their search and how it’s distributed.
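As a rough sketch of what user-assigned keyword weights could mean in practice (this is not Curricle’s code; the courses, keywords, and scoring function are invented), a transparent ranking can be as simple as a weighted count of keyword hits, which makes it easy to explain to a student why a course landed where it did:

```python
# Toy sketch of transparent, student-weighted keyword search (not Curricle's
# actual algorithm). The student picks the keywords and the weights; the score
# is just the sum of weights for keywords found in the course description.
def score(description: str, weights: dict[str, float]) -> float:
    """Weighted count of the student's keywords that appear in the description."""
    text = description.lower()
    return sum(w for kw, w in weights.items() if kw.lower() in text)

courses = {
    "Intro to Ethics": "ethics, moral frameworks, consequentialism, deontology",
    "Machine Learning": "supervised learning, fairness, neural networks",
}
weights = {"fairness": 2.0, "ethics": 1.0}   # set by the student, not the system

ranked = sorted(courses, key=lambda c: score(courses[c], weights), reverse=True)
print(ranked)   # ['Machine Learning', 'Intro to Ethics'] under these toy weights
```

Because the score is nothing more than the sum of weights the student chose, explaining any ranking is immediate, which is the kind of transparency the design principle points at.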

Similar principles guide a new project, AI Compass, that is the entry point for information about Berkman Klein’s work on the Ethics and Governance of AI project. It is designed to document the research being done and to provide a tool for surveying the field more broadly. They looked at how neural nets are visualized, how training sets are presented, and other visual metaphors. They are trying to find a way to present these resources in their connections. They have decided to use Conway’s Game of Life [which I was writing about an hour ago, which freaks me out a bit]. The game allows complex structures to emerge from simple rules. AI Compass is using animated cellular automata as icons on the site.
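For readers who don’t know the game, the rules really are that simple. Here is a generic sketch of one update step (my own minimal representation of live cells as a set of coordinates, not metaLab’s code):

```python
# Minimal Conway's Game of Life step: a live cell survives with 2 or 3 live
# neighbors; a dead cell becomes live with exactly 3.
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One Game of Life generation over a set of live (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {c for c, n in neighbor_counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(blinker))   # the vertical {(1, 0), (1, 1), (1, 2)} (set order may vary)
```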

metaLab wants to enable people to explore the information at three different scales. The macro scale shows all of the content arranged into thematic areas. This lets you see connections among the pieces. The middle scale shows the content with more information. At the lowest scale, you see the resource information itself, as well as connections to related content.

Sarah Newman talks about how AI is viewed in popular culture: the Matrix, Ahnuld, etc. We generally don’t think about AI as it’s expressed in the tools we actually use, such as face recognition, search, recommendations, etc. metaLab is interested in how art can draw out the social and cultural dimensions of AI. “What can we learn about ourselves by how we interact with, tell stories about, and project logic, intelligence, and sentience onto machines?” The aim is to “provoke meaningful reflection.”

One project is called “The Future of Secrets.” Where will our email and texts be in 100 years? And what does this tell us about our relationship with our tech? Why and how do we trust them? It’s an installation that’s been at the Museum of Fine Arts in Boston and recently in Berlin. People enter secrets that are printed out anonymously. People created stories, most of which weren’t true, often about the logic of the machine. People tended to project much more intelligence onto the machine than was there. Cameras were watching and would occasionally print out images from the show itself.

From this came a new piece (done with fellow Rachel Kalmar) in which a computer reads the secrets out loud. It will be installed at the Berkman Klein Center soon.

Working with Kim Albrecht in Berlin, the center is creating data visualizations based on the data that a mobile phone collects, including the accelerometer. These visualizations let us see how the device is constructing an image of the world we’re moving through. That image is messy, noisy.

The lab is also collaborating on a Berlin exhibition, adding provocative framing using X degrees of Separation. It finds relationships among objects from disparate cultures. What relationships do algorithms find? How does that compare with how humans do it? What can we learn?

Starting in the fall, Jeffrey and a co-teacher are going to be leading a robotics design studio, experimenting with interior and exterior architecture in which robotic agents are copresent with human actors. This is already happening, raising regulatory and urban planning challenges. The studio will also take seriously machine vision as a way of generating new ways of thinking about mobility within city spaces.

Q&A

Q: me: For AI Compass, where’s the info coming from? How is the data represented? Open API?

Matthew: It’s designed to focus on particular topics. E.g., Youth, Governance, Art. Each has a curator. The goal is not to map the entire space. It will be a growing resource. An open API is not yet on the radar, but it wouldn’t be difficult to do.

Q: At the AI Advance, Jonathan Zittrain said that organizations are a type of AI: governed by a set of rules, they grow and learn beyond their individuals, etc.

Matthew: We hope to deal with this very capacious approach to AI through artists. What have artists done that bears on AI beyond the cinematic tropes? There’s a rich discourse about this. We want to be in dialogue with all sorts of people about this.

Q: About Curricle: Are you integrating Q results [student responses to classes], etc.?

Sarah: Not yet. There’s mixed feeling from administrators about using that data. We want Curricle to encourage people to take new paths. The Q data tends to encourage people down old paths. Curricle will let students annotate their own paths and share it.

Jeffrey: We’re aiming at creating a curiosity engine. We’re working with a century of curricular data. This is a rare privilege.

me: It’d enrich the library if the data about resources was hooked into LibraryCloud.

Q: kendra: A useful feature would be finding a random course that fits into your schedule.

A: In the works.

Q: It’d be great to have transparency around the suggestions of unexpected courses. We don’t want people to be choosing courses simply to be unique.

A: Good point.

A: The same tool that lets you diversify your courses also lets you concentrate all of them into two days in classrooms near your dorm. Because the data includes courses from all the faculty, being unique is actually easy. The challenge is suggesting uniqueness that means something.

Q: People choose courses in part based on who else is choosing that course. It’d be great to have friends in the platform.

A: Great idea.

Q: How do you educate the people using the platform? How do you present and explain the options? How are you going to work with advisors?

A: Important concerns at the core of what we’re thinking about and working on.


Categories: ai, culture, ethics, libraries Tagged with: bkc • metalab Date: June 6th, 2017 dw

1 Comment »

