
December 15, 2017

[liveblog] Sonja Amadae on computational creativity

I’m at the STEAM ed Finland conference in Jyväskylä. Sonja Amadae at Swansea University (also currently at Helsinki U.) works on robotic ethics. She will argue in this talk that computers are algorithmic, that they only do what they’re programmed to do, that they don’t understand what they’re doing and they don’t feel human experience. AI is, she concludes, a tool.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.


AI is like a human prosthetic that helps us walk. AI is an enhancement of human capabilities.


She will talk about three cases.


Case 1: Generating a Rembrandt


A bank funded a project [cool site about it] to see what would happen if a computer had all of the data about Rembrandt’s portraits. They quantified the paintings: types, facial aspects including the size and distance of facial features, depth, contour, etc. They programmed the algorithm to create a portrait. The result was quite good. People were pleased. Of course, it painted a white male. Is that creativity?


We are now recognizing the biases widespread in AI. E.g., “Biased algorithms are everywhere and no one seems to care” in MIT Tech Review by Will Knight. She also points to the time that Google mistakenly tagged black people as “gorillas.” So, we know there are limitations.


So, we fix the problem…and we end up with facial recognition systems so good that China can identify jaywalkers from surveillance cams, and then they post their images and names on large screens at the intersections.


Case 2: Forgery detection


The aim of one project was to detect forgeries. It was built on work done by Maurits Michel van Dantzig in the 1950s. He looked at the brushstrokes on a painting; artists have signature brushstrokes. Each painting has on average 80,000 brushstrokes. A computer can compare a suspect painting’s brushstrokes with the legitimate brushstrokes of the artist. The result: the AI could identify forgeries 80% of the time from a single stroke.
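
The talk didn’t cover implementation details, but the core move (classify each segmented brushstroke as genuine or imitated from a few shape features) is easy to sketch. A minimal, hedged version in Python, with invented features and data:

```python
# Toy per-stroke forgery detector. Assumes strokes have been segmented and
# reduced to feature vectors (length, curvature, width variation, etc.);
# all features and data here are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

genuine = rng.normal(loc=0.0, scale=1.0, size=(500, 4))  # artist's strokes
forged = rng.normal(loc=0.7, scale=1.2, size=(500, 4))   # imitator's strokes

X = np.vstack([genuine, forged])
y = np.array([0] * 500 + [1] * 500)   # 0 = genuine, 1 = forged

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a single suspect stroke, as in the ~80%-per-stroke claim.
suspect_stroke = rng.normal(loc=0.7, scale=1.2, size=(1, 4))
print("P(forged):", clf.predict_proba(suspect_stroke)[0, 1])
```

Aggregating per-stroke probabilities across a painting’s tens of thousands of strokes is what would turn a roughly 80% per-stroke rate into a much more reliable whole-painting verdict.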


Case 3: Computational creativity


She cites Wikipedia on Computational Creativity because she thinks it gets it roughly right:

Computational creativity (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.
The goal of computational creativity is to model, simulate or replicate creativity using a computer, to achieve one of several ends:

  • To construct a program or computer capable of human-level creativity.

  • To better understand human creativity and to formulate an algorithmic perspective on creative behavior in humans.

  • To design programs that can enhance human creativity without necessarily being creative themselves.

She also quotes John McCarthy:

‘To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities, or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person.’


If you google “computational art” you’ll see many pictures created computationally. [Or here.] Is it genuine creativity? What’s going on here?


We know that AI’s products can be taken as human. A poem created by AI won a poetry contest for humans. E.g., “A home transformed by the lightning the balanced alcoves smother” But the AI doesn’t know it’s making a poem.


Can AI make art? Well, art is in the eye of the beholder, so if you think it’s art, it’s art. But philosophically, we need to recall Turing-Church Computability which states that the computation “need not be intelligible to the one calculating.” The fact that computers can create works that look creative does not mean that the machines have the awareness required for creativity.


Can the operations of the brain be simulated on a computer? The Turing-Church statement says yes. But now we have computing so advanced that it’s unpredictable, probabilistic, and is beyond human capability. But the computations need not be artistic to the one computing.


Computability has limits:


1. Information and data are not meaning or knowledge.


2. Every single moment of existence is unique in the universe. Every single moment we see a unique aspect of the world. A Turing computer can’t see the outside world. It only has what’s internal to it.


3. The human mind has existential experience.


4. The mind can reflect on itself.


5. Scott Aaronson says that humans can exercise free will and AI cannot, based on quantum theory. [Something about quantum free states.]


6. The universe has non-computable systems. Equilibrium paths?


“Aspect seeing” means that we can make a choice about how we see what we see. And each moment of each aspect is unique in time.


In SF, the SPCA uses a robot to chase away homeless people. Robots cannot exercise compassion.


Computers compute. Humans create. Creativity is not computable.


Q&A


Q: [me] Very interesting talk. What’s at stake in the question?


A: AI has had such a huge presence in our lives. There’s a power of thinking about rationality as computation. It gets best articulated in game theory. Can we conclude that this game-theoretical rationality — the foundational understanding of rationality — is computable? Do humans bring anything to the table? This leads to an argument for the obsolescence of the human. If we’re just computational, then we aren’t capable of any creativity. Or free will. That’s what’s ultimately at stake here.


Q: Can we create machines that are irrational, and have them bring a more human creativity?


A: There are many more types of rationality than the game-theory sort. E.g., we are rational in connection with one another working toward shared goals. The dichotomy between the rational and irrational is not sufficient.


December 5, 2017

[liveblog] Conclusion of Workshop on Trustworthy Algorithmic Decision-Making

I’ve been at a two-day workshop sponsored by Michigan State University and the National Science Foundation: “Workshop on Trustworthy Algorithmic Decision-Making.” After multiple rounds of rotating through workgroups iterating on five different questions, each group presented its findings — questions, insights, areas of future research.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Seriously, I cannot capture all of this.

Conduct of Data Science

What are the problems?

  • Who defines and how do we ensure good practice in data science and machine learning?

Why is the topic important? Because algorithms are important. And they have important real-world effects on people’s lives.

Why is the problem difficult?

  • Wrong incentives.

  • It can be difficult to generalize practices.

  • Best practices may be good for one goal but not another, e.g., efficiency but not social good. Also: Lack of shared concepts and vocabulary.

How to mitigate the problems?

  • Change incentives

  • Increase communication via vocabularies, translations

  • Education through MOOCs, meetups, professional organizations

  • Enable and encourage resource sharing: an open source lesson about bias, code sharing, data set sharing

Accountability group

The problem: How to integratively assess the impact of an algorithmic system on the public good? “Integrative” = the impact may be positive and negative and affect systems in complex ways. The impacts may be distributed differently across a population, so you have to think about disparities. These impacts may well change over time.

We aim to encourage work that is:

  • Aspirationally causal: measuring outcomes causally but not always through randomized control trials.

  • The goal is not to shut down algorithms but to make positive contributions that generate solutions.

This is a difficult problem because:

  • Lack of variation in accountability, enforcements, and interventions.

  • It’s unclear what outcomes should be measured and how; this is context-dependent

  • It’s unclear which interventions are the highest priority

Why progress is possible: There’s a lot of good activity in this space. And it’s early in the topic so there’s an ability to significantly influence the field.

What are the barriers for success?

  • Incomplete understanding of contexts. So, think of it in terms of socio-cultural approaches, and make it interdisciplinary.

  • The topic lies between disciplines. So, develop a common language.

  • High-level triangulation is difficult. Examine the issues at multiple scales, multiple levels of abstraction. Where you assess accountability may vary depending on what level/aspect you’re looking at.

Handling Uncertainty

The problem: How might we holistically treat and attribute uncertainty through data analysis and decision systems? Uncertainty exists everywhere in these systems, so we need to consider how it moves through a system. This runs from choosing data sources to presenting results to decision-makers and people impacted by these results, and beyond that its incorporation into risk analysis and contingency planning. It’s always good to know where the uncertainty is coming from so you can address it.
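
The workshop didn’t prescribe techniques, but the simplest way to move uncertainty through a system is Monte Carlo propagation: represent uncertain inputs as distributions, push samples through the pipeline, and report the spread of the outputs. A minimal sketch, with a made-up pipeline:

```python
# Minimal Monte Carlo propagation: the pipeline and noise model are made up.
import numpy as np

rng = np.random.default_rng(0)

def pipeline(x):
    # Stand-in for an analysis/decision pipeline.
    return np.log1p(x) * 2.5

# Represent the input as a distribution (say, measured 10.0 +/- 1.5)
# instead of a point estimate, and push samples through the pipeline.
samples = rng.normal(loc=10.0, scale=1.5, size=100_000)
outputs = pipeline(samples)

# The output is a distribution too; report its spread, not just a point.
print(f"output: {outputs.mean():.3f} +/- {outputs.std():.3f}")
```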

Why difficult:

  • Uncertainty arises from many places

  • Recognizing and addressing uncertainties is a cyclical process

  • End users are bad at evaluating uncertain info and incorporating uncertainty in their thinking.

  • Many existing solutions are too computationally expensive to run on large data sets

Progress is possible:

  • We have sampling-based solutions that provide a framework.

  • Some app communities are recognizing that ignoring uncertainty is reducing the quality of their work

How to evaluate and recognize success?

  • A/B testing can show that decision making is better after incorporating uncertainty into analysis (a toy sketch follows this list)

  • Statistical/mathematical analysis
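
As a toy version of the A/B-testing bullet above, here is a standard two-proportion z-test comparing decision quality in a control arm against an uncertainty-aware arm; the counts are invented:

```python
# Toy two-proportion z-test: did decisions improve in the uncertainty-aware
# arm? The success counts are invented.
import numpy as np
from scipy.stats import norm

successes = np.array([430, 465])   # good decisions: control vs. treatment
trials = np.array([1000, 1000])

p1, p2 = successes / trials
p_pool = successes.sum() / trials.sum()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / trials[0] + 1 / trials[1]))
z = (p2 - p1) / se
p_value = 1 - norm.cdf(z)          # one-sided: is the treatment arm better?
print(f"z = {z:.2f}, p = {p_value:.4f}")
```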

Barriers to success

  • Cognition: Train users.

  • It may be difficult to break this problem into small pieces and solve them individually

  • Gaps in theory: many of the problems cannot currently be solved algorithmically.

The presentation ends with a note: “In some cases, uncertainty is a useful tool.” E.g., it can make the system harder to game.

Adversaries, workarounds, and feedback loops

Adversarial examples: add a perturbation to a sample and it disrupts the classification. An adversary tries to find those perturbations to wreck your model. Sometimes this is used not to hack the system so much as to prevent the system from, for example, recognizing your face during a protest.
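
The canonical example of this is the fast gradient sign method: nudge the input along the sign of the loss gradient until the classification flips. A minimal sketch on a toy logistic model (weights, input, and step size all invented):

```python
# Toy fast-gradient-sign attack on a hand-built logistic classifier.
# Weights, bias, input, and epsilon are all invented for illustration.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # "trained" weights
b = 0.1

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b)))   # P(class 1)

x = np.array([0.2, -0.4, 0.9])   # sample confidently classified as class 1
print("before:", predict(x))     # ~0.84

# For the true label 1, the loss gradient w.r.t. x is proportional to -w,
# so stepping along sign(-w) pushes the prediction toward the wrong class.
eps = 0.5
x_adv = x + eps * np.sign(-w)
print("after: ", predict(x_adv))  # ~0.41: the classification flips
```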

Feedback loops: A recidivism prediction system says you’re likely to commit further crimes, which sends you to prison, which increases the likelihood that you’ll commit further crimes.
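
A toy simulation, with all numbers invented, shows how such a loop compounds: the prediction triggers an intervention that makes the prediction more true each round.

```python
# Toy simulation of a prediction-punishment feedback loop; all numbers invented.
# Higher predicted risk -> incarceration -> actual risk rises -> next
# prediction is higher still.
risk = 0.30                          # initial predicted recidivism risk
for year in range(5):
    incarcerated = risk > 0.25       # policy: incarcerate above this threshold
    if incarcerated:
        risk = min(1.0, risk * 1.3)  # assumed criminogenic effect of prison
    print(f"year {year}: predicted risk = {risk:.2f}")
```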

What is the problem: How should a trustworthy algorithm account for adversaries, workarounds, and feedback loops?

Who are the stakeholders?

System designers, users, non-users, and perhaps adversaries.

Why is this a difficult problem?

  • It’s hard to define the boundaries of the system

  • From whose vantage point do we define adversarial behavior, workarounds, and feedback loops?

Unsolved problems

  • How do we reason about the incentives users and non-users have when interacting with systems in unintended ways?

  • How do we think about oversight and revision in algorithms with respect to feedback mechanisms?

  • How do we monitor changes, assess anomalies, and implement safeguards?

  • How do we account for stakeholders while preserving rights?

How to recognize progress?

  • Mathematical model of how people use the system

  • Define goals

  • Find stable metrics and monitor them closely

  • Proximal metrics. Causality?

  • Establish methodologies and see them used

  • See a taxonomy of adversarial behavior used in practice

Likely approaches

  • Apply security methodology to anticipate unintended behaviors and adversarial interactions. Monitor and measure.

  • Record and taxonomize adversarial behavior in different domains

  • Test. Try to break things.

Barriers

  • Hard to anticipate unanticipated behavior

  • Hard to define the problem in particular cases.

  • Goodhart’s Law

  • Systems are born brittle

  • What constitutes adversarial behavior vs. a workaround is subjective.

  • Dynamic problem

Algorithms and trust

How do you define and operationalize trust?

The problem: What are the processes through which different stakeholders come to trust an algorithm?

Multiple processes lead to trust.

  • Procedural vs. substantive trust: are you looking at the weights of the algorithms (e.g.), or what were the steps to get you there?

  • Social vs personal: did you see the algorithm at work, or are you relying on peers?

These pathways are not necessarily predictive of each other.

Stakeholders build trust through multiple lenses and priorities

  • the builders of the algorithms

  • the people who are affected

  • those who oversee the outcomes

Mini case study: a child services agency that does not want to be identified. [All of the following is 100% subject to my injection of errors.]

  • The agency uses a predictive algorithm. The stakeholders range from the children needing a family, to NYers as a whole. The agency knew what went into the model. “We didn’t buy our algorithm from a black-box vendor.” They trusted the algorithm because they staffed a technical team who had credentials and had experience with ethics…and who they trusted intuitively as good people. Few of these are the quantitative metrics that devs spend their time on. Note that FAT (fairness, accountability, transparency) metrics were not what led to trust.

Temporality:

  • Processes that build trust happen over time.

  • Trust can change or may be repaired over time.

  • “The timescales to build social trust are outside the scope of traditional experiments,” although you can perhaps find natural experiments.

Barriers:

  • Assumption of reducibility or transfer from subcomponents

  • Access to internal stakeholders for interviews and process understanding

  • Some elements are very long term


What’s next for this workshop

We generated a lot of scribbles, post-it notes, flip charts, Slack conversations, slide decks, etc. They’re going to put together a whitepaper that goes through the major issues, organizes them, and tries to capture the complexity while helping to make sense of it.

There are weak or no incentives to set appropriate levels of trust.

Key takeaways:

  • Trust is irreducible to FAT metrics alone

  • Trust is built over time and should be defined in terms of the temporal process

  • Isolating the algorithm as an instantiation misses the socio-technical factors in trust.


November 29, 2017

"The Walking Dead" is Negan

[SPOILERS??] There are no direct spoilers of the “So and So dies” sort in this post, but it assumes you are pretty much up to date on the current season of The Walking Dead.

The Walking Dead has become Negan. I mean the show itself.

Negan brings to the show a principle of chaos: you never know who he’s going to bash to death. This puts all the characters at risk, although perhaps some less so than others based on their fan-base attachments.

That adds some threat and tension of the sort that Game of Thrones used to have. But only if it’s a principle of chaos embedded within a narrative structure and set of characters that we care about. And for the prior season and the current one, there’s almost no narrative structure and, frankly, not that many characters who don’t feel like narrative artifices.

As a result, the main tension in the current season is exactly the same as it was at the beginning of last season when we waited to find out who Negan would choose to bash to death. Negan was so random that the viewer discussions generally were attempts to anticipate what the writers wanted to do to us. They had to kill someone significant or else the threat level would go down. But they couldn’t kill so-and-so because s/he was too popular, or whatever. There were no intrinsic reasons why Negan would choose one victim over another — Wild Card! — so the reasons had to do with audience retention.

This entire season is random in that bad way. The writers are now Negan, choosing randomly among Team Rick’s characters. They’re going to kill off someone for some ratings-based reason, and we’re just waiting for them to make up their mind.

The series didn’t start out this way. It had characters in conflict, and characters in arcs. Rick and The Punisher. Carol and her sister. Daryl and his other brother Daryl. Gingerbeard and The Mullet. Now there’s nothing, maybe because every character’s arc has been the same: S/he becomes an empowered action star.

There are still some things I like about the show. For example, it’s heartening to watch them work on female empowerment, although it’d be more interesting if they didn’t all become like Rick. And Negan is a pretty good villain. Sure, I could do with fewer predictable charming smiles, but he’s scary.

But I’ll be damned if in the last episode of this series [MADE-UP SPOILERS AHEAD] Team Rick (which will probably be Team Maggie by then) realizes that it has become Negan. I’ll be especially pissed off if the last shot is of the dying Jesus saying, “We are Negan.” Star wipe. Out. Puke.


October 25, 2017

[liveblog] John Palfrey’s new book (and thoughts on rules vs. models)

John Palfrey is doing a launch event at the Berkman Klein Center for his new book, Safe Spaces, Brave Spaces: Diversity and Free Expression in Education. John is the Head of School at Phillips Academy Andover, and for many years was the executive director of the Berkman Klein Center and the head of the Harvard Law School Library. He’s also the chairman of the board of the Knight Foundation. This event is being put on by the BKC, the Law Library, and Andover. His new book is available on paper, or online as an open access book. (Of course it is. It’s John Palfrey, people!)

[Disclosure: Typical conversations about JP, when he’s not present, attempt — and fail — to articulate his multi-faceted awesomeness. I’ll fail at this also, so I’ll just note that JP is directly responsible for my affiliation with the BKC and for my co-directorship of the Harvard Library Innovation Lab…and those are just the most visible ways in which he has enabled me to flourish as best I can.]

Also, at the end of this post I have some reflections on rules vs. models, and the implicit vs. explicit.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

John begins by framing the book as an attempt to find a balance between diversity and free expression. Too often we have pitted the two against each other, especially in the past few years, he says: the left argues for diversity and the right argues for free expression. It’s important to have both, although he acknowledges that there are extremely hard cases where there is no reconciliation; in those cases we need rules and boundaries. But we are much better off when we can find common ground.

“This may sound old-fashioned in the liberal way. And that’s true,” he says. But we’re having this debate in part because young people have been advancing ideas that we should be listening to. We need to be taking a hard look.

Our institutions should be deeply devoted to diversity, equity and inclusion. Our institutions haven’t been as supportive of these as they should be, although they’re getting better at it, e.g. getting better at acknowledging the effects of institutional racism.

The diversity argument pushes us toward the question of “safe spaces.” Safe spaces are crucial in the same way that every human needs a place where everyone around them supports them and loves them, and where you can say dumb things. We all need zones of comfort, with rules implicit or explicit. It might be a room, a group, a virtual space… E.g., survivors of sexual assault need places where they know there are rules and they can express themselves without feeling at risk.

But, John adds, there should also be spaces where people are uncomfortable, where their beliefs are challenged.

Spaces of both sorts are experienced differently by different people. Privileged people like John experience spaces as safe that others experience as uncomfortable.

The examples in his book include: trigger warnings, safe spaces, the debates over campus symbols, the disinvitation of speakers, etc. These are very hard to navigate and call out for a series of rules or principles. Different schools might approach these differently. E.g., students from the Gann Academy, a local Jewish high school, are here tonight. They well might experience a space differently than students at Andover. Different schools well might need different rules.

Now John turns it over to students for comments. (This is very typical JP: A modest but brilliant intervention and then a generous deferral to the room. I had the privilege of co-teaching a course with him once, and I can attest that he is a brilliant, inspiring teacher. Sorry to be such a JP fanboy, but I am at least an evidence-based fanboy.) [I have not captured these student responses adequately, in some cases simply because I had trouble hearing them. They were remarkable, however. And I could not get their names with enough confidence to attempt to reproduce them here. Sorry!]

Student Responses

Student: I graduated from Andover and now I’m at Harvard. I was struck by the book’s idea that we need to get over the dichotomy between diversity and free expression. I want to address Chapter 5, about hate speech. It says each institution ought to assess its own values to come up with its principles about speech and diversity, and those principles ought to be communicated clearly and enforced consistently. But, I believe, we should in fact be debating what the baseline should be for all institutions. We don’t all have full options about what school we’re going to go to, so there ought to be a baseline we all can rely on.

JP: Great critique. Moral relativism is not a good idea. But I don’t think one size fits all. In the hardest cases, there might be sharpest limits. But I do agree there ought to be some sort of baseline around diversity, equity, and inclusion. I’d like to see that be a higher baseline, and we’ve worked on this at Andover. State universities are different. E.g., if a neo-Nazi group wants to demonstrate on a state school campus and they follow the rules laid out in the Skokie case, etc., they should be allowed to demonstrate. If they came to Andover, we’d say no. As a baseline, we might want to change the regulations so that the First Amendment doesn’t apply if the experience is detrimental to the education of the students; that would be a very hard line to draw. Even if we did, we still might want to allow local variations.

Student: Brave spaces are often built from safe spaces. E.g., at Andover we used Facebook to build a safe space for women to talk, in the face of academic competitions where misogyny was too common. This led to creating brave spaces where open, frank discussion across differences was welcomed.

JP: Yes, giving students a sense of safety so they can be brave is an important point. And, yes, brave spaces do often grow from safe spaces.

Andover student: I was struck by why diversity is important: the cross-pollination of ideas. But from my experience, a lot of that hasn’t occurred because we’re stuck in our own groups. There’s also typically a divide between the students and the faculty. Student activists are treated as if they’re just going through a phase. How do we bridge that gap?

JP: How do we encourage more cross-pollination? It’s a really hard problem for educators. I’ve been struck by the difference between teaching at Harvard Law and Andover in terms of the comfort with disagreeing across political divides; it was far more comfortable at the Law School. I’ve told students if you present a paper that disagrees with my point of view and argues for it beautifully, you’ll do better than parroting ideas back to me. Second, we have to stop using demeaning language to talk about student activists. BTW, there is an interesting dynamic, as teachers today may well have been activists when they were young and think of themselves as the reformers.

Student: [hard to hear] At Andover, our classes were seminar-based, which is a luxury not all students have. Also: Wouldn’t encouraging a broader spread of ideas create schisms? How would you create a school identity?

JP: This echoes the first student speaker’s point about establishing a baseline. Not all schools can have 12 students with two teachers in a seminar, as at Andover. We need to find a dialectic. As for schisms: we have to communicate values. Institutions are challenged these days but there is a huge place for them as places that convey values. There needs to be some top down communication of those values. Students can challenge those values, and they should. This gets at the heart of the problem: Do we tolerate the intolerant?

Student: I’m a graduate of Andover and currently at Harvard. My generation has grown up with the Internet. What happens when what is supposed to be a safe space becomes a brave space for some but not all? E.g., a dorm where people speak freely thinking it’s a safe space. What happens when the default values override what someone else views as comfortable? What is the power of an institution to develop, monitor, and mold what people actually feel? When communities engage in groupthink, how can an institution construct safe spaces?

JP: I don’t have an easy answer to this. We do need to remember that these spaces are experienced differently by different people, and the rules ought to reflect this. Some of my best learning came from late night bull sessions. It’s the duty of the institution to do what it can to enable that sort of space. But we also have to recognize that people who have been marginalized react differently. The rule sets need to reflect that fact.

Student: Andover has many different forum spaces available, from hallways to rooms. We get to decide to choose when and where these conversations will occur. For a more traditional public high school where you only have 30-person classroom as a forum, how do we have the difficult conversations that students at Andover choose to have in more intimate settings?

JP: The size and rule-set of the group matters enormously. Even in a traditional HS you can still break a class into groups. The answer is: How do you hack the space?

Student: I’m a freshman at Harvard. Before the era of safe spaces, we’d call them friends: people we can talk with and have no fear that our private words will be made public, and where we will not be judged. Safe spaces may exclude people, e.g., a safe space open only to women.

JP: Andover has a group for women of color. That excludes people, and for various reasons we think that’s entirely appropriate and useful.

Q&A

Q [Terry Fisher]: You refer frequently to rule sets. If we wanted to have a discussion in a forum like this, you could announce a set of rules. Or the organizer could announce values, such as: we value respect, or we want people to take the best version of what others say. Or, you could not say anything and model it in your behavior. When you and I went to school, there were no rules in classrooms. It was all done by modeling. But this also meant that gender roles were modeled. My experience of you as a wonderful teacher, JP, is that you model values so well. It doesn’t surprise me that so many of your students talk with the precision and respectfulness that you model. I am worried about relying on rule sets, and doubt their efficacy for the long term. Rather, the best hope is people modeling and conveying better values, as in the old method.

JP: Students, Terry Fisher was my teacher. My answer will be incredibly tentative: It is essential for an institution to convey its values. We do this at Andover. Our values tell us, for example, that we don’t want gender-based balance and are aware that we are in a misogynist culture, and thus need reasonable rules. But, yes, modeling is the most powerful.

Q [Dorothy Zinberg]: I’ve been at Harvard for about 70 yrs and I have seen the importance of an individual in changing an institution. For example, McGeorge Bundy thought he should bring 12 faculty to Harvard from non-traditional backgrounds, including Erik Erikson who did not have a college degree. He had been a disciple of Freud’s. He taught a course at Harvard called “The Lifecycle.” Every Harvard senior was reading The Catcher in the Rye. Erikson was giving brilliant lectures, but I told him it was from his point of view as a man, and had nothing to do with the young women. So, he told me, a grad student, to write the lectures. No traditional professor would have done that. Also: for forming groups, there’s nothing like closing the door. People need to be able to let go and try a lot of ideas.

Q: I am from the Sudan. How do you create a safe space in environments that are exclusive? [I may have gotten that wrong. Sorry.] How do you acknowledge the Native American tribes whose land this institution is built on, or the slaves who did the building?

JP: We all have that obligation. [JP gives some examples of the Law School recently acknowledging the slave labor, and the money from slave holders, that helped build the school.]

Q: You used a kitchen as an example of a safe space. Great example. But kitchens are not established or protected by any authority. It’s a new idea that institutions ought to set these up. Do you think there should be safe spaces that are privately set up as well as by institutions? Should some be permitted to exclude people or not?

(JP asks a student to respond): Institutional support can be very helpful when you have a diversity of students. Can institutional safe spaces supplement private ones? I’m not sure. And I do think exclusive groups have a place. As a consensus forms, it’s important to allow the marginalized voices to connect.

Q [head of Gann]: I’m a grad of Phillips Academy. As head of a religious school, we’re struggling with all these questions. Navigating these spaces isn’t just a political or intellectual activity. It is a work of the heart. If the institution thinks of this only as a rational activity and doesn’t tend to the hearts of our students, and is not explicit about the habits of heart we need to navigate these sensitive waters, only those with natural emotional skills will be able to flourish. We need to develop leaders who can turn hard conversations into generative ones. What would it look like to take on the work of social and emotional development?

JP: I’ve been to Gann and am confident that’s what you’re doing. And you can see evidence of Andover’s work on it in the students who spoke tonight. Someone asked me if a student became a Nazi, would you expel him? Yes, if it were apparent in his actions, but probably not for his thoughts. Ideally, our students won’t come to have those views because of the social and emotional skills they’re learning. But people in our culture do have those views. Your question brings it back to the project of education and of democracy.

[This session was so JP!]


A couple of reactions to this discussion without having yet read the book.

First, about Prof. Fisher’s comment: I think we are all likely to agree that modeling the behavior we want is the most powerful educational tool. JP and Prof. Fisher are both superb, well, models of this.

But, as Prof. Fisher noted in his question, the dominant model of discourse for our generation silently (and sometimes explicitly) favored males, white middle class values, etc. Explicit rules weren’t as necessary because we had internalized them and had stacked the deck against those who were marginalized by them. Now that diversity has thankfully become an explicit goal, and now that the Internet has thrown us into conversations across differences, we almost always need to make those rules explicit; a conversation among people from across divides of culture, economics, power, etc. that does not explicitly acknowledge the different norms under which the participants operate is almost certainly going to either fragment or end in misunderstanding.

(Clay Shirky and I had a collegial difference of opinion about this about fifteen years ago. Clay argued for online social groups having explicit constitutions. I argued for the importance of the “unspoken” in groups, and the damage that making norms explicit can cause.)

Second, about the need for setting a baseline: I’m curious to see what JP’s book says about this, because the evidence is that we as a culture cannot agree about what the baseline is: vociferous and often nasty arguments about this have been going on for decades. For example, what’s the baseline for inviting (or disinviting) people with highly noxious views to a private college campus? I don’t see a practical way forward for establishing a baseline answer. We can’t even get Texas schools to stop teaching Creationism.

So, having said that modeling is not enough, and having despaired at establishing a baseline, I think I am left being unhelpfully dialectical:

1. Modeling is essential but not enough.

2. We ought to be appropriately explicit about rules in order to create places where people feel safe enough to be frank and honest…

3. …But we are not going to be able to agree on a meaningful baseline for the U.S., much less internationally — “meaningful” meaning that it is specific enough that it can be applied to difficult cases.

4. But modeling may be the only way we can get to enough agreement that we can set a baseline. We can’t do it by rules because we don’t have enough unspoken agreement about what those rules should be. We can only get to that agreement by seeing our leading voices in every field engage across differences in respectful and emotionally truthful ways. So at the largest level, I find I do agree with Prof. Fisher: we need models.

5. But if our national models are to reflect the values we want as a baseline, we need to be thoughtful, reflective, and explicit about which leading voices we want to elevate as models. We tend to do this not by looking for rules but by looking for Prof. Fisher’s second alternative: values. For example, we say positively that we love John McCain’s being a “maverick” or Kamala Harris’ careful noting of the evidence for her claims, and we disdain Trump’s name-calling. Rules derive from values such as those. Values come before rules.

I just wish I had more hope about the direction we’re going in…although I do see hopeful signs in some of the model voices who are emerging, and most of all, in the younger generation’s embrace of difference.


October 16, 2017

How to screw up a succah. In a good way.

I know you’re all wondering how I was able to build such a magnificent succah, and how I managed to combine inexpensiveness with convenience. But most of all, you’re wondering what the hell is a succah?


A succah is essentially a temporary Jew shack that you eat in during the holiday of Succos (AKA Sukkot). It has to meet certain requirements that make it somewhat sturdier than a pillow fort: It has to be temporary, covered incompletely on top, closed on at least three sides, etc. If you’re an observant Jew, as elements of my family are, you eat all your meals out there during the 8-day holiday. Some Jews even sleep in them. Far more commonly, the custom is to have guests as often as possible so that meals are extended and highly social. In some Jewish communities, succah-hopping is a thing. A good thing.


For the past 20+ yrs, I’ve been constructing it out of the same set of PVC pipes. I have a rubber mallet (comical enough that I should probably have bought it from Acme Hardware), which I use to bang poles into T-fittings. (For the middle uprights, they’re T-s with a third sleeve in the third dimension, which sounds way more complex than it actually is.)


This is fine except for my constant anxiety about wind overcoming the friction that holds the slippery tubes into their slippery connectors. So, every year after I’ve pounded the poles together — and, if you try to visualize the process you’ll see that pounding a tube into one sleeve unpounds it from the sleeve at the other end — I’ve drilled a hole through the sleeve and tube and inserted a weenie nail, just to add some charming shrapnel to the explosion when the wind suddenly tosses it apart like a child knocking down a house made of drinking straws.


So, I did some research and this year built a succah using a remarkable breakthrough in applied physics: threaded connectors. Here’s how.


Our succah is 10′ x 10′. Each side wall consists of two corner uprights, one upright in the middle, and four horizontal poles. The uprights have fittings with a threaded nut. The horizontals have fittings that screw into the nuts. The fittings are glued on to the poles using PVC glue. You simply screw all the pieces together.


It’s a little more complicated than that, though, because everything is. The threaded fittings are sleeves. But because they’re all designed to connect to lengths of pipe the way you might want to connect one garden hose to another, you can’t use them to connect pipes perpendicularly. But every joint in this construction connects a horizontal to an upright, which means you need 90-degree turns.


So you get yourself some plain old fittings, like the ones I used in the prior version. You attach them to the uprights. But those fittings are sleeves designed to join two pipes. The threaded fittings are also sleeves. How do you join a sleeve to a sleeve? With a pipe! So, for each join, cut a 2″ piece of pipe. Glue one end into the sleeve on the upright. Glue the other end to the threaded end of the threaded joins. Press them in so that they’re flush. Below is an example where the connector was a little too long, so the joins are not flush, purely for illustrative purposes I assure you:


Now assemble the pipes. Our uprights are 7′. The horizontals are 47″ each, which, with the additional lengths imposed by the fittings, worked out to about 10′. But if you need exactness, you should cut them to fit. Just remember to label them so they’ll go together next year. Also, wear eye protection: the pipes cut easily with a circular saw, but it creates a lot of flying plastic jaggies.
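
If you want to sanity-check the geometry before sawing, the arithmetic is simple enough to scribble in Python. The join layout is as described above; the per-join allowance is derived from the target length, so measure your own fittings rather than trusting my numbers:

```python
# Quick sanity check of the cut lengths for one 10-foot wall.
# Per horizontal run: corner join | 47" pole | middle-T join | 47" pole | corner join.
target = 120.0        # 10 feet, in inches
poles = 2 * 47.0      # the two cut segments
joins = 3             # two corner joins plus the middle T

slack = target - poles
print(f"the {joins} join assemblies must supply ~{slack:.0f} in. total,")
print(f"roughly {slack / joins:.1f} in. each; measure your fittings to confirm.")
```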


Here’s the invoice for the fittings:

[image: invoice for the fittings]


You might want to get yourself a spare or two. I’m still amazed that I got away without needing one.


Note that the outer rings tighten counter-clockwise. You have to get the pieces lined up pretty well to be able to screw them together. I suggest that you assemble it from the ground up so that you won’t have to magically suspend pieces in mid-air.


The succah worked out well. It seemed pretty robust for a plastic structure made out of pipes not intended for that purpose. It disassembled quite easily. My only concern is how many years we’ll get out of the threaded pieces; they seem rugged but so did I once. (Actually, I didn’t.)


September 26, 2017

[liveblog][PAIR] Rebecca Fiebrink on how machines can create new things

At the PAIR symposium, Rebecca Fiebrink of Goldsmiths University of London asks how machines can create new things.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

She works with sensors. ML can allow us to build new interactions from examples of human action and computer response. E.g., recognize my closed fist and use it to play some notes. Add more gestures. This is a conventional supervised training framework. But suppose you want to build a new gesture recognizer?

The first problem is the data set: there isn’t an obvious one to use. Also, would a 99% recognition rate be great or not so much? It depends on what was happening. If it goes wrong, you modify the training examples.

She gives a live demo — the Wekinator — using a very low-res camera (10×10 pixels maybe) image of her face to control a drum machine. It learns to play stuff based on whether she is leaning to the left or right, and immediately learns to change if she holds up her hand. She then complicates it, starting from scratch again, training it to play based on her hand position. Very impressive.
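
Wekinator is its own application, but the underlying pattern (supervised regression from live sensor features to synthesis parameters) fits in a few lines. A sketch with invented frame data and an arbitrary pitch mapping, not Wekinator’s actual internals:

```python
# Sketch of the Wekinator-style pattern: learn a mapping from sensor
# features (here, a flattened 10x10 grayscale frame) to a sound parameter.
# All training data and the pitch targets are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Demonstration examples: "lean left" frames -> low pitch, "lean right" -> high.
lean_left = rng.random((50, 100)) * np.linspace(1.0, 0.2, 100)
lean_right = rng.random((50, 100)) * np.linspace(0.2, 1.0, 100)
X = np.vstack([lean_left, lean_right])
y = np.array([220.0] * 50 + [880.0] * 50)   # target pitch in Hz

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=0).fit(X, y)

# At performance time, each incoming frame drives the synth parameter.
new_frame = rng.random((1, 100)) * np.linspace(0.2, 1.0, 100)
print("pitch ->", model.predict(new_frame)[0])
```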

Ten years ago Rebecca began with the thought that ML can help unlock the interactive potential of sensors. She plays an early piece by Anne Hege using Playstation golf controllers to make music:

Others make music with instruments that don’t look normal. E.g., Laetitia Sonami uses springs as instruments.

She gives other examples. E.g., a facial expression to meme system.

Beyond building new things, what are the consequences, she asks?

First, faster creation means more prototyping and wider exploration, she says.

Second, ML opens up new creative roles for humans. For example, Sonami says, playing an instrument now can be a bit wild, like riding a bull.

Third, ML lets more people be creators and use their own data.

Rebecca teaches a free MOOC on Kadenze: Machine Learning for Musicians and Artists.


[liveblog][PAIR] Doug Eck on creativity

At the PAIR Symposium, Doug Eck, a research scientist at Google Magenta, begins by playing a video:

Douglas Eck – Transforming Technology into Art from Future Of StoryTelling on Vimeo.

Magenta is the part of Google Brain that explores creativity.
By the way:

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He talks about three ideas Magenta has come to for “building a new kind of artist.”

1. Get the right type of data. It’s important to get artists to share and work with them, he says.

Magenta has been trying to get neural networks to compose music. They’ve learned that rather than trying to model musical scores, it’s better to model performances captured as MIDI. They have tens of thousands of performances. From this they were able to build a model that tries to predict the piano-roll view of the music. At any moment, should the AI stay at the same time, stacking up notes into chords, or move forward? What are the next notes? Etc. They are not yet capturing much of the “geometry” of, say, Chopin: the piano-roll-ish vision of the score. (He plays music created by ML trained on scores and one trained on performances. The score-based one is clipped. The other is far more fluid and expressive.)
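
Magenta’s real models are neural networks, but the framing (predict the next pitch/time-shift event from a stream of performance events) can be illustrated with a toy Markov model over invented MIDI-like data:

```python
# Toy next-event model over a MIDI-like performance stream. Magenta's real
# models are neural; this bigram counter only illustrates the framing:
# given what was just played, predict the next (pitch, time-shift) event.
from collections import Counter, defaultdict

# Invented performance: (MIDI pitch, milliseconds since previous event).
performance = [(60, 0), (64, 120), (67, 110), (60, 500),
               (64, 130), (67, 100), (72, 480), (64, 120)]

transitions = defaultdict(Counter)
for prev, nxt in zip(performance, performance[1:]):
    transitions[prev][nxt] += 1

def predict_next(event):
    counts = transitions[event]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next((64, 120)))   # most frequent successor of this event
```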

He talks about training ML to draw based on human drawings. He thinks running human artists’ work through ML could point out interesting facets of them.

He points to the playfulness in the drawings created by ML from simple human drawings. ML trained on pig drawings interpreted a drawing of a truck as pig-like.

2. Interfaces that work. Guitar pedals are the perfect interface: they’re indestructible, clear, etc. We should do that for AI musical interfaces, but the sw is so complex technically. He points to the NSynth sound maker and AI Duet from Google Creative Lab. (He also touts deeplearn.js.)

3. Learning from users. Can we use feedback from users to improve these systems?

He ends by pointing to the blog, datasets, discussion list, and code at g.co/magenta.


[liveblog] Google AI Conference

I am, surprisingly, at the first PAIR (People + AI Research) conference at Google, in Cambridge. There are about 100 people here, maybe half from Google. The official topic is: “How do humans and AI work together? How can AI benefit everyone?” I’ve already had three eye-opening conversations and the conference hasn’t even begun yet. (The conference seems admirably gender-balanced in audience and speakers.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The great Martin Wattenberg (half of Wattenberg – Fernanda Viégas) kicks it off, introducing John Giannandrea, a VP at Google in charge of AI, search, and more.

John says that every vertical will be affected by this. “It’s important to get the humanistic side of this right.” He says there are 1,300 languages spoken worldwide, so if you want to reach everyone with tech, machine learning can help. Likewise with health care, e.g. diagnosing retinal problems caused by diabetes. Likewise with social media.

PAIR intends to use engineering and analysis to augment expert intelligence, i.e., professionals in their jobs, creative people, etc. And “how do we remain inclusive? How do we make sure this tech is available to everyone and isn’t used just by an elite?”

He’s going to talk about interpretability, controllability, and accessibility.

Interpretability. Google has replaced all of its language translation software with neural network-based AI. He shows an example of Hemingway translated into Japanese and then back into English. It’s excellent but still partially wrong. A visualization tool shows a cluster of three strings in three languages, showing that the system has clustered them together because they are translations of the same sentence. [I hope I’m getting this right.] Another example: integrated gradients show that the system has identified a photo as a fireboat because of the streams of water coming from it. “We’re just getting started on this.” “We need to invest in tools to understand the models.”
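
Integrated gradients is a published attribution method: average the model’s gradients along a straight path from a baseline to the input, then scale by the input difference. A minimal sketch on a toy differentiable model, not Google’s implementation:

```python
# Minimal integrated-gradients sketch on a toy differentiable model.
# Real use is on deep nets; the model and input here are invented.
import numpy as np

w = np.array([2.0, -1.0, 0.5])

def model(x):          # toy "network": sigmoid of a dot product
    return 1 / (1 + np.exp(-(w @ x)))

def grad(x):           # analytic gradient of the model w.r.t. x
    p = model(x)
    return p * (1 - p) * w

x = np.array([1.0, 0.5, -0.2])   # input to explain
baseline = np.zeros_like(x)

# Average gradients along the straight path from baseline to input,
# then scale by (input - baseline) to get per-feature attributions.
alphas = np.linspace(0, 1, 100)
avg_grad = np.mean([grad(baseline + a * (x - baseline)) for a in alphas], axis=0)
attributions = (x - baseline) * avg_grad

# Sanity check: attributions should sum to ~ model(x) - model(baseline).
print(attributions, "sum:", attributions.sum())
```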

Controllability. These systems learn from labeled data provided by humans. “We’ve been putting a lot of effort into using inclusive data sets.” He shows a tool that lets you visually inspect the data to see the facets present in them. He shows another example of identifying differences to build more robust models. “We had people worldwide draw sketches. E.g., draw a sketch of a chair.” In different cultures people draw different stick-figures of a chair. [See Eleanor Rosch on prototypes.] And you can build constraints into models, e.g., male and female. [I didn’t get this.]

Accessibility. Internal research from YouTube built a model for recommending videos. Initially it just looked at how many users watched it. You get better results if you look not just at the clicks but the lifetime usage by users. [Again, I didn’t get that accurately.]

Google open-sourced TensorFlow, Google’s AI tool. “People have been using it for everything from sorting cucumbers to tracking the husbandry of cows.” Google would never have thought of these applications.

AutoML: learning to learn. Can we figure out how to enable ML to learn automatically? In one case, it looks at models to see if it can create more efficient ones. Google’s AIY lets DIY-ers build AI in a cardboard box, using Raspberry Pi. John also points to an Android app that composes music. Also, Google has worked with Geena Davis to create sw that can identify male and female characters in movies and track how long each speaks. It discovered that movies that have a strong female lead or co-lead do better financially.

He ends by emphasizing Google’s commitment to open sourcing its tools and research.


Fernanda and Martin talk about the importance of visualization. (If you are not familiar with their work, you are leading deprived lives.) When F&M got interested in ML, they talked with engineers. “ML is very different. Maybe not as different as software is from hardware. But maybe. We’re just finding out.”

M&F also talked with artists at Google. He shows photos of imaginary people by Mike Tyka created by ML.

This tells us that AI is also about optimizing subjective factors. ML for everyone: Engineers, experts, lay users.

Fernanda says ML spreads across all of Google, and even across Alphabet. What does PAIR do? It publishes. It’s interdisciplinary. It does education. E.g., TensorFlow Playground: a visualization of a simple neural net used as an intro to ML. They open-sourced it, and the Net has taken it up. Also, a journal called Distill.pub aimed at explaining ML and visualization.

She “shamelessly” plugs deeplearn.js, tools for bringing AI to the browser. “Can we turn ML development into a fluid experience, available to everyone?”
What experiences might this unleash, she asks.

They are giving out faculty grants. And expanding the Brain residency for people interested in HCI and design…even in Cambridge (!).


September 19, 2017

[bkc] Hate speech on Facebook

I’m at a Very Special Harvard Berkman Klein Center for Internet & Society Tuesday luncheon featuring Monika Bickert, Facebook’s Head of Global Policy Management, in conversation with Jonathan Zittrain. Monika is in charge of what types of content can be shared on FB, how advertisers and developers interact with the site, and FB’s response to terrorist content. [NOTE: I am typing quickly, getting things wrong, missing nuance, filtering through my own interests and biases, omitting what I can’t hear or parse, and not using a spelpchecker. TL;DR: Please do not assume that this is a reliable account.]

Monika: We have more than 2B users…

JZ: Including bots?

MB: Nope, verified. Billions of messages are posted every day.

[JZ posts some bullet points about MB’s career, which is awesome.]

JZ: Audience, would you want to see photos of abused dogs taken down? Assume they’re put up without context. [It sounds to me like more do not want it taken down.]

MB: The Guardian covered this. [Maybe here?] The useful part was that it highlighted how much goes into the process of deciding these things. E.g., what counts as mutilation of an animal? The Guardian published what it said were FB’s standards, not all of which were.

MB: For user generated content there’s a set of standards that’s made public. When a comment is reported to FB, it goes to a FB content reviewer.

JZ: What does it take to be one of those? What does it pay?

MB: It’s not an existing field. Some have content-area expertise, e.g., terrorism. It’s not a minimum wage sort of job. It’s a difficult, serious job. People go through extensive training, and continuing training. Each reviewer is audited. They take quizzes from time to time. Our policies change constantly. We have something like a mini legislative session every two weeks to discuss proposed policy changes, considering internal suggestions, including international input, and external expert input as well, e.g., ACLU.

MB: About animal abuse: we consider context. Is it a protest against animal cruelty? After a natural disaster, you’ll see awful images. It gets very complicated. E.g., someone posts a photo of a bleeding body in Syria with no caption, or just “Wow.” What do we do?

JZ: This is worlds away from what lawyers learn about the First Amendment.

MB: Yes, we’re a private company so the Amendment doesn’t apply. Behind our rules is the idea that FB should be a place where people feel safe connecting and expressing themselves. You don’t have to agree with the content, but you should feel safe.

JZ: Hate speech was defined as an attack against a protected category…

MB: We don’t allow hate speech, but no two people define it the same way. For us, it’s hate speech if you are attacking a person or a group of people based upon a protected characteristic — race, gender, gender identification, etc. Sounds easy in concept, but applying it is hard. Our rule is if I say something about a protected category and it’s an attack, we’d consider it hate speech and remove it.

JZ: The Guardian said that in training there’s a quiz. Q: Who do we protect: Women drivers, black children, or white men? A: White men.

MB: Not our policy any more. Our policy was that if there’s another characteristic beside the protected category, it’s not hate speech. So, attacking black children was ok but not white men, because of the inclusion of “children.” But we’ve changed that. Now we would consider attacks on women drivers and black children as hate speech. But when you introduce other characteristics such as profession, it’s harder. We’re evaluating and testing policies now. We try marking content and doing a blind test to see how it affects outcomes. [I don’t understand that. Sorry.]

JZ: Should the internal policy be made public?

MB: I’d be in favor of it. Making the training decks transparent would also be useful. It’s easier if you make clear where the line is.

JZ: Do protected categories shift?

MB: Yes, generally. I’ve been at FB for 5.5 yrs, in this area for 4 yrs. Overall, we’ve gotten more restrictive. Sometimes something becomes a topic of news and we want to make sure people can discuss it.

JZ: Didi Delgado’s post “all white people are racist” was deleted. But it would have been deleted if it had said that all black people are racist, right?

MB: Yes. If it’s a protected characteristic, we’ll protect it. [Ah, if only life were that symmetrical.]

JZ: How about calls to violence, e.g., “Someone shoot Trump/Hillary”? Should it be taken down? [Sounds like most would let it stand.]

JZ: How about “Kick a person with red hair.” [most let it stand]

JZ: How about: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat.” [most let it stand] [fuck, that’s hard to see up on the screen.]

JZ: “Let’s beat up the fat kids.” [most let it stand]

JZ: “#stab and become the fear of the Zionist” [most take it down]

MB: We don’t allow credible calls for violence.

JZ: Suppose I, a non-public figure, posted “Post one more insult and I’ll kill you.”

MB: We’d take that down. We also look at the degree of violence. Beating up and kicking might not rise to the standard. Snapping someone’s neck would be taken down, although if it were purely instructions on how to do something, we’d leave it up. “Zionist” is often associated with hate speech, and stabbing is serious, so we’d take them down. We leave room for aspirational statements wishing some bad thing would happen. “Someone should shoot them all” we’d count as a call to violence. We also look for specificity, as in “Let’s kill JZ. He leaves work at 3.” We also look at the vulnerability of people; if it’s a dangerous situation, we’ll tend to treat all such things as calls to violence. [These are tough questions, but I’m not aligned with FB’s decisions on this.]

JZ: How long does someone spend reviewing this stuff?

MB: Some is easy. Nudity is nudity, although we let breast cancer photos through. But a beheading video is prohibited no matter what the context. Profiles can be very hard to evaluate. E.g., is this person a terrorist?

JZ: Given the importance of FB, does it seem right that these decisions reside with FB as a commercial entity. Or is there some other source that would actually be a relief?

MB: We’re not making these decisions in a silo. We reach out for opinions outside of the company. We have a Safety Advisory Board, a Global Safety Network [got that wrong, I think], etc.

JZ: These decisions are global? If I insult the Thai King…

MB: That doesn’t violate our global community standard. We have a group of academics around the world, and people on our team, who are counter-terrorism experts. It’s very much a conversation with the community.

JZ: FB requires real names, which can be a form of self-doxxing. Is the Real Name policy going to evolve?

MB: It’s evolved a little in terms of what counts as your real name, i.e., the name people call you as opposed to what’s on your driver’s license. Using your real name has always been a cornerstone of FB. A quintessential element of FB.

JZ: You don’t force disambiguation among all the Robert Smiths…

MB: When you communicate with people you know, you know you know them. We don’t want people to be communicating with people who are not who you think they are. When you share something on FB, it’s not simply public or private. You can choose which groups you want to share it with, so you know who will see it. That’s part of the real name policy as well.

MB: We have our community standards. Sometimes we get requests from countries to remove violations of their law, e.g., insults to the King of Thailand. If we get such a request, if it doesn’t violate the standards, we look if the request is actually about real law in that country. Then we ask if it is political speech; if it is, to the extent possible, we’ll push back on those requests. E.g., Germans have a little more subjectivity in their hate speech laws. They may notify us about something that violates those laws, and if it does not violate our global standards, we’ll remove it in Germany only. (It’s done by IP addresses, the language you’re using, etc.) When we do that, we include it in our 6 month reports. If it’s removed, you see a notice that the content is restricted in your jurisdiction.
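
Mechanically, what she is describing is jurisdiction-scoped visibility rather than global removal. A toy sketch of that logic, with hypothetical post IDs and country detection abstracted to a parameter:

```python
# Toy jurisdiction-scoped visibility check. Post IDs and restrictions are
# hypothetical; country detection (IP, language) is abstracted to a parameter.
LOCAL_RESTRICTIONS = {
    "lese_majeste_post": {"TH"},   # violates Thai law, not global standards
    "hate_speech_post": {"DE"},    # violates German law, not global standards
}

def visible(post_id, viewer_country, violates_global=False):
    if violates_global:
        return False               # removed everywhere
    return viewer_country not in LOCAL_RESTRICTIONS.get(post_id, set())

print(visible("lese_majeste_post", "TH"))   # False: withheld locally
print(visible("lese_majeste_post", "US"))   # True: visible elsewhere
```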

Q&A

Q: Have you spoken to users about people from different cultures and backgrounds reviewing their content?

A: It’s a legitimate question. E.g., when it comes to nudity, even a room of people as homogenous as this one will disagree. So, our rules are written to be very objective. And we’re increasingly using tech to make these decisions. E.g., it’s easy to automate the finding of links to porn or spam, and much harder for evaluating speech.

Q: What drives change in these policies and algorithms?

A: It’s constantly happening. And public conversation is helpful. And our reviewers raise issues.

Q: a) When there are very contentious political issues, how do you prevent bias? b) Are there checks on FB promoting some agenda?

A: a) We don’t have a rule saying that people from one or another country can review contentious posts. But we review the reviewers’ decisions every week. b) The transparency report we put out every six months is one such check. If we don’t listen to feedback, we tend to see news stories calling us out on it.

[Monika now quickly addresses some of the questions from the open question tool.]

Q: Would you send reports to Lumen? MB: We don’t currently record why decisions were made.

Q: How to prevent removal policies from being weaponized by trolls or censorious regimes? MB: We treat all reports the same — there’s an argument that we shouldn’t — but we don’t continuously re-review posts.

JZ: For all of the major platforms struggling with these issues, is it your instinct that it’s just a matter of incrementally getting this right, bringing in more people, continue to use AI, etc. OR do you think sometimes that this is just nuts; there’s got to be a better way.

MB: There’s a tension between letting anyone see what they want and having global standards. People say the US hates hate speech and the Germans not so much, but there’s actually a spectrum in each. The catch is that there’s content that you’re going to be ok seeing but we think is not ok to be shared.

[Monika was refreshingly direct, and these are, I believe, literally impossible problems. But I came away thinking that FB’s position has a lot to do with covering their butt at the expense of protecting the vulnerable. E.g., they treat all protected classes equally, even though some of us — er, me — are in top o’ the heap, privileged classes. The result is that FB applies a rule equally to all, which can bring inequitable results. That’s easier and safer, but it’s not like I have a solution to these intractable problems.]


September 3, 2017

Free e-book from Los Angeles Review of Books

I’m proud that my essay about online knowledge has been included in a free e-book collecting essays about the effect of the digital revolution, published by the Los Angeles Review of Books.

It’s actually the first essay in the book, which obviously is not arranged in order of preference, but probably means at least the editors didn’t hate it.



The next day: Thanks to a tweet by Siva Vaidhyanathan, I and a lot of people on Twitter have realized that all but one of the authors in this volume are male. I’d simply said yes to the editors’ request to re-publish my article. It didn’t occur to me to ask to see the rest of the roster even though this is an issue I care about deeply. LARB seems to feature diverse writers overall, but apparently not so much in tech.

On the positive side, this has produced a crowd-sourced list of non-male writers and thinkers about tech with a rapidity that is evidence of the pain and importance of this issue.

