Culture Archives - Joho the Blog

May 16, 2018

[liveblog] Aubrey de Grey

I’m at the CUBE Tech conference in Berlin. (I’m going to give a keynote on the book I’m finishing.) Aubrey de Grey begins his keynote by changing the question from “Who wants to get old?” to “Who wants Alzheimer’s?” because we’ve been brainwashed into thinking that aging is somehow good for us: we get wiser, get to retire, etc. Now we are developing treatments for aging. Ambiguity about aging is now “hugely damaging” because it hinders the support of research. E.g., his SENS Research Foundation is going too slowly because of funding constraints.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

“The defeat of aging via medicine is foreseeable now.” He says he has to be credible because people have been saying this forever and have been wrong.

“Why is aging still a problem?” One hundred years ago, a third of babies would die before they were one year old. We fixed this in the industrialized world through simple advances, e.g., hygiene, mosquito control, antibiotics. So why are diseases of old age so much harder to control? People think it’s because so many things go wrong with us late in life, interacting with one another and creating incredible complexity. But that’s not the main answer.

“Aging is easy to define: it is a side effect of being alive.” It’s a fact that the operation of the human body generates damage. It accumulates. The body tolerates a certain amount. When you pass that amount, you get pathologies of old age. Our approach has been to develop geriatric medicine to counteract those pathologies. That’s where most of the research goes.

[Slide: Aubrey de Grey’s metabolism diagram]

“Metabolism: The ultimate undocumented spaghetti code”

But that won’t work because the damage continues. Geriatric medicine bangs away at the pathologies, but will necessarily become less effective over time. “We make this mistake because of a misclassification we make.”

If you ask people to make categories of disease, they’ll come up with communicable, congenital, and chronic. Then most people add a fourth way of being sick: aging itself. It includes frailty, sarcopenia (loss of muscle), immunosenescence (aging of the immune system)… But that’s silly. Aging in a living organism is the same as aging in a machine. “Aging is the accumulation of damage to the body that occurs as an intrinsic side-effect of the body’s normal operation.” That means the categories are right, except aging covers columns 3 and 4. Column 3 — specific diseases such as Alzheimer’s and cancer — is also part of aging. This means that aging isn’t a blessing in disguise, and that we can’t say that the diseases in column 3 are high priorities for medicine while those in column 4 are not.

A hundred years ago a few people started to think about this and realized that if we tried to interfere with the process of aging earlier on, we’d do better. This became the field of gerontology. Some species age much more slowly than others. Maybe we can figure out the basis for that variation. But metabolism is really, really complicated. “This is the ultimate nightmare of uncommented spaghetti code.” We know so little about how the body works.

“There is another approach. And it’s completely bleeding obvious”: Periodically repair the damage. We don’t need to slow down the rate at which metabolism causes damage, which would mean engineering a system we don’t understand. “We don’t need to understand how metabolism causes damage.” Nor do we need to know what to do when the damage is too great, because we’re not going to let it get to that state. We do this with, say, antique cars. Preventive maintenance works. “The only question is, can we do it for a much more complicated machine like the human body?”

“We’re sidestepping our ignorance of metabolism and pathology. But we have to cope with the fact that damage is complicated.” All of the types of damage, from cell loss to extracellular matrix stiffening — there are 7 categories — can be repaired through a single approach: genetic repair. E.g., loss of cells can be repaired by replacing them using stem cells. Unfortunately, most of the funding is going only to this first category. SENS was created to enable research on the other categories. Aubrey talks about SENS’ work on protecting cells from the bad effects of cholesterol.

He points to another group (unnamed) that has reinvented this approach and is getting a lot of notice.

He says longevity is not what people think it is. These therapies will let people stay alive longer, but they will also stay youthful longer. “Longevity is a side effect of health.”

Will this be only for the rich? Overpopulation? Boredom? Pension collapse? We’re taking care of overpopulation by cleaning up its effects, he says. He says there are solutions to these problems. But there are choices we have to make. No one wants to get Alzheimer’s. We can’t have it both ways. Either we want to keep people healthy or not.

He says SENS has been successful enough that they’ve been able to spin out some of the research into commercial operations. But we need to carry on in the non-profit research world as well. Project 21 aims at human rejuvenation clinical trials.


Banks everywhere

I just took a 45-minute walk through Berlin and did not pass a single bank. I know this because I was looking for an ATM.

In Brookline, you can’t walk a block without passing two banks. When a local establishment goes out of business, the chances are about 90 percent that a bank is going to go in. The town is now 83 percent banks.[1]

[Pie chart of businesses]


[1] All figures are approximate.


May 10, 2018

When Edison chose not to invent speech-to-text tech

In 1911, the former mayor of Kingston, Jamaica, wrote a letter [pdf] to Thomas Alva Edison declaring that “The days of sitting down and writing one’s thoughts are now over” … at least if Edison were to agree to take his invention of the voice recorder just one step further and invent a device that transcribes voice recordings into text. It was, alas, an idea too audacious for its time.

Here’s the text of Philip Cohen Stern’s letter:

Dear Sir :-

Your world wide reputation has induced me to trouble you with the following :-

As by talking in the Gramaphone [sic] we can have our own voices recorded why can this not in some way act upon a typewriter and reproduce the speech in typewriting

Under the present condition we dictate our matter to a shorthand writer who then has to typewrite it. What a labour saving device it would be if we could talk direct to the typewriter itself! The convenience of it would be enormous. It frequently occurs that a man’s best thoughts occur to him after his business hours and afetr [sic] his stenographer and typist have left and if he had such an instrument he would be independent of their presence.

The days of sitting down and writing out one’s thoughts are now over. It is not alone that there is always the danger in the process of striking out and repairing as we go along, but I am afraid most business-men have lost the art by the constant use of stenographer and their thoughts won’t run into their fingers. I remember the time very well when I could not think without a pen in my hand, now the reverse is the case and if I walk about and dictate the result is not only quicker in time but better in matter; and it occurred to me that such an instrument as I have described is possible and that if it be possible there is no man on earth but you who could do it

If my idea is worthless I hope you will pardon me for trespassing on your time and not denounce me too much for my stupidity. If it is not, I think it is a machine that would be of general utility not only in the commercial world but also for Public Speakers etc.

I am unfortunately not an engineer only a lawyer. If you care about wasting a few lines on me, drop a line to Philip Stern, Barrister-at-Law at above address, marking “Personal” or “Private” on the letter.

Yours very truly,
[signed] Philip Stern.

At the top, Edison has written:

The problem you speak of would be enormously difficult I cannot at present time imagine how it could be done.

The scan of the letter lives at Rutgers’ Thomas A. Edison Papers Digital Edition site: “Letter from Philip Cohen Stern to Thomas Alva Edison, June 5th, 1911,” Edison Papers Digital Edition, accessed May 6, 2018. Thanks to Rutgers for mounting the collection and making it public. And a special thanks to Lewis Brett Smiler, the extremely helpful person who noted Stern’s letter to my sister-in-law, Meredith Sue Willis, as a result of a talk she gave recently on The Novelist in the Digital Age.

By the way, here’s Philip Stern’s obituary.


March 24, 2018

Sixteen speeches

In case you missed any of today’s speeches at The March for Our Lives, here’s a page that has sixteen of them.

I, on the other hand, am speechless.

Comments Off on Sixteen speeches

January 11, 2018

Artificial water (+ women at PC Gamer)

I’ve long wondered — like for a couple of decades — when software developers who write algorithms that produce beautiful animations of water will be treated with the respect accorded to painters who create beautiful paintings of water. Both require the creators to observe carefully, choose what they want to express, and apply their skills to realizing their vision. When it comes to artistic vision or merit, are there any serious differences?

In the January issue of PC Gamer, Philippa Warr [twitter: philippawarr] — recently snagged from Rock, Paper, Shotgun — points to v r 3, a museum of water animations put together by Pippin Barr. (It’s conceivable that Pippin Barr is Philippa’s hobbit name. I’m just putting that out there.) The museum is software you download (here) that displays 24 varieties of computer-generated water, from the complex and realistic, to simple textures, to purposefully stylized low-information versions.


Philippa also points to the Seascape page by Alexander Alekseev, where you can read the code that procedurally produces an astounding graphic of the open sea. You can fiddle with the algorithm directly and immediately see the results. (Thank you, Alexander, for putting this out under a Creative Commons license.) Here’s a video someone made of the result:

Philippa also points to David Li’s Waves where you can adjust wind, choppiness, and scale through sliders.

More than ten years ago we got to the point where bodies of water look stunning in video games. (Falling water is a different question.) In ten years, perhaps we’ll be there with hair. In the meantime, we should recognize software designers as artists when they produce art.
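Procedural water like Seascape’s is typically built by layering many simple waves of different frequencies and directions. Here is a minimal Python sketch of that sum-of-sines idea; the constants are arbitrary choices for illustration, not Alekseev’s actual parameters:

```python
import math

def wave_height(x, z, t, octaves=4):
    """Sum-of-sines water surface: each octave adds a shorter,
    weaker wave travelling in a rotated direction."""
    height = 0.0
    amplitude = 1.0
    frequency = 0.16
    angle = 0.0
    for _ in range(octaves):
        # Direction of travel for this wave component.
        dx, dz = math.cos(angle), math.sin(angle)
        # Phase advances with position along the direction, and with time.
        phase = (dx * x + dz * z) * frequency + t
        height += amplitude * math.sin(phase)
        # Next octave: higher frequency, lower amplitude, new direction.
        frequency *= 1.9
        amplitude *= 0.22
        angle += 1.2
    return height
```

Evaluating `wave_height` over a grid of (x, z) points, once per frame with increasing `t`, yields an animated height field; real shaders like Seascape add noise, sharpening, and lighting on top of the same skeleton.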



Good work, PC Gamer, in increasing the number of women reviewers, and especially as members of your editorial staff. As a long-time subscriber I can say that their voices have definitely improved the magazine. More please!

Comments Off on Artificial water (+ women at PC Gamer)

December 15, 2017

[liveblog] Geun-Sik Jo on AR and video mashups

I’m at the STEAM ed Finland conference in Jyväskylä. Geun-Sik Jo is a professor at Inha University in Seoul. He teaches AI and is an augmented reality expert. He also has a startup using AR for aircraft maintenance [pdf].

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Why do we need mashups? To foster innovation. To create new systems very efficiently. He uses the integration of Google Maps into Craigslist as a simple example.

Interactive video — video with clickable spots — is often a mashup: click and go to a product page or send a text to a friend. E.g., http://www.raptmedia.

TVs today are computers with their own operating systems and applications. Prof. Jo shows a video of AR TV. The “screen” is a virtual image displayed on special glasses.

If we mash up services, linked data clouds, TV content, and social media into an AR device, “we can do nice things.”

He shows some videos created by his students that present AR objects that are linked to the Internet: a clickable travel ad, location-based pizza ordering, a very cool dance instruction video, a short movie.

He shows a demo of the AI Content Creation System that can make movies. In the example, it creates a drama, mashing up scenes from multiple movies. The system identifies the characters, their actions, and their displayed emotions.

Is this creative? “I’m not a philosopher. I’m explaining this from the engineering point of view,” he says modestly. [If remixes count as creativity — and I certainly think they do — then it’s creative. Does that mean the AI system is creative? Not necessarily in any meaningful sense. Debate amongst yourselves.]

Comments Off on [liveblog] Geun-Sik Jo on AR and video mashups

[liveblog] Sonja Amadae on computational creativity

I’m at the STEAM ed Finland conference in Jyväskylä. Sonja Amadae at Swansea University (also currently at Helsinki U.) works on robotic ethics. She will argue in this talk that computers are algorithmic, that they only do what they’re programmed to do, that they don’t understand what they’re doing and they don’t feel human experience. AI is, she concludes, a tool.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

AI is like a human prosthetic that helps us walk. AI is an enhancement of human capabilities.

She will talk about about three cases.

Case 1: Generating a Rembrandt

A bank funded a project [cool site about it] to see what would happen if a computer had all of the data about Rembrandt’s portraits. They quantified the paintings: types, facial aspects including the size and distance of facial features, depth, contour, etc. They programmed the algorithm to create a portrait. The result was quite good. People were pleased. Of course, it painted a white male. Is that creativity?

We are now recognizing the biases widespread in AI. E.g., “Biased algorithms are everywhere, and no one seems to care,” by Will Knight in MIT Tech Review. She also points to the time that Google mistakenly tagged black people as “gorillas.” So, we know there are limitations.

So, we fix the problem…and we end up with facial recognition systems so good that China can identify jaywalkers from surveillance cams, and then they post their images and names on large screens at the intersections.

Case 2: Forgery detection

The aim of one project was to detect forgeries. It was built on work done by Marits Michel van Dantzig in the 1950s. He looked at the brushstrokes on a painting; artists have signature brushstrokes. Each painting has on average 80,000 brushstrokes. A computer can compare a suspect painting’s brushstrokes with the legitimate brushstrokes of the artist. The result: the AI could identify forgeries 80% of the time from a single stroke.
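The underlying idea — quantify each stroke as a feature vector and measure how far a suspect stroke falls from the artist’s typical stroke — can be sketched in a few lines of Python. This is an illustration of the general approach, not the actual model from the project; the features, data, and threshold are invented:

```python
import statistics

def stroke_score(stroke, reference_strokes):
    """Average distance of one stroke's features from the artist's
    mean stroke, in units of per-feature standard deviation."""
    n = len(stroke)
    means = [statistics.mean(s[i] for s in reference_strokes) for i in range(n)]
    # Guard against a zero stdev (identical reference values).
    stdevs = [statistics.stdev(s[i] for s in reference_strokes) or 1.0
              for i in range(n)]
    return sum(abs(stroke[i] - means[i]) / stdevs[i] for i in range(n)) / n

def looks_forged(stroke, reference_strokes, threshold=3.0):
    """Flag a stroke that sits far outside the artist's signature range."""
    return stroke_score(stroke, reference_strokes) > threshold
```

With, say, (width, curvature) pairs measured from an artist’s verified strokes as the reference set, a genuine stroke scores near zero while a forger’s stroke lands many standard deviations out, which is roughly why a single stroke can be so informative.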

Case 3: Computational creativity

She cites Wikipedia on Computational Creativity because she thinks it gets it roughly right:

Computational creativity (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.
The goal of computational creativity is to model, simulate or replicate creativity using a computer, to achieve one of several ends:

  • To construct a program or computer capable of human-level creativity.

  • To better understand human creativity and to formulate an algorithmic perspective on creative behavior in humans.

  • To design programs that can enhance human creativity without necessarily being creative themselves.

She also quotes John McCarthy:

‘To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities, or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person.’

If you google “computational art” you’ll see many pictures created computationally. [Or here.] Is it genuine creativity? What’s going on here?

We know that AI’s products can be taken as human. A poem created by AI won a poetry contest for humans. E.g., “A home transformed by the lightning the balanced alcoves smother” But the AI doesn’t know it’s making a poem.

Can AI make art? Well, art is in the eye of the beholder, so if you think it’s art, it’s art. But philosophically, we need to recall Turing-Church Computability which states that the computation “need not be intelligible to the one calculating.” The fact that computers can create works that look creative does not mean that the machines have the awareness required for creativity.

Can the operations of the brain be simulated on a computer? The Turing-Church statement says yes. But now we have computing so advanced that it’s unpredictable, probabilistic, and is beyond human capability. But the computations need not be artistic to the one computing.

Computability has limits:

1. Information and data are not meaning or knowledge.

2. Every single moment of existence is unique in the universe. Every single moment we see a unique aspect of the world. A Turing computer can’t see the outside world. It only has what’s internal to it.

3. The human mind has existential experience.

4. The mind can reflect on itself.

5. Scott Aaronson says that humans can exercise free will and AI cannot, based on quantum theory. [Something about quantum free states.]

6. The universe has non-computable systems. Equilibrium paths?

“Aspect seeing” means that we can make a choice about how we see what we see. And each moment of each aspect is unique in time.

In SF, the SPCA uses a robot to chase away homeless people. Robots cannot exercise compassion.

Computers compute. Humans create. Creativity is not computable.


Q: [me] Very interesting talk. What’s at stake in the question?

A: AI has had such a huge presence in our lives. There’s a power to thinking about rationality as computation. It gets best articulated in game theory. Can we conclude that this game-theoretical rationality — the foundational understanding of rationality — is computable? Do humans bring anything to the table? This leads to an argument for the obsolescence of the human. If we’re just computational, then we aren’t capable of any creativity. Or free will. That’s what’s ultimately at stake here.

Q: Can we create machines that are irrational, and have them bring a more human creativity?

A: There are many more types of rationality than the game theory sort. E.g., we are rational in connection with one another working toward shared goals. The dichotomy between the rational and irrational is not sufficient.

Comments Off on [liveblog] Sonja Amadae on computational creativity

December 5, 2017

[liveblog] Conclusion of Workshop on Trustworthy Algorithmic Decision-Making

I’ve been at a two-day workshop sponsored by Michigan State University and the National Science Foundation: “Workshop on Trustworthy Algorithmic Decision-Making.” After multiple rounds of rotating through workgroups iterating on five different questions, each group presented its findings — questions, insights, areas of future research.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.

Seriously, I cannot capture all of this.

Conduct of Data Science

What are the problems?

  • Who defines and how do we ensure good practice in data science and machine learning?

Why is the topic important? Because algorithms are important. And they have important real-world effects on people’s lives.

Why is the problem difficult?

  • Wrong incentives.

  • It can be difficult to generalize practices.

  • Best practices may be good for one goal but not another, e.g., efficiency but not social good.

  • Lack of shared concepts and vocabulary.

How to mitigate the problems?

  • Change incentives

  • Increase communication via vocabularies, translations

  • Education through MOOCS, meetups, professional organizations

  • Enable and encourage resource sharing: an open source lesson about bias, code sharing, data set sharing

Accountability group

The problem: How to integratively assess the impact of an algorithmic system on the public good? “Integrative” = the impact may be positive and negative and affect systems in complex ways. The impacts may be distributed differently across a population, so you have to think about disparities. These impacts may well change over time.

We aim to encourage work that is:

  • Aspirationally causal: measuring outcomes causally, but not always through randomized control trials.

  • The goal is not to shut down algorithms but to make positive contributions that generate solutions.

This is a difficult problem because:

  • Lack of variation in accountability, enforcements, and interventions.

  • It’s unclear what outcomes should be measured and how; this is context-dependent.

  • It’s unclear which interventions are the highest priority

Why progress is possible: There’s a lot of good activity in this space. And it’s early in the topic so there’s an ability to significantly influence the field.

What are the barriers for success?

  • Incomplete understanding of contexts. So, think of it in terms of socio-cultural approaches, and make it interdisciplinary.

  • The topic lies between disciplines. So, develop a common language.

  • High-level triangulation is difficult. Examine the issues at multiple scales, multiple levels of abstraction. Where you assess accountability may vary depending on what level/aspect you’re looking at.

Handling Uncertainty

The problem: How might we holistically treat and attribute uncertainty through data analysis and decision systems? Uncertainty exists everywhere in these systems, so we need to consider how it moves through a system. This runs from choosing data sources to presenting results to decision-makers and people impacted by these results, and beyond that its incorporation into risk analysis and contingency planning. It’s always good to know where the uncertainty is coming from so you can address it.

Why difficult:

  • Uncertainty arises from many places

  • Recognizing and addressing uncertainties is a cyclical process

  • End users are bad at evaluating uncertain info and incorporating uncertainty in their thinking.

  • Many existing solutions are too computationally expensive to run on large data sets

Progress is possible:

  • We have sampling-based solutions that provide a framework.

  • Some app communities are recognizing that ignoring uncertainty is reducing the quality of their work

How to evaluate and recognize success?

  • A/B testing can show that decision making is better after incorporating uncertainty into analysis

  • Statistical/mathematical analysis

Barriers to success

  • Cognition: Train users.

  • It may be difficult to break this problem into small pieces and solve them individually

  • Gaps in theory: many of the problems cannot currently be solved algorithmically.

The presentation ends with a note: “In some cases, uncertainty is a useful tool.” E.g., it can make the system harder to game.

Adversaries, workarounds, and feedback loops

Adversarial examples: add a perturbation to a sample and it disrupts the classification. An adversary tries to find those perturbations to wreck your model. Sometimes this is used not to hack the system so much as to prevent the system from, for example, recognizing your face during a protest.
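A standard version of this is the “fast gradient sign” attack (Goodfellow et al.): nudge every input feature in whichever direction lowers the score of the true class. For a linear classifier the gradient is just the weight vector, so the attack collapses to a few lines. A toy Python sketch, with the model and epsilon invented for illustration:

```python
def classify(weights, bias, x):
    """Toy linear classifier: positive score -> class 1, else class 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def fgsm_perturb(weights, x, true_label, eps=0.5):
    """Fast-gradient-sign-style attack on the linear model: move each
    feature by eps in the direction that hurts the true class's score."""
    sign = -1 if true_label == 1 else 1
    return [xi + sign * eps * (1 if w > 0 else -1)
            for xi, w in zip(x, weights)]
```

Each feature moves by at most `eps`, so the perturbed input can remain close to the original (for images, visually indistinguishable) while the classification flips.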

Feedback loops: A recidivism prediction system says you’re likely to commit further crimes, which sends you to prison, which increases the likelihood that you’ll commit further crimes.
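The self-reinforcing dynamic is easy to simulate: if the intervention triggered by a “high risk” prediction itself raises actual risk, the prediction confirms itself round after round. A toy sketch with all numbers invented:

```python
def simulate_feedback(base_risk, rounds=5, prison_effect=0.1):
    """Each round, a risk above 0.5 triggers imprisonment,
    which raises actual risk, which confirms the next prediction."""
    risk = base_risk
    history = []
    for _ in range(rounds):
        predicted_high = risk > 0.5
        if predicted_high:
            risk = min(1.0, risk + prison_effect)
        history.append(round(risk, 2))
    return history
```

Someone who starts just above the threshold ratchets up toward maximum risk, while someone just below it never moves, even though the two started out nearly identical.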

What is the problem: How should a trustworthy algorithm account for adversaries, workarounds, and feedback loops?

Who are the stakeholders?

System designers, users, non-users, and perhaps adversaries.

Why is this a difficult problem?

  • It’s hard to define the boundaries of the system

  • From whose vantage point do we define adversarial behavior, workarounds, and feedback loops?

Unsolved problems

  • How do we reason about the incentives users and non-users have when interacting with systems in unintended ways?

  • How do we think about oversight and revision in algorithms with respect to feedback mechanisms?

  • How do we monitor changes, assess anomalies, and implement safeguards?

  • How do we account for stakeholders while preserving rights?

How to recognize progress?

  • Mathematical model of how people use the system

  • Define goals

  • Find stable metrics and monitor them closely

  • Proximal metrics. Causality?

  • Establish methodologies and see them used

  • See a taxonomy of adversarial behavior used in practice

Likely approaches

  • Security methodology for anticipating unintended behaviors and adversarial interactions. Monitor and measure.

  • Record and taxonomize adversarial behavior in different domains

  • Test. Try to break things.


Barriers:

  • Hard to anticipate unanticipated behavior

  • Hard to define the problem in particular cases.

  • Goodhart’s Law

  • Systems are born brittle

  • What constitutes adversarial behavior vs. a workaround is subjective.

  • Dynamic problem

Algorithms and trust

How do you define and operationalize trust?

The problem: What are the processes through which different stakeholders come to trust an algorithm?

Multiple processes lead to trust.

  • Procedural vs. substantive trust: are you looking at the weights of the algorithms (e.g.), or what were the steps to get you there?

  • Social vs personal: did you see the algorithm at work, or are you relying on peers?

These pathways are not necessarily predictive of each other.

Stakeholders build trust through multiple lenses and priorities:

  • the builders of the algorithms

  • the people who are affected

  • those who oversee the outcomes

Mini case study: a child services agency that does not want to be identified. [All of the following is 100% subject to my injection of errors.]

  • The agency uses a predictive algorithm. The stakeholders range from the children needing a family to New Yorkers as a whole. The agency knew what went into the model. “We didn’t buy our algorithm from a black-box vendor.” They trusted the algorithm because they staffed a technical team who had credentials and experience with ethics…and whom they trusted intuitively as good people. Few of these are the quantitative metrics that devs spend their time on. Note that FAT (fairness, accountability, transparency) metrics were not what led to trust.


  • Processes that build trust happen over time.

  • Trust can change or maybe be repaired over time.

  • “The timescales to build social trust are outside the scope of traditional experiments,” although you can perhaps find natural experiments.


  • Assumption of reducibility or transfer from subcomponents

  • Access to internal stakeholders for interviews and process understanding

  • Some elements are very long term



What’s next for this workshop

We generated a lot of scribbles, post-it notes, flip charts, Slack conversations, slide decks, etc. They’re going to put together a whitepaper that goes through the major issues, organizes them, and tries to capture the complexity while helping to make sense of it.

There are weak or no incentives to set appropriate levels of trust.

Key takeaways:

  • Trust is irreducible to FAT metrics alone

  • Trust is built over time and should be defined in terms of the temporal process

  • Isolating the algorithm as an instantiation misses the socio-technical factors in trust.

Comments Off on [liveblog] Conclusion of Workshop on Trustworthy Algorithmic Decision-Making

November 29, 2017

"The Walking Dead" is Negan

[SPOILERS??] There are no direct spoilers of the “So and So dies” sort in this post, but it assumes you are pretty much up to date on the current season of The Walking Dead.

The Walking Dead has become Negan. I mean the show itself.

Negan brings to the show a principle of chaos: you never know who he’s going to bash to death. This puts all the characters at risk, although perhaps some less so than others based on their fan-base attachments.

That adds some threat and tension of the sort that Game of Thrones used to have. But only if it’s a principle of chaos embedded within a narrative structure and set of characters that we care about. And for the prior season and the current one, there’s almost no narrative structure and, frankly, not that many characters who don’t feel like narrative artifices.

As a result, the main tension in the current season is exactly the same as it was at the beginning of last season, when we waited to find out who Negan would choose to bash to death. Negan was so random that the viewer discussions generally were attempts to anticipate what the writers wanted to do to us. They had to kill someone significant or else the threat level would go down. But they couldn’t kill so-and-so because s/he was too popular, or whatever. There were no intrinsic reasons why Negan would choose one victim over another — Wild Card! — so the reasons had to do with audience retention.

This entire season is random in that bad way. The writers are now Negan, choosing randomly among Team Rick’s characters. They’re going to kill off someone for some ratings-based reason, and we’re just waiting for them to make up their mind.

The series didn’t start out this way. It had characters in conflict, and characters in arcs. Rick and The Punisher. Carol and her sister. Daryl and his other brother Daryl. Gingerbeard and The Mullet. Now there’s nothing, maybe because every character’s arc has been the same: S/he becomes an empowered action star.

There are still some things I like about the show. For example, it’s heartening to watch them work on the female empowerment, although it’d be more interesting if they didn’t all become like Rick. And Negan is a pretty good villain. Sure, I could do with fewer predictable charming smiles, but he’s scary.

But I’ll be damned if in the last episode of this series [MADE-UP SPOILERS AHEAD] Team Rick (which will probably be Team Maggie by then) realizes that it has become Negan. I’ll be especially pissed off if the last shot is of the dying Jesus saying, “We are Negan.” Star wipe. Out. Puke.

Comments Off on "The Walking Dead" is Negan

October 25, 2017

[liveblog] John Palfrey’s new book (and thoughts on rules vs. models)

John Palfrey is doing a launch event at the Berkman Klein Center for his new book, Safe Spaces, Brave Spaces: Diversity and Free Expression in Education. John is the Head of School at Phillips Academy Andover, and for many years was the executive director of the Berkman Klein Center and the head of the Harvard Law School Library. He’s also the chairman of the board of the Knight Foundation. This event is being put on by the BKC, the Law Library, and Andover. His new book is available on paper, or online as an open access book. (Of course it is. It’s John Palfrey, people!)

[Disclosure: Typical conversations about JP, when he’s not present, attempt — and fail — to articulate his multi-faceted awesomeness. I’ll fail at this also, so I’ll just note that JP is directly responsible for my affiliation with the BKC and for my co-directorship of the Harvard Library Innovation Lab…and those are just the most visible ways in which he has enabled me to flourish as best I can. ]

Also, at the end of this post I have some reflections on rules vs. models, and the implicit vs. explicit.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

John begins by framing the book as an attempt to find a balance between diversity and free expression. Too often we have pitted the two against each other, especially in the past few years, he says: the left argues for diversity and the right argues for free expression. It’s important to have both, although he acknowledges that there are extremely hard cases where there is no reconciliation; in those cases we need rules and boundaries. But we are much better off when we can find common ground.

“This may sound old-fashioned in the liberal way. And that’s true,” he says. But we’re having this debate in part because young people have been advancing ideas that we should be listening to. We need to be taking a hard look.

Our institutions should be deeply devoted to diversity, equity and inclusion. Our institutions haven’t been as supportive of these as they should be, although they’re getting better at it, e.g. getting better at acknowledging the effects of institutional racism.

The diversity argument pushes us toward the question of “safe spaces.” Safe spaces are crucial in the same way that every human needs a place where everyone around them supports them and loves them, and where you can say dumb things. We all need zones of comfort, with rules implicit or explicit. It might be a room, a group, a virtual space… E.g., survivors of sexual assault need places where they know there are rules and they can express themselves without feeling at risk.

But, John adds, there should also be spaces where people are uncomfortable, where their beliefs are challenged.

Spaces of both sorts are experienced differently by different people. Privileged people like John experience spaces as safe that others experience as uncomfortable.

The examples in his book include: trigger warnings, safe spaces, the debates over campus symbols, the disinvitation of speakers, etc. These are very hard to navigate and call out for a series of rules or principles. Different schools might approach these differently. E.g., students from the Gann Academy, a local Jewish high school, are here tonight. They well might experience a space differently than students at Andover. Different schools well might need different rules.

Now John turns it over to students for comments. (This is very typical JP: A modest but brilliant intervention and then a generous deferral to the room. I had the privilege of co-teaching a course with him once, and I can attest that he is a brilliant, inspiring teacher. Sorry to be such a JP fanboy, but I am at least an evidence-based fanboy.) [I have not captured these student responses adequately, in some cases simply because I had trouble hearing them. They were remarkable, however. And I could not get their names with enough confidence to attempt to reproduce them here. Sorry!]

Student Responses

Student: I graduated from Andover and now I’m at Harvard. I was struck by the book’s idea that we need to get over the dichotomy between diversity and free expression. I want to address Chapter 5, about hate speech. It says each institution ought to assess its own values to come up with its principles about speech and diversity, and those principles ought to be communicated clearly and enforced consistently. But, I believe, we should in fact be debating what the baseline should be for all institutions. We don’t all have full options about what school we’re going to go to, so there ought to be a baseline we all can rely on.

JP: Great critique. Moral relativism is not a good idea. But I don’t think one size fits all. In the hardest cases, there might be sharpest limits. But I do agree there ought to be some sort of baseline around diversity, equity, and inclusion. I’d like to see that be a higher baseline, and we’ve worked on this at Andover. State universities are different. E.g., if a neo-Nazi group wants to demonstrate on a state school campus and they follow the rules laid out in the Skokie case, etc., they should be allowed to demonstrate. If they came to Andover, we’d say no. As a baseline, we might want to change the regulations so that the First Amendment doesn’t apply if the experience is detrimental to the education of the students; that would be a very hard line to draw. Even if we did, we still might want to allow local variations.

Student: Brave spaces are often built from safe spaces. E.g., at Andover we used Facebook to build a safe space for women to talk, in the face of academic competitions where misogyny was too common. This led to creating brave places where open, frank discussion across differences was welcomed.

JP: Yes, giving students a sense of safety so they can be brave is an important point. And, yes, brave spaces do often grow from safe spaces.

Andover student: I was struck by why diversity is important: the cross-pollination of ideas. But from my experience, a lot of that hasn’t occurred because we’re stuck in our own groups. There’s also typically a divide between the students and the faculty. Student activists are treated as if they’re just going through a phase. How do we bridge that gap?

JP: How do we encourage more cross-pollination? It’s a really hard problem for educators. I’ve been struck by the difference between teaching at Harvard Law and Andover in terms of the comfort with disagreeing across political divides; it was far more comfortable at the Law School. I’ve told students if you present a paper that disagrees with my point of view and argues for it beautifully, you’ll do better than parroting ideas back to me. Second, we have to stop using demeaning language to talk about student activists. BTW, there is an interesting dynamic, as teachers today may well have been activists when they were young and think of themselves as the reformers.

Student: [hard to hear] At Andover, our classes were seminar-based, which is a luxury not all students have. Also: Wouldn’t encouraging a broader spread of ideas create schisms? How would you create a school identity?

JP: This echoes the first student speaker’s point about establishing a baseline. Not all schools can have 12 students with two teachers in a seminar, as at Andover. We need to find a dialectic. As for schisms: we have to communicate values. Institutions are challenged these days but there is a huge place for them as places that convey values. There needs to be some top down communication of those values. Students can challenge those values, and they should. This gets at the heart of the problem: Do we tolerate the intolerant?

Student: I’m a graduate of Andover and currently at Harvard. My generation has grown up with the Internet. What happens when what is supposed to be a safe space becomes a brave space for some but not all? E.g., a dorm where people speak freely thinking it’s a safe space. What happens when the default values override what someone else views as comfortable? What is the power of an institution to develop, monitor, and mold what people actually feel? When communities engage in groupthink, how can an institution construct safe spaces?

JP: I don’t have an easy answer to this. We do need to remember that these spaces are experienced differently by different people, and the rules ought to reflect this. Some of my best learning came from late night bull sessions. It’s the duty of the institution to do what it can to enable that sort of space. But we also have to recognize that people who have been marginalized react differently. The rule sets need to reflect that fact.

Student: Andover has many different forum spaces available, from hallways to rooms. We get to decide when and where these conversations will occur. For a more traditional public high school where you only have a 30-person classroom as a forum, how do we have the difficult conversations that students at Andover choose to have in more intimate settings?

JP: The size and rule-set of the group matters enormously. Even in a traditional HS you can still break a class into groups. The answer is: How do you hack the space?

Student: I’m a freshman at Harvard. Before the era of safe spaces, we’d call them friends: people we can talk with and have no fear that our private words will be made public, and where we will not be judged. Safe spaces may exclude people, e.g., a safe space open only to women.

JP: Andover has a group for women of color. That excludes people, and for various reasons we think that’s entirely appropriate and useful.


Q [Terry Fisher]: You refer frequently to rule sets. If we wanted to have a discussion in a forum like this, you could announce a set of rules. Or the organizer could announce values, such as: we value respect, or we want people to take the best version of what others say. Or, you could not say anything and model it in your behavior. When you and I went to school, there were no rules in classrooms. It was all done by modeling. But this also meant that gender roles were modeled. My experience of you as a wonderful teacher, JP, is that you model values so well. It doesn’t surprise me that so many of your students talk with the precision and respectfulness that you model. I am worried about relying on rule sets, and doubt their efficacy for the long term. Rather, the best hope is people modeling and conveying better values, as in the old method.

JP: Students, Terry Fisher was my teacher. My answer will be incredibly tentative: It is essential for an institution to convey its values. We do this at Andover. Our values tell us, for example, that we don’t want gender-based bias and are aware that we are in a misogynist culture, and thus need reasonable rules. But, yes, modeling is the most powerful.

Q [Dorothy Zinberg]: I’ve been at Harvard for about 70 yrs and I have seen the importance of an individual in changing an institution. For example, McGeorge Bundy thought he should bring 12 faculty to Harvard from non-traditional backgrounds, including Erik Erikson who did not have a college degree. He had been a disciple of Freud’s. He taught a course at Harvard called “The Lifecycle.” Every Harvard senior was reading The Catcher in the Rye. Erikson was giving brilliant lectures, but I told him it was from his point of view as a man, and had nothing to do with the young women. So, he told me, a grad student, to write the lectures. No traditional professor would have done that. Also: for forming groups, there’s nothing like closing the door. People need to be able to let go and try a lot of ideas.

Q: I am from the Sudan. How do you create a safe space in environments that are exclusive. [I may have gotten that wrong. Sorry.] How do you acknowledge the native American tribes whose land this institution is built on, or the slaves who did the building?

JP: We all have that obligation. [JP gives some examples of the Law School recently acknowledging the slave labor, and the money from slave holders, that helped build the school.]

Q: You used a kitchen as an example of a safe space. Great example. But kitchens are not established or protected by any authority. It’s a new idea that institutions ought to set these up. Do you think there should be safe spaces that are privately set up as well as by institutions? Should some be permitted to exclude people or not?

(JP asks a student to respond): Institutional support can be very helpful when you have a diversity of students. Can institutional safe spaces supplement private ones? I’m not sure. And I do think exclusive groups have a place. As a consensus forms, it’s important to allow the marginalized voices to connect.

Q [head of Gann]: I’m a grad of Phillips Academy. As head of a religious school, we’re struggling with all these questions. Navigating these spaces isn’t just a political or intellectual activity. It is a work of the heart. If the institution thinks of this only as a rational activity and doesn’t tend to the hearts of our students, and is not explicit about the habits of heart we need to navigate these sensitive waters, only those with natural emotional skills will be able to flourish. We need to develop leaders who can turn hard conversations into generative ones. What would it look like to take on the work of developing social and emotional development?

JP: I’ve been to Gann and am confident that’s what you’re doing. And you can see evidence of Andover’s work on it in the students who spoke tonight. Someone asked me if a student became a Nazi, would you expel him? Yes, if it were apparent in his actions, but probably not for his thoughts. Ideally, our students won’t come to have those views because of the social and emotional skills they’re learning. But people in our culture do have those views. Your question brings it back to the project of education and of democracy.

[This session was so JP!]



A couple of reactions to this discussion without having yet read the book.

First, about Prof. Fisher’s comment: I think we are all likely to agree that modeling the behavior we want is the most powerful educational tool. JP and Prof. Fisher are both superb, well, models of this.

But, as Prof. Fisher noted in his question, the dominant model of discourse for our generation silently (and sometimes explicitly) favored males, white middle class values, etc. Explicit rules weren’t as necessary because we had internalized them and had stacked the deck against those who were marginalized by them. Now that diversity has thankfully become an explicit goal, and now that the Internet has thrown us into conversations across differences, we almost always need to make those rules explicit; a conversation among people from across divides of culture, economics, power, etc. that does not explicitly acknowledge the different norms under which the participants operate is almost certainly going to either fragment or end in misunderstanding.

(Clay Shirky and I had a collegial difference of opinion about this about fifteen years ago. Clay argued for online social groups having explicit constitutions. I argued for the importance of the “unspoken” in groups, and the damage that making norms explicit can cause.)

Second, about the need for setting a baseline: I’m curious to see what JP’s book says about this, because the evidence is that we as a culture cannot agree about what the baseline is: vociferous and often nasty arguments about this have been going on for decades. For example, what’s the baseline for inviting (or disinviting) people with highly noxious views to a private college campus? I don’t see a practical way forward for establishing a baseline answer. We can’t even get Texas schools to stop teaching Creationism.

So, having said that modeling is not enough, and having despaired at establishing a baseline, I think I am left being unhelpfully dialectical:

1. Modeling is essential but not enough.

2. We ought to be appropriately explicit about rules in order to create places where people feel safe enough to be frank and honest…

3. …But we are not going to be able to agree on a meaningful baseline for the U.S., much less internationally — “meaningful” meaning that it is specific enough that it can be applied to difficult cases.

4. But modeling may be the only way we can get to enough agreement that we can set a baseline. We can’t do it by rules because we don’t have enough unspoken agreement about what those rules should be. We can only get to that agreement by seeing our leading voices in every field engage across differences in respectful and emotionally truthful ways. So at the largest level, I find I do agree with Prof. Fisher: we need models.

5. But if our national models are to reflect the values we want as a baseline, we need to be thoughtful, reflective, and explicit about which leading voices we want to elevate as models. We tend to do this not by looking for rules but by looking for Prof. Fisher’s second alternative: values. For example, we say positively that we love John McCain’s being a “maverick” or Kamala Harris’ careful noting of the evidence for her claims, and we disdain Trump’s name-calling. Rules derive from values such as those. Values come before rules.

I just wish I had more hope about the direction we’re going in…although I do see hopeful signs in some of the model voices who are emerging, and most of all, in the younger generation’s embrace of difference.

