Joho the Blog - Let's just see what happens

June 28, 2017

Re-reading Hornblower

I read all of C.S. Forester’s Horatio Hornblower series when I was in high school.

I’m on a week of vacation — i.e., a nicer place to work — and have been re-reading them.

Why isn’t everyone re-reading them? They’re wonderful. Most of the seafaring descriptions are opaque to me, but it doesn’t matter. The stories are character-based and Forester is great at expressing personality succinctly, as well as taking us deep into Hornblower’s character over the course of the books. Besides, all the talk of binneys ’round the blaggard binge doesn’t get in the way of understanding the action.

Some prefer Patrick O’Brian’s Aubrey-Maturin “Master and Commander” series. They are wrong. I believe the Internet when it says O’Brian’s battles are more realistic because they’re based on actual events. I don’t care. I do care, however, about O’Brian’s clumsy construction of his main characters. I can sense the author trying to inflate them into three dimensions. Then they’re given implausible roles and actions.

Of course you may disagree with me entirely about that. But here’s the killer for me: O’Brian relies on long pages of back-and-forth dialogue…while not telling you who’s talking. I don’t like having to count back by twos to find the original speaker. All I need is an occasional, “‘Me, neither,’ said Jack.” Is that asking too much?

Anyway, take a look at Hornblower and the Atropos to see if you’re going to like the series. It begins with a few chapters of Hornblower arranging the logistics for the flotilla portion of Lord Nelson’s funeral. If you find yourself as engrossed in chapters about logistics as I was, you’re probably hooked forever.


June 13, 2017

Top 2 Beatles songs

About a week ago, out of the blue I blurted out to my family what the two best Beatles songs are. I pronounced this with a seriousness befitting the topic, and with a confidence born of the fact that it’s a ridiculous question and it doesn’t matter anyway.

Vulture just published a complete ranking of all Beatles songs.

Nailed it.

Their #1 selection is an obvious contender. #2 is controversial and probably intentionally so. But, obviously, I think it’s a good choice.

If you want to see what they chose: #1: Day in the Life. #2: Strawberry Fields.

By the way, the Vulture write-ups of each of the songs are good. At least the ones I read were. If you’re into this, the best book I’ve read is Ian MacDonald’s Revolution in the Head, which has an essay on each recording with comments about the social and personal context of the song and a learned explanation of the music. Astounding book.


June 6, 2017

[liveblog] metaLab

Harvard’s metaLab is giving an informal Berkman Klein talk about their work on designing for ethical AI. Jeffrey Schnapp introduces metaLab as “an idea foundry, a knowledge-design lab, and a production studio experimenting in the networked arts and humanities.” The discussion today will be about metaLab’s various involvements in the Berkman Klein – MIT Media Lab project on ethics and governance of AI. The room is packed with Fellows and the newly-arrived summer interns.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Matthew Battles and Jessica Yurkofsky begin by talking about Curricle, a “new platform for experimenting with shopping for courses.” How can the experience be richer, more visual, and use more of the information and data that Harvard has? They’ve come up with a UI that has three elements: traditional search, a visualization, and a list of the results.

They’ve been grappling with the ethics of putting forward new search algorithms. The design is guided by transparency, autonomy, and visualization. Transparency means that they make apparent how the search works, allowing students to assign weights to keywords. If Curricle makes recommendations, it will explain that it’s because other students like you have chosen it or because students like you have never done this, etc. Visualization shows students what’s being returned by their search and how it’s distributed.
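To make the transparency point concrete, here is a minimal sketch of what user-assigned keyword weights might look like under the hood. The scoring rule, course names, and descriptions are hypothetical illustrations of mine, not Curricle’s actual code or data.

```python
# Hypothetical sketch: rank courses by user-weighted keywords, a scoring
# rule simple enough to show back to the student (the transparency goal).
def score_course(description, keyword_weights):
    text = description.lower()
    # Each keyword contributes its weight times the number of times it appears.
    return sum(w * text.count(kw.lower()) for kw, w in keyword_weights.items())

# Toy catalog and weights, invented for illustration.
courses = {
    "Course A": "Tech ethics: AI, algorithms, and society",
    "Course B": "Machine learning: algorithms, models, and applications",
    "Course C": "Social history of the modern city",
}
weights = {"ethics": 2.0, "algorithms": 1.0}

for course in sorted(courses, key=lambda c: score_course(courses[c], weights), reverse=True):
    print(course, score_course(courses[course], weights))
```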

Similar principles guide a new project, AI Compass, that is the entry point for information about Berkman Klein’s work on the Ethics and Governance of AI project. It is designed to document the research being done and to provide a tool for surveying the field more broadly. They looked at how neural nets are visualized, how training sets are presented, and other visual metaphors. They are trying to find a way to present these resources in their connections. They have decided to use Conway’s Game of Life [which I was writing about an hour ago, which freaks me out a bit]. The game allows complex structures to emerge from simple rules. AI Compass is using animated cellular automata as icons on the site.
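For readers who haven’t run into it, the “simple rules” really are just a couple of lines of logic. Here is a minimal sketch in Python of one generation of Conway’s Game of Life, using a toy grid of my own rather than metaLab’s animation code:

```python
# Minimal sketch: one update step of Conway's Game of Life on a small
# wrapping grid. A live cell survives with 2 or 3 live neighbors; a dead
# cell becomes live with exactly 3; everything else is dead. That's all.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = live_neighbors(r, c)
            new[r][c] = 1 if n == 3 or (n == 2 and grid[r][c]) else 0
    return new

# A "glider": five live cells that crawl across the grid as the rules repeat.
grid = [[0] * 8 for _ in range(8)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = step(grid)
for row in grid:
    print("".join("#" if cell else "." for cell in row))
```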

metaLab wants to enable people to explore the information at three different scales. The macro scale shows all of the content arranged into thematic areas. This lets you see connections among the pieces. The middle scale shows the content with more information. At the lowest scale, you see the resource information itself, as well as connections to related content.

Sarah Newman talks about how AI is viewed in popular culture: the Matrix, Ahnuld, etc. We generally don’t think about AI as it’s expressed in the tools we actually use, such as face recognition, search, recommendations, etc. metaLab is interested in how art can draw out the social and cultural dimensions of AI. “What can we learn about ourselves by how we interact with, tell stories about, and project logic, intelligence, and sentience onto machines?” The aim is to “provoke meaningful reflection.”

One project is called “The Future of Secrets.” Where will our email and texts be in 100 years? And what does this tell us about our relationship with our tech? Why and how do we trust it? It’s an installation that’s been at the Museum of Fine Arts in Boston and recently in Berlin. People enter secrets that are printed out anonymously. People created stories, most of which weren’t true, often about the logic of the machine. People tended to project much more intelligence onto the machine than was there. Cameras were watching and would occasionally print out images from the show itself.

From this came a new piece (done with fellow Rachel Kalmar) in which a computer reads the secrets out loud. It will be installed at the Berkman Klein Center soon.

Working with Kim Albrecht in Berlin, the center is creating data visualizations based on the data that a mobile phone collects, including the accelerometer. These visualizations let us see how the device is constructing an image of the world we’re moving through. That image is messy, noisy.

The lab is also collaborating on a Berlin exhibition, adding provocative framing using X degrees of Separation. It finds relationships among objects from disparate cultures. What relationships do algorithms find? How does that compare with how humans do it? What can we learn?

Starting in the fall, Jeffrey and a co-teacher are going to be leading a robotics design studio, experimenting with interior and exterior architecture in which robotic agents are copresent with human actors. This is already happening, raising regulatory and urban planning challenges. The studio will also take seriously machine vision as a way of generating new ways of thinking about mobility within city spaces.

Q&A

Q: me: For AI Compass, where’s the info coming from? How is the data represented? Open API?

Matthew: It’s designed to focus on particular topics. E.g., Youth, Governance, Art. Each has a curator. The goal is not to map the entire space. It will be a growing resource. An open API is not yet on the radar, but it wouldn’t be difficult to do.

Q: At the AI Advance, Jonathan Zittrain said that organizations are a type of AI: governed by a set of rules, they grow and learn beyond their individuals, etc.

Matthew: The way we hope to deal with this very capacious approach to AI is through artists. What have artists done that bears on AI beyond the cinematic tropes? There’s a rich discourse about this. We want to be in dialogue with all sorts of people about this.

Q: About Curricle: Are you integrating Q results [student responses to classes], etc.?

Sarah: Not yet. There are mixed feelings among administrators about using that data. We want Curricle to encourage people to take new paths. The Q data tends to encourage people down old paths. Curricle will let students annotate their own paths and share them.

Jeffrey: We’re aiming at creating a curiosity engine. We’re working with a century of curricular data. This is a rare privilege.

me: It’d enrich the library if the data about resources was hooked into LibraryCloud.

Q: kendra: A useful feature would be finding a random course that fits into your schedule.

A: In the works.

Q: It’d be great to have transparency around the suggestions of unexpected courses. We don’t want people to be choosing courses simply to be unique.

A: Good point.

A: The same tool that lets you diversify your courses also lets you concentrate all of them into two days in classrooms near your dorm. Because the data includes courses from all the faculty, being unique is actually easy. The challenge is suggesting uniqueness that means something.

Q: People choose courses in part based on who else is choosing that course. It’d be great to have friends in the platform.

A: Great idea.

Q: How do you educate the people using the platform? How do you present and explain the options? How are you going to work with advisors?

A: Important concerns at the core of what we’re thinking about and working on.


June 3, 2017

1.5 random thoughts

1. Life Pro tip: Aim at what you hit.

2. A metaphor that may come in handy someday: As undignified as a child climbing a slide.


May 29, 2017

The Internet is an agreement

Jaap van Till has posted an aggregation of thoughts and links to remind us of what it seems we have so much trouble remembering: The Internet is not a thing but an agreement.

An internet, network of networks, is a voluntary agreement among network operators to exchange traffic for their mutual benefit. (The Internet is a prototype internet.) That’s all — it’s an agreement.

That’s from an earlier post by Jaap, which along the way links out to the World of Ends post that Doc Searls and I wrote in 2003 that aimed at explaining the Internet to legislators.

I sense that we are due for a shift in tides, maybe over the next two years, in which the point that needs making is not that the Internet is dangerous and sucks, but that it is dangerous and sucks and is the greatest invention in the history of our species. (Cf. Virginia Heffernan, Magic and Loss.)

This pendulum swing can’t come soon enough.


May 18, 2017

Indistinguishable from prejudice

“Any sufficiently advanced technology is indistinguishable from magic,” said Arthur C. Clarke famously.

It is also the case that any sufficiently advanced technology is indistinguishable from prejudice.

Especially if that technology is machine learning. ML creates algorithms to categorize stuff based upon data sets that we feed it. Say “These million messages are spam, and these million are not,” and ML will take a stab at figuring out the distinguishing characteristics of spam and not-spam, perhaps assigning particular words particular weights as indicators, or finding relationships between particular IP addresses, times of day, lengths of messages, etc.
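As a rough illustration of that kind of weighting, here is a minimal sketch that trains a tiny spam classifier and prints the per-word weights it ends up assigning. It assumes scikit-learn, and the messages and labels are toy data of my own invention, not anything from the post.

```python
# Minimal sketch: learn per-word weights for "spam" vs. "not spam" from a
# toy labeled dataset. Whatever biases live in the training data end up
# baked into these weights.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

messages = [
    "win a free prize now", "cheap meds online now", "claim your free reward",
    "lunch at noon tomorrow", "here are the meeting notes", "see you at the game",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)
model = LogisticRegression().fit(X, labels)

# Words with the largest positive weights are the ones this model treats
# as the strongest indicators of spam, according to this data, which is
# the whole point.
pairs = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
               key=lambda p: p[1], reverse=True)
for word, weight in pairs[:5]:
    print(f"{word}: {weight:+.3f}")
```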

Now complicate the data and the request, run this through an artificial neural network, and you have Deep Learning that will come up with models that may be beyond human understanding. Ask DL why it made a particular move in a game of Go or why it recommended increasing police patrols on the corner of Elm and Maple, and it may not be able to give an answer that human brains can comprehend.

We know from experience that machine learning can re-express human biases built into the data we feed it. Cathy O’Neil’s Weapons of Math Destruction contains plenty of evidence of this. We know it can happen not only inadvertently but subtly. With Deep Learning, we can be left entirely uncertain about whether and how this is happening. We can certainly adjust DL so that it gives fairer results when we can tell that it’s going astray, as when it only recommends white men for jobs or produces a freshman class with 1% African Americans. But when the results aren’t that measurable, we can be using results based on bias and not know it. For example, is anyone running the metrics on how many books by people of color Amazon recommends? And if we use DL to evaluate complex tax law changes, can we tell if it’s based on data that reflects racial prejudices?[1]

So this is not to say that we shouldn’t use machine learning or deep learning. That would remove hugely powerful tools. And of course we should and will do everything we can to keep our own prejudices from seeping into our machines’ algorithms. But it does mean that when we are dealing with literally inexplicable results, we may well not be able to tell if those results are based on biases.

In short: Any sufficiently advanced technology is indistinguishable from prejudice.[2]

[1] We may not care, if the result is a law that achieves the social goals we want, including equal and fair treatment of taxpayers regardless of race.

[2] Please note that that does not mean that advanced technology is prejudiced. We just may not be able to tell.


May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, which Einstein called a “gravity lens.” Same for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both cardiologist and patient lenses to better diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The Clinic prefers that those relationships form early, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at the Univ. of New Hampshire Law School and a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids: we kick them out of schools. E.g., of 49M students in public schools in 2011, 3.45M were suspended and 130,000 were expelled. These punishments disproportionately affect children from marginalized segments.

Get rid of the BS safety justification: the government ought to be teaching all our children, without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the QA from this point on.]


[liveblog][AI] Perspectives on community and AI

Chelsea Barabas is moderating a set of lightning talks at the AI Advance, at Berkman Klein and the MIT Media Lab.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Lionel Brossi recounts growing up in Argentina and the assumption that all boys care about football. He moved to Chile, which is split between people who do and do not watch football. “Humans are inherently biased.” So our AI systems are likely to be biased. Cognitive science has shown that the participants in its studies tend to be WEIRD: western, educated, industrialized, rich, and democratic. Also straight and white. He references Kate Crawford’s “AI’s White Guy Problem.” We need not only diverse teams of developers, but also to think about how data can be more representative. We also need to think about the users. One approach is to work on goal-centered design.

If we ever get to unbiased AI, Borges’ statement, “The original is unfaithful to the translation,” may apply.

Chelsea: What is an inclusive way to think of cross-border countries?

Lionel: We need to co-design with more people.

Madeline Elish is at Data and Society and an anthropology-of-technology grad student at Columbia. She’s met designers who thought it might be a good idea to make a phone run faster if you yell at it. But this would train children to yell at things. What’s the context in which such designers work? She and Tim Hwang set out to build bridges between academics and businesses. They asked what designers see as their responsibility for the social implications of their work. They found four core challenges:

1. Assuring users perceive good intentions
2. Protecting privacy
3. Long term adoption
4. Accuracy and reliability

She and Tim wrote An AI Pattern Language [pdf] about the frameworks that guide design. She notes that none of them were thinking about social justice. The book argues that there’s a way to translate between the social justice framework and, for example, the accuracy framework.

Ethan Zuckerman: How much of the language you’re seeing feels familiar from other hype cycles?

Madeline: Tim and I looked at the history of autopilot litigation to see what might happen with autonomous cars. We should be looking at Big Data as the prior hype cycle.

Yarden Katz is at the BKC and at the Dept. of Systems Biology at Harvard Medical School. He talks about the history of AI, starting with a 1958 claim about a translation machine. 1966: Minsky. Then there was an AI funding winter, but now it’s big again. “Until recently, AI was a dirty word.”

Today we use it schizophrenically: for Deep Learning or in a totally diluted sense as something done by a computer. “AI” now seems to be a branding strategy used by Silicon Valley.

“AI’s history is diverse, messy, and philosophical.” If this complexity is embraced, “AI” might not be a useful category for policy. So we should go back to the politics of technology:

1. Who controls the code/frameworks/data?
2. Is the system inspectable/open?
3. Who sets the metrics? Who benefits from them?

The media are not going to be the watchdogs because they’re caught up in the hype. So who will be?

Q: There’s a qualitative difference in the sort of tasks now being turned over to computers. We’re entrusting machines with tasks we used to only trust to humans with good judgment.

Yarden: We already do that with systems that are not labeled AI, like “risk assessment” programs used by insurance companies.

Madeline: Before AI got popular again, there were expert systems. We are reconfiguring our understanding, moving it from a cognition frame to a behavioral one.

Chelsea: I’ve been involved in co-design projects that have backfired. These projects have sometimes been somewhat extractive: going in, getting lots of data, etc. How do we do co-design projects that are not extractive but that also aren’t prohibitively expensive?

Nathan: To what degree does AI change the dimensions of questions about explanation, inspectability, etc.?

Yarden: The promoters of the Deep Learning narrative want us to believe you just need to feed in lots and lots of data. DL is less inspectable than other methods. DL is not learning from nothing. There are open questions about their inductive power.


Amy Zhang and Ryan Budish give a pre-alpha demo of the AI Compass being built at BKC. It’s designed to help people find resources exploring topics related to the ethics and governance of AI.


[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center’s and MIT Media Lab’s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither humans nor machines could solve on their own. The need for these systems is most obvious in large-scale human-interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves”), where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case, but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, an MIT grad student with a newly-minted Ph.D. (congrats, Nathan!) and a BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions that these talks have raised. What are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where does that raise questions about literacy, policy, and legal issues? Finally, how can we get to the questions we need to ask, how do we answer them, and how do we organize people, institutions, and automated systems: scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others?

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use test data that represents the real world, and to make sure a representative range of humanity is working on these systems. So here’s my question: we find that co-design works well, so should we bring the affected populations in to talk with the system designers?

[Damn, I missed Yochai Benkler’s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


May 11, 2017

[liveblog] St. Goodall

I’m in Rome at the National Geographic Science Festival, co-produced by Codice Edizioni which, not entirely coincidentally, published the Italian version of my book Too Big to Know. Jane Goodall is giving the opening talk to a large audience full of students. I won’t try to capture what she is saying because she is talking without notes, telling her personal story.

She embodies an inquiring mind capable of radically re-framing our ideas simply by looking at the phenomena. We may want to dispute her anthropomorphizing of chimps, but it is a truth that needed to be uncovered. For example, she says that when she got to Oxford to get a graduate degree, even though she had never been to college, she was told that she shouldn’t have given the chimps names. But this, she says, was because at the time science believed humans were unique. Since then genetics has shown how close we are to them, but even before that her field work had shown the psychological and behavioral similarities. So, her re-framing was fecund and, yes, true.

At a conference in America in 1986, every report from Africa was about the decimation of the chimpanzee population and the abuse of chimpanzees in laboratories. “I went to this conference as a scientist, ready to continue my wonderful life, and I left as an activist.” Her Tacare Institute works with and for Africans. For example, local people are equipped with tablets and phones and mark chimp nests, downed trees, and the occasional leopard. (Tacare provides scholarships to keep girls in school, “and some boys too.”)

She makes a totally Dad joke about “the cloud.”

It is a dangerous world, she says. “Our intellects have developed tremendously.” “Isn’t it strange that this most intellectual creature ever is destroying its home?” She calls out the damage done to our climate by our farming of animals. “There are a lot of reasons to avoid eating a lot of meat or any, but that’s one of them.”

There is a disconnect between our beautiful brains and our hearts, she says. Violence, domestic violence, greed…“we don’t think ‘Are we having a happy life?'” She started “Roots and Shoots” in 1991 in Tanzania, and now it’s in 99 countries, from kindergartens through universities. It’s a program for young people. “We do not tell the young people what to do.” They decide what matters to them.

Her reasons for hope: 1. The reaction to Roots and Shoots. 2. Our amazing brains. 3. The resilience of nature. 4. Social media, which, if used right can be a “tremendous tool for change.” 6. “The indomitable human spirit.” She uses Nelson Mandela as an example, but also refugees making lives in new lands.

“It’s not only humans that have an indomitable spirit.” She shows a brief video of the release of a chimp that left at least some wizened adults in tears.

She stresses making the right ethical choices, a phrase not heard often enough.

If in this audience of 500 students she has not made five new scientists, I’ll be surprised.

