Science Archives - Joho the Blog

May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar at the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, what Einstein’s physics describes as a “gravitational lens.” Same for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both cardiologist and patient lenses to better diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help law students understand AI and tech, the Clinic encourages interdisciplinarity. It also helps students think critically about the roles of the lawyer and the technologist. The Clinic prefers that relationships between lawyers and technologists start early, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at the Univ. of New Hampshire Law School and a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids. We kick them out of schools. E.g., of 49M students in public schools in 2011, 3.45M were suspended and 130,000 were expelled. These punishments disproportionately affect children from marginalized groups.

Get rid of the BS safety justification: the government ought to be teaching all our children, without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the Q&A from this point on.]


[liveblog] AI Advance opening: Jonathan Zittrain and lightning talks

I’m at a day-long conference/meet-up put on by the Berkman Klein Center’s and the MIT Media Lab’s “AI for the Common Good” project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Jonathan Zittrain gives an opening talk. Since we’re meeting at Harvard Law, JZ begins by recalling the origins of what has been called “cyber law,” which has roots here. Back then, the lawyers got to the topic first, and thought that they could just think their way to policy. We are now at another signal moment as we are in a frenzy of building new tech. This time we want instead to involve more groups and think this through. [I am wildly paraphrasing.]

JZ asks: What is it that we intuitively love about human judgment, and are we willing to insist on human judgments that are worse than what a machine would come up with? Suppose for utilitarian reasons we can cede autonomy to our machines — e.g., autonomous cars — shouldn’t we? And what do we do about maintaining local norms? E.g., “You are now entering Texas where your autonomous car will not brake for pedestrians.”

“Should I insist on being misjudged by a human judge because that’s somehow artisanal?” when, ex hypothesi, an AI system might be fairer.

Autonomous systems are not entirely new. They’re bringing to the fore questions that have always been with us. E.g., we grant a sense of discrete intelligence to corporations. E.g., “McDonald’s is upset and may want to sue someone.”

[This is a particularly bad representation of JZ’s talk. Not only is it wildly incomplete, but it misses the through-line and JZ’s wit. Sorry.]

Lightning Talks

Finale Doshi-Velez is particularly interested in interpretable machine learning (ML) models. E.g., suppose you have ten different classifiers that give equally predictive results. Should you provide the most understandable one, all of them…?

Why is interpretability so “in vogue”? Suppose non-interpretable AI can do something better? In most cases we don’t know what “better” means. E.g., someone might want to control her glucose level, but perhaps also to control her weight, or other outcomes? Human physicians can still see things that are not coded into the model, and that will be the case for a long time. Also, we want systems that are fair. This means we want interpretable AI systems.

How do we formalize these notions of interpretability? How do we do so for science and beyond? E.g., what does a legal “right to explanation” mean? She is working with Sam Gershman on how to more formally ground AI interpretability in the cognitive science of explanation.
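To make the tradeoff concrete, here is a minimal sketch (my own illustration, not anything from the talk) using scikit-learn and its built-in breast-cancer dataset: fit a small decision tree and a random forest on the same data; if their held-out accuracy is comparable, you can prefer the model whose rules a person can actually read.

```python
# Minimal sketch (illustration only, not from the talk): when an interpretable
# model and a black-box model perform about equally well, prefer the one
# whose reasoning can be read and checked by a human.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# An easily readable model and a harder-to-inspect ensemble.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("tree accuracy:  ", tree.score(X_test, y_test))
print("forest accuracy:", forest.score(X_test, y_test))

# If the scores are close, the tree's rules can be shown to a domain expert as-is.
print(export_text(tree, feature_names=list(data.feature_names)))
```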

Vikash Mansinghka leads the eight-person Probabilistic Computing project at MIT. They want to build computing systems that can be our partners, not our replacements. We have assumed that the measure of success of AI is that it beats us at our own game, e.g., AlphaGo, Deep Blue, Watson playing Jeopardy! But games have clearly measurable winners.

His lab is working on augmented intelligence that gives partial solutions, guidelines, and hints that help us solve problems that neither humans nor machines could solve on their own. The need for these systems is most obvious in large-scale human interest projects, e.g., epidemiology, economics, etc. E.g., should a successful nutrition program in SE Asia be tested in Africa too? There are many variables (including cost). BayesDB, developed by his lab, is “augmented intelligence for public interest data science.”
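BayesDB itself is queried through a SQL-like probabilistic language, which I won’t try to reproduce here. The toy sketch below (not BayesDB, and with made-up numbers) just shows the flavor of a “partial solution with hints”: rather than a yes/no answer about whether a program’s effect will transfer to a new site, a probabilistic model returns an estimate with an uncertainty interval.

```python
# Toy sketch (not BayesDB; made-up numbers). A probabilistic model answers
# "what effect should we expect at a new site?" with a distribution rather
# than a single number: a partial solution with hints.
import numpy as np

# Observed effect sizes of a nutrition program at five pilot sites.
effects = np.array([0.42, 0.35, 0.51, 0.29, 0.44])

sigma = 0.15                     # assumed per-site observation noise (std dev)
prior_mean, prior_sd = 0.0, 1.0  # weak prior on the true program effect

# Conjugate normal-normal update for the underlying effect.
n = len(effects)
post_var = 1.0 / (1.0 / prior_sd**2 + n / sigma**2)
post_mean = post_var * (prior_mean / prior_sd**2 + effects.sum() / sigma**2)

# Posterior predictive for a new site adds back the per-site noise.
pred_sd = np.sqrt(post_var + sigma**2)
lo, hi = post_mean - 1.96 * pred_sd, post_mean + 1.96 * pred_sd
print(f"expected effect at a new site: {post_mean:.2f} (95% interval {lo:.2f} to {hi:.2f})")
```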

In traditional computer science, computing systems are built up from circuits to algorithms. Engineers can trade off performance for interpretability. Probabilistic systems have some of the same considerations. [Sorry, I didn’t get that last point. My fault!]

John Palfrey is a former Exec. Dir. of BKC, chair of the Knight Foundation (a funder of this project) and many other things. Where can we, BKC and the Media Lab, be most effective as a research organization? First, we’ve had the most success when we merge theory and practice. And building things. And communicating. Second, we have not yet defined the research question sufficiently. “We’re close to something that clearly relates to AI, ethics and government” but we don’t yet have the well-defined research questions.

The Knight Foundation thinks this area is a big deal. AI could be a tool for the public good, but it also might not be. “We’re queasy” about it, as well as excited.

Nadya Peek is at the Media Lab and has been researching “machines that make machines.” She points to the first computer-controlled machine (“Teaching Power Tools to Run Themselves“) where the aim was precision. People controlled these CCMs: programmers, CAD/CAM folks, etc. That’s still the case but it looks different. Now the old jobs are being done by far fewer people. But the spaces in between don’t always work so well. E.g., Apple can define an automatable workflow for milling components, but if you’re a student doing a one-off project, it can be very difficult to get all the integrations right. The student doesn’t much care about a repeatable workflow.

Who has access to an Apple-like infrastructure? How can we make precision-based one-offs easier to create? (She teaches a course at MIT called “How to create a machine that can create almost anything.”)

Nathan Matias, MIT grad student with a newly-minted Ph.D. (congrats, Nathan!) and a BKC community member, is facilitating the discussion. He asks how we conceptualize the range of questions these talks have raised. What are the tools we need to create? What are the social processes behind that? How can we communicate what we want to machines and understand what they “think” they’re doing? Who can do what, and where? That raises questions about literacy, policy, and legal issues. Finally, how can we get to the questions we need to ask, figure out how to answer them, and organize people, institutions, and automated systems? Scholarly inquiry, organizing people socially and politically, creating policies, etc.? How do we get there? How can we build AI systems that are “generative” in JZ’s sense: systems that we can all contribute to on relatively equal terms and share with others?

Nathan: Vikash, what do you do when people disagree?

Vikash: When you include the sources, you can provide probabilistic responses.

Finale: When a system can’t provide a single answer, it ought to provide multiple answers. We need humans to give systems clear values. AI things are not moral, ethical things. That’s us.

Vikash: We’ve made great strides in systems that can deal with what may or may not be true, but not in terms of preference.

Nathan: An audience member wants to know what we have to do to prevent AI from repeating human bias.

Nadya: We need to include the people affected in the conversations about these systems. There are assumptions about the independence of values that just aren’t true.

Nathan: How can people not close to these systems be heard?

JP: Ethan Zuckerman, can you respond?

Ethan: One of my colleagues, Joy Buolamwini, is working on what she calls the Algorithmic Justice League, looking at computer vision algorithms that don’t work on people of color. In part this is because the data sets used to train CV systems are 70% white male faces. So she’s generating new sets of facial data that we can retest on. Overall, it’d be good to use training data that represents the real world, and to make sure a representative slice of humanity is working on these systems. So here’s my question: we find co-design works well, i.e., bringing the affected populations in to talk with the system designers. Could that work here?

[Damn, I missed Yochai Benkler‘s comment.]

Finale: We should also enable people to interrogate AI when the results seem questionable or unfair. We need to be thinking about the processes for resolving such questions.

Nadya: It’s never “people” in general who are affected. It’s always particular people with agendas, from places and institutions, etc.


May 11, 2017

[liveblog] St. Goodall

I’m in Rome at the National Geographic Science Festival, co-produced by Codice Edizioni, which, not entirely coincidentally, published the Italian version of my book Too Big to Know. Jane Goodall is giving the opening talk to a large audience full of students. I won’t try to capture what she is saying because she is talking without notes, telling her personal story.

She embodies an inquiring mind capable of radically re-framing our ideas simply by looking at the phenomena. We may want to dispute her anthropomorphizing of chimps, but it is a truth that needed to be uncovered. For example, she says that when she got to Oxford to get a graduate degree — even though she had never been to college — she was told that she shouldn’t have given the chimps names. But this, she says, was because at the time science believed humans were unique. Since then genetics has shown how close we are to them, but even before that her field work had shown the psychological and behavioral similarities. So, her re-framing was fecund and, yes, true.

At a conference in America in 1986, every report from Africa was about the decimation of the chimpanzee population and the abuse of chimpanzees in laboratories. “I went to this conference as a scientist, ready to continue my wonderful life, and I left as an activist.” Her Tacare Institute works with and for Africans. For example, local people are equipped with tablets and phones and mark chimp nests, downed trees, and the occasional leopard. (Tacare provides scholarships to keep girls in school, “and some boys too.”)

She makes a totally Dad joke about “the cloud.”

It is a dangerous world, she says. “Our intellects have developed tremendously.” “Isn’t it strange that this most intellectual creature ever is destroying its home.” She calls out the damage done to our climate by our farming of animals. “There are a lot of reasons to avoid eating a lot of meat or any, but that’s one of them.”

There is a disconnect between our beautiful brains and our hearts, she says. Violence, domestic violence, greed…”we don’t think ‘Are we having a happy life?'” She started “Roots and Shoots” in 1991 in Tanzania, and now it’s in 99 countries, from kindergartens through universities. It’s a program for young people. “We do not tell the young people what to do.” They decide what matters to them.

Her reasons for hope: 1. The reaction to Roots and Shoots. 2. Our amazing brains. 3. The resilience of nature. 4. Social media, which, if used right, can be a “tremendous tool for change.” 5. “The indomitable human spirit.” She uses Nelson Mandela as an example, but also refugees making lives in new lands.

“It’s not only humans that have an indomitable spirit.” She shows a brief video of the release of a chimp that left at least some wizened adults in tears:

She stresses making the right ethical choices, a phrase not heard often enough.

If in this audience of 500 students she has not made five new scientists, I’ll be surprised.


October 18, 2015

The Martian

My wife and I just saw The Martian. Loved it. It was as good a movie as could possibly be made out of a book that’s about sciencing the shit out of problems.

The book was the most fun I’ve had in a long time. So I was ready to be disappointed by the movie. Nope.

Compared to, say, Gravity? Gravity’s choreography was awesome, and the very ending of it worked for me. (No spoilers here!) But it had irksome moments and themes, especially Sandra Bullock’s backstory. (No spoilers!)

The Martian was much less pretentious, IMO. It’s about science as problem-solving. Eng Fi, if you will. But the theme that emerges from this is:

Also, Let’s go the fuck to Mars!


(I still think Interstellar is a better movie, although it’s nowhere near as much fun. But I’m not entirely reasonable about Interstellar.)


August 18, 2015

Newton’s non-clockwork universe

The New Atlantis has just published five essays exploring “The Unknown Newton”. It is — bless its heart! — open access. Here’s the table of contents:

Rob Iliffe provides an overview of Newton’s religious thought, including his radically unorthodox theology.

William R. Newman examines the scientific ambitions in Newton’s alchemical labors, which are often written off as deviations from science.

Stephen D. Snobelen — who in the course of writing his essay discovered Newton’s personal, dog-eared copy of a book that had been lost — provides an in-depth look at the connection between Newton’s interpretation of biblical prophecy and his cosmological views.

Andrew Janiak explains how Newton reconciled the apparent tensions between the Bible and the new view of the world described by physics.

Finally, Sarah Dry describes the curious fate of Newton’s unpublished papers, showing what they mean for our understanding of the man and why they remained hidden for so long.


Stephen Snobelen’s article, “Cosmos and Apocalypse,” begins with a paper in the John Locke collection at the Bodleian: Newton’s hand-drawn timeline of the events in Revelation. Snobelen argues that we’ve read too much of the Enlightenment back into Newton.


In particular, the concept of the universe as a pure clockwork that forever operates according to mechanical laws comes from Laplace, not Newton, says Snobelen. He refers to David Kubrin’s 1967 paper “Newton and the Cyclical Cosmos”; it is not open access. (Sign up for free with JSTOR and you get constrained access to its many riches.) Kubrin’s paper is a great piece of work. He makes the case — convincingly to an amateur like me — that Newton and many of his contemporaries feared that a perfectly clockwork universe that did not need Divine intervention to operate would be seen as also not needing God to start it up. Newton instead thought that without God’s intervention the universe would wind down. He hypothesized that comets — newly discovered — were God’s way of refreshing the Universe.


The second half of the Kubrin article is about the extent to which Newton’s late cosmogony was shaped by his Biblical commitments. Most of Snobelen’s article is about the discovery in 2004 of a new document that confirms this, and adds that God’s intervention heads the universe in a particular direction:

In sum, Newton’s universe winds down, but God also renews it and ensures that it is going somewhere. The analogy of the clockwork universe so often applied to Newton in popular science publications, some of them even written by scientists and scholars, turns out to be wholly unfitting for his biblically informed cosmology.

Snobelen attributes this to Newton’s recognition that the universe consists of forces all acting on one another at the same time:

Newton realized that universal gravity signaled the end of Kepler’s stable orbits along perfect ellipses. These regular geometric forms might work in theory and in a two-body system, but not in the real cosmos where many more bodies are involved.

To maintain the order represented by perfect ellipses required nudges and corrections that only a Deity could accomplish.


Snobelen points out that the idea of the universe as a clockwork was more Leibniz’s idea than Newton’s. Newton rejected it. Leibniz got God into the universe through a far odder idea than as the Pitcher of Comets: souls (“monads”) experience inhabiting a shared space in which causality obtains only because God coordinates a string of experiences in perfect sync across all the monads.


“Newton’s so-called clockwork universe is hardly timeless, regular, and machine-like,” writes Snobelen. “[I]nstead, it acts more like an organism that is subject to ongoing growth, decay, and renewal.” I’m not sold on the “organism” metaphor based on Snobelen’s evidence, but that tiny point aside, this is a fascinating article.


May 28, 2015

I’m a winner! A limerick winner!

After many years of intermittent entries, I have at long last won the monthly mini-Annals of Improbable Research Limerick Competition. Woohoo! Ish.

AIR presents research that one might find celebrated at the Ig Nobels. In fact, AIR is the creator of the Ig Nobels. AIR’s monthly mini version is free and amusing.

The limerick had to be about: “Preoperative and postoperative gait analyses of patients undergoing great toe-to-thumb transfer,” from the Journal of Hand Surgery, vol. 12, no. 1, 1987, pp 66-69. Rich comic material, obviously.

“Your gait will be fine, understand,
If we sew a toe onto your hand.
   If we did the reverse
   It might be much worse,”
Said the doc in remarks made off hand.

This month’s article for your limericking is: “Improving Phrap-Based Assembly of the Rat Using ‘Reliable’ Overlaps.”

I shall see you on the five-line field of battle!


December 27, 2014

Oculus Thrift

I just received Google’s Oculus Rift emulator. Given that it’s made of cardboard, it’s all kinds of awesome.

Google Cardboard is a poke in Facebook’s eyes. FB bought Oculus Rift, the virtual reality headset, for $2B. Oculus hasn’t yet shipped a product, but its prototypes are mind-melting. My wife and I tried one last year at an Israeli educational tech lab, and we literally had to have people’s hands on our shoulders so we wouldn’t get so disoriented that we’d swoon. The Lab had us on a virtual roller coaster, with the ability to turn our heads to look around. It didn’t matter that it was an early, low-resolution prototype. Swoon.

Oculus is rumored to be priced at around $350 when it ships, and they will sell tons at that price. Basically, anyone who tries one will be a customer or will wish s/he had the money to be a customer. Will it be confined to game players? Not a chance on earth.

So, in the midst of all this justifiable hype about the Oculus Rift, Google announced Cardboard: detailed plans for how to cut out and assemble a holder for your mobile phone that positions it in front of your eyes. The Cardboard software divides the screen in two and creates a parallaxed view so you think you’re seeing in 3D. It uses your mobile phone’s motion sensors to track the movement of your head as you survey your synthetic domain.
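For the curious, here is a minimal numpy sketch (my own illustration, not Google’s code) of the two tricks just described: offset a virtual camera for each eye, rotate both by the head orientation reported by the phone’s sensors, and render each half of the screen from its own view matrix.

```python
# Minimal illustration (not Google's Cardboard code) of stereo head tracking:
# take a head orientation from the phone's sensors, offset a camera position
# for each eye, and build a view matrix per eye. Rendering each half of the
# screen from its own view matrix produces the parallax/3D effect.
import numpy as np

def yaw_pitch_to_rotation(yaw, pitch):
    """Rotation matrix for a head turned `yaw` rad left/right and `pitch` rad up/down."""
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    r_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    r_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    return r_yaw @ r_pitch

def eye_view_matrices(head_pos, yaw, pitch, ipd=0.064):
    """Return (left, right) 4x4 view matrices for inter-pupillary distance `ipd` (meters)."""
    rot = yaw_pitch_to_rotation(yaw, pitch)
    right_axis = rot @ np.array([1.0, 0.0, 0.0])   # the head's "right" direction
    views = []
    for sign in (-1.0, +1.0):                      # left eye, then right eye
        eye = head_pos + sign * (ipd / 2) * right_axis
        view = np.eye(4)
        view[:3, :3] = rot.T                       # inverse rotation (world -> camera)
        view[:3, 3] = -rot.T @ eye                 # inverse translation
        views.append(view)
    return views

# E.g., head 1.7 m off the ground, turned 20 degrees to the left:
left, right = eye_view_matrices(np.array([0.0, 1.7, 0.0]), np.radians(20), 0.0)
print(left.round(3), right.round(3), sep="\n\n")
```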

I took a look at the plans for building the holder and gave up. For $15 I instead ordered one from Unofficial Cardboard.

When it arrived this morning, I took it out of its shipping container (made out of cardboard, of course), slipped in my HTC mobile phone, clicked on the Google Cardboard software, chose a demo, and was literally — in the virtual sense — flying over the earth in any direction I looked, watching a cartoon set in a forest that I was in, or choosing YouTube music videos by turning to look at them on a circular wall.

Obviously I’m sold on the concept. But I’m also sold on the pure cheekiness of Google’s replicating the core functionality of the Oculus Rift by using existing technology, including one made of cardboard.

(And, yeah, I’m a little proud of the headline.)


November 24, 2014

[siu] Panel: Capturing the research lifecycle

It’s the first panel of the morning at Shaking It Up. Six men from six companies give brief overviews of their products. The session is led by Courtney Soderberg from the Center for Open Science, which sounds great. [Six panelists means that I won’t be able to keep up. Or keep straight who is who, since there are no name plates. So, I’ll just distinguish them by referring to them as “Another White Guy,” ‘k?]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Riffyn: “Manufacturing-grade quality in the R&D process.” This can easily double R&D productivity “because you stop missing those false negatives.” It starts with design…

GitHub: “GitHub is a place where people do software development together.” 10M people. 15M software repositories. He points to Zenodo, a repository for research outputs. Open source communities are better at collaborating than most academic research communities are. The principles of open source can be applied to private projects as well. A key principle: everything has a URL. Also, the processes should be “lock-free” so they can be done in parallel and the decision about branching can be made later.

Texas Advanced Computing Center: Agave is a Science-as-a-Service platform. It provides lots of services as well as APIs. “It’s Salesforce for science.”

CERN is partnering with GitHub. “GitHub meets Zenodo.” But it also exports the software into INSPIRE, which links the paper with the software. [This might be the INSPIRE he’s referring to. Sorry. I know I should know this.]

Overleaf was inspired by Etherpad, the collaborative editor. But Etherpad doesn’t do figures or equations. Overleaf does that and much more.

Publiscize helps researchers translate their work into terms that a broader audience can understand. He sees three audiences: intradisciplinary, interdisciplinary, and the public. The site helps scientists create a version readable by the public, and helps them disseminate it through social networks.

Q&A

Some white guys provided answers I couldn’t quite hear to questions I couldn’t hear. They all seem to favor openness, standards, users owning their own data, and interoperability.

[They turned on the PA, so now I can hear. Yay. I missed the first couple of questions.]

Github: Libraries have uploaded 100,000 open access books, all for free. “Expect the unexpected. That happens a lot.” “Academics have been among the most abusive of our platform…in the best possible way.”

Zenodo: The most unusual uses come from those who want to install a copy at their local institutions. “We’re happy to help them fork off Zenodo.”

Q: Where do you see physical libraries fitting in?

AWG: We keep track of some people’s libraries.

AWG: People sometimes accidentally delete their entire company’s repos. We can get it back for you easily if you do.

AWG: Zenodo works with Chris Erdmann at Harvard Library.

AWG: We work with FigShare and others.

AWG: We can provide standard templates for Overleaf so, for example, your grad students’ theses can be managed easily.

AWG: We don’t do anything particular with libraries, but libraries are great.

Courtney: We’re working with ARL on a shared notification system.

Q: Mr. GitHub (Arfon Smith), you said in your comments that reproducibility is a workflow issue?

GitHub: You get reproducibility as a by-product of using tools like the ones represented on this panel. [The other panelists agree. Reproducibility should be just part of the infrastructure that you don’t have to think about.]


[siu] Geoff Bilder on getting the scholarly cyberinfrastructure right

I’m at “Shaking It Up: How to thrive in — and change — the research ecosystem,” an event co-sponsored by Digital Science, Microsoft, Harvard, and MIT. (I think, based on little, that Digital Science is the primary instigator.) I’m late to the opening talk by Geoff Bilder [twitter:gbilder], dir. of strategic initiatives at CrossRef. He’s also deeply involved in ORCID, an authority base that provides a stable identity reference for scholars. He refers to ORCID’s principles as the basis of this talk.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Geoff Bilder

Geoff is going through what he thinks is required for organizations contributing to a scholarly cyberinfrastructure. I missed the first few.


It should transcend disciplines and other boundaries.


An organization needs a living will: what will happen to it when it ends? That means there should be formal incentives to fulfill the mission and wind down.


Sustainability: time-limited funds should be used only for time-limited activities. You need other sources for sustaining fundamental operations. The goal should be to generate surplus so the organization isn’t brittle and can respond to new opportunities. There should be a contingency fund sufficient to keep it going for 12 months. This builds trust in the organization.

The revenues ought to be based on services, not on data. You certainly shouldn’t raise money by doing things that are against your mission.


But, he says, people are still wary about establishing a single organization that is central and worldwide. So people need the insurance of forkability. Make sure the data is open (within the limits of privacy) and is available in practical ways. “If we turn evil, you can take the code and the data and start up your own system. If you can bring the community with you, you will win.” It also helps to have a patent non-assertion so no one can tie it up.


He presents a version of Maslow’s hierarchy of needs for a scholarly cyberinfrastructure: tools, safety, esteem, self-actualization.


He ends by pointing to Building 20, MIT’s temporary building for WW II researchers. It produced lots of great results but little infrastructure. “We have to stop asking researchers how to fund infrastructure.” They aren’t particularly good at it. We need to get people who are good at it and are eager to fund a research infrastructure independent of funding individual research projects.

