Joho the Blog: Philosophy Archives

May 6, 2018

[liveblog][ai] Primavera De Filippi: An autonomous flower that merges AI and Blockchain

Primavera De Filippi is an expert in blockchain-based tech. She is giving a talk on Plantoid at ThursdAI, an event held by Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab. Her talk is officially on operational autonomy vs. decisional autonomy, but it’s really about how weird things become when you build a computerized flower that merges AI and the blockchain. For me, a central question of her talk was: Can we have autonomous robots that have legal rights and can own and spend assets, without having to resort to conferring personhood on them the way we have with corporations?

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Autonomy and liability

She begins by pointing to the three industrial revolutions so far: Steam led to mechanized production; Electricity led to mass production; Electronics led to automated production. The fourth — AI — is automating knowledge production.

People are increasingly moving into the digital world, and digital systems are moving back into the physical world, creating cyber-physical systems. E.g., the Internet of Things senses, communicates, and acts. The Internet of Smart Things learns from the data the things collect, makes inferences, and then acts. The Internet of Autonomous Things creates new legal challenges. Various actors can be held liable: the manufacturer, the software developer, the user, or a third party. “When do we apply legal personhood to non-humans?”

With autonomous things, the user and third parties become less liable as the software developer takes on more of the liability: There can be a bug. Someone can hack into it. The rules that make inferences may be inaccurate. Or a bad moral choice may lead the car into an accident.

The software developer might have created bug-free software, but its interaction with other devices might lead to unpredictability; multiple systems operating according to different rules might be incompatible; it can be hard to identify the chain of causality. So, who will be liable? The manufacturers and owners are likely to have only limited liability.

So, maybe we’ll need generalized insurance: mandatory insurance that potentially harmful devices need to subscribe to.

Or, perhaps we will provide some form of legal personhood to machines so that they can be sued for their failings. Suing a robot would be like suing a corporation. The devices would be able to own property and assets. The EU is thinking about creating this type of agenthood for AI systems. This is obviously controversial. At least a corporation has people associated with it, while the device is just a device, Primavera points out.

So, when do we apply legal personhood to non-humans? In addition to people and corporations, some countries have assigned personhood to chimpanzees (Argentina, France) and to natural resources (NZ: Whanganui river). We do this so these entities will have rights and cannot be simply exploited.

If we give legal personhood to AI-based systems, can AIs have property rights over their assets and IP? If they are legally liable, can they be held responsible for their actions and be sued for compensation? “Maybe they should have contractual rights so they can enter into contracts. Can they be rewarded for their work? Taxed?” [All of these are going to turn out to be real questions. … Wait for it …]

Limitations: “Most of the AI-based systems deployed today are more akin to slaves than corporations.” They’re not autonomous the way people are. They are owned, controlled, and maintained by people or corporations. They act as agents for their operators. They have no technical means to own or transfer assets. (Primavera recommends watching the Star Trek: The Next Generation episode “The Measure of a Man,” which asks, among other things, whether Data, the android, can be dismantled and whether he can resign.)

Decisional autonomy is the capacity to make a decision on your own, but it doesn’t necessarily bring what we think of as real autonomy. E.g., an autonomous vehicle can decide its route. For real autonomy we need operational autonomy: no one is maintaining the thing’s operation at a technical level. To take a non-random example, a blockchain runs autonomously because there is no single operator controlling it. E.g., smart contracts come with a guarantee of execution: once a contract is registered with a blockchain, no operator can stop it. This is operational autonomy.

Blockchain meets AI. Object: Autonomy

We are getting the first examples of autonomous devices using blockchain. The most famous is the Samsung washing machine that can detect when the soap is empty and use a smart contract to order more. Autonomous cars could work on the same model: they could be owned by no one and collect money when someone uses them. They could be initially purchased by someone and then buy themselves off: “They’d have to be emancipated,” she says. Perhaps they and other robots could use the capital they accumulate to hire people to work for them. [Pretty interesting model for an Uber.]

She introduces Plantoid, a blockchain-based life form. “Plantoid is autonomous, self-sufficient, and can reproduce.” Real flowers use bees to reproduce; Plantoids use humans to collect capital for their reproduction. Their bodies are mechanical. Their spirit is an Ethereum smart contract that collects cryptocurrency. When you feed it currency it says thank you; the Plantoid Primavera has brought nods its flower. When it gets enough funds to reproduce itself, it triggers a smart contract that puts out a call for bids to create the next version of the Plantoid. In the “mating phase” it looks for a human to create the new version. People vote with micro-donations. Then it identifies a winner and hires that human to create the new one.

There are many Plantoids in the world. Each has its own “DNA,” and new artists can add to it. E.g., each artist has to decide on its governance, such as whether it will donate some funds to charity; the aim is to make it more attractive to contribute to. The most fit get the most money and reproduce themselves. Burning Man this summer is going to feature this.

Every time one reproduces, a small cut is given to the pattern that generated it, and some to the new designer. This flips copyright on its head: the artist has an incentive to make her design more visible and accessible and attractive.
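To make the mechanism concrete, here is a toy Python model of the lifecycle described above: donations accumulate, a funding threshold triggers the mating phase, micro-donations double as votes, and each reproduction sends a cut back to the design that generated it plus a fee to the winning artist. This is my own illustrative sketch, not Plantoid’s actual Ethereum contract; the names, threshold, and percentages are made up.

```python
class Plantoid:
    def __init__(self, name, threshold, parent=None):
        self.name = name
        self.threshold = threshold   # funds needed before reproduction
        self.parent = parent         # the Plantoid whose design spawned this one
        self.balance = 0.0
        self.votes = {}              # candidate artist -> donation-weighted votes

    def donate(self, amount, vote_for=None):
        """Feed the flower; donors can also back an artist for the next version."""
        self.balance += amount
        if vote_for is not None:
            self.votes[vote_for] = self.votes.get(vote_for, 0.0) + amount
        print(f"{self.name}: thank you!")   # the physical flower nods here
        if self.balance >= self.threshold and self.votes:
            return self.reproduce()

    def reproduce(self, royalty=0.05, artist_cut=0.10):
        winner = max(self.votes, key=self.votes.get)
        if self.parent is not None:
            # a small cut flows back to the design that generated this one
            self.parent.balance += royalty * self.balance
        print(f"{self.name}: hiring {winner}, paying {artist_cut * self.balance:.2f}")
        child = Plantoid(f"{self.name}-child", self.threshold, parent=self)
        self.balance = 0.0
        self.votes = {}
        return child


# Donors feed the plantoid until it can afford to commission a new body.
p = Plantoid("plantoid-0", threshold=100)
p.donate(60, vote_for="alice")
child = p.donate(50, vote_for="bob")
```

In the real system this logic lives on-chain, which is what gives it the operational autonomy described above: once deployed, no one, including Primavera, can stop or alter it.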

So, why provide legal personhood to autonomous devices? We want them to be able to own their own assets, to assume contractual rights, to have the legal capacity to sue and be sued, and to limit their liability. “Blockchain lets us do that without having to declare the robot to be a legal person.”

The plant effectively owns the cryptofunds. The law cannot affect this. Smart contracts are enforced by code.

Who are the parties to the contract? The original author and new artist? The master agreement? Who can sue who in case of a breach? We don’t know how to answer these questions yet.

Can a Plantoid sue for breach of contract? Not if the legal system doesn’t recognize it as a legal person. So who is liable if the plant hurts someone? Can we provide a mechanism for this without conferring personhood? “How do you enforce the law against autonomous agents that cannot be stopped and whose property cannot be seized?”

Q&A

Q: Could you do this with live plants? People would bioengineer them…

A: Yes. Plantoid has already been forked this way. There’s an idea for a forest offering trees to be cut down, with the compensation going to the forest which might eventually buy more land to expand itself.

My interest in this grew out of my interest in decentralized organizations. This enables a project to be an entity that assumes liability for its actions, and to reproduce itself.

Q: [me] Do you own this plantoid?

A: Hmm. I own the physical instantiation but not the code or the smart contract. If this one broke, I could make a new one that connects to the same smart contract. If someone gets hurt because it falls on them, I’m probably liable. If the smart contract is funding terrorism, I’m not the owner of that contract. The physical object is doing nothing but reacting to donations.

Q: But the aim of its reactions is to attract more money…

A: It will be up to the judge.

Q: What are the most likely scenarios for the development of these weird objects?

A: A blockchain can provide the interface for humans interacting with each other without needing a legal entity, such as Uber, to centralize control. But you need people to decide to do this. The question is how these entities change the structure of the organization.


April 2, 2018

"If a lion could talk" updated

“If a lion could talk, we could not understand him.”
— Ludwig Wittgenstein, Philosophical Investigations, 1953.

“If an algorithm could talk, we could not understand it.”
— Deep learning, Now.


February 15, 2018

Here comes a new round of "I think, therefore I am" philosophical Dad jokes

An earlier draft of Descartes’ Meditations has been discovered, which will inevitably lead to a new round of unfunny jokes under the rubric of “Descartes’ First Draft.” I can’t wait :(

The draft is a big discovery. Camilla Shumaker at Research Frontiers reports that Jeremy Hyman, a philosophy instructor at the University of Arkansas, came across a reference to the manuscript and hied off to a municipal library in Toulouse … a gamble, but he apparently felt he had nothing left Toulouse.

And so it begins…


February 11, 2018

The brain is not a computer and the world is not information

Robert Epstein argues in Aeon against the dominant assumption that the brain is a computer, that it processes information, stores and retrieves memories, etc. That we assume so comes from what I think of as the informationalizing of everything.

The strongest part of his argument is that computers operate on symbolic information, but brains do not. There is no evidence (that I know of, but I’m no expert. On anything) that the brain decomposes visual images into pixels and those pixels into on-offs in a code that represents colors.

In the second half, Epstein tries to prove that the brain isn’t a computer through some simple experiments, such as drawing a dollar bill from memory and then while looking at it. Someone committed to the idea that the brain is a computer would probably just conclude that the brain isn’t a very good computer. But judge for yourself. There’s more to it than I’m presenting here.

Back to Epstein’s first point…

It is of the essence of information that it is independent of its medium: you can encode it into voltage levels of transistors, magnetized dust on tape, or holes in punch cards, and it’s the same information. Therefore, if the brain is just processing information, a representation of the brain’s states in another medium should also be conscious. Epstein doesn’t make the following argument, but I will (and I believe I am cribbing it from someone else, but I don’t remember who).

Because information is independent of its medium, we could encode it in dust particles swirling clockwise or counter-clockwise; clockwise is an on, and counter is an off. In fact, imagine there’s a dust cloud somewhere in the universe that has 86 billion motes, the number of neurons in the human brain. Imagine the direction of those motes exactly matches the on-offs of your neurons when you first spied the love of your life across the room. Imagine those spins shift but happen to match how your neural states shifted over the next ten seconds of your life. That dust cloud is thus perfectly representing the informational state of your brain as you fell in love. It is therefore experiencing your feelings and thinking your thoughts.

That by itself is absurd. But perhaps you say it is just hard to imagine. Ok, then let’s change it. Same dust cloud. Same spins. But this time we say that clockwise is an off, and the other is an on. Now that dust cloud no longer represents your brain states. It therefore both is experiencing your thoughts and feelings and is not experiencing them at the same time. Aristotle would tell us that that is logically impossible: a thing cannot simultaneously be something and its opposite.

Anyway…

Toward the end of the article, Epstein gets to a crucial point that I was very glad to see him bring up: thinking is not a brain activity, but the activity of a body engaged in the world. (He cites Anthony Chemero’s Radical Embodied Cognitive Science (2009), which I have not read. I’d trace the idea back further, to Andy Clark, David Chalmers, Eleanor Rosch, Heidegger…). Reducing thinking to a brain function, and further stripping the brain of its materiality to focus on its “processing” of “information,” is reductive without being clarifying.

I came into this debate many years ago already made skeptical of the most recent claims about the causes of consciousness by having some awareness of the series of failed metaphors we have used over the past couple of thousand years. Epstein puts this well, citing another book I have not read (and have consequently just ordered):

In his book In Our Own Image (2015), the artificial intelligence expert George Zarkadakis describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence.

In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.

The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.

By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

Maybe this time our tech-based metaphor has happened to get it right. But history says we should assume not. We should be very alert to the disanalogies, which Epstein helps us with.

Getting this right, or at least not getting it wrong, matters. The most pressing problem with the informationalizing of thought is not that it applies a metaphor, or even that the metaphor is inapt. Rather it’s that this metaphor leads us to a seriously diminished understanding of what it means to be a living, caring creature.

I think.

 

Hat tip to @JenniferSertl for pointing out the Aeon article.


December 5, 2017

[liveblog] Conclusion of Workshop on Trustworthy Algorithmic Decision-Making

I’ve been at a two-day workshop sponsored by Michigan State University and the National Science Foundation: “Workshop on Trustworthy Algorithmic Decision-Making.” After multiple rounds of rotating through workgroups iterating on five different questions, each group presented its findings — questions, insights, areas of future research.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Seriously, I cannot capture all of this.

Conduct of Data Science

What are the problems?

  • Who defines and how do we ensure good practice in data science and machine learning?

Why is the topic important? Because algorithms are important. And they have important real-world effects on people’s lives.

Why is the problem difficult?

  • Wrong incentives.

  • It can be difficult to generalize practices.

  • Best practices may be good for one goal but not another, e.g., efficiency but not social good. Also: Lack of shared concepts and vocabulary.

How to mitigate the problems?

  • Change incentives

  • Increase communication via vocabularies, translations

  • Education through MOOCS, meetups, professional organizations

  • Enable and encourage resource sharing: an open source lesson about bias, code sharing, data set sharing

Accountability group

The problem: How to integratively assess the impact of an algorithmic system on the public good? “Integrative” = the impact may be positive and negative and affect systems in complex ways. The impacts may be distributed differently across a population, so you have to think about disparities. These impacts may well change over time.

We aim to encourage work that is:

  • Aspirationally causal: measuring outcomes causally, but not always through randomized controlled trials.

  • The goal is not to shut down algorithms but to make positive contributions that generate solutions.

This is a difficult problem because:

  • Lack of variation in accountability, enforcements, and interventions.

  • It’s unclear what outcomes should be measured and how. This is context-dependent.

  • It’s unclear which interventions are the highest priority

Why progress is possible: There’s a lot of good activity in this space. And it’s early in the topic so there’s an ability to significantly influence the field.

What are the barriers for success?

  • Incomplete understanding of contexts. So, think of it in terms of socio-cultural approaches, and make the work interdisciplinary.

  • The topic lies between disciplines. So, develop a common language.

  • High-level triangulation is difficult. Examine the issues at multiple scales, multiple levels of abstraction. Where you assess accountability may vary depending on what level/aspect you’re looking at.

Handling Uncertainty

The problem: How might we holistically treat and attribute uncertainty through data analysis and decision systems? Uncertainty exists everywhere in these systems, so we need to consider how it moves through a system. This runs from choosing data sources to presenting results to decision-makers and people impacted by those results, and beyond that to its incorporation into risk analysis and contingency planning. It’s always good to know where the uncertainty is coming from so you can address it.

Why difficult:

  • Uncertainty arises from many places

  • Recognizing and addressing uncertainties is a cyclical process

  • End users are bad at evaluating uncertain info and incorporating uncertainty in their thinking.

  • Many existing solutions are too computationally expensive to run on large data sets

Progress is possible:

  • We have sampling-based solutions that provide a framework.

  • Some app communities are recognizing that ignoring uncertainty is reducing the quality of their work

How to evaluate and recognize success?

  • A/B testing can show that decision making is better after incorporating uncertainty into analysis

  • Statistical/mathematical analysis

Barriers to success

  • Cognition: Train users.

  • It may be difficult to break this problem into small pieces and solve them individually

  • Gaps in theory: many of the problems cannot currently be solved algorithmically.

The presentation ends with a note: “In some cases, uncertainty is a useful tool.” E.g., it can make the system harder to game.

Adversaries, workarounds, and feedback loops

Adversarial examples: add a perturbation to a sample and it disrupts the classification. An adversary tries to find those perturbations to wreck your model. Sometimes this is used not to hack the system so much as to prevent the system from, for example, recognizing your face during a protest.
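To make the first idea concrete, here is a minimal sketch of an adversarial perturbation against a toy linear classifier. This is my own illustration, not something presented at the workshop; the model, data, and attack budget are all invented.

```python
import numpy as np

# Toy "trained" linear classifier: predict +1 if w.x + b > 0, else -1.
rng = np.random.default_rng(0)
w = rng.normal(size=20)
b = 0.1

def predict(x):
    return 1 if w @ x + b > 0 else -1

# A sample the model currently classifies; treat its predicted label as the true one.
x = rng.normal(size=20)
y = predict(x)

# Fast-gradient-sign-style attack: for a linear model, the loss grows fastest
# when each feature moves a small step in the direction -y * sign(w).
# Choose eps just large enough to push the sample over the decision boundary.
margin = y * (w @ x + b)
eps = 1.01 * margin / np.sum(np.abs(w))
x_adv = x - eps * y * np.sign(w)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))
print("max per-feature change:", np.max(np.abs(x_adv - x)))
```

Each feature moves only a small amount, but because every step is aligned against the model’s weights, the classification flips.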

Feedback loops: A recidivism prediction system says you’re likely to commit further crimes, which sends you to prison, which increases the likelihood that you’ll commit further crimes.

What is the problem: How should a trustworthy algorithm account for adversaries, workarounds, and feedback loops?

Who are the stakeholders?

System designers, users, non-users, and perhaps adversaries.

Why is this a difficult problem?

  • It’s hard to define the boundaries of the system

  • From whose vantage point do we define adversarial behavior, workarounds, and feedback loops?

Unsolved problems

  • How do we reason about the incentives users and non-users have when interacting with systems in unintended ways?

  • How do we think about oversight and revision in algorithms with respect to feedback mechanisms?

  • How do we monitor changes, assess anomalies, and implement safeguards?

  • How do we account for stakeholders while preserving rights?

How to recognize progress?

  • Mathematical model of how people use the system

  • Define goals

  • Find stable metrics and monitor them closely

  • Proximal metrics. Causality?

  • Establish methodologies and see them used

  • See a taxonomy of adversarial behavior used in practice

Likely approaches

  • Security methodology for anticipating unintended behaviors and adversarial interactions. Monitor and measure.

  • Record and taxonomize adversarial behavior in different domains

  • Test. Try to break things.

Barriers

  • Hard to anticipate unanticipated behavior

  • Hard to define the problem in particular cases.

  • Goodhart’s Law

  • Systems are born brittle

  • What constitutes adversarial behavior vs. a workaround is subjective.

  • Dynamic problem

Algorithms and trust

How do you define and operationalize trust?

The problem: What are the processes through which different stakeholders come to trust an algorithm?

Multiple processes lead to trust.

  • Procedural vs. substantive trust: are you looking at, e.g., the weights of the algorithm, or at the steps that got you there?

  • Social vs personal: did you see the algorithm at work, or are you relying on peers?

These pathways are not necessarily predictive of each other.

Stakeholders build trust through multiple lenses and priorities:

  • the builders of the algorithms

  • the people who are affected

  • those who oversee the outcomes

Mini case study: a child services agency that does not want to be identified. [All of the following is 100% subject to my injection of errors.]

  • The agency uses a predictive algorithm. The stakeholders range from the children needing a family to New Yorkers as a whole. The agency knew what went into the model. “We didn’t buy our algorithm from a black-box vendor.” They trusted the algorithm because they staffed a technical team who had credentials and experience with ethics…and whom they trusted intuitively as good people. Few of these are the quantitative metrics that devs spend their time on. Note that FAT (fairness, accountability, transparency) metrics were not what led to trust.

Temporality:

  • Processes that build trust happen over time.

  • Trust can change or maybe be repaired over time.

  • “The timescales to build social trust are outside the scope of traditional experiments,” although you can perhaps find natural experiments.

Barriers:

  • Assumption of reducibility or transfer from subcomponents

  • Access to internal stakeholders for interviews and process understanding

  • Some elements are very long term

 


 

What’s next for this workshop

We generated a lot of scribbles, post-it notes, flip charts, Slack conversations, slide decks, etc. They’re going to put together a whitepaper that goes through the major issues, organizes them, and tries to capture the complexity while helping to make sense of it.

There are weak or no incentives to set appropriate levels of trust

Key takeaways:

  • Trust is irreducible to FAT metrics alone

  • Trust is built over time and should be defined in terms of the temporal process

  • Isolating the algorithm as an instantiation misses the socio-technical factors in trust.


December 4, 2017

Workshop: Trustworthy Algorithmic Decision-Making

I’m at a two-day inter-disciplinary workshop on “Trustworthy Algorithmic Decision-Making” put on by the National Science Foundation and Michigan State University. The 2-page whitepapers from the participants are online. (Here’s mine.) I may do some live-blogging of the workshops.

Goals:

– Key problems and critical questions?

– What to tell policy-makers and others about the impact of these systems?

– Product approaches?

– What ideas, people, training, infrastructure are needed for these approaches?

Excellent diversity of backgrounds: CS, policy, law, library science, a philosopher, more. Good diversity in gender and race. As the least qualified person here, I’m greatly looking forward to the conversations.


August 8, 2017

Messy meaning

Steve Thomas [twitter: @stevelibrarian] of the Circulating Ideas podcast interviews me about the messiness of meaning, library innovation, and educating against fake news.

You can listen to it here.


July 18, 2017

America's default philosophy

John McCumber — a grad school colleague with whom I have alas not kept up — has posted at Aeon an insightful historical argument that America’s default philosophy came about because of a need to justify censoring American communist professors (resulting in a naive scientism) and a need to have a positive alternative to Marxism (resulting in the adoption of rational choice theory).

That compressed summary does not do justice to the article’s grounding in the political events of the 1950s nor to how well-written and readable it is.


May 18, 2017

Indistinguishable from prejudice

“Any sufficiently advanced technology is indistinguishable from magic,” Arthur C. Clarke famously said.

It is also the case that any sufficiently advanced technology is indistinguishable from prejudice.

Especially if that technology is machine learning. ML creates algorithms to categorize stuff based upon data sets that we feed it. Say “These million messages are spam, and these million are not,” and ML will take a stab at figuring out the distinguishing characteristics of spam and not-spam, perhaps assigning particular words particular weights as indicators, or finding relationships among particular IP addresses, times of day, lengths of messages, etc.
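As a heavily simplified sketch of that kind of training (my own toy example, not anything from this post; the messages, labels, and model choice are invented), here is how a classifier ends up assigning per-word weights:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny labeled corpus: 1 = spam, 0 = not spam.
messages = [
    "win a free prize now",
    "cheap meds online, click here",
    "meeting moved to 3pm",
    "lunch tomorrow?",
]
labels = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(messages)            # bag-of-words counts
clf = LogisticRegression().fit(X, labels)  # learns one weight per word

# The learned weights are the "indicators" described above:
# positive pushes toward spam, negative toward not-spam.
for word, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{word:10s} {weight:+.3f}")

print(clf.predict(vec.transform(["claim your free prize"])))
```

The point is only that the indicators are learned from whatever labeled data we supply, which is exactly how our own biases can ride along into the model.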

Now complicate the data and the request, run this through an artificial neural network, and you have Deep Learning that will come up with models that may be beyond human understanding. Ask DL why it made a particular move in a game of Go or why it recommended increasing police patrols on the corner of Elm and Maple, and it may not be able to give an answer that human brains can comprehend.

We know from experience that machine learning can re-express human biases built into the data we feed it. Cathy O’Neil’s Weapons of Math Destruction contains plenty of evidence of this. We know it can happen not only inadvertently but subtly. With Deep Learning, we can be left entirely uncertain about whether and how this is happening. We can certainly adjust DL so that it gives fairer results when we can tell that it’s going astray, as when it only recommends white men for jobs or produces a freshman class with 1% African Americans. But when the results aren’t that measurable, we can be using results based on bias and not know it. For example, is anyone running the metrics on how many books by people of color Amazon recommends? And if we use DL to evaluate complex tax law changes, can we tell if it’s based on data that reflects racial prejudices?[1]

So this is not to say that we shouldn’t use machine learning or deep learning. That would remove hugely powerful tools. And of course we should and will do everything we can to keep our own prejudices from seeping into our machines’ algorithms. But it does mean that when we are dealing with literally inexplicable results, we may well not be able to tell if those results are based on biases.

In short: Any sufficiently advanced technology is indistinguishable from prejudice.[2]

[1] We may not care, if the result is a law that achieves the social goals we want, including equal and fair treatment of taxpayers regardless of race.

[2] Please note that that does not mean that advanced technology is prejudiced. We just may not be able to tell.


May 15, 2017

[liveblog][AI] AI and education lightning talks

Sara Watson, a BKC affiliate and a technology critic, is moderating a discussion at the Berkman Klein/Media Lab AI Advance.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Karthik Dinakar of the Media Lab points out that what we see in the night sky is in fact distorted by the way gravity bends light, which Einstein called a “gravitational lens.” The same is true for AI: the distortion is often in the data itself. Karthik works on how to help researchers recognize that distortion. He gives an example of how to capture both the cardiologist’s and the patient’s lenses to better diagnose women’s heart disease.

Chris Bavitz is the head of BKC’s Cyberlaw Clinic. To help law students understand AI and tech, the Clinic encourages interdisciplinarity. They also help students think critically about the roles of the lawyer and the technologist. The Clinic prefers that these relationships form early, although thinking too hard about law early on can diminish innovation.

He points to two problems that represent two poles. First, IP and AI: running AI against protected data. Second, issues of fairness, rights, etc.

Leah Plunkett is a professor at the Univ. of New Hampshire Law School and a BKC affiliate. Her topic: How can we use AI to teach? She points out that if Tom Sawyer were real and alive today, he’d be arrested for what he does just in the first chapter. Yet we teach the book as a classic. We think we love a little mischief in our lives, but we apparently don’t like it in our kids: we kick them out of schools. E.g., of 49M students in public schools in 20-11, 3.45M were suspended, and 130,000 students were expelled. These punishments disproportionately affect children from marginalized segments.

Get rid of the BS safety justification: the government ought to be teaching all our children without exception. So, maybe have AI teach them?

Sara: So, what can we do?

Chris: We’re thinking about how we can educate state attorneys general, for example.

Karthik: We are so far from getting users, experts, and machine learning folks together.

Leah: Some of it comes down to buy-in and translation across vocabularies and normative frameworks. It helps to build trust to make these translations better.

[I missed the Q&A from this point on.]

