Joho the Blog - Let's just see what happens

May 16, 2018

[liveblog] Aubrey de Grey

I’m at the CUBE Tech conference in Berlin. (I’m going to give a keynote on the book I’m finishing.) Aubrey de Grey begins his keynote by changing the question from “Who wants to get old?” to “Who wants Alzheimer’s?” because we’ve been brainwashed into thinking that aging is somehow good for us: we get wiser, get to retire, etc. Now we are developing treatments for aging. Ambiguity about aging is now “hugely damaging” because it hinders the support of research. E.g., his SENS Research Foundation is going too slowly because of funding constraints.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

“The defeat of aging via medicine is foreseeable now.” He says he has to establish his credibility because people have been saying this forever and have been wrong.

“Why is aging still a problem?” One hundred years ago, a third of babies would die before they were one year old. We fixed this in the industrialized world through simple advances, e.g., hygiene, mosquito control, antibiotics. So why are the diseases of old age so much harder to control? People think it’s because so many things go wrong with us late in life, interacting with one another and creating incredible complexity. But that’s not the main answer.

“Aging is easy to define: it is a side effect of being alive.” “It’s a fact that the operation of the human body generates damage.” The damage accumulates. The body tolerates a certain amount. When you pass that amount, you get the pathologies of old age. Our approach has been to develop geriatric medicine to counteract those pathologies. That’s where most of the research goes.

[Image: Aubrey de Grey’s metabolism diagram]

“Metabolism: The ultimate undocumented spaghetti code”

But that won’t work because the damage continues. Geriatric medicine bangs away at the pathologies, but will necessarily become less effective over time. “We make this mistake because of a misclassification we make.”

If you ask people to make categories of disease, they’ll come up with communicable, congenital, and chronic. Then most people add a fourth way of being sick: aging itself. It includes frailty, sarcopenia (loss of muscle), immunosenescence (aging of the immune system)… But that’s silly. Aging in a living organism is the same as aging in a machine: “Aging is the accumulation of damage to the body that occurs as an intrinsic side-effect of the body’s normal operation.” That means the categories are right, except that aging covers columns 3 and 4. Column 3 — specific diseases such as Alzheimer’s and cancer — is also part of aging. This means that aging isn’t a blessing in disguise, and that we can’t say that the diseases in column 3 are high priorities for medicine while those in column 4 are not.

A hundred years ago a few people started to think about this and realized that if we tried to interfere with the process of aging earlier on, we’d do better. This became the field of gerontology. Some species age much more slowly than others. Maybe we can figure out the basis for that variation. But metabolism is really, really complicated. “This is the ultimate nightmare of uncommented spaghetti code.” We know so little about how the body works.

“There is another approach. And it’s completely bleeding obvious”: periodically repair the damage. We don’t need to slow down the rate at which metabolism causes damage; that would mean engineering a system we don’t understand. To repair the damage, “we don’t need to understand how metabolism causes damage.” Nor do we need to know what to do when the damage is too great, because we’re not going to let it get to that state. We do this with, say, antique cars. Preventive maintenance works. “The only question is, can we do it for a much more complicated machine like the human body?”
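
To see why periodic repair can beat slowing the damage rate, here is a toy model of the argument. It is my illustration with invented numbers, not anything from the talk: damage accumulates at a steady metabolic rate, pathology appears once damage crosses a tolerance threshold, and repair periodically removes a fraction of the accumulated damage without touching the rate.

```python
def years_until_pathology(rate=1.0, threshold=60, repair_every=None,
                          repair_fraction=0.0, max_years=200):
    """Years until accumulated damage crosses the tolerance threshold."""
    damage = 0.0
    for year in range(1, max_years + 1):
        damage += rate  # metabolism generates damage as a side effect
        if repair_every and year % repair_every == 0:
            damage *= 1 - repair_fraction  # repair removes damage, not the rate
        if damage >= threshold:
            return year  # pathologies of old age begin
    return None  # damage stays below tolerance for the whole horizon

print(years_until_pathology())                                      # 60: do nothing
print(years_until_pathology(rate=0.8))                              # 75: slow metabolism
print(years_until_pathology(repair_every=10, repair_fraction=0.5))  # None: periodic repair
```

Slowing the rate only postpones the threshold crossing; repair, even though it ignores how the damage is generated, keeps the total permanently below tolerance.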

“We’re sidestepping our ignorance of metabolism and pathology. But we have to cope with the fact that damage is complicated.” All of the types of damage, from cell loss to extracellular matrix stiffening — there are 7 categories — can be addressed through a single strategy: repair the damage itself. E.g., loss of cells can be repaired by replacing them using stem cells. Unfortunately, most of the funding is going only to this first category. SENS was created to enable research on the other six. Aubrey talks about SENS’ work on protecting cells from the bad effects of cholesterol.

He points to another group (unnamed) that has reinvented this approach and is getting a lot of notice.

He says longevity is not what people think it is. These therapies will let people stay alive longer, but they will also stay youthful longer. “Longevity is a side effect of health.”

Will this be only for the rich? Overpopulation? Boredom? Pension collapse? We’re already taking care of overpopulation by cleaning up its effects, he says. There are solutions to these problems. But there are choices we have to make. No one wants Alzheimer’s. We can’t have it both ways: either we want to keep people healthy or we don’t.

He says SENS has been successful enough that they’ve been able to spin out some of the research into commercial operations. But we need to carry on in the non-profit research world as well. Project 21 aims at human rejuvenation clinical trials.

3 Comments »

Banks everywhere

I just took a 45 minute walk through Berlin and did not pass a single bank. I know this because I was looking for an ATM.

In Brookline, you can’t walk a block without passing two banks. When a local establishment goes out of business, the chances are about 90 percent that a bank is going to go in. The town is now 83 percent banks.[1]

[Image: pie chart of businesses]

Lovely.

[1] All figures are approximate.

Be the first to comment »

May 10, 2018

When Edison chose not to invent speech-to-text tech

In 1911, the former mayor of Kingston, Jamaica, wrote a letter [pdf] to Thomas Alva Edison declaring that “The days of sitting down and writing one’s thoughts are now over” … at least if Edison were to agree to take his invention of the voice recorder just one step further and invent a device that transcribes voice recordings into typewritten text. It was, alas, an idea too audacious for its time.

Here’s the text of Philip Cohen Stern’s letter:

Dear Sir :-

Your world wide reputation has induced me to trouble you with the following :-

As by talking in the in the Gramaphone [sic] we can have our own voices recorded why can this not in some way act upon a typewriter and reproduce the speech in typewriting

Under the present condition we dictate our matter to a shorthand writer who then has to typewrite it. What a labour saving device it would be if we could talk direct to the typewriter itself! The convenience of it would be enormous. It frequently occurs that a man’s best thoughts occur to him after his business hours and afetr [sic] his stenographer and typist have left and if he had such an instrument he would be independent of their presence.

The days of sitting down and writing out one’s thoughts are now over. It is not alone that there is always the danger in the process of striking out and repairing as we go along, but I am afraid most business-men have lost the art by the constant use of stenographer and their thoughts won’t run into their fingers. I remember the time very well when I could not think without a pen in my hand, now the reverse is the case and if I walk about and dictate the result is not only quicker in time but better in matter; and it occurred to me that such an instrument as I have described is possible and that if it be possible there is no man on earth but you who could do it

If my idea is worthless I hope you will pardon me for trespassing on your time and not denounce me too much for my stupidity. If it is not, I think it is a machine that would be of general utility not only in the commercial world but also for Public Speakers etc.

I am unfortunately not an engineer only a lawyer. If you care about wasting a few lines on me, drop a line to Philip Stern, Barrister-at-Law at above address, marking “Personal” or “Private” on the letter.

Yours very truly,
[signed] Philip Stern.

At the top, Edison has written:

The problem you speak of would be enormously difficult I cannot at present time imagine how it could be done.

The scan of the letter lives at Rutgers’ Thomas A. Edison Papers Digital Edition site: “Letter from Philip Cohen Stern to Thomas Alva Edison, June 5th, 1911,” Edison Papers Digital Edition, accessed May 6, 2018, http://edison.rutgers.edu/digital/items/show/57054. Thanks to Rutgers for mounting the collection and making it public. And a special thanks to Lewis Brett Smiler, the extremely helpful person who pointed out Stern’s letter to my sister-in-law, Meredith Sue Willis, as a result of a talk she gave recently on The Novelist in the Digital Age.

By the way, here’s Philip Stern’s obituary.

4 Comments »

May 8, 2018

Net Neutrality red alert

The Senate is voting on Net Neutrality.

Last chance before we throw the Republicans out of office.

1 Comment »

May 6, 2018

[liveblog][ai] Primavera De Filippi: An autonomous flower that merges AI and Blockchain

Primavera De Filippi is an expert in blockchain-based tech. She is giving a ThursdAI talk (an event series held by Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab) on Plantoid. Her talk is officially on operational autonomy vs. decisional autonomy, but it’s really about how weird things become when you build a computerized flower that merges AI and the blockchain. For me, a central question of her talk was: Can we have autonomous robots that have legal rights and can own and spend assets, without having to resort to conferring personhood on them the way we have with corporations?

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Autonomy and liability

She begins by pointing to the 3 industrial revolutions so far: Steam led to mechanized production; Electricity led to mass production; Electronics led to automated production. The fourth — AI — is automating knowledge production.

People are increasingly moving into the digital world, and digital systems are moving back into the physical worlds, creating cyber-physical systems. E.g., the Internet of Things senses, communicates, and acts. The Internet of Smart Things learns from the data the things collect, makes inferences, and then acts. The Internet of Autonomous Things creates new legal challenges. Various actors can be held liable: manufacturer, software developer, user, and a third party. “When do we apply legal personhood to non-humans?”

With autonomous things, the user and third parties become less liable as the software developer takes on more of the liability: there can be a bug; someone can hack into the device; the rules that make inferences can be inaccurate; or a bad moral choice can lead the car into an accident.

The software developer might have created bug-free software, but its interaction with other devices might lead to unpredictability; multiple systems operating according to different rules might be incompatible; and it can be hard to identify the chain of causality. So, who will be liable? The manufacturers and owners are likely to have only limited liability.

So, maybe we’ll need generalized insurance: mandatory insurance that potentially harmful devices need to subscribe to.

Or, perhaps we will provide some form of legal personhood to machines so the manufacturers can be sued for their failings. Suing a robot would be like suing a corporation. The devices would be able to own property and assets. The EU is thinking about creating this type of agenthood for AI systems. This is obviously controversial. At least a corporation has people associated with it, while the device is just a device, Primavera points out.

So, when do we apply legal personhood to non-humans? In addition to people and corporations, some countries have assigned personhood to chimpanzees (Argentina, France) and to natural resources (NZ: Whanganui river). We do this so these entities will have rights and cannot be simply exploited.

If we give legal personhood to AI-based systems, can AIs have property rights over their assets and IP? If they are legally liable, can they be held responsible for their actions and sued for compensation? “Maybe they should have contractual rights so they can enter into contracts. Can they be rewarded for their work? Taxed?” [All of these are going to turn out to be real questions. … Wait for it …]

Limitations: “Most of the AI-based systems deployed today are more akin to slaves than corporations.” They’re not autonomous the way people are. They are owned, controlled and maintained by people or corporations. They act as agents for their operators. They have no technical means to own or transfer assets. (Primavera recommends watching the Star Trek: The Next Generation episode “The Measure of a Man,” which asks, among other things, whether Data (the android) can be dismantled and whether he can resign.)

Decisional autonomy is the capacity to make a decision on your own, but it doesn’t necessarily bring what we think of as real autonomy. E.g., an AV can decide its route. For real autonomy we need operational autonomy: no one is maintaining the thing’s operation at a technical level. To take a non-random example, a blockchain runs autonomously because there is no single operator controlling it. E.g., smart contracts come with a guarantee of execution: once a contract is registered with a blockchain, no operator can stop it. This is operational autonomy.

Blockchain meets AI. Object: Autonomy

We are seeing the first examples of autonomous devices using blockchain. The most famous is the Samsung washing machine that can detect when the soap is empty and make a smart contract to order more. Autonomous cars could work on the same model: they could be owned by no one and collect money when someone uses them. They could be initially purchased by someone and then buy themselves off: “They’d have to be emancipated,” she says. Perhaps they and other robots could use the capital they accumulate to hire people to work for them. [Pretty interesting model for an Uber.]

She introduces Plantoid, a blockchain-based life form. “It’s autonomous, self-sufficient, and can reproduce.” Real flowers use bees to reproduce; Plantoids use humans to collect capital for their reproduction. Their bodies are mechanical. Their spirit is an Ethereum smart contract that collects cryptocurrency. When you feed the Plantoid Primavera has brought some currency, it says thank you and nods its flower. When it gets enough funds to reproduce itself, it triggers a smart contract that activates a call for bids to create the next version of the Plantoid. In the “mating phase” it looks for a human to create the new version. People vote with micro-donations. Then it identifies a winner and hires that human to create the new one.
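
A schematic sketch of that lifecycle, in Python rather than Solidity; the class shape, names, and threshold are invented for illustration and are not taken from the actual Plantoid smart contract:

```python
class Plantoid:
    """Toy model of the described lifecycle: collect donations, then run
    a call for bids and hire the winning human to build the next version."""

    def __init__(self, reproduction_threshold):
        self.reproduction_threshold = reproduction_threshold
        self.balance = 0.0
        self.bids = {}  # artist -> accumulated micro-donation votes

    def donate(self, amount):
        self.balance += amount
        print("thank you")  # the physical flower nods
        return self.balance >= self.reproduction_threshold  # mating phase?

    def vote(self, artist, micro_donation):
        # humans vote on who creates the next version by micro-donating
        self.bids[artist] = self.bids.get(artist, 0.0) + micro_donation

    def reproduce(self):
        winner = max(self.bids, key=self.bids.get)    # identify the winning bid
        self.balance -= self.reproduction_threshold   # the funds pay the artist
        return winner

plantoid = Plantoid(reproduction_threshold=10.0)
while not plantoid.donate(2.5):
    pass  # keep feeding it cryptocurrency
plantoid.vote("artist_a", 1.0)
plantoid.vote("artist_b", 3.0)
print(plantoid.reproduce())  # -> artist_b, hired to build the next Plantoid
```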

There are many Plantoids in the world. Each has its own “DNA,” which new artists can add to. E.g., each artist has to decide on its governance, such as whether it will donate some funds to charity. The aim is to make it more attractive to contribute to. The most fit get the most money and reproduce themselves. Burning Man this summer is going to feature Plantoids.

Every time one reproduces, a small cut is given to the pattern that generated it, and some to the new designer. This flips copyright on its head: the artist has an incentive to make her design more visible and accessible and attractive.

So, why provide legal personhood to autonomous devices? We want them to be able to own their own assets, to assume contractual rights, and to have legal capacity so they can sue and be sued, and limit their liability. “Blockchain lets us do that without having to declare the robot to be a legal person.”

The plant effectively owns the cryptofunds. The law cannot affect this. Smart contracts are enforced by code.

Who are the parties to the contract? The original author and new artist? The master agreement? Who can sue who in case of a breach? We don’t know how to answer these questions yet.

Can a plantoid sue for breach of contract? Not if the legal system doesn’t recognize it as a legal person. So who is liable if the plant hurts someone? Can we provide a mechanism for this without conferring personhood? “How do you enforce the law against autonomous agents that cannot be stopped and whose property cannot be seized?”

Q&A

Q: Could you do this with live plants? People would bioengineer them…

A: Yes. Plantoid has already been forked this way. There’s an idea for a forest offering trees to be cut down, with the compensation going to the forest which might eventually buy more land to expand itself.

My interest in this grew out of my interest in decentralized organizations. This enables a project to be an entity that assumes liability for its actions, and to reproduce itself.

Q: [me] Do you own this plantoid?

A: Hmm. I own the physical instantiation but not the code or the smart contract. If this one broke, I could make a new one that connects to the same smart contract. If someone gets hurt because it falls on them, I’m probably liable. If the smart contract is funding terrorism, I’m not the owner of that contract. The physical object is doing nothing but reacting to donations.

Q: But the aim of its reactions is to attract more money…

A: It will be up to the judge.

Q: What are the most likely scenarios for the development of these weird objects?

A: A blockchain can provide the interface for humans interacting with each other without needing a legal entity, such as Uber, to centralize control. But you need people to decide to do this. The question is how these entities change the structure of the organization.

Be the first to comment »

April 29, 2018

Live, from a comet!

This is the comet 67P/Churyumov-Gerasimenko as seen from the European Space Agency’s Rosetta flyby.

The video was put together via crowd-sourcing.

A reliable source tells me that it is not snowing on the comet. Rather, what you’re seeing is dust and star streaks.

Can you imagine telling someone that this would be possible not so very long ago?

Be the first to comment »

April 27, 2018

[liveblog][ai] Ben Green: The Limits of "Fair" Algorithms

Ben Green is giving a ThursdAI talk on “The Limits, Perils, and Challenges of ‘Fair’ Algorithms for Criminal Justice Reform.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

In 2016, the COMPAS algorithm became a household name (in some households) when ProPublica showed that it predicted that black men were twice as likely as white men to jump bail. People justifiably got worried that algorithms can be highly biased. At the same time, we think that algorithms may be smarter than humans, Ben says. These have been the poles of the discussion. Optimists think that we can limit the bias to take advantage of the added smartness.

There have been movements toward risk assessments for bail, rather than money bail. E.g., Rand Paul and Kamala Harris have introduced the Pretrial Integrity and Safety Act of 2017. There have also been movements to use the scores only to reduce pretrial detention, not to increase it.

But are we asking the right questions? Yes, the criminal justice system would be better if judges could make more accurate and unbiased predictions, but it’s not clear that machine learning can do this. So, two questions: 1. Is ML an appropriate tool for this? 2. Is implementing ML algorithms an effective strategy for criminal justice reform?

#1: Is ML an appropriate tool to help judges make more accurate and unbiased predictions?

ML relies on data about the world. This can produce tunnel vision by causing us to focus on particular variables that we have quantified, and ignore others. E.g., when it comes to sentencing, a judge balances deterrence, rehabilitation, retribution, and incapacitating a criminal. COMPAS predicts recidivism, but none of the other factors. This emphasizes incapacitation as the goal of sentencing. This might be good or bad, but the ML has shifted the balance of factors, framing the decision without policy review or public discussion.

Q: Is this for sentencing or bail? Because incapacitation is a more important goal in sentencing than in bail.

A: This is about sentencing. I’ll be referring to both.

Data is always about the past, Ben continues. ML finds statistical correlations among inputs and outputs. It applies those correlations to the new inputs. This assumes that those correlations will hold in the future; it assumes that the future will look like the past. But if we’re trying to reform the judicial system, we don’t want the future to look like the past. ML can thus entrench historical discrimination.

Arguments about the fairness of COMPAS are often based on competing mathematical definitions of fairness (illustrated below). But we could also think about the scope of what we count as fair. ML tries to make a very specific decision: among a population, who recidivates? If you take a step back and consider the broader context of the data and the people, you would recognize that blacks recidivate at a higher rate than whites because of policing practices, economic factors, racism, etc. Without these considerations, you’re throwing away the context and accepting the current correlations as the ground truth. And even if the underlying conditions change, the algorithm won’t reflect the change unless you retrain it.

Q: Who retrains the data?

A: It depends on the contract the court system has.
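
To make the “competing mathematical definitions of fairness” concrete, here is a small illustration with invented numbers (not the COMPAS data). A score can have equal precision across two groups, roughly the definition COMPAS’s defenders appealed to, while producing very different false positive rates, roughly what ProPublica measured; when base rates differ, the two definitions cannot both be satisfied.

```python
def rates(tp, fp, tn, fn):
    """Confusion-matrix counts -> two competing fairness measures."""
    return {
        "precision": tp / (tp + fp),            # P(recidivates | flagged high risk)
        "false_positive_rate": fp / (fp + tn),  # P(flagged | does not recidivate)
    }

# Invented counts: group A has a higher measured base rate than group B.
print(rates(tp=50, fp=20, tn=80, fn=30))   # precision ~0.71, FPR 0.20
print(rates(tp=15, fp=6, tn=154, fn=25))   # precision ~0.71, FPR ~0.04
```

Equal precision, a five-fold difference in false positives: each side can claim the score is “fair” under its own definition.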

Algorithms are not themselves a natural outcome of the world. Subjective decisions go into making them: which data to input, choosing what to predict, etc. The algorithms are brought into court as if they were facts. Their subjectivity is out of the frame. A human expert would be subject to cross examination. We should be thinking of algorithms that way. Cross examination might include asking how accurate the system is for the particular group the defendant is in, etc.

Q: These tools are used in setting bail or a sentence, i.e., before or after a trial. There may not be a venue for cross examination.

A: In the Loomis case, an expert witness testified that the algorithm was misused. That’s not exactly what I’m suggesting; they couldn’t get at all of it because of the trade secrecy of the algorithms.

Back to the framing question. If you can make the individual decision points fair, we sometimes think we’ve made the system fair. But technocratic solutions tend to sanitize rather than alter. You’re conceding the overall framework of the system, overlooking more meaningful changes. E.g., in NY, 71% of voters support ending pre-trial jail for misdemeanors and non-violent felonies. Maybe we should consider that. Or consider that cutting food stamps has been shown to increase recidivism. Or perhaps we should reconsider the wisdom of preventative detention, which was only introduced in the 1980s. Focusing on the tech takes the focus off these sorts of reforms.

Also, technocratic reforms are subject to political capture. E.g., NJ replaced money bail with a risk assessment tool. After some of the people released committed crimes, they changed the tool so that certain crimes were removed from bail. What is an acceptable risk level? How to set the number? Once it’s set, how is it changed?

Q: [me] So, is your idea that these ML tools drive out meaningful change, so we ought not to use them?

A: Roughly, yes.

[Much interesting discussion which I have not captured. E.g., Algorithms can take away the political impetus to restore bail as simply a method to prevent flight. But sentencing software is different, and better algorithms might help, especially if the algorithms are recommending sentences but not imposing them. And much more.]

2. Do algorithms actually help?

How do judges use algorithms to make a decision? Even if the algorithm were perfect, would it improve the decisions judges make? We don’t have much of an empirical answer.

Ben was talking to Jeremy Heffner at HunchLab. They make predictive policing software and are well aware of the problem of bias. (“If there’s any bias in the system it’s because of the crime data. That’s what we’re trying to address.” — Heffner) But all of the suggestions they give to police officers are called “missions,” which is in the military/jeopardy frame.

People are bad at incorporating quantitative data into decisions. And they filter info through their biases. E.g., the “ban the box” campaign to remove the tick box about criminal backgrounds on job applications actually increased racial discrimination because employers assumed the white applicants were less likely to have arrest records. (Agan and Starr 2016) Also, people have been shown to interpret police camera footage according to their own prior opinions about the police. (Sommers 2016)

Evidence from Kentucky (Stevenson 2018): mandatory risk assessments for bail produced only a small increase in pretrial release, and even that change eroded over time as judges returned to their previous habits.

So, we need to be asking the empirical question of how judges actually use these predictions. And should judges incorporate these predictions into their decisions?

Ben’s been looking at the first question: how do judges use algorithmic predictions? He’s running experiments on Mechanical Turk, showing people profiles of defendants — a couple of sentences about the crime, race, and previous arrest record. The Turkers have to give a prediction of recidivism. Ben knows which ones actually recidivated. Some are also given a recommendation based on an algorithmic assessment. That risk score might be the actual one, random, or biased; the Turkers don’t know which.
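
A sketch of how those score conditions might be implemented; the field names, score scale, and biasing rule are my guesses for illustration, not Ben’s actual design:

```python
import random

def assign_score(profile, condition):
    """Return the risk score shown to a participant, per experimental arm."""
    if condition == "none":
        return None  # control: profile only, no algorithmic recommendation
    if condition == "actual":
        return profile["risk_score"]  # the real assessment
    if condition == "random":
        return random.randint(1, 10)  # uninformative score
    if condition == "biased":
        shift = 2 if profile["race"] == "black" else -2  # hypothetical bias
        return min(10, max(1, profile["risk_score"] + shift))

profile = {"crime": "nonviolent theft", "race": "black",
           "prior_arrests": 2, "risk_score": 4}
for condition in ("none", "actual", "random", "biased"):
    print(condition, assign_score(profile, condition))
```

Comparing participants’ predictions (and their racial skew) across the arms shows how much the displayed number, accurate or not, anchors the human judgment.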

Q: It might be different if you gave this test to judges.

A: Yes, that’s a limitation.

Q: You ought to give some a percentage of something unrelated, e.g., it will rain, just to see if the number is anchoring people.

A: Good idea

Q: [me] Suppose you find that the Turkers’ assessment of risk is more racially biased than the algorithm…

A: Could be.

[More discussion until we ran out of time. Very interesting.]

3 Comments »

April 5, 2018

[liveblog] Neil Gaikwad: Human-AI Collaboration for Sustainable Market Design

I’m at a ThursdAI talk (Harvard’s Berkman Klein Center for Internet & Society and the MIT Media Lab) being given by Neil Gaikwad (Twitter: @neilthemathguy), a Ph.D. student at the Media Lab, in the Space Enabled group.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Markets and institutions are parts of a complex ecosystem, Neil says. His research looks at data from satellites that shows how the Earth is changing: crops, water, etc. Once you’ve gathered the data, you can use machine learning to visualize the changes. There are ecosystems, including of human behavior, that are affected by these changes, and they in turn affect markets and institutions. E.g., a drought may require an institutional response and affect markets.

Traditional markets, financial markets, and gig economies all share characteristics. Farmers markets are complex ecosystems of people with differing information and different amounts of it, i.e. asymmetric info. Same for financial markets. Same for gig economies.

Indian markets have been failing; there have been 300,000 farmer suicides in the last 30 years. Stock markets have crashed suddenly due to black-box trading; in some cases we still don’t know why. And London has banned Uber. So, it doesn’t matter which markets or institutions we look at: they’re losing our trust.

An article in New Scientist asked what we can do to regain this trust. For black-box AI, there are questions of fairness and equity. But what would human-machine collaboration be like? Are there design principles for markets?

Neil stops for us to discuss.

Q: How do you define justice?

A: Good question. Fairness? Freedom? The designer has a choice about how to define it.

Q: A UN project created an IT platform that put together farmers and direct consumers. The pricing seemed fairer to both parties. So, maybe avoid intermediaries, as a design principle?

Neil continues. So, what is the concept of justice here?

1. Rawls and Kant: Transcendental institutionalism. It’s deontological: follow principles for perfect justice, and use those principles to define a perfect institution. The properties are defined by a social contract. But it doesn’t work, as in the examples we just saw. What is missing? People and society. [I.e., you run the institution according to principles, but that doesn’t guarantee that the outcome will be fair and just. My example: early Web enthusiasts like me thought the Web was an institution built on openness, equality, creative anarchy, etc., yet that obviously doesn’t ensure that the outcome will share those properties.]

2. Realization-focused institutionalism (Sen 2009): How to reverse this trend. It is consequentialist: what will be the consequences of the design of an institution? It’s a comparative assessment of different forms of institutions. Instead of asking for the perfectly just society, Sen asks how justice can be advanced. The most critical tool for evaluating any institution is to look at how people’s lives actually change.

Sen argues that principles are important. They can be expressed by “niti,” Sanskrit for rules and institutions. But you also need nyaya: a form of social arrangement that makes sure that those rules are obeyed. These rules come from social choice, not social contract.

Example: gig economies. The data comes from Mechanical Turk, Upwork, CrowdFlower, etc. This creates employment for many people, but it’s tough work, e.g., identifying images. Supervised learning needs these labels: the Turkers and their peers do the labeling that trains the image recognition systems, and they make almost no money at it. This is the wicked problem of market design: a worker can have identifications rejected, sometimes with demeaning comments.

“The Market for Lemons” (Akerlof 1970): all the used cars started to look alike to buyers, and now all gig workers look alike to those who hire them: there’s no credit given for the particular value a worker brings to the labor.

So, who owns the data? Who has a stake in the models? In the intellectual property?

If you’re a gig worker, you’re working with strangers. You don’t know the reputation of the person giving you data, or renting you the Airbnb apartment. So, let’s posit a rule: reputation is the backbone. But in sharing economies most of the ratings are the highest possible: reputation inflation. So, can we trust reputation? This happens because people have no incentive to rate honestly; there’s social pressure to give a positive rating.

So, thinking about Sen, can we create an incentive for honest reputation? Neil’s group has been thinking about a system [I thought he said Boomerang, but I can’t find that]. It looks at the workers’ incentives, and at the workers’ ratings of each other. If you’re a requester, you’ll see the workers you like first, as in the sketch below.
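
As I understood the incentive idea, it works roughly like this: your own past ratings determine which workers you are matched with first, so inflating a rating rebounds on you. The names and rating scale here are illustrative, not from the actual system:

```python
def match_order(workers, my_ratings, default=3.0):
    """Order my future task feed by my own past ratings of each worker."""
    return sorted(workers, key=lambda w: my_ratings.get(w, default), reverse=True)

my_ratings = {"worker_a": 5, "worker_b": 2}  # honest ratings
print(match_order(["worker_a", "worker_b", "worker_c"], my_ratings))
# ['worker_a', 'worker_c', 'worker_b'] -- politely inflating worker_b's
# rating would put them back at the top of my own queue, so honesty pays
```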

Does this help AI design?

Moral Machine has had 1.3M voters and 18M pairwise comparisons (i.e., people deciding to go straight or right). Can this be used as a voting-based system for ethical decision making (AAAI 2018)? You collect the pairwise preferences, learn a model of preference, come to a collective preference, and have voting rules for the collective decision.
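
The AAAI 2018 system learns a per-voter model of preferences and then aggregates those models; the toy counting rule below only illustrates the collect-then-aggregate pipeline, ranking outcomes by how often voters preferred them in pairwise dilemmas:

```python
from collections import defaultdict

# Each comparison is (spared, sacrificed) from one voter's dilemma answer.
comparisons = [("pedestrian", "passenger"),
               ("pedestrian", "passenger"),
               ("passenger", "pedestrian"),
               ("child", "pedestrian")]

wins, appearances = defaultdict(int), defaultdict(int)
for spared, sacrificed in comparisons:
    wins[spared] += 1
    appearances[spared] += 1
    appearances[sacrificed] += 1

# Collective preference: rank outcomes by pairwise win rate.
collective = sorted(appearances, key=lambda o: wins[o] / appearances[o],
                    reverse=True)
print(collective)  # ['child', 'pedestrian', 'passenger']
```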

Q: Aren’t you collecting preferences, not normative judgments? The data says people would rather kill fat people than skinny ones.

A: You need the social behavior but also rules. For this you have to bring people into the loop.

Q: How do we differentiate between what we say we want and what we really want?

A: There are techniques, such as “Bayesian Truth Serum.”

Conclusion: The success of markets, institutions, or algorithms is highly dependent on how they actually affect people’s lives. This thinking should be central to the design and engineering of socio-technical systems.

1 Comment »

April 4, 2018

A history of Internet addresses

For something I’m writing, I wanted to show what an Internet address looked like before the World Wide Web introduced the http:// and the www., but after the DNS — the domain name system — had been introduced. So I asked my friend Scott Bradner, who has been involved in Internet governance for a very long while. He recently retired after fifty years at Harvard University, where he managed networks, was chief security officer, and did so much more.

Scott is a generous teacher, so he answered far more fully than I’d hoped. Here, with his permission, is his response:



not such an easy answer – some facts
the ARPANET moved from NCP to TCP/IP on 1 Jan 1983
before then the Network Control Protocol used network addresses that looked like: 9 (the address for the PDP-10 at Harvard)
after 1 jan 1983 the addresses looked like 128.103.1.1 (also the address for the PDP-10 at Harvard)
and that is what the addresses look like to this day (IPv6 addresses look different)
before the DNS was deployed people used a “hosts.txt” file to map a human friendly name into a network address
so the hosts.txt file pre 1/1/83 had the following entry for harvard
Harv10 9
and the entries in hosts.txt file for harvard after 1/1/83 was
Harv10 128.103.1.1
Harvard 128.103.1.1
and later (still before DNS was deployed) another line was added:
harvard.harvard.edu      128.103.1.1
the user would type something like “ftp harv10” and the system would look up the name in hosts.txt to get the address
all DNS did was to turn the hosts.txt file (which was maintained centrally and was, by definition, out of
date by the time you finished downloading it) into a distributed set of servers/databases – each of which could
be kept up to date on its own and since that database was queried in real time, the response would be up to date
but even with the hosts.txt or DNS you could & still can use the underlying network address itself
e.g.: ftp 128.103.8.36 (my personal computer at the Harvard Psychology Department)
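
To make the pre-DNS step concrete, here is a toy resolver using the entries above. Resolution was exactly this kind of lookup in a centrally maintained local table, and typing a raw address skipped it entirely. (The all-digits test and function name are mine, for illustration.)

```python
HOSTS_TXT = {          # the post-1/1/83 entries from Scott's example
    "harv10": "128.103.1.1",
    "harvard": "128.103.1.1",
    "harvard.harvard.edu": "128.103.1.1",
}

def resolve(name_or_address):
    """What 'ftp harv10' did before DNS: consult the hosts.txt table."""
    if all(part.isdigit() for part in name_or_address.split(".")):
        return name_or_address                  # already a network address
    return HOSTS_TXT[name_or_address.lower()]   # table lookup, pre-DNS

print(resolve("Harv10"))        # -> 128.103.1.1
print(resolve("128.103.8.36"))  # -> 128.103.8.36, no lookup needed
```

DNS replaced the central table with a distributed set of servers queried in real time, but the name-to-address mapping it returns plays the same role.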

Be the first to comment »

April 2, 2018

"If a lion could talk" updated

“If a lion could talk, we could not understand him.”
— Ludwig Wittgenstein, Philosophical Investigations, 1953.

“If an algorithm could talk, we could not understand it.”
— Deep learning, Now.

1 Comment »
