Joho the Blog — culture Archives

July 4, 2018

Moral rights kill culture


Moral rights of creators are inventions grounded in a bad analogy with property rights.

If you want to maintain your “moral right” to what you’ve written, then don’t publish it.

If you publish it, you are making it public. Thank you for doing so.

You will make money from it for some fixed period — a period designed to provide you (but not necessarily Stephen King) with sufficient incentive to continue to create and publish works, but a short enough period that creative works can be assimilated by the culture.

Why put limits on the author’s exclusive right to publish? To keep culture lively. Which is the same as keeping that culture alive.

Cultural assimilation requires the freedom to talk about your work, to reuse it, misuse it, abuse it, to get it terribly wrong, to make it our own as individuals, to make it ours as a culture.

Imagine a Renaissance in which “moral rights” were enforced. Can’t.

Moral rights kill culture.

(Note that this applies to works that are published as copies. Please don’t take a hammer to any irreplaceable statues. Thanks.)


Creative Commons License
This work is licensed under a Creative Commons Attribution 2.0 Generic License.


June 19, 2018

Game addiction

From Jane Wakefield at the BBC:

Its 11th International Classification of Diseases (ICD) will include the condition “gaming disorder”.

The draft document describes it as a pattern of persistent or recurrent gaming behaviour so severe that it takes “precedence over other life interests”.

Oy. IMO, this will go down in history as a ludicrous example of the hysteria we’re living through, which I take as strong evidence of the depth of the changes the Internet is bringing. It’ll be the example of cultural hysteria cited right after the anti-comic-book hysteria of the 1950s.

That’s not to deny that some people suffer from the symptoms listed. But we don’t have a disease called “fingernail addiction” because some people chew their nails compulsively. Or TV addiction. These obsessive behaviors are (in my non-expert opinion) expressions of other issues, not caused by the object of the obsession. Or else games are a peculiarly finicky addictive substance. If only heroin were so selective!

Pardon me, but I left my character in Tom Clancy’s Wildlands sitting on the edge of an airfield, awaiting her 37th attempt to steal that frigging airplane.


May 16, 2018

[liveblog] Aubrey de Grey

I’m at the CUBE Tech conference in Berlin. (I’m going to give a first keynote on the book I’m finishing.) Aubrey de Grey begins his keynote by changing the question from “Who wants to get old?” to “Who wants Alzheimer’s?” because we’ve been brainwashed into thinking that aging is somehow good for us: we get wiser, get to retire, etc. Now we are developing treatments for aging. Ambiguity about aging is now “hugely damaging” because it hinders the support of research. E.g., his SENS Research Foundation is going too slowly because of funding constraints.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

“The defeat of aging via medicine is foreseeable now.” He says he has to be credible because people have been saying this forever and have been wrong.

“Why is aging still a problem?” One hundred years ago, a third of babies would die before they were one year old. We fixed this in the industrialized world through simple advances, e.g., hygiene, mosquito control, antibiotics. So why are diseases of old age so much harder to control? People think it’s because so many things go wrong with us late in life, interacting with one another and creating incredible complexity. But that’s not the main answer.

“Aging is easy to define: it is a side effect of being alive.” “It’s a fact that the operation of the human body generates damage.” It accumulates. The body tolerates a certain amount. When you pass that amount, you get pathologies of old age. Our approach has been to develop geriatric medicine to counteract those pathologies. That’s where most of the research goes.

[Slide: Aubrey de Grey’s metabolism diagram, captioned “Metabolism: The ultimate undocumented spaghetti code”]

But that won’t work because the damage continues. Geriatric medicine bangs away at the pathologies, but will necessarily become less effective over time. “We make this mistake because of a misclassification we make.”

If you ask people to make categories of disease, they’ll come up with communicable, congenital, and chronic. Then most people add a fourth way of being sick: aging itself. It includes frailty, sarcopenia (loss of muscle), immunosenescence (aging of the immune system)… But that’s silly. Aging in a living organism is the same as aging in a machine: “Aging is the accumulation of damage that occurs as an intrinsic side-effect of the body’s normal operation.” That means the categories are right, except that aging covers columns 3 and 4. Column 3 — specific diseases such as Alzheimer’s and cancer — is also part of aging. This means that aging isn’t a blessing in disguise, and that we can’t say that the diseases in column 3 are high priorities for medicine while those in column 4 are not.

A hundred years ago a few people started to think about this and realized that if we tried to interfere with the process of aging earlier on, we’d do better. This became the field of gerontology. Some species age much more slowly than others. Maybe we can figure out the basis for that variation. But the metabolism is really, really complicated. “This is the ultimate nightmare of uncommented spaghetti code.” We know so little about how the body works.

“There is another approach. And it’s completely bleeding obvious”: Periodically repair the damage. We don’t need to slow down the rate at which metabolism causes damage. We do need to engineer a system we don’t understand — but “we don’t need to understand how metabolism causes damage.” Nor do we need to know what to do when the damage is too great, because we’re not going to let it get to that state. We do this with, say, antique cars. Preventive maintenance works. “The only question is, can we do it for a much more complicated machine like the human body?”

“We’re sidestepping our ignorance of metabolism and pathology. But we have to cope with the fact that damage is complicated.” All of the types of damage, from cell loss to extracellular matrix stiffening — there are seven categories — can be repaired through a single approach: genetic repair. E.g., loss of cells can be repaired by replacing them using stem cells. Unfortunately, most of the funding is going only to this first category. SENS was created to enable research on the other six. Aubrey talks about SENS’ work on protecting cells from the bad effects of cholesterol.

He points to another group (unnamed) that has reinvented this approach and is getting a lot of notice.

He says longevity is not what people think it is. These therapies will let people stay alive longer, but they will also stay youthful longer. “Longevity is a side effect of health.”

Will this be only for the rich? Overpopulation? Boredom? Pensions collapse? We’re taking care of overpopulation by cleaning up its effects, he says. He says there are solutions to these problems. But there are choices we have to make. No one wants to get Alzheimers. We can’t have it both ways. Either we want to keep people healthy or not.

He says SENS has been successful enough that they’ve been able to spin out some of the research into commercial operations. But we need to carry on in the non-profit research world as well. Project 21 aims at human rejuvenation clinical trials.


Banks everywhere

I just took a 45-minute walk through Berlin and did not pass a single bank. I know this because I was looking for an ATM.

In Brookline, you can’t walk a block without passing two banks. When a local establishment goes out of business, the chances are about 90 percent that a bank is going to go in. The town is now 83 percent banks.[1]

[Pie chart of Brookline businesses]


[1] All figures are approximate.


May 10, 2018

When Edison chose not to invent speech-to-text tech

In 1911, the former mayor of Kingston, Jamaica, wrote a letter [pdf] to Thomas Alva Edison declaring that “The days of sitting down and writing one’s thoughts are now over” … at least if Edison were to agree to take his invention of the voice recorder just one step further and invent a device that transcribes voice recordings into text. It was, alas, an idea too audacious for its time.

Here’s the text of Philip Cohen Stern’s letter:

Dear Sir :-

Your world wide reputation has induced me to trouble you with the following :-

As by talking in the in the Gramaphone [sic] we can have our own voices recorded why can this not in some way act upon a typewriter and reproduce the speech in typewriting

Under the present condition we dictate our matter to a shorthand writer who then has to typewrite it. What a labour saving device it would be if we could talk direct to the typewriter itself! The convenience of it would be enormous. It frequently occurs that a man’s best thoughts occur to him after his business hours and afetr [sic] his stenographer and typist have left and if he had such an instrument he would be independent of their presence.

The days of sitting down and writing out one’s thoughts are now over. It is not alone that there is always the danger in the process of striking out and repairing as we go along, but I am afraid most business-men have lost the art by the constant use of stenographer and their thoughts won’t run into their fingers. I remember the time very well when I could not think without a pen in my hand, now the reverse is the case and if I walk about and dictate the result is not only quicker in time but better in matter; and it occurred to me that such an instrument as I have described is possible and that if it be possible there is no man on earth but you who could do it

If my idea is worthless I hope you will pardon me for trespassing on your time and not denounce me too much for my stupidity. If it is not, I think it is a machine that would be of general utility not only in the commercial world but also for Public Speakers etc.

I am unfortunately not an engineer only a lawyer. If you care about wasting a few lines on me, drop a line to Philip Stern, Barrister-at-Law at above address, marking “Personal” or “Private” on the letter.

Yours very truly,
[signed] Philip Stern.

At the top, Edison has written:

The problem you speak of would be enormously difficult I cannot at present time imagine how it could be done.

The scan of the letter lives at Rutgers’ Thomas A. Edison Papers Digital Edition site: “Letter from Philip Cohen Stern to Thomas Alva Edison, June 5th, 1911,” Edison Papers Digital Edition, accessed May 6, 2018. Thanks to Rutgers for mounting the collection and making it public. And a special thanks to Lewis Brett Smiler, the extremely helpful person who noted Stern’s letter to my sister-in-law, Meredith Sue Willis, as a result of a talk she gave recently on The Novelist in the Digital Age.

By the way, here’s Philip Stern’s obituary.


March 24, 2018

Sixteen speeches

In case you missed any of today’s speeches at The March for Our Lives, here’s a page that has sixteen of them.

I, on the other hand, am speechless.


January 11, 2018

Artificial water (+ women at PC Gamer)

I’ve long wondered — like for a couple of decades — when software developers who write algorithms that produce beautiful animations of water will be treated with the respect accorded to painters who create beautiful paintings of water. Both require the creators to observe carefully, choose what they want to express, and apply their skills to realizing their vision. When it comes to artistic vision or merit, are there any serious differences?

In the January issue of PC Gamer, Philippa Warr [twitter: philippawarr] — recently snagged from Rock, Paper, Shotgun — points to v r 3, a museum of water animations put together by Pippin Barr. (It’s conceivable that Pippin Barr is Philippa’s hobbit name. I’m just putting that out there.) The museum is software you download (here) that displays 24 varieties of computer-generated water, from the complex and realistic, to simple textures, to purposefully stylized low-information versions.


Philippa also points to the Seascape page by Alexander Alekseev, where you can read the code that procedurally produces an astounding graphic of the open sea. You can directly fiddle with the algorithm and immediately see the results. (Thank you, Alexander, for putting this out under a Creative Commons license.) Here’s a video someone made of the result:

Philippa also points to David Li’s Waves where you can adjust wind, choppiness, and scale through sliders.
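These procedural oceans all build on the same basic idea: the height of the water at any point is a sum of layered periodic functions of position and time. The real Seascape and Waves demos run fractal noise in GPU shaders; as a rough CPU illustration of the principle only, here is a minimal sum-of-sines height field (all wave parameters are made up for the sketch):

```python
import math

def water_height(x, z, t, waves=None):
    """Toy ocean surface: height at (x, z) at time t as a sum of
    traveling sine waves. Each wave is a tuple of
    (amplitude, wavelength, speed, direction_angle)."""
    if waves is None:
        # illustrative parameters, not taken from any real demo
        waves = [(0.5, 8.0, 1.0, 0.0),
                 (0.25, 4.0, 1.3, 0.7),
                 (0.1, 2.0, 1.7, 2.1)]
    h = 0.0
    for amp, wavelength, speed, angle in waves:
        k = 2 * math.pi / wavelength              # wave number
        dx, dz = math.cos(angle), math.sin(angle)  # travel direction
        phase = k * (dx * x + dz * z) - speed * k * t
        h += amp * math.sin(phase)
    return h
```

Evaluating this over a grid of (x, z) points per frame gives an animated surface; the realistic demos replace the plain sines with noise octaves and add lighting, but the layering trick is the same.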

More than ten years ago we got to the point where bodies of water look stunning in video games. (Falling water is a different question.) In ten years, perhaps we’ll be there with hair. In the meantime, we should recognize software designers as artists when they produce art.



Good work, PC Gamer, in increasing the number of women reviewers, and especially as members of your editorial staff. As a long-time subscriber I can say that their voices have definitely improved the magazine. More please!


December 15, 2017

[liveblog] Geun-Sik Jo on AR and video mashups

I’m at the STEAM ed Finland conference in Jyväskylä. Geun-Sik Jo is a professor at Inha University in Seoul. He teaches AI and is an augmented reality expert. He also has a startup using AR for aircraft maintenance [pdf].

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Why do we need mashups? To foster innovation. To create new systems very efficiently. He uses the integration of Google Maps into Craigslist as a simple example.

Interactive video — video with clickable spots — is often a mashup: click and go to a product page or send a text to a friend. E.g., http://www.raptmedia.

TVs today are computers with their own operating systems and applications. Prof. Jo shows a video of AR TV. The “screen” is a virtual image displayed on special glasses.

If we mash up services, linked data clouds, TV content, and social media into an AR device, “we can do nice things.”

He shows some videos created by his students that present AR objects that are linked to the Internet: a clickable travel ad, location-based pizza ordering, a very cool dance instruction video, a short movie.

He shows a demo of the AI Content Creation System that can make movies. In the example, it creates a drama, mashing up scenes from multiple movies. The system identifies the characters, their actions, and their displayed emotions.

Is this creative? “I’m not a philosopher. I’m explaining this from the engineering point of view,” he says modestly. [If remixes count as creativity — and I certainly think they do — then it’s creative. Does that mean the AI system is creative? Not necessarily in any meaningful sense. Debate amongst yourselves.]


[liveblog] Sonja Amadae on computational creativity

I’m at the STEAM ed Finland conference in Jyväskylä. Sonja Amadae at Swansea University (also currently at Helsinki U.) works on robotic ethics. She will argue in this talk that computers are algorithmic, that they only do what they’re programmed to do, that they don’t understand what they’re doing and they don’t feel human experience. AI is, she concludes, a tool.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

AI is like a human prosthetic that helps us walk. AI is an enhancement of human capabilities.

She will talk about about three cases.

Case 1: Generating a Rembrandt

A bank funded a project [cool site about it] to see what would happen if a computer had all of the data about Rembrandt’s portraits. They quantified the paintings: types, facial aspects including the size and distance of facial features, depth, contour, etc. They programmed the algorithm to create a portrait. The result was quite good. People were pleased. Of course, it painted a white male. Is that creativity?

We are now recognizing the biases widespread in AI. E.g., “Biased algorithms are everywhere, and no one seems to care” in MIT Tech Review by Will Knight. She also points to the time that Google mistakenly tagged black people as “gorillas.” So, we know there are limitations.

So, we fix the problem…and we end up with facial recognition systems so good that China can identify jaywalkers from surveillance cams, and then they post their images and names on large screens at the intersections.

Case 2: Forgery detection

The aim of one project was to detect forgeries. It was built on work done by Marits Michel van Dantzig in the 1950s. He looked at the brushstrokes on a painting; artists have signature brushstrokes. Each painting has on average 80,000 brushstrokes. A computer can compare a suspect painting’s brushstrokes with the legitimate brushstrokes of the artist. The result: the AI could identify forgeries 80% of the time from a single stroke.

Case 3: Computational creativity

She cites Wikipedia on Computational Creativity because she thinks it gets it roughly right:

Computational creativity (also known as artificial creativity, mechanical creativity, creative computing or creative computation) is a multidisciplinary endeavour that is located at the intersection of the fields of artificial intelligence, cognitive psychology, philosophy, and the arts.
The goal of computational creativity is to model, simulate or replicate creativity using a computer, to achieve one of several ends:

  • To construct a program or computer capable of human-level creativity.

  • To better understand human creativity and to formulate an algorithmic perspective on creative behavior in humans.

  • To design programs that can enhance human creativity without necessarily being creative themselves.

She also quotes John McCarthy:

‘To ascribe certain beliefs, knowledge, free will, intentions, consciousness, abilities, or wants to a machine or computer program is legitimate when such an ascription expresses the same information about the machine that it expresses about a person.’

If you google “computational art” you’ll see many pictures created computationally. [Or here.] Is it genuine creativity? What’s going on here?

We know that AI’s products can be taken as human. A poem created by AI won a poetry contest for humans, e.g., “A home transformed by the lightning the balanced alcoves smother.” But the AI doesn’t know it’s making a poem.

Can AI make art? Well, art is in the eye of the beholder, so if you think it’s art, it’s art. But philosophically, we need to recall Turing-Church Computability which states that the computation “need not be intelligible to the one calculating.” The fact that computers can create works that look creative does not mean that the machines have the awareness required for creativity.

Can the operations of the brain be simulated on a computer? The Turing-Church statement says yes. But now we have computing so advanced that it’s unpredictable, probabilistic, and is beyond human capability. But the computations need not be artistic to the one computing.

Computability has limits:

1. Information and data are not meaning or knowledge.

2. Every single moment of existence is unique in the universe. Every single moment we see a unique aspect of the world. A Turing computer can’t see the outside world. It only has what’s internal to it.

3. The human mind has existential experience.

4. The mind can reflect on itself.

5. Scott Aaronson says that humans can exercise free will and AI cannot, based on quantum theory. [Something about quantum free states.]

6. The universe has non-computable systems. Equilibrium paths?

“Aspect seeing” means that we can make a choice about how we see what we see. And each moment of each aspect is unique in time.

In SF, the SPCA uses a robot to chase away homeless people. Robots cannot exercise compassion.

Computers compute. Humans create. Creativity is not computable.


Q: [me] Very interesting talk. What’s at stake in the question?

A: AI has had such a huge presence in our lives. There’s a power in thinking about rationality as computation. It gets best articulated in game theory. Can we conclude that this game-theoretical rationality — the foundational understanding of rationality — is computable? Do humans bring anything to the table? This leads to an argument for the obsolescence of the human. If we’re just computational, then we aren’t capable of any creativity. Or free will. That’s what’s ultimately at stake here.

Q: Can we create machines that are irrational, and have them bring a more human creativity?

A: There are many more types of rationality than the game-theory sort. E.g., we are rational in connection with one another working toward shared goals. The dichotomy between the rational and irrational is not sufficient.


December 5, 2017

[liveblog] Conclusion of Workshop on Trustworthy Algorithmic Decision-Making

I’ve been at a two-day workshop sponsored by Michigan State University and the National Science Foundation: “Workshop on Trustworthy Algorithmic Decision-Making.” After multiple rounds of rotating through workgroups iterating on five different questions, each group presented its findings — questions, insights, areas of future research.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Seriously, I cannot capture all of this.

Conduct of Data Science

What are the problems?

  • Who defines and how do we ensure good practice in data science and machine learning?

Why is the topic important? Because algorithms are important. And they have important real-world effects on people’s lives.

Why is the problem difficult?

  • Wrong incentives.

  • It can be difficult to generalize practices.

  • Best practices may be good for one goal but not another, e.g., efficiency but not social good. Also: Lack of shared concepts and vocabulary.

How to mitigate the problems?

  • Change incentives

  • Increase communication via vocabularies, translations

  • Education through MOOCS, meetups, professional organizations

  • Enable and encourage resource sharing: an open source lesson about bias, code sharing, data set sharing

Accountability group

The problem: How to integratively assess the impact of an algorithmic system on the public good? “Integrative” = the impact may be positive and negative and affect systems in complex ways. The impacts may be distributed differently across a population, so you have to think about disparities. These impacts may well change over time.

We aim to encourage work that is:

  • Aspirationally causal: measuring outcomes causally, but not always through randomized controlled trials.

  • The goal is not to shut down algorithms but to make positive contributions that generate solutions.

This is a difficult problem because:

  • Lack of variation in accountability, enforcements, and interventions.

  • It’s unclear what outcomes should be measured and how. This is context-dependent.

  • It’s unclear which interventions are the highest priority

Why progress is possible: There’s a lot of good activity in this space. And it’s early in the topic so there’s an ability to significantly influence the field.

What are the barriers for success?

  • Incomplete understanding of contexts. So, think of it in terms of socio-cultural approaches, and make the work interdisciplinary.

  • The topic lies between disciplines. So, develop a common language.

  • High-level triangulation is difficult. Examine the issues at multiple scales, multiple levels of abstraction. Where you assess accountability may vary depending on what level/aspect you’re looking at.

Handling Uncertainty

The problem: How might we holistically treat and attribute uncertainty through data analysis and decision systems? Uncertainty exists everywhere in these systems, so we need to consider how it moves through a system. This runs from choosing data sources to presenting results to decision-makers and people impacted by these results, and beyond that its incorporation into risk analysis and contingency planning. It’s always good to know where the uncertainty is coming from so you can address it.

Why difficult:

  • Uncertainty arises from many places

  • Recognizing and addressing uncertainties is a cyclical process

  • End users are bad at evaluating uncertain info and incorporating uncertainty in their thinking.

  • Many existing solutions are too computationally expensive to run on large data sets

Progress is possible:

  • We have sampling-based solutions that provide a framework.

  • Some app communities are recognizing that ignoring uncertainty is reducing the quality of their work

How to evaluate and recognize success?

  • A/B testing can show that decision making is better after incorporating uncertainty into analysis

  • Statistical/mathematical analysis

Barriers to success

  • Cognition: Train users.

  • It may be difficult to break this problem into small pieces and solve them individually

  • Gaps in theory: many of the problems cannot currently be solved algorithmically.

The presentation ends with a note: “In some cases, uncertainty is a useful tool.” E.g., it can make the system harder to game.

Adversaries, workarounds, and feedback loops

Adversarial examples: add a perturbation to a sample and it disrupts the classification. An adversary tries to find those perturbations to wreck your model. Sometimes this is used not to hack the system so much as to prevent the system from, for example, recognizing your face during a protest.
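The perturbation idea can be shown in miniature. This is not any system discussed at the workshop — just an illustrative FGSM-style attack on a hand-rolled linear classifier, where each feature moves only slightly but the sum of the nudges flips the classification:

```python
def score(w, x, b=0.0):
    """Linear classifier: positive score -> class A, negative -> class B."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def perturb(w, x, eps):
    """Move every feature by eps against the sign of its weight,
    the direction that pushes the score down the fastest."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.4, -0.3, 0.2]          # made-up weights
x = [1.0, 1.0, 1.0]           # score = 0.3 -> class A
x_adv = perturb(w, x, eps=0.5)
# no feature changed by more than 0.5, yet the score drops by
# eps * (|0.4| + |0.3| + |0.2|) = 0.45, so x_adv lands in class B
```

The protest-camouflage case is the same move with a different goal: the perturbation is chosen so the face detector’s score drops below its threshold.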

Feedback loops: A recidivism prediction system says you’re likely to commit further crimes, which sends you to prison, which increases the likelihood that you’ll commit further crimes.
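The recidivism loop can be sketched as a toy simulation (the numbers are invented purely for illustration): once the score crosses the decision threshold, the decision itself raises the underlying risk, which locks in future decisions.

```python
def feedback_loop(risk, rounds=5, threshold=0.5, boost=0.15):
    """Toy model of a prediction feedback loop: a score above the
    threshold triggers incarceration, which raises actual risk,
    which raises the next round's score."""
    history = [risk]
    for _ in range(rounds):
        if risk > threshold:               # decision driven by the score
            risk = min(1.0, risk + boost)  # the decision worsens the outcome
        history.append(risk)
    return history
```

A borderline score like 0.55 ratchets up to 1.0 within a few rounds, while 0.45 never moves — the system manufactures the disparity it then "predicts."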

What is the problem: How should a trustworthy algorithm account for adversaries, workarounds, and feedback loops?

Who are the stakeholders?

System designers, users, non-users, and perhaps adversaries.

Why is this a difficult problem?

  • It’s hard to define the boundaries of the system

  • From whose vantage point do we define adversarial behavior, workarounds, and feedback loops.

Unsolved problems

  • How do we reason about the incentives users and non-users have when interacting with systems in unintended ways.

  • How do we think about oversight and revision in algorithms with respect to feedback mechanisms

  • How do we monitor changes, assess anomalies, and implement safeguards?

  • How do we account for stakeholders while preserving rights?

How to recognize progress?

  • Mathematical model of how people use the system

  • Define goals

  • Find stable metrics and monitor them closely

  • Proximal metrics. Causality?

  • Establish methodologies and see them used

  • See a taxonomy of adversarial behavior used in practice

Likely approaches

  • Security methodology for anticipating unintended behaviors and adversarial interactions. Monitor and measure.

  • Record and taxonomize adversarial behavior in different domains

  • Test. Try to break things.


Barriers to success

  • Hard to anticipate unanticipated behavior

  • Hard to define the problem in particular cases.

  • Goodhart’s Law

  • Systems are born brittle

  • What constitutes adversarial behavior vs. a workaround is subjective.

  • Dynamic problem

Algorithms and trust

How do you define and operationalize trust?

The problem: What are the processes through which different stakeholders come to trust an algorithm?

Multiple processes lead to trust.

  • Procedural vs. substantive trust: are you looking at the weights of the algorithms (e.g.), or what were the steps to get you there?

  • Social vs personal: did you see the algorithm at work, or are you relying on peers?

These pathways are not necessarily predictive of each other.

Stakeholders build trust through multiple lenses and priorities

  • the builders of the algorithms

  • the people who are affected

  • those who oversee the outcomes

Mini case study: a child services agency that does not want to be identified. [All of the following is 100% subject to my injection of errors.]

  • The agency uses a predictive algorithm. The stakeholders range from the children needing a family, to NYers as a whole. The agency knew what went into the model. “We didn’t buy our algorithm from a black-box vendor.” They trusted the algorithm because they staffed a technical team who had credentials and had experience with ethics…and whom they trusted intuitively as good people. Few of these are the quantitative metrics that devs spend their time on. Note that FAT (fairness, accountability, transparency) metrics were not what led to trust.


  • Processes that build trust happen over time.

  • Trust can change or maybe be repaired over time.

  • “The timescales to build social trust are outside the scope of traditional experiments,” although you can perhaps find natural experiments.


  • Assumption of reducibility or transfer from subcomponents

  • Access to internal stakeholders for interviews and process understanding

  • Some elements are very long term



What’s next for this workshop

We generated a lot of scribbles, post-it notes, flip charts, Slack conversations, slide decks, etc. They’re going to put together a whitepaper that goes through the major issues, organizing them, and tries to capture the complexity while helping to make sense of it.

There are weak or no incentives to set appropriate levels of trust

Key takeways:

  • Trust is irreducible to FAT metrics alone

  • Trust is built over time and should be defined in terms of the temporal process

  • Isolating the algorithm as an instantiation misses the socio-technical factors in trust.

