Joho the Blog

June 25, 2019

Nudge gone evil

Princeton has published the ongoing results of its “Dark Patterns” project. The site says:

Dark patterns are user interface design choices that benefit an online service by coercing, steering, or deceiving users into making unintended and potentially harmful decisions.

So, Nudge gone evil. (And far beyond nudging.)

From the Dark Patterns page, here are some of the malicious patterns they are tracking:

  • Activity Notification: Influencing shopper decisions by making the product appear popular with others.
  • Confirmshaming: Steering shoppers to certain choices through shame and guilt.
  • Countdown Timer: Pressuring shoppers with a decreasing count-down timer.
  • Forced Enrollment: Requiring shoppers to agree to something in order to use basic functions of the website.
  • Hard to Cancel: Making it easy for shoppers to sign up and obstructing their ability to cancel.
  • Hidden Costs: Waiting to reveal extra costs to shoppers until just before they make a purchase.
  • Hidden Subscription: Charging a recurring fee after accepting an initial fee or trial period.
  • High Demand: Pressuring shoppers by suggesting that a product has high demand.
  • Limited Time: Telling shoppers that a deal or discount will expire soon.
  • Low-Stock Notification: Pressuring shoppers with claims that the inventory is low.
  • Pressured Selling: Pre-selecting or pressuring shoppers to accept the most expensive options.
  • Sneak into Basket: Adding extra products into shopping carts without consent or through boxes checked by default.
  • Trick Questions: Steering shoppers into certain choices with confusing language.
  • Visual Interference: Distracting shoppers away from certain information through flashy color, style, and messages.

Home page: https://webtransparency.cs.princeton.edu/dark-patterns/

Academic paper:
https://webtransparency.cs.princeton.edu/dark-patterns/assets/dark-patterns.pdf

Nathan Mathias has put a front end on this data at the Tricky Sites site: https://trickysites.cs.princeton.edu/

Be the first to comment »

June 23, 2019

Everyday Chaos coverage, etc.

I just posted a new page at the Everyday Chaos web site. It lists media coverage, talks, and other ways into the book.

Take a look!

Be the first to comment »

June 22, 2019

Animating flexboxes when a mouse enters

Flexboxes were the first thing CSS created once it became sentient. Where CSS used to lay out boxes robotically following directions, now it can dynamically respond to changes to the size of a browser window, re-laying them out in relation to one another. This means you no longer have to engage in the standard design process of first trying to “float” elements to the left or right until you give up and just use a static table, which you should have done in the first place anyway.

To get started with flexboxes, do what every non-professional Web muck-abouter does and pin this page to your browser.

But suppose you want those boxes to do something when a mouse comes into them? Since I keep on having to re-solve this problem because I can’t remember how or where I did it last time, I decided to post one way that works but is guaranteed not to be the way that actual Web developers do it. Also, it is not specific to flexboxes, so you may wonder why flexboxes are in the title. Good point!

So, download jQuery and let us begin…

In this example (and on this actual page) we’re going to change the color of a border and increase its width when a mouse enters, and restore the color and width when the mouse leaves. You stick this code somewhere where it will be run before your user starts moving her mouse over your lovely flexboxes.

In this example, we have a set of flexboxes that have the class “flexbox”. Here goes:

// -- When mouse enters
$(".flexbox").mouseenter(function () {
    $(this).css({"borderColor": "#AC3B61"});
    $(this).animate({
        borderWidth: "10px"
    }, 500, function () {
        // do something after the animation finishes
    });
});

// -- When mouse leaves
$(".flexbox").mouseleave(function () {
    $(this).css({"borderColor": "white"});
    $(this).animate({
        borderWidth: "1px"
    }, 500, function () {
        // do something after the animation finishes
    });
});

As you may have observed, these two blocks are essentially the same: one thickens the element’s border when a mouse cursor enters, and the other restores it when the cursor exits.

If it’s not clear what’s going on here, the first line registers a function that runs whenever a mouse enters an element that has been given the class “flexbox”. Of course you can make up any class name you want.

Then there’s a line of jQuery magic that affects the element the mouse entered (so long as that element has the “flexbox” class); the “this” designates the actual flexbox the mouse entered. That line changes the color of the border.

The next line, with the word “animate”, tells the page that what follows should be an animation — a transition over time — and that time is specified as half a second (500 milliseconds). What’s it going to do? Change the border width.

Then there’s an empty function that executes after the animation. I left it there in case you need it.

The “mouseleave” functions undo those changes.

So, that’s it. I’m sure it’s suboptimal and probably wrong, but it’s working so far. And now maybe I’ll remember where I put the code.
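
For the record, if all you need is the hover effect itself, you can skip jQuery entirely: a CSS transition plus a :hover rule produces the same animation. Here’s a minimal sketch, reusing the class, colors, and half-second timing from the code above:

```css
/* Pure-CSS equivalent: the transition animates any change to the
   border, including the one triggered by entering the :hover state. */
.flexbox {
  border: 1px solid white;
  transition: border-color 0.5s, border-width 0.5s;
}
.flexbox:hover {
  border-color: #AC3B61;
  border-width: 10px;
}
```

One difference: the jQuery version gives you a callback when the animation finishes; plain CSS doesn’t (you’d need to listen for the transitionend event for that).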

Be the first to comment »

May 26, 2019

Fake news, harassment, and the robot threat: Some numbers

Lee Rainie, of Pew Research, is giving a talk at a small pre-conference event I’m at. I’m a Lee Rainie and Pew Lifetime Fan. [DISCLAIMERS: These are just some points Lee made. I undoubtedly left out important qualifiers. I’m sure I got things wrong, too. If I were on a connection that got more than 0.4 Mbps (thanks, Verizon /s), I’d find the report to link to.]

He reports that 23% of people say they have forwarded fake news, although most did so to warn other people about it. 26% of American adults, and 46% of those between 18 and 29 years old, have had fake news about them posted. The major harm reported was reputational.

He says that 41% of American adults have been harassed; the list of types of harassment is broad. About a fifth of Americans have been harassed in severe ways: stalked, sexually harassed, physically threatened, etc. Two thirds have seen someone else be harassed.

The study analyzed the Facebook posts of all the members of Congress. The angrier the contents were, the more often they were shared, liked, or commented on. Online discussions are reported to be far less likely to be respectful, civil, etc. Seventy-one percent of Facebook users did not know what data about them FB shares, as listed on the FB privacy management page.

A majority of Americans favor free speech even if it allows bad ideas to proliferate. [I wonder how they’d answer if you gave them examples, or if you differentiated free speech from unmoderated speech on private platforms such as Facebook.]

Two thirds of Americans expect that robots and computers will do much of the work currently done by humans within 50 years. But we think it’ll mainly be other people who are put out of work; people think they personally will not be replaced. Seventy-two percent are worried about a future in which robots do so much. But 63% of experts (a hand-crafted, non-representative list, Lee points out) think AI will make life better. These experts worry first of all about the loss of human agency. They also worry about data abuse, job loss, dependence lock-in (i.e., losing skills as robots take over those tasks), and mayhem (e.g., robots going nuts on the battlefield).

Q: In Europe, the fear of job displacement is the opposite: people worry about their own jobs being displaced.

1 Comment »

May 20, 2019

Three Chaotic podcasts

My book Everyday Chaos launched last week. Yay! As part of the launch, I gave some talks and interviews. Here are three of the conversations, with three great interviewers:

Leonard Lopate, WBAI

Hidden Forces podcast

Berkman Klein book talk, and conversation with Joi Ito:

Be the first to comment »

May 19, 2019

[SPOILER] If it’s really a game of thrones, here’s how Game of Thrones should end

SPOILER ALERT: I’m writing this hours before the final episode and will spoil prior episodes.

Based on the end of Episode 5 of Season 8 — the penultimate episode — it sure looks like Arya is on her way to kill Dany. But that’d be a cop out. I hope GoT goes all Red Wedding on us.

GoT is a pacifist work intent on reminding us of the cost of war. War is unpredictable at both its micro level — even obvious heroes can be killed without warning — and its macro level.

At the macro level, Dany certainly seems to have lost her claim to be a virtuous ruler. But so what? GoT should not end based on what will make its audience feel good.

Dany should become the ruler of Westeros. That will require killing Jon since he’s the legit heir to the throne. After that, the script writers will do the old Towering Inferno thing of deciding who lives and who dies — for God’s sake, why did they have to kill Fred Astaire? If I had to guess, I’d say Sansa dies, Tyrion survives in some humiliating role, and Arya lives on as an enemy. Because GoT should not fully resolve … which, given GRRM’s pace, it looks like it never will.

[Confidence level: 12%]

Be the first to comment »

April 29, 2019

Forbes on 4 lessons from Everyday Chaos

Joe McKendrick at Forbes has posted a concise and thoughtful column about
Everyday Chaos, including four rules to guide your expectations about machine learning.

It’s great to see a pre-publication post so on track about what the book says and how it applies to business.

1 Comment »

April 16, 2019

First chapter of Everyday Chaos on Medium…and more!

Well, actually less. And more. Allow me to explain:

The first half of the first chapter of Everyday Chaos is now available at Medium. (An Editor’s Choice, no less!)

You can also read the first half of the chapter on how our model of models is changing at the Everyday Chaos site (Direct link: pdf).

At that site you’ll also find a fifteen minute video (Direct link: video) in which I attempt to explain why I wrote the book and what it’s about.

Or, you can just skip right to the pre-order button (Direct link: Amazon or IndieBound) :)

Comments Off on First chapter of Everyday Chaos on Medium…and more!

April 15, 2019

Fountain Pens: The Tool, the Minor Fetish

In response to a tweet asking writers what they write out longhand, I replied that if I’m particularly at sea, I’ll write out an outline, usually with lots of looping arrows, on a pad. But only with a fountain pen. Ballpoints don’t work.

My old bloggy friend AKMA wondered how he’d known me so long without knowing that I’m a fountain pen guy. The truth is that I’ve only recently become one. I’ve liked them at various times over the course of my life, but only about four years ago did I integrate fountain pens into my personality.

It happened because I bought a $20 Lamy Safari on impulse in a stationery store. From there I got some single-digit Chinese fountain pens. Then, when I made some money on a writing contract, I treated myself to a $120 Lamy 2000, a lifetime pen. It’s pretty much perfect, from the classic 1960s design to the way the ink flows onto paper just wet enough and with enough scratchiness to feel like you’re on a small creek splashing over stones as it carves out words.

I have recently purchased a TWSBI ECO for $30. It has replaced my Safari as my daily pen. It’s lovely to write with, holds a lot of ink, and feels slightly sturdier than the Safari. Recommended.

Even though my handwriting is horrendous, I look forward to opportunities to write with these pens. But I avoid writing anything I’ll then have to transcribe because transcribing is so tedious. I do harbor a romantic notion of writing fiction longhand with a fountain pen on pads of Ampad “golden fibre.” Given that my fiction is worse than my handwriting, we can only hope that this notion itself remains a fiction.

So much of my writing is undoing, Penelope-like, the words I wove the day before that I am not tempted even a little to switch from word processors when the words and their order are the object. But when the words are mere vehicles, my thinking is helped — I believe — by a pen that drags its feet in the dirt.

3 Comments »

March 24, 2019

Automating our hardest things: Machine Learning writes

In 1948, when Claude Shannon was inventing information theory [pdf] (and, I’d say, information itself), he took as an explanatory example a simple algorithm for predicting the next element of a sentence. For example, treating each letter as equiprobable, he came up with sentences such as:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD.

If you instead use the average frequency of each letter in English text, you come up with sentences that seem more language-like:

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL.

At least that one has a reasonable number of vowels.
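
If you want to play with that letter-frequency step, it’s easy to sketch in code. Here’s a toy reconstruction (mine, not Shannon’s; it counts frequencies from whatever sample text you hand it): tally each letter’s occurrences, then emit letters at random in proportion to those tallies.

```javascript
// Toy version of first-order letter approximation: count how often each
// letter (or space) occurs in a sample text, then sample characters
// independently in proportion to those counts.

function letterCounts(text) {
  const counts = {};
  for (const ch of text.toUpperCase()) {
    if ((ch >= "A" && ch <= "Z") || ch === " ") {
      counts[ch] = (counts[ch] || 0) + 1;
    }
  }
  return counts;
}

function sampleLetters(counts, n) {
  const letters = Object.keys(counts);
  const total = letters.reduce((sum, l) => sum + counts[l], 0);
  let out = "";
  for (let i = 0; i < n; i++) {
    let r = Math.random() * total;  // pick a point along the cumulative counts
    for (const l of letters) {
      r -= counts[l];
      if (r <= 0) { out += l; break; }
    }
  }
  return out;
}

// Sample run; the output is random, but its letter proportions mirror the text's.
console.log(sampleLetters(letterCounts("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"), 40));
```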

If you then consider the frequency of letters following other letters—U follows a Q far more frequently than X does—you are practically writing nonsense Latin:

ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE.

Looking not at pairs of letters but at triplets, Shannon got:

IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE.

Then Shannon changed his units from triplets of letters to triplets of words, and got:

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.

Pretty good! But still gibberish.
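
The word-triplet step works the same way, just with bigger units. Here’s a toy sketch (again mine, with a made-up one-sentence corpus): record which words were observed following each pair of adjacent words, then generate by repeatedly sampling a successor of the last two words produced.

```javascript
// Toy version of Shannon's word-trigram approximation: map each pair of
// adjacent words to the list of words seen following it, then walk the
// table, sampling a successor of the last two words at each step.

function buildTrigramTable(text) {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const table = new Map();              // "w1 w2" -> observed next words
  for (let i = 0; i + 2 < words.length; i++) {
    const key = words[i] + " " + words[i + 1];
    if (!table.has(key)) table.set(key, []);
    table.get(key).push(words[i + 2]);  // duplicates preserve frequency
  }
  return table;
}

function generateWords(table, w1, w2, maxWords) {
  const out = [w1, w2];
  while (out.length < maxWords) {
    const key = out[out.length - 2] + " " + out[out.length - 1];
    const nexts = table.get(key);
    if (!nexts) break;                  // dead end: pair never seen in corpus
    out.push(nexts[Math.floor(Math.random() * nexts.length)]);
  }
  return out.join(" ");
}

// Made-up corpus for illustration; with a corpus this tiny the output
// mostly parrots the source. Feed it a book and you get Shannon-style gibberish.
const corpus = "the head of the army and the head of the house met at the front of the house";
console.log(generateWords(buildTrigramTable(corpus), "the", "head", 12));
```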

Now jump ahead seventy years and try to figure out which pieces of the following story were written by humans and which were generated by a computer:

In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.

Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.

While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”

Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.

While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”

However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.

The answer: The first paragraph was written by a human being. The rest was generated by a machine learning system trained on a huge body of text. You can read about it in a fascinating article (pdf of the research paper) by its creators at OpenAI. (Those creators are: Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever.)

There are two key differences between this approach and Shannon’s.

First, the new approach analyzed a very large body of documents from the Web. It ingested 45 million pages linked in Reddit comments that got more than three upvotes. After removing duplicates and some other cleanup, the data set was reduced to 8 million Web pages. That is a lot of pages. Of course the use of Reddit, or any one site, can bias the dataset. But one of the aims was to compare this new, huge, dataset to the results from existing sets of text-based data. For that reason, the developers also removed Wikipedia pages from the mix since so many existing datasets rely on those pages, which would smudge the comparisons.

(By the way, a quick google search for any page from before December 2018 mentioning both “Jorge Pérez” and “University of La Paz” turned up nothing. The AI is constructing, not copy-pasting.)

The second distinction from Shannon’s method: the developers used machine learning (ML) to create a neural network, rather than relying on a table of frequencies of words in triplet sequences. ML creates a far, far more complex model that can assess the probability of the next word based on the entire context of its prior uses.

The results can be astounding. While the developers freely acknowledge that the examples they feature are somewhat cherry-picked, they say:

When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50% of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.

There are obviously things to worry about as this technology advances. For example, fake news could become the Earth’s most abundant resource. For fear of abuse, the developers are not releasing the full dataset or model weights. Good!

Nevertheless, the possibilities for research are amazing. And, perhaps most important in the long term, one by one the human capabilities that we take as unique and distinctive are being shown to be replicable without an engine powered by a miracle.

That may be a false conclusion. Human speech does not consist simply of the utterances we make but also of the complex intentional and social systems in which those utterances are more than just flavored wind. But ML intends nothing and appreciates nothing. Nothing matters to ML. Nevertheless, knowing that sufficient silicon can duplicate the human miracle should shake our confidence in our species’ special place in the order of things.

(FWIW, my personal theology says that when human specialness is taken as conferring special privilege, any blow to it is a good thing. When that specialness is taken as placing special obligations on us, then at its very worst it’s a helpful illusion.)

6 Comments »

Next Page »