
February 10, 2020

Brink has just posted a piece of mine suggesting that the Internet and machine learning have been teaching companies that our assumptions about the predictability of the future — assumptions based in turn on a belief in the law-like and knowable nature of change — don’t hold. But those are the very assumptions that led to the relatively recent belief in the efficacy of strategy.

My article outlines some of the ways organizations are facing the future differently. And, arguably, more realistically.


Categories: business, everyday chaos, future, too big to know Tagged with: business • everydaychaos • future Date: February 10th, 2020 dw


January 28, 2020

Games without strategies

“Digital Extremes wants to break the trend of live-service games meticulously planning years of content ahead of time using road maps…’What happens then is you don’t have a surprise and you don’t have a world that feels alive,’ [community director Rebecca] Ford says. ‘You have a product that feels like a result of an investor’s meeting 12 months ago.'”

— Steven Messner, “This Means War,” PC Gamer, Feb. 2020, p. 34

Video games have been leading indicators for almost forty years. It was back in the early 1980s that games started welcoming modders who altered the visuals, turning Castle Wolfenstein into Castle Smurfenstein, adding maps, levels, cars, weapons, and rules to game after game. Thus the games became more replayable. Thus the games became whatever users wanted to make them. Thus games — the most rule-bound of activities outside of a law court or a tea ceremony — became purposefully unpredictable.

Rebecca Ford is talking about Warframe, but what she says about planning and road maps points the way for what’s happening with business strategies overall. The Internet has not only gotten us used to an environment that is overwhelming and unpredictable, but we’ve developed approaches that let us leverage that unpredictability, from open platforms to minimum viable products to agile development.

The advantage of strategy is that it enables an organization to focus its attention and resources on a single goal. The disadvantages are that strategic planning assumes the playing field is relatively stable, and that change generally happens according to rules we can know and apply. But that stability is a dream. Now that we have tech that lets us leverage unpredictability, we are coming to recognize once again that strategies work only by squinting our eyes so tight that they’re almost closed.

Maybe games will help us open our eyes so that we do less strategizing and more playing.


Categories: business, everyday chaos, games Tagged with: everydaychaos • future • games • internet • machine learning • strategy Date: January 28th, 2020 dw


October 11, 2016

[liveblog] PAPIs: Cynthia Rudin on Regulating Greed

I’m at the PAPIs (Predictive Applications and APIs) [twitter: papisdotio] conference at the NERD Center in Cambridge.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The first speaker is Cynthia Rudin, director of the Prediction Analysis Lab at MIT. Her topic is “Regulating Greed over Time: An Important Lesson for Practical Recommender Systems.” It’s about her lab’s entry in a data mining competition. (The entry did not win.) The competition was to design a better algorithm for Yahoo’s recommendation of articles. To create an unbiased data set, Yahoo showed people random articles for two weeks. Your algorithm had to choose which article from the pool to show a user. To evaluate a recommender system, they’d check whether your algorithm recommended the same thing that was shown to the user; if the user clicked on it, you could get an evaluation. [I don’t think I got this right.] Then you sent your algorithm to Yahoo, and they evaluated its clickthrough rate; you never got access to Yahoo’s data.

This is, she says, a form of the multi-armed bandit problem: one arm is better (more likely to lead to a payout) but you don’t know which one. So you spend your time figuring out which arm is the best, and then you only pull that one. Yahoo and Microsoft are among the companies using multi-armed bandit systems for recommendation systems. “They’re a great alternative to massive A-B testing.” [Alternative view] [No, I don’t understand this. Not Cynthia’s fault!]

Because the team didn’t have access to Yahoo’s data, they couldn’t tune their algorithms to it. Nevertheless, they achieved a 9% clickthrough rate … and still lost (albeit by a tiny margin). Cynthia explains how they increased the efficiency of their algorithms, but it’s math so I can only here play the sound of a muted trumpet. It involves “decay exploration on the old articles” and a “peak grabber”: if an article gets more than 9 clicks out of its last 100 displays, they keep displaying it. If you have a good article, grab it. The dynamic version of the Peak Grabber keeps showing a peak article if its clickthrough rate is 14% above the global clickthrough rate.
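If I followed the Peak Grabber correctly, the rule is simple enough to sketch. Below is my reconstruction in Python, not their code: the 9-out-of-100 and 14% thresholds come from the talk, while the names and the global clickthrough value are invented for illustration.

    from collections import deque

    GLOBAL_CTR = 0.04  # hypothetical site-wide clickthrough rate

    def keep_showing(last_clicks, dynamic=False):
        # last_clicks holds 1/0 click outcomes for the article's most
        # recent (up to 100) displays.
        clicks = sum(last_clicks)
        if dynamic:
            # Dynamic rule: keep the article while its clickthrough
            # rate is at least 14% above the global rate.
            ctr = clicks / max(len(last_clicks), 1)
            return ctr >= GLOBAL_CTR * 1.14
        # Static rule: keep the article if it got more than 9 clicks
        # out of its last 100 displays.
        return clicks > 9

    history = deque([1, 0, 0, 1, 1, 0, 1, 1, 1, 1], maxlen=100)
    print(keep_showing(history))        # static: False (only 7 clicks)
    print(keep_showing(history, True))  # dynamic: True (CTR is 0.7)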

“We were adjusting the exploration-exploitation tradeoff based on trends.” Is this a phenomenon worth exploring? The phenomenon: you shouldn’t always explore. There are times when you should just stop and exploit the flowers.

Some data supports this. E.g., in England, on Boxing Day you should be done exploring and just put your best prices on things — not too high, not too low. When the clicks on your site are low, you should be exploring. When high, maybe not. “Life has patterns.” The multi-armed bandit techniques don’t know about these patterns.

Her group came up with a formal way of putting this. At each time there is a known reward multiplier: G(t). G is like the number of people in the store. When G is high, you want to exploit, not explore. In the lower zones you want to balance exploration and exploitation.

So they created two theorems, each leading to an algorithm. [She shows the algorithm. I can’t type math notation that fast.]
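Since I couldn’t capture the math, here is a toy stand-in rather than her actual algorithm: an epsilon-greedy bandit whose exploration rate shrinks as G(t) rises, so it exploits when the store is full and explores when it’s quiet. Every name and the particular scaling rule here are my own guesses, for illustration only.

    import random

    def regulated_greed(n_arms, G, pull, steps=10_000, base_eps=0.2):
        # Track the running mean reward of each arm.
        counts = [0] * n_arms
        means = [0.0] * n_arms
        for t in range(steps):
            eps = base_eps / max(G(t), 1.0)  # high G(t) -> explore less
            if random.random() < eps:
                arm = random.randrange(n_arms)                    # explore
            else:
                arm = max(range(n_arms), key=lambda a: means[a])  # exploit
            reward = pull(arm, t) * G(t)  # payoff scaled by the multiplier
            counts[arm] += 1
            means[arm] += (reward - means[arm]) / counts[arm]
        return means

    # Example: arm 1 secretly pays off more, and G(t) peaks periodically,
    # like a store filling up with customers.
    print(regulated_greed(
        n_arms=2,
        G=lambda t: 3.0 if t % 100 < 20 else 1.0,
        pull=lambda arm, t: float(random.random() < (0.05, 0.09)[arm]),
    ))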


Categories: big data, future, marketing Tagged with: future • marketing • math Date: October 11th, 2016 dw


January 2, 2016

The future behind us

We’re pretty convinced that the future lies ahead of us. But according to Bernard Knox, the ancient Greeks were not. In Backing into the Future he writes:

the Greek word opiso, which literally means ‘behind’ or ‘back’, refers not to the past but to the future. The early Greek imagination envisaged the past and the present as in front of us–we can see them. The future, invisible, is behind us. Only a few very wise men can see what is behind them. (p. 11)

G.J. Whitrow in Time in History quotes George Steiner in After Babel to make the same point about the ancient Hebrews:

…the future is preponderantly thought to lie before us, while in Hebrew future events are always expressed as coming after us. (p. 14)

Whitrow doesn’t note that Steiner’s quote (which Steiner puts in quotes) comes from Thorleif Boman’s Hebrew Thought Compared with Greek. Boman writes:

…we Indo-Germanic peoples think of time as a line on which we ourselves stand at a point called now; then we have the future lying before us, and the past stretches out behind us. The [ancient] Israelites use the same expressions ‘before’ and ‘after’ but with opposite meanings. qedham means ‘what is before’ (Ps. 139.5), therefore ‘remote antiquity’, past. ‘ahar means ‘back’, ‘behind’, and of the time ‘after’; aharith means ‘hindermost side’, and then ‘end of an age’, future… (p. 149)

This is bewildering, and not just because Boman’s writing is hard to parse.

He continues on to note that we modern Westerners also sometimes switch the direction of future and past. In particular, when we “appreciate time as the transcendental design of history,” we

think of ourselves as living men who are on a journey from the cradle to the grave and who stand in living association with humanity which is also journeying ceaselessly forward. Then the generation of the past are our progenitors, at least our forebears, who have existed before us because they have gone on before us, and we follow after them. In that case we call the past foretime. According to this mode of thinking, the future generation are our descendants, at least our successors, who therefore come after us. (p. 149. Emphasis in the original.)

Yes, I find this incredibly difficult to wrap my brain around. I think the trick is the ambiguity of “before us.” The future lies before us, but our forebears were also before us.

Boman tries to encapsulate our contradictory ways of thinking about the future as follows: “the future lies before us but comes after us.” The problem in understanding this is that we hear “before us” as “ahead of us.” The word “before” means “ahead” when it comes to space.

Anyway.


Boman’s explanation of the ancient Hebrew way of thinking is related to Knox’s explanation of the Greek idiom:

From the psychological viewpoint it is absurd to say that we have the future before us and the past behind us, as though the future were visible to us and the past occluded. Quite the reverse is true. What our forebears have accomplished lies before us as their completed works; the house we see, the meadows and fields, the cultural and political system are congealed expressions of the deeds of our fathers. The same is true of everything they have done, lived, or suffered; it lies before us as completed facts… The present and the future are, on the contrary, still in the process of coming and becoming. (p. 150)

The nature of becoming is different for the Greeks and Hebrews, so the darkness of the future has different meanings. But both result in the future lying behind us.


Categories: future, philosophy Tagged with: future • philosophy • platform Date: January 2nd, 2016 dw


October 26, 2015

[liveblog][act-tiac] A federal portal in the age of search?

Sarah Crane, director of the Federal Citizen Information Center at the GSA, is going to talk about USA.gov. “In a world where everyone can search and has apps, is a web portal relevant?” she asks.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.


When the US web portal (first.gov [archival copy]) was launched in 2000, it had an important role in aggregating and centralizing content. Now people arrive through search.


USA.gov is a platform that offers a full suite of bilingual products (information, contacts, social media, etc.), all built around a single structured API. The presentation layer is independent of the content: thanks to the API, all the different outputs draw on the same consistent content.
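To make the single-API idea concrete, here’s a minimal sketch of the pattern. This is not the actual USA.gov API; the record shape and field names are invented. The point is just that every presentation layer consumes the same structured content.

    import json

    # Invented bilingual content record of the sort a structured API
    # might serve.
    record = {
        "id": "passport-renewal",
        "title": {"en": "Renew your passport", "es": "Renueve su pasaporte"},
        "body": {"en": "You can renew by mail if...",
                 "es": "Puede renovar por correo si..."},
    }

    def render_html(rec, lang="en"):
        # One presentation layer: a web page.
        return f"<h1>{rec['title'][lang]}</h1><p>{rec['body'][lang]}</p>"

    def render_json(rec):
        # Another presentation layer: raw JSON for apps and partner sites.
        return json.dumps(rec)

    # Same content, different outputs; editing the record updates both.
    print(render_html(record, "es"))
    print(render_json(record))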


It’s designed to integrate with other agency content. In fact, they don’t want to be writing any of the content; it should come from other agencies. It’s also built so that its modules and content can be reused, and built to scale: it can support expansion or consolidation. E.g., if an initiative loses steam, its content can be pulled in and kept available.


How people use govt services: They look online, they ask their friends, and they expect it to be easy. People are surprised when it’s complex. Some people prefer in-person help.


So, how does the portal remain relevant?


Customer experience is a core tenet. They recently launched a Customer Experience division. Constant measurement of performance. Fixing what doesn’t work. Clear lines of reporting up to senior management. The lines of reporting also reach all the way to the devs.


Last year they re-did their personas, based on four different behaviors: 1. Someone who knows exactly what s/he’s looking for. 2. Someone who has a general idea, but isn’t informed enough to search. 3. Someone who wants to complete a transaction. 4. Someone who wants to contact an elected official. They analyzed the experiences and did “journey maps”: how someone gets to what she wants. These journeys often include travels into other agencies, which they also mapped.


What’s next for them now that info is cheap and easy to find? Sarah likes Mint.com’s model:


  • Aggregated, personalized content collected from multiple agencies.

  • Pre-emptive service – alerts, etc.

  • Relevant updates as you are in the task.

For further info, see Blog.USA.gov and USA.gov/Explore.


Q&A

Q: [me] Are people building on top of your API?


A: Some aspects, yes. Heavily used: the A-Z agency index – the only complete listing of every agency and their contact info. There’s a submission to build a machine-readable org chart of the govt that will build on top of our platform. [OMG! That would be incredible! And what is happening to me that I’m excited about a machine-readable org chart?]


Also, if you use bit.ly to shorten a gov’t URL, it creates a one.usa.gov link, which you can use to track Twitter activity, etc.


Certain aspects of the API are being used heavily, primarily the ones that show a larger perspective.


Q: Won’t people find personal notifications from the govt creepy, even though they like it when it’s Mint or Amazon?


A: The band-aid solution is to make it opt-in. Also, being transparent about the data, where it’s stored, etc. This can never be mandatory. The UK’s e-verify effort aims at making the top 20 services digital through a single ID. We’d have to study that carefully. We’d have to engage with the privacy groups (e.g., EPIC) early on.


Q: Suppose it was a hybrid of automated and manual? E.g., I tell the site I’m turning 62 and then it gives me the relevant info, as opposed to it noting from its data that I’m turning 62.


Q: We’re losing some of the personal contact. And who are you leaving behind?


A: Yes, some people want to talk in person. Our agency actually started in 1972 supplying human-staffed kiosks where people could ask questions. Zappos is a model: You can shop fully online, but people call their customer service because it’s so much fun. We’re thinking about prompting people if they want to chat with a live person.


The earliest adopters are likely to be the millennials, and they’re not the ones who need the services generally. But they talk with their parents.


I briefly interviewed Sarah afterwards. Among other things, I learned:



  • The platform was launched in July


  • They are finding awesome benefits to the API approach as an internal architecture: consistent and efficiently-created content deployed across multiple sites and devices; freedom to innovate at both the front and back end; a far more resilient system that will allow them to swap in a new CMS with barely a hiccup.


  • I mentioned NPR’s experience with moving to an API architecture, and she jumped in with COPE (create once, publish everywhere) and has been talking with Dan Jacobson, among others. (I wrote about that here.)


  • She’s certainly aware of the “government as platform” approach, but says that that phrase and model are more directly influential over at 18F.


  • Sarah is awesome.


Categories: egov, future Tagged with: api • future • platform Date: October 26th, 2015 dw


October 15, 2015

Samuel Butler's early technodeterminism

“If all machines were to be annihilated at one moment, so that not a knife nor lever nor rag of clothing nor anything whatsoever were left to man but his bare body alone that he was born with, and if all knowledge of mechanical laws were taken from him so that he could make no more machines, and all machine-made food destroyed so that the race of man should be left as it were naked upon a desert island, we should become extinct in six weeks. A few miserable individuals might linger, but even these in a year or two would become worse than monkeys. Man’s very soul is due to the machines; it is a machine-made thing: he thinks as he thinks, and feels as he feels, through the work that machines have wrought upon him, and their existence is quite as much a sine quâ non for his, as his for theirs.”

Samuel Butler, Erewhon, Chapter XXIV, 1872.

This is less rhapsodic than it may seem, for it continues:

“This fact precludes us from proposing the complete annihilation of machinery, but surely it indicates that we should destroy as many of them as we can possibly dispense with, lest they should tyrannise over us even more completely.”


Categories: misc Tagged with: andy clark • future • technodeterminism Date: October 15th, 2015 dw


August 18, 2015

Newton’s non-clockwork universe

The New Atlantis has just published five essays exploring “The Unknown Newton”. It is — bless its heart! — open access. Here’s the table of contents:

Rob Iliffe provides an overview of Newton’s religious thought, including his radically unorthodox theology.

William R. Newman examines the scientific ambitions in Newton’s alchemical labors, which are often written off as deviations from science.

Stephen D. Snobelen — who in the course of writing his essay discovered Newton’s personal, dog-eared copy of a book that had been lost — provides an in-depth look at the connection between Newton’s interpretation of biblical prophecy and his cosmological views.

Andrew Janiak explains how Newton reconciled the apparent tensions between the Bible and the new view of the world described by physics.

Finally, Sarah Dry describes the curious fate of Newton’s unpublished papers, showing what they mean for our understanding of the man and why they remained hidden for so long.


Stephen Snobelen’s article, “Cosmos and Apocalypse,” begins with a paper in the John Locke collection at the Bodleian: Newton’s hand-drawn timeline of the events in Revelation. Snobelen argues that we’ve read too much of the Enlightenment back into Newton.


In particular, the concept of the universe as a pure clockwork that forever operates according to mechanical laws comes from Laplace, not Newton, says Snobelen. He refers to David Kubrin’s 1967 paper “Newton and the Cyclical Cosmos“; it is not open access. (Sign up for free with Jstor and you get constrained access to its many riches.) Kubrin’s paper is a great piece of work. He makes the case — convincingly to an amateur like me — that Newton and many of his cohorts feared that a perfectly clockwork universe that did not need Divine intervention to operate would be seen as also not needing God to start up. Newton instead thought that without God’s intervention, the universe would wind down. He hypothesized that comets — newly discovered — were God’s way of refreshing the Universe.


The second half of the Kubrin article is about the extent to which Newton’s late cosmogony was shaped by his biblical commitments. Most of Snobelen’s article is about the 2004 discovery of a new document that confirms this, and adds that God’s intervention heads the universe in a particular direction:

In sum, Newton’s universe winds down, but God also renews it and ensures that it is going somewhere. The analogy of the clockwork universe so often applied to Newton in popular science publications, some of them even written by scientists and scholars, turns out to be wholly unfitting for his biblically informed cosmology.

Snobelen attributes this to Newton’s recognition that the universe consists of forces all acting on one another at the same time:

Newton realized that universal gravity signaled the end of Kepler’s stable orbits along perfect ellipses. These regular geometric forms might work in theory and in a two-body system, but not in the real cosmos where many more bodies are involved.

To maintain the order represented by perfect ellipses required nudges and corrections that only a Deity could accomplish.


Snobelen points out that the idea of the universe as a clockwork was more Leibniz’s idea than Newton’s. Newton rejected it. Leibniz got God into the universe through a far odder idea than as the Pitcher of Comets: souls (“monads”) experience inhabiting a shared space in which causality obtains only because God coordinates a string of experiences in perfect sync across all the monads.


“Newton’s so-called clockwork universe is hardly timeless, regular, and machine-like,” writes Snobelen. “[I]nstead, it acts more like an organism that is subject to ongoing growth, decay, and renewal.” I’m not sold on the “organism” metaphor based on Snobelen’s evidence, but that tiny point aside, this is a fascinating article.


Categories: future, science Tagged with: future • newton • prediction Date: August 18th, 2015 dw


August 17, 2015

Newton was not an astrologer

I got a little interested in the question of Isaac Newton’s connection to astrology because of something I’ve been working on about causality. After all, Newton pursued alchemical studies with great seriousness. And he gave us a theory of action at a distance that I thought might be taken as providing a rationale for astrological effects.

But, no. According to a post by Graham Bates:

In a library of 1763 books (1752 different titles excluding duplicates) he had 369 books on what we would call scientific subjects, plus 169 on Alchemy (including many of the important texts on the subject copied in his own hand); there were also 477 books on Theology. He possessed only four books on astrology; two of these were treatises on astrology, one was an almanac, and one was a refutation of astrology.

Bates says that a book on astrology Newton purchased as a boy led him to learn about Euclid’s theorems so he could construct an astrological chart, but that is the extent of his known interest.

Bates also does a good job tracking down a spurious quote:

There is a story, much quoted in astrological articles and books, about a dispute between Newton and Halley (of comet fame), supposedly about astrology, in which Newton replies to a remark by Halley: “I have studied these things, you have not.”

The actual quote refers to theology, not astrology. So, no, Newton was not a practitioner of astrology, and there’s no reason to think he gave it any credence. (Me neither, by the way.)

The post is on the Urania Trust site, which I had not heard of before. The group was founded in 1970 “to further the advancement of education by the teaching of the relationship between main’s [sic] knowledge of, beliefs about, the heavens and every aspect of his art science philosophy and religion.” Given its commitment to taking astrology seriously, the fairness of its post about Newton is admirable.

(Now if I could only find out if Newton played billiards.)


Categories: misc Tagged with: future • newton • pseudoscience • science Date: August 17th, 2015 dw


July 13, 2015

What open APIs could do for the news

In 2008-9, NPR, the NY Times, and The Guardian opened up public APIs, hoping they would spur developers around the world to create wonderful and weird apps that would make use of their metadata and spread the availability of news.

Very little happened. By any normal measure, the experiment would have to be deemed a failure.

These three news organizations are nevertheless fervid evangelists for the same APIs—for internal use. They provide an abstraction layer that makes the news media’s back ends far easier to maintain without disrupting their availability to users; they enable these organizations to adapt to new devices and workflows insanely quickly; they facilitate strategic partnerships; they lower the risk of experimentation; and more.
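The abstraction-layer benefit is easy to sketch. In the toy below (hypothetical names, not any newsroom’s real code), the front ends depend only on a stable interface, which is why a back-end system can be swapped without disrupting anything downstream.

    class StoryAPI:
        # The stable contract every front end codes against.
        def get_story(self, story_id: str) -> dict:
            raise NotImplementedError

    class LegacyCMS(StoryAPI):
        def get_story(self, story_id):
            return {"id": story_id, "headline": "Headline from the old CMS"}

    class NewCMS(StoryAPI):
        def get_story(self, story_id):
            return {"id": story_id, "headline": "Headline from the new CMS"}

    def render_homepage(backend: StoryAPI):
        # Web, mobile, and partner apps all call the same interface, so
        # they never notice which CMS sits behind it.
        return backend.get_story("abc123")["headline"]

    print(render_homepage(LegacyCMS()))
    print(render_homepage(NewCMS()))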

This was the topic of the paper I wrote during my fellowship at The Shorenstein Center. The paper then looks at ways we might still get to the open ecosystem for news that was first envisioned.

The full paper is available freely at the Shorenstein site.

There’s an op-ed length version at Nieman Reports.


Categories: journalism, tech Tagged with: future • journalism • news • platforms • shorenstein Date: July 13th, 2015 dw


November 26, 2014

Welcome to the open Net!

I wanted to play Tim Berners-Lee’s 1999 interview with Terry Gross on WHYY’s Fresh Air. Here’s how that experience went:

  • I find a link to it on a SlashDot discussion page.

  • The link goes to a text page that has links to Real Audio files encoded either for 28.8 or ISDN.

  • I download the ISDN version.

  • It’s a RAM (Real Audio) file that my Mac (Yosemite) cannot play.

  • I look for an updated version on the Fresh Air site. It has no way of searching, so I click through the archives to get to the Sept. 16, 1999 page.

  • It’s a 404 page-not-found page.

  • I search for a way to play an old RAM file.

  • The top hit takes me to Real Audio’s cloud service, which offers me 2 gigabytes of free storage. I decline.

  • I pause for ten silent seconds in amazement that the Real Audio company still exists. Plus it owns the domain “real.com.”

  • I download a copy of RealPlayerSP from CNET, thus probably also downloading a copy of MacKeeper. Thanks, CNET!

  • I open the Real Player converter and Apple tells me I don’t have permission because I didn’t buy it through Apple’s TSA clearance center. Thanks, Apple!

  • I do the control-click thang to open it anyway. It gives me a warning about unsupported file formats that I don’t understand.

  • I set System Preferences > Security so that I am allowed to open any software I want. Apple tells me I am degrading the security of my system by not giving Apple a cut of every software purchase. Thanks, Apple!

  • I drag in the RAM file. It has no visible effect.

  • I use the converter’s upload menu, but this converter produced by Real doesn’t recognize Real Audio files. Thanks, Real Audio!

  • I download and install the Real Audio Cloud app. When I open it, it immediately scours my disk looking for video files. I didn’t ask it to do that and I don’t know what it’s doing with that info. A quick check shows that it too can’t play a RAM file. I uninstall it as quickly as I can.

  • I download VLC, my favorite audio player. (It’s a new Mac and I’m still loading it with my preferred software.)

  • Apple lets me open it, but only after warning me that I shouldn’t trust it because it comes from [dum dum dum] The Internet. The scary scary Internet. Come to the warm, white plastic bosom of the App Store, it murmurs.

  • I drag the file in to VLC. It fails, but it does me the favor of telling me why: it’s unable to connect to WHYY’s Real Audio server. Yup, this isn’t a media file, but a tiny file that sets up a connection between my computer and a server WHYY abandoned years ago. (There’s an example of what such a file looks like at the end of this post.) I should have remembered that that’s how Real worked. Actually, no, I shouldn’t have had to remember that. I’m just embarrassed that I did not. Also, I should have checked the size of the original Fresh Air file that I downloaded.

  • A search for “Tim Berners-Lee Fresh Air 1999” immediately turns up an NPR page that says the audio is no longer available.

    It’s no longer available because in 1999 Real Audio solved a problem for media companies: install an RA server and it’ll handle the messy details of sending audio to RA players across the Net. It seemed like a reasonable approach. But it was proprietary and so it failed, taking Fresh Air’s archives with it. Could and should Fresh Air have converted its files before it pulled the plug on the Real Audio server? Yeah, probably, but who knows what the contractual and technical situation was.

    By not following the example set by Tim Berners-Lee — open protocols, open standards, open hearts — this bit of history has been lost. In this case, it was an interview about TBL’s invention, thus confirming that irony remains the strongest force in the universe.
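    For anyone who never met Real Audio: a .ram file isn’t audio at all. It’s a one-line text pointer to a streaming server, along the lines of this made-up example (the URL is invented; the real one pointed at WHYY’s long-gone server):

        pnm://audio.whyy.example/freshair/sep161999.ra

    When the server goes away, the pointer points at nothing, and the “file” you saved is just a dead address.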

Categories: future, net neutrality, open access Tagged with: future • interoperability • open • platforms • protocols • web Date: November 26th, 2014 dw


