Mills Baker defends personalization on the right grounds. In a brilliant and brilliantly written post, he maintains that the personalization provided by sites does at scale what we do in the real world to enable conversations: through multiple and often subtle signals, we let an interlocutor know where our interests and beliefs are similar enough that we are able to safely express our differences.
Digression: This is at the heart of our cultural fear of echo chambers, in my opinion. Conversation consists of iteration on small differences based on an iceberg of agreement. Every conversation inadvertently reinforces the beliefs that enable it to go forward. Likewise, understanding is contextual, assimilating the novel to the familiar, thus reinforcing that context by making it richer and more coherent. But our tradition has taught us that Reason requires us to be open to all ideas, ready to undo the entire structure of our beliefs. Reason, if applied purely, would thus make conversation, understanding, and knowledge impossible. In fearing echo chambers, we are running from the fact that understanding and conversation share the basic elements of echo chambers. I’ll return to this point in a later post sometime…
I love everything about Mills’ post except his under-valuing of concerns about the power personalization has over us on-line. Yes, personalization is a requirement in a scaled environment. Yes, the right comparison is between our new info flows and our old info trickles. But…
…Mills does not fully confront the main complaint: our interests and the interests of the commercial entities doing the personalizing do not fully coincide. Facebook has an economic motivation to get us to click more and to exit Facebook sessions eager to return for more. Facebook thus has an economic interest in showing us personalized clickbait, and in filtering our feeds toward happiness rather than hey-my-cat-died-yesterday posts.
In one sense, this is entirely Mills’ point. He wants designers to understand the positive role personalization has always played, so they can reinstate that role in software that works for us. He thinks that getting this right is the responsibility of the software, for “Most users do not want the ‘control’ of RSS and Twitter lists and blocking, muting, and unfollowing their fellows.” Thus the software needs to learn from the clues users leave inadvertently. (I’d argue that there’s also room for better designed control systems. I bet Mills agrees, because how could anyone argue against better designed anything?)
But in my view he too casually dismisses the responsibility and culpability of some of the most important sites when he writes:
The idea that personalization is about corporate or political control is an emotionally satisfying but inaccurate one.
If we take “personalization” in the insightful and useful way he has defined it, then sure. But when people rail against personalization they are thinking about the algorithmic function performed by commercial entities. And those entities have a massive incentive—exercised by companies like Facebook—to personalize the flow of information toward users as consumers rather than as persons.
Thanks to Dave Birk for pointing me to Mills’ post.