Just a quick note updating my post yesterday about the musky Tesla-Times affair. (I’m in an airport with just a few minutes before boarding.)
Times Man John Broder has posted his step-by-step rebuttal-explanation-apologia of Elon Musk’s data-driven accusations that Broder purposely drove a Tesla Model S to a full stop. Looked at purely as a drama of argument, it just gets more and more fascinating. But it is of course not merely a drama or an example; reputations of people are at stake, and reputations determine careers and livelihoods.
Broder’s overall defense is that he was on the phone with Tesla support at most of the turning points, and followed instructions scrupulously. As a result, just about every dimension of this story is now in play and in question: Were the data accurate or did Broder misremember turning on cruise control? Were the initial conditions accounted for (e.g., different size wheels)? Were the calculations based on that data accurate, or are the Tesla algorithms off when the weather is cold? Does being a first-time driver count as a normal instance? Does being 100% reliant on the judgment of support technicians make a test optimal or atypical? Should Broder have relied on what the instruments in the car said or what Support told him? If a charging pump is in a service area but no one sees it, does it exist?
And then there’s the next level. We humans live with this sort of uncertainty — multi-certainty? — all the time. It’s mainly what we talk about when given a chance. For most of us, it’s idle chatter — you get to rail against the NY Times, I get to write about data and knowledge, and Tesla car owners get to pronounce in high dudgeon. Fun for all. But John Broder’s boss is going to have to decide how to respond. It’s quite likely that that decision is going to reflect the murky epistemology of the situation. Evidence will be weighed and announced to be probabilistic. Policy guidelines will be consulted. Ultimately the decision is likely to be pegged to a single point of policy, phrased as something like, “In order to maintain the NYT’s reputation against even unlikely accusations, we have decided to …” or “Because our reviewer followed every instruction given him by Tesla…” Or some such; I’m not trying to predict the actual decision, but only that it will prioritize one principle from among dozens of possibilities.
Thus, as is usually the case, the decision will force a false sense of closure. It will pick one principle, and over time, the decision will push an even grosser simplification, for people will remember which way the bit flipped — fired, suspended, backed fully, whatever — but not the principle, not the doubt, not the unredeemable uncertainty. This case will become yet one more example of something simple — masking the fathomless complexity revealed even by a single review of a car.
That complexity is now permanently captured in the web of blue underlined text. We can always revisit it. But we won’t, because the matter was decided, and decisions betray complexity.
[Damn. Wish I had time to re-read this before posting! Forgive typos, thinkos, etc.?]