I am far from the first person to notice this, but it really pisses me off.
Treyarch, the studio behind the latest Call of Duty, lets us decide to play as a woman. Other games have been doing this for a long time. So, have half a yay, Treyarch. Nevertheless, your player’s gender is not reflected in the script. There’s an argument I don’t want to have about whether this makes sense; it really just comes down to bucks.
But what really pisses me, and many other people, off is this character choice screen:
This was an incredibly expensive game to design. The graphics are awesome, the sets are amazingly detailed. In single-player campaign mode, it’s a full length action movie—albeit not a very good one—and is budgeted like one.
But Treyarch couldn’t be bothered to spend $2.50 more to add some non-white faces to the roster. Really?
Here’s a fairly random screen shot I picked up from the Web.
Keep in mind that this is about one sixth the resolution you get on a gaming PC. Treyarch can add intense detail to a gun or piece of shrapnel but can’t be troubled to design a few faces that aren’t white?
We’ve got a word for people who assume that the white race is the “real” race.
Why do we never see job offerings that specify that applicants should have at least forty years of experience? Thirty years? Twenty years?
I understand that people can be qualified for a job with far less experience than one might think. We’ve all met people like that, damn them. But that’s why we couch some qualifications under the rubric “preferred.” So, do we think that having a lifetime of experience in a field is never preferred? Or even just a lifetime of experience of living and working?
(PS: If you hear of such a job in the Boston area, you know how to reach me.)
“Until close to Newton’s time, the stars had been accepted as a fixed background to the motions of the Earth and the rest of the solar system. The idea developed that they might be bodies like our sun, but even through a telescope they still looked like luminous points, revealing nothing of their size. Newton found a way to tackle this problem (System 596). He noted that a prominent (first magnitude) star looked about as bright as Saturn. He knew how far away Saturn is; and also knew that we see Saturn by the sunlight that it scatters back towards us. Given that the intensity of light from a source falls off as the inverse square of the distance, he could calculate how far away a star like our sun would have to be to look as bright by direct radiation as Saturn does by reflected light. His result, expressed in modern terms, was about ten lightyears, which is absolutely of the right order of magnitude.”
A.P. French, “Isaac Newton, Explorer of the Real World,” pp. 50-77, in Stayer, Marcia Sweet, ed., Newton’s Dream. Montreal, CA: MQUP, 1988.
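Newton’s back-of-the-envelope can be rerun in modern units. Here is a minimal sketch; the albedo value and the assumption that Saturn scatters light evenly in all directions are mine, chosen only to show that the inverse-square argument lands in his ballpark:

```python
import math

AU_KM = 1.496e8           # astronomical unit, in km
LY_IN_AU = 63241.0        # light-year, in AU
d_saturn = 9.5            # Saturn's mean distance from the Sun, in AU
r_saturn = 60268 / AU_KM  # Saturn's radius, converted to AU
albedo = 0.5              # assumed fraction of sunlight Saturn reflects

# Saturn intercepts a disk pi*r^2 of the solar flux at distance d, and the
# reflected light spreads over a sphere of radius ~d on its way back to us.
# This gives Saturn's apparent brightness as a fraction of the Sun's:
ratio = albedo * r_saturn**2 / (4 * d_saturn**4)

# A Sun-like star at distance D (in AU) looks equally bright when (1/D)^2 = ratio.
distance_ly = math.sqrt(1 / ratio) / LY_IN_AU
print(f"{distance_ly:.0f} light-years")  # prints "10 light-years"
```

The exact number depends entirely on the assumed albedo and scattering geometry, but any reasonable choice gives a distance of order ten light-years, which is Newton’s result.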
Categories: misc Tagged with: newton Date: April 22nd, 2016 dw
The excerpt argues that the 1960s political movement did not fail. It changed expectations by changing our sense of what’s possible. One effect of this: it limited the ability of American politicians to blithely engage in foreign wars for a full generation…and then changed the way we engage in those wars, albeit not necessarily for the better.
Obviously we can argue about this. But that’s not my main interest in the excerpt. Rather, I’m interested in the power of changes in common sense, which I’m taking to mean our most basic ideas about how the world is put together, how it could be put together, and how it should be put together.
This is the very core of my fascination with technology for the past thirty years. It’s why I studied the history of philosophy before that.
And btw, this is not technodeterminism. The link between technology and common sense is indirect, but real: new tech opens new possibilities. We seize those opportunities based on non-technological motivations and understandings. When tech is radically different enough that new strategies successfully exploit those opportunities, we can learn a new common sense from those strategies. That is, in my view, what has been happening for the past twenty years.
Anyway, I now have three books to read: something by Wallerstein, The Democracy Project, and Graeber’s early work, Debt.
A tip of the haat to Jaap Van Till for pointing me to this. His recent post on the current French protests fills an important gaap in American media coverage. (I tease because I love :)
Mills Baker defends personalization on the right grounds. In a brilliant and brilliantly written post, he maintains that the personalization provided by sites does at scale what we do in the real world to enable conversations: through multiple and often subtle signals, we let an interlocutor know where our interests and beliefs are similar enough that we are able to safely express our differences.
Digression: This is at the heart of our cultural fear of echo chambers, in my opinion. Conversation consists of iteration on small differences based on an iceberg of agreement. Every conversation inadvertently reinforces the beliefs that enable it to go forward. Likewise, understanding is contextual, assimilating the novel to the familiar, thus reinforcing that context by making it richer and more coherent. But our tradition has taught us that Reason requires us to be open to all ideas, ready to undo the entire structure of our beliefs. Reason, if applied purely, would thus make conversation, understanding, and knowledge impossible. In fearing echo chambers, we are running from the fact that understanding and conversation share the basic elements of echo chambers. I’ll return to this point in a later post sometime…
I love everything about Mills’ post except his under-valuing of concerns about the power personalization has over us on-line. Yes, personalization is a requirement in a scaled environment. Yes, the right comparison is between our new info flows and our old info trickles. But…
…Mills does not fully confront the main complaint: our interests and the interests of the commercial entities that are doing the personalizing do not fully coincide. Facebook has an economic motivation to get us to click more and to exit Facebook sessions eager to return for more. Facebook thus has an economic interest in showing us personalized clickbait, and in filtering our feeds toward happiness rather than hey-my-cat-died-yesterday posts.
In one sense, this is entirely Mills’ point. He wants designers to understand the positive role personalization has always played, so they can reinstate that role in software that works for us. He thinks that getting this right is the responsibility of the software for “Most users do not want the ‘control’ of RSS and Twitter lists and blocking, muting, and unfollowing their fellows.” Thus the software needs to learn from the clues left inadvertently by users. (I’d argue that there’s also room for better designed control systems. I bet Mills agrees, because how could anyone argue against better designed anything?)
But in my view he too casually dismisses the responsibility and culpability of some of the most important sites when he writes:
The idea that personalization is about corporate or political control is an emotionally satisfying but inaccurate one.
If we take “personalization” in the insightful and useful way he has defined it, then sure. But when people rail against personalization they are thinking about the algorithmic function performed by commercial entities. And those entities have a massive incentive—exercised by companies like Facebook—to personalize the flow of information toward users as consumers rather than as persons.
Thanks to Dave Birk for pointing me to Mills’ post.
My wife and I have been going to dance competitions and multi-troupe performances for the past few years because our son and his partner are in various dance companies. This puts us into environments where we do not belong. It’s pretty awesome.
Dance is big in Boston. There are tons of groups, and when they get together they fill large auditoriums; a competition this weekend had about fifteen groups performing in front of a standing-room-only crowd of over 1,500.
And what audiences! They are beyond enthusiastic. They cheer on the teams at an astounding number of whoos per minute.
The teams are remarkable, and not just because of the high level of performance and choreography:
They are diverse in every direction: gender, race, sexual and gender identity, body type.
The dances are often gender indifferent in their choreography, although there are tropes that remain: men lift and catch women more than vice versa. Still, the women hit as hard as the men.
They are dancing to some of America’s cultural gifts: hiphop, jazz, show, and their mashups.
They have worked hard on a shared project with occasional star turns — the guy who can windmill, the woman who excels at pop and locking — but without stars.
You can be the oldest people in the audience, as my wife and I usually are, and be forgiven for thinking that no matter how cynical this generation may be, they are dancing the American dream.
It is a long form interview, and basically unedited: I did a little clean-up for clarity, but it’s still got conversational ambiguities, as well as some thematic inconsistencies because David was asking me questions I haven’t thought about.
In the interview I do talk a bit about why I’m embarrassed about being a gamer. The first step is admitting just how much of a gamer I am. I’m pretty much of one, going back all the way to the original Colossal Cave adventure. I’ve tried most genres but seem to get the most enjoyment from various forms of first person shooters. I’m no good at platformers or other forms of twitch games. RPGs are too slow for me because I don’t get invested in the characters. Most online games are too hard for me, so I feel like I’m slowing down my teammates, although I’ve spent a lot of time in Left 4 Dead 2. Some other favorites: The Bioshock series. Portal 2. The original Doom and Wolfenstein. The Luxor games. Some pinball games. I enjoyed Dead Rising 3 and even Max Payne 3. Far Cry 4, too. I guess it takes at least three tries to get games right. Anyway, I’ve never had a systematic memory, so those are just the beans that fall out when I shake the ol’ pod, but they’re probably representative.
Games are literally a pass-time for me: I tend to play them as a break from work. I would count programming as a hobby, not a pastime, because it has an outcome: it’s like a crossword puzzle that, once finished, you could actually use for something. When programming, I feel like I’m doing something, even though mostly what I work on are utilities that cost me hundreds of hours and, by the time I die, will have saved me minutes. Games simply fill the gaps in my interest.
So, why is it embarrassing to me? For one thing, many games support values that I detest. The most obvious is violence, but I haven’t found that a lifetime of killing screen-based enemies has inured me to real violence or has led me to favor violence over peaceful solutions.
The hypermasculinity of action games concerns me more because few people are going to be convinced by games that shooting hordes of aliens is normal, but many will be further confirmed that men are the real heroes of life’s narratives. Although games have become less grossly misogynistic and homophobic (e.g., female action leads are now not uncommon), if you have any doubts that they still trade on harmful stereotypes and assumptions — and why would you? — Anita Sarkeesian’s brilliant “Tropes vs. Women” videos will set you straight.
But I’m more embarrassed about playing games than I am about watching action movies about which those same criticisms can be made.
In part it’s because games are associated with children. In the Don’t Die interview, I point to games that are more sophisticated and adult, but many of the games I listed above are no more sophisticated emotionally or narratively than a very bad TV show. So, mainly because I’m interested, here’s what I find appealing about the games I’ve listed:
Left 4 Dead is beautifully designed to encourage genuine collaboration among four players.
The Bioshock series creates imaginative science fiction worlds that would be better termed “political fiction.”
Portal 2 is a great logic game — a few rules and ingenious problems. But it is also a hilarious social commentary with Pixar-quality touches of brilliance. Example: the singing sentry guns.
The original Doom was scary as hell.
The original Wolfenstein let you explore a maze with surprises.
Luxor is an arcade game that is at a good challenge level for me. Also, the balls make a reassuring sound. (I am particularly fond of Luxor Evolved, which is “trippy” and somehow appeals to my lizard-brain-on-acid.)
Max Payne 3 was dumb fun in a well-realized setting.
Dead Rising 3 mocks its genre while indulging in it. It does not require precise control, which I lack.
When I think about it, almost all of these games share some traits. First, they are easy enough that I can succeed at them. Most games are not. Second, they tend to have pushed the graphic envelope when introduced. I remain in awe of what those computer doohickeys can do these days. Third, many of them are meta about their genres, which is often just an excuse for being retrograde in their values. Apparently I fall for that.
I think it comes down to this: If embarrassment is the exposure of something private that doesn’t match one’s public persona, then clearly, the major reason I find gaming embarrassing is because I am publicly a thoughtful person. Or at least I try to be. Or at the least least, I pretend to be. Most of the games I play are not thoughtful. Sure, Portal 2 is. Gone Home is. Bioshock is in its way. But Dead Rising is mindless…except for its meta-awareness of its tropes and its own ridiculousness; I completed large chunks of it while dressed in a tutu.
This is not what a semi-academic is supposed to be doing. Or so my embarrassment tells me.
PS: In the Don’t Die interview, the game I’m trying to remember that has the word “dust” in its title is “Spec Ops.” There is dust in the game, but not in the title.
Jigsaw — the Big Ideas do-tank of Google — has created a site that wants to pop your filter bubble. It’s pretty awesome.
Unfiltered.news visualizes on a map the top topics in countries around the world. Click on a topic — all of them are translated into your language — to see how it’s trending in different localities. Use a slider to go back in time. It’s all very slick and animated.
Best of all, it automatically shows you the topics that are not trending in your region. That makes it more MultiFiltered than Unfiltered, but that’s actually what we want. (HeteroFiltered? Nah, that sounds very wrong.)
The Web was supposed to enable the world to talk amongst itself. It has left us way ahead of where we were but far behind where we want to be. This is not a failure of the Web but the result of the way understanding and attention work: understanding is a semi-coherent context (AKA echo chamber) that works by assimilating the novel into the familiar. Attention notices the novel based upon its sense of the familiar. It’s impossible to break out of this cycle. Even G-d couldn’t do it, having to show Itself to Moses as a talking, burning bush — weird, but a weird combination of familiar elements.
(This “hermeneutic circle” may be inevitable, but as you traverse it you can still be open-minded and curious or a self-righteous a-hole.)
I would love to see an integration of Google News and Unfiltered so that it doesn’t require you to remember to go to the site after you’ve gone to Google News. If Google News were integrated into Unfiltered.news, we could make the latter our main news site. Even better, Unfiltered could be integrated into Google News so that everyone who uses that very popular site could have their curiosity piqued.
Until then, I hope to make Unfiltered a regular part of my news behavior.
Mike Ananny has a post at Nieman Lab that I hope the NY Times editorial board reads. It argues that the next public editor (what we used to call an ombudsperson) should be deeply versed in digital life, from algorithms to social media. Amen.
This is timely because the current public editor, Margaret Sullivan, is leaving the Times to become a columnist for the Washington Post. I think she has done an excellent job in a very difficult position, and I’m sorry to see her leave it.
But that does make this a good time for the Times to re-think not just the competencies of a public editor, but also the modality of the position.
Currently the public sees the public editor as a columnist who stands between them and the editorial staff of the paper. She writes on behalf of the readers, explaining and adjudicating. It is a challenging job, to say the least.
But this role should be broadened so that it includes not just the public editor but the public at large. Let the public editor continue to write blog posts — Sullivan’s have been good examples of the form. But also let the public have its say in more than comments on those posts. As a blogger, the public editor can discuss only a small percentage of the concerns that readers have.
To scale this, the Times could set up an open forum in which the public can raise topics that readers can discuss and upvote. Or perhaps a Stack Overflow sort of board would work. No matter how it’s done, the public would get to raise issues, and the public would get to discuss and promote (or demote) them. Most issues are likely to be handled by readers talking amongst themselves, but the public editor would watch carefully to see where she needs to step in.
Maybe those implementations would fail or spin out of control. But there is very likely a way to scale the conversation so that readers are far more engaged in what they would increasingly see as their paper.