I saw it on New Year’s Eve and liked it a lot. But I think it’s best taken as a series of brilliant set pieces. String them together and you have a fairly predictable narrative arc, and a thematic point ([SPOILER] Greed is bad) that isn’t going to change anyone’s mind. But the set pieces are incredibly well done because Scorsese. And Leo DiCaprio is just great in it.
Some people are upset because the movie doesn’t condemn the behavior it depicts. Yikes. Scorsese is obviously showing us behavior he finds so extraordinarily bad that he was motivated to make a movie about it. To tack on some moralizing elements would only lessen the impact, because that would imply that we need to be told that the behavior depicted is bad.
Mike Ryan at HuffPo writes about this question and, citing Chuck Klosterman, compares Leo’s character to Archie Bunker. But there’s very little to understand about Archie. He’s an ignorant bigot. Haha. The Wolf of Wall Street instead shows us a sub-culture that is twisted and extreme, but is coherent within its own little world. There’s something to understand there, which is why Leo is able to give an Oscar-worthy performance. In that, it’s much like The Godfather or The Sopranos, not to mention Good Fellas. It is also more like American Psycho than like Wall Street. (And speaking of Oliver Stone, one of my very least favorite directors: if you want to be hit repeatedly with a gigantic Morality Hammer, watch Platoon, if you can get through it.)
Good Fellas is a better movie than Wolf (in my opinion, natch) because it is less predictable, the main character is more morally nuanced, there are more unforgettable characters, etc. But I thought Wolf was very good, very entertaining, and treated us like moral grownups.
Not that all of us are.
Tagged with: movies
Date: January 3rd, 2014 dw
My Pebble watch arrived a week ago. It’s a programmable wristwatch that talks to your Android phone or iPhone. When it arrived, I was a little disappointed. I’m happier with it now.
I didn’t make it into the Kickstarter in time, but I was in the first wave of buyers after that. Pebble has done an outstanding job of blogging about the process by which it has gone from concept to shipping product, and I’ve generally liked the choices they’ve made. Ever since my Casio AE-20, I’ve wanted a digital representation of analog hands. Plus I very much like the idea of being able to download watch faces that are open source and designed by, well, anyone. Plus, there can be and will be apps.
But I was disappointed because it’s ugly. It’s too big on my wrist. Not exactly sleek. Plus, I hate the band it ships with: resin (or some other type of plastic), plain, and irritating to my skin. (Of course this is a personal reaction. It’s a blog, people!) But I replaced the band with a blue leather band — I got the black version of the watch — and I think it looks much better. In fact, now I like the way it looks.
Also, I began by downloading a set of fake analog faces, and I like them OK, but I’ve started using a default face that spells out the time in words. It’s a little harder to parse than a set of hands, and it doesn’t have the date on it, but if Project Runway has taught me anything, it is that one must make sacrifices for fashion. (Plus, I’ve now found a variant with the date on it.)
There are not a lot of apps yet, and I haven’t even found a stopwatch/countdown timer that I like. But I will. Also, I was surprised that after I’d downloaded about six watch faces, it told me that it was out of memory. (To delete a face, you use the Pebble app on your smartphone.)
So, I haven’t gotten to the basics yet. It’s got a readable display that’s more like e-paper than the usual LCD; it’s fine in bright light, and the night light works well. A charged battery is supposed to last a week, and mine has so far. You need a special cable to charge it; on the wall side it plugs into any normal USB charger, but on the watch side it attaches via the magic of magnetism. I know Pebble considered using a normal USB socket, but then the watch wouldn’t be waterproof, so it seems like a reasonable trade-off, although I’m pretty sure I’ve already lost the cable. I hope they sell them by the dozen.
The watch synced incredibly easily via Bluetooth with my Android phone. By default it sends the text of emails and SMS texts to your watch. Since it buzzes every time, and since I get maybe 150 emails a day, I turned off the email syncing. But since I get very few texts, and they’re almost always from my family, I’ve left that notification on. It buzzes your wrist, and you can use the watch buttons to scroll through the message. You can’t compose text on your watch.
It also comes ready to pause or skip forward or backward through your phone’s music. I’ve found this useful while listening to podcasts; a click of a watch button and I can hear the bus driver telling us to duck. (The ol’ 66 is a pretty tough bus route.)
This is definitely a 1.0 release. It’s fully functional, and with a new band it looks pretty snappy. If I were you, I’d wait for the next release, by which time it may have some strong competition. It’s also a little expensive at $150. Still, I like the watch, I like the integration with Android, and I like the company’s transparency. It’s bringing me pleasure.
Tagged with: android
Date: June 17th, 2013 dw
[SPOILERS COMING] A few paragraphs down I’m going to talk explicitly about the theme. If you haven’t seen the movie, you should stop there; I’ve marked it with a spoiler alert. Until then, there are no spoilers. But, this is a movie you should see with no expectations other than that it isn’t your ordinary film. So, my advice is to stop here.
I watched Upstream Color last night, the second movie by Shane Carruth, who gave us Primer in 2004, a time-travel movie that has spawned analyses that make Memento look like Babar’s Vacation.
Upstream Color is mysterious and difficult to fathom, but not because it is as intricately plotted as Primer. With Primer, you have to notice that a character’s middle button is undone in one scene but is buttoned in another. (I haven’t seen Primer in a while, so I’ve made up that example.) With Upstream Color you can let yourself relax a bit more. The salient details are flagged, generally. But how they go together, especially after the first third (i.e., after the pigs are introduced), will keep you focused.
The theme is as difficult as the plot. In fact, I can’t imagine anyone recognizing what the theme is — what the movie is actually about — while watching it. Still, you watch it enthralled. And that makes this a truly masterful movie. It is so beautifully constructed in images, sounds, and music (Carruth wrote the awesome score) that it carries you along. You are given enough narrative clues to keep you interested in what’s coming next, and you care about the characters. But Carruth has invented his own rhetoric for this movie, a correspondence of gestures and sounds that conveys shared meanings.
I had to read some analyses on the Web before the penny dropped. And even then there’s plenty left to ponder.
There are, in fact, at least two pennies. One concerns the narrative thread, along the lines of “What’s up with the pigs?” About this I shall say no more, but will instead recommend Daniel D’Addario’s article in Salon, which I liked up until the last couple of paragraphs…precisely where he goes from narrative to theme.
The second penny is expressed eloquently by Carruth himself in a terrific interview by Charlie Jane Anders. And a second interview by her about the ending is equally important. In it, Carruth explains why the ending is subversive of narratives, but it’s also clear that the theme itself is even more deeply subversive.
[SPOILER ALERT: ]
This movie is about people who think they are controlling their lives but in fact are being controlled by forces outside of themselves, at least according to Carruth. But control is expressed in the movie as being the author of one’s own narrative. These characters are certainly not in charge of the meta narrative about what’s shaping their story. The fact that it’s pigs ‘n’ worms (and, yes, orchids) is just one more splash of cold water: the narrative the characters tell themselves when they take back control couldn’t be less ennobling. Further, one can read the ending as showing the characters becoming the next set of enablers of the cycle.
I’m not at all sure that that’s what Carruth has in mind. His interview suggests that he instead sees the pigs and worms simply as part of nature, and nature doesn’t care about what we find pleasant or gross. The transcendence at the end is not about taking back control of one’s narrative but about accepting that the stories we tell ourselves are not stories that we give ourselves. That’s far better expressed through pigs in shit than bunnies in clover.
And yet this is a movie with a highly stylized and artificial language of image, sound, and music. It is a story we have been given by a creator who, like The Sampler (the guy recording sounds), is invisible to the characters but who is shaping so much of what they experience — the shepherd of the forces controlling the characters’ experience. I can’t avoid assuming that Carruth knows that he himself is The Sampler and we are his protagonists. During the movie and then afterwards, we — like his characters — are going to think we’re taking back control of the story, piecing together what happened. We assume there must be a story, and even that it has to be about us, but suppose it’s not. Suppose there’s nothing but pigs and worms. Suppose the story is nothing but the beautiful rhetoric of an author we cannot see — an author himself embedded in a cycle he did not create.
By the way, this is a great movie — although it does bother me that I had to read about it to see why.
Tagged with: movies
• upstream color
Date: May 25th, 2013 dw
Guy Horton criticizes Michael Heizer’s new artwork at the LA County Museum of Art.
I saw it a couple of weeks ago. And outwardly, it’s nothing but a giant rock with a walkway cut underneath it. But inwardly, it’s a big expensive rock with a pointless walkway cut underneath it.
Aesthetically, I got nothing from it. The rock is big and heavy. The walkway is sloping and concretey. Walking underneath it reveals nothing about the rock except that it looks the same from underneath as from ground level.
So maybe it’s one of them artworks that are really about an idea. But what idea? Rocks are big? Rocks have bottoms? Do you like rocks? I like rocks. Some idea like that?
I’m not saying that you have to be able to explain everything about an artwork. I take as one of the points of Rothko’s paradigmatic works that you can’t really explain why the best of them are numinous.
But you can at least gesture at the colors and use a word like “numinous.” If nobody can point to what there is to like about a work, then maybe it’s just a rock.
Here’s what the LACMA’s page says:
Taken whole, Levitated Mass speaks to the expanse of art history, from ancient traditions of creating artworks from megalithic stone, to modern forms of abstract geometries and cutting-edge feats of engineering.
In short: “It’s a rock. It wasn’t carved or nothing, and it was !@#$ing hard to get it here. We hope you enjoy it $10M worth.”
Tagged with: art
Date: July 12th, 2012 dw
Most fiction is crap. Often the plot is arbitrary or unsurprising. More often, you can see the author’s plans behind the writing: The author needs a brainy nerd, a wisecracking minor character, a mysterious presence, someone with the key to the jalopy. Whatever. The characters, the plot, the entire mess feels constructed. Which is usually the opposite of art. (This is certainly true of my pathetic stabs at fiction.)
Then, of course, there are the magicians. John Updike could make you feel you were inhabiting a real person within a single paragraph. I’m reading Philip Roth’s Nemesis now, and while I often find Roth’s world unpleasant to live in, I find myself in that world without any sense of Roth standing between it and me.
So, meet Brad Abruzzi. Brad was a Berkman Fellow last year, and we hit it off. Brad was also a lawyer in Harvard’s Office of the General Counsel, and I got to know him in that capacity since he was a silent hero in the effort to negotiate the freedom of 12M+ bibliographic records from Harvard Library. He has since moved to MIT, which is too bad for Harvard. I like Brad a lot.
But I had no idea, none at all, that he is a fiction writer whose work is the opposite of crap. You wouldn’t know it to look at him, but the guy can write. Of course, I don’t know what I would expect a good fiction writer to look like, short of a beret and a thick coat of pretension.
I downloaded Brad’s novel New Jersey’s Famous Turnpike Witch with trepidation, figuring I’d have to say something nice to him about it while technically salvaging my integrity through some clever, noncommittal choice of words. But NJFTPW is just wonderful. I’m only 70% through, and I’ll let you know how the whole thing goes, but I’m loving it so far. Brad has created a skewed world in which the NJ Turnpike is its own realm, with its own culture, sociology, and politics. The fulcrum of the story is Alice, a performance artist who — implausibly, until you realize that this is not the NJ Turnpike you’re used to driving — is beloved by the long lines of cars she ties up with her antics. The story is brimming with characters, none stock, most somewhat over-the-top, each richly imagined and each with her or his own unexpected history — funny short stories on their own. Brad, it turns out, is endlessly inventive. You would never ever read back from this book and figure it was probably written by a Harvard-MIT lawyer.
This is a really good book. Once you give in to its absurd premises, it follows a logic that makes sense as it unfolds. It’s funny, satiric, frequently hilarious, and full of sentences you’ll re-read because they’re that enjoyable.
Holy cow, Brad! Holy holy cow.
Tagged with: books
Date: June 18th, 2012 dw
[Note that this is cross posted at the new Digital Scholarship at Harvard blog.]
Ralph Schroeder and Eric Meyer of the Oxford Internet Institute are giving a talk sponsored by the Harvard Library on Internet, Science, and Transformations of Knowledge.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Ralph begins by defining e-research as “Research using digital tools and digital data for the distributed and collaborative production of knowledge.” He points to knowledge as the contentious term. “But we’re going to take a crack at why computational methods are such an important part of knowledge.” They’re going to start with theory and then move to cases.
Over the past couple of decades, we’ve moved from talking about supercomputing to the grid to Web 2.0 to clouds and now Big Data, Ralph says. There is continuity, however: it’s all e-research, and to have a theory of how e-research works, you need a few components: 1. Computational manipulability (mathematization) and 2. The social-technical forces that drive that.
Computational manipulability. This is important because mathematics enables consensus and thus collaboration. “High consensus, rapid discovery.”
Research technologies and driving forces. The key to driving knowledge is research technologies, he says. I.e., machines. You also need an organizational component.
Then you need to look at how that plays out in history, physics, astronomy, etc. Not all fields are organized in the same way.
Eric now talks, beginning with a quote from a scholar who says he now has more information than he needs, all without rooting around in libraries. But others complain that we are not asking new enough questions.
He begins with the Large Hadron Collider. It takes lots of people to build it and then to deal with the data it generates. Physics is usually cited as the epitome of e-research. It is the exemplar of how to do big collaboration, he says.
Distributed computation is a way of engaging citizens in science, he says. E.g., Galaxy Zoo, which engages citizens in classifying galaxies. Citizens have also found new types of galaxies (“green peas”) there. Another example: the Genetic Association Information Network is trying to find the cause of bipolar disorder. It has now grown into a worldwide collaboration. Another: Structure of Populations, Levels of Abundance, and Status of Humpbacks (SPLASH), a project that requires human brains to match humpback tails. By collaboratively working on data from 500 scientists around the Pacific Rim, patterns of migration have emerged, and it was possible to come up with a count of humpbacks (about 15-17K). We may even be able to find out how long humpbacks live. (It’s at least 120 years, because a harpoon head was found in one from a company that went out of business that long ago.)
Ralph looks at e-research in Sweden as an example. They have a major initiative under way trying to combine health data with population data. The Swedes have been doing this for a long time. Each Swede has a unique ID; this requires the trust of the population. The social component that engenders this trust is worth exploring, he says. He points to cases where IP rights have had to be negotiated. He also points to the Pynchon Wiki where experts and the crowd annotate Pynchon’s works. Also, Google Books is a source of research data.
Eric: Has Google taken over scholarly research? 70% of scholars use Google and 66% use Google Scholar. But in the humanities, 59% go to the library. 95% consult peers and experts — they ask people they trust. It’s true in the physical sciences too, he says, although the numbers vary some.
Eric says the digital is still considered a bit dirty as a research tool. If you have too many URLs in your footnotes it looks like you didn’t do any real work, or so people fear.
Ralph: Is e-research old wine in new bottles? Underlying all the different sorts of knowledge is mathematization: a shared symbolic language with which you can do things. You have a physical core that consists of computers around which lots of different scholars can gather. That core has changed over time, but all offer types of computational manipulability. The Pynchon Wiki just needs a server. The LHC needs to be distributed globally across sites with huge computing power. The machines at the core are constantly being refined. Different fields use this power differently, and focus their efforts on using those differences to drive their fields forward. This is true in literature and language as well. These research technologies have become so important since they enable researchers to work across domains. They are like passports across fields.
A scholar who uses this tech may gain social traction. But you also get resistance: “What are these guys doing with computing and Shakespeare?”
What can we do with this knowledge about how knowledge is changing? 1. We can inform funding decisions: What’s been happening in different fields, how they’re affected by social organizations, etc. 2. We need a multidisciplinary way of understanding e-research as a whole. We need more than case studies, Ralph says. We need to be aiming at developing a shared platform for understanding what’s going on. 3. Every time you use these techniques, you are either disintermediating data (e.g., Galaxy Zoo) or intermediating (biomedicine). 4. Given that it’s all digital, we as outsiders have tremendous opportunities to study it. We can analyze it. Which fields are moving where? Where are projects being funded and how are they being organized? You can map science better than ever. One project took a large chunk of academic journals and looked in real time at who is reading what, in what domain.
This lets us understand knowledge better, so we can work together better across departments and around the globe.
Q: Sometimes you have to take a humanities approach to knowledge. Maybe you need to use some of the old systems investigations tools. Maybe link Twitter to systems thinking.
A: Good point. But caution: I haven’t seen much research on how the next generation is doing research and is learning. We don’t have the good sociology yet to see what difference that makes. Does it fragment their attention? Or is this a good thing?
Q: It’d be useful to know who borrows what books, etc., but there are restrictions in the US. How about in Great Britain?
A: If anything, it’s more restrictive in the UK. In the UK a library can’t even archive a web site without permission.
A: The example I gave of real time tracking was of articles, not books. Maybe someone will track usage at Google Books.
Q: Can you talk about what happens to the experience of interpreting a text when you have so much computer-generated data?
A: In the best cases, it’s both/and. E.g., you can’t read all the 19th century digitized newspapers, but you can compute against it. But you still need to approach it with a thought process about how to interpret it. You need both sets of skills.
A: If someone comes along and says it’s all statistics, the reply is that no one wants to read pure stats. They want to read stats put into words.
Q: There’s a science reader that lets you keep track of which papers are being read.
A: E.g., Mendeley. But it’s a self-selected group who use these tools.
Q: In the physical sciences, the more info that’s out there, it’s hard to tell what’s important.
A: One way to address it is to think about it as a cycle: as a field gets overwhelmed with info, you get tools to concentrate the information. But if you only look at a small piece of knowledge, what are you losing? In some areas, e.g., areas within physics, everyone knows everyone else and what everyone else is doing. Earth sciences is a much broader community.
[Interesting talk. It's orthogonal to my own interests in how knowledge is becoming something that "lives" at the network level, and is thus being redefined. It's interesting to me to see how this looks when sliced through at a different angle.]
Tagged with: 2b2k
• too big to know
Date: June 7th, 2012 dw
Edward Burman recently sent me a very interesting email in response to my article about the 50th anniversary of Thomas Kuhn’s The Structure of Scientific Revolutions. So I bought his 2003 book Shift!: The Unfolding Internet – Hype, Hope and History (hint: If you buy it from Amazon, check the non-Amazon sellers listed there) which arrived while I was away this week. The book is not very long — 50,000 words or so — but it’s dense with ideas. For example, Edward argues in passing that the Net exploits already-existing trends toward globalization, rather than leading the way to it; he even has a couple of pages on Heidegger’s thinking about the nature of communication. It’s a rich book.
Shift! applies The Structure of Scientific Revolutions to the Internet revolution, wondering what the Internet paradigm will be. The chapters that go through the history of failed attempts to understand the Net — the “pre-paradigms” — are fascinating. Much of Edward’s analysis of business’ inability to grasp the Net mirrors cluetrain‘s themes. (In fact, I had the authorial d-bag reaction of wishing he had referenced Cluetrain…until I realized that Edward probably had the same reaction to my later books which mirror ideas in Shift!) The book is strong in its presentation of Kuhn’s ideas, and has a deep sense of our cultural and philosophical history.
All that would be enough to bring me to recommend the book. But Edward admirably jumps in with a prediction about what the Internet paradigm will be:
This…brings us to the new paradigm, which will condition our private and business lives as the twenty-first century evolves. It is a simple paradigm, and may be expressed in synthetic form in three simple words: ubiquitous invisible connectivity. That is to say, when the technologies, software and devices which enable global connectivity in real time become so ubiquitous that we are completely unaware of their presence…We are simply connected. [p. 170]
It’s unfair to leave it there since the book then elaborates on this idea in very useful ways. For example, he talks about the concept of “e-business” as being a pre-paradigm, and the actual paradigm being “The network itself becomes the company,” which includes an erosion of hierarchy by networks. But because I’ve just written about Kuhn, I found myself particularly interested in the book’s overall argument that Kuhn gives us a way to understand the Internet. Is there an Internet paradigm shift?
There are two ways to take this.
First, is there a paradigm by which we will come to understand the Internet? Edward argues yes, we are rapidly settling into the paradigmatic understanding of the Net. In fact, he guesses that “the present revolution [will] be completed and the new paradigm of being [will] be in force” in “roughly five to eight years” [p. 175]. He sagely points to three main areas where he thinks there will be sufficient development to enable the new paradigm to take root: the rise of the mobile Internet, the development of productivity tools that “facilitate improvements in the supply chain” and marketing, and “the increased deployment of what have been termed social applications, involving education and the political sphere of national and local government.” [pp. 175-176] Not bad for 2003!
But I’d point to two ways, important to his argument, in which things have not turned out as Edward thought. First, the 5-8 years after the book came out were marked by a continuing series of disruptive Internet developments, including general purpose social networks, Wikipedia, e-books, crowdsourcing, YouTube, open access, open courseware, Khan Academy, etc. etc. I hope it’s obvious that I’m not criticizing Edward for not being prescient enough. The book is pretty much as smart as you can get about these things. My point is that the disruptions just keep coming. The Net is not yet settling down. So we have to ask: Is the Net going to enable continuous disruption and self-transformation? If so, will it be captured by a paradigm? (Or, as M. Night Shyamalan might put it, is disruption the paradigm?)
Second, after listing the three areas of development over the next 5-8 years, the book makes a claim central to the basic formulation of the new paradigm Edward sees emerging: “And, vitally, for thorough implementation [of the paradigm] the three strands must be invisible to the user: ubiquitous and invisible connectivity.” [p. 176] If the invisibility of the paradigm is required for its acceptance, then we are no closer to that event, for the Internet remains perhaps the single most evident aspect of our culture. No other cultural object is mentioned as many times in a single day’s newspaper. The Internet, and the three components the book points to, are more evident to us than ever. (The exception might be innovations in logistics and supply chain management; I’d say Internet marketing remains highly conspicuous.) We’ve never had a technology that so enabled innovation and creativity, but there may well come a time when we stop focusing so much cultural attention on the Internet. We are not close yet.
Even then, we may not end up with a single paradigm of the Internet. It’s really not clear to me that the attendees at ROFLcon have the same Net paradigm as less Internet-besotted youths. Maybe over time we will all settle into a single Internet paradigm, but maybe we won’t. And we might not because the forces that bring about Kuhnian paradigms are not at play when it comes to the Internet. Kuhnian paradigms triumph because disciplines come to us through institutions that accept some practices and ideas as good science; through textbooks that codify those ideas and practices; and through communities of professionals who train and certify the new scientists. The Net lacks all of that. Our understanding of the Net may thus be as diverse as our cultures and sub-cultures, rather than being as uniform and enforced as, say, genetics’ understanding of DNA is.
Second, is the Internet affecting what we might call the general paradigm of our age? Personally, I think the answer is yes, but I wouldn’t use Kuhn to explain this. I think what’s happening — and Edward agrees — is that we are reinterpreting our world through the lens of the Internet. We did this when clocks were invented and the world started to look like a mechanical clockwork. We did this when steam engines made society and then human motivation look like the action of pressures, governors, and ventings. We did this when telegraphs and then telephones made communication look like the encoding of messages passed through a medium. We understand our world through our technologies. I find (for example) Lewis Mumford more helpful here than Kuhn.
Now, it is certainly the case that reinterpreting our world in light of the Net requires us to interpret the Net in the first place. But I’m not convinced we need a Kuhnian paradigm for this. We just need a set of properties we think are central, and I think Edward and I agree that these properties include the abundant and loose connections, the lack of centralized control, the global reach, the ability of everyone (just about) to contribute, the messiness, the scale. That’s why you don’t have to agree about what constitutes a Kuhnian paradigm to find Shift! fascinating, for it helps illuminate the key question: How are the properties of the Internet becoming the properties we see in — or notice as missing from — the world outside the Internet?
Howard Weaver’s Write Hard, Die Free is a two-fisted memoir of how The Anchorage Daily News — a newspaper he helped found and then edited — went on to win two Pulitzer prizes and defeat the established major daily, which was, according to Howard, an oil industry mouthpiece. It’s an entertaining story of scoops, legwork, drinking, and camaraderie.
It’s also a reminder of an age that now seems as distant as the cowboys, although it was only a couple of decades ago. In part that’s because Alaska remains a frontier state, but it’s also because, while the future of newspapers is unknown, the days of brawlin’ reporters are over.
Write Hard, Die Free (I love the title) is, as they say, a good read, and a reminder of a time not as distant as it already seems.
Tagged with: books
Date: April 1st, 2012 dw
We saw The Artist tonight. Disappointing.
I’m not getting what people are seeing in it. Yes, the hero (Jean Dujardin) is very charming, and there are a couple of laughs. It’s not a terrible movie. But best picture of the year? Really?
It is utterly predictable. Its message, such as it is, is shallow. The characters are one-dimensional, and sometimes less: the hero’s wife (Penelope Ann Miller) has only one point to make. The female lead (Bérénice Bejo) was to me unappealing and even a little creepy. The dog, about which everyone raves, could have taken lessons from Frasier’s dog.
Taken together, “The Artist” was pretty much the definition of meh. I don’t see why it’s been nominated for Best Picture, much less why it’s being treated as a shoo-in.
Tagged with: oscars
Date: February 19th, 2012 dw
Cory Doctorow has reviewed my book Too Big to Know at BoingBoing. It’s the sort of review an author dreams of, not only because it’s positive, but because it gets the book better than the author does.
Tagged with: 2b2k
• too big to know
• cory doctorow
Date: February 1st, 2012 dw