In 1962, Claude Levi-Strauss brought the concept of bricolage into the anthropological and philosophical lexicons. It has to do with thinking with one’s hands, putting together new things by repurposing old things. It has since been applied to the Internet (including, apparently, by me, thanks to a tip from Rageboy). The term “bricolage” uncovers something important about the Net, but it also covers up something fundamental about the Net that has been growing even more important.
In The Savage Mind (relevant excerpt), CLS argued against the prevailing view that “primitive” peoples were unable to form abstract concepts. After showing that they often have extensive sets of concepts for flora and fauna, he maintains that these concepts go beyond what they pragmatically need to know:
…animals and plants are not known as a result of their usefulness; they are deemed to be useful or interesting because they are first of all known.
It may be objected that science of this kind can scarcely be of much practical effect. The answer to this is that its main purpose is not a practical one. It meets intellectual requirements rather than or instead of satisfying needs.
It meets, in short, a “demand for order.”
CLS wants us to see the mythopoeic world as being as rich, complex, and detailed as the modern scientific world, while still drawing the relevant distinctions. He uses bricolage as a bridge for our understanding. A bricoleur scavenges the environment for items that can be reused, getting their heft, trying them out, fitting them together and then giving them a twist. The mythopoeic mind engages in this bricolage rather than in the scientific or engineering enterprise of letting a desired project assemble the “raw materials.” A bricoleur has what s/he has and shapes projects around that. And what the bricoleur has generally has been fashioned for some other purpose.
Bricolage is a very useful concept for understanding the Internet’s mashup culture, its culture of re-use. It expresses the way in which one thing inspires another, and the power of re-contextualization. It evokes the sense of invention and play that is dominant on so much of the Net. While the Engineer is King (and, all too rarely, Queen) of this age, the bricoleurs have kept the Net weird, and bless them for it.
But there are at least two ways in which this metaphor is inapt.
First, traditional bricoleurs don’t have search engines that let them in a single glance look across the universe for what they need. Search engines let materials assemble around projects, rather than projects be shaped by the available materials. (Yes, this distinction is too strong. Yes, it’s more complicated than that. Still, there’s some truth to it.)
Second, we have been moving with some consistency toward a Net that at its topmost layers replicates the interoperability of its lower layers. Those low levels specify the rules — protocols — by which networks can join together to move data packets to their destinations. Those packets are designed so they can be correctly interpreted as data by any recipient applications. As you move up the stack, you start to lose this interoperability: Microsoft Word can’t make sense of the data output by Pages, and a graphics program may not be able to make sense of the layer information output by Photoshop.
But, over time, we’re getting better at this:
Applications add import and export services as the market requires. More consequentially, more and richer standards for interoperability continue to emerge, as they have from the very beginning: FTP, HTML, XML, Dublin Core, Schema.org, the many Semantic Web vocabularies, ontologies, and schema, etc.
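As a small illustration of what it looks like to create for unanticipated reuse, here is a minimal sketch (the work's title and license URL are invented for the example) that describes a piece of work using the shared Schema.org vocabulary as JSON-LD, so that applications we have never heard of can interpret it:

```python
import json

# Describe a work using Schema.org field names so any consumer that
# understands that shared vocabulary can interpret the record, even
# though we can't anticipate who that consumer will be.
# (The name and license values below are invented for illustration.)
work = {
    "@context": "https://schema.org",
    "@type": "CreativeWork",
    "name": "Notes on Bricolage",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "encodingFormat": "text/html",
}

# Serialize as JSON-LD, a standard interchange format.
print(json.dumps(work, indent=2))
```

The point of the sketch is not the particular fields but the gesture: by expressing the metadata in a standard vocabulary rather than a private one, the object joins the pool of things future bricoleurs can find and repurpose.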
More important, we are now taking steps to make sure that what we create is available for re-use in ways we have not imagined. We do this by working within standards and protocols. We do it by putting our work into the sphere of reusable items, whether that’s by applying a Creative Commons license, putting our work into a public archive, or even just paying attention to what will make our work more findable.
This is very different from the bricoleur’s world, in which objects are designed for one use and it takes the ingenuity of the bricoleur to find new uses for them.
This movement continues the initial work of the Internet. From the beginning the Net has been predicated on providing an environment with the fewest possible assumptions about how it will be used. The Net was designed to move anyone’s information no matter what it’s about, what it’s for, where it’s going, or who owns it. The higher levels of the stack are increasingly realizing that vision. The Net is thus more than ever becoming a universe of objects explicitly designed for reuse in unexpected ways. (An important corrective to this sunny point of view: Christian Sandvig’s brilliant description of how the Net has incrementally become designed for delivering video above all else.)
Insofar as we are explicitly creating works designed for unexpected reuse, the bricolage metaphor is flawed, as all metaphors are. It usefully highlights the “found” nature of so much of Internet culture. It puts into the shadows, however, the truly transformative movement we are now living through in which we are explicitly designing objects for uses that we cannot anticipate.
Tagged with: reuse
Date: June 12th, 2016 dw
There’s a small but interesting discussion at the philosophy subreddit of my review of Michael Lynch’s The Internet of Us.
Tagged with: 2b2k
Date: May 9th, 2016 dw
Mills Baker defends personalization on the right grounds. In a brilliant and brilliantly written post, he maintains that the personalization provided by sites does at scale what we do in the real world to enable conversations: through multiple and often subtle signals, we let an interlocutor know where our interests and beliefs are similar enough that we are able to safely express our differences.
Digression: This is at the heart of our cultural fear of echo chambers, in my opinion. Conversation consists of iteration on small differences based on an iceberg of agreement. Every conversation inadvertently reinforces the beliefs that enable it to go forward. Likewise, understanding is contextual, assimilating the novel to the familiar, thus reinforcing that context by making it richer and more coherent. But our tradition has taught us that Reason requires us to be open to all ideas, ready to undo the entire structure of our beliefs. Reason, if applied purely, would thus make conversation, understanding, and knowledge impossible. In fearing echo chambers, we are running from the fact that understanding and conversation share the basic elements of echo chambers. I’ll return to this point in a later post sometime…
I love everything about Mills’ post except his under-valuing of concerns about the power personalization has over us on-line. Yes, personalization is a requirement in a scaled environment. Yes, the right comparison is between our new info flows and our old info trickles. But…
…Mills does not fully confront the main complaint: our interests and the interests of the commercial entities that are doing the personalizing do not fully coincide. Facebook has an economic motivation to get us to click more and to exit Facebook sessions eager to return for more. Facebook thus has an economic interest in showing us personalized clickbait, and in filtering our feeds toward happiness rather than hey-my-cat-died-yesterday posts.
In one sense, this is entirely Mills’ point. He wants designers to understand the positive role personalization has always played, so they can reinstate that role in software that works for us. He thinks that getting this right is the responsibility of the software for “Most users do not want the ‘control’ of RSS and Twitter lists and blocking, muting, and unfollowing their fellows.” Thus the software needs to learn from the clues left inadvertently by users. (I’d argue that there’s also room for better designed control systems. I bet Mills agrees, because how could anyone argue against better designed anything?)
But in my view he too casually dismisses the responsibility and culpability of some of the most important sites when he writes:
The idea that personalization is about corporate or political control is an emotionally satisfying but inaccurate one.
If we take “personalization” in the insightful and useful way he has defined it, then sure. But when people rail against personalization they are thinking about the algorithmic function performed by commercial entities. And those entities have a massive incentive—exercised by companies like Facebook—to personalize the flow of information toward users as consumers rather than as persons.
Thanks to Dave Birk for pointing me to Mills’ post.
Categories: echo chambers
Tagged with: 2b2k
Date: April 6th, 2016 dw
Here’s something I took from Heidegger that may not be in Heidegger:
The basis of morality is the recognition that the world matters to each person, but matters differently.
After that, I don’t know what to do except to be highly suspicious of anyone who cites moral precepts.
It turns out that I don’t find morality to be a very useful category since the way the world matters to us is so deeply contextual and individual: whether you should steal the loaf of bread has less to do with the general principle that it’s wrong to steal, and more to do with how hungry your family is, how much money you have, your opportunities to earn more money, the moral and legal codes of your culture, how kind the baker has been to you, what you know of the baker’s own circumstances, etc.
“Do unto others…,” Kant’s Categorical Imperative, the traditional Jewish formulation of “Don’t do unto others what you would not want done to you,” all are heuristics for remembering that the world matters to others just as much as it matters to you, but it matters differently. Trying to apply those heuristics without recognizing that the world can matter differently can lead to well-intentioned mistakes in which you substitute how your world matters to you for how theirs matters to them: you don’t believe in accepting blood transfusions so you refuse to give one to someone who believes otherwise.
This gets messy fast: You believe in the efficacy of blood transfusions, so you give one to someone who for religious reasons has stipulated that she does not want one. You are not treating her as an autonomous agent. Are you wrong? Once she’s under anesthesia should you let her die because she does not want a transfusion? I have my own inclination, but I have no confidence in it: Even the principle of always treating people as autonomous is hard to apply.
It’s easy to multiply examples, and very easy to find cases where I condemn entire cultures for how their world matters to them. For example, I’m really pretty sure that girls ought to be educated and women ought not to be subservient to men. I’d argue for that. I’d vote for that. I’d fight for that. But not because of morality. “Morality” just doesn’t seem like a helpful concept for deciding what one ought to do.
It can be useful as a name for the topic of what that “ought” means. But those discussions can obscure the particularities of each life that need to be as clear as possible when we talk about what we ought to do.
None of this is new or original with me. Maybe I’m just an old-fashioned Existentialist — more Kierkegaardian than Sartrean — but I feel like I could carry on the rest of my moral life without ever thinking about morality.
(No, I am not sure of any of the above.)
 That the world matters to us is certainly Heidegger. That it matters differently to us is more ambiguous. It’s captured in his notion of the existentiell, but his attempt at what seems to be a universal description of Dasein suggests that there may be some fundamental ways in which it matters in the same ways to us all. But it’s been a long time since I read Being and Time. Plus, he was a Nazi, so maybe he’s not the best person to consult about the nature of morality.
Tagged with: morality
Date: February 8th, 2016 dw
Suppose a laptop were found at the apartment of one of the perpetrators of last year’s Paris attacks. It’s searched by the authorities pursuant to a warrant, and they find a file on the laptop that’s a set of instructions for carrying out the attacks.
Thus begins Jonathan Zittrain‘s consideration of an all-too-plausible hypothetical. Should Google respond to a request to search everyone’s gmail inboxes to find everyone to whom the to-do list was sent? As JZ says, you can’t get a warrant to search an entire city, much less hundreds of millions of inboxes.
But, while this is a search that sweeps a good portion of the globe, it doesn’t “listen in” on any mail except for that which contains a precise string of words in a precise order. What happens next would depend upon the discretion of the investigators.
JZ points out that Google already does something akin to this when it searches for inboxes that contain known child pornography images.
JZ’s treatment is even-handed and clear. (He’s a renowned law professor. He knows how to do these things.) He discusses the reasons pro and con. He comes to his own personal conclusion. It’s a model of clarity of exposition and reasoning.
I like this article a lot on its own, but I find it especially fascinating because of its implications for the confused feeling of violation many of us have when it’s a computer doing the looking. If a computer scans your emails looking for a terrorist to-do list, has it violated your sense of privacy? If a robot looks at you naked, should you be embarrassed? Our sense of violation is separable from our legal and moral right to privacy question, but the two meanings often get mixed up in such discussions. Not in JZ’s, but often enough.
Tagged with: 2b2k
Date: January 13th, 2016 dw
We’re pretty convinced that the future lies ahead of us. But according to Bernard Knox, the ancient Greeks were not. In Backing into the Future he writes:
The Greek word opiso, which literally means ‘behind’ or ‘back,’ refers not to the past but to the future. The early Greek imagination envisaged the past and the present as in front of us–we can see them. The future, invisible, is behind us. Only a few very wise men can see what is behind them. (p. 11)
G.W. Whitrow in Time in History quotes George Steiner in After Babel to make the same point about the ancient Hebrews:
…the future is preponderantly thought to lie before us, while in Hebrew future events are always expressed as coming after us. (p. 14)
Whitrow doesn’t note that Steiner’s quote (which Steiner puts in quotes) comes from Thorlief Borman’s Hebrew Thought Compared with Greek. Borman writes:
…we Indo-Germanic peoples think of time as a line on which we ourselves stand at a point called now; then we have the future lying before us, and the past stretches out behind us. The [ancient] Israelites use the same expressions ‘before’ and ‘after’ but with opposite meanings. qedham means ‘what is before’ (Ps. 139.5), therefore ‘remote antiquity’, past. ‘ahar means ‘back’, ‘behind’, and of the time ‘after’; aharith means ‘hindermost side’, and then ‘end of an age’, future… (p. 149)
This is bewildering, and not just because Borman’s writing is hard to parse.
He goes on to note that we modern Westerners also sometimes switch the direction of future and past. In particular, when we “appreciate time as the transcendental design of history,” we
think of ourselves as living men who are on a journey from the cradle to the grave and who stand in living association with humanity which is also journeying ceaselessly forward. Then the generation of the past are our progenitors, at least our forebears, who have existed before us because they have gone on before us, and we follow after them. In that case we call the past foretime. According to this mode of thinking, the future generation are our descendants, at least our successors, who therefore come after us. (p. 149. Emphasis in the original.)
Yes, I find this incredibly difficult to wrap my brain around. I think the trick is the ambiguity of “before us.” The future lies before us, but our forebears were also before us.
Borman tries to encapsulate our contradictory ways of thinking about the future as follows: “the future lies before us but comes after us.” The problem in understanding this is that we hear “before us” as “ahead of us.” The word “before” means “ahead” when it comes to space.
Borman’s explanation of the ancient Hebrew way of thinking is related to Knox’s explanation of the Greek idiom:
From the psychological viewpoint it is absurd to say that we have the future before us and the past behind us, as though the future were visible to us and the past occluded. Quite the reverse is true. What our forebears have accomplished lies before us as their completed works; the house we see, the meadows and fields, the cultural and political system are congealed expressions of the deeds of our fathers. The same is true of everything they have done, lived, or suffered; it lies before us as completed facts… The present and the future are, on the contrary, still in the process of coming and becoming. (p. 150)
The nature of becoming is different for the Greeks and Hebrews, so the darkness of the future has different meanings. But both result in the future lying behind us.
Tagged with: future
Date: January 2nd, 2016 dw
Socrates: The Extra Parmesanides
The unexamined pizza is probably still worth eating.
St. Augustine: Deep Dish Confessions
The mind commands the body and is instantly obeyed.
The mind commands itself and meets resistance.
The body commands pizza and it arrives within thirty minutes or it’s free. [“…ut servirem domino deo meo”]
Nietzsche: Thus Spake ‘Za-thruster, the Pizza Delivery Guy
The pizza that does not kill me makes me stronger.
If you gaze into a pizza, the pizza stares back at you. If you’re tripping balls.
Martin Heidegger: Being and Slices
“Dasein’s Being is always Being-toward-Pizza. Pizza stands before us as an ex-static project that discloses that which is Dasein’s ownmost, for no one can eat your pizza for you.”
Bonus for Librarians: Ranganathan’s Five Laws of Pizza Science
Pizzas are for use
For every eater, his pizza
For every pizza, its eater
Our warming oven saves time for the eater
Our pizzas are totally organic
Isaac Asimov: Three Laws of Pizza
Suggested by Andromeda Yelton (@ThatAndromeda). Thanks!
A pizza may not injure a human being or, through inaction, allow a human being to come to harm.
A pizza must obey the orders given it by human beings except where such orders would conflict with the First Law.
A pizza must do ABSOLUTELY NOTHING to protect its own existence as long as such lack of protection does not conflict with the First or Second Laws.
Tagged with: pizza
Date: December 21st, 2015 dw
I’m on a Heidegger mailing list where I get to lurk as serious scholars probe his writings and thoughts, and, not infrequently these days, his politics.
Recently, a member of the list I highly respect suggested that “Heidegger’s phenomenology of ‘Sein-zum-Tode’ [Being-toward-death] amounts to living each day of our lives with a sense of our finitude, our mortality, that unifies and heightens the meaningfulness of each and every moment.” He equates this to Michel de Montaigne saying that “it is my custom to have death not only in my imagination, but continually in my mouth.” This is great wisdom, said the list member.
I don’t want to argue against those who find wisdom in living “with the taste of death” in their mouths. But I also wouldn’t argue for it.
My understanding, such as it is, of Heidegger’s idea of Being-toward-death is that our temporal finitude is constantly present as a horizon: we look before we cross the street because we know we can die — “know” not as an explicit thought but as the landscape within which our experience occurs. We make long-term plans within a horizon of possibility that we number in decades and not centuries.
But that’s not what I take Montaigne to mean. And if that’s what Heidegger’s concept of authenticity entails (as I think it might), then that’s just another problem I have with his idea of authenticity.
Why is keeping explicit the awareness of my impending death preferable, wise, or phenomenologically true-er? Because only I can die my death, as Heidegger says? I’m also the only one who can eat my lunch or take my shower. [Frivolity aside: these are both instances of “Only I have my body.”] Because it makes our experience more precious? It doesn’t for me. For example:
We have a four-month-old grandchild, our first. (Yes, yes, thank you for your good wishes :) When I am caring for him–playing with him–my death is always present, but as a horizon. I’m aware that I’m 65 years older than he is, that I am in my waning years and he is just beginning. That is part of the deep joy of a grandchild, and it is definitional: if I thought I were immortal, the experience would be very different; if I didn’t have the concept of one life beginning and another ending, my experience of children would be incomprehensible. So, phenomenologically I think Heidegger is right about our death (finitude) always shaping our experience as an implicit horizon. Our stretch of time only extends so far before it snaps.
But, beyond that implicit horizon, do I need to keep a taste of death in my mouth to make the experience of our grandson more precious? On the contrary, the explicit thought, “Wow, I’m really going to be dead someday” would distract me from my grandson, and keep me from letting the adorable little phenomenon show himself as he is.
That’s a charged example, of course. But here’s another: I’m eating a delicious piece of chocolate cake. I do so within the horizon of my finitude, but that horizon is probably quite implicit. Perhaps it’s a bit more explicit than that, but still horizonal: I’m only eating half the slice for health reasons. But then I have a vivid taste of death alongside the chocolate: “Crap! I’m going to be dead someday.”
Does the cake taste better? I guess maybe for some people. For a lot of us, though, the realization that death is surely a-comin’ would make the cake turn to ash. Who cares about cake when I’m going to be dead sometime, maybe in a minute or a day? We’ve been pulled out of the experience and out of the world by the vivid intrusion of what is undeniably a truth. Why do you think Roquentin can never enjoy a nice slice of cake?
We can complain that such morbidity is inauthentic, but as far as I can tell that’s a value judgment, not philosophy and certainly not phenomenology.
My intention is not to argue against Montaigne on this. If keeping the fact of death explicitly present helps some of us appreciate life more, who am I to say otherwise? Seriously. And if someone goes further and seeks out death-defying experiences because she feels most alive when she is most at risk, who am I to judge? That works for her. Good! (I feel bad for her parents, though.)
But valorizing keeping death explicitly present seems to me to be more personality than philosophy.
I understand that Heidegger’s putting death front and center was a radical and healthy move for philosophy. Western philosophy, after all, has spent so much of its energy pursuing deathless wisdom and eternal Reality as the only truths. But as a reader of Heidegger, I put much of what he writes in Being and Time about death into the same bucket as what he writes about destiny, das Man, authenticity, and German peasant romanticism: It’s (to put it mildly) phenomenologically non-disclosive for me — part of the price of reading an ontologist whose methodology, at least initially, was phenomenology.
Tagged with: death
Date: November 28th, 2015 dw
At Digital Trends I take another look at a question that is now gaining some currency: How should autonomous cars be programmed when all the choices are bad and someone has to die in order to maximize the number of lives that are saved?
The question gets knottier the more you look at it. In two regards especially:
First, it makes sense to look at this through a utilitarian lens, but when you do, you have to be open to the possibility that it’s morally better to kill a 64-year-old who’s at the end of his productive career (hey, don’t look at me that way!) than a young parent, or a promising scientist or musician. We consider age and health when doing triage for organ replacements. Should our cars do it for us when deciding who dies?
Second, the real question is who gets to decide this? The developers at Google who are programming the cars? And suppose the Google software disagrees with the prioritization of the Tesla self-driving cars? Who wins? Or, do we want to have a cross-manufacturer agreement about whose life to sacrifice if someone has to die in an accident? A global agreement about the value of lives?
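To make the disagreement concrete, here is a deliberately crude sketch (every policy, age, and number in it is invented for illustration) of two utilitarian policies evaluating the same bad choice and picking different victims:

```python
# Two hypothetical manufacturers' utilitarian policies for the same
# accident. Neither is endorsed here; the point is that "utilitarian"
# underdetermines the answer, so someone still has to pick the policy.

LIFE_EXPECTANCY = 80  # assumed, for the life-years policy

def deaths(ages):
    """Policy A: minimize the number of lives lost."""
    return len(ages)

def life_years_lost(ages):
    """Policy B: minimize expected life-years lost."""
    return sum(max(LIFE_EXPECTANCY - age, 0) for age in ages)

# One accident, two options; each option kills the listed pedestrians.
options = {"stay": [70, 75], "swerve": [30]}

choice_a = min(options, key=lambda o: deaths(options[o]))
choice_b = min(options, key=lambda o: life_years_lost(options[o]))

print(choice_a)  # "swerve": one death beats two
print(choice_b)  # "stay": 15 life-years lost beats 50
```

Both policies are defensibly utilitarian, yet cars running them would steer at different people in the same crash, which is exactly why leaving the weights to each manufacturer's developers is so uncomfortable.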
Yeah, sure. What could go wrong with that? /s
Tagged with: philosophy
Date: October 28th, 2015 dw
Donatella Della Ratta, at Copenhagen University and a Berkman fellow, has posted a remarkable essay and linked to another.
The link is to a Wired article by Andy Greenberg about the New Palmyra Project, an effort to reconstruct the ancient monuments ISIS is destroying, and a plea for action to free the project’s creator, Bassel Khartabil, from a Syrian prison.
The second is Donatella’s article in CyberOrient that considers efforts that, like the New Palmyra Project, reconstruct sites destroyed by war, but not with that project’s historical purpose. In the article she brings to light some of the profound and disturbing ways the Net is changing how meaning works.
Her focus is on what she calls “expanded places,” physical places that have been physically destroyed, but that “have been re-animated through multiple mediated versions circulating and re-circulating on the networks.” As she says in the article’s abstract:
Thriving on the techno-human infrastructure of the networks, and relying on the endless proliferation of images resulting from the loss of control of image-makers over their own production, expanded places are aggregators of new communities that add novel layers of signification to the empirical world, and create their own multiple realities and histories.
Her primary example is the Damascene Village, a theme park on the outskirts of Damascus where she conducted ethnographic research in 2010. The brief story of the role that theme park played in Syrian popular media, and the multiple layers of unreality it attracted, is mind-blowing: “a physical replica of the historic 1920s rebel stronghold conceived as a TV set for a reenactment drama of that very struggle; which, historically speaking, took place exactly in the location where the fictional copy had been rebuilt for the sake of media consumption.” To complete the media hall of mirrors, in the recent conflict each side shot “video accounts narrating the seizure of the theme park using themes, symbols and characters borrowed from the TV series.”
Eventually the Damascene Village was destroyed; yet, the self-shot videos, once uploaded onto YouTube, continued to fuel the spread of clashing narratives and contradictory understandings of national resistance, which turned a physical site hosting a staged representation of a conflict into a conflict zone itself, endlessly reproduced through social networking sites.
The complexity of this place as real, symbolic, organic, and manipulated is mirrored in the nature of the platform. She argues that the Internet’s “circulation, reflexivity, anonymity, and decentralized authorship” lead to a type of violence against meaning: “…the endless circulation of messages that are shared, manipulated, and repeated over and over again in a loop where any possible meaning is lost.” Citing Jodi Dean, Donatella says: “…the uncontrollable speed and spread of contributions over the networks help prevent the formation of any sort of signification,” generating not “a plurality of visions” but “…a feeling of ‘constituent anxiety.'” This process is, she says “inherent to the networks.”
A novel space has been created by the entanglement of warfare and technology, where lines are blurred between the physical, lived experiences of war and their media representations, which have gained a new existence by virtue of the endless circulation of the layering of times, spaces, and people enabled by the networks.
This new environment, defined around what I call “expanded places,” re-establishes the relationship between violence and visibility, and broadens the very idea of conflict. Here, mediated and symbolic languages are employed to perform and legitimize the violence perpetrated in physical spaces. At the same time, the large scale production and reproduction of this very violence through networked forms and formats serves to actualize and rationalize it, its viral circulation being endlessly nurtured and boosted by the techno-human structure of the networks.
But is the Damascene Village too good an example? It came onto the Net with so many layers of contested meta-meta-meaning that perhaps its online life is atypical. Donatella confronts this question, arguing that the Net not only continues the alienation of images of violence from their actuality and from ethical responses, as noted by Susan Sontag in the 1970s, but adds a participatory level to this: the images of violence are hyperlinked and recirculated by the viewers themselves. This borderless remixing and recirculation “have all contributed to the expansion of the place formerly known as the Damascene Village.”
But what to make of this expansion? Here again I worry that Donatella’s example is too good:
As shown by the story of the Damascene Village, the same symbolic and visual reference (Bab al hara) can be employed simultaneously by opposing factions (the Syrian army and the armed rebels) to produce contrasting narratives of resistance, and clashing ideas of nationhood. It can both serve to evoke a seemingly inclusive multiculturalism promoted under al Asad’s leadership; and, at the same time, to remind us that an entire nation is being besieged, not by occupying foreign forces but by the Syrian regime.
She takes this as a type of fictionality, as described by Jacques Rancière: a rearrangement of something real into new political and aesthetic formats without regard to the truth of that something, blurring “the logic of facts and the logic of fiction” in multiple layers of meaning. She invokes Baudrillard, saying that “The story of the Damascene Village proves that it does not really matter” whether the various factions’ fantasies correspond to historical truth. Rather:
what it is important to reflect upon is that this very fantasy has been used to generate and reproduce violence from opposite armed factions, both of which have employed mediated and networked languages to claim legitimacy over their own idea of homeland and national resistance.
But hasn’t that statement been true of every intra-cultural conflict? The truth of historians has never much mattered to factions trying to rouse support for their side. Donatella uses Rancière’s thought to find the difference between how this worked before and after the Net. I have not read him (I know, I know) but am not fully convinced by the ideas she cites. In the modern era, “technology is not understood as a mere technique of reproduction and transmission.” Yes, but that’s hardly new to the Internet. Not only has it been well understood at least since the 1960s, but one could argue that the Net is in important ways moving us back to a simpler relation between image and reality through the posting of cellphone videos of police attacks, the proliferation of video surveillance, and the new insistence that the police wear video cameras. Also: Russian dash cams.
She cites Rancière further to make the case that the anonymity of Net postings and the ability to record just about everything “has given rise to a new understanding of history as a continuous process of assigning meanings to material realities, of connecting signs and symbols in unprecedented ways. In this sense we can define history as a ‘new form of fiction’…”
I have a complex reaction to this. (This is one of the reasons I so like Donatella’s writing.)
1. Yes, this is exactly what’s happening.
2. It is what happens when we all have access to the materials of history, and the decisions about what counts as history are no longer made by handfuls of people who control the media: highly qualified historians, the editorial staffs of (sometimes scurrilous) newspapers, and self-interested political leaders.
3. If we substitute “current events” for “history,” the situation seems somewhat less novel. The word “history” carries with it a weight that “current events” does not. (a) We do not yet know what history (as practiced by that discipline) will say about current events. It may become far more settled than the current fracturing of interpretations suggests; that depends to a large degree on how education and authority evolve over the years. (b) History of course always is fractured along the lines that divide people; one side in the United States Civil War still sometimes insists slavery was not the issue the war was fought over.
I am not disagreeing with the dangerousness of the fragmenting of interpretations engendered by the Net. I find illuminating and helpful Donatella’s brilliant exposition of the way in which these are not shards so much as multiply reflecting mirrors in which meanings cannot be separated from the act of meaning, and that act of meaning is a performance that gets reflected, reappropriated, and reenacted without end and without the ability to see its source either in the actual world or in its initial expression: “the rise of the anonymous subject and decentralized authorship nurtured by virtue of the circularity and reflexivity of the networks.” Rancière says this creates “‘uncertain communities’” politically questioning “‘the distribution of roles, territories, and languages’.” That’s an important point, although these images also sometimes create powerful political communities, as was the case with images from Ferguson.
Donatella is admirably focused on what this means when the stakes are high:
…in expanded places that have been destroyed by violence and warfare, then have been re-born through a networked after-life, this process goes much further. Here, challenging the distribution of the sensible [Rancière’s term] is not only a matter of contentious politics, but of generating and regenerating violence and destruction through the endless circulation of formats of violence boosted by the inner techno-human structure of the networks.
Her presentation of the ways in which the Net leads not just to a fracturing of meaning but to an impossibly self-reflective entanglement of meaning is brilliant. Her drawing our attention to the direness of this when it comes to the most dire of human situations is crucial. Her concept of “expanded places” seems to me to be worth holding on to and exploring. In fact, it’s powerful enough that I don’t think it should be confined to places that have been destroyed, much less destroyed by war. It applies more broadly than that. Her discussion of places destroyed by violence seems to me to point to a case where the stakes are higher, but where the game is essentially the same.
I recognize I have not resolved the question posed in my title. You can thank Donatella for that :)
Tagged with: 2b2k
Date: October 23rd, 2015 dw