Yoram Yaacovi from Microsoft talks about Hololens and shows an awesome video. Maybe this one.
Imagine, he says, being at home but seeing the people next to you in the classroom. Also: collaborative prototyping. Interactive whiteboards. Expanded user interfaces. Design for repairability.
He shows a supercool video of an educational use.
He doesn’t know when it will be available.
Date: June 3rd, 2015 dw
Todd Revolt is with Meta. It has 70 people. It’s shipping a Meta 1 developer kit. You use common hand gestures to manipulate virtual things.
He shows a video of people wearing Oculus Rifts in the real world and failing to navigate. Instead, Meta wants you to be together with people in the real world.
With augmented reality, he says, people know how to work it without training. Examples:
Fourth largest cause of death in the US: medical error. But with AR we can do more useful simulations. You can see the vital signs and the next steps in the procedures.
Princess Leia standing on your clipboard.
Tagged with: ar
Date: June 3rd, 2015 dw
Miriam Reiner is giving a talk on virtual reality. Her lab collects info about brain activity under VR to create a model of optimal learning.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.
Her lab lets them provide sensory experiences virtually: you can feel water, etc. New haptic interfaces. There’s a kickstarter project for an Oculus Rift that lets you smell and feel a breeze and temperature.
They also do augmented reality, overlaying the virtual onto the real.
A robot she worked with last year suffers from the uncanny valley. Face to face is important. “Only 10% of information is conveyed through words.”
In an experiment, they re-created a student virtually and had her teach another student how to use a blood pressure machine.
VR can help us understand what learning is. And enhance it.
Example: A human wears electrodes. As she plays a VR game, her brain activity is recorded. They measured response times to light, auditory, and haptic signals; auditory was fastest. But if you put all three together, the response time goes down dramatically. What does this mean for learning? We should find out. It looks like multi-modal sensation increases learning.
If you learn something in the morning, and they test you over the next few days, your memory of it will be best after sleep. Sleep consolidates memory. If you can use neuro-feedback perhaps we can teach people to do that consolidation immediately after learning. Her research suggests this is possible.
“The advantage of VR is not just in creating worlds that do not exist. For the first time we have a method to organize and enhance learning.”
Tagged with: ed
Date: June 3rd, 2015 dw
Avi Warshavski begins with a stock image of young people smiling at a computer screen. He points out that they’re all smiling, as if in an ad. There’s racial and gender balance. And they’re all looking at a screen. Having one object on which we all focus is an old idea. He shows an old Roman frieze. Everyone is looking at a scroll.
Now we are in physical spaces, he says, not just brains that sit and learn: Maker movement, Internet of Things, Oculus Rift (which isn’t physical space, of course)…
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. And I’m getting this through a translator. You are warned, people.
He cites HD Thoreau saying that it’s great that everyone in the country can now communicate “but Maine and Texas, it may be, have nothing important to communicate.” We now have forms of communication that go beyond text on a screen. E.g., drones began with DIY makers who created open source software. Mary Meeker last week showed the incredibly steep growth in the drone market.
During the two-day hackathon, a producer created a video of it, using one of the drones. (He shows a video of the hackathon — impressive job, especially given that it was done overnight.)
Date: June 3rd, 2015 dw
I am at an event in Tel Aviv called “Shaping the Future,” put on by the Center for Educational Technology; I’m on the advisory board. (I missed the ed tech hackathon that was held over the past two days because of a commitment to another event. I was very sorry to miss it. From all reports it was a great success. No surprise. I’m a big fan of Avi Warshavski, the head of MindCET, CET’s ed tech incubator.)
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. And I’m hearing this through a translator. You are warned, people.
The minister of ed, Naftali Bennett, is speaking. He’s a tech entrepreneur (and also a right winger).
He begins by saying his son helped him fix his home wifi. A hundred years ago, that wouldn’t have been possible because there was a monopoly on info, and it didn’t move from child to parent. We are in a time of radical change of reality. The changes in tech go beyond the changes in tech. E.g., the invention of the car also created suburbs.
Tech is not a spice for ed, a nice addition. There’s a transformative possibility. Israel went from 30K grads of a math exam to 9K. This is a threat to Israel’s ability to develop Iron Domes and win Nobel prizes. We’re analyzing the issue. There are many hundreds of schools that don’t offer math ed sufficient for passing this test at the highest level. Few make it through MOOCs. It’s not going to work on its own.
The answer? I don’t know. Trial and error. We’ll fail and succeed.
Take a school with no qualified math teacher. What if we have a MOOC, online courses? The class will teach itself. The teacher will be a coach, facilitator, a motivator. But you need a self-assurance on the part of the teacher. The teacher does not know the material. It’s a bungee jump for the teacher. The chain will be measured not by the weakest link but the strongest link. Success will be measured by the average. The 2-4% will get the material through the online materials. Then, just like butterflies, they will teach the other students. The teacher just has to connect the students. You don’t come to the teacher to ask what is the solution. The teacher says, “I don’t know. Let’s work on this together.” In Judaism, we call this “havruta”: sitting together in a group studying Talmud. We can join online courses with the Jewish idea of studying in a group. Connect the two and who knows what the outcome will be?
We now need teachers who are willing to dare. In the next year we’ll have all sorts of experimentation. No one knows if we’ll succeed. Wherever it succeeds, we’ll carry on with it.
Thanks to Jay Hurvitz for correcting the Hebrew word. He adds: “Some of us prefer to write it – “khavrutah” – from the root for both friendship and joining.”
, too big to know
Date: June 3rd, 2015 dw
Alex Hodgson of ReadCube is leading a panel called “Accessing Content: New Thinking and New Business Models for Accessing Research Literature” at the Shaking It Up conference.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.
Robert McGrath is from ReadCube, a platform for managing references. You import your pdfs, read them with their enhanced reader, and can annotate them and discover new content. You can click on references in the PDF and go directly to the sources. If you hit a pay wall, they provide a set of options, including a temporary “checkout” of the article for $6. Academic libraries can set up a fund to pay for such access.
Eric Hellman talks about Unglue.it. Everyone in the book supply chain wants a percentage. But free e-books break the system because there are no percentages to take. “Even libraries hate free ebooks.” So, how do you give access to Oral Literature in Africa in Africa? Unglue.it ran a campaign, raised money, and liberated it. How do you get free textbooks into circulation? Teachers don’t know what’s out there. Unglue.it is creating MARC records for these free books to make it easy for libraries to include them. The novel Zero Sum Game is a great book that the author put out under a Creative Commons license, but how do you find out that it’s available? Likewise for Barbie: A Computer Engineer, which is a legal derivative of a much worse book. Unglue.it has over 1,000 Creative Commons licensed books in its collection. One of Unglue.it’s projects: an author pledges to make the book available for free after a revenue target has been met. [Great! A bit like the Library License project from the Harvard Library Innovation Lab.] They’re now doing Thanks for Ungluing, which aggregates free ebooks and lets you download them for free or pay the author for them. [Plug: John Sundman’s Biodigital is available there. You definitely should pay him for it. It’s worth it.]
Marge Avery, ex of MIT Press and now at MIT Library, says the traditional barriers to access are price, time, and format. There are projects pushing on each of these. But she mainly wants to talk about format. “What does content want to be?” Academic authors often have research that won’t fit in the book. Univ presses are experimenting with shorter formats (MIT Press Bits), new content (Stanford Briefs), and publishing developing, unfinished content that will become a book (U of Minnesota). Cambridge Univ Press published The History Manifesto, created start to finish in four months and available as Open Access as well as for a reasonable price; they’ve sold as many copies as free copies have been downloaded, which is great.
William Gunn of Mendeley talks about next-gen search. “Search doesn’t work.” Paul Kedrosky was looking for a dishwasher and all he found was spam. (Dishwashers, and how Google Eats Its Own Tail). Likewise, Jeff Atwood of StackExchange: “Trouble in the House of Google.” And we have the same problems in scholarly work. E.g., Google Scholar includes this as a scholarly work. Instead, we should be favoring push over pull, as at Mendeley. Use behavior analysis, etc. “There’s a lot of room for improvement” in search. He shows a Mendeley search. It auto-suggests keyword terms and then lets you facet.
Jenn Farthing talks about JSTOR’s “Register and Read” program. JSTOR has 150M content accesses per year, 9,000 institutions, 2,000 archival journals, 27,000 books. Register and Read: Free limited access for everyone. Piloted with 76 journals. Up to 3 free reads over a two week period. Now there are about 1,600 journals, and 2M users who have checked out 3.5M articles. (The journals are opted in to the program by their publishers.)
Q: What have you learned in the course of these projects?
ReadCube: UI counts. Tracking onsite behavior is therefore important. Iterate and track.
Marge: It’d be good to have more metrics outside of sales. The circ of the article is what’s really of importance to the scholar.
Mendeley: Even more attention to the social relationships among the contributors and readers.
JSTOR: You can’t search for only content that’s available to you through Register and Read. We’re adding that.
Unglue.it started out as a crowdfunding platform for free books. We didn’t realize how broken the supply chain is. Putting a book on a Web site isn’t enough. If we were doing it again, we’d begin with what we’re doing now, Thanks for Ungluing, gathering all the free books we can find.
Q: How to make it easier for beginners?
Unglue.it: The publishing process is designed to prevent people from doing stuff with ebooks. That’s a big barrier to the adoption of ebooks.
ReadCube: Not every reader needs a reference manager, etc.
Q: Even beginning students need articles to interoperate.
Q: When ReadCube negotiates prices with publishers, how does it go?
ReadCube: In our pilots, we haven’t seen any decline in the PDF sales. Also, the cost per download in a site license is a different sort of thing than a $6/day cost. A site license remains the most cost-effective way of acquiring access, so what we’re doing doesn’t compete with those licenses.
Q: The problem with the pay model is that you can’t appraise the value of the article until you’ve paid. Many pay models don’t recognize that barrier.
ReadCube: All the publishers have agreed to first-page previews, often to seeing the diagrams. We also show a blurred out version of the pages that gives you a sense of the structure of the article. It remains a risk, of course.
Q: What’s your advice for large legacy publishers?
ReadCube: There’s a lot of room to explore different ways of brokering access — different potential payers, doing quick pilots, etc.
Mendeley: Make sure your revenue model is in line with your mission, as Geoff said in the opening session.
Marge: Distinguish the content from the container. People will pay for the container for convenience. People will pay for a book in Kindle format, while the content can be left open.
Mendeley: Reading a PDF is of human value, but computing across multiple articles is of emerging value. So we should be getting past the single reader business model.
JSTOR: Single article sales have not gone down because of Register and Read. They’re different users.
Unglue.it: Traditional publishers should cut their cost basis. They have fancy offices in expensive locations. They need to start thinking about how they can cut the cost of what they do.
Last night I got to give a talk at a public meeting of the Gloucester Education Foundation and the Gloucester Public School District. We talked about learning commons and libraries. It was awesome to see the way that community comports itself towards its teachers, students and librarians, and how engaged they are. Truly exceptional.
Afterwards there were comments by Richard Safier (superintendent), Deborah Kelsey (director of the Sawyer Free Library), and Samantha Whitney (librarian and teacher at the high school), and then a brief workshop at the attendees tables. The attendees included about a dozen of Samantha’s students; you can see in the liveliness of her students and the great questions they asked that Samantha is an inspiring teacher.
I came out of these conversations thinking that if my charter were to establish a “learning commons” in a school library, I’d ask what sort of learning I want to be modeled in that space. I think I’d be looking for four characteristics:
1. Students need to learn the basics (and beyond!) of online literacy: not just how to use the tools, but, more important, how to think critically in the networked age. Many schools are recognizing that, thankfully. But it’s something that probably will be done socially as often as not: “Can I trust a site?” is a question probably best asked of a network.
2. Old-school critical thinking was often thought of as learning how to sift claims so that only that which is worth believing makes it through. Those skills are of course still valuable, but on a network we are almost always left with contradictory piles of sifted beliefs. Sometimes we need to dispute those other beliefs because they are simply wrong. But on a network we also need to learn to live with difference — and to appreciate difference — more than ever. So, I would take learning to love difference to be an essential skill.
3. It kills me that most people have never clicked on a Wikipedia “Talk” page to see the discussion that resulted in the article they’re reading. If we’re going to get through this thing — life together on this planet — we’re really going to have to learn to be more meta-aware about what we read and encounter online. The old trick of authority was to erase any signs of what produced the authoritative declaration. We can’t afford that any more. We need always to be aware that what we come across resulted from humans and human processes.
4. We can’t rely on individual brains. We need brains that are networked with other brains. Those networks can be smarter than any of their individual members, but only if the participants learn how to let the group make them all smarter instead of stupider.
I am not sure how these skills can be taught — excellent educators and the communities that support them, like those I met last night, are in a better position to figure it out — but they are four skills that seem highly congruent with a networked learning commons.
A year ago, Harold Feld posted one of the most powerful ways of framing our excessive zeal for copyright that I have ever read. I was welling up even before he brought Aaron Swartz into the context.
Harold’s post is within a standard Jewish genre: the d’var Torah, an explanation of a point in the portion of the Torah being read that week. As is expected of the genre, he draws upon a long, self-reflective history of interpretation. I urge you to read it because of the light it sheds on our culture of copyright, but it’s also worth noticing the form of the discussion.
The content: In the Jewish tradition, Sodom’s sin wasn’t sexual but rather an excessive possessiveness leading to a fanatical unwillingness to share. Harold cites from a collection of traditional commentary, The Ethics of Our Fathers:
“There are four types of moral character. One who says: ‘what is mine is mine and what is yours is yours.’ This is an average person. Some say it is the Way of Sodom. The one who says: ‘what is mine is yours and what is yours is mine,’ is ignorant of the world. ‘What is mine is yours and what is yours is yours’ is the righteous. ‘What is mine is mine and what is yours is mine’ is the wicked.”
In a PowerPoint, it’d be a 2×2 chart. Harold’s point will be that the ‘what is mine is mine and what is yours is yours.’ of the average person becomes wicked when enforced without compassion or flexibility. Harold evokes the traditional Jewish examples of Sodom’s wickedness and compares them to what’s become our dominant “average” assumptions about how copyright ought to work.
I am purposefully not explaining any further. Read Harold’s piece.
The form: I find the space of explanation within which this d’var Torah — and most others that I’ve heard — operates to be fascinating. At the heart of Harold’s essay is a text accepted by believers as having been given by God, yet the explanation is accomplished by reference to a history of human interpretations that disagree with one another, with guidance by a set of values (e.g., sharing is good) that persevere in a community thanks to that community’s insistent adherence to its tradition. The result is that an agnostic atheist like me (I’m only pretty sure there is no God) can find truth and wisdom in the interpretation of a text I take as being ungrounded in a divine act.
But forget all that. Read Harold’s post, bubbelah.
On Friday, I had the tremendous honor of being awarded a Doctor of Letters degree from Simmons College, and giving the Commencement address at the Simmons graduate students’ ceremony.
Simmons is an inspiring place, and not only for its deep commitment to educating women. Being honored this way — especially along with Ruth Ellen Fitch, Madeleine M. Joullié, and Billie Jean King — made me very, very happy.
Thank you so much to Simmons College, President Drinan, and the Board of Trustees for this honor, which means the world to me. I’m just awed by it. Also, Professor Candy Schwartz and Dean Eileen Abels, a special thank you. And this honor is extra-special meaningful because my father-in-law, Marvin Geller, is here today, and his sister, Jeannie Geller Mason, was, as she called herself, a Simmons girl, class of 1940. Afterwards, Marvin will be happy to sing you the old “We are the girls of Simmons C” college song if you ask.
So, first, to the parents: I have been in your seat, and I know how proud – and maybe relieved – you are. So, congratulations to you. And to the students, it’s such an honor to be here with you to celebrate your being graduated from Simmons College, a school that takes seriously the privilege of helping its students not only to become educated experts, but to lead the next cohort in their disciplines and professions.
Now, as I say this, I know that some of you may be shaking your inner heads, because a commencement speaker is telling you about how bright your futures are, but maybe you have a little uncertainty about what will happen in your professions and with your career. That’s not only natural, it’s reasonable. But, some of you – I don’t know how many — may be feeling beyond that an uncertainty about your own abilities. You’re being officially certified with an advanced degree in your field, but you may not quite feel the sense of mastery you expected.
In other words, you feel the way I do now. And the way I did in 1979 when I got my doctorate in philosophy. I knew well enough the work of the guy I wrote my dissertation on, but I looked out at the field and knew just how little I knew about so much of it. And I looked at other graduates, and especially at the scholars and experts who had been teaching us, and I thought to myself, “They know so much more than I do.” I could fake it pretty well, but actually not all that well.
So, I want to reassure those of you who feel the way that I did and do, I want to reassure you that that feeling of not really knowing what you should, that feeling may stay with you forever. In fact, I hope it does — for your sake, for your profession, and for all of us.
But before explaining, I need to let you in on the secret: You do know enough. It’s like Gloria Steinem’s response, when she was forty, to people saying that she didn’t look forty. Steinem replied, “This is what forty looks like.” And this is what being a certified expert in your field feels like. Simmons knows what deserving your degree means, and its standards are quite high. So, congratulations. You truly earned this and deserve it.
But here’s why it’s good to get comfortable with always having a little lack of confidence. First, if you admit no self-doubt, you lose your impulse to learn. Second, you become a smug know-it-all and no one likes you. Third, what’s even worse, is that you become a soldier in the army of ignorance. Your body language tells everyone else that their questions are a sign of weakness, which shuts down what should have been a space for learning.
The one skill I’ve truly mastered is asking stupid questions. And I don’t mean questions that I pretend are stupid but then, like Socrates, they show the folly of all those around me. No, they’re just dumb questions. Things I really should know by now. And quite often it turns out that I’m not the only one in the room with those questions. I’ve learned far more by being in over my head than by knowing what I’m talking about. And, as I’ll get to, we happen to be in the greatest time for being in over our heads in all of human history.
Let me give you just one quick example. In 1986 I became a marketing writer at a local tech startup called Interleaf that made text-and-graphic word processors. In 1986 that was a big deal, and what Interleaf was doing was waaaay over my head. So, I hung out with the engineers, and I asked the dumbest questions. What’s a font family? How can the spellchecker look up words as fast as you type them? When you fill a shape with say, purple, how does the purple know where to stop? Really basic. But because it was clear that I was a marketing guy who was genuinely interested in what the engineers were doing, they gave me a lot of time and an amazing education. Those were eight happy years being in over my head.
I’m still way over my head in the world of libraries, which are incredibly deep institutions. Compared to “normal” information technology, the data libraries deal with is amazingly profound and human. And librarians have been very generous in helping me learn just a small portion of what they know. Again, this is in part because they know my dumb questions are spurred by a genuine desire to understand what they’re doing, down to the details.
In fact, going down to the details is one very good way to make sure that you are continually over your head. We will never run out of details. The world’s just like that: there’s no natural end to how closely you can look at a thing. And one thing I’ve learned is that everything is interesting if looked at at the appropriate level of detail.
Now, it used to be that you’d have to seek out places to plunge in over your head. But now, in the age of the Internets, all we have to do is stand still and the flood waters rise over our heads. We usually call this “information overload,” and we’re told to fear it. But I think that’s based on an old idea we need to get rid of.
Here’s what I mean. So, you know Flickr, the photo sharing site? If you go there and search for photos tagged “vista,” you’ll get two million photos, more vistas than you could look at if you made it your full time job.
If you go to Google and search for apple pie recipes, you’ll get over 1.3 million of them. Want to try them all out to find the best one? Not gonna happen.
If you go to Google Images and search for “cute cats,” you’ll get over seven million photos of the most adorable kittens ever, as well as some ads and porn, of course, because Internet.
So that’s two million vista photos. 1.3 million apple pie recipes. 7.6 million cute cat photos. We’re constantly warned about information overload, yet we never hear a single word about the dangers of Vista Overload, Apple Pie Overload, or Cute Kitten Overload. How have the media missed these overloads! It’s a scandal!
I think there’s actually a pretty clear reason why we pay no attention to these overloads. We only feel overloaded by that which we feel responsible for mastering. There’s no expectation that we’ll master vista photos, apple pie recipes, or photos of cute cats, so we feel no overload. But with information it’s different because we used to have so much less of it that back then mastery seemed possible. For example, in the old days if you watched the daily half hour broadcast news or spent twenty minutes with a newspaper, you had done your civic duty: you had kept up with The News. Now we can see before our eyes what an illusion that sense of mastery was. There’s too much happening on our diverse and too-interesting planet to master it, and we can see it all happening within our browsers.
The concept of Information Overload comes from that prior age, before we accepted what the Internet makes so clear: There is too, too much to know. As we accept that, the idea of mastery will lose its grip. We’ll stop feeling overloaded even though we’re confronted with exactly the same amount of information.
Now, I want to be careful because we’re here to congratulate you on having mastered your discipline. And grad school is a place where mastery still applies: in order to have a discipline — one that can talk with itself — institutions have to agree on a master-able set of ideas, knowledge, and skills that are required for your field. And that makes complete sense.
But, especially as the Internet becomes our dominant medium of ideas, knowledge, culture, and entertainment, we are all learning just how much there is that we don’t know and will never know.
And it’s not just the quantity of information that makes true mastery impossible in the Age of the Internet. It’s also what it’s doing to the domains we want to master — the topics and disciplines. In the Encyclopedia Britannica — remember that? — an article on a topic extends from the first word to the last, maybe with a few suggested “See also’s” at the end. The article’s job is to cover the topic in that one stretch of text. Wikipedia has a different idea. At Wikipedia, the articles are often relatively short, but they typically have dozens or even hundreds of links. So rather than trying to get everything about, say, Shakespeare into a couple of thousand words, Wikipedia lets you click on links to other articles about what it mentions — to Stratford-on-Avon, or iambic pentameter, or the history of women in the theater. Shakespeare at Wikipedia, in other words, is a web of linked articles. Shakespeare on the Web is a web. And it seems to me that that webby structure actually is a more accurate reflection of the shape of knowledge: it’s an endless series of connected ideas and facts, limited by interest, not an article that starts here and ends there. In fact, I’d say that Shakespeare himself was a web, and so am I, and so are you.
But if topics and disciplines are webs, then they don’t have natural and clear edges. Where does the Shakespeare web end? Who decides if the article about, say, women in the theater is part of the Shakespeare web or not? These webs don’t have clearcut edges. But that means that we also can’t be nearly as clear about what it means to master Shakespeare. There’s always more. The very shape of the Web means we’re always in over our heads.
And just one more thing about these messy webs. They’re full of disagreement, contradiction, argument, differences in perspective. Just a few minutes on the Web reveals a fundamental truth: We don’t agree about anything. And we never will. My proof of that broad statement is all of human history. How do you master a field, even if you could define its edges, when the field doesn’t agree with itself?
So, the concept of mastery is tough in this Internet Age. But that’s just a more accurate reflection of the way it always was even if we couldn’t see it because we just didn’t have enough room to include every voice and every idea and every contradiction, and we didn’t have a way to link them so that you can go from one to another with the smallest possible motion of your hand: the shallow click of a mouse button.
The Internet has therefore revealed the truth of what the less confident among us already suspected: We’re all in over our heads. Forever. This isn’t a temporary swim in the deep end of the pool. Being in over our heads is the human condition.
The other side of this is that the world is far bigger, more complex, and more unfathomably interesting than our little brains can manage. If we can accept that, then we can happily be in over our heads forever…always a little worried that we really are supposed to know more than we do, but also, I hope, always willing to say that out loud. It’s the condition for learning from one another…
…And if the Internet has shown us how overwhelmed we are, it’s also teaching us how much we can learn from one another. In public. Acknowledging that we’re just humans, in a sea of endless possibility, within which we can flourish only in our shared and collaborative ignorance.
So, I know you’re prepared because I know the quality of the Simmons faculty, the vision of its leadership, and the dedication of its staff. I know the excellence of the education you’ve participated in. You’re ready to lead in your field. May that field always be about this high over your head — the depth at which learning occurs, curiosity is never satisfied, and we rely on one another’s knowledge, insight, and love.
, too big to know
Tagged with: 2b2k
Date: May 11th, 2014 dw
The New Republic continues to favor articles debunking claims that the Internet is bringing about profound changes. This time it’s an article on the digital humanities, titled “The Pseudo-Revolution,” by Adam Kirsch, a senior editor there. [This seems to be the article. Tip of the hat to Jose Afonso Furtado.]
I am not an expert in the digital humanities, but it’s clear to the people in the field who I know that the meaning of the term is not yet settled. Indeed, the nature and extent of the discipline is itself a main object of study of those in the discipline. This means the field tends to attract those who think that the rise of the digital is significant enough to warrant differentiating the digital humanities from the pre-digital humanities. The revolutionary tone that bothers Adam so much is a natural if not inevitable consequence of the sociology of how disciplines are established. That of course doesn’t mean he’s wrong to critique it.
But Adam is exercised not just by revolutionary tone but by what he perceives as an attempt to establish claims through the vehemence of one’s assertions. That is indeed something to watch out for. But I think it also betrays a tin-eared reading by Adam. Those assertions are being made in a context that, I think, the authors properly assume readers understand: the digital humanities is not a done deal. The case has to be made for it as a discipline. At this stage, that means making provocative claims, proposing radical reinterpretations, and challenging traditional values. While I agree that this can lead to thoughtless triumphalist assumptions by the digital humanists, it also needs to be understood within its context. Adam calls it “ideological,” and I can see why. But making bold and even over-bold claims is how discourses at this stage proceed. You challenge the incumbents, and then you challenge your cohort to see how far you can go. That’s how the territory is explored. This discourse absolutely needs the incumbents to push back. In fact, the discourse is shaped by the assumption that the environment is adversarial and the beatings will arrive in short order. In this case, though, I think Adam has cherry-picked the most extreme and least plausible provocations in order to argue against the entire field, rather than against its overreaching. We can agree about some of the examples and some of the linguistic extensions, but that doesn’t dismiss the entire effort the way Adam seems to think it does.
It’s good to have Adam’s challenge. Because his is a long and thoughtful article, I’ll discuss the thematic problems with it that I think are the most important.
First, I believe he’s too eager to make his case, which is the same criticism he makes of the digital humanists. For example, when talking about the use of algorithmic tools, he talks at length about Franco Moretti’s work, focusing on the essay “Style, Inc.: Reflections on 7,000 Titles.” Moretti used a computer to look for patterns in the titles of 7,000 novels published between 1740 and 1850, and discovered that they tended to get much shorter over time. “…Moretti shows that what changed was the function of the title itself.” As the market for novels got more crowded, the typical title went from being a summary of the contents to a “catchy, attention-grabbing advertisement for the book.” In addition, says Adam, Moretti discovered that sensationalistic novels tended to begin with “The” while “pioneering feminist novels” tended to begin with “A.” Moretti tenders an explanation, writing “What the article ‘says’ is that we are encountering all these figures for the first time.”
Adam concludes that while Moretti’s research is “as good a case for the usefulness of digital tools in the humanities as one can find” in any of the books under review, “its findings are not very exciting.” And, he says, you have to know which questions to ask the data, which requires being well-grounded in the humanities.
That you need to be well-grounded in the humanities to make meaningful use of digital tools is an important point. But here he seems to me to be arguing against a straw man. I have not encountered any digital humanists who suggest that we engage with our history and culture only algorithmically. I don’t profess expertise in the state of the digital humanities, so perhaps I’m wrong. But the digital humanists I know personally (including my friend Jeffrey Schnapp, a co-author of a book, Digital_Humanities, that Adam reviews) are in fact quite learned lovers of culture and history. If there is indeed an important branch of digital humanities that says we should entirely replace the study of the humanities with algorithms, then Adam’s criticism is trenchant…but I’d still want to hear from less extreme proponents of the field. In fact, in my limited experience, digital humanists are not trying to make the humanities safe for robots. They’re trying to increase our human engagement with and understanding of the humanities.
As to the point that algorithmic research can only “illustrate a truism rather than discovering a truth” — a criticism he levels even more fiercely at the Ngram research described in the book Uncharted — it seems to me that Adam is missing an important point. If computers can now establish quantitatively the truth of what we have assumed to be true, that is no small thing. For example, the Ngram work has established not only that Jewish sources were dropped from German books during the Nazi era, but also the timing and extent of the erasure. This not only helps make the humanities more evidence-based — remember that Adam criticizes the digital humanists for their argument-by-assertion — but also opens the possibility of algorithmically discovering correlations that overturn assumptions or surprise us. One might argue that we therefore need to explore these new techniques more thoroughly, rather than dismissing them as adding nothing. (Indeed, the NY Times review of Uncharted discusses surprising discoveries made via Ngram research.)
Perhaps the biggest problem I have with Adam’s critique I’ve also had with some digital humanists. Adam thinks of the digital humanities as being about the digitizing of sources. He then dismisses that digitizing as useful but hardly revolutionary: “The translation of books into digital files, accessible on the Internet around the world, can be seen as just another practical tool…which facilitates but does not change the actual humanistic work of thinking and writing.”
First, that underplays the potential significance of making the works of culture and scholarship globally available.
Second, if you’re going to minimize the digitizing of books as merely the translation of ink into pixels, you miss what I think is the most important and transformative aspect of the digital humanities: the networking of knowledge and scholarship. Adam in fact acknowledges the networking of scholarship in a twisty couple of paragraphs. He quotes the following from the book Digital_Humanities:
The myth of the humanities as the terrain of the solitary genius…— a philosophical text, a definitive historical study, a paradigm-shifting work of literary criticism — is, of course, a myth. Genius does exist, but knowledge has always been produced and accessed in ways that are fundamentally distributed…
Adam responds by name-checking some paradigm-shifting works, and snidely adds “you can go to the library and check them out…” He then says that there’s no contradiction between paradigm-shifting works existing and the fact that “Scholarship is always a conversation…” I believe he is here completely agreeing with the passage he thinks he’s criticizing: genius is real; paradigm-shifting works exist; these works are not created by geniuses in isolation.
Then he adds what for me is a telling conclusion: “It’s not immediately clear why things should change just because the book is read on a screen rather than on a page.” Yes, that transposition doesn’t suggest changes any more worthy of research than the introduction of mass market paperbacks in the 1940s [source]. But if scholarship is a conversation, might moving those scholarly conversations themselves onto a global network raise some revolutionary possibilities, since that global network allows every connected person to read the scholarship and its objects, lets everyone comment, provides no natural mechanism for promoting any works or comments over any others, inherently assumes a hyperlinked rather than sequential structure of what’s written, makes it easier to share than to sequester works, is equally useful for non-literary media, makes it easier to transclude than to include so that works no longer have to rely on briefly summarizing the other works they talk about, makes differences and disagreements much more visible and easily navigable, enables multiple and simultaneous ordering of assembled works, makes it easier to include everything than to curate collections, preserves and perpetuates errors, is becoming ubiquitously available to those who can afford connection, turns the Digital Divide into a gradient while simultaneously increasing the damage done by being on the wrong side of that gradient, is reducing the ability of a discipline to patrol its edges, and a whole lot more.
It seems to me reasonable to think that it is worth exploring whether these new affordances, limitations, relationships and metaphors might transform the humanities in some fundamental ways. Digital humanities too often is taken simply as, and sometimes takes itself as, the application of computing tools to the humanities. But it should be (and for many, is) broad enough to encompass the implications of the networking of works, ideas and people.
I understand that Adam and others are trying to preserve the humanities from being abandoned and belittled by those who ought to be defending the traditional in the face of the latest. That is a vitally important role, for, as a field struggling to establish itself, the digital humanities is prone to overstating its case. (I have been known to do so myself.) But in my understanding, that concern assumes that digital humanists want to replace all traditional methods of study with computer algorithms. Does anyone?
Adam’s article is a brisk challenge, but in my opinion he argues too hard against his foe. The article becomes ideological, just as he claims the explanations, justifications and explorations offered by the digital humanists are.
More significantly, focusing only on the digitizing of works while ignoring the networking of their ideas and of the people discussing those ideas glosses over the locus of the most important changes occurring within the humanities. Insofar as the digital humanities focus on digitization instead of networking, I intend this as a criticism of that nascent discipline even more than as a criticism of Adam’s article.