Alison Head, who is at the Berkman Center and the Library Information Lab this year, but who is normally based at U of Washington’s Info School, is giving a talk called “Modeling the Information-Seeking Process of College Students.” (I did a podcast interview with her a couple of months ago.)
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellchecker. Mangling other people’s ideas and words. You are warned, people.
Project Information Literacy is a research project that reaches across institutions. They’ve (Michael Eisenberg co-leads the project) surveyed 11,000 students on 41 US campuses to find out how students find and use information. They use voluntary samples, not random samples. But, Alison says, the project doesn’t claim to be able to generalize to all students; they look at the relationships among different kinds of schools and overall trends. They make special efforts to include community colleges, which are often under-represented in studies of colleges.
The project wanted to know what’s going through students’ heads as they do research. What’s it like to be a student in the digital age? “How do students define the research process, how do they conceptualize it” throughout everyday school life, including non-course-related research (e.g., what to buy).
Four takeaways from all five studies:
1. “Students say research is more difficult for them than ever before.” This is true both for course-related and everyday-life research. Teachers and librarians denied this finding when it came out. But students describe the process in terms of stress (fear, angst, tired, etc.). Everyday-life research also carried a lot of risk, e.g., when researching medical problems.
Their research led the project to a preliminary model, based on what students told them about the difficulties of doing research: in the beginning of the process, students try to establish four contexts (big picture, info-gathering, language, situational) that provide meaning and interpretation.
a. Big picture. In a focus group, a student said s/he went to an international relations class where there was an assignment on how Socrates would be relevant to a problem today. Alison looked at the syllabus and wondered, “Was this covered?” Getting the big picture enables students to get their arms around a topic.
b. Info gathering. “We give students access to 80 databases at our small library, and they really want access to one,” says Barbara Fister at Gustavus Adolphus.
c. Language. This is why most students go to librarians. They need the vocabulary.
d. Situational. The expectations: how long should the paper be, how do I get an A, etc.? In everyday life, the situational question might be: how far do I go with an answer? When do I know enough?
Students surveyed said that for course-related research they almost always need the big picture, often need info-gathering, sometimes need language, and sometimes need situational. Students were 1.5x more likely to go to a librarian for language context. For everyday-life research, the big picture is often a need, and the others are needed only sometimes. Many students find everyday-life research harder because it’s open-ended: it’s harder to know when you’re done and harder to know when you’re right. Course-related research ends with a grade.
2. “Students turn to the same ‘tried and true’ resources over and over again.” In course research, course readings were used 97% of the time. Search engines: 96%. Library databases: 94%. Instructors: 88%. Wikipedia: 85%. (Those are the 2010 results. In 2009, everything rose except course readings.) Students are not using a lot of on-campus sources. Alison says that during 20 years of teaching, she found students were very disturbed if she critiqued the course readings. Students go to course readings not only for situational context but also for big-picture context, i.e., the lay of the land. They don’t want you critiquing those readings, because you’re disrupting their big-picture context. Librarians were near the bottom, in line with other research findings. But “instructors are a go-to source.” Also, note that students don’t go online for all their info. They talk to friends, instructors, etc.
In everyday-life research, the list in order is: search engines 95%, friends and family 87%, Wikipedia 84%, personal collection 75%, and government sites 65%.
Students tend to repeat the same processes.
3. “Students use a strategy of predictability and efficiency.” They’re not floundering. They have a strategy. You may not like it, but they have one. It’s a way to fill in the context.
Alison presents a composite student named Jessica. (i) She has no shortage of ideas for research. But she needs the language to talk about the project, and to get good results from searching. (ii) Students are often excited about the course research project, but they worry that they’ll pick a topic “that fails them,” i.e., that doesn’t let them fulfill the requirements. (iii) They are often risk-averse. They’ll use the same resource over and over, even Project Muse for a science course. (“I did a paper on the metaphor of breast cancer,” said one student.) (iv) They are often self-taught and independent, and try to port over what they learned in high school. But what works in high school doesn’t work in college. (v) Currency matters.
What’s the most difficult step? 1. Getting started (84%). 2. Defining a topic (66%). 3. Narrowing a topic (62%). 4. Sorting through irrelevant results (61%). Task definition is the most difficult part of research. For everyday-life research, the hardest part is figuring out when you’re done.
So, where do they go when they’re having difficulty in course research? They go to instructors, but handouts fall short: few of the handouts the project looked at discussed what research means (16%). Six in ten handouts sent students to the library for a book. Only 18% mentioned plagiarism, and few of those explained what it is. What students most want is email access to the instructor; second, they want a handout that they can take with them and check off as they do their work. Few handouts tell students how to gather information. Faculty express surprise at this, saying that they assume students already know how to do research, or that it’s not the prof’s job to teach them. The handouts tend not to mention librarians or databases.
Students use library databases (84%), the OPAC (78%), study areas (72%), the library shelves (55%), and the cafe (48%). Only 12% use the online “Ask a librarian” reference service. 20% consult librarians about assignments, but 24% ask librarians about the library system.
Librarians use a model of scholarly thoroughness, while students use a model of efficiency. Students tend to read the course materials and then google for the rest.
Alison plays a video.
How have things changed? 1. Students contend with a staggering amount of information. 2. They are always on and always being notified. 3. It’s a Web 2.0 sharing culture. The old days of dreading group projects are ending; students sometimes post their topics on Facebook to elicit reactions and help. 4. Expectations of information have changed.
“Books, do I use them? Not really; they are antiquated interfaces. You have to look in an index, way in the back, and it’s not hyperlinked.”
[I moderated the Q&A so I couldn't liveblog it.]
I’ve swiped the title of this post from Rebecca J. Rosen’s excellent post at The Atlantic. Darrell Issa has been generally good on open Internet issues, so why is he supporting a bill that would forbid the government from requiring researchers to openly post the results of their research? [Later that day: I revised the previous sentence, which was gibberish. Sorry.]
I’m enjoying a book by Brian Kernighan — yes, that Brian Kernighan — based on a course he’s been teaching at Princeton called “Computers in Our World.” D is for Digital is a clear, straightforward, grownup introduction to computers: hardware and software, programming, and the Internet. [Disclosure: Brian wrote some of it during his year as a fellow at the Berkman Center.]
D is for Digital is brief, but it drives its topics down to the nuts and bolts, which is a helpful reminder that all the magic on your screen is grounded in some very real wires and voltages. Likewise, Brian has a chapter on how to program, taking Javascript as his example. He does not back away from talking about libraries and APIs. He even explains public key encryption clearly enough that even I understand it. (Of course, I have frequently understood it for up to fifteen minutes at a time.) There are a few spots where the explanations are not quite complete enough — his comparison of programming languages doesn’t tell us enough about the differences — but they are rare indeed. Even so, I like that this book doesn’t pander to the reader.
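For flavor, here is the shape of the public-key trick in toy form. This sketch is mine, not the book’s, and the numbers (the classic textbook ones) are far too small to be secure; they only show the arithmetic: anyone can encrypt with the public pair (e, n), but decrypting requires the private d.

```javascript
// Toy RSA with the standard textbook numbers (insecure, illustration only).
const p = 61n, q = 53n;          // two secret primes
const n = p * q;                 // 3233n: the public modulus
const phi = (p - 1n) * (q - 1n); // 3120n
const e = 17n;                   // public exponent, coprime with phi
const d = 2753n;                 // private exponent: (e * d) % phi === 1n

// Modular exponentiation by repeated squaring.
function powMod(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const message = 65n;                        // any number smaller than n
const ciphertext = powMod(message, e, n);   // encrypt with the public key
const decrypted = powMod(ciphertext, d, n); // decrypt with the private key
console.log(ciphertext, decrypted);         // 2790n 65n
```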
D is for Digital would be a nice stocking stuffer with Blown to Bits by Harold Abelson, Ken Ledeen, and Harry R. Lewis, which is an introduction to computers within the context of policy debates. Both are excellent. Together they are excellent squared.
I’ve posted a brief video interview with Avi Warshavsky of the Center for Educational Technology, the leading textbook publisher in Israel. Avi is a thoughtful and innovative software guy who has been experimenting with new ways of structuring textbooks.
Eric Frank is the co-founder of Flat World Knowledge, a company that publishes online textbooks that are free via a browser, but cost money if you want to download them. It’s a really interesting model. I interview him here.
A possible explanation of the observation of neutrinos traveling faster than light has been posted at Arxiv.org by Ronald van Elburg. I of course don’t have any of the conceptual apparatus to be able to judge that explanation, but I’m curious about why, among all the explanations, this is the one I’ve now heard about.
In a properly working knowledge ecology, the most plausible explanations would garner the most attention, because to come to light an article would have to pass through competent filters. In the new ecology, it may well be that what gets the most attention are articles that appeal to our lizard brains in various ways: they make overly-bold claims, they over-simplify, they confirm prior beliefs, they are more comprehensible to lay people than are ideas that require more training to understand, they have an interesting backstory (“Ashton Kutcher tweets a new neutrino explanation!”)…
By now we are all familiar with the critique of the old idea of a “properly working knowledge ecology”: Its filters were too narrow and were prone to preferring that which was intellectually and culturally familiar. There is a strong case to be made that a more robust ecology is wilder in its differences and disagreements. Nevertheless, it seems to me to be clearly true (i.e., I’m not going to present any evidence to support the following) that to our lizard brains the Internet is a flat rock warmed by a bright sun.
But that is hardly the end of the story. The Internet isn’t one ecology. It’s a messy cascade of intersecting environments. Indeed, the ecology metaphor doesn’t suffice, because each of us pins together our own Net environments by choosing which links to click on, which to bookmark, and which to pass along to our friends. So, I came across the possible neutrino explanation at Metafilter, which I was reading embedded within Netvibes, a feed aggregator that I use as my morning newspaper. A comment at Metafilter pointed to the top comment at Reddit’s AskScience forum on the article, which I turned to because on this sort of question I often find Reddit comment threads helpful. (I also had a meta-interest in how articles circulate.) If you despise Reddit, you would have skipped the Metafilter comment’s referral to that site, but you might well have pursued a different trail of links.
If we take the circulation of Ronald van Elburg’s article as an example, what do we learn? Well, not much because it’s only one example. Nevertheless, I think it at least helps make clear just how complex our “media environment” has become, and some of the effects it has on knowledge and authority.
First, we don’t yet know how ideas achieve status as centers of mainstream contention. Is van Elburg’s article attaining the sort of reliable, referenceable position that provides a common ground for science? It was published at Arxiv, which lets any scientist with an academic affiliation post articles at any stage of readiness. On the other hand, among the thousands of articles posted every day, the Physics Arxiv blog at Technology Review blogged about this one. (Even who’s blogging about what where is complex!) If over time van Elburg’s article is cited in mainstream journals, then, yes, it will count as having vaulted the wall that separates the wannabes from the contenders. But, to what extent are articles not published in the prestigious journals capable of being established as touchpoints within a discipline? More important, to what extent does the ecology still center around controversies about which every competent expert is supposed to be informed? How many tentpoles are there in the Big Tent? Is there a Big Tent any more?
Second, as far as I know, we don’t yet have a reliable understanding of the mechanics of the spread of ideas, much less an understanding of how those mechanics relate to the worth of ideas. So, we know that high-traffic sites boost awareness of the ideas they publish, and we know that the mainstream media remain quite influential in either the creation or the amplification of ideas. We know that some community-driven sites (Reddit, 4chan) are extraordinarily effective at creating and driving memes. We also know that a word from Oprah used to move truckloads of books. But if you look past the ability of big sites to set bonfires, we don’t yet understand how the smoke insinuates its way through the forest. And there’s a good chance we will never understand it very fully because the Net’s ecology is chaotic.
Third, I would like to say that it’s all too complex and imbued with value beliefs to be able to decide if the new knowledge ecology is a good thing. I’d like to be perceived as fair and balanced. But the truth is that every time I try to balance the scales, I realize I’ve put my thumb on the side of traditional knowledge to give it heft it doesn’t deserve. Yes, the new chaotic ecology contains more untruths and lies than ever, and they can form a self-referential web that leaves no room for truth or light. At the same time, I’m sitting at breakfast deciding to explore some discussions of relativity by wiping the butter off my finger and clicking a mouse button. The discussions include some raging morons, but also some incredibly smart and insightful strangers, some with credentials and some who prefer not to say. That’s what happens when a population actually engages with its culture. To me, that engagement itself is more valuable than the aggregate sum of stupidity it allows.
—
(Yes, I know I’m having some metaphor problems. Take that as an indication of the unsettled nature of our thought. Or of bad writing.)
Soo Young Rieh is an associate professor at the University of Michigan School of Information. She recently finished a study (funded in part by MacArthur) on how people assess the credibility of sources when they are just searching for information and when they are actually posting information. Her study didn’t focus on a particular age or gender, and found [SPOILER] that we don’t take extra steps to assess the credibility of information when we are publishing it.
We’re really really really pleased that the Digital Public Library of America has chosen two of our projects to be considered (at an Oct. 21 open plenary meeting) for implementation as part of the DPLA’s beta sprint. The Harvard Library Innovation Lab (Annie Cain, Paul Deschner, Jeff Goldenson, Matt Phillips, and Andy Silva), which I co-direct along with Kim Dulin, worked insanely hard all summer to turn our prototypes for Harvard into services suitable for a national public library. I have to say I’m very proud of what our team accomplished, and below is a link that will let you try out what we came up with.
Upon the announcement of the beta sprint in May, we partnered up with folks at thirteen other institutions…an amazing group of people. Our small team at Harvard , with generous internal support, built ShelfLife and LibraryCloud on top of the integrated catalogs of five libraries, public and university, with a combined count of almost 15 million items, plus circulation data. We also pulled in some choice items from the Web, including metadata about every TED talk, open courseware, and Wikipedia pages about books. (Finding all or even most of the Wikipedia pages about books required real ingenuity on the part of our team, and was a fun project that we’re in the process of writing up.)
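The post doesn’t describe how the team found those Wikipedia pages, but to give a sense of the problem, here is one obvious first pass (my guess, not the Lab’s actual method): ask the public MediaWiki API for every article that transcludes the book infobox template. It misses books without an infobox, which is presumably where the real ingenuity came in.

```javascript
// A naive first pass (not the Lab's actual method): list Wikipedia articles
// that transclude Template:Infobox book, via the public MediaWiki API.
async function pagesWithBookInfobox(eicontinue) {
  const params = new URLSearchParams({
    action: 'query',
    list: 'embeddedin',               // pages that embed (transclude) a title
    eititle: 'Template:Infobox book',
    einamespace: '0',                 // article namespace only
    eilimit: '500',
    format: 'json',
  });
  if (eicontinue) params.set('eicontinue', eicontinue); // pagination token
  const res = await fetch(`https://en.wikipedia.org/w/api.php?${params}`);
  return res.json(); // .query.embeddedin holds {pageid, title} records
}
```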
The metadata about those items goes into LibraryCloud, which collects and openly publishes that metadata via APIs and as linked open data. We’re proposing LibraryCloud to the DPLA as a metadata server for the data the DPLA collects, so that people can write library analytics programs, integrate library item information into other sites and apps, build recommendation and navigation systems, etc. We see this as an important way for what libraries know to become fully part of the Web ecosystem.
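To make that concrete, here is the sort of thing an open metadata API enables. The endpoint, parameters, and field names below are invented for illustration; they are not LibraryCloud’s actual API.

```javascript
// Hypothetical client for a LibraryCloud-style open metadata API.
// The endpoint and field names here are made up for illustration.
async function booksAbout(topic) {
  const url = 'https://librarycloud.example.org/api/items?q=' +
              encodeURIComponent(topic);
  const res = await fetch(url);
  const items = await res.json();
  // With open metadata, anyone can build analytics or recommendations:
  return items
    .filter(item => item.format === 'book')
    .sort((a, b) => b.checkouts - a.checkouts) // crude popularity sort
    .slice(0, 10);
}
```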
ShelfLife is one of those possible recommendation and navigation systems. It is based on a few basic hypotheses:
- The DPLA should be not just a service but a place, where people can not only read/view items but also engage with other users.
- Library items do not exist on their own, but are always part of various webs. It’s helpful to be able to switch webs and contexts with minimal disruption.
- The behavior of the users of a collection of items can be a good guide to those items; we think of this as “community relevance,” and calculate it as “shelfRank” (see the sketch after this list).
- The system should be easy to use but enable users to drill down or pop back up easily.
- Libraries are social systems. Library items are social objects. A library navigation system should be social as well.
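To illustrate the “community relevance” idea, here is a minimal sketch of what a shelfRank-style score could look like. The signals and weights are assumptions I’ve made for illustration; the post doesn’t spell out the real computation.

```javascript
// A shelfRank-style community-relevance score (my sketch, not the real one).
// Log-damped so a few heavily used items don't swamp everything else.
function shelfRank(item) {
  const weights = { checkouts: 1.0, holds: 2.0, courseReserves: 3.0 };
  return Object.entries(weights).reduce(
    (score, [signal, weight]) => score + weight * Math.log1p(item[signal] || 0),
    0
  );
}

// Order a shelf so community-relevant items surface first.
const shelf = [
  { title: 'Item A', checkouts: 120, holds: 4, courseReserves: 2 },
  { title: 'Item B', checkouts: 15,  holds: 0, courseReserves: 0 },
].sort((a, b) => shelfRank(b) - shelfRank(a));
```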
Apparently the DPLA agreed enough to select ShelfLife and LibraryCloud along with five other projects out of 38 submitted proposals. The other five projects — along with another three in a “lightning round” (where the stakes are doubled and anything can happen??) — are very strong contenders and in some cases quite amazing. It seems clear to our team that there are synergies among them that we hope and assume the DPLA also recognizes. In any case, we’re honored to be in this group, and look forward to collaborating no matter what the outcome.
You can try the prototype of ShelfLife and LibraryCloud here. Keep in mind please that this is live code running on top of a database of 15M items in real time, and that it is a prototype (and in certain noted areas merely a demo or sketch). I urge you to take the tour first; there’s a lot in these two projects that you’ll miss if you don’t.
I forked yesterday for the first time. I’m pretty thrilled. Not about the few lines of code that I posted. If anyone notices and thinks the feature is a good idea, they’ll re-write my bit from the ground up.* What’s thrilling is seeing this ecology in operation, for the software development ecology is now where the most rapid learning happens on the planet, outside the brains of infants.
Compare how ideas and know-how used to propagate in the software world. It used to be that you worked in a highly collaborative environment, so it was already a site of rapid learning. But the barriers to sharing your work beyond your cube-space were high. You could post to a mailing list or UseNet if you had permission to share your company’s work, you could publish an article, you could give a talk at a conference. Worse, think about how you would learn if you were not working at a software company or attending college: Getting answers to particular questions — the niggling points that hang you up for days — was incredibly frustrating. I remember spending much of a week trying to figure out how to write to a file in Structured BASIC [SBASIC], my first programming language, eventually cold-calling a computer science professor at Boston University who politely could not help me. I spent a lot of time that summer learning how to spell “Aaaaarrrrrggggghhhhh.”
On the other hand, this morning Antonio, who is doing some work for the Library Innovation Lab this summer, poked his head in and pointed us to a jQuery-like data visualization library. D3 makes it easy for developers to display data interactively on Web pages (the examples are eye-popping), and the author, mbostock, made it available for free to everyone. So, global software productivity just notched up. A bunch of programs just got easier to use, or more capable, or both. But more than that, if you want to know how mbostock did it, you can read the code. If you want to modify it, you will learn deeply from the code. And if you’re stuck on a problem — whether n00bish or ultra-geeky — Google will very likely find you an answer. If not, you’ll post at StackOverflow or some other site and get an answer that others will also learn from.
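For anyone wondering what “data-driven documents” means in practice, here is the classic minimal D3 pattern (adapted from mbostock’s own tutorials, and assuming d3 is loaded on the page): bind an array to DOM elements and let the data drive their attributes.

```javascript
// Minimal D3: bars made of divs, widths driven by the bound data.
const data = [4, 8, 15, 16, 23, 42];

d3.select('body')
  .selectAll('div')
  .data(data)     // join the array to (not-yet-existing) divs
  .enter()        // one placeholder per unmatched datum
  .append('div')
  .style('width', d => d * 10 + 'px')
  .style('background', 'steelblue')
  .style('color', 'white')
  .text(d => d);
```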
The general principles of this rapid-learning ecology are pretty clear.
First, we probably have about the same number of smart people as we did twenty years ago, so what’s making us all smarter is that we’re on a network together.
Second, the network has evolved a culture in which there’s nothing wrong with not knowing. So we ask. In public.
Third, we learn in public.
Fourth, learning need not be a private act that occurs between a book and a person, or between a teacher and a student in a classroom. Learning that is done in public also adds to that public.
Fifth, show your work. Without the “show source” button on browsers, the ability to create HTML pages would have been left in the hands of HTML Professionals.
Sixth, sharing is learning is sharing. Holy crap, the increasingly particular ownership demands we make about our ideas get in the way of learning!
Knowledge once was developed among small networks of people. Now knowledge is the network.
*I added a couple of features I needed to an excellent open source program that lets you create popups that guide users through an app. The program is called Guiders-JS, by Jeff Pickhardt at Optimizely. (Thanks, Jeff!)
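For the curious, usage looks roughly like this. I’m reconstructing from memory of the Guiders-JS README, so treat the option names as approximate and check the repo for the exact API.

```javascript
// Roughly how Guiders-JS chains popups into a tour (from memory of the
// README; option names may differ slightly in the current repo).
guiders.createGuider({
  id: 'welcome',
  title: 'Welcome!',
  description: 'This guider introduces the app.',
  buttons: [{ name: 'Next' }],
  next: 'search-box',    // id of the guider to show next
  overlay: true
}).show();

guiders.createGuider({
  id: 'search-box',
  title: 'Search here',
  description: 'Type a query to get started.',
  attachTo: '#search',   // anchor the popup to a page element
  position: 3,           // clock position relative to that element
  buttons: [{ name: 'Close' }]
});
```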
The plan to provide ultra high speed Internet connectivity to universities (mainly in the heartland) is exciting. And it’s got some serious people behind it, including Lev Gonick and Blair Levin.
The NY Times article, seeking to find something negative to say about it, finds someone who doubts that providing significantly higher speeds will lead to innovative uses of those greased-lightning pipes. Does history count for nothing?