
June 15, 2013

[2b2k][eim] My Stuttgart syllabus

I’ve just finished leading two days of workshops at the University of Stuttgart as part of my fellowship at the Internationales Zentrum für Kultur- und Technikforschung. (No, I taught in English.) This was for me a wonderful experience. First of all, the students were engaged, smart, talked from diverse standpoints, and fun. Second, it reminded me how to teach. I had so much trouble trying to structure the sessions, feeling totally unsure how one does so. But the eight 1.5-hour sessions reminded me why I loved teaching.

For my own memory, here are the sessions (and if any of you were there and took notes, I’d love to see them):

Friday

#1 Cyberutopianism, technodeterminism, and Internet exceptionalism defined, with JP Barlow’s Declaration of the Independence of Cyberspace as an example. Class introductions.

#2 From the Information Age to the Age of Connection. Why Ted Nelson’s Xanadu did not succeed the way the Web did. Rough technical architecture of the Net and (perhaps) its embedded political values. Hyperlinks.

#3 Digital order. Everything is miscellaneous? From information retrieval to search engines. From schema-based databases to tagging.

#4 Networked knowledge. What knowledge looks like once it’s been freed of paper. Four challenges to networked knowledge (with many more added by the students).

On Saturday we talked about topics that the students decided were interesting:

#1 Mobile net. Is Facebook making us more or less social? Why do we fill up every interstice by using Facebook on mobiles? What does this say about us and the notion of the self?

#2 Downloading. Do you download music illegally? What is your justification? How might artists respond? Why is the term “intellectual property” so loaded?

#3 Education. What makes a great in-person course? What makes for a miserable one? Oddly, many of the characteristics of miserable classes are also characteristics of MOOCs. What might we do about that? How much of this is caused by the fact that MOOCs are construed as courses in the traditional sense?

#4 Internet culture. Is there such a thing? If there are many, is any particular one to be privileged? How does the Net look to a culture that is dedicated to warding off what it sees as corrupting influences? We ended with LolCatBible and the astounding TheJohnnyCashProject.

Thank you, students. This experience meant a great deal to me.


June 2, 2013

[2b2k] Knowledge in its natural state

I gave a 20-minute talk at the Wired Next Fest in Milan on June 1, 2013. Because I needed to keep the talk to its allotted time and because it was being simultaneously translated into Italian, I wrote it out and gave a copy to the translators. Inevitably, I veered from the script a bit, but not all that much. What follows is the script with the veerings that I can remember. The paragraph breaks track the slide changes.

(I began by thanking the festival and my progressive Italian publisher, Codice Edizioni. Codice are pragmatic idealists and have been fantastic to work with.)

Knowledge seems to fit so perfectly into books. But to marvel at how well knowledge fits into books…

… is to marvel at how well each rock fits into its hole in the ground. Knowledge fits books because we’ve shaped knowledge around books and paper.

And knowledge has taken on the properties of books and paper. Like books, knowledge is ordered and orderly. It is bounded, just as books stretch from cover to cover. It is the product of an individual mind that is then filtered. It is kept private, and we’re not responsible for it until it’s published. Once published, it cannot be undone. It creates a privileged class of experts, like the privileged books that are chosen to be published and then chosen to be in a library.

Released from the bounds of paper, knowledge takes on the shape of its new medium, the Internet. It takes on the properties of its new medium just as it had taken on the properties of its old paper medium. It’s my argument today that networked knowledge assumes a more natural shape. Here are some of the properties of new, networked knowledge.

1. First, because it’s a network, it’s linked.

2. These links have no natural stopping point for your travels. If anything, the network gives you temptations to continue, not stopping points.

3. And, like the Net, it’s too big for any one head. Michael Nielsen, the author of Reinventing Discovery, uses the discovery of the Higgs boson as an example. That discovery required gigantic networks of equipment and vast networks of people. There is no one person who understands everything about the system that proved that that particle exists. That knowledge lives in the system, in the network.

4. Like the Net, networked knowledge is in perpetual disagreement. There is nothing about which everyone agrees. We like to believe this is a temporary state, but after thousands of years of recorded history, we can now see for sure that we are never going to agree about anything. The hope for networked knowledge is that we’re learning to disagree more fruitfully, in a linked environment.

5. And, as the Internet makes very clear, we are fallible creatures. We get everything wrong. So, networked knowledge becomes more credible when it acknowledges fallibility. This is very different from the old paper-based authorities who saw fallibility as a challenge to their authority.

6. Finally, knowledge is taking on the humor of the Internet. We’re on the Internet voluntarily, and freed of the constrictions of paper, it turns out that we like being with one another. Even when the topic is serious, like this one at Reddit [a discussion of a physics headline], within a few comments we’re making jokes. And then going back to the serious topic. Paper squeezed the humor out of knowledge. But that’s unnatural.

These properties of networked knowledge are also properties of the Network. But they’re also properties that are more human and more natural than the properties of traditional knowledge.

But there’s one problem:

There is no such thing as natural knowledge. Knowledge is a construct. Our medium may have changed, but we haven’t, at least so it seems. And so we’re not free to reinvent knowledge any way we’d like. Significant problems based on human tendencies are emerging. I’ll point to four quick problem areas.

First, we see the old patterns of concentration of power reemerge on the Net. Some sites have an enormous number of viewers, but the vast majority of sites have very few. [Slide shows Clay Shirky’s Power Law distribution chart, and a photo of Clay]

Albert-László Barabási has shown that this type of clustering is typical of networks even in nature, and it is certainly true of the Internet.
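[To make the mechanism concrete, here is a minimal, hypothetical simulation — not part of the talk — of the preferential attachment process Barabási describes, in which links beget links and a few hubs end up with most of the attention:]

```python
import random

# Preferential attachment: each new link picks an existing site with
# probability proportional to the links that site already has.
random.seed(42)
links = [1] * 10          # start with ten sites, one link each
for _ in range(100_000):  # add 100,000 new links
    target = random.choices(range(len(links)), weights=links)[0]
    links[target] += 1
    if random.random() < 0.01:   # occasionally a brand-new site appears
        links.append(1)

links.sort(reverse=True)
top_share = sum(links[: len(links) // 100]) / sum(links)
print(f"{len(links)} sites; the top 1% hold {top_share:.0%} of all links")
```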

Second, on the Internet, without paper to anchor it, knowledge often loses its context. A tweet…

Slips free into the wild…

It gets retweeted and perhaps loses its author

And then gets retweeted and loses its meaning. And now it circulates as fact. [My example was a tweet about the government not allowing us to sell body parts morphing into a tweet about the government selling body parts. I made it up.]

Third, the Internet provides an incentive to overstate.

Fourth, even though the Net contains lots of different sorts of people and ideas and thus should be making us more open in our beliefs…

… we tend to hang out with people who are like us. It’s a natural human thing to prefer people “like us,” or “people we’re comfortable with.” And this leads to confirmation bias — our existing beliefs get reinforced — and possibly to polarization, in which our beliefs become more extreme.

This is known as the echo chamber problem, and it’s a real problem. I personally think it’s been overstated, but it is definitely there.

So there are four problems with networked knowledge. Not one of them is new. Each has an analog from before the Net.

  1. The loss of context has always been with us. Most of what we believe we believe because we believe it, not because of evidence. At its best we call it, in English, common sense. But history has shown us that common sense can include absurdities and lead to great injustices.

  2. Yes, the Net is not a flat, totally equal place. But it is far less centralized than the old media were, where only a handful of people were allowed to broadcast their ideas and to choose which ideas were broadcast.

  3. Certainly the Internet tends towards overstatement. But we have had mass media that have been built on running overstated headlines. This newspaper [Weekly World News] is a humor paper, but it’s hard to distinguish from serious broadcast news.

  4. And speaking of Fox, yes, on the Internet we can simply stick with ideas that we already agree with, and get more confirmed in our beliefs. But that too is nothing new. The old media actually were able to put us into even more tightly controlled echo chambers. We are more likely to run into opposing ideas — and even just to recognize that there are opposing ideas — on the Net than in a rightwing or leftwing newspaper.

It’s not simply that all the old problems with knowledge have reemerged. Rather, they’ve re-emerged in an environment that offers new and sometimes quite substantial ways around them.

  1. For example, if something loses its context, we can search for that context. And links often add context.

  2. And, yes, the Net forms hubs, but as Clay Shirky and Chris Anderson have pointed out, the Net also lets a long tail form, so that voices that in the past simply could not have been heard, now can be. And the activity in that long tail surpasses the attention paid to the head of the tail.

  3. Yes, we often tend to overstate things on the Net, but we also have a set of quite powerful tools for pushing back. We review our reviews. We have sites like the well-regarded American site Snopes.com that will tell you if some Internet rumor is true. And it’s highly reliable. Then we have all of the ways we talk with one another on the Net, evaluating the truth of what we’ve read there.

  4. And, the echo chamber is a real danger, but we also have on the Net the occasional fulfillment of our old ideal of being able to have honest, respectful conversations with people with whom we fundamentally disagree. These examples are from Reddit, but there are others.

So, yes, there are problems of knowledge that persist even when our technology of knowledge changes. That’s because these are not technical problems so much as human problems…

…and thus require human solutions. And the fundamental solution is that we need to become more self-aware about knowledge.

Our old technology — paper — gave us an idea of knowledge that said that knowledge comes from experts, gets filtered and printed, and then is settled, because that’s how books work. Our new technology shows us we are complicit in knowing. In order to let knowledge get as big as our new medium allows, we have to recognize that knowledge comes from all of us (including experts), it is to be linked, shared, discussed, argued about, made fun of, and is never finished and done. It is thoroughly ours — something we build together, not a product manufactured by unknown experts and delivered to us as if it were more than merely human.

The required human solution therefore is to accept our human responsibility for knowledge, to embrace and improve the technology that gives knowledge to us — for example, by embracing Open Access and the culture of linking and of the Net — and to be explicit about these values.

Becoming explicit is vital because our old medium of knowledge did its best to hide the human qualities of knowledge. Our new medium makes that responsibility inescapable. With the crumbling of the paper authorities, it becomes more urgent than ever that we assume personal and social responsibility for what we know.

Knowing is an unnatural act. If we can remember that — remember the human role in knowing — we now have the tools and connections that will enable even everyday knowledge to scale to a dimension envisioned in the past only by the mad and the God-inspired.

Thank you.


May 26, 2013

[2b2k] Is big data degrading the integrity of science?

Amanda Alvarez has a provocative post at GigaOm:

There’s an epidemic going on in science: experiments that no one can reproduce, studies that have to be retracted, and the emergence of a lurking data reliability iceberg. The hunger for ever more novel and high-impact results that could lead to that coveted paper in a top-tier journal like Nature or Science is not dissimilar to the clickbait headlines and obsession with pageviews we see in modern journalism.

The article’s title points especially to “dodgy data,” and the item in this list that’s by far the most interesting to me is the “data reliability iceberg” and its tie to the rise of Big Data. Amanda writes:

…unlike in science…, in big data accuracy is not as much of an issue. As my colleague Derrick Harris points out, for big data scientists the ability to churn through huge amounts of data very quickly is actually more important than complete accuracy. One reason for this is that they’re not dealing with, say, life-saving drug treatments, but with things like targeted advertising, where you don’t have to be 100 percent accurate. Big data scientists would rather be pointed in the right general direction faster — and course-correct as they go — than have to wait to be pointed in the exact right direction. This kind of error-tolerance has insidiously crept into science, too.

But the rest of the article contains no evidence that the last sentence’s claim is true because of the rise of Big Data. In fact, even if we accept that science is facing a crisis of reliability, the article doesn’t pin this on an “iceberg” of bad data. Rather, it seems to be a melange of bad data, faulty software, unreliable equipment, poor methodology, undue haste, and o’erweening ambition.

The last part of the article draws some of the heat out of the initial paragraphs. For example: “Some see the phenomenon not as an epidemic but as a rash, a sign that the research ecosystem is getting healthier and more transparent.” It makes the headline and the first part seem a bit overstated — not unusual for a blog post (not that I would ever do such a thing!) but at best ironic given this post’s topic.

I remain interested in Amanda’s hypothesis. Is science getting sloppier with data?


April 9, 2013

Elsevier acquires Mendeley + all the data about what you read, share, and highlight

I liked the Mendeley guys. Their product is terrific — read your scientific articles, annotate them, be guided by the reading behaviors of millions of other people. I’d met with them several times over the years about whether our LibraryCloud project (still very active but undergoing revisions) could get access to the incredibly rich metadata Mendeley gathers. I also appreciated Mendeley’s internal conflict between the urge to openness and the need to run a business. They were making reasonable decisions, I thought. At the very least they felt bad about the tension :)

Thus I was deeply disappointed by their acquisition by Elsevier. We could have a fun contest to come up with the company we would least trust with detailed data about what we’re reading and what we’re attending to in what we’re reading, and maybe Elsevier wouldn’t win. But Elsevier would be up there. The idea of my reading behaviors adding economic value to a company making huge profits by locking scholarship behind increasingly expensive paywalls is, in a word, repugnant.

In tweets back and forth with Mendeley’s William Gunn [twitter: mrgunn], he assures us that Mendeley won’t become “evil” so long as he is there. I do not doubt Bill’s intentions. But there is no more perilous position than standing between Elsevier and profits.

I seriously have no interest in judging the Mendeley folks. I still like them, and who am I to judge? If someone offered me $45M (the minimum estimate that I’ve seen) for a company I built from nothing, and especially if the acquiring company assured me that it would preserve the values of that company, I might well take the money. My judgment is actually on myself. My faith in the ability of well-intentioned private companies to withstand the brute force of money has been shaken. After all this time, I was foolish to have believed otherwise.

MrGunn tweets: “We don’t expect you to be joyous, just to give us a chance to show you what we can do.” Fair enough. I would be thrilled to be wrong. Unfortunately, the real question is not what Mendeley will do, but what Elsevier will do. And in that I have much less faith.



I’ve been getting the Twitter handles of Mendeley and Elsevier wrong. Ack. The right ones: @Mendeley_com and @ElsevierScience. Sorry!


March 28, 2013

[annotation][2b2k] Critique^it

Ashley Bradford of Critique-It describes his company’s way of keeping review and feedback engaging.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

To what extent can and should we allow classroom feedback to be available in the public sphere? The classroom is a type of Habermasian civic society. Owning one’s discourse in that environment is critical. It has to feel human if students are to learn.

So, you can embed text, audio, and video feedback in documents, video, and images. It translates docs into HTML. To make the feedback feel human, it uses stamps. You can also type in comments, marking them as neutral, positive, or critique. A “critique panel” follows you through the doc as you read it, so you don’t have to scroll around. It rolls up comments and stats for the student or the faculty.

It works the same in different doc types, including PowerPoint, images, and video.

Critiques can be shared among groups. Groups can be arbitrarily defined.

It uses HTML5 and is written in JavaScript and PHP, with MySQL for storage.
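[As a rough illustration of the typed-feedback model described above — the field names and roll-up logic here are my assumptions, not Critique-It’s actual schema:]

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Comment:
    doc_id: str
    anchor: str   # location in the HTML-converted document (assumed)
    kind: str     # "neutral" | "positive" | "critique"
    body: str

def rollup(comments: list[Comment]) -> Counter:
    """Summarize feedback by kind, as rolled up for students and faculty."""
    return Counter(c.kind for c in comments)

feedback = [
    Comment("essay-1", "p3", "positive", "Strong opening claim."),
    Comment("essay-1", "p7", "critique", "Needs a citation here."),
    Comment("essay-1", "p9", "neutral", "See also last week's reading."),
]
print(rollup(feedback))  # Counter({'positive': 1, 'critique': 1, 'neutral': 1})
```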

“We’re starting with an environment. We’re building out tools.” Ashley aims for Critique^It to feel very human.


[annotation][2b2k] Mediathread

Jonah Bossewitch and Mark Philipson from Columbia University talk about Mediathread, an open source project that makes it easy to annotate various digital sources. It’s used in many courses at Columbia, as well as around the world.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

It comes from Columbia’s Center for New Media Teaching and Learning. It began with Vital, a video library tool. It let students clip and save portions of videos, and comment on them. Mediathread connects annotations to sources by bookmarking, via a bookmarklet that interoperates with a variety of collections. The bookmarklet scrapes the metadata because “We couldn’t wait for the standards to be developed.” Once an item is in Mediathread, it embeds the metadata as well.
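[The core scraping idea might look something like this sketch. The real bookmarklet is JavaScript and handles many site-specific quirks; this only illustrates pulling whatever metadata a page’s meta tags expose:]

```python
from html.parser import HTMLParser

class MetaScraper(HTMLParser):
    """Collect name/content pairs from a page's <meta> tags."""
    def __init__(self):
        super().__init__()
        self.metadata = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            name = a.get("name") or a.get("property")
            if name and a.get("content"):
                self.metadata[name] = a["content"]

page = '<meta name="title" content="Iliad, Book 1"><meta name="creator" content="Homer">'
scraper = MetaScraper()
scraper.feed(page)
print(scraper.metadata)  # {'title': 'Iliad, Book 1', 'creator': 'Homer'}
```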

It has always been conceived of as a “small-group sharing and collaboration space.” It’s designed for classes. You can only see the annotations by people in your class. It does item-level annotation, as well as regions.

Mediathread connects assignments and responses, as well as other workflows. [He's talking quickly :)]

Mediathread’s bookmarklet approach requires it to accommodate the particularities of sites. They are aiming at making the annotations interoperable in standard forms.


[annotation][2b2k] Phil Desenne on Harvard annotation tools

Phil Desenne begins with a brief history of annotation tools at Harvard. There are a lot, for annotating everything from texts to scrolls to music scores to video. Most of them are collaborative tools. The collaborative tool has gone from Adobe AIR to Harvard iSites to open source HTML5. “It’s been a wonderful experience.” It’s been picked up by groups in Mexico, South America, and Europe.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Phil works on edX. “We’re beginning to introduce annotation into edX.” It’s being used to encourage close reading. “It’s the beginning of a new way of thinking about teaching and assessing students.” Students tag the text, which “is the beginning of a semantic tagging system…Eventually we want to create a semantic ontology.”

What are the implications for the “MOOC Generation”? MOOC students are out finding information anywhere they can, yet they’re confined to a single learning management system (LMS). LMSs usually have commentary tools, “but none of them talk with one another. Even within the same LMS you don’t have cross-referencing of the content.” We should have an interoperable layer that rides on top of the LMSs.

Within edX, there are discussions within classes, courses, tutorials, etc. These should be aggregated so that the conversations can reach across the entire space, and, of course, outside of it. edX is now working on annotation systems that will do this. E.g., imagine being able to discuss a particular image or fragments of videos, and being able to insert images into streams of commentary. Plus analytics of these interactions. Heatmaps of activity. And a student should be able to aggregate all her notes, journal-like, so they can be exported, saved, and commented on. “We’re talking about a persistent annotation layer with API access.” “We want to go there.”

For this we need stable repositories. They’ll use URNs.


[annotation][2b2k] Paolo Ciccarese on the Domeo annotation platform

Paolo Ciccarese begins by reminding us just how vast the scientific literature is. We can’t possibly read everything we should. But “science is social” so we rely on each other, and build on each other’s work. “Everything we do now is connected.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Today’s media do provide links, but not enough. Things are so deeply linked. “How do we keep track of it?” How do we communicate with others so that when they read the same paper they get a little bit of our mental model, and see why we found the article interesting?

Paolo’s project — Domeo [twitter:DomeoTool] — is a web app for “producing, browsing, and sharing manual and semi-automatic (structured and unstructured) annotations, using open standards.” Domeo shows you an article and lets you annotate fragments. You can attach a tag or an unstructured comment. The tag can be defined by the user or by a defined ontology. Domeo doesn’t care which ontologies you use, which means you could use it for annotating recipes as well as science articles.

Domeo also enables discussions; it has a threaded messaging facility. You can also run text mining and entity recognition systems (Calais, etc.) that automatically annotate the work with those words, which helps with search, understanding, and curation. This too can be a social process. Domeo lets you keep the annotation private or share it with colleagues, groups, communities, or the Web. Also, Domeo can be extended. In one example, it produces information about experiments that can be put into a database where it can be searched and linked up with other experiments and articles. Another example: “hypothesis management” lets readers add metadata to pick out the assertions and the evidence. (It uses RDF.) You can visualize the network of knowledge.
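[For a sense of what such an annotation can look like, here is an illustrative Open Annotation-style record; the exact JSON is my sketch, not Domeo’s actual output:]

```python
import json

# An annotation body (a tag) attached to a fragment of a target article.
annotation = {
    "@type": "oa:Annotation",
    "oa:hasTarget": {
        "source": "http://example.org/articles/12345",  # hypothetical article URL
        "selector": {
            "type": "oa:TextQuoteSelector",
            "exact": "amyloid-beta aggregation",        # the annotated fragment
        },
    },
    "oa:hasBody": {"type": "oa:Tag", "value": "hypothesis"},
    "annotatedBy": "paolo",
}
print(json.dumps(annotation, indent=2))
```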

It supports open APIs for integrating with other systems, including the Neuroscience Information Framework and Drupal. “Domeo is a platform.” It aims at supporting rich sources, and will add the ability to follow authors and topics, enable mashups, etc.


[annotation][2b2k] Neel Smith: Scholarly annotation + Homer

Neel Smith of Holy Cross is talking about the Homer Multitext project, a “long term project to represent the transmission of the Iliad in digital form.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He shows the oldest extant ms of the Iliad, which includes 10th century notes. “The medieval scribes create a wonderful hypermedia” work.

“Scholarly annotation starts with citation.” He says we have a good standard: URNs, which can point to, for example, an ISBN number. His project uses URNs to refer to texts in a FRBR-like hierarchy [works at various levels of abstraction]. These are semantically rich and machine-actionable. You can google a URN and get the object. You can put a URN into a URL for direct Web access. You can embed an image into a Web page via its URN [using a service, I believe].
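[As an illustration, here is a simplified sketch of reading one of the CTS-style URNs the Homer Multitext uses; the example cites Iliad 1.1 in one manuscript’s text, and real URN handling is richer than this:]

```python
def parse_cts_urn(urn: str) -> dict:
    """Split a CTS URN into its FRBR-like levels (simplified)."""
    _, _, namespace, work, passage = urn.split(":")
    parts = work.split(".")
    return {
        "namespace": namespace,                          # registry, e.g. Greek literature
        "textgroup": parts[0],                           # e.g. Homer
        "work": parts[1] if len(parts) > 1 else None,    # e.g. the Iliad
        "version": parts[2] if len(parts) > 2 else None, # e.g. one manuscript's text
        "passage": passage,                              # e.g. book.line
    }

print(parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001.msA:1.1"))
```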

An annotation is an association. In a scholarly annotation, it’s associated with a citable entity. [He shows some great examples of the possibilities of cross linking and associating.]

The metadata is expressed as RDF triples. Within the Homer project, they’re inductively building up a schema of the complete graph [network of connections]. For end users, this means you can see everything associated with a particular URN. Building a facsimile browser, for example, becomes straightforward, mainly requiring the application of XSL and CSS to style it.
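[A toy version of that triple-based approach, using the rdflib library and made-up predicates rather than the project’s actual schema:]

```python
from rdflib import Graph, Literal, URIRef

g = Graph()
urn = URIRef("urn:cts:greekLit:tlg0012.tlg001.msA:1.1")

# Associate resources with the citable URN (predicates are illustrative).
g.add((urn, URIRef("http://example.org/hasImage"), URIRef("urn:cite:hmt:vaimg.12r")))
g.add((urn, URIRef("http://example.org/hasComment"), Literal("10th-century scholion")))

# "See everything associated with a particular URN":
for predicate, obj in g.predicate_objects(subject=urn):
    print(predicate, obj)
```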

Another example: Mise en page: automated layout analysis. This in-progress project analyzes the layout of annotation info on the Homeric pages.


[annotations][2b2k] Rob Sanderson on annotating digitized medieval manuscripts

Rob Sanderson [twitter:@azaroth42] of Los Alamos is talking about annotating Medieval manuscripts.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He says many Medieval manuscripts are being digitized. The Mellon Foundation is funding many such projects. But these have tended to reinvent the same tech, and have not been designed for interoperability with other projects. So the Digital Medieval Initiative was founded, with a long list of prestigious partners. They thought about what they’d like: distributed, linked data, interoperable, etc. For this they need a shared description format.

The traditional approach is to annotate an image of a page. But it can be very difficult to know which images to annotate; he gives as an example a page that has fold-outs. “The naive assumption is that an image equals a page.” But there may be fragments, or only portions of the page have been digitized (e.g., the illuminations), etc. There may be multiple images of a page, revealed by multi-spectral imaging. There may be multiple orientations of the page, etc.

The solution? The canvas paradigm. A canvas is an empty space corresponding to the rectangle (or whatever) of the page. You allow rich resources to be associated with it, and allow users to comment. For this, they use Open Annotation. You can specify a choice of images. You can associate text with an area of the canvas. There are lots of different ways to visualize those comments: overlays, side-by-side, etc.
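[A minimal sketch of that paradigm — the names are illustrative, not the SharedCanvas vocabulary itself. The canvas is an empty coordinate space standing in for the page, and images and comments alike are annotations targeting all or part of it:]

```python
from dataclasses import dataclass, field

@dataclass
class Canvas:
    label: str
    width: int
    height: int
    annotations: list = field(default_factory=list)

    def annotate(self, resource: str, region: tuple | None = None):
        """Attach an image or comment to the whole canvas or to a region."""
        self.annotations.append(
            {"resource": resource,
             "region": region or (0, 0, self.width, self.height)}
        )

page = Canvas("folio 12r", width=3000, height=4000)
page.annotate("http://example.org/images/12r-full.jpg")           # base image
page.annotate("http://example.org/images/12r-multispectral.jpg",  # partial scan
              region=(100, 200, 800, 600))
page.annotate("Marginal gloss, 10th c.", region=(2500, 300, 400, 900))
print(len(page.annotations), "annotations on", page.label)
```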

You can build hybrid pages. For example, an old scan might have a new color scan of its illustrations pointing at it. Or you could have a recorded performance of a piece of music pointing at the musical notation.

In summary, the SharedCanvas model uses open standards (HTML5, Open Annotation, TEI, etc.) and can be implemented in a distributed fashion across repositories, encouraging engagement by domain experts.

