I was invited to give a talk yesterday afternoon to the faculty at Brookline High School, the school where all three of our children were educated and from which my wife and both of her parents graduated. Furthermore, the event was held in the Black Box, a performance space I watched our youngest child perform in many times. (Go T-Tones!) So, it was thrilling and quite intimidating, even though the new headmaster, Deb Holman [twitter: bhsheadmaster], could not be more welcoming and open.
There were some great (= hard) questions, and a lot of skepticism about my comments, but not all that much time to carry on a conversation. After most people left, a couple of teachers stayed to talk.
One said that she thoroughly disagrees with my generally positive characterization of the Internet. In her experience, it is where children go to get quick answers. Rather than provoking them and challenging them, the Net lets them get instant gratification, and shuts down their curiosity.
We talked for a while. Her experience certainly rings true. After all, I go to the Net for quick answers also, and if I had to write an assignment on, say, The Great Gatsby, and I wanted to finish it before The Walking Dead comes on, I’d be out on the Net. And I’d get it done much faster than in the old days when I’d have to go to the library.
I’m still not sure what to make of this phenomenon. Did the old library experience of looking things up in the card catalog or in the Periodical Index make me any more thoughtful than googling does now? In fact, I’m more likely to see more ideas and opinions on the Net than in a trip to the library. On the other hand, the convenience of the Net means that I can just look up some ideas rather than having to work through them myself; the Net is letting students short-circuit the process of forming ideas. Perhaps the old difficulty of accessing materials added friction that usefully slowed down thought. I don’t know. I don’t feel that way about my own experience, but I am not a high school student, and I’m pretty self-deluding to begin with.
Anyway, that’s pretty much the issue the second teacher brought up after the talk. Keep in mind that BHS has an extraordinary set of teachers, always caring and frequently quite inspiring. She is in the School Within a School, which is more loosely structured than the rest of BHS. When she gives writing assignments, she tells her students to come up with an idea that will surprise her, and to express it in their own voice. Very cool.
Her concern is that the jangle of the Net keeps students from mulling over ideas. Thought comes from a private and individual place, she believes, and students need that stillness and aloneness.
I can’t disagree with her. I want students to understand — to experience — the value of solitude and quiet, and to have internalized enough information that they can have it at hand to play with and synthesize. And yet…
…I’m not convinced that private thought is the realest thought. I know that who I am when I’m alone doesn’t feel more real than who I am when I’m with others, and in many ways feels less authentic; I’ve written before about the inner narrator who accompanies me when I visit someplace new alone, making me feel more crazy than authentic. In a similar way, I’m not ready to accept that private thinking is the best thinking or the most authentic thinking. It has its place, of course, but personally (data point of one!) I think best when engaged with others, or when I’m writing while imagining my words engaging with others.
We have, it seems to me, overvalued private thinking, which is certainly not to say that it has no value. We have likewise undervalued social thinking. But now we think in public, out loud, with others. Most of our public engagements of course are not particularly deep or thoughtful in any normal use of the term. That’s why we need to be educating our children to appreciate thinking out loud with others, and teaching them how to do it. It’s in these public multi-way discussions that ideas and knowledge develop.
While there are many ways in which public thinking can go wrong, it has the advantage of revealing the mechanisms of knowledge in all their fallibility. We are still carrying over the cultural wish for black box authorities whom we can trust simply because they were the ones who said it. We need to steer our children away from that wish for inhuman knowledge, and thus toward recognizing how ideas and knowledge actually develop. Public thinking does that. At least it should. And it will do it more if our children learn to always wonder how knowledge has been brought forward. Especially when the ideas seem so obvious.
This is one reason I find the “flipped classroom” idea so interesting. (Good discussion of this yesterday on On Point.) I was asked yesterday what I’d like BHS to do if I could have it do anything. I answered rather badly, but part of it would have to be that students learn how to engage with one another socially so that they build knowledge together, and this knowledge tolerates disagreement, is assumed to be public, and is aware of itself as a product of social engagement. Of course that happens already in classrooms — and more so (presumably) in flipped classrooms — but we should be preparing our students for doing this virtually as well as in real space because the “real” discussions will increasingly be online where there is a wealth of sources to draw upon and to argue about.
But it’s hard to see how we get there so long as we continue to assign papers and reports as the primary type of knowledge artifact, isn’t it? (I’m not even going to mention standardized testing.) Doing so implicitly tells students that knowing is what you do alone: foraging for sources, coming back with useful bits, and then engaging in an internal thought process that renders them into one of the conventional written forms. In that frame, the Net looks like an uncurated library, overflowing with lies, studded with occasional truths.
Instead, students could be required to explore a topic together, in public (or at least in the protected public of their class), discussing, arguing, joking, and evaluating one another’s sources. In that frame, the Net looks like a set of discussions, not an information resource at the end of the Information Highway. After all, kids don’t come into a class interested in The Great Gatsby. The teacher will help them to see what’s interesting about the novel, which is crucial and not easy to do. But primarily we get interested in things through one another. My interest steers yours, and yours amplifies mine. Our interest in The Great Gatsby is mediated and amplified by our interest in one another. We make the world interesting together. The Net does this all the time. Papers and reports rarely do. In their pursuit of demonstrating mastery, they too often drive the interest right out of the topic — less so at a wonderful school like BHS where teachers ask students to write in their own voice and come up with ideas that surprise them both.
Anyway, I came out of the session very stimulated, very thankful that so many of my relatives had the great good luck to attend that institution, and ever thankful to our teachers.
[Note that this is cross posted at the new Digital Scholarship at Harvard blog.]
Ralph Schroeder and Eric Meyer of the Oxford Internet Institute are giving a talk sponsored by the Harvard Library on Internet, Science, and Transformations of knowledge.
NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.
Ralph begins by defining e-research as “Research using digital tools and digital data for the distributed and collaborative production of knowledge.” He points to knowledge as the contentious term. “But we’re going to take a crack at why computational methods are such an important part of knowledge.” They’re going to start with theory and then move to cases.
Over the past couple of decades, we’ve moved from talking about supercomputing to the grid to Web 2.0 to clouds and now Big Data, Ralph says. There is continuity, however: it’s all e-research, and to have a theory of how e-research works, you need a few components: 1. Computational manipulability (mathematization) and 2. The social-technical forces that drive that.
Computational manipulability. This is important because mathematics enables consensus and thus collaboration. “High consensus, rapid discovery.”
Research technologies and driving forces. The key to driving knowledge is research technologies, he says. I.e., machines. You also need an organizational component.
Then you need to look at how that plays out in history, physics, astronomy, etc. Not all fields are organized in the same way.
Eric now talks, beginning with a quote from a scholar who says he now has more information than he needs, all without rooting around in libraries. But others complain that we are not asking new enough questions.
He begins with the Large Hadron Collider. It takes lots of people to build it and then to deal with the data it generates. Physics is usually cited as the epitome of e-research. It is the exemplar of how to do big collaboration, he says.
Distributed computation is a way of engaging citizens in science, he says. E.g. Galaxy Zoo, which engages citizens in classifying galaxies. Citizens have also found new types of galaxies (“green peas”), etc. there. Another example: the Genetic Association Information Network is trying to find the causes of bipolar disorder. It has now grown into a worldwide collaboration. Another: Structure of Populations, Levels of Abundance, and Status of Humpbacks (SPLASH), a project that requires human brains to match humpback tails. By collaboratively working on data from 500 scientists around the Pacific Rim, patterns of migration have emerged, and it was possible to come up with a count of humpbacks (about 15-17K). We may even be able to find out how long humpbacks live. (It’s at least 120 years, because a harpoon head was found in one from a company that went out of business that long ago.)
Ralph looks at e-research in Sweden as an example. They have a major initiative under way trying to combine health data with population data. The Swedes have been doing this for a long time. Each Swede has a unique ID; this requires the trust of the population. The social component that engenders this trust is worth exploring, he says. He points to cases where IP rights have had to be negotiated. He also points to the Pynchon Wiki where experts and the crowd annotate Pynchon’s works. Also, Google Books is a source of research data.
Eric: Has Google taken over scholarly research? 70% of scholars use Google and 66% use Google Scholar. But in the humanities, 59% go to the library. 95% consult peers and experts — they ask people they trust. It’s true in the physical sciences too, he says, although the numbers vary some.
Eric says the digital is still considered a bit dirty as a research tool. If you have too many URLS in your footnotes it looks like you didn’t do any real work, or so people fear.
Ralph: Is e-research old wine in new bottles? Underlying all the different sorts of knowledge is mathematization: a shared symbolic language with which you can do things. You have a physical core that consists of computers around which lots of different scholars can gather. That core has changed over time, but all offer types of computational manipulability. The Pynchon Wiki just needs a server. The LHC needs to be distributed globally across sites with huge computing power. The machines at the core are constantly being refined. Different fields use this power differently, and focus their efforts on using those differences to drive their fields forward. This is true in literature and language as well. These research technologies have become so important since they enable researchers to work across domains. They are like passports across fields.
A scholar who uses this tech may gain social traction. But you also get resistance: “What are these guys doing with computing and Shakespeare?”
What can we do with this knowledge about how knowledge is changing? 1. We can inform funding decisions: What’s been happening in different fields, how they are affected by social organizations, etc. 2. We need a multidisciplinary way of understanding e-research as a whole. We need more than case studies, Ralph says. We need to be aiming at developing a shared platform for understanding what’s going on. 3. Every time you use these techniques, you are either disintermediating data (e.g., Galaxy Zoo) or intermediating (biomedicine). 4. Given that it’s all digital, we as outsiders have tremendous opportunities to study it. We can analyze it. Which fields are moving where? Where are projects being funded and how are they being organized? You can map science better than ever. One project took a large chunk of academic journals and looked in real time at who is reading what, in what domain.
This lets us understand knowledge better, so we can work together better across departments and around the globe.
Q: Sometimes you have to take a humanities approach to knowledge. Maybe you need to use some of the old systems investigations tools. Maybe link Twitter to systems thinking.
A: Good point. But caution: I haven’t seen much research on how the next generation is doing research and is learning. We don’t have the good sociology yet to see what difference that makes. Does it fragment their attention? Or is this a good thing?
Q: It’d be useful to know who borrows what books, etc., but there are restrictions in the US. How about in Great Britain?
A: If anything, it’s more restrictive in the UK. In the UK a library can’t even archive a web site without permission.
A: The example I gave of real time tracking was of articles, not books. Maybe someone will track usage at Google Books.
Q: Can you talk about what happens to the experience of interpreting a text when you have so much computer-generated data?
A: In the best cases, it’s both/and. E.g., you can’t read all the 19th century digitized newspapers, but you can compute against it. But you still need to approach it with a thought process about how to interpret it. You need both sets of skills.
A: If someone comes along and says it’s all statistics, the reply is that no one wants to read pure stats. They want to read stats put into words.
Q: There’s a science reader that lets you keep track of which papers are being read.
A: E.g., Mendeley. But it’s a self-selected group who use these tools.
Q: In the physical sciences, the more info that’s out there, the harder it is to tell what’s important.
A: One way to address it is to think about it as a cycle: as a field gets overwhelmed with info, you get tools to concentrate the information. But if you only look at a small piece of knowledge, what are you losing? In some areas, e.g., areas within physics, everyone knows everyone else and what everyone else is doing. Earth sciences is a much broader community.
[Interesting talk. It’s orthogonal to my own interests in how knowledge is becoming something that “lives” at the network level, and is thus being redefined. It’s interesting to me to see how this looks when sliced through at a different angle.]
Categories: too big to know
Tagged with: 2b2k
Date: June 7th, 2012 dw
At FOO East, at a small session, I gave a brief re-cap of what “Too Big to Know” is supposed to be about, and asked for help, particularly with the chapter I’m about to start writing (on “difference”), and with an upcoming chapter on how we make decisions. The “difference” chapter deals with the question the prior chapter — a history of facts — leaves the reader with: If we no longer (at least in many fields) have the comfort of thinking that what we know of the world rests on a bedrock of facts, then what do we do about the, um, fact that we don’t collectively agree about anything?
The discussion was quite helpful about those two chapters, especially the one on decisions. I may blog about that later, but something quite disturbing happened during the hour-long discussion. About three-quarters of the way in, someone (let’s call him Seth because that’s not his name), said, kindly, “I understand what this book is about, but with your other books I knew why they mattered. I’m not getting why this one does.”
The problem is that Seth is just about my ideal reader. He even liked my other books. So, if I can’t explain to him face to face why 2b2k matters, then I have a problem.
Now, maybe I just did a lousy job in my overview. And, actually, I did. I got snared by some abstract points I happen to find interesting. Plus, since it was an overview, I didn’t go through the examples in the various chapters, which made the book sound more theoretical than it is. But Seth’s problem is real and worrisome.
His comment bothers me particularly because I worry that I am peculiarly hung up about knowledge. I think we’re undergoing a revolution in knowledge, but most of the world doesn’t think about it in those terms, and most of the world has been making the transition in a pragmatic and effective way anyway. This is one reason why the topic of expertise is better for the book than the topic of knowledge, although the book has slipped its leash and now seems to be chasing knowledge through the underbrush. People know that the role of experts and expertise matters.
So, here’s what I’m going to do. For now I’m going to leave chapter 3 — the history of facts chapter that’s actually about removing the hope of hitting bedrock in our arguments — as is (especially since I just finished a draft of it three days ago). I’m going to make sure that the next chapter, on the inevitability of difference and disagreement, gets pulled back toward pragmatic questions. Inevitably, in that chapter I’m going to talk about the assumptions that underlie our belief that since diversity is good for decisions, radical diversity is even better. Some of that will be theoreticalish. But I will be sure to stress practical considerations, especially how to scope difference, i.e., how much diversity of opinion is good and when does too much diversity get in the way of progress towards accepted goals. (Scott Page’s “The Difference” is useful here.) I hope also to talk about homophily, serendipity, and curiosity (= demand-side serendipity).
Addressing Seth’s question makes the chapter on decisions especially important, because decisions are where the questions of knowledge come to a head. Difference, diversity, blah blah blah, but how does this affect me at the moment when I have to say yes or no?
Categories: too big to know
Tagged with: 2b2k
Date: May 3rd, 2010 dw
I had just finished a draft of the informal talk I was scheduled to give at Nature in London when I heard that our flight had been canceled. I’m very disappointed because getting to talk with Nature folks about what the Web is doing to knowledge is a pretty great opportunity for me to learn from very thoughtful people on the front line. Also, I was looking forward to seeing my friend Timo Hannay there. Not to mention some unNatural folks we were looking forward to having a meal with. Anyway, this is what I planned on saying in my brief conversation-opener.
I was going to begin with laying out the issue that Too Big to Know seems to be addressing these days: Now that — thanks to the Web — we can’t know anything without simultaneously knowing that there is waaaaay too much for us to ever know, knowledge and knowing are changing. The old strategy of reducing knowledge to what our medium (paper) can handle is being replaced by new strategies appropriate for the inclusive nature of knowledge in a medium built out of links. (Links are about including more and more; books are about excluding everything except what really really counts.)
After a lot of failed outlines for the talk, I had decided on narrowing my focus (oh, the irony of having to narrow one’s focus in a talk about the extravagance of knowledge!) to two changes to the nature of expertise and knowledge, both based on the assumption/presumption that expertise is becoming networked and is thus taking on properties of networks.
First, transparency (although that probably isn’t the best word to sum up this point). I wanted to say something broad and vague about a change from thinking of knowledge as a reflection of the world, known to be true because the method that derived it is repeatable. (This applies to scientific knowledge, but I was going to be talking to Nature after all.) Of course, few experiments are actually repeated; if they all were, we’d cut the pace of science in half. But, designing an experiment as if it were to be repeated creates a useful methodology for scientific work. We live in a post-Bacon world, however. After Watson and Crick, after Kuhn, after philosophers such as Latour, we no longer think of science as merely a neutral mirror, invisible in itself. Now the Web is changing the topology of science. Science will still use repeatable methodologies, but authority is increasingly coming from the social world in which the work is embedded. Indeed, we can now see how the work is appropriated by others, which used to be pretty much a black box. We can thus see the value of the work, whereas before much of that value was hidden. We can also see distressingly how works are misappropriated and rejected by their culture. This is a type of transparency to and fro: From the scientific work to the world, and forward from the work into the culture. This is a 180 degree turn from the old regime that viewed authority as a stopping point for inquiry; links are continuations, not stopping points.
Second, I was going to point to networked knowledge taking on the Net’s embrace of differences: nothing goes uncontroverted on the Net. On the Net, every statement has an equal and opposite reaction. Something like that. There are some good consequences of this. Lots more views get aired. We are filling out the ecology of knowledge, from well-vetted bastions such as Nature to unvetted venues like arXiv.org. But we also don’t really know what to do with the fact of difference. Some groups rule out of discourse those who disagree too fundamentally. In fact, we all do, and I can see the sense in that; do we have to include Creationists in every evolutionary science conference? But denying the legitimacy of difference also has a cost. We don’t have the metaphysics and possibly the genetic neural set-up to deal with fundamental differences. So, I don’t know what comes of this.
But now I don’t have to because Odin blew up a mountain in Iceland and my trip to London has been scrubbed.
(I’m blithe about the volcano because I basically have no discretionary Internet access, so I’m just assuming there weren’t any deaths or major destruction caused by the eruption. I do realize that some things are more important than my travel plans.)
Categories: too big to know
Tagged with: 2b2k
Date: April 15th, 2010 dw
I have to say that Tea with the Economist was a fun experience. The Economist has been videoing tea-time discussions with various folks. In line with that magazine’s tradition of anonymous authoring, the interviewer is unnamed, but I can assure you that he is as astute as he is delightful.
We talk about what people will do with the big loads of data that some governments are releasing, and the general problem of the world being too big to know.
Categories: too big to know
Tagged with: 2b2k • the economist
Date: March 3rd, 2010 dw
As part of my Be A Bigger A-Hole resolution, let me note that the Harvard Business Review blog has just run a post of mine that looks at the history of the DIKW pyramid and why it doesn’t make that much sense.
Even though I think Chapter 1 may have to undergo total revision = start again, I’ve started Chapter 2 as if I knew where Chapter 1 left off. And so far I can’t say that I have confidence that it’s headed in the right direction.
After several attempts to open the chapter in a way that might actually be interesting, I’ve settled on pointing out the wide variety of fields in which people advertise expertise. It’s a cheap laugh, but it’s quick. Then I say that expertise is part of our evolutionary strategy for knowledge, which leads me to Darwin on the evolution of language, just for context. If you know of scientific work on evolutionary advantages conferred by having persistent, shared stores of knowledge or by being able to write, let me know. I very briefly tell the story of Abu Ali al-Hasan ibn al-Haytham, who invented most of the scientific method at the end of the first millennium, so that I can talk about knowledge as something we piece together through small investigations. The “scientific method” thread lets me then talk about the repeatability of experiments and make the transitional point that the aim of repeatability is not to have to repeat, because our knowledge strategy is: Discover, write it down, and move on.
At the moment, I’m writing a CYA paragraph reminding the reader that we’ve spent the past 50 years or so showing that knowledge doesn’t arise purely out of reason: Kuhn, Latour, Foucault, etc.
At the bottom of the screen I have a reminder that I’m aiming at talking about knowledge as a system of authority that gives stopping points for inquiry.
I would feel better if there were less exposition and more examples.
Categories: too big to know
Tagged with: 2b2k
Date: January 9th, 2010 dw
And when I say “first draft,” what I actually mean is the fifth draft of the first draft. Even that’s not right, since I go through the chapter continuously, and create a new draft (or what I should perhaps call a “version”) whenever I’m about to make a big change I think I may regret.
Anyway, I think and hope that it’s in roughly the shape it needs to be in, although I’ll re-read it tomorrow and may decide to scrap it. And when I’ve finished the last chapter, I may well see that I need to throw out this one and begin again. Life in the book writing biz.
There are definitely things I don’t like about the current version. For example, the beginning. And the ending. Also, some stuff in the middle.
The current draft begins with the question “If we didn’t have a word for knowledge, would we feel the need to create one?” I don’t answer that in this chapter. I’m thinking I’ll come back to it at the end of the book. Instead, I quickly go through some of the obvious reasons we’d answer “yes.” But then I need to suggest that the answer might be “no,” and I don’t think I do a good enough job on that. It’s difficult, because the whole book whittles away at that answer, so it’s hard to come up with a context-free couple of paragraphs that will do the job. I want this chapter to focus on the nature of knowledge as a structure, so I contrast traditional guide books with the open-endedness of the Web, hoping to suggest that knowledge has gotten too big to be thought of as structure or even as a realm. (I can only hint at this at this point.) But, the Web example seems so old hat to me that I even have to apologize for it in the text (“Just another day on the Web…”). I’d rather open by having me in some actual place that I can write about — someplace where I can point to obvious features that are only obvious because we make non-obvious assumptions about the finitude, structure, and know-ability of knowledge. A library? I’d like to think of something more novel.
Since I last updated this blog about my “progress,” I’ve added a section on the data-information-knowledge-wisdom hierarchy, which traces back to T.S. Eliot. I glom onto some of the definitions of “knowledge” proposed by those who promulgate that hierarchy and point out that they have little to do with what we usually mean by knowledge (and what Eliot meant by it); rather they slap the label “knowledge” on whatever seems to be the justification for investing in information processing equipment. I then swerve from giving my own definition — a swerve I should justify more explicitly — and instead spend some time describing the nature of traditional knowledge. The result of that section is that we think of knowledge as something built on firm foundations. These days, we take facts as the bricks of knowledge. But it wasn’t always so. And that I hope leads the reader smoothly enough into a discussion of the history of fact-based knowledge (which I’m maintaining really came into its own in the early 19th century British social reform movement).
I also added a brief bit about what non-fact-based knowledge looked like. I’d already discussed the medieval idea of assembling knowledge based on analogies, but I wanted to give a more modern example. So, I looked at Malthus, whose big book came out in 1798. I was disappointed to find that Malthus’ book is full of learned discussions of statistics and facts, and thus not only wasn’t a suitable example but seemed to disprove my thesis. Then I realized I was looking at the 6th edition. Malthus revised and republished his book for the next thirty years or so. If you compare the 6th edition with the first, you are struck by how stat-free edition #1 is and how stat-full #6 is. The first edition is a deductive argument based on seemingly self-evident propositions. The support he gives for his conclusion is based on anthropological sketches and guesses about why various populations have been kept in check. The difference between #1 and #6 actually helps my case.
The last section now introduces the idea of “knowledge overload” (which is still distressingly vague and I may have to drop it) and foreshadows some of the changes that overload is bringing. I’m having trouble getting the foreshadowing right, though, since it requires stating themes that will take entire chapters to unpack.
So, having obsessively worked on this every day for the past few weeks with no days off from it, I’m going to let it sit for a day or two. I think I’ll start sketching Chapter 2.
Categories: too big to know
Tagged with: 2b2k • too big to know
Date: January 2nd, 2010 dw
Yesterday I wrote a little — which will probably turn out to be too much — about the history of fact-finding missions. They’re really quite new, becoming a conspicuous part of international dispute settlement only with the creation of The Hague Convention in 1899. If you do a search on the phrase at the NY Times, you’ll see that there are only intermittent references until the 1920s when suddenly there are lots of them.
It strikes me as odd that we didn’t always have fact-finding missions, which is why I find it interesting. But I don’t think I can convince the reader that it’s interesting, which is why I’ve probably gone on too long about them. (There were obviously previous times when we tried to ascertain facts, but the phrase and the institutionalizing of fact-finding missions or commissions is what’s relatively new.)
Today I’m thinking I really need to shore up the opening section of this first chapter in order to show why the next section (on the history of facts, including fact-finding missions) matters. I think I’ll try to do that by briefly sketching our normal “architecture” of knowledge. For this it’d be good to come up with an easy example. Working on it…
Terry Jones has an excellent post that lists the problems introduced by maintaining a hard distinction between metadata and data.
Terry cites Everything Is Miscellaneous (thanks, Terry), which argues that the distinction, which is hard-coded in the Age of Databases, becomes a merely functional difference in the Age of Messy Links: Metadata is what you know and data is what you’re looking for. For example, the year of a CD is metadata about the CD if you know the year a Bob Dylan CD came out but you don’t remember the title, and the title can be metadata if you know the title but want to find the year. And in both cases, it could all be metadata in your search for lyrics.
This is all very squishy and messy because the distinction is, as Terry says, artificial. It comes from thinking about experience as content that gets processed, as if we worked the way computers do. More exactly, it comes from thinking about experience as a set of Experience Atoms that then have to be assembled; metadata are the labels that tell you that Atom A goes into Atom Z. But experience is far more like language than like particle physics or Ikea assembly instructions. And that’s for a very good reason: linguistic creatures’ experience cannot be understood apart from language. Language doesn’t neatly separate into content and meta-content. It all comes together and it’s all intertwingled. Language is so very non-atomic that it makes atoms realize how lonely they’ve been.
That doesn’t mean that computer software that separates metadata from data is useless. Lord knows I love a good database. But it also means that computer software that can treat anything as metadata depending on what we’re trying to do opens up some interesting possibilities…
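Terry’s functional view of metadata can be sketched in a few lines of code: in a flat set of records, whichever fields you already know act as the metadata, and the field you’re after is the data — the roles flip depending on the query. The records and the `lookup` helper here are invented purely for illustration:

```python
# A sketch of role-relative metadata: any known field can be the search key,
# and any other field the sought value. All names are illustrative.

albums = [
    {"artist": "Bob Dylan", "title": "Blood on the Tracks", "year": 1975},
    {"artist": "Bob Dylan", "title": "Desire", "year": 1976},
]

def lookup(records, known, wanted):
    """Treat the fields we already know ('known') as metadata and
    return the 'wanted' field -- the data -- from each match."""
    return [r[wanted] for r in records
            if all(r.get(k) == v for k, v in known.items())]

# Knowing the title, the title is metadata and the year is data:
print(lookup(albums, {"title": "Desire"}, "year"))   # [1976]
# Knowing the year, the roles flip and the title becomes the data:
print(lookup(albums, {"year": 1975}, "title"))       # ['Blood on the Tracks']
```

The point of the sketch is that nothing in the records themselves marks a field as “metadata”; that status is assigned by the question being asked, which is exactly the functional distinction the post describes.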