
January 8, 2014

What blogging was

At a recent Fellows Hour at the Berkman Center the topic was something like “Whatever happened to blogging?,” with the aim of thinking about how Berkman can take better advantage of blogging as a platform for public discussion. (Fellows Hours are private. No, this is not ironic.) They asked me to begin with some reflections on what blogging once was, because I am old. Rather than repeating what I said, here are some thoughts heavily influenced by the discussion.

And an important preface: What follows is much more of a memoir than a history. I understand that I’m reporting on how blogging looked to someone in a highly privileged position. For example, the blogosphere (remember when that was a word?) as I knew it didn’t count LiveJournal as a blogging service, I think because it wasn’t “writerly” enough, and because of demographic differences that themselves reflect several other biases.

 


I apparently began blogging in 1999, which makes me early to the form. But I didn’t take to it, and it was only on Nov. 15, 2001 that I began in earnest (blogging every day for twelve years counts as earnest, right?), which puts me on the late edge of the first wave, I believe. Blogging at that point was generating some interest among the technorati, but was still far from mainstream notice. Or, to give another measure, for the first year or so, I was a top 100 blogger. (The key to success: If you can’t compete on quality, redefine your market down.)

Blogging mattered to us more deeply than you might today imagine. I’d point to three overall reasons, although I find it not just hard but even painful to try to analyze that period.

1. Presence. I remember strolling through the vendor exhibits at an Internet conference in the mid 1990s. It seemed to be a solid wall of companies large and small each with the same pitch: “Step into our booth and we’ll show you how to make a home page in just 3 minutes.” Everyone was going to have a home page. I wish that had worked out. But even those of us who did have one generally found them a pain in the neck to update; FTPing was even less fun then than it is now.

When blogs came along, they became the way we could have a Web presence that enabled us to react, respond, and provoke. A home page was a painting, a statue. My blog was me. My blog was the Web equivalent of my body. Being-on-the-Web was turning out to be even more important and more fun than we’d thought it would be.

2. Community. Some of us had been arguing from the beginning of the Web that the Web was more a social space than a publishing, informational or commercial space — “more” in the sense of what was driving adoption and what was making the Web the dominant shaping force of our culture. At the turn of the millennium there was no MySpace (2003) and no Facebook (2004). But there was blogging. If blogging enabled us to create a Web presence for ourselves, blogging was also self-consciously about connecting those presences into a community. (Note that such generalizations betray that I am speaking blindly from personal experience.)

That’s why blogrolls were important. Your blogroll was a list of links to the bloggers you read and engaged with. It was a way of sending people away from your site into the care of someone else who would offer up her own blogroll. Blogrolls were an early social network.

At least among my set of bloggers, we tried to engage with one another and to do so in ways that would build community. We’d “retweet” and comment on other people’s posts, trying to add value to the discussion. Of course not everyone played by those rules, but some of us had hope.

And it worked. I made friendships through blogging that have lasted to this day, sometimes without ever having been in the same physical space.

(It says something about the strength of our community that it was only in 2005 that I wrote a post titled No, I’m not keeping up with your blog. Until that point, keeping up was sort of possible.)

3. Disruption. We were aware that the practice of blogging upset many assumptions about who gets to speak, how we speak, and who is an authority. Although blogging is now taken for granted at best and can seem quaint at worst, we thought we were participating in a revolution. And we were somewhat right. The invisibility of the effects of blogging — what we take for granted — is a sign of the revolution’s success. The changes are real but not as widespread or deep as we’d hoped.

Of course, blogging was just one of the mechanisms for delivering the promise of the Net that had us so excited in the first place. The revolution is incomplete, yet it runs deeper than we usually acknowledge.


To recapture some of the fervor, it might be helpful to consider what blogging was understood in contrast to. Here are some of the distinctions discussed at the time.

Experts vs. Bloggers. Experts earned the right to be heard. Bloggers signed up for a free account somewhere. Bloggers therefore add more noise than signal to the discussion. (Except: Much expertise has migrated to blogs, blogs have uncovered many experts, and the networking of bloggy knowledge makes a real difference.)

Professionals vs. Amateurs. Amateurs could not produce material as good as professionals because professionals have gone through some controlled process to gain that status. See “Experts vs. Bloggers.”

Newsletters vs. Posts. Newsletters and ‘zines (remember when that was a word?) lowered the barrier to individuals posting their ideas in a way that built a form of Web presence. Blogs intersected uncomfortably with many online newsletters (including mine). Because it was assumed that a successful blog needed new posts every day or so, content for blogs tended to be shorter and more tentative than content in newsletters.

Paid vs. Free. Many professionals simply couldn’t understand how or why bloggers would work for free. It was a brand new ecosystem. (I remember during an interview on the local Boston PBS channel having to insist repeatedly that, no, I really really wasn’t making any money blogging.)

Good vs. Fast. If you’re writing a couple of posts a day, you don’t have time to do a lot of revising. On the other hand, this made blogging more conversational and more human (where “human” = fallible, imperfect, in need of a spelpchecker).

One-way vs. Engaged. Writers rarely got to see the reaction of their readers, and even more rarely were able to engage with readers. But blogs were designed to mix it up with readers and other bloggers: permalinks were invented for this very purpose, as were comment sections, RSS feeds, etc.

Owned vs. Shared. I don’t mean this to refer to copyright, although that often was an important distinction between old media and blogs. Rather, in seeing how your words got taken up by other bloggers, you got to see just how little ownership writers have ever had over their ideas. If seeing your work get appropriated by your readers made you uncomfortable, you either didn’t blog or you stopped up your ears and covered your eyes so you could simulate the experience of a mainstream columnist.

Reputation vs. Presence. Old-style writing could make your reputation. Blogging gave you an actual presence. It was you on the Web.

Writing vs. Conversation. Some bloggers posted without engaging, but the prototypical blogger treated a post as one statement in a continuing conversation. That often made the tone more conversational and lowered the demand that one present the final word on some topic.

Journalists vs. Bloggers. This was a big topic of discussion. Journalists worried that they were going to be replaced by incompetent amateurs. I was at an early full-day discussion at the Berkman Center between Big Time Journalists and Big Time Bloggers at which one of the bloggers was convinced that foreign correspondents would be replaced by bloggers crowd-sourcing the news (except this was before Jeff Howe [twitter: crowdsourcing] had coined the term “crowd-sourcing”). It was very unclear what the relationship between journalism and blogging would be. At this meeting, the journalists felt threatened and the bloggers suffered a bad case of Premature Triumphalism.

Objectivity vs. Transparency. Journalists were also quite concerned about the fact that bloggers wrote in their own voice and made their personal points of view known. Many journalists — probably most of them — still believe that letting readers know about their own political stances, etc., would damage their credibility. I still disagree.

I was among the 30 bloggers given press credentials at the 2004 Democratic National Convention — which was seen as a milestone in the course of blogging’s short history — and attended the press conference for bloggers put on by the DNC. Among the people they brought forward (including not-yet-Senator Obama) was Walter Mears, a veteran and Pulitzer-winning journalist, who had just started a political blog for the Associated Press. I asked whom he was going to vote for, but he demurred: if he told us, how could we trust his writing? I replied something like, “Then how will we trust your blog?” Transparency is the new objectivity, or so I’ve been told.

It is still the case that for the prototypical blog, it’d be weird not to know where the blogger stands on the issues she’s writing about. On the other hand, in this era of paid content, I personally think it’s especially incumbent on bloggers to be highly explicit not only about where they are starting from, but who (if anyone) is paying the bills. (Here’s my disclosure statement.)

 


For me, it was Clay Shirky’s Power Law post that rang the tocsin. His analysis showed that the blogosphere wasn’t a smooth ball where everyone had an equal voice. Rather, it was dominated by a handful of sites that pulled enormous numbers, followed by a loooooooooong tail of sites with a few followers. The old pernicious topology had reasserted itself. We should have known that it would, and it took a while for the miserable fact to sink in.

Yet there was hope in that long tail. As Chris Anderson pointed out in a book and article, the area under the long tail is bigger than the area under the short head. For vendors, that means there’s lots of money in the long tail. For bloggers that means there are lots of readers and conversationalists under the long tail. More important, the long tail of blogs was never homogeneous; the small clusters that formed around particular interests can have tremendous value that the short head can never deliver.
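To make the head-versus-tail arithmetic concrete, here is a toy sketch (invented numbers, not Shirky’s or Anderson’s data) showing that with a Zipf-like 1/rank distribution of readers across blogs, no tail blog comes close to a head blog, yet the tail in aggregate can outweigh the head:

```python
# Toy model: readership falls off as 1/rank (a Zipf-like power law).
# All numbers are invented for illustration.
N = 100_000                                    # hypothetical number of blogs
readers = [1_000_000 / rank for rank in range(1, N + 1)]

head = sum(readers[:100])                      # the top-100 "short head"
tail = sum(readers[100:])                      # everyone else: the long tail
print(f"head: {head:,.0f}  tail: {tail:,.0f}  tail/head: {tail / head:.2f}")
# Prints a tail/head ratio of about 1.3: the long tail collectively
# out-reads the short head, even though its individual blogs are tiny.
```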

So, were we fools living in a dream world during the early days of blogging? I’d be happy to say yes and be done with it. But it’s not that simple. The expectations around engagement, transparency, and immediacy for mainstream writing have changed in part because of blogs. We have changed where we turn for analysis, if not for news. We expect the Web to be easy to post to. We expect conversation. We are more comfortable with informal, personal writing. We get more pissed off when people write in corporate or safely political voices. We want everyone to be human and to be willing to talk with us in public.

So, from my point of view, it’s not simply that the blogosphere got so big that it burst. First, the overall media landscape does look more like the old landscape than the early blogosphere did, but at the more local level – where local refers to interests – the shape and values of the old blogosphere are often maintained. Second, the characteristics and values of the blogosphere have spread beyond bloggers, shaping our expectations of the online world and even some of the offline world.

Blogs live.

 


[The next day:] Suw Charman-Anderson’s comment (below) expresses beautifully much of what this post struggles to say. And it’s wonderful to hear from my bloggy friends.


April 2, 2013

[berkman] Anil Dash on “The Web We Lost”

Anil Dash is giving a Berkman lunchtime talk, titled “The Web We Lost.” He begins by pointing out that the title of his talk implies a commonality that at least once existed.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

[Light editing on April 3 2013.]

Anil puts up an icon that is a symbol of privately-owned public spaces in New York City. Businesses create these spaces in order to be allowed to build buildings taller than the zoning requirements allow. These are sorta kinda like parks but are not. E.g., Occupy isn’t in Zuccotti Park any more because the space is a privately-owned public space, not a park. “We need to understand the distinction” between the spaces we think are public and the ones that are privately owned.

We find out about these when we transgress rules. We expect to be able to transgress in public spaces, but in these privately-owned spaces we cannot. E.g., Improv Everywhere needs to operate anonymously to perform in these spaces. Anil asks us to imagine “a secretive, private ivy league club.” He is the son of immigrants and didn’t go to college. “A space even as welcoming as this one [Harvard Berkman] can seem intimidating.” E.g., Facebook was built as a private club. It welcomes everyone now, but it still doesn’t feel like it’s ours. It’s very hard for a business to get much past its origins.

One result of online privately-owned public spaces is “the wholesale destruction of your wedding photos.” When people lose them in a fire, they are distraught because those photos cannot be replaced. Yet every day we hear about a startup that “succeeds” by selling out, and then destroying the content that it had gathered. We’ve all gotten the emails that say: “Good news! 1. We’re getting rich. 2. You’re not. 3. We’re deleting your wedding photos.” They can do this because of the terms of service that none of us read but that give them carte blanche. We tend to look at this as simply the cost of doing business with the site.

But don’t see it that way, Anil urges. “This is actually a battle” against the values of the early Web. In the mid to late 1990s, the social Web arose. There was a time when it was a meaningful thing to say that you’re a blogger. It was distinctive. Now being introduced as a blogger “is a little bit like being introduced as an emailer.” “No one’s a Facebooker.” The idea that there was a culture with shared values has been dismantled.

He challenges himself to substantiate this:

“We have a lot of software that forbids journalism.” He refers to the iOS [iPhone operating system] Terms of Service for app developers that includes text that says, literally: “If you want to criticize a religion, write a book.” You can distribute that book through the Apple bookstore, but Apple doesn’t want you writing apps that criticize religion. Apple enforces an anti-journalism rule, banning an app that shows where drone strikes have been.

Less visibly, the law is being bent “to make our controlling our data illegal.” All the social networks operate as common carriers — neutral substrates — except when it comes to monetizing. The boundaries are unclear: I can sing “Happy Birthday” to a child at home, and I can do it over FaceTime, but I can’t put it up at YouTube [because of copyright]. It’s very open-ended and difficult to figure. “Now we have the industry that creates the social network implicitly interested in getting involved in how IP laws evolve.” When the Google home page encourages visitors to call their senators against SOPA/PIPA, we have what those of us against Citizens United oppose: we’re asking a big company to encourage people to act politically in a particular way. At the same time, we’re letting these companies capture our words and works and put them under IP law.

A decade ago, metadata was all the rage among the geeks. You could tag, geo-tag, or machine-tag Flickr photos. Flickr is from the old community. That’s why you can still do Creative Commons searches at Flickr. But you can’t on Instagram. They don’t care about metadata. From an end-user point of view, RSS is out of favor. The new companies are not investing in creating metadata to make their work discoverable and shareable.

At the old Suck.com, hovering on a link would reveal a punchline. Now, with the introduction of Adlinks and AdSense, Google transformed links from the informative and aesthetic, to an economic tool for search engine optimization (SEO). Within less than 6 months, linkspam was spawned. Today Facebook’s EdgeRank is based on the idea that “Likes” are an expression of your intent, which determines how FB charges for ads. We’ll see like-spammers and all the rest we saw with links. “These gestural things that were editorial or indicators of intent get corrupted right away.” There are still little islands, but for the most part these gestures that used to be about me telling you that I like your work are becoming economic actions.

Anil says that a while ago when people clicked on a link from Facebook to his blog, FB popped up a warning notice saying that it might be dangerous to go there. “The assumption is that my site is less trustworthy than theirs. Let’s say that’s true. Let’s say I’m trying to steal all your privacy and they’re not.” [audience laughs] He has FB comments on his site. To get this, FB has to validate your page. “I explicitly opted in to the Facebook ecology” in part to prove he’s a moderate and in part as a convenience to his readers. At the same time, FB was letting the Washington Post and The Guardian publish within the FB walls, and FB never gave that warning when you clicked on their links. A friend at FB told Anil that the popup was a bug, which might be true. But that means “in the best case, we’re stuck fixing their bugs on our budgets.” (The worst case is that FB is trying to shunt traffic away from other sites.)

And this is true for all things that compete with the Web. The ideas locked into apps won’t survive the company’s acquisition, but this is true when we change devices as well. “Content tied to devices dies when those devices become obsolete.” We have “given up on standard formats.” “Those of us who cared about this stuff…have lost,” overall. Very few apps support standard formats, with jpg and html as exceptions. Likes and follows, etc., all use undocumented proprietary formats. The most dramatic shift: we’ve lost the expectation that they would be interoperable. The Web was built out of interoperability. “This went away with almost no public discourse about the implications of it.”

The most important implication of all this comes when thinking about the Web as a public space. When the President goes on FB, we think about it as a public space, but it’s not, and dissent and transgression are not permitted. “Terms of Service and IP trump the Constitution.” E.g., FB could have transformed every single message you put on FB during the election into its opposite, and it would have been within its ToS rights. After Hurricane Sandy, public relief officials were broadcasting messages only through FB. “You had to be locked into FB to see where public relief was happening. A striking change.”

What’s most at risk are the words of everyday people. “It’s never the Pharaoh’s words that are lost to history.” Very few people opt out of FB. Anil is still on FB because he doesn’t want to lose contact with his in-laws. [See Dan Gillmor’s talk last week.] Without these privately-owned public spaces, Anil wouldn’t have been invited to Harvard; it’s how he made his name.

“The main reason this shift happened in the social web is the arrogance of the people who cared about the social web in the early days…We did sincerely care about enabling all these positive things. But the way we went about it was so arrogant that Mark Zuckerberg’s vision seemed more appealing, which is appalling.” An Ivy League kid’s software designed for a privileged, exclusive elite turned out to be more appealing than what folks like Anil were building. “If we had been listening more, and a little more open in self-criticism, it would have been very valuable.”

There was a lot of triumphalism after PIPA/SOPA went down, but it took a huge amount of hyperbole: “Hollywood wants to destroy the First Amendment, etc.” It worked once but it doesn’t scale. The willingness to pat ourselves on our back uncritically led us to vilify people who support creative industries. That comes from the arrogance that they’re dinosaurs, etc. People should see us being publicly critical of ourselves. For something to seem less inclusive than FB or Apple — incredibly arrogant, non-egalitarian cultures — that’s something we should look at very self-critically.

Some of us want to say “But it’s only some of the Web.” We built the Web for pages, but increasingly we’re moving from pages to streams (most recently-updated on top, generally), on our phones but also on bigger screens. Sites that were pages have become streams. E.g., YouTube and Yahoo. These streams feel like apps, not pages. Our arrogance keeps us thinking that the Web is still about pages. Nope. The percentage of time we spend online looking at streams is rapidly increasing. It is already dominant. This is important because these streams are controlled access. The host controls how we experience the content. “This is part of how they’re controlling the conversation.” No Open Web advocate has created a stream that’s anywhere near as popular as the sites we’re going to. The geeks tend to fight the last battle. “Let’s make an open source version of the current thing.” Instead, geeks need to think about creating a new kind of stream. People never switch to more open apps. (Anil says Firefox was an exception.)

So, what do we do? Social technologies follow patterns. It’s cyclical. (E.g., “mainframes being rebranded as The Cloud.”) Google is doing just about everything Microsoft was doing in the late 1990s. We should expect a reaction against their overreach. With Microsoft, “policy really worked.” The Consent Decree made IE an afterthought for developers. Public policy can be an important part of this change. “There’s no question” that policy over social software is coming.

Also, some “apps want to do the right thing.” Anil’s ThinkUp demonstrates this. We need to be making apps that people actually want, not ones that are just open. “Are you being more attentive to what users want than Mark Zuckerberg is?” We need to shepherd and coach the apps that want to do the right thing. We count on 23-year-olds to do this, but they were in 5th grade when the environment was open. It’s very hard to learn the history of the personal software industry and how it impacted culture. “What happened in the desktop office suite wars?” [Ah, memories!] We should be learning from such things.

And we can learn things from our own data. “It’s much easier for me to check my heart-rate than how often I’m reading Twitter.”

Fortunately, there are still institutions that care about a healthy Web. At one point there was a conflict between federal law and Terms of Service: the White House was archiving comments on its FB wall, whereas FB said you couldn’t archive for more than 24 hrs.

We should remember that ToS isn’t law. Geeks will hack software but treat ToS as sacred. Our culture is negatively impacted by ToS and we should reclaim our agency over them. “We should think about how to organize action around specific clauses in ToS.” In fact, “people have already chosen a path of civil disobedience.” E.g., search YouTube for “no infringement intended.” “It’s like poetry.” They’re saying “I’m not trying to step on your toes, but the world needs to see this.” “I’m so inspired by this.” If millions of teenagers assembled to engage in civil disobedience, we’d be amazed. They do online. They feel they need to transgress because of a creative urge, or because it’s speech with a friend not an act of publishing. “That’s the opportunity. That’s the exciting part. People are doing this every single day.”

[I couldn’t capture the excellent Q&A because I was running the microphone around.]

 


The video of the talk will be posted here.


August 7, 2011

The point of Web 2.0 is its problem

I liked this post in the Guardian by John Naughton about the future of Web 2.0, and I’m always delighted to be mentioned in the same paragraph as Paul Graham, but I want to keep insisting that Web 2.0 was not the moment when the Web moved from publishing platform to social platform. One of the main points of Cluetrain (1999) was in fact that the Web from its beginning was thrilling us because it was a social place, a set of conversations, a party.

Now, it is certainly true that with Web 2.0, the Web became more social, easier to socialize in, undeniably social. That’s why Web 2.0 is a useful concept.

My problem is really with the “point” in Web 2 Point Oh, since it can imply a point in time when the Web became social, as if before that the Web was merely a publishing platform. Nah. It’s been social since the moment browsers started appearing.


June 9, 2009

Meaning-mining Wikipedia

DBpedia extracts information from Wikipedia, building a database that you can query. This isn’t easy because much of the information in Wikipedia is unstructured. On the other hand, there’s an awful lot that’s structured enough so that an algorithm can reliably deduce the semantic content from the language and the layout. For example, the boxed info on bio pages is pretty standardized, so your algorithm can usually assume that the text that follows “Born: ” is a date and not a place name. As the DBpedia site says:

The DBpedia knowledge base currently describes more than 2.6 million things, including at least 213,000 persons, 328,000 places, 57,000 music albums, 36,000 films, 20,000 companies. The knowledge base consists of 274 million pieces of information (RDF triples). It features labels and short abstracts for these things in 30 different languages; 609,000 links to images and 3,150,000 links to external web pages; 4,878,100 external links into other RDF datasets, 415,000 Wikipedia categories, and 75,000 YAGO categories.

Over time, the site will get better and better at extracting info from Wikipedia. And as it does so, it’s building a generalized corpus of query-able knowledge.
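To give a feel for what that extraction involves, here’s a toy sketch (the page title, field text, and pattern are all invented; DBpedia’s real pipeline is far more elaborate):

```python
import re

# Toy DBpedia-style extraction: turn a semi-structured infobox field
# into a machine-readable (subject, predicate, object) triple.
page_title = "Alice_Example"          # hypothetical Wikipedia page
infobox_line = "Born: 9 June 1957"    # invented infobox field

match = re.match(r"Born:\s*(\d{1,2} \w+ \d{4})$", infobox_line)
if match:
    triple = (page_title, "birthDate", match.group(1))
    print(triple)  # ('Alice_Example', 'birthDate', '9 June 1957')
```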

As of now, the means of querying the knowledge requires some familiarity with building database queries. But the world has accumulated lots of facility with putting front-ends onto databases. DBpedia is working on something different: accumulating an encyclopedic database, open to all and expressed in the open language of the Semantic Web.
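If you’re curious what such a query looks like, here’s a minimal Python sketch against DBpedia’s public SPARQL endpoint. The endpoint and the JSON results format are real; the particular query is just an example:

```python
import requests

# Ask DBpedia's SPARQL endpoint for a few people born in Boston.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?person ?birthDate WHERE {
  ?person dbo:birthPlace dbr:Boston ;
          dbo:birthDate ?birthDate .
} LIMIT 5
"""

resp = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
for row in resp.json()["results"]["bindings"]:
    print(row["person"]["value"], row["birthDate"]["value"])
```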

(Via Mirek Sopek.)


June 8, 2009

Social media are jazz

Jeneane’s got a great post for businesses that think they’re playing well in the social media sandbox. She asks: You’re playing, but are you playing jazz?



May 8, 2009

Robin Chase on the smart grid, smart cars, and the power of mesh networks

Pardon the self-bloggery-floggery, but Wired.com has just posted an article of mine that presents Robin “ZipCar” Chase’s argument that the smart grid and smart cars need to be thought about together. Actually, she wants all the infrastructures we’re now building out to adopt open, Net standards, and would prefer that the Internet of Everything be meshed up together. (Time Mag just named Robin as one of the world’s 100 most influential people. We can only hope that’s true.)

The article is currently on Wired’s automotive page, but it may be moved to the main page today or tomorrow.



May 7, 2009

Wolfram podcast

My interview with Stephen Wolfram about WolframAlpha is now available. Some other me-based resources:

The unedited version weighs in at a full 55 minutes. The edited version will spare you some of my throat-clearing, and some dumb questions.

A post about what I think the significance of WolframAlpha will be.

Live blog of Wolfram’s presentation at Harvard.

Wolfram’s presentation at Harvard.



May 4, 2009

How important is WolframAlpha?

The Independent calls WolframAlpha “An invention that could change the Internet forever.” It concludes: “Wolfram Alpha has the potential to become one of the biggest names on the planet.”

Nova Spivack, a smart Semantic Web guy, says it could be as important as Google.

Ton Zijlstra, on the other hand, who knows a thing or two about knowledge and knowledge management, feels like it’s been overhyped. After seeing the video of Wolfram talking at Harvard, Ton writes:

No crawling? Centralized database, adding data from partners? Manual updating? Adding is tricky? Manually adding metadata (curating)? For all its coolness on the front of WolframAlpha, on the back end this sounds like it’s the mechanical turk of the semantic web.

(“The mechanical turk of the semantic web.” Great phrase. And while I’m in parentheses, ReadWriteWeb has useful screenshots of WolframAlpha, and here’s my unedited 55-minute interview with Wolfram.)

I am somewhere in between, definitely over in the Enthusiastic half of the field. I think WolframAlpha [WA] will become a standard part of the Internet’s tool set, but that it will not be transformative.

WA works because it’s curated. Real human beings decide what topics to include (geography but not 6 Degrees of Courtney Love), which data to ingest, what metadata is worth capturing, how that metadata is interrelated (= an ontology), which correlations to present to the user when she queries it (daily tonnage of fish captured by the French compared to daily production of garbage in NYC), and how that information should be presented. Wolfram insists that an expert be present in each data stream to ensure the quality of the data. Given all that human intervention, WA then performs its algorithmic computations … which are themselves curated. WA is as curated as an almanac.

Curation is a source of its strength. It increases the reliability of the information, it enables the computations, and it lets the results pages present interesting and relevant information far beyond the simple factual answer to the question. The richness of those pages will be a big factor in the site’s success.

Curation is also WA’s limitation. If it stays purely curated, without areas in which the Big Anyone can contribute, it won’t be able to grow at Internet speeds. Someone with a good idea — provide info on meds and interactions, or add recipes so ingredients can be mashed up with nutritional and ecological info — will have to suggest it to WolframAlpha, Inc. and hope they take it up. (You could do this sorta kinda through the API, but not get the scaling effects of actually adding data to the system.) And WA will suffer from the perspectival problems inevitable in all curated systems: WA reflects Stephen Wolfram’s interests and perspective. It covers what he thinks is interesting. It covers it from his point of view. It will have to make decisions on topics for which there are no good answers: Is Pluto a planet? Does Scientology go on the list of religions? Does the page on rabbits include nutritional information about rabbit meat? (That, by the way, was Wolfram’s example in my interview of him. If you look at the site from Europe, a “rabbit” query does include the nutritional info, but not if you log in from a US IP address.) But WA doesn’t have to scale up to Internet Supersize to be supersized useful.
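(An aside on that API: here is a minimal sketch of what querying WA programmatically looks like, using this post’s fish example. The v2 query endpoint is real; the app ID is a placeholder you’d get by registering as a developer.)

```python
import requests
import xml.etree.ElementTree as ET

# Ask WolframAlpha a computational question through its v2 query API.
resp = requests.get(
    "https://api.wolframalpha.com/v2/query",
    params={
        "appid": "YOUR_APP_ID",   # placeholder; requires registering for a key
        "input": "tons of fish caught by France / average weight of a French person",
        "format": "plaintext",
    },
    timeout=30,
)

# Answers come back as XML "pods", one per titled block of the results page.
for pod in ET.fromstring(resp.text).iter("pod"):
    print(pod.get("title"))
    for pt in pod.iter("plaintext"):
        if pt.text:
            print("  " + pt.text)
```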

So, given those strengths and limitations, how important is WA?

Once people figure out what types of questions it’s good at, I think it will become a standard part of our tools, and for some areas of inquiry, it may be indispensable. I don’t know those areas well enough to give an example that will hold up, but I can imagine WA becoming the first place geneticists go when they have a question about a gene sequence or chemists who want to know about a molecule. I think it is likely to be so useful within particular fields that it becomes the standard place to look first…Like IMDB.com for movies, except for broad, multiple fields, with the ability to cross-compute.

But more broadly, is WA the next Google? Does it transform the Internet?

I don’t think so. Its computational abilities mean it does something not currently done (or not done well enough for a crowd of users), and the aesthetics of its responses make it quite accessible. But how many computational questions do you have a day? If you want to know how many tons of fish France catches, WA will work as an almanac. But that’s not transformational. If you want to know how many tons divided by the average weight of a French person, WA is for you. But the computational uses that are distinctive of WA and for which WA will frequently be an astounding tool are not frequent enough for WA to be transformational on the order of a Google or Wikipedia.

There are at least two other ways it could be transformational, however.

First, its biggest effect may be on metadata. If WA takes off, as I suspect it will, people and organizations will want to get their data into it. But to contribute their data, they will have to put it into WA’s metadata schema. Those schema then become a standard way we organize data. WA could be the killer app of the Semantic Web … the app that gives people both a motive for putting their data into ontologies and a standardized set of ontologies that makes it easy to do so.

Second, a robust computational engine with access to a very wide array of data is a new idea on the Internet. (Ok, nothing is new. But WA is going to bring this idea to mainstream awareness.) That transforms our expectations, just as Wikipedia is important not just because it’s a great encyclopedia but because it proved the power of collaborative crowds. But, WA’s lesson — there’s more that can be computed than we ever imagined — isn’t as counter-intuitive as Wikipedia’s, so it is not as apple-cart-upsetting, so it’s not as transformational. Our cultural reaction to Wikipedia is to be amazed by what we’ve done. With WA, we are likely to be amazed by what Wolfram has done.

That is the final reason why I think WA is not likely to be as big a deal as Google or Wikipedia, and I say this while being enthusiastic — wowed, even — about WA. WA’s big benefit is that it answers questions authoritatively. WA nails facts down. (Please take the discussion about facts in a postmodern age into the comments section. Thank you.) It thus ends conversation. Google and Wikipedia aim at continuing and even provoking conversation. They are rich with links and pointers. Even as Wikipedia provides a narrative that it hopes is reliable, it takes every opportunity to get you to go to a new page. WA does have links — including links to Wikipedia — but most are hidden one click below the surface. So, the distinction I’m drawing is far from absolute. Nevertheless, it seems right to me: WA is designed to get you out of a state of doubt by showing you a simple, accurate, reliable, true answer to your question. That’s an important service, but answers can be dead-ends on the Web: you get your answer and get off. WA as question-answerer bookends WA’s curated creation process: A relatively (not totally) closed process that has a great deal of value, but keeps it from the participatory model that generally has had the biggest effects on the Net.

Providing solid, reliable answers to difficult questions is hugely valuable. WolframAlpha’s approach is ambitious and brilliant. Wolfram is a genius. But that’s not enough to fundamentally alter the Net.

Nevertheless, I am wowed.


January 29, 2009

David Pogue twitters in public

David Pogue, the NY Times’ tech-for-the-people guy, did a little experiment when giving a talk in Las Vegas: To demo Twitter, he live-twittered a request for hiccup cures. It’s an amusing list of tweets, with a twist in the road in the second half…



January 13, 2009

[berkman] Berkman lunch: Andrew McAfee on Enterprise 2.0

Andrew McAfee, the Enterprise 2.0 guy, is giving a Berkman lunchtime talk. He begins by defining the term as “the use of emergent social software platforms by organizations in pursuit of their goals.” This technology tends to be emergent, bottom up, etc. [NOTE: I’m live blogging, making mistakes, missing stuff, creating typos, etc. Reader beware.] He contrasts this with ERP systems that are top-down, highly specific, etc. “The huge shift” is that the 2.0 tools “make an effort to get out of the way of the users at the front” but then allow structure to emerge.

“The Net is the world’s largest library. The problem is that all the books are on the floor,” he says, citing an old saw.

Companies are interested in what’s going on because they’ve used Wikipedia or their kids are on Facebook. But companies want to know what the tools are and how they’re different. Also, they ask, “Why do I care? What’s in it for me as a pragmatic businessperson?”

To answer these questions, Andrew points to what a knowledge worker’s view of the enterprise is, from the inside. At the core are a small group of people with whom she has strong ties. Then there’s a larger group of people with whom she has weak ties. Then there’s a set of people the knowledge worker should be tied to, but is not. [He draws concentric circles.]

Three points.
1. We spend a lot of time strengthening ties that are already strong.

2. The weaker and potential ties are hugely important. (He cites The Strength of Weak Ties.)

3. Classically inside orgs, “we’ve had lousy technology,” particularly at the outer two rings. How do you keep track of your weak ties? (One solution, he says, is the Christmas-time newsletter from acquaintances.) Corporate directories try to highlight expertise to enhance the third ring, but they don’t work well. Instead, people work their networks.


There’s a fourth ring: Where there are no ties. Strangers who are not going to form any professional bond. But 2.0 enables them to come together for “powerful outcomes.”

Now Andrew looks at prototypical technologies available for each of the four rings. (He notes that these technologies are useful only at those rings.)

1. Strong ties: Wikis, Google Docs, etc. About 2/3 of traditional folks do this by sending email attachments around, but no one is happy about it. Example: VistaPrint Wiki: 18 months, 280 registered users, 12,000 topics, 77,000 page edits.

2. Weak ties: Social networking software. Various social networking tools let you link up networks, e.g., Tweets that point elsewhere. E.g., Facebook at Serena: 90% penetration, 50% active users. Helped with new hires.

3. Potential ties: Blogosphere. Blogging is “narrating your work.” Add a search engine and you can find others interested in the same things. E.g., Intrawest. Andrew points to a post about radiant heated floors, with some helpful commenting, etc. [Great example.] Another example: The 16 US intelligence agencies have installed 2.0 tools, such as Intellipedia, blogging, tagging, etc. This gives access to a pool of info, but, more important, makes connections among brains.

4. No ties: Prediction markets. E.g., Google’s Prediction Markets, inside of companies. These work even when you don’t have that many traders. “Why do we even have forecasting departments in companies?”

Q: Say more about Google prediction markets?
A: [Andrew gives some examples. He talks very quickly.]

Q: [gene] Would prediction markets work less effectively if there weren’t pollsters and forecasting departments? Is this Web 2.0 stuff layered on top of the traditional stuff?
A: Yes, the traders on the Iowa poll are looking at polls. Good point.

Q: Why are these trader markets accurate? Why do we still use polls?
A: Hayek, in the middle of the 20th century, when intellectuals were enthralled with collectivism, said that they had it wrong. A market’s pricing system is a brilliant system for aggregating and transmitting information, said Hayek. These trader markets work because a massive number of traders express their own preferences, values, beliefs. Polling will become less important. And, yes, people try to manipulate these markets, but so far the attempts don’t work very well.

Q: What does this say about science, e.g., the change from using randomized control trial for doing science? E.g., you could run a wiki instead and process the data…
A: So, why doesn’t Merck just set up a prediction market for whether a drug will work? But the FDA wouldn’t accept it.

Q: [me] If you look at an enterprise as a power structure, how does this play?
A: I ask this of companies all the time, and they tend to say they don’t see it. But it’s probably because they’re not looking deeply enough. In the intelligence community, they’re explicitly moving from “need to know” to “responsible to share.”
Q: [me] Although in a rigidly and explicitly structured org like intelligence, there isn’t as much jockeying for power by working the network…
A: [Andrew tells of the use of social networking to gain prominence and position in the intelligence community.]

Q: How public, how shareable should this info be?
A: That’s one of the first concerns management teams express. But people don’t need Web 2.0 tools to walk outside the org with confidential info. Web 2.0 does increase the number of people who have access to the info. But, the intelligence community, for example, understands that there’s a risk to not sharing as well. Too many companies close down their connections too much; they tend to stay at the level of the strong ties. That forecloses the possibility that someone in the other part of the organization might have a contribution to make. E.g., Innocentive anonymized problem statements and posted them on the Net for anyone to work on. Eric Raymond: With enough eyeballs, all bugs are shallow.

Q: What kinds of technologies are likely to be deployed? What types of businesses? What problems?
A: Companies are proud they’ve set up wikis for strongly tied groups, but they’re often walled gardens. Unsurprisingly, tech companies are usually the first to adopt these technologies. It’s not that E2.0 is sweeping all companies, without hesitation or doubt.

Q: Bad behavior?
A: Sure. But there’s also frequently some moderation of bad behavior, in part because inside the org, identity is the default. People generally know how to behave already. “My collection of horror stories is very very thin.”

Q: [doc] Isn’t it really very early? More versions? Fanning out of versions? What?
A: Inside the enterprise, it’s very early days. E2.0 is a prediction. Web 2.0 is much more the norm on the Web. So yes, early days. I find the rise of the Semantic Web as Web 3.0 really really speculative. 2.0 is about people. Web 3.0 is another geek utopia where the machines are in charge and people are out of the way.

Q: I was selling social software solutions to companies in Korea 7 years ago. But after 2-3 years, employees didn’t want to use them because they’re in addition to their work. Is this short term?
A: Socialtext makes a distinction between tools you use in the flow of your job or above the flow. If it’s in the flow, it’ll persist. If you’re serious about it in your organization, put incentives and measurement in place. Some people I respect say that this is 180 degrees wrong.

Q: When will we see a divergence between those who use these tools and are winning, and those who do not and are not?
A: I’ve been doing research on this. Is IT separating winners from losers? Is it irrelevant to competition? It turns out that the more IT an industry consumes, the more winners have been differentiated from losers since about the mid 1990s.

[david horvik] There were attempts to drive social tools inward, but the winner was LinkedIn, which is remarkably outward facing. Are mainstream social products going to be brought into the enterprise? As for whether investing in IT drives winners, there’s a company selling IT to banks. You’d think this is a bad time. But the banks want optimization and efficiency. The only question is how long it takes for something to be recognized as working. It’s interesting to ask when these social media will become recognized. Is twitter replacing blogging? etc. It evolves so quickly.
A: A lot of the management teams I talk with want the pace of technology to slow down. But that’s not going to happen.

[pistachio] Twitter will be big in enterprises. No?
A: Yes. Great tool for strengthening weak ties and potential ties. And Twitter got the asymmetry right. [I.e., not everyone you follow follows you.] And it’s so lightweight to use — 10 seconds to send out a tweet?
Q: What are companies going to see as the issue?
A: They’ve had to internalize so much. It’s weird and frightening to someone who just wants to make dogfood. It’s going to take longer than 6 months.

[posted without proofreading. sorry.]

