Joho the Blog » 2010 » April

April 29, 2010

[berkman] [2b2k] Beth Noveck on White House open government initiatives

Beth Noveck is deputy chief technology officer for open government and leads President Obama’s Open Government Initiative. She is giving a talk at Harvard. She begins by pointing to the citizenry’s lack of faith in government. Without participation, citizens become increasingly alienated, she says. For example: the rise of Tea Parties. A new study says that a civic spirit reduces crime. Another article, in Social Science and Medicine, correlates civic structures and health. She wants to create more opportunities for citizens to engage and for government to engage in civic structures — a “DoSomething.gov,” as she lightly calls it. [NOTE: Liveblogging. Getting things wrong. Missing things. Substituting inelegant partial phrases for Beth's well-formed complete sentences. This is not a reliable report.]

Beth points to the peer to patent project she initiated before she joined the government. It enlists volunteer scientists and engineers to research patent applications, to help a system that is seriously backlogged, and that uses examiners who are not necessarily expert in the areas they’re examining. This crowd-sources patent applications. The Patent Office is studying how to adopt peer to patent. Beth wants to see more of this, to connect scientists and others to the people who make policy decisions. How do we adapt peer to patent more broadly, she asks. How do we do this in a culture that prizes consistency of procedures?

This is not about increasing direct democracy or deliberative democracy, she says. The admin hasn’t used more polls, etc., because the admin is trying to focus on action, not talk. The aim is to figure out ways to increase collaborative work. Next week there’s a White House conference on gov’t innovation, focusing on open grant making and prize-based innovation.

The President’s first executive action was to issue a memorandum on transparency and open gov’t. This was very important, Beth says, because it let the open gov folks in the administration say, “The President says…” President Obama is very committed to this agenda, she says; after all, he is a community organizer in his roots. Simple things like setting up a blog with comments were big steps. It’s about changing the culture. Now, there’s a culture of “leaning forward,” i.e., making commitments to being innovative about how they work. In Dec., every agency was told to come up with its own open govt plan. A directive set a road map: How and when you’re going to inventory all the data in your agency and put it online in raw, machine-readable form? How are you going to engage people in meaningful policy work? How are you going to engage in collaboration within govt and with citizens? On Tuesday, the White House collected self-evaluations, which are then evaluated by Beth’s office and by citizen groups.

How to get there. First, through people. Every agency has someone responsible for open govt. The DoT has 200+ on their open govt committee. Second, through platforms (which, as she says, is Tim O’Reilly’s mantra). E.g., data.gov is a platform.

Transparency is going well, she thinks: White House visitor logs, streaming the health care summit, publishing White House employee salaries. More important is data.gov. 64M hits in under a year. Pew says 40% of respondents have been there. 89M hits on the IT dashboard that puts a user-friendlier interface to govt spending. Agencies are required to put up “high value” data that helps them achieve their core mission. E.g., Dept. of Labor has released 15 yrs of data about workplace exposure to toxic chemicals, advancing its goal of saving workers’ lives. Medicare data helps us understand health care. USDA nutrition data + a campaign to create video games to change the eating habits of the young. Agencies are supposed to ask the public which data they want to see first, in part as a way of spurring participation.

To spur participation, the GSA has been procuring govt-friendly terms of service for social media platforms; they’re available at apps.gov. It’s now trying to acquire innovation prize platforms, etc.

Participation and collaboration are different things, she says. Participation is a known term that has to do with citizens talking with govt. But the exciting new frontier, she says, is about putting problems out to the public for collaborative solving. E.g., Veterans Benefits Admin asked its 19,000 employees how to shorten wait times; within the first week of a brainstorming competition, 7,000 employees signed up and generated 3,000 ideas, the top ten of which are being implemented. E.g., the Army wikified the Army operations manual.

It’s also about connecting the public and private. E.g., the National Archives is making the Federal Register available for free (instead of for $17K/yr), and the Princeton Internet center has made an annotatable version. Carl Malamud has been involved as well. The private sector has announced National Lab Day, to get scientists out into the schools. Two million people signed up.

She says they know they have a lot to do. E.g., agencies are sitting on exabytes of info, some of which is on paper. Expert networking: We have got to learn how to improve upon the model of federal advisory commissions, which draw on the same group of 20 people. It’s not as effective as a peer to patent model, with volunteers pooled from millions of people. And we don’t have much experience using collaboration tools in govt. There is a recognition spreading throughout the govt that we are not the only experts, that there are networks of experts across the country and outside of govt. But ultimately, she says, this is about restoring trust in govt.

Q: Any strategies for developing tools for collaborative development of policy?
A: Brainstorming techniques have been taken up quickly. Thirty agencies are involved in thinking about this. It’s not about the tools, but thinking about the practices. On the other hand, we used this tool with the public to develop open govt plans, but it wasn’t promoted enough; it’s not the tools but the processes. Beth’s office acts as an internal consultancy, but people are learning from one another. This started with the President making a statement, modeling it in the White House, making the tools available…It’s a process of creating a culture and then the vehicles for sharing.

Q: Who winnowed the Veterans agency’s 3,000 suggestions?
A: The VA ideas were generated in local offices and got passed up. In more open processes, they require registration. They’ve used public thumbs up and down, with a flag for “off topic” that would shrink the posting just to one link; the White House lawyers decided that that was acceptable so long as the public was doing the rating. So the UFO and “birther” comments got rated down. They used a wiki tool (MixedInk) so the public could write policy drafts; that wiki let users vote on changes. When there are projects with millions of responses, it will be very hard; it makes more sense to proliferate opportunities for smaller levels of participation.

A: We’re crowd-sourcing expertise. In peer to patent, we’re not asking people if they like the patent or think it should be patented; we’re asking if they have info that is relevant. We are looking for factual info, recognizing that even that info is value-laden. We’re not asking about what people feel, at least initially. It’s not about fostering contentious debate, but about informed conversation.

Q: What do you learn from countries that are ahead of the curve on e-democ, e.g., Estonia? Estonia learned 8 yrs ago that you have to ask people to register in online conversations…
A: Great point. We’re now getting up from our desks for the first time. We’re meeting with the Netherlands, Norway, Estonia, etc. And a lot of what we do is based on Al Gore’s reinventing govt work. There’s a movement spreading, particularly on transparency and data.gov.

Q: Is transparency always a good approach? Are there fields where you want to keep the public out so you can talk without being criticized?
A: Yes. We have to be careful of personal privacy and national security. Data sets are reviewed for both before they go up on data.gov. I’d rather err on the side of transparency and openness to get us over the hump of sharing what they should be sharing. There’s value in closed-door brainstorming so you can float dumb ideas. We’re trying to foster a culture of experimentation and fearlessness.

[I think it's incredible that we have people like Beth in the White House working on open government. Amazing.]

2 Comments »

April 28, 2010

In defense of Powerpoint

The NY Times has an article by Elisabeth Bumiller about the Army’s disenchantment with Powerpoint. The complaints: it leads people to over-simplify complex problems (although the centerpiece of the article is a graphic that is too complex), and people spend too much time putting together text and graphic decks.

Sure. Fine. We have all sat through presentations at which someone reads through the 15 6-pt bullets on each slide, until by the time he reaches the 19 ways the company can synergize verticalized asymmetries, we’re begging for an aneurysm and don’t care if it’s his or ours. Sure, we’ve all been there. But …

Powerpoint imposed upon wandering business reports and updates a needed and welcome discipline of thought. Powerpoint forced presenters to break what they wanted to say into a set of headlines, and then think about how each headline was supported or elaborated. They could see how many slides they were taking to make their points. Bulleted lists focused the mind.

Powerpoint’s model of thought is better than the ramblings of a self-important business guy who’s grabbed the floor for as long as he feels he’s interesting, but it’s still quite a limited model. Powerpoint encourages us to think in a sequence of brief points, and doesn’t encourage us to express or make visible the relationship among the points. It doesn’t have a built-in way to indicate the clustering of points into a section. You can always create a sub-title slide with a distinctive look, but Powerpoint itself doesn’t encourage us to think that way. For example, it might have a breadcrumbs widget that shows the path we’ve been down as a standard part of slides, but it doesn’t.

What would a presentation system look like that expressed the relationships among the parts? I don’t know, but it probably wouldn’t be a set of discrete rectangles. The mind mapping programs are one approach (although I still haven’t found one that lets me disclose one leaf on a branch at a time, which is often necessary for narrative drama) and Prezi takes another.

Anyway, all I wanted to say is that we ought to remember that Powerpoint made business thought and expression more rigorous and structured.

10 Comments »

Oregon educational system offers Google Apps

Oregon has signed a deal with Google that enables any school district to provide Google Apps for Education [faq] for free to its students and teachers. This includes Google Gmail, Calendar, Contacts, Sites and Pages, Talk, Video, Groups, Docs, and Postini email management. Google Apps for Ed lets the school district use its own domain names rather than Google.com.

Google Apps for Ed is always free to schools, so the effect of this contract will depend on whether these are simply services students can use, or if students are actually expected to do their work with Google Docs et al. If the latter, this would be a step toward establishing Google (and its cloudy ways) as the educational default, the way Apple’s educational program inserted Macishness into the brains of our young. One Google Account Per Child!

It will be interesting also, of course, if it decreases the purchase of other software. (Google says it will save Oregon $1.5M, but doesn’t say how.)

You can read the contract here. The system defaults to ad-free services, although it allows District Administrators to opt to turn ads on. Why they would is unclear, since the contract stipulates that all the revenues would “be retained by Google and will not be subject to revenue sharing.” It prevents Google from using personal info “for any purpose related to serving Ads” unless the ads are turned on. It is all provided for free for the term of the contract, although if Google adds services or a “professional version,” a fee for them may be negotiated.

5 Comments »

How the Left and Right use blogs

A new paper from the Berkman Center:

A Tale of Two Blogospheres: Discursive Practices on the Left and the Right, by Yochai Benkler, Aaron Shaw, and Victoria Stodden

This paper compares the practices of discursive production and participation among top U.S. political blogs on the left, right, and center during the summer of 2008 and, based on qualitative coding of the top 155, finds evidence of an association between ideological affiliation and the technologies, institutions, and practices of participation across political blogs. Sites on the left adopt more participatory technical platforms; are comprised of significantly fewer sole-authored sites; include user blogs; maintain more fluid boundaries between secondary and primary content; include longer narrative and discussion posts; and (among the top half of the blogs in the paper’s sample) more often use blogs as platforms for mobilization as well as discursive production.

The variations observed between the left and right wings of the U.S. political blogosphere provide insights into how varied patterns of technological adoption and use within a single society may produce distinct effects on democracy and the public sphere. The study also suggests that the prevailing techniques of domain-based link analysis used to study the political blogosphere to date may have fundamental limitations.

To read the full abstract and download the paper, visit http://cyber.law.harvard.edu/publications/2010/Tale_Two_Blogospheres_Discursive_Practices_Left_Right

Be the first to comment »

April 27, 2010

[berkman] Luis von Ahn on free lunches, captcha, and tags

Luis von Ahn of Carnegie Mellon University is giving a Berkman lunchtime talk. [NOTE: I'm liveblogging. I'm making mistakes, leaving stuff out, paraphrasing, getting things wrong. This is an unreliable record.]

Luis invented captchas, the random characters you have to type in to convince a web page that you are a human and not a hostile software program. (He shows randomly generated sequences that happened to spell out “wait” and “restart.”) Captchas are useful, he says, when you’re trying to prevent people from gaming a system by writing a program to enter data robotically. They’re also useful to prevent spammers from signing up for free email accounts. To get around this, spammers have started up sweat shops where humans type captchas all day long; it costs the spammers about $0.33/account. And some porn companies ask users to type in a captcha to see photos; the captchas are drawn from email account applications. Damn clever!

He shows some variants. A Russian asks you to solve a mathematical limit. In India one asks you to solve a circuit. Luis says these aren’t all that effective because computers can solve both problems, but they’re still better than the “what is 1 + 1?” captchas he’s found on US sites.

He says that about 200M captchas are typed every day. He was proud of that until he realized it takes about 10 seconds to type them, so his invention is wasting 500,000 hours per day. So, he wondered if there was a way to use captchas to solve some humungous problem ten seconds at a time. Result: ReCAPTCHA. For books written before 1900, the type is weak and about 30% of the text cannot be recognized by OCR. So, now many captchas ask you to type in a word unrecognized when OCR’ing a book. (The system knows which words are unrecognized by running multiple OCR programs; ReCAPTCHA uses those words.) To make sure that it’s not a software program typing in random words, ReCAPTCHA shows the user two words, one of which is known to be right. The user has to type in both, but doesn’t know which is which. If the user types in the known word correctly, the system knows it’s not dealing with a robot, and that the user probably got the unknown word right.
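The control-word scheme described above can be sketched in a few lines. This is a toy illustration, not ReCAPTCHA’s actual code; the word lists, function names, and data structures are all invented.

```python
import random

# Hypothetical sketch of the control-word scheme: pair one word whose
# transcription the system already knows with one word the OCR programs
# disagreed on. All data here is made up for illustration.

KNOWN_WORDS = ["harbor", "lantern"]      # correct answers are known
UNKNOWN_WORDS = ["upholden", "tythes"]   # OCR programs disagreed on these

def make_challenge():
    """Pair a control word with an unknown word, shuffled so the user
    can't tell which is which."""
    pair = [("control", random.choice(KNOWN_WORDS)),
            ("unknown", random.choice(UNKNOWN_WORDS))]
    random.shuffle(pair)
    return pair

def grade(pair, answers):
    """If the control word was typed correctly, assume the user is human
    and trust their transcription of the unknown word."""
    result = {"human": False, "transcription": None}
    for (kind, word), typed in zip(pair, answers):
        if kind == "control":
            result["human"] = (typed == word)
        else:
            result["transcription"] = typed
    if not result["human"]:
        result["transcription"] = None   # probably a bot; discard its answer
    return result
```

In the real system, as Luis says, the unknown word’s transcription would then be checked against other users’ answers before being accepted.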

ReCAPTCHA is a free service. Sites that use it have to feed back the entries for the unknown word. About 125,000 sites use it. They’re doing about 70M words per day, the equivalent of 2-4M books per year. If the growth continues, they’ll run out of books in 7 years, but Luis doesn’t think the growth will continue, so it might take twenty years. (There are 100M books.)

(In response to a backchannel question, Luis tells the penis captcha story.)

The ReCAPTCHA system filters out nationalities, known insult terms, and the like, to avoid unfortunate juxtapositions. It’s soon going to be released in 40 languages. Google acquired ReCAPTCHA.

Q: When will OCR be good enough to break captchas?
A: I don’t know. We’ll probably run out of books first.

Q: Business model?
A: Google Books gets help digitizing.

ReCAPTCHA “reuses wasted human processing power.” The average American spends 1.9 seconds per day typing captchas. We also spend 1.1 hours a day playing electronic games. We humans spent 9B hours playing solitaire in 2003. It took less than a day of that to build the Panama Canal. So, Luis switches topics a bit to talk about how to solve human problems by playing games.

First is tagging images with words. Image search works by looking at file names and html text, because computers can’t yet recognize objects in images very well.

Does typing two words take twice as long as typing random letters? No, it takes about the same time, he says. Luis says about 10% of the world’s population have typed in a captcha. The ESP game asks two people unknown to each other to label an image until they agree. The game taboos words that other players have already agreed on. The system passes images through until they get no new labels. They’ve gotten over 50M agreements. 5,000 players playing simultaneously could label all Google images in a month. Google has its own version; Google has an exclusive license to the patent.
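The matching rule — two strangers typing labels until they agree, with previously agreed words tabooed — can be sketched as follows. This is a toy illustration only; the function name and all example data are invented, and the real game’s logic surely differs.

```python
# Toy sketch of the ESP game's matching rule: two players who can't see
# each other type labels for the same image; the first label both have
# typed (and that isn't already taboo) becomes a tag for the image.

def play_round(guesses_a, guesses_b, taboo):
    """Consume the two players' guesses in parallel and return the first
    non-taboo label they agree on, or None if they never match."""
    seen_a, seen_b = set(), set()
    for a, b in zip(guesses_a, guesses_b):
        # A match happens when one player's new guess is anything the
        # other player has typed so far (including this round's guess).
        if a not in taboo and a in seen_b | {b}:
            return a
        if b not in taboo and b in seen_a | {a}:
            return b
        seen_a.add(a)
        seen_b.add(b)
    return None
```

With "dog" tabooed from earlier rounds, players typing ["dog", "puppy", "cute"] and ["animal", "cute", "puppy"] would agree on a fresh label rather than the taboo one, which is how the game keeps extracting new tags for a much-played image.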

Q: Demographics?
A: For my version, average age is 29 (with huge variance), evenly split between women and men.

Q: Compared to Flickr tags?
A: Only a small fraction of Flickr images have useful tags. The tags from flickr tend to be significantly more exact, but also significantly noisier (e.g., a person tagging an image in a way that means something idiosyncratic).

Q: Bots?
A: Yes, we don’t want you to wait for a partner, so sometimes we’ll give you a bot that replays the moves a human had made with the same image.

Q: Google Images benefits from its version of your game. Who benefits from your version of the game?
A: No one.

For some images, guesses change over time. E.g., a Britney Spears photo five years ago got labels like britney and hot. About two years ago, the labels changed to crazy, rehab, and shaved head. Now they’re back to britney and hot. By watching a player for 15 mins, you can guess whether the player is male or female with 95-98% accuracy.

Why do people like the ESP game? Sometimes they feel an intimacy with their partners. They have to step outside of themselves to make the match. They can have a sense of achievement.

He ends by saying that about the same number of people — 100,000 — have worked on humanity’s big projects, e.g., pyramids, Panama Canal, putting a person on the moon. That’s in part (he says) because it is so hard to coordinate large numbers of people. Now we can get 100M people to work on something. What can we do?

2 Comments »

[2b2k] Facts and networked facts

Harry Lewis [blog], one of my faves and someone who does not put up with any of my guff, had me in as a guest lecturer in one of his courses today. We talked about knowledge on the Net, and, in particular, whether the Net is leading us to flock with others who are like us, thus making us stupider and more extreme, rather than smarter and more open. It’s hard to know what the data actually are about this; Harry, who worries that the Net is just enabling us to confirm our ignorances, nevertheless pointed us to the David Brooks column that references some more optimistic studies. But, as I think Harry agrees, this is an area where the meaning of such studies is up for grabs — ironically, if we cite the studies that confirm our beliefs (which, btw, is the opposite of what Harry was doing), and ironically with a double salchow in light of what I’m about to say about facts.

This discussion was quite useful for me. I’m writing the last section of the chapter on facts. The echo chamber argument (i.e., we flock with similar birds and chirp our way into stupidity) often expresses a nostalgia for the Enlightenment, which includes, in the modern era, a belief that knowledge rests on a bedrock of facts. Facts are bedrock because they cannot be disputed. Facts, after all, straddle the line between the world and our knowledge of the world: They are what are knowable about the world. They are what makes a true statement true. They are not dependent on our knowledge (they are true whether or not we know them), but they enable our knowledge. Because facts are facts regardless of whether any one of us recognizes them, they are true for everyone. Thus: Bedrock because they are independent of us, and bedrock because they are nonetheless knowable.

So, this makes a big stinking problem for the book, for a few reasons.

First, I don’t want to be dealing with this question. It’s too hard. This was supposed to be a relatively easy book about expertise and knowledge, and now I’m smack up against big questions that are way way past my pay grade.

Second, I think the metaphysics in which the “facts are bedrock” argument is embedded is a misguided metaphysics. I fully believe that facts do not depend on us, and that facts are just one (particularly useful) “mode of discourse” — one way the world shows itself to us if we ask about it in a particular way. The Enlightenment set-up of the problem doesn’t let us have our fact-based cake and eat it too, which is what’s required. But I don’t want to deal with metaphysics (see point #1 immediately above). So, I’m thinking about talking in the book about “networked facts” that include their links and context, for facts are always (?) taken up in context, and once taken up by us, they no longer serve as a self-sufficient bedrock, because you take them up one way and I take them up the other. Facts in a networked world always (= almost always, often, can) point back into the source from which they emerge and ahead into the social stew that makes sense (= tries to make sense, pretends to make sense, makes no sense) of them. (I do want to make sure that the reader doesn’t feel let off the hook when it comes to facts; facts matter.)

Third, I realized after the class that I’m right back in the topic of my doctoral dissertation of 30+ years ago, which was about Heidegger’s ontology of things (= material objects, roughly). My question then was how do we make sense of phenomena that show themselves in our experience as being beyond our experience. Apparently, I still don’t know.

9 Comments »

April 26, 2010

Come to a discussion with John Hagel

On Wednesday, at 6pm, I’m interviewing John Hagel, co-author with John Seely Brown of the new book The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion. John is brilliant about the intersection of business and Net trends. I’ll interview him for a while, and then we’ll all talk.

It’ll be at 6pm, Harvard Law School, Pound Hall 2nd Floor John Chipman Gray Room. If you want to attend, you’re very welcome. RSVP here.

See you there!

2 Comments »

April 25, 2010

Games, art, morality, themes, and mechanics

I don’t know why I’m being sent issues of Game Developer magazine, but I’m vastly enjoying them. It’s fun seeing how people so deeply embedded in their craft talk amongst themselves, even though much of it is over my head.

It’s not all tech talk, though. There are thoughtful reflections on the meaning and role of games. For example, in the March issue, Soren Johnson has part 2 of an essay that argues that “a game’s meaning springs from its rules, and not necessarily from its theme.” In fact, he says, the two can be in conflict, which is not a good thing. So, Left4Dead’s theme is zombie survival, but it’s actually about cooperation. Grand Theft Auto’s theme is “crime and urban chaos, but the game is actually about freedom and consequence.” (The magazine charges for online access, but you can read a report of Soren’s talk on this topic here.)

In the same issue, there’s an editorial by Brandon Sheffield called “Making Decisions Matter in Morality-Oriented Games.” (You can read it here.) He observes that in Bioshock, although you make a moral choice about harvesting or helping “little sisters,” the choice turns out to have very little effect on the game. But he also writes:

I believe if one is going to present choices or issues in games as ethical, those choices have to matter in the game world. But I get antsy when games present me with choices that clearly open one door while closing another, as I want to see all of the game’s content, since I’m unlikely to go through it multiple times.

Well, you can’t have your little sisters and eat them, too. If the moral choice is going to affect the game, then you necessarily won’t see all the game’s content.

This is more of a problem with games that are pathways through a narrative. In an open-ended online multiplayer game like Left4Dead, moral choices affect games constantly, and in far more complex ways. For example, on the normal difficulty setting, friendly fire incidents don’t hurt your teammates too much, but on advanced, you can pretty easily kill a teammate by accident if your aim isn’t good. (Um, not that I’ve ever done so.) If you kill your teammates on purpose, you’ll get kicked out of the game; that’s not a choice within the rules so much as an infraction of the rules. But, even if you’re trying to be a good teammate, you will have to decide whether it’s worth the risk of hurting a teammate in order to rescue another teammate under attack next to her/him. Likewise, you’ll have to decide whether to risk your own health by going back for a teammate under attack. These are not the sorts of examples we normally give when talking about moral questions, but they are quite like the moral questions we generally have to face in the world — balancing risk, skill, and probability while trying to accomplish an aim we are convinced is right. That is, they are instrumental moral questions, not questions of ends. Moral questions are unavoidable in multiplayer games because multiplayer games are by definition social, and all social interactions have a moral dimension.

5 Comments »

April 24, 2010

ImpotentPoint: Text into online slides

I know this is ridiculous, and it’s undoubtedly been done before and well. But, I had fun doing this, so leave me alone.

When I was in Saudi Arabia, not only did Open Office eat the slides I’d made, it also then itself crashed into little tiny pieces. I have no idea what the problem was, but I didn’t have a reliable Net connection, and I didn’t have any other presentation software, so I quickly recreated my slides in HTML and wrote a little Javascript so I could click to go from one slide to another in my browser. So, then I thought it might be useful to have a little program that let you write your slides with a text editor, using a very simple markup language (simpler than HTML), and that would then display your text as slides.

Welcome to ImpotentPoint. The site explains the markup, but basically you begin a new slide by beginning a line with four or more dashes. A line that begins with a = is taken as a head (<h1>). You can use up to six =’s to get six levels of heads. A bullet point begins with a *. A bullet point that will build begins with a +. Unfortunately, you have to use HTML markup to get a graphic in. And that’s about it.
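Under those rules, a minimal parser might look something like the sketch below. It is based only on the markup described in this post, rendering each slide as a list of HTML fragments; the actual ImpotentPoint code may well behave differently.

```python
import re

# Rough sketch of a parser for the ImpotentPoint markup described above.
# Rules: 4+ dashes start a new slide; leading ='s (up to six) set the
# heading level; * is a bullet; + is a building bullet; anything else
# passes through as raw HTML.

def parse_slides(text):
    """Split markup into slides; each slide becomes a list of HTML lines."""
    slides, current = [], []
    for line in text.splitlines():
        if re.match(r"^----", line):          # four or more dashes: new slide
            if current:
                slides.append(current)
            current = []
        elif line.startswith("="):            # = ... ====== -> <h1> ... <h6>
            level = min(len(line) - len(line.lstrip("=")), 6)
            current.append("<h%d>%s</h%d>" % (level, line.lstrip("= "), level))
        elif line.startswith("+"):            # bullet point that builds
            current.append('<li class="build">%s</li>' % line[1:].strip())
        elif line.startswith("*"):            # ordinary bullet point
            current.append("<li>%s</li>" % line[1:].strip())
        elif line.strip():
            current.append(line)              # raw HTML passes through
    if current:
        slides.append(current)
    return slides
```

A click handler in the browser would then just step an index through the returned list, showing one slide’s fragments at a time.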

I like the idea of writing slides in plain text, but I’m afraid that the markup required to make this actually useful would turn out to be as complex as just writing them in HTML.

If you want to see a tutorial, click on this button and paste the text into the first text box at ImpotentPoint.

4 Comments »

April 23, 2010

FiberFete and Plenums

I gave the closing talk at FiberFete on Thursday. FiberFete was a celebration of the complete fiber-ing of Lafayette, Louisiana — an impressive story of a city struggling to overcome entrenched interests with a vision of how low-cost bandwidth can bring about major benefits in education, medical care, and the economy. The Fete was organized by Geoff Daily and David Isenberg as a celebration, and as a way to stimulate interest and enthusiasm in what a fully connected city can do. The day was impressive and even moving as we heard from the CIOs of San Francisco and Seattle, technologists, visionaries, and an awesome group of Lafayette teachers and students.

David wanted me to talk about what we could do if we had ubiquitous, high speed, open, symmetric (i.e., roughly the same speed for uploading and downloading) connectivity. Since I don’t know what we could do, I tried to beg off, but David insisted. So, here’s a summary of what I said in my twenty minutes.

The important thing about ubiquity is not the percentage of people connected, but the ubiquity of the assumption of ubiquity. E.g., we assume everyone has access to a phone, even though “only” 95.7% of American households have one (including cell phones). Nevertheless, the assumption creates a market for innovation.

The core of that assumption is an assumption of abundance…an abundance of information, links, people, etc. Our brains have difficulty comprehending the abundance we now have. There are so many people on line that the work of 1% can create something that boggles the mind of the other 99%. As more people come on line, that rule of 1% will become a rule of 0.01% and then 0.001%. The curve of amazement is going straight up.

The abundance means we will fill up every space we can think of. We are creating plenums (plena?) of sociality, knowledge and ideas, and things (via online sensors). These plenums fill up our social, intellectual and creative spaces. The only thing I can compare them to in terms of what they allow is language itself.

What do they allow? Whatever we will invent. And the range of what we can invent within these plenums is enormous, at least so long as the Net isn’t for anything in particular. As soon as someone decides for us what the Net is “really” for, the range of what we can do with it becomes narrowed. That’s why we need the Net to stay open and undecided.

These abundances are not merely quantitative. They change the nature of what they provide. And they refuse to stay within their own bounds. For example, we go online to get information about a product, probably through a mobile device. There we find customer conversations. These voices are not confined to giving us product reviews. We are also ubiquitously connected to pragmatic advice, to new businesses and institutions that compete with or make use of the item we’re engaged with, to governmental and legal information. If people are unhappy with the product, they may use their online meeting spot as a way to organize an activist movement.

In other words, Clay Shirky is right: The Net makes it ridiculously easy to form groups. In fact, when your information medium, communication medium, and social medium are all precisely the same, its ubiquity will make it hard not to form groups. For example, if your child has a bad cough, of course you’ll go online. Of course you’ll find other parents talking about their kids. Your information search has become a communicative enterprise. Because you’re now talking with other people who share an interest, your communication is likely to spawn a social connection. These plenums just won’t stay apart.

Furthermore, many of these networked groups will be hyperlocal, especially within localities where connectivity is ubiquitous. As we get more of these locations, hyperlocal networks will connect with other hyperlocal networks, creating superlocal networks (although I have no idea what I mean by that term).

These plenums will affect all of our institutions because they remove obstacles to our being more fully human.

37 Comments »
