
June 11, 2010

Pew surveys Experts on the future of cloud computing

Pew Internet surveyed a bunch o’ experts about where we will be in The Cloud in 2020. The survey was intended more to elicit verbal responses than to come up with reliable numbers, but overall the experts seem to agree that we’ll be computing with a hybrid of desktop and cloud services. That seems a safe bet, especially since, given enough bandwidth, all services are local. (Hasn’t distance always been the time it takes to connect?)

Several of the experts push back against the term “cloud,” Gary Bachula because it’s a “bad metaphor for broadening understanding of the concept,” and Susan Crawford because its ubiquity will mean that we “won’t need a word for it.”

Many worry about the power this will put in the hands of the Big Cloud Providers, with Robert Ackland arguing that “we need the cloud to be built using free and open source software.”

Several believe that there will be some prominent act of terrorism or incompetence in The Cloud that will drive people back to their desktops: “Expect a major news event involving a cloud catastrophe, security breach, or lost data to drive a reversion of these critical resources back to dedicated computing,” says Nathaniel James, or “a huge blow up with terrorism,” predicts R. Ray Wang. Most agree it will be “both/and,” not “either/or.”

Many think that we’re not recognizing the depth of the change. For example, Fred Hapgood is among those who predict the death, transformation, or marginalization of the PC: “By 2020 the computational hardware that we see around us in our daily lives will all be peripherals – tablets, goggles, earphones, keyboards, 3D printers, and the like. The computation driving the peripherals will go on in any of a million different places…” Says Garth Graham: “By 2020, a ‘general-purpose PC’ and a ‘smart phone’ will have converged into a range of general-purpose interactive connection devices, and ‘things’ will have acquired agency by becoming smart.” “The PC is just a phase,” says Rebecca MacKinnon.

Some of the commenters point to the global digital divide, although they don’t agree on which side will be most cloud-based. Gary Arlen says that because of the U.S.’s desktop-based infrastructure, we won’t move into the cloud as rapidly as will less-developed nations. Seliti Arata, however, says, “Business models will provide premium services and applications on the cloud for monetization. However most of the world population will continue to use pirated software on their desktops and alternative/free cloud services.”

As for me, I don’t have predictions because the future is too furious. For example, the speed and availability of broadband access in this country is unpredictable and is by itself determinative, not to mention the Internet-seeking asteroid that is currently streaking toward the Earth. It’s safe to say, however, (= here comes something that in 5 years I’ll feel foolish for having said) that we’re going to move more and more into the cloud. The only thing I’d add to The Experts is that this will have network effects like crazy — effects due to the scale of data and social connections being managed under one roofless roof (with, we hope, lots of openness as well as security).


June 10, 2010

Data.gov goes semantic

Data.gov has announced that it’s making some data sets available as RDF triples so Semantic Webbers can start playing with it. There’s an index of data here. The site says that even though only a relative handful of datasets have been RDF’ed, there are 6.4 billion triples available. They’ve got some examples of RDF-enabled visualizations here and here, and some more as well.

Data.gov also says they’re working with RPI to come up with a proposal for “a new encoding of datasets converted from CSV (and other formats) to RDF” to be presented for worldwide consideration: “We’re looking forward to a design discussion to determine the best scheme for persistent and dereferenceable government URI naming with the international community and the World Wide Web Consortium to promote international standards for persistent government data (and metadata) on the World Wide Web.” This is very cool. A Uniform Resource Identifier points to a resource; it is dereferenceable if there is some protocol for getting information about that resource. So, Data.gov and RPI are putting together a proposal for how government data can be given stable Web addresses that will predictably yield useful information about that data.

I think.
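To make the RDF part a bit more concrete, here’s a minimal sketch (in Python, using the rdflib library) of what converting a row of a government CSV file into triples might look like. The namespace and the column names here are made up for illustration; settling on the real, persistent URI scheme is exactly what the Data.gov/RPI proposal is about.

```python
import csv
import io

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

# Hypothetical namespace: picking the real, persistent URI scheme is
# what Data.gov and RPI are proposing to standardize.
GOV = Namespace("http://data.gov/example/")

# A stand-in for a downloaded government dataset.
sample_csv = io.StringIO(
    "agency,year,budget\n"
    "EPA,2010,10300000000\n"
)

g = Graph()
for i, row in enumerate(csv.DictReader(sample_csv)):
    # Each row becomes a resource with its own URI -- ideally a stable,
    # dereferenceable one...
    subject = GOV[f"budget-line/{i}"]
    g.add((subject, RDF.type, GOV.BudgetLine))
    # ...and each column becomes a predicate-object pair, i.e., a triple.
    for column, value in row.items():
        g.add((subject, GOV[column], Literal(value)))

print(g.serialize(format="turtle"))
```

The “dereferenceable” part means that fetching one of those subject URIs should return useful information about the resource, which is why stable naming matters so much.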


June 9, 2010

Getting EPub wrong every possible way

I spent too much of yesterday and today trying to placate EPub, the God of Finickiness.

EPub (which the creators would prefer I spelled EPUB, but I figure there’s no need to shout) is the ebook format that iPad and many other readers like. I have a young adult novel that I give away in html, pdf, and Word .doc, but I figured I should modernize it up to the EPub standard.

First, I converted all 26 chapters to XHTML. XHTML is HTML’s obsessive-compulsive younger brother. The sloppiness that made HTML a success — you could sling together code of the ugliest sort and browsers would still display it for you with red blush and lipstick — drives XHTML nuts. So, you’d better close every tag and make sure you don’t start any of your inner ID tags with a number. The W3C has a useful validator that will tell you everything you’re doing wrong. (Under options, turn on “Show source” and it will show you your original text with the mistakes flagged.)
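If you want to catch those two gotchas locally before another round trip through the validator, it’s easy to script a rough check. Here’s a minimal sketch in Python (the chapter filenames are my assumption); it only tests well-formedness and digit-leading ids, so the W3C validator remains the last word.

```python
import glob
import re
import xml.etree.ElementTree as ET

# Check every chapter for the two problems that bit me: malformed XML
# (unclosed tags and the like) and id values that begin with a digit.
for path in sorted(glob.glob("chapter*.xhtml")):
    try:
        tree = ET.parse(path)  # fails loudly if the file isn't well-formed
    except ET.ParseError as err:
        print(f"{path}: not well-formed: {err}")
        continue

    for element in tree.iter():
        element_id = element.get("id", "")
        if re.match(r"\d", element_id):
            print(f"{path}: id '{element_id}' starts with a digit")
```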

I downloaded a bunch of automatic EPub creators, many of them listed at JediSabre, but I couldn’t get any of them to do what I wanted. A lot of people really like Calibre, which does much more than just compile EPub files, but I couldn’t figure out how to get it to treat 26 chapters as one book; dumb of me, I know, but I didn’t see that basic point covered in the documentation, and I was too embarrassed to ask it in the user forum where the creator generously responds.

I also tried eCub. I couldn’t remember why I gave up on it, so I just now tried it again, and of course it worked perfectly. Instantly. Easily. Dammit! It must have been something wrong in the XHTML files that I fixed after I’d given up on it.

I also tried Sigil, which is quite full-featured, but it kept crashing before making it through all 26 chapters, very likely because of the same irregularities in those files. So, try the automated systems before you venture down the hand-coding path.

I spent many hours with TextWrangler — a text editor that can do search and replace across multiple files is a requirement if you’re going to end up doing this somewhat by hand. EPub files are actually zip files, and, astoundingly, the files in them have to be in the proper order. Why ebooks are too dumb to be able to randomly access the contents of a zip file is beyond me, but then, so is using an easy-to-use EPub compiler. So, you need to download a sample (I used Sigil to create one), unzip it, put in your content, and zip it back up. WebVivant tells us the three magic command-line commands to get the zip file to rezip itself correctly. I did that dozens of times today. Now it’s time to test the resulting file…
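If you’d rather not run those three shell commands dozens of times, the same rezipping trick takes a few lines of Python with the standard zipfile module. The one hard rule from the EPub spec is that the mimetype file must be the first entry in the archive and must be stored uncompressed; the META-INF and OEBPS folder names below follow the usual EPub layout, so substitute whatever your sample file uses.

```python
import os
import zipfile

# Rebuild the EPub. The mimetype file must be the FIRST entry in the
# archive and must be stored without compression; everything else can
# be deflated as usual.
with zipfile.ZipFile("book.epub", "w") as epub:
    epub.writestr("mimetype", "application/epub+zip",
                  compress_type=zipfile.ZIP_STORED)
    for folder in ("META-INF", "OEBPS"):
        for root, _dirs, files in os.walk(folder):
            for name in files:
                path = os.path.join(root, name)
                epub.write(path, arcname=path,
                           compress_type=zipfile.ZIP_DEFLATED)
```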

ThreePress has an online EPub validator that shows you what you did wrong this time. Very helpful, although the error messages can be a bit cryptic, mainly because there are so many freaking ways to go wrong.

If you need to handcode how to get your EPub to display your cover, here’s a very helpful step-by-step guide to cover markup, by Keith Fahlgren at ThreePress.

The upshot? Make sure your XHTML is valid and that your id values do not begin with numerals, and then try the automated systems. The ones I’ve mentioned are free. (Thank you!) (Bowerbird posted a comment to my earlier post about EPub about a system Bowerbird is developing that outputs lots of book/document formats.) If you find yourself preparing or tweaking it by hand, expect to spend some time at it.

The upshot’s upshot? I spent a day creating a crappy version of my YA novel in EPub format that I’m not even sure works beyond Wordplayer on my Droid. If you care to try it, will you let me know if it works on your device or ebook software? (You can get the pdf and .doc versions here, and here’s Bowerbird’s epub version.)


Obscene but not heard

Yule Heibel has a very interesting post on obscenity and oil spills. She points to a redesigned BP logo that mashes up the logo with the famous 1968 photo, taken by Eddie Adams, of South Vietnam’s Chief of National Police executing a Viet Cong soldier in Saigon. Yule wrote in a comment: “…the designer latched on to something important: that photo seems to be the first instance of mainstream [media] obscenity, and linking the obscene to what’s happening in the Gulf seemed somehow right.” Her post elaborates on why the word “obscene” seemed right.

The word apparently comes from the Greek for an off-stage event that is indispensable for the on-stage action: the unseen that enables us to understand the seen.

That’s a nice way of understanding the BP oil volcano. The spill lets us see our deadly dependence on oil. That a single oil well can do this to our oceans tells us how deep a well we are in.

But I think she’s being too optimistic. In 1968, the brutality of the Eddie Adams photo revealed that we were in the wrong war. In 2010, the photos of the dead animals and the vast plume are showing many of us merely that BP deserves blame. Sure they do, but the real conclusion remains off-stage: Oil-based economies are at war with our planet. We are in the wrong war.


June 8, 2010

[berkman] [2b2k] Carolina Rossini – The Industrial Cooperation Project

Carolina Rossini is giving a Berkman talk about the preliminary conclusions from Yochai Benkler’s Industrial Cooperation Project.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

The project looked at industries to see if they are moving to collaborative peer production. They looked at knowledge-embedded products: data, text, and tools. The key takeaway, she says: “The nature of the knowledge embedded product has an impact on the emergence of commons-based production.” This matters because if you care about the emergence of commons-based production (CBP), or a knowledge governance system, in some sectors you may have to intervene in the market rather than wait for CBP to emerge organically.

She goes through the research methodology: standardized observations across sectors and “a huge literature review.” How does innovation happen? What’s the effect of IP? etc. All of this will be open and available soon. From this came synthesizing papers and the ICP Wiki (open in September) where you can see the entire research process.

The main verticals studied: biotech, alternative energy, educational materials. These three are a cross-section of the contemporary economy, not the more typical sectors where you find the usual examples, such as threadless.com.

Within alternative energy, the project looked at wind, solar, and sea power because they are at different levels of maturity. There’s a lot of patenting, complicated manufacturing and governance processes, and no evidence of commons-based production. The academics have no incentives for sharing. Sharing is happening, however, in pre-product spaces, e.g., OpenEI, the U.S. gov’t OpenLabs, and the Database of State Incentives for Renewables and Energy. There is also the emergence of a DIY spirit among users, and the Danish offshore wind industry is interesting. Prizes have been created to encourage CBP, and there have been attempts to cluster people to encourage collaboration.

Another vertical: educational materials. Here there’s lots of CBP, e.g., the Open Educational Resources Commons (which Carolina has long been involved with). Incumbents as well as start-ups are exploring new models, both open and closed. On the other hand, publishers are suing users now and then. Pres. Obama says he wants to invest in OER, but there is no money around. On the other hand, the textbook adoption process now accepts open textbooks, which Carolina says is a “huge huge turnaround.” There are also governmental interventions around the world, including in Brazil. And there are also pushes toward closedness. E.g., Pearson is rethinking its strategy so that the end product isn’t a textbook but a new education model that includes assessment systems, exam models, etc. [Vertical integration strikes again! When your product becomes open or commoditized, it’s one of the obvious business responses.] Does this lessen the importance of copyright for Pearson? The guy from Pearson whom Carolina asked just laughed.

In biotech there’s a mix of CBP and closed practices. “Big science” shows the most evidence of CBP. The commons in gene sequences has helped, as has the fact that most big science happens through government investments, the data and text results of which are open by default (thanks to the Bermuda rules and the NIH Public Access Policy). But there’s no evidence of commons-based industrial disruption in biological materials used as research tools. This is due in part to the fact that the VCs who fund “translational research” like patents. The end-products market has been the most resistant to commons-based effects. And patents are aggressively enforced. One result is that there can be conflicts. E.g., you can get a genomic profile from a company like 23andme, but if a woman wants to get a specific analysis…

Summary: Biotech has some of the most open and some of the most closed practices. Alternative energy: not a lot of CBP and no structure for it. Educational materials: intermediate in the development of CBP.

If we want to see more CBP in these industries, we need interventions. Carolina doesn’t want to talk much about what is needed, but she points to a 2×2 (closed-open, regulated-unregulated) that shows the effect of government intervention in biotech: some companies left the market, but new open databases enabled other projects and new business models. [She shows the same chart for the other two sectors, but I can’t capture them in text.]

There’s also the possibility of industry intervention. E.g., the Danish offshore wind farmers needed to prove to the government that their industry was viable. There’s community pressure to share knowledge and not to patent. They organized around work teams, not companies; teams were cross-company, so patenting wasn’t possible. They were able to cooperate because they knew they could only solve the problems together.

Conclusion: It’s easy to find CBP within copyright-based materials. Industries with more complex manufacturing and distribution are more resistant. But even within them, you can find CBP within some of their processes. Within those resistant sectors, we should probably look for more commons-based sharing of knowledge, rather than in their core development processes — knowledge diffusion rather than innovation.

Q: What was most surprising?
A: Openness may not be the answer in every sector.

Q: Denmark’s team-oriented approach — does that mean that the teams have members from universities, companies, etc.?
A: They found a problem, they figured out which companies had some relation to that problem — e.g., the cement for holding the towers — and they picked employees from a diversity of companies.

Q: Paul Romer talks about economic growth in terms of sharing recipes. The copyright industries have more openness than patent-based ones. Maybe that’s because copyright is about sharing documents and files, and the recipe is there. Maybe the obstacle is in how to write out the recipe…
A: Also, the copyright industries are dealing with more easily digitized products that can be more easily transformed and shared.

Q: Is the educational system doing what the music business has done: Product is commoditized, so they sell services. E.g., Verizon tells you that the music is free or cheap, but you pay for fast download, etc.
A: I see an easier parallel in software — customization, extension to make it work better for particular clients, etc. Now that’s being done for textbooks.

Q: How do you measure the impact of CBP on the development of an industry, especially compared to the effect of social networks, etc.?
A: You’re thinking about wikis, etc. We asked how the sectors are sharing knowledge, how much patenting, are there access issues, etc.

Q: Could this be delivered on a Google platform or Facebook? How people share info?
A: We didn’t focus on this, but a lot of people are looking at how companies are sharing info. E.g., the Pfizer internal wiki. Also, OpenEI.

Q: What does it mean to share? People chatting or contracting parties? Why would people want to share?
A: Buying is a way of sharing. There’s a gradient.

Q: Can you compare the Brazilian experiment with the US government’s?
A: In Brazil, the federal government buys all the textbooks for the public schools. They have now started to debate whether the gov’t should also buy the IP and make it available. In the US, we are putting money toward training the teachers. Pres. Obama has hired many leaders from the OER movement. But there is no policy being pushed here for public access to educational materials, because the federal gov’t is not the main buyer. But Texas and California are asking for more openness because it will reduce their costs.

Q: How is OER trying to create sustainable business models?
A: Most of the big projects in the US are supported by universities. Many are cutting funding, so these projects are going to foundations for money. They’re trying to find business models, such as publishing on demand. E.g., Flat World Knowledge is getting major authors and paying them better than traditional publishers; you have access to a free version online, but you can buy an iPad version, etc.

Q: Long term?
A: Biotech: We have mandates for openness, but the structure so dictates how the market works that I don’t see a lot of change there. Same in alternative energy. I hope they will share more if the target is mitigation of climate change, but I don’t see that happening in practice. We’ll see some changes. I think we’ll see more political debate around open access. And there will be some international agreements, probably in terms of more tech transfer. Educational materials: Transitioning.


June 6, 2010

Democratized curation

JP Rangaswami has an excellent post about the democratizing of curation.

He begins by quoting Eric Schmidt (found at 19:48 in this video):

“…. the statistic that we have been using is between the dawn of civilisation and 2003, five exabytes of information were created. In the last two days, five exabytes of information have been created, and that rate is accelerating. And virtually all of that is what we call user-generated what-have-you. So this is a very, very big new phenomenon.”

He concludes — and I certainly agree — that we need digital curation. He says that digital curation consists of “Authenticity, Veracity, Access, Relevance, Consume-ability, and Produce-ability.” “Consume-ability” means, roughly, that you can play it on any device you want, and “produce-ability” means something like how easy it is to hack it (in the good O’Reilly sense).

JP seems to be thinking primarily of knowledge objects, since authenticity and veracity are high on his list of needs, and for that I think it’s a good list. But suppose we were to think about this not in terms of curation — which implies (against JP’s meaning, I think) a binary acceptance-rejection that builds a persistent collection — but instead view it as digital recommendations? In that case, for non-knowledge-objects, other terms will come to the fore, including amusement value, re-playability, and wiseacre-itude. In fact, people recommend things for every reason we humans may like something, not to mention the way we’re socially defined in part by what we recommend. (You are what you recommend.)

Anyway, JP is always a thought-provoking writer…


June 5, 2010

Book formats and formatting books

AKMA points to an excellent post by Jacqui Cheng at Ars Technica about the fragmentation of ebook standards. AKMA would love to see a publisher offer an easy way of “pouring” text into an open format that creates a useful, beautiful digital book. Jacqui points to the major hurdle: Ebook makers like owning their own format so they can “vertically integrate,” i.e., lock users into their own bookstores.

Even if there weren’t that major economic barrier, it’d be hard to do what AKMA and we all want, because books are incredibly complex objects. You can always pour text into a new container, but it’s much harder to capture the structure of the book: this is a title, that is body text, this is a caption. The structure is then reflected in the format of the book (titles are bolded, etc.) and in the electronic functionality of the book (tables of contents are linked to the contents, etc.). We are so well-trained in reading books that even small errors interrupt our reading: My Kindle doesn’t count hyphens as places where words can be broken across lines, resulting in some butt-ugly layouts. A bungled drop cap can mystify you for several seconds. White-space breaks between sections that are not preserved when they occur at the end of a page can ruin a good mid-chapter conclusion. It’s not impossible to get all this right, but it’s hard.

And getting a standard format that captures the right degrees of structure and of format, and that is sufficiently forgiving that just about anything can be poured into it, is really difficult, because there are no such right degrees. E.g., epub is not great at layout info, at least according to Wikipedia.

All I’m saying is: It’s really, really hard.


June 4, 2010

[pdf] Aneesh Chopra, Federal CTO

Aneesh Chopra, the U.S. Chief Technology Officer, is opening the second day of Personal Democracy Forum.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

He says Katie Stanton at the State Dept. says that the difference between consumer and government cultures is “There’s an app for that” vs. “There’s a form for that.”

President Obama’s first act in office was to require far more openness. This means changing the defaults, a cultural change. Aneesh says they’re making progress. A year ago, Vivek Kundra, the federal CIO, announced at PDF the IT Dashboard for browsing the new Data.gov, a tool for accountability. Aneesh points to reform at the Veterans Admin, resulting in cost savings and faster service. Another example: US Citizen and Immigration Services now lets you opt in to having the status of your application be pushed to you, rather than you having to check in. This type of change has little cost, brings benefits, and is beginning to change the culture of government.

Aneesh is announcing “the blue button to liberate personal health data.” Press it and you can get your data from government databases. “Do with it whatever you want. It’s your choice. It’s your data.” This will begin this fall with medical and veteran info.

Another example of the change in culture: The Dept of Agriculture wants to inform us about healthy nutrition choices, part of the First Lady’s efforts. The Dept has nutritional info on 30,000 products. What to do with it? The government is holding “game jams” across the country — “Apps for Healthy Kids.”

They’ve been building tools to find widely dispersed knowledge. E.g., NASA has today released a report on its experience with the Innocentive prize system. A semi-retired radio frequency engineer won with an idea that exceeded NASA’s requirements. “No RFP, no convoluted process, just a smart person” that the prize system uncovered.

Aneesh talks about the Health and Human Services Community Health Data Initiative that debuted two days ago. It launched with twenty programs that take advantage of the newly opened data. The OMB has required agencies to make data available at any agency site, at a /open address. Microsoft Bing is now showing on maps the info available at hospitalcompare.gov, a site few have gone to. Here’s an idea from a citizen: Asthmapolis crowdsources data to help visualize outbreaks; participants have GPS-aware inhalers.

[And then my computer crashed…]


[pdf] Non-deliberative deliberation

I moderated a panel yesterday at Personal Democracy Forum on deliberative democracy. Because I was the moderator, I didn’t express my own unease with the emphasis on deliberation. Don’t get me wrong: I like deliberative processes and wish there were more of them. I’m just not as bullish on their ability to resolve real differences.

But there are non-rational deliberative processes. For example, Morgan Spurlock’s TV series, Thirty Days, puts together people who deeply disagree. But they learn more and better by living with one another for thirty days than they do through their rational discussions. If “deliberation” refers to a fair weighing, living with someone with whom you disagree is more likely to right the scales. The issues over which we struggle the most and that divide us most deeply cannot be bridged through careful, quiet discussion. Or, at least, the role of rational deliberation often is, in my opinion, over-stated. When rational discussion fails to change our minds, sympathy based on lived understanding can change our souls.


[pdf] Susan Crawford: Rethinking broadband

Susan Crawford says, “We are in the course of a titanic battle for the future of the Internet in the United States. The technology community is radically underrepresented in this battle.”

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Telephone providers and cable providers have each been merging, increasing monopoly holds on regions. The government has a key role in providing a level playing field for innovators. If you’re worried about personalization at the app level (as per Eli Pariser yesterday), you should be very worried about it at the network level.

“The Net would not exist absent government regulation.” E.g., the telcos were required to allow modems to attach to telephone lines. When cable modems arrived, government regulators were confused. Thinking that competition was right around the corner, the FCC completely deregulated highspeed Net access in 2002 (and again in 2005, 2006, and 2007). They took away the “regulated” level but reserved the right to reregulate it (via “ancillary jurisdiction”). The courts have found that labeling a service as deregulated but then regulating it (as in the Comcast case) makes no sense. So, the FCC is proposing to re-regulate, but free of the heavy-handed elements: no rate regulation, etc. But carriers would be required not to discriminate among bits [= Net neutrality]. This is the FCC’s “Third Way.” The carriers claim that this is the “nuclear option.”

The FCC needs to regulate to fulfill its mandate to enable Net access for all people. E.g., they need to gather data. And they want to make sure that it’s open for innovation. Also, to keep packets private. It’s great that AT&T is part of this conversation at PDF. But AT&T has spent $6M this quarter lobbying against any form of regulation. There have also been personal attacks, she says. Comcast spent $29M in the first quarter, she adds.

By 2012, the FCC says, most Americans will have only one choice of provider. [June 5: Susan’s slide actually said that by 2012, 75 to 85 percent of Americans will have one choice of wired provider for 50 to 100Mbps speeds; sorry for the gross gloss. This comes from the National Broadband Plan.] Verizon has backed off on its plans for FIOS. So there will not be another competitor to cable. We should therefore be concerned about Comcast’s plans to merge with NBC, giving it an edge against other major video providers, but also against the growth of online video. Comcast could put content behind an authentication wall, so to see it you’d have to be a cable subscriber. The tech community should watch this merger carefully.

The content providers believe in “vertical integration,” so we’ll see many more mergers.

She says 100 yrs ago, Americans hated Standard Oil, which was able to control regional production of oil. Small business people and farmers were enraged by them. Standard Oil required railroads to ship their stuff cheaper, and if the RRs shipped competitors’ stuff, SO got paid. They also carried out espionage about competitors’ shipments. Like the electric grid, like the Net, the future of highspeed access depends upon government creation of a level playing field. The tech community should be working together to make sure we retain the ability to innovate.

[I interviewed Susan about the FCC’s Third Way on a Radio Berkman podcast] [Note: On June 5, I made some very minor edits, cleaning up typos and unclear referents, etc., in addition to the insertion noted above.]

