Joho the Blog » future

November 26, 2014

Welcome to the open Net!

I wanted to play Tim Berners-Lee’s 1999 interview with Terry Gross on WHYY’s Fresh Air. Here’s how that experience went:

  • I find a link to it on a SlashDot discussion page.

• The link goes to a text page that has links to Real Audio files encoded either for 28.8 or ISDN.

• I download the ISDN version.

  • It’s a RAM (Real Audio) file that my Mac (Yosemite) cannot play.

  • I look for an updated version on the Fresh Air site. It has no way of searching, so I click through the archives to get to the Sept. 16, 1999 page.

  • It’s a 404 page-not-found page.

  • I search for a way to play an old RAM file.

  • The top hit takes me to Real Audio’s cloud service, which offers me 2 gigabytes of free storage. I decline.

  • I pause for ten silent seconds in amazement that the Real Audio company still exists. Plus it owns the domain “real.com.”

  • I download a copy of RealPlayerSP from CNET, thus probably also downloading a copy of MacKeeper. Thanks, CNET!

  • I open the Real Player converter and Apple tells me I don’t have permission because I didn’t buy it through Apple’s TSA clearance center. Thanks, Apple!

  • I do the control-click thang to open it anyway. It gives me a warning about unsupported file formats that I don’t understand.

• I set System Preferences > Security so that I am allowed to open any software I want. Apple tells me I am degrading the security of my system by not giving Apple a cut of every software purchase. Thanks, Apple!

  • I drag in the RAM file. It has no visible effect.

  • I use the converter’s upload menu, but this converter produced by Real doesn’t recognize Real Audio files. Thanks, Real Audio!

  • I download and install the Real Audio Cloud app. When I open it, it immediately scours my disk looking for video files. I didn’t ask it to do that and I don’t know what it’s doing with that info. A quick check shows that it too can’t play a RAM file. I uninstall it as quickly as I can.

  • I download VLC, my favorite audio player. (It’s a new Mac and I’m still loading it with my preferred software.)

  • Apple lets me open it, but only after warning me that I shouldn’t trust it because it comes from [dum dum dum] The Internet. The scary scary Internet. Come to the warm, white plastic bosom of the App Store, it murmurs.

• I drag the file into VLC. It fails, but it does me the favor of telling me why: it’s unable to connect to WHYY’s Real Audio server. Yup, this isn’t a media file, but a tiny file that sets up a connection between my computer and a server WHYY abandoned years ago. I should have remembered that that’s how Real worked. Actually, no, I shouldn’t have had to remember that. I’m just embarrassed that I did not. Also, I should have checked the size of the original Fresh Air file that I downloaded. (See the note at the end of this post for what such a file actually contains.)

• A search for “Tim Berners-Lee Fresh Air 1999” immediately turns up an NPR page that says the audio is no longer available.

It’s no longer available because in 1999 Real Audio solved a problem for media companies: install an RA server and it’ll handle the messy details of sending audio to RA players across the Net. It seemed like a reasonable approach. But it was proprietary and so it failed, taking Fresh Air’s archives with it. Could and should Fresh Air have converted its files before pulling the plug on its Real Audio server? Yeah, probably, but who knows what the contractual and technical situation was.

    By not following the example set by Tim Berners-Lee — open protocols, open standards, open hearts — this bit of history has been lost. In this case, it was an interview about TBL’s invention, thus confirming that irony remains the strongest force in the universe.
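A note on that RAM file: it isn’t audio at all. It’s a tiny plain-text pointer whose entire contents are a line or two like the following (the address here is invented, not WHYY’s real server):

pnm://real.example.org/freshair/19990916.ra

When a player opens the file it tries to contact that server, so once WHYY retired the server, the audio went with it.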


November 21, 2014

    APIs are magic

    (This is cross-posted at Medium.)

    Dave Winer recalls a post of his from 2007 about an API that he’s now revived:

    “Because Twitter has a public API that allows anyone to add a feature, and because the NY Times offers its content as a set of feeds, I was able to whip up a connection between the two in a few hours. That’s the power of open APIs.”

    Ah, the power of APIs! They’re a deep magic that draws upon five skills of the Web as Mage:

    First, an API matters typically because some organization has decided to flip the default: it assumes data should be public unless there’s a reason to keep it private.

    Second, an API works because it provides a standard, or at least well-documented, way for an application to request that data.

Third, open APIs tend to be “RESTful,” which means that they work using the normal Web way of proceeding (i.e., Web protocols). All you or your program have to do is go to the API’s site using a standard URL of the sort you enter in a browser. The site comes back not with a Web page but with data. For example, click on this URL (or paste it into your browser) and you’ll get data from Wikipedia’s API: http://en.wikipedia.org/w/api.php?action=query&titles=San_Francisco&prop=images&imlimit=20&format=jsonfm. (This is from the Wikipedia API tutorial.) (A short code sketch of such a call follows the fifth point below.)

    Fourth, you need people anywhere on the planet who have ideas about how that data can be made more useful or delightful. (cf. Dave Winer.)

    Fifth, you need a worldwide access system that makes the results of that work available to everyone on the Internet.
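To make the third point concrete, here’s a minimal sketch in Python of fetching data from the Wikipedia API with nothing more than an ordinary URL. The only change from the URL above is format=json (machine-readable) instead of format=jsonfm (the human-readable version):

import json
import urllib.request

# The same Wikipedia API query as in the third point, asking for machine-readable JSON.
url = ("https://en.wikipedia.org/w/api.php"
       "?action=query&titles=San_Francisco&prop=images&imlimit=20&format=json")

with urllib.request.urlopen(url) as response:
    data = json.load(response)

# Print the titles of the images Wikipedia lists for the San Francisco article.
for page in data["query"]["pages"].values():
    for image in page.get("images", []):
        print(image["title"])

That’s the whole trick: a URL goes in, structured data comes out, and anyone’s program can take it from there.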

In short, APIs show the power of a connective infrastructure populated by ingenuity and generosity.

In shorter shortness: APIs embody the very best of the Web.


    October 13, 2014

    Library as starting point

A new report on Ithaka S+R’s annual survey of libraries, by Roger Schonfeld, suggests that library directors are committed to libraries being the starting place for their users’ research, but that the users are not in agreement. This calls into question the expenditures libraries make to achieve that goal. (Hat tip to Carl Straumsheim and Peter Suber.)

    The question is good. My own opinion is that libraries should let Google do what it’s good at, while they focus on what they’re good at. And libraries are very good indeed at particular ways of discovery. The goal should be to get the mix right, not to make sure that libraries are the starting point for their communities’ research.

    The Ithaka S+R survey found that “The vast majority of the academic library directors…continued to agree strongly with the statement: ‘It is strategically important that my library be seen by its users as the first place they go to discover scholarly content.'” But the survey showed that only about half think that that’s happening. This gap can be taken as room for improvement, or as a sign that the aspiration is wrongheaded.

    The survey confirms that many libraries have responded to this by moving to a single-search-box strategy, mimicking Google. You just type in a couple of words about what you’re looking for and it searches across every type of item and every type of system for managing those items: images, archival files, books, maps, museum artifacts, faculty biographies, syllabi, databases, biological specimens… Just like Google. That’s the dream, anyway.

    I am not sold on it. Roger cites Lorcan Dempsey, who is always worth listening to:

    Lorcan Dempsey has been outspoken in emphasizing that much of “discovery happens elsewhere” relative to the academic library, and that libraries should assume a more “inside-out” posture in which they attempt to reveal more effectively their distinctive institutional assets.

    Yes. There’s no reason to think that libraries are going to be as good at indexing diverse materials as Google et al. are. So, libraries should make it easier for the search engines to do their job. Library platforms can help. So can Schema.org as a way of enriching HTML pages about library items so that the search engines can easily recognize the library item metadata.
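To illustrate the Schema.org point, here’s a minimal sketch in Python — not any particular library platform’s code — that emits a JSON-LD description of a catalog item for embedding in that item’s HTML page. The title, author, and URL are invented examples:

import json

# An illustrative catalog record; all of the values here are made up.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "An Example Title",
    "author": {"@type": "Person", "name": "A. N. Author"},
    "url": "https://library.example.edu/catalog/12345",
}

# Print the <script> element a catalog page would include in its HTML
# so that search engines can pick up the item's metadata.
print('<script type="application/ld+json">')
print(json.dumps(record, indent=2))
print('</script>')

Search engines that crawl the page can then treat the record as structured data rather than scraping it out of the page’s layout.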

    But assuming that libraries shouldn’t outsource all of their users’ searches, then what would best serve their communities? This is especially complicated since the survey reveals that preference for the library web site vs. the open Web varies based on just about everything: institution, discipline, role, experience, and whether you’re exploring something new or keeping up with your field. This leads Roger to provocatively ask:

    While academic communities are understood as institutionally affiliated, what would it entail to think about the discovery needs of users throughout their lifecycle? And what would it mean to think about all the different search boxes and user login screens across publishes [sic] and platforms as somehow connected, rather than as now almost entirely fragmented? …Libraries might find that a less institutionally-driven approach to their discovery role would counterintuitively make their contributions more relevant.

I’m not sure I agree, in part because I’m not entirely sure what Roger is suggesting. If it’s that libraries should offer an experience that integrates all the sources scholars consult throughout the lifecycle of their projects or themselves, then I’d be happy to see experiments, but I’m skeptical. Libraries generally have not shown themselves to be particularly adept at creating grand, innovative online user experiences. And why should they be? It’s a skill rarely exhibited anywhere on the Web.

    If designing great Web experiences is not a traditional strength of research libraries, the networked expertise of their communities is. So is the library’s uncompromised commitment to serving its community’s interests. A discovery system that learns from its community can do something that Google cannot: it can find connections that the community has discerned, and it can return results that are particularly relevant to that community. (It can make those connections available to the search engines also.)

    This is one of the principles behind the Stacklife project that came out of the Harvard Library Innovation Lab that until recently I co-directed. It’s one of the principles of the Harvard LibraryCloud platform that makes Stacklife possible. It’s one of the reasons I’ve been touting a technically dumb cross-library measure of usage. These are all straightforward ways to start to record and use information about the items the community has voted for with its library cards.

That is just the start. Anonymization and opt-in could provide rich sets of connections and patterns of usage. Imagine we could know what works librarians recommend in response to questions. Imagine if we knew which works were being clustered around which topics in lib guides and syllabi. (Support the Open Syllabus Project!) Imagine if we knew which books were being put on lists by faculty and students. Imagine if we knew what books were on participating faculty members’ shelves. Imagine we could learn which works the community thinks are awesome. Imagine if we could do this across institutions so that communities could learn from one another. Imagine we could do this with data structures that support wildly, messily linked sources, many of them within the library but many of them outside of it. (Support Linked Data!)
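As a sketch of the “technically dumb” cross-library usage measure mentioned above (my own illustration, not LibraryCloud’s or Stacklife’s actual code), imagine each participating library contributing anonymized, aggregate checkout counts keyed by a shared work identifier, with the counts simply summed across libraries:

from collections import Counter

def merge_usage(reports):
    # Each report maps a work identifier (say, an OCLC number) to an
    # anonymized, aggregate checkout count contributed by one library.
    totals = Counter()
    for report in reports:
        totals.update(report)
    return totals

# Hypothetical reports from three libraries; the identifiers and counts are invented.
library_a = {"ocn0000001": 41, "ocn0000002": 7}
library_b = {"ocn0000001": 12, "ocn0000003": 3}
library_c = {"ocn0000002": 5}

print(merge_usage([library_a, library_b, library_c]).most_common(3))

Dumb on purpose: no personal data, no clever ranking, just a community-weighted signal that no single search engine has.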

    Let the Googles and Bings do what they do better than any sane person could have imagined twenty years ago. Let libraries do what they have been doing better than anyone else for centuries: supporting and learning from networked communities of scholars, librarians, and students who together are a profound source of wisdom and working insight.


    October 7, 2014

    Library as a platform: Chattanooga

    I finally got to see the Chattanooga Library. It was even better than I’d expected. In fact, you can see the future of libraries emerging there.

    That’s not to say that you can simply list what it’s doing and do the same things and declare yourself the Library of the Future. Rather, Chattanooga Library has turned itself into a platform. That’s where the future is, not in the particular programs and practices that happen to emerge from that platform.

    I got to visit, albeit all too briefly, because my friend Nate Hill, assistant director of the Library, invited me to speak at the kickoff of Chattanooga Startup Week. Nate runs the fourth floor space. It had been the Library’s attic, but now has been turned into an open space lab that works in both software and hardware. The place is a pleasing shambles (still neater than my office), open to the public every afternoon. It is the sort of place that invites you to try something out — a laser cutter, the inevitable 3D printer, an arduino board … or to talk with one of the people at work there creating apps or liberating data.

    The Library has a remarkable open data platform, but that’s not what makes this Library itself into a platform. It goes deeper than that.

    Go down to the second floor and you’ll see the youth area under the direction/inspiration of Justin Hoenke. It’s got lots of things that kids like to do, including reading books, of course. But also playing video games, building things with Legos, trying out some cool homebrew tech (e.g., this augmented reality sandbox by 17-year-old Library innovator, Jake Brown (github)), and soon recording in audio studios. But what makes this space a platform is its visible openness to new ideas that invites the community to participate in the perpetual construction of the Library’s future.

    This is physically manifested in the presence of unfinished structures, including some built by a team of high school students. What will they be used for? No one is sure yet. The presence of lumber assembled by users for purposes to be devised by users and librarians together makes clear that this is a library that one way or another is always under construction, and that that construction is a collaborative, inventive, and playful process put in place by the Library, but not entirely owned by the Library.

    As conversations with the Library Director, Corinne Hill (LibraryJournal’s Librarian of the Year, 2014), and Mike Bradshaw of Colab — sort of a Chattanooga entrepreneurial ecosystem incubator — made clear, this is all about culture, not tech. Open space without a culture of innovation and collaboration is just an attic. Chattanooga has a strong community dedicated to establishing this culture. It is further along than most cities. But it’s lots of work: lots of networking, lots of patient explanations, and lots and lots of walking the walk.

    The Library itself is one outstanding example. It is serving its community’s needs in part by anticipating those needs (of course), but also by letting the community discover and develop its own interests. That’s what a platform is about.

    It’s also what the future is about.

     


    Here are two relevant things I’ve written about this topic: Libraries as Platforms and Libraries won’t create their own futures.


    September 22, 2014

    The future of libraries won’t be created by libraries

    Library Journal has posted an op-ed of mine that begins:

    The future of libraries won’t be created by libraries. That’s a good thing. That future is too big and too integral to the infrastructure of knowledge for any one group to invent it. Still, that doesn’t mean that libraries can wait passively for this new future. Rather, we must create the conditions by which libraries will be pulled out of themselves and into everything else.


    September 12, 2014

    Springtime at Shorenstein

    The Shorenstein Center is part of the Harvard Kennedy School of Government. The rest of the Center’s name — “On Media, Politics, and Public Policy” — tells more about its focus. Generally, its fellows are journalists or other media folk who are taking a semester to work on some topic in a community of colleagues.

    To my surprise, I’m going to spend the spring there. I’m thrilled.

I lied. I’m THRILLED.

The Shorenstein Center is an amazing place. It is a residential program, designed so that a community develops, and I expect to learn a tremendous amount and in general to be over-stimulated.

    The topic I’ll be working on has to do with the effect of open data platforms on journalism. There are a few angles to this, but I’m particularly interested in ways open platforms may be shaping our expectations for how news should be made accessible and delivered. But I’ll tell you more about this once I understand more.

I’ll have some other news about a part-time teaching engagement this spring, but I think I’d better make sure it’s OK with the school to say so.

    I also probably should point out that as of last week I left the Harvard Library Innovation Lab. I’ll get around to explaining that eventually.


    September 8, 2014

    Progress isn’t what it used to be

    At Medium.com I have a short piece on what progress looks like on the Internet, which is not what progress used to look like. I think.

    I wrote this for the Next Web conference blog. (I’m keynoting their Dec. conference in NYC.)


    June 29, 2014

    [aif] Government as platform

I’m at a Government as Platform session at the Aspen Ideas Festival. Tim O’Reilly is moderating it with Jen Pahlka (Code for America and US Deputy Chief Technology Officer) and Mike Bracken, who heads the UK Government Digital Service.

Mike Bracken begins with a short presentation. The Digital Service he heads sits at the center of govt. In 2011, they consolidated govt web sites that presented inconsistent policy explanations. The DS provides a central place that gives canonical answers. He says:

    • “Our strategy is delivery.” They created a platform for govt services: gov.uk. By having a unified platform, users know that they’re dealing with the govt. They won the Design of the Year award in 2013.

    • The DS also gives govt workers tools they can use.

    • They put measurements and analytics at the heart of what they do.

    • They are working on transforming the top 25 govt services.

    They’re part of a group that saved 14.3B pounds last year.

    Their vision goes back to James Brindley, who created a system of canals that transformed the economy. [Mike refers to “small pieces loosely joined.”] Also Joseph Bazalgette created the London sewers and made them beautiful.


[Image: (cc) James Pegrum]

    Here are five lessons that could be transferred to govt, he says:

1. Forget about the old structures. “Policy-led hierarchies make delivery impossible.” The future of govt will emerge from the places govt exists, i.e., where it is used. The drip drip drip of inadequate services undermines democracy more than does the failure of ideas.

    2. Forget the old binaries. It’s not about public or private. It’s about focusing on your users.

3. No more Big IT. It’s no longer true that big problems require big system solutions.

    4. This is a global idea. Sharing makes it stronger. New Zealand used gov.uk’s code, and gov.uk can then take advantage of their improvements.

    5. It should always have a local flavour. They have the GovStack: hw, sw, apps. Anyone can use it, adapt it to their own situation, etc.

    A provocation: “Govt as platform” is a fantastic idea, but when applied to govt without a public service ethos it becomes a mere buzzword. Public servants don’t “pivot.”

    Jen Pahlka makes some remarks. “We need to realize that if we can’t implement our policies, we can’t govern.” She was running Code for America. She and the federal CTO, Todd Park, were visiting Mike in the UK “which was like Disneyland for a govt tech geek like me.” Todd asked her to help with the Presidential Innovation Fellows, but she replied that she really wanted to work on the sort of issues that Mike had been addressing. Fix publishing. Fix transactions. Go wholesale.

    “We have 30-40,000 federal web sites,” she says. Tim adds, “Some of them have zero users.”

    Todd wanted to make the data available so people could build services, but the iPhone ships with apps already in place. A platform without services is unlikely to take off. “We think $172B is being spent on govt IT in this country, including all levels.” Yet people aren’t feeling like they’re getting the services they need.

    E.g., if we get immigration reform, there are lots of systems that would have to scale.

    Tim: Mike, you have top-level support. You report directly to a cabinet member. You also have a native delivery system — you can shut down failed services, which is much harder in the US.

Mike: I asked for very little money — 50M pounds — a building, and the ability to hire who we want. People want to work on stuff that matters with stellar people. We tried to figure out what are the most important services. We asked people in a structured way which was more important: a drivers license or a fishing license? A drivers license or a passport? This gave us important data. And we retired about 40% of govt content. There was content that no one ever read. There’s never any feedback.

    Tim: You have to be actually measuring things.

    Jen: There are lots of boxes you have to check, but none of them are “Is it up? Do people like it?”

    Mike: Govts think of themselves as big. But digital govt isn’t that big. Twelve people could make a good health care service. Govt needs to get over itself. Most of what govt does digitally is about the size of the average dating site. The site doesn’t have to get big for the usage of it to scale.

    Jen: Steven Levy wrote recently about how the Health Care site got built. [Great article -dw] It was a small team. Also, at Code for America, we’ve seen that the experience middle class people had with HealthCare.gov is what poor people experience every day. [my emphasis – such an important point!]

    Tim: Tell us about Code for America’s work in SF on food stamps.

Jen: We get folks from the tech world to work on civic projects. Last year they worked on the California food stamps program. One of our fellows enrolled in the program. Two months later, he got dropped from the rolls. This happens frequently. Then you have to re-enroll, which is expensive. People get dropped because they get letters from the program that are incomprehensible. Our fellows couldn’t understand the language. And the fellows weren’t allowed to change the language in the letter. So now people get text messages if there’s a problem with their account, expressed in simple clear language.

    Q&A

    Q: You’ve talked about services, but not about opening up data. Are UK policies changing about open data?

Mike: We’ve opened up a lot of data, but that’s just the first step. You don’t just open it up and expect great things to happen. A couple of problems: We don’t have a good grip on our data. It’s not consistent, it lives in macros and spreadsheets, and contractually it’s often in the hands of the people giving the service. Recently we wanted to add an organ donation checkbox and six words to the drivers license online page. We were told it would cost $50K and take 100 days. It took us about 15 mins. But the data itself isn’t the stimulus for new services.

    Q: How can we avoid this in the future?

    Mike: One thing: Require the govt ministers to use the services.

Jen: People were watching HealthCare.gov but were asking the wrong questions. And the environment is very hierarchical. We have to change the conversation from telling people what to do, to “Here’s what we think is going to work, can you try it?” We have to put policy people and geeks in conversation so they can say, no, that isn’t going to work.

    Q: The social security site worked well, except when I tried to change my address. It should be as easy as Yahoo. Is there any plan for post offices or voting?

    Mike: In the UK, the post offices were spun out. And we just created a register-to-vote service. It took 20 people.

    Q: Can you talk about the online to offline impact on public service, and measuring performance, and how this will affect govt? Where does the transformation start?

    Jen: It starts with delivery. You deliver services and you’re a long way there. That’s what Code for America has done: show up and build something. In terms of the power dynamics, that’s hard to change. CGI [the contractor that “did” HealthCare.gov] called Mike’s Govt Digital Service “an impediment to innovation,” which I found laughable.

    Tim: You make small advances, and get your foot in the door and it starts to spread.

Mike: I have a massive poster in my office: “Show the thing.” If you can’t create a version of what you want to build, even just HTML pages, then your project shouldn’t go forward.


    June 20, 2014

    [platform] Denmark recreated in Minecraft

According to an article in PC Gamer (August 2014, Ben Griffin, p. 12), two people from the Danish Ministry of the Environment “have recreated Denmark on 1:1 scale” in Minecraft. Although the idea came from observing their children playing the game, the construction required non-child-like automation. “By using standard open-source components, it was possible to break this down into a few thousand lines of code, most of which remaps various geospatial objects into Minecraft blocks…In total it took less than a week to calculate all 6437 files,” they said.
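As a rough, hypothetical sketch of what remapping geospatial data into Minecraft blocks can look like (this is not the ministry’s actual code, and the mapping rules here are invented), each grid cell of elevation and land-cover data becomes a vertical column of blocks:

def cell_to_blocks(elevation_m, land_cover):
    # Turn one grid cell into a column of block types, bottom to top.
    height = max(1, int(elevation_m))  # metres of elevation -> number of blocks
    surface = {
        "forest": "leaves",
        "water": "water",
        "urban": "brick",
    }.get(land_cover, "grass")         # invented land-cover-to-block rules
    return ["stone"] * (height - 1) + [surface]

# One hypothetical cell: 12 m of elevation, covered by forest.
print(cell_to_blocks(12.0, "forest"))

Run something like that over every cell in a national elevation and land-cover grid and you get, file by file, the kind of remapping the creators describe.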

    Yes, griefers have come, in tanks, blowing up landmarks, and planting their own country’s flags. But, the creators (Simon Kokkendorff and Thorbjørn Nielsen) point out that the vandals only destroyed “a few hectares.”


    [platform] Unreal Tournament 2014 to provide market for mods

According to an article in PC Gamer (August 2014, Ben Griffin, p. 10), Epic Games’ Unreal Tournament 2014 will make “Every line of code, every art asset and animation…available for download.” Users will be able to create their own mods and sell them through a provided marketplace. “Epic, naturally, gets a cut of the profits.”

    Steve Polge, project lead and senior programmer, said “I believe this development model gives us the opportunity to build a much better balanced and finely tuned game, which is vital to the long-term success of a competitive shooter.” He points out that players already contribute to design discussions.

