
November 26, 2014

Welcome to the open Net!

I wanted to play Tim Berners-Lee’s 1999 interview with Terry Gross on WHYY’s Fresh Air. Here’s how that experience went:

  • I find a link to it on a Slashdot discussion page.

  • The link goes to a text page that has links to Real Audio files encoded either for 28.8 or ISDN.

  • I download the ISDN version.

  • It’s a RAM (Real Audio) file that my Mac (Yosemite) cannot play.

  • I look for an updated version on the Fresh Air site. It has no way of searching, so I click through the archives to get to the Sept. 16, 1999 page.

  • It’s a 404 page-not-found page.

  • I search for a way to play an old RAM file.

  • The top hit takes me to Real Audio’s cloud service, which offers me 2 gigabytes of free storage. I decline.

  • I pause for ten silent seconds in amazement that the Real Audio company still exists. Plus it owns the domain “real.com.”

  • I download a copy of RealPlayerSP from CNET, thus probably also downloading a copy of MacKeeper. Thanks, CNET!

  • I open the Real Player converter and Apple tells me I don’t have permission because I didn’t buy it through Apple’s TSA clearance center. Thanks, Apple!

  • I do the control-click thang to open it anyway. It gives me a warning about unsupported file formats that I don’t understand.

  • I set System Preferences > Security so that I am allowed to open any software I want. Apple tells me I am degrading the security of my system by not giving Apple a cut of every software purchase. Thanks, Apple!

  • I drag in the RAM file. It has no visible effect.

  • I use the converter’s upload menu, but this converter produced by Real doesn’t recognize Real Audio files. Thanks, Real Audio!

  • I download and install the Real Audio Cloud app. When I open it, it immediately scours my disk looking for video files. I didn’t ask it to do that and I don’t know what it’s doing with that info. A quick check shows that it too can’t play a RAM file. I uninstall it as quickly as I can.

  • I download VLC, my favorite audio player. (It’s a new Mac and I’m still loading it with my preferred software.)

  • Apple lets me open it, but only after warning me that I shouldn’t trust it because it comes from [dum dum dum] The Internet. The scary scary Internet. Come to the warm, white plastic bosom of the App Store, it murmurs.

  • I drag the file into VLC. It fails, but it does me the favor of telling me why: It’s unable to connect to WHYY’s Real Audio server. Yup, this isn’t a media file, but a tiny file that sets up a connection between my computer and a server WHYY abandoned years ago. I should have remembered that that’s how Real worked. Actually, no, I shouldn’t have had to remember that. I’m just embarrassed that I did not. Also, I should have checked the size of the original Fresh Air file that I downloaded.

  • A search for “Tim Berners-Lee Fresh Air 1999” immediately turns up an NPR page that says the audio is no longer available.

    It’s no longer available because in 1999 Real Audio solved a problem for media companies: install an RA server and it’ll handle the messy details of sending audio to RA players across the Net. It seemed like a reasonable approach. But it was proprietary and so it failed, taking Fresh Air’s archives with it. Could and should Fresh Air have converted its files before it pulled the plug on the Real Audio server? Yeah, probably, but who knows what the contractual and technical situation was.

    By not following the example set by Tim Berners-Lee — open protocols, open standards, open hearts — this bit of history has been lost. In this case, it was an interview about TBL’s invention, thus confirming that irony remains the strongest force in the universe.
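That’s the gotcha VLC exposed: a .ram “audio file” contains no audio at all. Here’s a minimal sketch of everything such a file holds (the server address below is invented for illustration; the real WHYY server is long gone):

```python
# A .ram file is just a one-line text pointer to a streaming server,
# not the media itself. This address is made up for illustration.
ram_contents = "rtsp://audio.whyy.example/freshair/19990916.rm\n"

with open("interview.ram", "w") as f:
    f.write(ram_contents)

# "Playing" the file means reading the pointer and contacting that server.
with open("interview.ram") as f:
    stream_url = f.read().strip()

print(stream_url)  # when the server goes away, this URL is all that's left
```

When the server at the other end of that pointer disappears, so does the archive.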


    Categories: future, net neutrality, open access Tagged with: future • interoperability • open • platforms • protocols • web Date: November 26th, 2014 dw


    November 21, 2014

    APIs are magic

    (This is cross-posted at Medium.)

    Dave Winer recalls a post of his from 2007 about an API that he’s now revived:

    “Because Twitter has a public API that allows anyone to add a feature, and because the NY Times offers its content as a set of feeds, I was able to whip up a connection between the two in a few hours. That’s the power of open APIs.”

    Ah, the power of APIs! They’re a deep magic that draws upon five skills of the Web as Mage:

    First, an API matters typically because some organization has decided to flip the default: it assumes data should be public unless there’s a reason to keep it private.

    Second, an API works because it provides a standard, or at least well-documented, way for an application to request that data.

    Third, open APIs tend to be “RESTful,” which means that they work using the normal Web way of proceeding (i.e., Web protocols). All you or your program have to do is go to the API’s site using a standard URL of the sort you enter in a browser. The site comes back not with a Web page but with data. For example, click on this URL (or paste it into your browser) and you’ll get data from Wikipedia’s API: http://en.wikipedia.org/w/api.php?action=query&titles=San_Francisco&prop=images&imlimit=20&format=jsonfm. (This is from the Wikipedia API tutorial.)

    Fourth, you need people anywhere on the planet who have ideas about how that data can be made more useful or delightful. (cf. Dave Winer.)

    Fifth, you need a worldwide access system that makes the results of that work available to everyone on the Internet.
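To make the third point concrete, here is a sketch of how a program consumes such an API: it builds the same URL you could paste into a browser, then parses the JSON that comes back. (The response shown is abbreviated and illustrative, not Wikipedia’s actual full output.)

```python
import json
from urllib.parse import urlencode

# Build the same request URL you could type into a browser.
params = {
    "action": "query",
    "titles": "San_Francisco",
    "prop": "images",
    "imlimit": "20",
    "format": "json",
}
url = "https://en.wikipedia.org/w/api.php?" + urlencode(params)
print(url)

# The site responds with data, not a web page. A heavily abbreviated,
# illustrative response:
response_text = """
{"query": {"pages": {"49728": {"title": "San Francisco",
  "images": [{"title": "File:GoldenGateBridge.jpg"}]}}}}
"""
data = json.loads(response_text)
for page in data["query"]["pages"].values():
    for image in page["images"]:
        print(image["title"])
```

No special client, no proprietary protocol: a URL in, structured data out.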

    In short, APIs show the power of a connective infrastructure populated by ingenuity and generosity.

    In shorter shortness: APIs embody the very best of the Web.


    Categories: free culture, future Tagged with: apis • generosity • platforms • technology Date: November 21st, 2014 dw


    October 13, 2014

    Library as starting point

    A new report on Ithaka S+R's annual survey of libraries suggests that library directors are committed to libraries being the starting place for their users’ research, but that the users are not in agreement. This calls into question the expenditures libraries make to achieve that goal. (Hat tip to Carl Straumsheim and Peter Suber.)

    The question is good. My own opinion is that libraries should let Google do what it’s good at, while they focus on what they’re good at. And libraries are very good indeed at particular ways of discovery. The goal should be to get the mix right, not to make sure that libraries are the starting point for their communities’ research.

    The Ithaka S+R survey found that “The vast majority of the academic library directors…continued to agree strongly with the statement: ‘It is strategically important that my library be seen by its users as the first place they go to discover scholarly content.'” But the survey showed that only about half think that that’s happening. This gap can be taken as room for improvement, or as a sign that the aspiration is wrongheaded.

    The survey confirms that many libraries have responded to this by moving to a single-search-box strategy, mimicking Google. You just type in a couple of words about what you’re looking for and it searches across every type of item and every type of system for managing those items: images, archival files, books, maps, museum artifacts, faculty biographies, syllabi, databases, biological specimens… Just like Google. That’s the dream, anyway.

    I am not sold on it. Roger Schonfeld, the report’s author, cites Lorcan Dempsey, who is always worth listening to:

    Lorcan Dempsey has been outspoken in emphasizing that much of “discovery happens elsewhere” relative to the academic library, and that libraries should assume a more “inside-out” posture in which they attempt to reveal more effectively their distinctive institutional assets.

    Yes. There’s no reason to think that libraries are going to be as good at indexing diverse materials as Google et al. are. So, libraries should make it easier for the search engines to do their job. Library platforms can help. So can Schema.org as a way of enriching HTML pages about library items so that the search engines can easily recognize the library item metadata.
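For instance, a library platform could stamp each item page with machine-readable metadata. A minimal sketch, with an invented catalog record; the property names come from Schema.org’s Book type, embedded as JSON-LD:

```python
import json

# Hypothetical catalog record, using Schema.org's Book vocabulary.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Everything Is Miscellaneous",
    "author": {"@type": "Person", "name": "David Weinberger"},
}

# Embedded in the item's HTML page, this is trivial for crawlers to parse.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(record, indent=2)
    + "\n</script>"
)
print(snippet)
```

The library keeps curating; the search engines do the indexing they are already good at.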

    But assuming that libraries shouldn’t outsource all of their users’ searches, then what would best serve their communities? This is especially complicated since the survey reveals that preference for the library web site vs. the open Web varies based on just about everything: institution, discipline, role, experience, and whether you’re exploring something new or keeping up with your field. This leads Roger to provocatively ask:

    While academic communities are understood as institutionally affiliated, what would it entail to think about the discovery needs of users throughout their lifecycle? And what would it mean to think about all the different search boxes and user login screens across publishes [sic] and platforms as somehow connected, rather than as now almost entirely fragmented? …Libraries might find that a less institutionally-driven approach to their discovery role would counterintuitively make their contributions more relevant.

    I’m not sure I agree, in part because I’m not entirely sure what Roger is suggesting. If it’s that libraries should offer an experience that integrates all the sources scholars consult throughout the lifecycle of their projects or themselves, then, I’d be happy to see experiments, but I’m skeptical. Libraries generally have not shown themselves to be particularly adept at creating grand, innovative online user experiences. And why should they be? It’s a skill rarely exhibited anywhere on the Web.

    If designing great Web experiences is not a traditional strength of research libraries, the networked expertise of their communities is. So is the library’s uncompromised commitment to serving its community’s interests. A discovery system that learns from its community can do something that Google cannot: it can find connections that the community has discerned, and it can return results that are particularly relevant to that community. (It can make those connections available to the search engines also.)

    This is one of the principles behind the Stacklife project that came out of the Harvard Library Innovation Lab that until recently I co-directed. It’s one of the principles of the Harvard LibraryCloud platform that makes Stacklife possible. It’s one of the reasons I’ve been touting a technically dumb cross-library measure of usage. These are all straightforward ways to start to record and use information about the items the community has voted for with its library cards.

    And that is just the start. Anonymization and opt-in could provide rich sets of connections and patterns of usage. Imagine we could know what works librarians recommend in response to questions. Imagine if we knew which works were being clustered around which topics in lib guides and syllabi. (Support the Open Syllabus Project!) Imagine if we knew which books were being put on lists by faculty and students. Imagine if we knew what books were on participating faculty members’ shelves. Imagine we could learn which works the community thinks are awesome. Imagine if we could do this across institutions so that communities could learn from one another. Imagine we could do this with data structures that support wildly, messily linked sources, many of them within the library but many of them outside of it. (Support Linked Data!)
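Even the “technically dumb” version of such a measure can be sketched in a few lines: pool simple usage events from participating libraries and count them per work. (All the data below is invented for illustration.)

```python
from collections import Counter

# Invented cross-library usage events: (library, work, kind of "vote").
events = [
    ("library-a", "Moby-Dick", "checkout"),
    ("library-b", "Moby-Dick", "syllabus"),
    ("library-b", "Walden", "checkout"),
    ("library-c", "Moby-Dick", "faculty-list"),
]

# The dumb-but-useful measure: how often has each work been "voted for"
# with a library card, a syllabus, or a list, across institutions?
usage = Counter(work for _library, work, _kind in events)
print(usage.most_common())
```

The point is not the sophistication of the counting but that the counts come from communities of scholars rather than from the Web at large.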

    Let the Googles and Bings do what they do better than any sane person could have imagined twenty years ago. Let libraries do what they have been doing better than anyone else for centuries: supporting and learning from networked communities of scholars, librarians, and students who together are a profound source of wisdom and working insight.


    Categories: future, libraries, too big to know Tagged with: 2b2k • libraries • platforms Date: October 13th, 2014 dw


    June 20, 2014

    [platform] Unreal Tournament 2014 to provide market for mods

    According to an article in PC Gamer (August 2014, Ben Griffin, p. 10), Epic Games’ Unreal Tournament 2014 will make “Every line of code, every art asset and animation…available for download.” Users will be able to create their own mods and sell them through a provided marketplace. “Epic, naturally, gets a cut of the profits.”

    Steve Polge, project lead and senior programmer, said “I believe this development model gives us the opportunity to build a much better balanced and finely tuned game, which is vital to the long-term success of a competitive shooter.” He points out that players already contribute to design discussions.


    Categories: future, games Tagged with: games • markets • mods • platforms Date: June 20th, 2014 dw


    June 1, 2014

    Oculus Riiiiiiiiiift

    At the Tel Aviv headquarters of the Center for Educational Technology, an NGO I’m very fond of because of its simultaneous dedication to improving education and its embrace of innovative technology, I got to try an Oculus Rift.

    They put me on a virtual roller coaster. My real knees went weak.

    Holy smokes.

    [Photo: wearing an Oculus Rift]
    Earlier, I gave a talk at the Israeli Wikimedia conference. I was reminded — not that I actually need reminding — how much I like being around Wikipedians. And what an improbable work of art is Wikipedia.


    Categories: future, games, misc Tagged with: games • platforms • wikipedia Date: June 1st, 2014 dw


    [liveblog] Jan-Bart de Vreede at Wikimedia Israel

    I’m at the Israeli Wikimedia conference. The chair of the Wikimedia Foundation, Jan-Bart De Vreede, is being interviewed by Shizaf Rafaeli.

    NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

    Jan introduces himself. Besides being the chair, in the Netherlands he works on open educational resources at Kennisnet. He says that the Wikimedia Foundation is quite small compared to other organizations like it. Five members are elected by the community (anyone with enough edits can vote), there are four appointed members, and Jimmy Wales.

    Q: The Foundation is based on volunteers, and it has a budget. What are the components of the future for Wikipedia?

    A: We have to make sure we get the technology to the place where we’re prepared for the future. And how we can enable the volunteers to do whatever they want to achieve our mission of being the sum of all knowledge, which is a high bar? Enabling volunteers is the highest impact thing that we can do.

    Q: Students just did a presentation here based on the idea that Wikipedia already has too much information.

    A: It’s not up to us to decide how the info is consumed. We should make sure that the data is available to be presented any way people want to. We are moving toward WikiData: structured data and the relationship among that data. How can we make it easier for people to add data to WikiData without necessarily requiring people to edit pages? How can we enable people to tag data? Can we use that to learn what people find relevant?

    Q: What’s most important?

    A: WikiData. Then Wikipedia Zero, making WP access available in developing parts of the globe. We’re asking telecoms to provide free access to Wikipedia on mobile phones.

    Q: You’re talking with the Israeli Minister of Education tomorrow. About what?

    A: We have a project of Wikipedia for children, written by children. Children can have an educational experience — e.g., interview a Holocaust survivor — and share it so all benefit from it.

    Q: Any interesting projects?

    A: Wiki Monuments [link ?]. Wiki Air. So many ideas. So much more to do. The visual editor will help people make edits. But we also have to make sure that new editors are welcomed and are treated kindly. Someone once told Jan that she “just helps new editors,” and he replied that that scales much better than creating your own edits.

    Q: I’m surprised you didn’t mention reliability…

    A: Books feel trustworthy. The Net automatically brings a measure of distrust, and rightly so. Wikipedia over the years has come to feel trustworthy, but that requires lots of people looking at it and fixing it when it’s wrong.

    Q: 15,000 Europeans have applied to have their history erased on Google. The Israeli Supreme Court has made a judgment along the same lines. What’s Wikipedia’s stance on this?

    A: As we understand it, the right to be forgotten applies to search engines, not to source articles about you. Encyclopedia articles are about what’s public.

    Q: How much does the neutral point of view count?

    A: It’s the most important thing, along with being written by volunteers. Some Silicon Valley types have refused to contribute money because, they say, we have a business model that we choose not to use: advertising. We decided it’d be more important to get many small contributions than corrode NPOV by taking money.

    Q: How about paid editing so that we get more content?

    A: It’s a tricky thing. There are public and governmental institutions that pay employees to provide Open Access content to Wikipedia and Wiki Commons. On the other hand, there are organizations that take money to remove negative information about their clients. We have to make sure that there’s a way to protect the work of genuine volunteers from this. But even when we make a policy about it, the local Wikipedia units can override it.

    Q: What did you think of our recent survey?

    A: The Arab population was much more interested in editing Wikipedia than the Israeli population. How do you enable that? It didn’t surprise me that women are more interested in editing. We have to work against our systemic bias.

    Q: Other diversity dimensions we should pay more attention to?

    A: Our concept of encyclopedia itself is very Western. Our idea of citations is very Western and academic. Many cultures have oral citations. Wikipedia doesn’t know how to work with that. How can we accommodate knowledge that’s been passed down through generations?

    Q&A

    Q: Wikipedia doesn’t allow original research. Shouldn’t there be an open access magazine for new scientific research?

    A: There are a lot of OA efforts. If more are needed, they should start with volunteers.

    Q: Academics and Wikipedia have a touchy relationship. Wikipedia has won that battle. Isn’t it time to gear up for the next battle, i.e., creating open access journals?

    A: There are others doing this. You can always upload and publish articles, if you want [at Wiki Commons?].


    Categories: free culture, future, too big to know Tagged with: 2b2k • liveblog • platforms • wikipedia Date: June 1st, 2014 dw


    April 27, 2014

    The future is a platform

    Here’s the video of my talk at The Next Web in Amsterdam on Friday. I haven’t watched it because I don’t like watching me and neither should you. But I would be interested in your comments about what I’m feeling my way toward in this talk.

    It’s about what I think is a change in how we think about the future.


    Categories: future Tagged with: future • platforms • video Date: April 27th, 2014 dw


    November 9, 2013

    Aaron Swartz and the future of libraries

    I was unable to go to our local Aaron Swartz Hackathon, one of twenty around the world, because I’d committed (very happily) to give the after dinner talk at the University of Rhode Island Graduate Library and Information Studies 50th anniversary gala last night.

    The event brought together an amazing set of people, including Senator Jack Reed, the current and most recent presidents of the American Library Association, Joan Ress Reeves, 50 particularly distinguished alumni (out of the three thousand (!) who have been graduated), and many, many more. These are heroes of libraries. (My cousin’s daughter, Alison Courchesne, also got an award. Yay, Alison!)

    Although I’d worked hard on my talk, I decided to open it differently. I won’t try to reproduce what I actually said because the adrenalin of speaking in front of a crowd, especially one as awesome as last night’s, wipes out whatever short term memory remains. But it went very roughly something like this:

    It’s awesome to be in a room with teachers, professors, researchers, a provost, deans, and librarians: people who work to make the world better…not to mention the three thousand alumni who are too busy do-ing to be able to be here tonight.

    But it makes me remember another do-er: Aaron Swartz, the champion of open access, open data, open metadata, open government, open everything. Maybe I’m thinking about Aaron tonight because today is his birthday.

    When we talk about the future of libraries, I usually promote the idea of libraries as platforms — platforms that make openly available everything that libraries know: all the data, all the metadata, what the community is making of what they get from the library (privacy accommodated, of course), all the guidance and wisdom of librarians, all the content especially if we can ever fix the insane copyright laws. Everything. All accessible to anyone who wants to write an application that puts it to use.

    And the reason for that is because in my heart I don’t think librarians are going to invent the future of libraries. It’s too big a job for any one group. It will take the world to invent the future of libraries. It will take 14 year olds like Aaron to invent the future of libraries. We need to supply them with platforms that enable them.

    I should add that I co-direct a Library Innovation Lab where we do work that I’m very proud of. So, of course libraries will participate in the invention of their future. But it’ll take the world — a world that contains people with the brilliance and commitment of an Aaron Swartz — to invent that future fully.

     


    Here are wise words delivered at an Aaron Hackathon last night by Carl Malamud: Hacking Authority. For me, Carl is reminding us that the concept of hacking over-promises when the changes threaten large institutions that represent long-held values and assumptions. Change often requires the persistence and patience that Aaron exhibited, even as he hacked.


    Categories: libraries, open access Tagged with: 2b2k • aaron swartz • future • libraries • open access • platforms Date: November 9th, 2013 dw


    October 24, 2013

    E-Dickinson

    The Emily Dickinson archive went online today. It’s a big deal not only because of the richness of the collection, and the excellent technical work by the Berkman Center, but also because it is a good sign for Open Access. Amherst, one of the major contributors, had open accessed its Dickinson material earlier, and now the Harvard University Press has open accessed some of its most valuable material. Well done!

    The collection makes available in one place the great Dickinson collections held by Amherst, Harvard, and others. The metadata for the items is (inevitably) inconsistent in terms of its quantity, but the system has been tuned so that items with less metadata are not systematically overwhelmed by its search engine.

    The Berkman folks tell me that they’re going to develop an open API. That will be extra special cool.


    Categories: open access Tagged with: apis • dickinson • libraries • open access • platforms Date: October 24th, 2013 dw


    March 28, 2013

    [annotation][2b2k] Paolo Ciccarese on the Domeo annotation platform

    Paolo Ciccarese begins by reminding us just how vast the scientific literature is. We can’t possibly read everything we should. But “science is social” so we rely on each other, and build on each other’s work. “Everything we do now is connected.”

    NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

    Today’s media do provide links, but not enough. Things are so deeply linked. “How do we keep track of it?” How do we communicate with others so that when they read the same paper they get a little bit of our mental model, and see why we found the article interesting?

    Paolo’s project — Domeo [twitter:DomeoTool] — is a web app for “producing, browsing, and sharing manual and semi-automatic (structured and unstructured) annotations, using open standards.” Domeo shows you an article and lets you annotate fragments. You can attach a tag or an unstructured comment. The tag can be defined by the user or by a defined ontology. Domeo doesn’t care which ontologies you use, which means you could use it for annotating recipes as well as science articles.

    Domeo also enables discussions; it has a threaded messaging facility. You can also run text mining and entity recognition systems (Calais, etc.) that automatically annotate the work with the entities they find, which helps with search, understanding, and curation. This too can be a social process. Domeo lets you keep the annotation private or share it with colleagues, groups, communities, or the Web. Also, Domeo can be extended. In one example, it produces information about experiments that can be put into a database where it can be searched and linked up with other experiments and articles. Another example: “hypothesis management” lets readers add metadata to pick out the assertions and the evidence. (It uses RDF.) You can visualize the network of knowledge.

    It supports open APIs for integrating with other systems, including the Neuroscience Information Framework and Drupal. “Domeo is a platform.” It aims at supporting rich sources, and will add the ability to follow authors and topics, etc., and to enable mashups.
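The general shape of such an annotation can be sketched in a few lines. This is not Domeo’s actual wire format; the names below are modeled loosely on the Open Annotation vocabulary, and the article URL is invented. The record pairs a body (a tag or comment) with a target fragment of the article, plus a sharing scope:

```python
import json

# A hypothetical annotation record, loosely following the Open Annotation
# model: a body (tag or comment) attached to a target fragment.
annotation = {
    "@type": "oa:Annotation",
    "body": {"@type": "oa:Tag", "value": "hypothesis"},
    "target": {
        "source": "http://example.org/articles/1234",  # invented article URL
        "selector": {
            "@type": "oa:TextQuoteSelector",
            "exact": "science is social",
        },
    },
    "sharedWith": "group",  # private, group, community, or public
}
print(json.dumps(annotation, indent=2))
```

Because the record is structured data rather than a blob of text, it can be mined, searched, and mashed up by other systems, which is what makes the platform claim plausible.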


    Categories: interop, liveblog, too big to know Tagged with: 2b2k • annotation • interop • platforms Date: March 28th, 2013 dw




    Creative Commons License
    This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.
    TL;DR: Share this post freely, but attribute it to me (name (David Weinberger) and link to it), and don't use it commercially without my permission.

    Joho the Blog uses WordPress blogging software.
    Thank you, WordPress!