
April 14, 2015

[shorenstein] Managing digital disruption in the newsroom

David Skok [twitter:dskok] is giving a Shorenstein Center lunchtime talk on managing digital disruption in the newsroom. He was the digital advisor to the editor of the Boston Globe. Today he was announced as the new managing editor of digital at the Globe. [Congrats!]

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

As a Nieman fellow, David audited a class at Harvard Business School taught by Clay Christensen, of “disruptive innovation” fame. This gave him the sense that whether or not newspapers survive, journalism will. Companies can be disrupted, but in journalism that means that for every legacy publisher being disrupted, new entrants come in at the low end and move up market. E.g., Toyota started at the low end and ended up making Lexuses. David wrote an article with Christensen [this one?] arguing that you may start with aggregation and cute kittens, but as you move up market you need higher-quality journalism that brings in higher-value advertising. “So I came out of the project doubly motivated as a journalist,” but also wanting to hold off the narrative that the demise of newspapers is inevitable.


He helped start GlobalNews.ca and was then recruited by the Globe. There he applied the RPP model: the Resources, Processes, and Priorities you put in place to frame an organizational culture. It’s important for legacy publishers to see that it isn’t just tech that’s bringing down newspapers; the culture and foundational structure of those organizations are also to blame.

Priorities:
If you take away the Internet, a traditional news organization is a print factory line. Internet tasks were typically taken up by the equivalent groups within the org. Ultimately, the publisher’s job is to generate profit, so s/he picks the paths that lead most directly to short-term returns. But that means user experience gets shuffled down, as does the ability of the creators to do “frictionless journalism.” On the Internet, I can write the best lead, but if you can’t read it on your phone in 0.1 seconds, it doesn’t exist. The human experience has to be the most important thing. The consumer is the most important person in this whole transaction. How are we making sure that person is pleased?


In the past 18 months David has restructured the Globe online. He has also been the general manager of Boston.com. Every Monday he meets with all the group leads, including the sales team (which he does not manage, for reasons of journalistic ethics). This lets them set priorities not at the publisher level, where they are driven by profit, but by user and producer experience. The conceit is that if they produce good user and producer experiences, the journalism will be better, and that will ultimately drive more revenue in advertising and subscriptions.


The Globe had a free site (Boston.com) and a paywalled site (BostonGlobe.com), set up before his time. Relative to its size as a website business, Boston.com generates a remarkable amount of advertising revenue. BostonGlobe.com is a really healthy digital subscription business; it has more digital subscriptions than any other North American paper outside of the NYT and WSJ. These are separate businesses that had been smushed together, so David split them up.

Processes:

They’ve done a lot to change their newsroom processes. Engineers are now in the newsroom. They use agile processes. The newsroom is moving toward an 18-24 hour cycle as opposed to the print cycle.


We do three types of journalism on our sites:


1. Digital first — the “bloggy stuff.” How do we add something new to those conversations that provides the Globe’s unique perspective? We don’t want to be writing about things simply because everyone else is. We want to bring something new to it. We have three digital first writers.


2. The news of the day. We do a good job with this, as demonstrated during the Marathon bombing.


3. Enterprise stuff — long investigations, etc. Those stories get incredible engagement. “It’s heartening.” They’re experimenting with release schedules: how do you maximize the exposure of a piece?

Resources:
We’re looking at our content management system (CMS). Ezra Klein went to Vox in part because of their CMS. You need a CMS that gives reporters what they need and want. We also need better realtime analytics.


Priorities, Processes + Resources = organizational culture.

Q&A
Q: You’re optimistic…?


A: We’re now entering the third generation of journalism online. First: [missed it]. Second: SEO. Third: the social phase, the network effect. How are we engaging our readers so that they feel responsible for helping us succeed? We’re not in the business of selling impressions [=page views, etc.] but experiences. E.g., we have a bracket competition (“Munch Madness”) for restaurant reviews. We tell advertisers that you’re getting not just views but experiences.


Q: [alex jones] And these revenues are enough to enable the Globe to continue…?


A: It would be foolish of me to say yes, but …


Q: [alex jones] How does the Globe attract an audience that’s excited but civil?


A: Part of it is thinking about new ways of doing journalism. E.g., for the Tsarnaev trial, we created cards that appear on every page and give you a synopsis of the day’s news and all the witnesses and evidence online. We made those cards embeddable and available to any publisher who wanted them. We reached out to every publisher in New England that can’t cover the trial in the depth the Globe can and offered the cards for free. “We didn’t get as much uptake as we’d like,” perhaps because the competitive juices are still flowing.


Then there are the comments. When news orgs first put comments on their site, they thought about them as digital letters to the editor. Comments serve another purpose: they are a product and platform in and of themselves where your community can talk about your product. They’re not really tied to the article. Some comments “make me weep because they’re so beautiful.”


Q: As journalists are being asked to do much more, what do you think about the pay scale declining?


A: I can’t speak for the industry. The Globe pays competitively. We’re creating jobs now. And there are so many more outlets out there than existed five years ago. Journalists today aren’t just writers. They’re software engineers, designers, etc.


I’m increasingly concerned about the lack of women engineers entering the field. Newspapers have as much responsibility as any other industry to address this issue.


Q: How to monetize aggregators?


A: If we were to try to go after every org that aggregates us, it’d be a fulltime job. We released a story online on a Feb. afternoon about Jeb Bush at Andover. [This one?] By Friday night, it was all over. I don’t view it as a threat. We have a meter. My job is to make sure that our reporting is good enough that you’ll use your credit card and sign up. I’m in awe of the number of people who sign up every day. We have churn issues, as does everyone, but the meter business has been a success.


Q: [me] As you redo your CMS, have you thought about putting in an API? If so, would you consider opening it to the public?


A: When I’ve opened up API sets, there has been minimal uptake.


Q: What other newspapers are doing a good job addressing digital issues? And does the ownership structure matter?


A: The Washington Post, and it has an ownership structure very similar to the Globe’s.


Q: [alex] What’s Bezos’s effect on the WaPo?


A: Having the Post appear on every Kindle is something we’d all like for ourselves.


Q: Release schedule?


A: Our newsroom’s phenomenal editors recognize that we are not a platform-specific business. We found that only one in four of our print subscribers logs on to the web site with any frequency. We have two different audiences. We’ve seen no evidence that releasing stories earlier on digital cannibalizes our print business. I love print. But when I get the Sunday edition, I feel guilty if I recycle it before I’ve read it all. So why not give people the opportunity to read it when they want? If a story is ready on a Wednesday, let them read it on Wednesday. Different platforms have different reader habits.

Q: What’s native to the print version?

A: Some of the enterprise reporting, perhaps. But it’s more obvious in matters of format. E.g., print laid out the 30 charges against Tsarnaev. It had an emotional impact that digital did not.


Q: Is your print audience entirely over the age of 50?


A: No. It’s a little older than our overall numbers, but not that much.


Q: What are you doing to reduce the churn rate? What’s worked on getting print and digital folks to understand each other?


A: I’m a firm believer in data. We’re not pushing for digital change because we want to but because the data back up our claims. About frictionlessness: it’s so easy to buy goods these days — Uber, even buying a necklace. We’re working with a complex backend database, and we have to tie it into our digital product. The front-end complexity in how users can pay comes from the complexity of the back end.


Q: [nick sinai] I appreciate your comments about bringing designers, developers, and UX into the newsroom. That’s what we’re trying to do in the government with digital services. How about data journalism?


A: Data journalism lets you tell stories you didn’t know were there. My one issue: we’ve reached a barrier in that we’re reliant on what datasets are available.


Q: How many reporters work for print, Boston.com, and BostonGlobe.com?


A: 250 journalists or so work for the Globe and they all work for all platforms.


Q: Are different devices attracting different stories? E.g., a long enterprise story may do better on particular devices. Where is contradiction, nuance, subtlety in this environment? How much is constrained by the device?


A: Yes, there are form-specific things. But there are also social-specific things. If you’re coming from Reddit, your behavior is different from your behavior coming from Facebook, etc. Each provides its own unique expectation of the reader. We’re trying to figure out how to be smarter in detecting where you’re coming from and what assets we should serve up to you. E.g., if you’re coming from Reddit and are going back to talk about the article, maybe you’re never going to subscribe, but could we provide a FB Like button, etc.?
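David presents this as an open question, but the mechanics are easy to sketch. Here is a hypothetical, minimal version of referrer-aware serving in Python. The source-to-CTA mapping and every name in it are invented for illustration; this is not the Globe’s actual system:

```python
from urllib.parse import urlparse

# Invented mapping from traffic source to the call-to-action we render.
# A real newsroom would tune this from analytics, not hardcode it.
CTA_BY_SOURCE = {
    "reddit.com": "share_buttons",        # drive-by reader: ask for a share
    "facebook.com": "newsletter_signup",  # social reader: build a habit
    "news.google.com": "related_stories",
}
DEFAULT_CTA = "subscription_meter"        # direct/organic: pitch the meter

def pick_cta(referer=None):
    """Choose which call-to-action to serve for this page view."""
    if not referer:
        return DEFAULT_CTA
    host = urlparse(referer).netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return CTA_BY_SOURCE.get(host, DEFAULT_CTA)

print(pick_cta("https://www.reddit.com/r/boston/"))  # -> share_buttons
print(pick_cta(None))                                # -> subscription_meter
```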


Q: Analytics?


A: The most important metric for me is journalistic impact. That’s hard to measure. Sheer numbers? The three legislators who can change a law? More broadly: at the top of the funnel, it’s about growing our audience: page views, shares, unique visitors, etc. Deeper in the funnel, it’s about how much you engage with the site: bounce rate, path, page views per visit, time spent, etc. Third metric: return frequency. If you had a really good experience, did you come back: return visits, subscribers, etc.
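For concreteness, here’s a toy sketch of those three funnel tiers computed from visit records. The data shape is invented; a real site would pull these numbers from its analytics pipeline:

```python
from statistics import mean

# Invented visit records: one dict per session.
visits = [
    {"user": "a", "pages": 1, "seconds": 20},
    {"user": "a", "pages": 5, "seconds": 340},
    {"user": "b", "pages": 3, "seconds": 150},
]

# Top of the funnel: raw audience size.
page_views = sum(v["pages"] for v in visits)
unique_visitors = len({v["user"] for v in visits})

# Middle: engagement with the site per visit.
bounce_rate = sum(v["pages"] == 1 for v in visits) / len(visits)
pages_per_visit = mean(v["pages"] for v in visits)
avg_seconds = mean(v["seconds"] for v in visits)

# Bottom: return frequency — did a good experience bring readers back?
visits_per_user = len(visits) / unique_visitors

print(page_views, unique_visitors, round(bounce_rate, 2),
      round(pages_per_visit, 1), round(avg_seconds), visits_per_user)
```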


[Really informative talk.]

1 Comment »

January 7, 2015

Harvard Library adopts LibraryCloud

According to a post by the Harvard Library, LibraryCloud is now officially a part of the Library toolset. It doesn’t even have the word “pilot” next to it. I’m very happy and a little proud about this.

LibraryCloud is two things at once. Internal to Harvard Library, it’s a metadata hub that lets lots of different data inputs be normalized, enriched, and distributed. As those inputs change, you can change LibraryCloud’s workflow process once, and all the apps and services that depend upon those data can continue to work without making any changes. That’s because LibraryCloud makes the data that’s been input available through an API which provides a stable interface to that data. (I am overstating the smoothness here. But that’s the idea.)

To the Harvard community and beyond, LibraryCloud provides open APIs to access tons of metadata gathered by Harvard Library. LibraryCloud already has metadata about 18M items in the Harvard Library collection — one of the great collections — including virtually all the books and other items in the catalog (nearly 13M), a couple of million images in the VIA collection, and archives at the folder level in Harvard OASIS. New data can be added relatively easily, and because LibraryCloud is workflow-based, that data can be updated, normalized, and enriched automatically. (Note that we’re talking about metadata here, not the content. That’s a different kettle of copyrighted fish.)
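To give a taste of what the open API makes possible, here’s a minimal sketch of querying LibraryCloud for item metadata. The endpoint, parameter names, and response shape are my assumptions from a reading of the public docs — treat it as illustrative, and check LibraryCloud’s own documentation for the real interface:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Assumed LibraryCloud v2 item endpoint; verify against the actual docs.
BASE = "https://api.lib.harvard.edu/v2/items.json"

params = urlencode({"title": "small pieces loosely joined", "limit": 5})
with urlopen(f"{BASE}?{params}") as resp:
    data = json.load(resp)

# The response shape is also an assumption: a paged set of item records.
print(json.dumps(data, indent=2)[:500])
```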

LibraryCloud began as an idea of mine (yes, this is me taking credit for the idea) about 4.5 years ago. With the help of the Harvard Library Innovation Lab, which I co-directed until a few months ago, we invited in local libraries and had a great conversation about what could be done if there were an open API to metadata from multiple libraries. Over time, the Lab built an initial version of LibraryCloud primarily with Harvard data, but with scads of data from non-Harvard sources. (Paul Deschner, take many many bows. Matt Phillips, too.) This version of LibraryCloud — now called lilCloud — is still available and is still awesome.

With the help of the Library Lab, a Harvard internal grant-giving group, we began a new version based on a workflow engine and hosted in the Amazon cloud. (Jeffrey Licht, Michael Vandermillen, Randy Stern, Paul Deschner, Tracey Robinson, Robin Wendler, Scott Wicks, Jim Borron, Mary Lee Kennedy, and many more, take bows as well. And we couldn’t have done it without you, Arcadia Foundation!) (Note that I suffer from Never Gets a List Right Syndrome, so if I left you out, blame my brain and let me know. Don’t be shy. I’m ashamed already.)

The Harvard version of LibraryCloud is a one-library implementation, although that one library comprises 73 libraries. Thus the LibraryCloud Harvard has adopted is a good distance from the initial vision of a single API for accessing multiple libraries. But it’s a big first step. It’s open source code [documentation]. Who knows?

I think it’s impressive that Harvard Library has taken this step toward adopting a platform architecture, and it’s cool beyond cool that this architecture is further opening up Harvard Library’s metadata riches to any developer or site that wants to use it. (This also would not have happened without Harvard Library’s enlightened Open Metadata policy.)

1 Comment »

December 24, 2014

Fame. Web Fame. Mass Web Fame.

A weird thing happened yesterday. First I got a call from a Swedish journalist writing about a Danish kid who has become famous on the Net for nothing in particular and is now weighing his options as a possible recording star. Since I’ve written about Web fame (in Small Pieces Loosely Joined, in 2002) and talked about it (in the keynote of the first ROFLcon conference in 2008), he called me and we had a fun conversation.

That conversation prompted me to write a post about how Web fame has changed over the past few years. I was mostly through a first draft when I got a call from a journalist at a well-known US newspaper who is doing a story about Web fame, and wanted to talk with me about it. Huh?

Keep in mind that I hadn’t yet posted about the topic. He got to me totally independently of the Swedish journalist. And it’s not like I spend my mornings talking to the press. It’s just a completely weird coincidence.

Anyway, afterwards I posted what I had written. It’s at Medium. Here’s the beginning:

It’s a great time to be famous, at least if you’re interested in innovating new types of fame. If you’re instead looking for old-fashioned fame, you’re out of luck. We’re in a third epoch of fame, and this one is messier than any of the others. (Sure, that’s an oversimplification, but what isn’t?)

Before the Web there was Mass Fame, the fame bestowed upon lucky (?) individuals by the mass media. The famous were not like you and me. They were glamorous, had an aura, were smiled upon by the gods.

Fame back then was something that was done to the audience. We could accept or reject those thrust upon us by the mass media, but since fame was defined as mass awareness of someone, the mass media were ultimately in control.

With the dawn of the Web there was Internet Fame. We made people famous…[more]

(Amanda Palmer, whom I use as a positive example of the new possibilities, facebooked the post, which makes me one degree from famous!)

Be the first to comment »

December 14, 2014

Jeff Jarvis on journalism as a service

My wife and I had breakfast with Jeff Jarvis on Thursday, so I took the opportunity to do a quick podcast with him about his new book Geeks Bearing Gifts: Imagining New Futures for News.

I like the book a lot. It proposes that we understand journalism as a provider of services rather than of content. Jeff then dissolves journalism into its component parts and asks us to imagine how they could be envisioned as sustainable services designed to help readers (or viewers) accomplish their goals. It’s more a brainstorming session (as Jeff confirms in the podcast) than a “10 steps to save journalism” tract, and some of the possibilities seem more plausible — and more journalistic — than others, but that’s the point.

If I were teaching a course on the future of journalism, or if I were convening my newspaper’s staff to think about the future of our newspaper, I’d have them read Geeks Bearing Gifts if only to blow up some calcified assumptions.

1 Comment »

November 26, 2014

Welcome to the open Net!

I wanted to play Tim Berners-Lee’s 1999 interview with Terry Gross on WHYY’s Fresh Air. Here’s how that experience went:

  • I find a link to it on a SlashDot discussion page.

  • The link goes to a text page that has links to Real Audio files encoded either for 28.8 or ISDN.

  • I download the ISDN version.

  • It’s a RAM (Real Audio) file that my Mac (Yosemite) cannot play.

  • I look for an updated version on the Fresh Air site. It has no way of searching, so I click through the archives to get to the Sept. 16, 1999 page.

  • It’s a 404 page-not-found page.

  • I search for a way to play an old RAM file.

  • The top hit takes me to Real Audio’s cloud service, which offers me 2 gigabytes of free storage. I decline.

  • I pause for ten silent seconds in amazement that the Real Audio company still exists. Plus it owns the domain “real.com.”

  • I download a copy of RealPlayerSP from CNET, thus probably also downloading a copy of MacKeeper. Thanks, CNET!

  • I open the Real Player converter and Apple tells me I don’t have permission because I didn’t buy it through Apple’s TSA clearance center. Thanks, Apple!

  • I do the control-click thang to open it anyway. It gives me a warning about unsupported file formats that I don’t understand.

  • I set System Preferences > Security so that I am allowed to open any software I want. Apple tells me I am degrading the security of my system by not giving Apple a cut of every software purchase. Thanks, Apple!

  • I drag in the RAM file. It has no visible effect.

  • I use the converter’s upload menu, but this converter produced by Real doesn’t recognize Real Audio files. Thanks, Real Audio!

  • I download and install the Real Audio Cloud app. When I open it, it immediately scours my disk looking for video files. I didn’t ask it to do that and I don’t know what it’s doing with that info. A quick check shows that it too can’t play a RAM file. I uninstall it as quickly as I can.

  • I download VLC, my favorite audio player. (It’s a new Mac and I’m still loading it with my preferred software.)

  • Apple lets me open it, but only after warning me that I shouldn’t trust it because it comes from [dum dum dum] The Internet. The scary scary Internet. Come to the warm, white plastic bosom of the App Store, it murmurs.

  • I drag the file into VLC. It fails, but it does me the favor of telling me why: It’s unable to connect to WHYY’s Real Audio server. Yup, this isn’t a media file, but a tiny file that sets up a connection between my computer and a server WHYY abandoned years ago. I should have remembered that that’s how Real worked. Actually, no, I shouldn’t have had to remember that. I’m just embarrassed that I did not. Also, I should have checked the size of the original Fresh Air file that I downloaded.

  • A search for “Tim Berners-Lee Fresh Air 1999” immediately turns up an NPR page that says the audio is no longer available.

It’s no longer available because in 1999 Real Audio solved a problem for media companies: install an RA server and it’ll handle the messy details of sending audio to RA players across the Net. It seemed like a reasonable approach. But it was proprietary, and so it failed, taking Fresh Air’s archives with it. Could and should Fresh Air have converted its files before it pulled the plug on the Real Audio server? Yeah, probably, but who knows what the contractual and technical situation was.

By not following the example set by Tim Berners-Lee — open protocols, open standards, open hearts — this bit of history has been lost. In this case, it was an interview about TBL’s invention, thus confirming that irony remains the strongest force in the universe.

1 Comment »

November 21, 2014

APIs are magic

(This is cross-posted at Medium.)

Dave Winer recalls a post of his from 2007 about an API that he’s now revived:

“Because Twitter has a public API that allows anyone to add a feature, and because the NY Times offers its content as a set of feeds, I was able to whip up a connection between the two in a few hours. That’s the power of open APIs.”

Ah, the power of APIs! They’re a deep magic that draws upon five skills of the Web as Mage:

First, an API typically matters because some organization has decided to flip the default: it assumes data should be public unless there’s a reason to keep it private.

Second, an API works because it provides a standard, or at least well-documented, way for an application to request that data.

Third, open APIs tend to be “RESTful,” which means that they work using the normal Web way of proceeding (i.e., Web protocols). All you or your program has to do is go to the API’s site using a standard URL of the sort you enter in a browser. The site comes back not with a Web page but with data. For example, click on this URL (or paste it into your browser) and you’ll get data from Wikipedia’s API: http://en.wikipedia.org/w/api.php?action=query&titles=San_Francisco&prop=images&imlimit=20&format=jsonfm. (This is from the Wikipedia API tutorial. A short code sketch follows this list.)

Fourth, you need people anywhere on the planet who have ideas about how that data can be made more useful or delightful. (cf. Dave Winer.)

Fifth, you need a worldwide access system that makes the results of that work available to everyone on the Internet.
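To make the “program” half of that concrete, here’s the same Wikipedia query run from code rather than a browser. Two small changes from the URL above: format=json instead of jsonfm (which is pretty-printed for humans), and a descriptive User-Agent header, which Wikimedia asks API clients to send:

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

params = urlencode({
    "action": "query",
    "titles": "San_Francisco",
    "prop": "images",
    "imlimit": 20,
    "format": "json",  # jsonfm is for humans; json is for programs
})
url = f"https://en.wikipedia.org/w/api.php?{params}"
req = Request(url, headers={"User-Agent": "api-magic-demo/0.1"})

with urlopen(req) as resp:
    data = json.load(resp)

# Walk the standard MediaWiki response: query -> pages -> images.
for page in data["query"]["pages"].values():
    for img in page.get("images", []):
        print(img["title"])
```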

In short, APIs show the power of a connective infrastructure populated by ingenuity and generosity.

In shorter shortness: APIs embody the very best of the Web.

Be the first to comment »

October 13, 2014

Library as starting point

A new report by Roger Schonfeld on Ithaka S+R‘s annual survey of libraries suggests that library directors are committed to libraries being the starting place for their users’ research, but that the users are not in agreement. This calls into question the expenditures libraries make to achieve that goal. (Hat tip to Carl Straumsheim and Peter Suber.)

The question is a good one. My own opinion is that libraries should let Google do what it’s good at, while they focus on what they’re good at. And libraries are very good indeed at particular ways of discovery. The goal should be to get the mix right, not to make sure that libraries are the starting point for their communities’ research.

The Ithaka S+R survey found that “The vast majority of the academic library directors…continued to agree strongly with the statement: ‘It is strategically important that my library be seen by its users as the first place they go to discover scholarly content.’” But the survey showed that only about half think that that’s happening. This gap can be taken as room for improvement, or as a sign that the aspiration is wrongheaded.

The survey confirms that many libraries have responded by moving to a single-search-box strategy, mimicking Google. You just type in a couple of words about what you’re looking for, and it searches across every type of item and every type of system for managing those items: images, archival files, books, maps, museum artifacts, faculty biographies, syllabi, databases, biological specimens… Just like Google. That’s the dream, anyway.

I am not sold on it. Roger cites Lorcan Dempsey, who is always worth listening to:

Lorcan Dempsey has been outspoken in emphasizing that much of “discovery happens elsewhere” relative to the academic library, and that libraries should assume a more “inside-out” posture in which they attempt to reveal more effectively their distinctive institutional assets.

Yes. There’s no reason to think that libraries are going to be as good at indexing diverse materials as Google et al. are. So libraries should make it easier for the search engines to do their job. Library platforms can help. So can Schema.org, as a way of enriching HTML pages about library items so that the search engines can easily recognize the item metadata.
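As a sketch of what that Schema.org enrichment can look like, here’s a snippet that turns a catalog record into a JSON-LD block for embedding in the item’s HTML page. The record fields are invented; the vocabulary is Schema.org’s Book type:

```python
import json

# An invented catalog record standing in for a real library system's data.
record = {
    "title": "Small Pieces Loosely Joined",
    "author": "David Weinberger",
    "isbn": "9780738208503",
}

# Map the record onto Schema.org's Book vocabulary.
jsonld = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": record["title"],
    "author": {"@type": "Person", "name": record["author"]},
    "isbn": record["isbn"],
}

# Embed this alongside the human-readable page so crawlers can parse it.
print('<script type="application/ld+json">')
print(json.dumps(jsonld, indent=2))
print("</script>")
```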

But assuming that libraries shouldn’t outsource all of their users’ searches, what would best serve their communities? This is especially complicated, since the survey reveals that preference for the library web site vs. the open Web varies based on just about everything: institution, discipline, role, experience, and whether you’re exploring something new or keeping up with your field. This leads Roger to provocatively ask:

While academic communities are understood as institutionally affiliated, what would it entail to think about the discovery needs of users throughout their lifecycle? And what would it mean to think about all the different search boxes and user login screens across publishes [sic] and platforms as somehow connected, rather than as now almost entirely fragmented? …Libraries might find that a less institutionally-driven approach to their discovery role would counterintuitively make their contributions more relevant.

I’m not sure I agree, in part because I’m not entirely sure what Roger is suggesting. If it’s that libraries should offer an experience that integrates all the sources scholars consult throughout the lifecycle of their projects or themselves, then I’d be happy to see experiments, but I’m skeptical. Libraries generally have not shown themselves to be particularly adept at creating grand, innovative online user experiences. And why should they be? It’s a skill rarely exhibited anywhere on the Web.

If designing great Web experiences is not a traditional strength of research libraries, the networked expertise of their communities is. So is the library’s uncompromised commitment to serving its community’s interests. A discovery system that learns from its community can do something that Google cannot: it can find connections that the community has discerned, and it can return results that are particularly relevant to that community. (It can make those connections available to the search engines, too.)

This is one of the principles behind the Stacklife project that came out of the Harvard Library Innovation Lab, which until recently I co-directed. It’s one of the principles of the Harvard LibraryCloud platform that makes Stacklife possible. It’s one of the reasons I’ve been touting a technically dumb cross-library measure of usage. These are all straightforward ways to start to record and use information about the items the community has voted for with its library cards.

And that’s just the start. Anonymization and opt-in could provide rich sets of connections and patterns of usage. Imagine if we could know what works librarians recommend in response to questions. Imagine if we knew which works were being clustered around which topics in lib guides and syllabi. (Support the Open Syllabus Project!) Imagine if we knew which books were being put on lists by faculty and students. Imagine if we knew what books were on participating faculty members’ shelves. Imagine if we could learn which works the community thinks are awesome. Imagine if we could do this across institutions so that communities could learn from one another. Imagine if we could do this with data structures that support wildly, messily linked sources, many of them within the library but many of them outside of it. (Support Linked Data!)

Let the Googles and Bings do what they do better than any sane person could have imagined twenty years ago. Let libraries do what they have been doing better than anyone else for centuries: supporting and learning from networked communities of scholars, librarians, and students who together are a profound source of wisdom and working insight.

Be the first to comment »

October 7, 2014

Library as a platform: Chattanooga

I finally got to see the Chattanooga Library. It was even better than I’d expected. In fact, you can see the future of libraries emerging there.

That’s not to say that you can simply list what it’s doing, do the same things, and declare yourself the Library of the Future. Rather, Chattanooga Library has turned itself into a platform. That’s where the future is, not in the particular programs and practices that happen to emerge from that platform.

I got to visit, albeit all too briefly, because my friend Nate Hill, assistant director of the Library, invited me to speak at the kickoff of Chattanooga Startup Week. Nate runs the fourth-floor space. It had been the Library’s attic, but it has now been turned into an open lab space for work in both software and hardware. The place is a pleasing shambles (still neater than my office), open to the public every afternoon. It is the sort of place that invites you to try something out — a laser cutter, the inevitable 3D printer, an arduino board … or to talk with one of the people at work there creating apps or liberating data.

The Library has a remarkable open data platform, but that’s not what makes this Library itself into a platform. It goes deeper than that.

Go down to the second floor and you’ll see the youth area under the direction/inspiration of Justin Hoenke. It’s got lots of things that kids like to do, including reading books, of course. But also playing video games, building things with Legos, trying out some cool homebrew tech (e.g., this augmented reality sandbox by 17-year-old Library innovator Jake Brown (github)), and, soon, recording in audio studios. But what makes this space a platform is its visible openness to new ideas, which invites the community to participate in the perpetual construction of the Library’s future.

This is physically manifested in the presence of unfinished structures, including some built by a team of high school students. What will they be used for? No one is sure yet. The presence of lumber assembled by users, for purposes to be devised by users and librarians together, makes clear that this is a library that one way or another is always under construction, and that that construction is a collaborative, inventive, and playful process put in place by the Library, but not entirely owned by the Library.

As conversations with the Library Director, Corinne Hill (Library Journal’s Librarian of the Year, 2014), and Mike Bradshaw of Colab — sort of a Chattanooga entrepreneurial ecosystem incubator — made clear, this is all about culture, not tech. Open space without a culture of innovation and collaboration is just an attic. Chattanooga has a strong community dedicated to establishing this culture. It is further along than most cities. But it’s lots of work: lots of networking, lots of patient explanations, and lots and lots of walking the walk.

The Library itself is one outstanding example. It is serving its community’s needs in part by anticipating those needs (of course), but also by letting the community discover and develop its own interests. That’s what a platform is about.

It’s also what the future is about.

Here are two relevant things I’ve written about this topic: Libraries as Platforms and Libraries won’t create their own futures.

3 Comments »

September 22, 2014

The future of libraries won’t be created by libraries

Library Journal has posted an op-ed of mine that begins:

The future of libraries won’t be created by libraries. That’s a good thing. That future is too big and too integral to the infrastructure of knowledge for any one group to invent it. Still, that doesn’t mean that libraries can wait passively for this new future. Rather, we must create the conditions by which libraries will be pulled out of themselves and into everything else.

2 Comments »

September 12, 2014

Springtime at Shorenstein

The Shorenstein Center is part of the Harvard Kennedy School of Government. The rest of the Center’s name — “On Media, Politics, and Public Policy” — tells more about its focus. Generally, its fellows are journalists or other media folk who are taking a semester to work on some topic in a community of colleagues.

To my surprise, I’m going to spend the spring there. I’m thrilled.

I lied. I’m *THRILLED*.

The Shorenstein Center is an amazing place. It’s a residential program, designed so that a community will develop, so I expect to learn a tremendous amount and in general to be over-stimulated.

The topic I’ll be working on has to do with the effect of open data platforms on journalism. There are a few angles to this, but I’m particularly interested in ways open platforms may be shaping our expectations for how news should be made accessible and delivered. But I’ll tell you more about this once I understand more.

I’ll have some other news about a part-time teaching engagement this spring, but I think I’d better make sure it’s OK with the school to say so.

I also probably should point out that as of last week I left the Harvard Library Innovation Lab. I’ll get around to explaining that eventually.

3 Comments »
