
December 13, 2014

[2b2k] The Harvard Business School Digital Initiative’s webby new blog

The Harvard Business School Digital Initiative [twitter:digHBS] — led by none other than Berkman’s Dr. Colin Maclay — has launched its blog. The Digital Initiative is about helping HBS explore the many ways the Net is affecting (or not affecting) business. From my point of view, it’s also an opportunity to represent, and advocate for, Net values within HBS.[1] (Disclosure: I am officially affiliated with the Initiative as an unremunerated advisor. Colin is a dear friend.[2])

The new blog is off to a good start.

I also have a post there titled “Generative Business and the Power of What We Ignore.” Here’s how it starts:

“I ignore them. That’s my conscious decision.”

So replied CV Harquail to a question from HBS professor Karim Lakhani about the effect of bad actors in the model of “generative business” she was explaining in a recent talk sponsored by the Digital Initiative.

Karim’s question addressed an issue that more than one of us around the table were curious about. Given CV’s talk, the question was perhaps inevitable.

CV’s response was not inevitable. It was, in fact, surprising. And it seems to me to have been not only entirely appropriate, but also brave… [more]

  


[1] I understand that the Net doesn’t really have values. It’s shorthand.
[2] I’m very happy to say that more than half of the advisors are women.


November 24, 2014

[siu] Accessing content

Alex Hodgson of ReadCube is leading a panel called “Accessing Content: New Thinking and New Business Models for Accessing Research Literature” at the Shaking It Up conference.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Robert McGrath is from ReadCube, a platform for managing references. You import your PDFs, read them with their enhanced reader, and can annotate them and discover new content. You can click on references in the PDF and go directly to the sources. If you hit a paywall, they provide a set of options, including a temporary “checkout” of the article for $6. Academic libraries can set up a fund to pay for such access.

Eric Hellman talks about Unglue.it. Everyone in the book supply chain wants a percentage. But free e-books break the system because there are no percentages to take. “Even libraries hate free ebooks.” So, how do you give access to Oral Literature in Africa in Africa? Unglue.it ran a campaign, raised money, and liberated it. How do you get free textbooks into circulation? Teachers don’t know what’s out there. Unglue.it is creating MARC records for these free books to make it easy for libraries to include them. The novel Zero Sum Game is a great book that the author put out under a Creative Commons license, but how do you find out that it’s available? Likewise for Barbie: A Computer Engineer, which is a legal derivative of a much worse book. Unglue.it has over 1,000 Creative Commons-licensed books in their collection. One of Unglue.it’s projects: an author pledges to make the book available for free after a revenue target has been met. [Great! A bit like the Library License project from the Harvard Library Innovation Lab.] They’re now doing Thanks for Ungluing, which aggregates free ebooks and lets you download them for free or pay the author for them. [Plug: John Sundman’s Biodigital is available there. You definitely should pay him for it. It’s worth it.]

Marge Avery, ex of MIT Press and now at MIT Library, says the traditional barriers to access are price, time, and format. There are projects pushing on each of these. But she mainly wants to talk about format. “What does content want to be?” Academic authors often have research that won’t fit in the book. Univ presses are experimenting with shorter formats (MIT Press Bits), new content (Stanford Briefs), and publishing developing, unfinished content that will become a book (U of Minnesota). Cambridge Univ Press published The History Manifesto, created start to finish in four months and available as Open Access as well as for a reasonable price; they’ve sold as many copies as free copies have been downloaded, which is great.

William Gunn of Mendeley talks about next-gen search. “Search doesn’t work.” Paul Kedrosky was looking for a dishwasher and all he found was spam. (Dishwashers, and how Google Eats Its Own Tail). Likewise, Jeff Atwood of StackExchange: “Trouble in the House of Google.” And we have the same problems in scholarly work. E.g., Google Scholar includes this as a scholarly work. Instead, we should be favoring push over pull, as at Mendeley. Use behavior analysis, etc. “There’s a lot of room for improvement” in search. He shows a Mendeley search. It auto-suggests keyword terms and then lets you facet.

Jenn Farthing talks about JSTOR’s “Register and Read” program. JSTOR has 150M content accesses per year, 9,000 institutions, 2,000 archival journals, 27,000 books. Register and Read: Free limited access for everyone. Piloted with 76 journals. Up to 3 free reads over a two-week period. Now there are about 1,600 journals, and 2M users who have checked out 3.5M articles. (The journals are opted in to the program by their publishers.)

Q&A

Q: What have you learned in the course of these projects?

ReadCube: UI counts. Tracking onsite behavior is therefore important. Iterate and track.

Marge: It’d be good to have more metrics outside of sales. The circ of the article is what’s really of importance to the scholar.

Mendeley: Even more attention to the social relationships among the contributors and readers.

JSTOR: You can’t search for only content that’s available to you through Register and Read. We’re adding that.

Unglue.it: We started out as a crowdfunding platform for free books. We didn’t realize how broken the supply chain is. Putting a book on a Web site isn’t enough. If we were doing it again, we’d begin with what we’re doing now, Thanks for Ungluing, gathering all the free books we can find.

Q: How to make it easier for beginners?

Unglue.it: The publishing process is designed to prevent people from doing stuff with ebooks. That’s a big barrier to the adoption of ebooks.

ReadCube: Not every reader needs a reference manager, etc.

Q: Even beginning students need articles to interoperate.

Q: When ReadCube negotiates prices with publishers, how does it go?

ReadCube: In our pilots, we haven’t seen any decline in the PDF sales. Also, the cost per download in a site license is a different sort of thing than a $6/day cost. A site license remains the most cost-effective way of acquiring access, so what we’re doing doesn’t compete with those licenses.

Q: The problem with the pay model is that you can’t appraise the value of the article until you’ve paid. Many pay models don’t recognize that barrier.

ReadCube: All the publishers have agreed to first-page previews, often to seeing the diagrams. We also show a blurred out version of the pages that gives you a sense of the structure of the article. It remains a risk, of course.

Q: What’s your advice for large legacy publishers?

ReadCube: There’s a lot of room to explore different ways of brokering access — different potential payers, doing quick pilots, etc.

Mendeley: Make sure your revenue model is in line with your mission, as Geoff said in the opening session.

Marge: Distinguish the content from the container. People will pay for the container for convenience. People will pay for a book in Kindle format, while the content can be left open.

Mendeley: Reading a PDF is of human value, but computing across multiple articles is of emerging value. So we should be getting past the single reader business model.

JSTOR: Single article sales have not gone down because of Register and Read. They’re different users.

Unglue.it: Traditional publishers should cut their cost basis. They have fancy offices in expensive locations. They need to start thinking about how they can cut the cost of what they do.


[siu] Geoff Bilder on getting the scholarly cyberinfrastructure right

I’m at “Shaking It Up: How to thrive in — and change — the research ecosystem,” an event co-sponsored by Digital Science, Microsoft, Harvard, and MIT. (I think, based on little, that Digital Science is the primary instigator.) I’m late to the opening talk, by Geoff Bilder [twitter:gbilder], dir. of strategic initiatives at CrossRef. He’s also deeply involved in Orcid, an authority-base that provides a stable identity reference for scholars. He refers to Orcid’s principles as the basis of this talk.

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Geoff Bilder

Geoff is going through what he thinks is required for organizations contributing to a scholarly cyberinfrastructure. I missed the first few.


It should transcend disciplines and other boundaries.


An organization needs a living will: what will happen to it when it ends? That means there should be formal incentives to fulfill the mission and wind down.


Sustainability: time-limited funds should be used only for time-limited activities. You need other sources for sustaining fundamental operations. The goal should be to generate surplus so the organization isn’t brittle and can respond to new opportunities. There should be a contingency fund sufficient to keep it going for 12 months. This builds trust in the organization.

The revenues ought to be based on services, not on data. You certainly shouldn’t raise money by doing things that are against your mission.


But, he says, people are still wary about establishing a single organization that is central and worldwide. So people need the insurance of forkability. Make sure the data is open (within the limits of privacy) and is available in practical ways. “If we turn evil, you can take the code and the data and start up your own system. If you can bring the community with you, you will win.” It also helps to have a patent non-assertion so no one can tie it up.


He presents a version of Maslow’s hierarchy of needs for a scholarly cyberinfrastructure: tools, safety, esteem, self-actualization.


He ends by pointing to Building 20, MIT’s temporary building for WW II researchers. It produced lots of great results but little infrastructure. “We have to stop asking researchers how to fund infrastructure.” They aren’t particularly good at it. We need to get people who are good at it and are eager to fund a research infrastructure independent of funding individual research projects.


November 18, 2014

[2b2k] Four things to learn in a learning commons

Last night I got to give a talk at a public meeting of the Gloucester Education Foundation and the Gloucester Public School District. We talked about learning commons and libraries. It was awesome to see the way that community comports itself towards its teachers, students and librarians, and how engaged they are. Truly exceptional.

Afterwards there were comments by Richard Safier (superintendent), Deborah Kelsey (director of the Sawyer Free Library), and Samantha Whitney (librarian and teacher at the high school), and then a brief workshop at the attendees’ tables. The attendees included about a dozen of Samantha’s students; you can see in the liveliness of her students and the great questions they asked that Samantha is an inspiring teacher.

I came out of these conversations thinking that if my charter were to establish a “learning commons” in a school library, I’d ask what sort of learning I want to be modeled in that space. I think I’d be looking for four characteristics:

1. Students need to learn the basics (and beyond!) of online literacy: not just how to use the tools, but, more important, how to think critically in the networked age. Many schools are recognizing that, thankfully. But it’s something that probably will be done socially as often as not: “Can I trust a site?” is a question probably best asked of a network.

2. Old-school critical thinking was often thought of as learning how to sift claims so that only that which is worth believing makes it through. Those skills are of course still valuable, but on a network we are almost always left with contradictory piles of sifted beliefs. Sometimes we need to dispute those other beliefs because they are simply wrong. But on a network we also need to learn to live with difference — and to appreciate difference — more than ever. So, I would take learning to love difference to be an essential skill.

3. It kills me that most people have never clicked on a Wikipedia “Talk” page to see the discussion that resulted in the article they’re reading. If we’re going to get through this thing — life together on this planet — we’re really going to have to learn to be more meta-aware about what we read and encounter online. The old trick of authority was to erase any signs of what produced the authoritative declaration. We can’t afford that any more. We need always to be aware that what we come across resulted from humans and human processes.

4. We can’t rely on individual brains. We need brains that are networked with other brains. Those networks can be smarter than any of their individual members, but only if the participants learn how to let the group make them all smarter instead of stupider.

I am not sure how these skills can be taught — excellent educators and the communities that support them, like those I met last night, are in a better position to figure it out — but they are four skills that seem highly congruent with a networked learning commons.


November 10, 2014

The invisible change in the news

The first chapter of Dan Gillmor’s 2004 book, We the Media [pdf], is a terrific, brief history of journalism from the US Colonial era up through Sept. 11. And in 2014 it has a different lesson to teach us as well.

Ten years later, what Dan pointed to as extraordinary is now common as air. It’s now so ordinary that it sometimes leads us to underestimate the magnitude of the change we’ve already lived through.

For example, he ends that first chapter with stories from Sept. 11. News coming through email lists before it could be delivered by the mainstream press. People on the scene posting photos they took. A blood drive organized online. A little-known Afghan-American writer offering wise advice that got circulated across the Net and worked its way up into the mainstream. Personal stories that conveyed the scene better than objective reporting could.

This was novel enough that Dan presented it as worth listening to as a portent. The fact that in 2014 it seems old hat is the proof that in 2004 Dan’s vision was acute.

Think about how you heard about, say, Obama’s Net Neutrality statement today and where you went to hear it explained and contextualized, and then tell me that the Net hasn’t already transformed the news, and that much of the most important, vibrant journalism is now being accomplished by citizens in ways that we now take for granted.


November 7, 2014

The Blogosphere lives!

There was a reason we used that ridiculous word to refer to the loose collection of bloggers: Back in the early 2000s, we were reading one another’s blogs, responding to them, and linking to them. Blogging was a conversational form made solid by links.

It’s time to get back to that. At least for me.

Tweeting’s great. I love Twitter. And I love the weird conversational form it enables. But it’s better at building social relationships than relationships among ideas: I can easily follow you at Twitter, but not ideas: hashtags (lord love ‘em) let us do a little tracing of tweetful interactions, but they’re really more for searching than for creating dense clouds of ideas in relation.

Facebook’s great. I mean, not so much for me, but I understand it’s popular with the kids today. But there again the nodes are social more than ideas. Yes, you can certainly get a thread going, but a thread turns the post into the container.

Medium.com’s great. I actually like it a lot, and publish there occasionally. But why? I don’t use it for its fluent writing experience; these days I prefer more rough-hewn tools such as Markdown. Medium is a comfortable way of publishing: posting something in an attractive form in the hope that strangers will read it.

I’m in favor of all of these modalities: the shout-out of tweets, the social threading of Facebook, the old-school-made-new publishing of Medium.com. But…

Blogs are — or at least were — different. They are an individual’s place for speaking out loud, but the relationships that form around them are based on links among posts, not on social networks that link people. I’m all for social networks, but we also need networks of ideas.

Bloggy networks of ideas turn into social links, and that’s a good thing. An entire generation of my friendships formed because we were blogging back and forth, developing and critiquing one another’s ideas, applying them to our own circumstances and frameworks, and doing so respectfully and in good humor. But the nodes and the links in the blogosphere form around topics and ideas, not social relationships.

Blogging was a blogosphere because our writing and our links were open to everyone and had as much persistence as the fluid world of domains enables. You could start at one person’s blog post, click to another, on to another, following an idea around the world…and being predisposed to come back to any of the blogs that helped you understand something in a new way. Every link in every blog tangibly made our shared world richer and more stimulating.

Appropriately, I’m not the only person who misses the ol’ sphere. I came across a post by my blogging friend Thomas Vander Wal. That led me to a post on “Short-form Blogging” by Marco Arment. He links to the always-interesting and often awesome Gina Trapani, who also suggests the benefits of thinking about blogging when you have an idea that’s about the size of a paragraph. Jason Snell, too. Jason points to a post by Andy Baio that exults about what could be a resurgence of blogging. In the comments section, Seth Godin raises his hand: “I never left.”

Isn’t it obvious how awesome that is? A clickable web of ideas! What a concept!

So, I’m happy to see all the talk about shorter posts as a way of lowering the hurdle to blogging. But my main interest is not in getting more paragraph-length ideas out into the world, although that’s good. It’s especially good if those paragraphs are in response to other paragraphs, because I’m mainly interested in seeing webs of posts emerge around ideas… ideas like the value blogs can bring to an ecosystem that already has Twitter, Facebook, and Medium in it.

Blogs aren’t for everyone, but they are for some of us. Blogs aren’t for everything, but they sure as hell are for something.

(And now I have to decide whether I should cross-post this at Medium.com. And tweet out a link.)


November 4, 2014

[2b2k] Thinking needs Making

Here’s the opening of my latest column at KMWorld:

A couple of weeks ago, I joined other former students of Joseph P. Fell at Bucknell University for a weekend honoring him. Although he is a philosophy professor, the takeaway for many of us was a reminder that while hands are useless without minds to guide them, minds need hands more deeply than we usually think.

Philosophy is not the only discipline that needs this reminder. Almost anyone—it’s important to maintain the exceptions—who is trying to understand a topic would do well by holding something in her hands, or, better, building something with them…

More here…


October 27, 2014

[liveblog] Christine Borgman

Christine Borgman, chair of Info Studies at UCLA, and author of the essential Scholarship in the Digital Age, is giving a talk on The Knowledge Infrastructure of Astronomy. Her new book is Big Data, Little Data, No Data: Scholarship in the Networked World, but you’ll have to wait until January. (And please note that precisely because this is a well-organized talk with clearly marked sections, it comes across as choppy in these notes.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

Her new book draws on 15 yrs of studying various disciplines and 7-8 years focusing on astronomy as a discipline. It’s framed around the change to more data-intensive research across the sciences and humanities, plus the policy push for open access to content and to data. (The team site.)

They’ve been looking at four groups.

The world thinks that astronomy and genomics have figured out how to do data-intensive science, she says. But scientists in these groups know that it’s not that straightforward. Christine’s group is trying to learn from these groups and help them learn from one another.

Knowledge Infrastructures are “string and baling wire.” Pieces pulled together. The new layered on top of the old.

The first English scientific journal began almost 350 yrs ago. (Philosophical Transactions of the Royal Society.) We no longer think of the research object as a journal but as a set of articles, objects, and data. People don’t have a simple answer to what is their data. The raw files? The tables of data? When they’re told to share their data, they’re not sure what data is meant. “Even in astronomy we don’t have a single, crisp idea of what are our data.”

It’s very hard to find and organize all the archives of data. Even establishing a chronology is difficult. E.g., “Yes, that project has that date stamp but it’s really a transfer from a prior project twenty years older than that.” It’s hard to map the pieces.

Seamless Astronomy: ADS All Sky Survey, mapping data onto the sky. Also, they’re trying to integrate various link mappings, e.g., Chandra, NED, Simbad, WorldWide Telescope, Arxiv.org, VizieR, Aladin. But mapping these collections doesn’t tell you why they’re being linked, what they have in common, or what are their differences. What kind of science is being accomplished by making those relationships? Christine hopes her project will help explain this, although not everyone will agree with the explanations.

Her group wants to draw some maps and models: “A Christmas Tree of Links!” She shows a variety of maps, possible ways of organizing the field. E.g., one from 5 yrs ago clusters services, repositories, archives and publishers. Another scheme: Publications, Objects, Observations; the connection between pubs (citations) and observations is the most loosely coupled. “The trend we’re seeing is that astronomy is making considerable progress in tying together the observations, publications, and data.” “Within astronomy, you’ve built many more pieces of your infrastructure than any other field we’ve looked at.”

She calls out Chris Erdmann [sitting immediately in front of me] as a leader in trying to get data curation and custodianship taken up by libraries. Others are worrying about bit-rot and other issues.

Astronomy is committed to open access, but the resource commitments are uneven.

Strengths of astronomy:

  • Collaboration and openness.

  • International coordination.

  • Long term value of data.

  • Agreed standards.

  • Shared resources.

Gaps of astronomy:

  • Investment in data stewardship: varies by mission and by type of research. E.g., space-based missions get more investment than the ground-based ones. (An audience member says that that’s because the space research was so expensive that there was more insistence on making the data public and usable. A lively discussion ensues…)

  • The access to data varies.

  • Curation of tools and technologies.

  • International coordination. Should we curate existing data? But you don’t get funding for using existing data. So, invest in getting new data from new instruments??


Christine ends with some provocative questions about openness. What does it mean exactly? What does it get us?


Q&A


Q: As soon as you move out of the Solar System to celestial astronomy, all the standards change.


A: When it takes ten years to build an instrument, it forces you to make early decisions about standards. But when you’re deploying sensors in lakes, you don’t always note that this is #127 that Eric put the tinfoil on top of because it wasn’t working well. Or people use Google Docs and don’t even label the rows and columns because all the readers know what they mean. That makes going back to it much harder. “Making it useful for yourself is hard enough.” It’s harder still to make it useful for someone in 5 yrs, and harder still to make it useful for an unknown scientist in another country speaking another language and maybe from another discipline.


Q: You have to put a data management plan into every proposal, but you can’t make it a budget item… [There is a lively discussion of which funders reasonably fund this]


Q: Why does Europe fund ground-based data better than the US does?


A: [audience] Because of Riccardo Giacconi.

A: [Christine] We need to better fund the invisible workforce that makes science work. We’re trying to cast a light on this invisible infrastructure.


October 13, 2014

Library as starting point

A new report on Ithaka S+R‘s annual survey of libraries suggests that library directors are committed to libraries being the starting place for their users’ research, but that the users are not in agreement. This calls into question the expenditures libraries make to achieve that goal. (Hat tip to Carl Straumsheim and Peter Suber.)

The question is good. My own opinion is that libraries should let Google do what it’s good at, while they focus on what they’re good at. And libraries are very good indeed at particular ways of discovery. The goal should be to get the mix right, not to make sure that libraries are the starting point for their communities’ research.

The Ithaka S+R survey found that “The vast majority of the academic library directors…continued to agree strongly with the statement: ‘It is strategically important that my library be seen by its users as the first place they go to discover scholarly content.'” But the survey showed that only about half think that that’s happening. This gap can be taken as room for improvement, or as a sign that the aspiration is wrongheaded.

The survey confirms that many libraries have responded to this by moving to a single-search-box strategy, mimicking Google. You just type in a couple of words about what you’re looking for and it searches across every type of item and every type of system for managing those items: images, archival files, books, maps, museum artifacts, faculty biographies, syllabi, databases, biological specimens… Just like Google. That’s the dream, anyway.

I am not sold on it. Roger Schonfeld, the report’s author, cites Lorcan Dempsey, who is always worth listening to:

Lorcan Dempsey has been outspoken in emphasizing that much of “discovery happens elsewhere” relative to the academic library, and that libraries should assume a more “inside-out” posture in which they attempt to reveal more effectively their distinctive institutional assets.

Yes. There’s no reason to think that libraries are going to be as good at indexing diverse materials as Google et al. are. So, libraries should make it easier for the search engines to do their job. Library platforms can help. So can Schema.org as a way of enriching HTML pages about library items so that the search engines can easily recognize the library item metadata.
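
To make that concrete, here is a minimal sketch, in Python, of the kind of Schema.org description a library catalog page could emit as JSON-LD for search engines to pick up. The item, the library name, and the field choices are invented for illustration; they aren’t drawn from any particular library platform or from LibraryCloud.

    import json

    # A hypothetical catalog record for one library item (all values are
    # placeholders, not a real record).
    record = {
        "title": "An Example History of Everything",
        "author": "A. N. Author",
        "isbn": "9780000000002",
    }

    # Express the record in Schema.org vocabulary. Serialized as JSON-LD,
    # this could be embedded in the item's HTML page inside a
    # <script type="application/ld+json"> tag so that search engines can
    # read the item metadata directly.
    jsonld = {
        "@context": "https://schema.org",
        "@type": "Book",
        "name": record["title"],
        "author": {"@type": "Person", "name": record["author"]},
        "isbn": record["isbn"],
        "offers": {
            "@type": "Offer",
            "availability": "https://schema.org/InStock",
            "offeredBy": {"@type": "Library", "name": "Example University Library"},
        },
    }

    print(json.dumps(jsonld, indent=2))

The major search engines consume structured data of this general shape, which is the sense in which library platforms can make it easier for the search engines to do their job.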

But assuming that libraries shouldn’t outsource all of their users’ searches, then what would best serve their communities? This is especially complicated since the survey reveals that preference for the library web site vs. the open Web varies based on just about everything: institution, discipline, role, experience, and whether you’re exploring something new or keeping up with your field. This leads Roger to provocatively ask:

While academic communities are understood as institutionally affiliated, what would it entail to think about the discovery needs of users throughout their lifecycle? And what would it mean to think about all the different search boxes and user login screens across publishes [sic] and platforms as somehow connected, rather than as now almost entirely fragmented? …Libraries might find that a less institutionally-driven approach to their discovery role would counterintuitively make their contributions more relevant.

I’m not sure I agree, in part because I’m not entirely sure what Roger is suggesting. If it’s that libraries should offer an experience that integrates all the sources scholars consult throughout the lifecycle of their projects or themselves, then I’d be happy to see experiments, but I’m skeptical. Libraries generally have not shown themselves to be particularly adept at creating grand, innovative online user experiences. And why should they be? It’s a skill rarely exhibited anywhere on the Web.

If designing great Web experiences is not a traditional strength of research libraries, the networked expertise of their communities is. So is the library’s uncompromised commitment to serving its community’s interests. A discovery system that learns from its community can do something that Google cannot: it can find connections that the community has discerned, and it can return results that are particularly relevant to that community. (It can make those connections available to the search engines also.)
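
As a very rough sketch of what “learning from its community” could look like in code (the signals, weights, and field names below are invented for illustration; this is not how Stacklife or LibraryCloud actually works):

    # Toy re-ranking of catalog search results by local community signals.
    # Weights and record fields are assumptions made for this sketch.

    def community_score(item):
        return (
            1.0 * item.get("checkouts", 0)
            + 5.0 * item.get("syllabus_appearances", 0)
            + 10.0 * item.get("librarian_recommendations", 0)
        )

    def rerank(results):
        """Order text-search hits by how heavily this community has used them."""
        return sorted(results, key=community_score, reverse=True)

    hits = [
        {"title": "Widely Assigned Classic", "checkouts": 40, "syllabus_appearances": 3},
        {"title": "Obscure but Beloved", "checkouts": 3, "librarian_recommendations": 4},
        {"title": "Rarely Touched", "checkouts": 1},
    ]

    for item in rerank(hits):
        print(item["title"])

A general-purpose search engine can’t see checkouts, syllabus appearances, or librarian recommendations, which is exactly why signals like these are something only the library can add.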

This is one of the principles behind the Stacklife project that came out of the Harvard Library Innovation Lab that until recently I co-directed. It’s one of the principles of the Harvard LibraryCloud platform that makes Stacklife possible. It’s one of the reasons I’ve been touting a technically dumb cross-library measure of usage. These are all straightforward ways to start to record and use information about the items the community has voted for with its library cards.

And that is just the start. Anonymization and opt-in could provide rich sets of connections and patterns of usage. Imagine we could know what works librarians recommend in response to questions. Imagine if we knew which works were being clustered around which topics in lib guides and syllabi. (Support the Open Syllabus Project!) Imagine if we knew which books were being put on lists by faculty and students. Imagine if we knew what books were on participating faculty members’ shelves. Imagine we could learn which works the community thinks are awesome. Imagine if we could do this across institutions so that communities could learn from one another. Imagine we could do this with data structures that support wildly messily linked sources, many of them within the library but many of them outside of it. (Support Linked Data!)

Let the Googles and Bings do what they do better than any sane person could have imagined twenty years ago. Let libraries do what they have been doing better than anyone else for centuries: supporting and learning from networked communities of scholars, librarians, and students who together are a profound source of wisdom and working insight.


September 9, 2014

[liveblog] Robin Sproul, ABC News

I’m at a Shorenstein Center brownbag talk. Robin Sproul is talking about journalism in the changing media landscape. She’s been Washington Bureau Chief of ABC News for 20 years, and now is VP of Public Affairs for that network. (Her last name rhymes with “owl,” by the way.)

NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

This is an “incredibly exciting time,” Robin begins. The pace has been fast and is only getting faster. E.g., David Plouffe says that Obama’s digital infrastructure from 2008 didn’t apply in 2012, and the 2012 infrastructure won’t apply in 2016.

A few years ago, news media were worried about how to reach you wherever you are. Now it’s how to reach you in a way that makes you want to pay attention. “How do we get inside your brain, through the firehose, in a way that will break through everything you’re exposed to?” We’re all adapting to getting more and smaller bites. “Digital natives swerve differently than the older generation, from one topic to another.”

In this social media world, “each of us is a news reporter.” Half of people on social networks repost news videos, and one in ten post news videos they’ve recorded themselves.

David Carr: “If the Vietnam War brought war into our living rooms,” now “it’s at our fingertips.” But we see the world through narrow straws. We’re not going back from that, but we need to get better at curating them and making sure they’re accurate and contextualized.

On the positive side: “I was so moved by the Ferguson coverage: how a community of color, in this case, could tell their own story” and connect with people around the country, in real time. “The people of that community were ahead of the cables.” Sure, some of the info was wrong, but we could watch people bearing witness to history. Also, the Ray Rice video has stimulated conversations on domestic violence around the country. How do you tap into these discussions? Sort them? Curate them? “A lot of it comes down to curation.”

People are not coming into ABCnews.com directly. “They’re coming in through side doors.” “And the big stories we do compete with the animal stories, the recipes,” etc. “We see a place like Buzzfeed,” which now has 200 employees. They’ve hired someone from The Guardian, they’ve been reporting from the ground in Liberia. Yahoo’s hired Katie Couric. Vice. Michael Isikoff. Reddit’s AMAs. Fusion has just hired Tim Pool from Vice Media. “All of these things are competing in a rapidly shifting universe.”

ABC is creating partnerships, e.g., with Facebook for identifying what’s trending, which is then discussed on their Sunday morning show. [See Ethan Zuckerman’s recent post on why Twitter is a better news source than Facebook. Also, John McDermott’s Why Facebook is for ice buckets, Twitter is for Ferguson. Both suggest that ABC maybe should rethink its source for what’s trending.] ABC uses various software platforms to evaluate video coming in of breaking news. “We need help, so we’re partnering.” ABC now has a social desk. “During a big story, we activate a team…and they are in a deep deep dive of social media,” vetting it for accuracy and providing context. “Six in ten Americans watch videos online, and half of those watch news videos. This is a big growth area.” But, she adds ruefully, it’s “not a big revenue growth area.”

So, ABC is tapping into social media, but is wary of those who have their own aims. E.g., Whitehouse.gov does reports that look like news reports but are not. The photos the White House hands out never show a yawning, exhausted, or weeping president. “I joke with the press secretary that we’re one step away from North Korea.” We’re heading toward each candidate having their own network, in effect, a closed circle.

Q&A

Q: You’ve described the fragmentation in the supply of news. But how about the demand? “Are you getting a sense of your audience?” What circulates? What sticks? What sets the agenda? etc.

A: We do a lot of audience research. Our mainstream TV shows attract an aging audience. No matter what we do, they’re not bringing in a new audience. Pretty much the older the audience, the more they like hard news. We’ve changed the pace of the Sunday shows. We think people want a broader lens from us. “We’re not as focused on horse race politics, or what John McCain thinks of every single issue. We’re open to new voices.”

Q: The future of health reporting? I’m disappointed with what I see. E.g., there’s little regard for the optics of how we’re treating Ebola, particularly with regard to the physicians getting treated back in the US.

A: Dr. Richard Besser, who ran the CDC, is at ABC and has reported from Africa. But it’s hit or miss. We did cover the white doctors getting the serum, but it’s hard to find in the firehose.

Q: How do you balance quality news with short attention spans?

A: For the Sunday shows we’ve tried to maintain a balance.

Q: Does ABC try to maintain its own pace, or go with the new pace? If the latter, how do you maintain quality?

A: We used to make a ton of money producing the news and could afford to go anywhere. Now we have the same number of hours of news on TV, but the audiences are shrinking and we’re trying to grow. It’s not as deep. It’s broader. We will want to find you…but you have to be willing to be found.

Q: How do you think about the segmentation of your news audience? And what are the differences in what you provide them?

A: We know which of our shows skew older (Sunday shows), or more female (Good Morning America), etc. We don’t want to leave any segment behind. We want our White House reporter to go into depth, but he also has to tweet all day, does a Yahoo show, does radio, accompanies Nancy Pelosi on a fast-walk, etc.

Q: Some of your audiences matter from a business point of view. But historically ABC has tried to supply news to policy makers etc. The 11-year-old kids may give you large numbers, but…

A: When we sit in our morning editorial meetings we never say that we will do a story because the 18-24 year olds are interested. The need to know, what we think is important, drives decisions. We used to be programming for “people like us” who want the news. Then we started getting thousands of “nutjob” emails. [I’m doing a bad job paraphrasing here. Sorry] Sam Donaldson was shocked. “This digital age has made us much more aware of all those different audiences.” We’re in more contact with our audience now. E.g., when the media were accused of pulling their punches in the run-up to the Iraq War, we’d get pushback saying we’re anti-American. Before, we didn’t get these waves.

Q: A fast-walk with Nancy Pelosi, really?

A: [laughs] It got a lot of hits.

Q: Can you elaborate on your audience polling? And do people not watch negative stories?

A: A Harvard prof told me last night that s/he doesn’t like watching the news any more because it’s just so depressing. But that’s a fact of life. Anyway, it used to be that the posted comments were very negative, and sometimes from really crazy people. We learned to put that into perspective. Now Twitter provides instant feedback. We’re slammed whatever we do. So we try to come up with a mix. For World News Tonight, people with different backgrounds talk about the stories, how they play off the story before it, etc. Recently we’ve been criticized for doing too much “news you can use”, how to live your life, etc. We want to give people news that isn’t always just terrible. There’s a lot of negative stuff that we’re exposed to now. [Again, sorry for the choppiness. My fault.]

Q: TV has always had the challenge of the limited time for news. With digital, how are you linking the on-screen reporting with the in-depth online stories, given the cutbacks? How do you avoid answering every tweet? [Not sure I got that right.]

A: We have a mix of products.

Q: What is the number one barrier to investigative journalism? How have new media changed that balance?

A: There are investigative reporting non-profits springing up all the time. There’s an appetite from the user for it. All of the major news orgs still have units doing it. But what is the business model? How much money do you apportion to each vertical in your news division? It’s driven by the appetite for it, how much money you have, what you’re taking it away from. Investigative is a growth industry.

Q: I was a spokesperson for Healthcare.gov and was interested in your comments about this Administration being more closed to the media.

A: They are more closed than prior admins. There’s always a message. When the President went out the other day to talk, no other admin members were allowed to talk with the media. I think it’s a response to how many inquiries are coming and how out of control info is, and how hard it is to respond to inaccuracies that pop up. The Obama administration has clamped down a little more because of that.

Q: You can think of Vice in Liberia as an example of boutique reporting: they do that one story. But ABC News has to cover everything. Do you see a viable future for us?

A: As we go further down this path and it becomes more overwhelming, there are some brands that stand for something. Curation is what we do well. Cyclically, people will go back to these brands.

Q: In the last couple of years, there’s a trend away from narrative to Gestalt. They were called news stories because they had a plot. Recent news events like Ferguson or Gaza were more like just random things. Very little story.

A: Twitter is a tool, a platform. It’s not really driving stories. Maybe it’s the nature of the stories. It’ll be interesting to see how social media are used by the candidates in the 2016 campaign.

Q: Why splitting the nightly news anchor from …

A: Traditionally the evening news anchor has been the chief anchor for the network. George Stephanopoulos anchors GMA, which makes most of the money. So no one wanted to move him to the evening news. And the evening news has become a little less relevant to our network. There’s been a diminishment in the stature of the evening news anchor. And it plays to GS’s strengths.

