Joho the Blog

November 26, 2014

Welcome to the open Net!

I wanted to play Tim Berners-Lee’s 1999 interview with Terry Gross on WHYY’s Fresh Air. Here’s how that experience went:

  • I find a link to it on a SlashDot discussion page.

  • The link goes to a text page that has links to Real Audio files encoded either for 28.8 or ISDN.

  • I download the ISDN version.

  • It’s a RAM (Real Audio) file that my Mac (Yosemite) cannot play.

  • I look for an updated version on the Fresh Air site. It has no way of searching, so I click through the archives to get to the Sept. 16, 1999 page.

  • It’s a 404 page-not-found page.

  • I search for a way to play an old RAM file.

  • The top hit takes me to Real Audio’s cloud service, which offers me 2 gigabytes of free storage. I decline.

  • I pause for ten silent seconds in amazement that the Real Audio company still exists. Plus it owns the domain “real.com.”

  • I download a copy of RealPlayerSP from CNET, thus probably also downloading a copy of MacKeeper. Thanks, CNET!

  • I open the Real Player converter and Apple tells me I don’t have permission because I didn’t buy it through Apple’s TSA clearance center. Thanks, Apple!

  • I do the control-click thang to open it anyway. It gives me a warning about unsupported file formats that I don’t understand.

  • I set System Preferences > Security so that I am allowed to open any software I want. Apple tells me I am degrading the security of my system by not giving Apple a cut of every software purchase. Thanks, Apple!

  • I drag in the RAM file. It has no visible effect.

  • I use the converter’s upload menu, but this converter produced by Real doesn’t recognize Real Audio files. Thanks, Real Audio!

  • I download and install the Real Audio Cloud app. When I open it, it immediately scours my disk looking for video files. I didn’t ask it to do that and I don’t know what it’s doing with that info. A quick check shows that it too can’t play a RAM file. I uninstall it as quickly as I can.

  • I download VLC, my favorite audio player. (It’s a new Mac and I’m still loading it with my preferred software.)

  • Apple lets me open it, but only after warning me that I shouldn’t trust it because it comes from [dum dum dum] The Internet. The scary scary Internet. Come to the warm, white plastic bosom of the App Store, it murmurs.

  • I drag the file into VLC. It fails, but it does me the favor of telling me why: It’s unable to connect to WHYY’s Real Audio server. Yup, this isn’t a media file, but a tiny file that sets up a connection between my computer and a server WHYY abandoned years ago. I should have remembered that that’s how Real worked. Actually, no, I shouldn’t have had to remember that. I’m just embarrassed that I did not. Also, I should have checked the size of the original Fresh Air file that I downloaded.

  • A search for “Tim Berners-Lee Fresh Air 1999” immediately turns up an NPR page that says the audio is no longer available.

    It’s no longer available because in 1999 Real Audio solved a problem for media companies: install an RA server and it’ll handle the messy details of sending audio to RA players across the Net. It seemed like a reasonable approach. But it was proprietary and so it failed, taking Fresh Air’s archives with it. Could and should Fresh Air have converted its files before it pulled the plug on the Real Audio server? Yeah, probably, but who knows what the contractual and technical situation was.
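Concretely, the “tiny file” described above is all a .ram file ever was: a line of text naming a streaming server, not audio. A minimal sketch — the filename and server URL here are hypothetical, invented for illustration:

```shell
# Recreate a hypothetical .ram file of the sort WHYY served in 1999.
# The actual audio lived on WHYY's server; this file only points at it.
cat > freshair-19990916.ram <<'EOF'
pnm://audio.whyy.org/freshair/19990916_freshair.ra
EOF

# The "download" is a few dozen bytes, not megabytes of audio --
# which is the giveaway worth checking for:
wc -c freshair-19990916.ram
cat freshair-19990916.ram
```

Once the server named inside the file goes away, the file is a pointer to nothing, which is exactly what VLC reported.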

    By not following the example set by Tim Berners-Lee — open protocols, open standards, open hearts — this bit of history has been lost. In this case, it was an interview about TBL’s invention, thus confirming that irony remains the strongest force in the universe.

    Be the first to comment »

    November 24, 2014

    [siu] Accessing content

    Alex Hodgson of ReadCube is leading a panel called “Accessing Content: New Thinking and New Business Models for Accessing Research Literature” at the Shaking It Up conference.

    NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

    Robert McGrath is from ReadCube, a platform for managing references. You import your pdfs, read them with their enhanced reader, and can annotate them and discover new content. You can click on references in the PDF and go directly to the sources. If you hit a pay wall, they provide a set of options, including a temporary “checkout” of the article for $6. Academic libraries can set up a fund to pay for such access.

    Eric Hellman talks about Unglue.it. Everyone in the book supply chain wants a percentage. But free e-books break the system because there are no percentages to take. “Even libraries hate free ebooks.” So, how do you give access to Oral Literature in Africa in Africa? Unglue.it ran a campaign, raised money, and liberated it. How do you get free textbooks into circulation? Teachers don’t know what’s out there. Unglue.it is creating MARC records for these free books to make it easy for libraries to include them. The novel Zero Sum Game is a great book that the author put out under a Creative Commons license, but how do you find out that it’s available? Likewise for Barbie: A Computer Engineer, which is a legal derivative of a much worse book. Unglue.it has over 1,000 Creative Commons licensed books in their collection. One of Unglue.it’s projects: an author pledges to make the book available for free after a revenue target has been met. [Great! A bit like the Library License project from the Harvard Library Innovation Lab.] They’re now doing Thanks for Ungluing, which aggregates free ebooks and lets you download them for free or pay the author for them. [Plug: John Sundman’s Biodigital is available there. You definitely should pay him for it. It’s worth it.]

    Marge Avery, ex of MIT Press and now at MIT Library, says the traditional barriers to access are price, time, and format. There are projects pushing on each of these. But she mainly wants to talk about format. “What does content want to be?” Academic authors often have research that won’t fit in the book. Univ presses are experimenting with shorter formats (MIT Press Bits), new content (Stanford Briefs), and publishing developing, unfinished content that will become a book (U of Minnesota). Cambridge Univ Press published The History Manifesto, created start to finish in four months and available as Open Access as well as for a reasonable price; they’ve sold as many copies as free copies have been downloaded, which is great.

    William Gunn of Mendeley talks about next-gen search. “Search doesn’t work.” Paul Kedrosky was looking for a dishwasher and all he found was spam. (Dishwashers, and how Google Eats Its Own Tail). Likewise, Jeff Atwood of StackExchange: “Trouble in the House of Google.” And we have the same problems in scholarly work. E.g., Google Scholar includes this as a scholarly work. Instead, we should be favoring push over pull, as at Mendeley. Use behavior analysis, etc. “There’s a lot of room for improvement” in search. He shows a Mendeley search. It auto-suggests keyword terms and then lets you facet.

    Jenn Farthing talks about JSTOR’s “Register and Read” program. JSTOR has 150M content accesses per year, 9,000 institutions, 2,000 archival journals, 27,000 books. Register and Read: Free limited access for everyone. Piloted with 76 journals. Up to 3 free reads over a two-week period. Now there are about 1,600 journals, and 2M users who have checked out 3.5M articles. (The journals are opted in to the program by their publishers.)

    Q&A

    Q: What have you learned in the course of these projects?

    ReadCube: UI counts. Tracking onsite behavior is therefore important. Iterate and track.

    Marge: It’d be good to have more metrics outside of sales. The circ of the article is what’s really of importance to the scholar.

    Mendeley: Even more attention to the social relationships among the contributors and readers.

    JSTOR: You can’t search for only content that’s available to you through Register and Read. We’re adding that.

    Unglue.it started out as a crowdfunding platform for free books. We didn’t realize how broken the supply chain is. Putting a book on a Web site isn’t enough. If we were doing it again, we’d begin with what we’re doing now, Thanks for Ungluing, gathering all the free books we can find.

    Q: How to make it easier for beginners?

    Unglue.it: The publishing process is designed to prevent people from doing stuff with ebooks. That’s a big barrier to the adoption of ebooks.

    ReadCube: Not every reader needs a reference manager, etc.

    Q: Even beginning students need articles to interoperate.

    Q: When ReadCube negotiates prices with publishers, how does it go?

    ReadCube: In our pilots, we haven’t seen any decline in the PDF sales. Also, the cost per download in a site license is a different sort of thing than a $6/day cost. A site license remains the most cost-effective way of acquiring access, so what we’re doing doesn’t compete with those licenses.

    Q: The problem with the pay model is that you can’t appraise the value of the article until you’ve paid. Many pay models don’t recognize that barrier.

    ReadCube: All the publishers have agreed to first-page previews, often to seeing the diagrams. We also show a blurred out version of the pages that gives you a sense of the structure of the article. It remains a risk, of course.

    Q: What’s your advice for large legacy publishers?

    ReadCube: There’s a lot of room to explore different ways of brokering access — different potential payers, doing quick pilots, etc.

    Mendeley: Make sure your revenue model is in line with your mission, as Geoff said in the opening session.

    Marge: Distinguish the content from the container. People will pay for the container for convenience. People will pay for a book in Kindle format, while the content can be left open.

    Mendeley: Reading a PDF is of human value, but computing across multiple articles is of emerging value. So we should be getting past the single reader business model.

    JSTOR: Single article sales have not gone down because of Register and Read. They’re different users.

    Unglue.it: Traditional publishers should cut their cost basis. They have fancy offices in expensive locations. They need to start thinking about how they can cut the cost of what they do.

    Be the first to comment »

    [siu] Panel: Capturing the research lifecycle

    It’s the first panel of the morning at Shaking It Up. Six men from six companies give brief overviews of their products. The session is led by Courtney Soderberg from the Center for Open Science, which sounds great. [Six panelists means that I won’t be able to keep up. Or keep straight who is who, since there are no name plates. So, I’ll just distinguish them by referring to them as “Another White Guy,” ‘k?]

    NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

    Riffyn: “Manufacturing-grade quality in the R&D process.” This can easily double R&D productivity “because you stop missing those false negatives.” It starts with design

    Github: “GitHub is a place where people do software development together.” 10M people. 15M software repositories. He points to Zenodo, a repository for research outputs. Open source communities are better at collaborating than most academic research communities are. The principles of open source can be applied to private projects as well. A key principle: everything has a URL. Also, the processes should be “lock-free” so they can be done in parallel and the decision about branching can be made later.

    Texas Advanced Computing Center: Agave is a Science-as-a-Service platform. It’s a platform that provides lots of services as well as APIs. “It’s SalesForce for science.”

    CERN is partnering with GitHub. “GitHub meets Zenodo.” But it also exports the software into INSPIRE, which links the paper with the software. [This might be the INSPIRE he’s referring to. Sorry. I know I should know this.]

    Overleaf was inspired by Etherpad, the collaborative editor. But Etherpad doesn’t do figures or equations. Overleaf does that and much more.

    Publiscize helps researchers translate their work into terms that a broader audience can understand. He sees three audiences: intradisciplinary, interdisciplinary, and the public. The site helps scientists create a version readable by the public, and helps them disseminate it through social networks.

    Q&A

    Some white guys provided answers I couldn’t quite hear to questions I couldn’t hear. They all seem to favor openness, standards, users owning their own data, and interoperability.

    [They turned on the PA, so now I can hear. Yay. I missed the first couple of questions.]

    Github: Libraries have uploaded 100,000 open access books, all for free. “Expect the unexpected. That happens a lot.” “Academics have been among the most abusive of our platform…in the best possible way.”

    Zenodo: The most unusual users are the ones who want to install a copy at their local institutions. “We’re happy to help them fork off Zenodo.”

    Q: Where do you see physical libraries fitting in?

    AWG: We keep track of some people’s libraries.

    AWG: People sometimes accidentally delete their entire company’s repos. We can get it back for you easily if you do.

    AWG: Zenodo works with Chris Erdmann at Harvard Library.

    AWG: We work with FigShare and others.

    AWG: We can provide standard templates for Overleaf so, for example, your grad students’ theses can be managed easily.

    AWG: We don’t do anything particular with libraries, but libraries are great.

    Courtney: We’re working with ARL on a shared notification system.

    Q: Mr. GitHub (Arfon Smith), you said in your comments that reproducibility is a workflow issue?

    GitHub: You get reproducibility as a by-product of using tools like the ones represented on this panel. [The other panelists agree. Reproducibility should be just part of the infrastructure that you don’t have to think about.]

    5 Comments »

    [siu] Geoff Bilder on getting the scholarly cyberinfrastructure right

    I’m at “Shaking It Up: How to thrive in — and change — the research ecosystem,” an event co-sponsored by Digital Science, Microsoft, Harvard, and MIT. (I think, based on little, that Digital Science is the primary instigator.) I’m late to the opening talk, by Geoff Bilder [twitter:gbilder], dir. of strategic initiatives at CrossRef. He’s also deeply involved in Orcid, an authority-base that provides a stable identity reference for scholars. He refers to Orcid’s principles as the basis of this talk.

    NOTE: Live-blogging. Getting things wrong. Missing points. Omitting key information. Introducing artificial choppiness. Over-emphasizing small matters. Paraphrasing badly. Not running a spellpchecker. Mangling other people’s ideas and words. You are warned, people.

    Geoff Bilder

    Geoff is going through what he thinks is required for organizations contributing to a scholarly cyberinfrastructure. I missed the first few.


    It should transcend disciplines and other boundaries.


    An organization needs a living will: what will happen to it when it ends? That means there should be formal incentives to fulfill the mission and wind down.


    Sustainability: time-limited funds should be used only for time-limited activities. You need other sources for sustaining fundamental operations. The goal should be to generate surplus so the organization isn’t brittle and can respond to new opportunities. There should be a contingency fund sufficient to keep it going for 12 months. This builds trust in the organization.

    The revenues ought to be based on services, not on data. You certainly shouldn’t raise money by doing things that are against your mission.


    But, he says, people are still wary about establishing a single organization that is central and worldwide. So people need the insurance of forkability. Make sure the data is open (within the limits of privacy) and is available in practical ways. “If we turn evil, you can take the code and the data and start up your own system. If you can bring the community with you, you will win.” It also helps to have a patent non-assertion so no one can tie it up.


    He presents a version of Maslow’s hierarchy of needs for a scholarly cyberinfrastructure: tools, safety, esteem, self-actualization.


    He ends by pointing to Building 20, MIT’s temporary building for WW II researchers. It produced lots of great results but little infrastructure. “We have to stop asking researchers how to fund infrastructure.” They aren’t particularly good at it. We need to get people who are good at it and are eager to fund a research infrastructure independent of funding individual research projects.

    3 Comments »

    November 21, 2014

    APIs are magic

    (This is cross-posted at Medium.)

    Dave Winer recalls a post of his from 2007 about an API that he’s now revived:

    “Because Twitter has a public API that allows anyone to add a feature, and because the NY Times offers its content as a set of feeds, I was able to whip up a connection between the two in a few hours. That’s the power of open APIs.”

    Ah, the power of APIs! They’re a deep magic that draws upon five skills of the Web as Mage:

    First, an API typically matters because some organization has decided to flip the default: it assumes data should be public unless there’s a reason to keep it private.

    Second, an API works because it provides a standard, or at least well-documented, way for an application to request that data.

    Third, open APIs tend to be “RESTful,” which means that they work using the normal Web way of proceeding (i.e., Web protocols). All you or your program have to do is go to the API’s site using a standard URL of the sort you enter in a browser. The site comes back not with a Web page but with data. For example, click on this URL (or paste it into your browser) and you’ll get data from Wikipedia’s API: http://en.wikipedia.org/w/api.php?action=query&titles=San_Francisco&prop=images&imlimit=20&format=jsonfm. (This is from the Wikipedia API tutorial.)
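The same Wikipedia request can be made from a few lines of code, which is the whole point of a RESTful API: a program builds an ordinary URL and gets back data instead of a page. The JSON response below is abridged and hypothetical, shown only to illustrate the shape of what comes back:

```python
import json
from urllib.parse import urlencode

# Build the Wikipedia API request shown above, the way a program would:
# base endpoint plus query parameters. (format=json asks for plain
# machine-readable JSON; the jsonfm in the example URL is a
# human-friendly, HTML-formatted variant for browsers.)
params = {
    "action": "query",
    "titles": "San_Francisco",
    "prop": "images",
    "imlimit": 20,
    "format": "json",
}
url = "https://en.wikipedia.org/w/api.php?" + urlencode(params)

# Fetching that URL (e.g. with urllib.request.urlopen) returns JSON
# shaped roughly like this abridged, made-up response:
raw = ('{"query": {"pages": {"49728": {"title": "San Francisco", '
       '"images": [{"title": "File:Golden Gate Bridge.jpg"}]}}}}')
data = json.loads(raw)

# The pages dict is keyed by page ID; grab the single page returned.
page = next(iter(data["query"]["pages"].values()))
print(page["title"])               # the article the data describes
print(page["images"][0]["title"])  # first image record for the article
```

No special client library, no authentication handshake for public data: just a URL and JSON, which is why a hookup like Dave Winer’s can be whipped up in hours.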

    Fourth, you need people anywhere on the planet who have ideas about how that data can be made more useful or delightful. (cf. Dave Winer.)

    Fifth, you need a worldwide access system that makes the results of that work available to everyone on the Internet.

    In short, APIs show the power of a connective infrastructure populated by ingenuity and generosity.

    In shorter shortness: APIs embody the very best of the Web.

    Be the first to comment »

    November 18, 2014

    [2b2k] Four things to learn in a learning commons

    Last night I got to give a talk at a public meeting of the Gloucester Education Foundation and the Gloucester Public School District. We talked about learning commons and libraries. It was awesome to see the way that community comports itself towards its teachers, students and librarians, and how engaged they are. Truly exceptional.

    Afterwards there were comments by Richard Safier (superintendent), Deborah Kelsey (director of the Sawyer Free Library), and Samantha Whitney (librarian and teacher at the high school), and then a brief workshop at the attendees tables. The attendees included about a dozen of Samantha’s students; you can see in the liveliness of her students and the great questions they asked that Samantha is an inspiring teacher.

    I came out of these conversations thinking that if my charter were to establish a “learning commons” in a school library, I’d ask what sort of learning I want to be modeled in that space. I think I’d be looking for four characteristics:

    1. Students need to learn the basics (and beyond!) of online literacy: not just how to use the tools, but, more important, how to think critically in the networked age. Many schools are recognizing that, thankfully. But it’s something that probably will be done socially as often as not: “Can I trust a site?” is a question probably best asked of a network.

    2. Old-school critical thinking was often thought of as learning how to sift claims so that only that which is worth believing makes it through. Those skills are of course still valuable, but on a network we are almost always left with contradictory piles of sifted beliefs. Sometimes we need to dispute those other beliefs because they are simply wrong. But on a network we also need to learn to live with difference — and to appreciate difference — more than ever. So, I would take learning to love difference to be an essential skill.

    It kills me that most people have never clicked on a Wikipedia “Talk” page to see the discussion that resulted in the article they’re reading. If we’re going to get through this thing — life together on this planet — we’re really going to have to learn to be more meta-aware about what we read and encounter online. The old trick of authority was to erase any signs of what produced the authoritative declaration. We can’t afford that any more. We need always to be aware that what we come across resulted from humans and human processes.

    4. We can’t rely on individual brains. We need brains that are networked with other brains. Those networks can be smarter than any of their individual members, but only if the participants learn how to let the group make them all smarter instead of stupider.

    I am not sure how these skills can be taught — excellent educators and the communities that support them, like those I met last night, are in a better position to figure it out — but they are four skills that seem highly congruent with a networked learning commons.

    1 Comment »

    November 12, 2014

    “Netflix is a data hog” and other myths of Net Neutrality

    Medium Backchannel just posted my piece on six myths about Net Neutrality. Here’s the opening:

    Netflix is a data hog

    “…data hogs like Netflix might need to bear some of the cost of handling heavy traffic.” — ABCnews

    That’s like saying your water utility is a water hog because you take long showers and over-water your lawn.

    Streaming a high-def movie does take a whole bunch of bits. But if you hadn’t gone ahead and clicked on Taken 2 [SPOILER: she’s taken again], Netflix would not have sent those bits over the Internet.

    So Netflix isn’t a data hog. You are.

    You’re a data hog.

    No, you’re not.

    Some people use the Internet ten minutes a day to check their email. Some people leave their computers on 24/7 to download entire video libraries. None of them are data hogs.

    How can I say this so unequivocally…?

    more over at Medium.

    Be the first to comment »

    November 10, 2014

    The invisible change in the news

    The first chapter of Dan Gillmor’s 2004 book, We the Media [pdf], is a terrific, brief history of journalism from the US Colonial era up through Sept. 11. And in 2014 it has a different lesson to teach us as well.

    Ten years later, what Dan pointed to as extraordinary is now common as air. It’s now so ordinary that it sometimes leads us to underestimate the magnitude of the change we’ve already lived through.

    For example, he ends that first chapter with stories from Sept. 11. News coming through email lists before it could be delivered by the mainstream press. People on the scene posting photos they took. A blood drive organized online. A little-known Afghan-American writer offering wise advice that got circulated across the Net and worked its way up into the mainstream. Personal stories that conveyed the scene better than objective reporting could.

    This was novel enough that Dan presented it as worth listening to as a portent. The fact that in 2014 it seems old hat is the proof that in 2004 Dan’s vision was acute.

    Think about how you heard about, say, Obama’s Net Neutrality statement today and where you went to hear it explained and contextualized, and then tell me that the Net hasn’t already transformed the news, and that much of the most important, vibrant journalism is now being accomplished by citizens in ways that we now take for granted.

    Be the first to comment »

    November 8, 2014

    Italy’s Declaration of Internet Rights

    An ad hoc study commission of the Italian Chamber of Deputies has published a draft “Declaration of Internet Rights” that should be cause for cheers and cheer. It’s currently open for public comment at the Civici Platform — which by itself is pretty cool.

    TechPresident explains that this came about

    thanks to the initiative of the presidency of the Chamber of Deputies, a dedicated Committee of experts and members of the Parliament from the Committee on Internet Rights and Duties. The bill aims to inform the debate about online civil liberties and fundamental freedoms during the Italian semester of the European Union presidency…

    I like the document a lot. A lot a lot. The principles are based on a genuine understanding of the value that the Net brings and what enables the Net to bring that value. This is crucial because so often those who seek to govern the Net do so because they see it primarily as a threat to order or a challenge to their power.

    The Declaration focuses on the rights of individuals, taking the implicit stance (or so I read it) that the threat to those rights comes not only from Internet malefactors and giant Internet conglomerates run amok, but also from those who seek to govern the Net. It includes as rights not only access to the Net, but access to education about how to use the Net, a point too often forgotten. (Not by Eszter Hargittai, though, who has done the seminal work in showing that Internet skills are not as easily acquired as we often assume.)

    Since my larynx seizes whenever it’s faced with the prospect of talking about governing the Internet, I personally wish the document would be even more direct about the dangers of trying to “fix” the Internet. For example, it could recommend principles such as these to our Internet Overlords:

    • Every effort will be made to enable the governance of the Net bottom up and by the edges.

    • Controls and regulations should only be introduced when less coercive and restrictive attempts have demonstrably and repeatedly failed.

    • Controls and regulations should be created as far up the stack as possible when they are necessary. (Or is that a bad idea?)

    • The advice of engineers who are not beholden to particular constituencies or entities shall be consulted and heavily regarded. (Not sure how to state this.)

    But that’s probably just me. Far more important, this draft Declaration of Internet Rights is an important reminder to the Internet’s wannabe regulators that the Net is a powerful force for human good that should be helped to flourish, not merely a negative force that needs to be restrained.

    For more information, I strongly recommend the TechPresident article by Fabio Chiusi.

    3 Comments »

    November 7, 2014

    The Blogosphere lives!

    There was a reason we used that ridiculous word to refer to the loose collection of bloggers: Back in the early 2000s, we were reading one another’s blogs, responding to them, and linking to them. Blogging was a conversational form made solid by links.

    It’s time to get back to that. At least for me.

    Tweeting’s great. I love Twitter. And I love the weird conversational form it enables. But it’s better at building social relationships than relationships among ideas: I can easily follow you at Twitter, but not ideas. Hashtags (lord love ‘em) let us do a little tracing of tweetful interactions, but they’re really more for searching than for creating dense clouds of ideas in relation.

    Facebook’s great. I mean, not so much for me, but I understand it’s popular with the kids today. But there again the nodes are social more than ideas. Yes, you can certainly get a thread going, but a thread turns the post into the container.

    Medium.com’s great. I actually like it a lot, and publish there occasionally. But why? I don’t use it for its fluent writing experience; these days I prefer more rough-hewn tools such as Markdown. Medium is a comfortable way of publishing: posting something in an attractive form in the hope that strangers will read it.

    I’m in favor of all of these modalities: the shout-out of tweets, the social threading of Facebook, the old-school-made-new publishing of Medium.com. But…

    Blogs are — or at least were — different. They are an individual’s place for speaking out loud, but the relationships that form around them were based on links among posts, not social networks that link among people. I’m all for social networks, but we also need networks of ideas.

    Bloggy networks of ideas turn into social links, and that’s a good thing. An entire generation of my friendships formed because we were blogging back and forth, developing and critiquing one another’s ideas, applying them to our own circumstances and frameworks, and doing so respectfully and in good humor. But the nodes and the links in the blogosphere form around topics and ideas, not social relationships.

    Blogging was a blogosphere because our writing and our links were open to everyone and had as much persistence as the fluid world of domains enables. You could start at one person’s blog post, click to another, on to another, following an idea around the world…and being predisposed to come back to any of the blogs that helped you understand something in a new way. Every link in every blog tangibly made our shared world richer and more stimulating.

    Appropriately, I’m not the only person who misses the ol’ sphere. I came across a post by my blogging friend Thomas Vander Wal. That led me to a post on “Short-form Blogging” by Marco Arment. He links to the always-interesting and often awesome Gina Trapani, who also suggests the benefits of thinking about blogging when you have an idea that’s about the size of a paragraph. Jason Snell, too. Jason points to a post by Andy Baio that exults about what could be a resurgence of blogging. In the comments section, Seth Godin raises his hand: “I never left.”

    Isn’t it obvious how awesome that is? A clickable web of ideas! What a concept!

    So, I’m happy to see all the talk about shorter posts as a way of lowering the hurdle to blogging. My main interest is not in getting more paragraph-length ideas out in the world, although that’s good. But it’s especially good if those paragraphs are in response to other paragraphs, because I’m mainly interested in seeing webs of posts emerge around ideas… ideas like the value blogs can bring to an ecosystem that has Twitter, Facebook, and Medium in it already.

    Blogs aren’t for everyone, but they are for some of us. Blogs aren’t for everything, but they sure as hell are for something.

    (And now I have to decide whether I should cross-post this at Medium.com. And tweet out a link.)

    8 Comments »
