
Peer review review

I have a friend who is in charge of managing the peer review process at some serious scientific journals. It’s a tough job, requiring a set of skills that includes dealing with sometimes ornery people, managing multiple schedules, and maintaining expertise in the fields in which she works. She makes a good case for peer review, and for the journals that rely on it. Peer review has value and costs money, she says. So journals have to charge fees to support the peer review process, and they have to hold onto the rights at least long enough to recover their costs.

I recognize the value of peer review. It not only directs our attention to worthwhile research, it is part of an editorial process that improves articles before they’re published. But peer review doesn’t scale. There’s so much research being done. A lot of it is good work but isn’t important enough to merit the investment in a traditional peer review process (including the failed hypotheses that, we were taught in school, were not failures at all). Peer review is valuable, but it’s a choke point required because traditional publishing’s neck is so thin. And it may — may! — turn out that the combination of crowds and quirky individuals can replace peer review’s value. Of course, we’d want the crowd to consist of people with some standing for evaluating the research. And we’d want to be sure that the quirky individuals who buck the crowd are not delusional psychotics. I of course don’t know what the world will look like (or what it does look like, when you come down to it), but I suspect that we’re going to have a mixed research ecology, with peer-reviewed journals making recommendations we trust highly, and a wide variety of other ways of finding the research that matters to us. With PLoS, and arXiv, and Nature’s version of arXiv, and all the rest of it, we’re already well on the way to filling the important niches in this new knowledge ecology.

In fact, peer review generally establishes two characteristics of a piece of work: it was performed properly, and it is important enough to merit throwing some ink at it. Those are important criteria, but hardly the only ones. “This hastily performed work uses a flawed methodology but turns up an interesting fact worth considering” is the type of criterion researchers use when recommending articles to one another. There’s value there, and also in research that has good data but misanalyzes it, research that is promising but incomplete, research that inadvertently demonstrates a flaw in some lab equipment, etc. etc. etc. And, as always, the value is in the long tail of et ceteras. [Tags: peer_review open_access science publishing everything_is_miscellaneous]

6 Responses to “Peer review review”

  1. I call bullshit on the “recouping costs” argument.

    Very few peer reviewers are paid for their work, and when they are it’s a pittance. Recouping those expenses requires *far* less than the outrageous fees that the publishers then charge subscribers.

  2. I designed a peer-to-peer review system once that never took actual shape. It mixed anonymity and reputation points:

    To be reviewed, you have to review others, and if your review matches those of the other 5 anonymous reviewers, you get points; if your article gets good reviews, you get points; and so forth. If you start with good professors and then allow invitations…

    …it should get quite big and allow for a pretty good archive. (A rough sketch of this scoring idea follows the comments.)

    I hope to see it done one day : )

  3. Peer review is overrated and somewhat useless. Besides helping with grammar (which scientific papers have no respect for in the first place), there’s no way a peer can review the whole set of things described in “Materials and Methods,” so a whole lot of unrealistic results get published.

    That wouldn’t be all that bad if it weren’t for the fact that other works rely on the results and methods reported in the work previously published by others.

    And I won’t get into how a lot of scientific publications don’t pay much attention if the authors are well known.

    The only way peer review could be useful is if the whole set of experiments were reproduced and got the exact same results as the original authors did. Of course that is impossible due to time and cost restrictions.

    The whole scientific publishing process is still in the dark ages of print. Maybe it is time for science to move forward and abandon some of its outdated paradigms.

    A database of negative results would be a good starting point. Positivism is *not* the only way to move forward…

  4. “Oh look, David’s written a post about disruptive crowdsourcing innovations which threaten to undermine established quality-control practices supported by vested interests. I wonder what he’ll say?”

    “Maybe that quality control will happen anyway, and it won’t work quite so well but that doesn’t matter, and anyway it’ll be fun finding out, and besides it’s already happening whether we like it or not.”

    “Naah, he said that last time…”

  5. “There’s so much research being done. A lot of it is good work but isn’t important enough to merit the investment in a traditional peer review process”

    But who decides importance? Is there a peer review for peer review? Sounds like it may be more political than peer to peer.

  6. jr, yes, nicely put. The point is that peer review as it exists favors a certain type of importance. E.g., it favors positive results, not the positive establishment that a hypothesis is wrong. Besides, in a miscellaneous world, we cannot tell what is going to turn out to be important.
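For what it’s worth, here is a minimal sketch, in Python, of the reputation-scoring idea described in comment 2. The point values, the 1-to-5 scoring scale, and the “within one point of the other reviewers’ median” rule are all assumptions added for illustration; the comment only says that matching the other anonymous reviewers, and getting good reviews, should earn points.

    # A hypothetical sketch of the scheme in comment 2: reviewers earn a point when
    # their score agrees with the other anonymous reviewers, and the author earns
    # points when the article is reviewed well. All numbers here are assumptions.
    from statistics import median

    def settle_reviews(reviews, points, author, tolerance=1):
        """reviews: list of (reviewer, score 1-5); points: dict mapping name -> points."""
        if len(reviews) < 6:                      # the comment imagines 5 other reviewers
            return
        scores = [score for _, score in reviews]
        for i, (reviewer, score) in enumerate(reviews):
            others = [s for j, s in enumerate(scores) if j != i]
            if abs(score - median(others)) <= tolerance:   # "your review matches the others"
                points[reviewer] = points.get(reviewer, 0) + 1
        if median(scores) >= 4:                   # "your article gets good reviews"
            points[author] = points.get(author, 0) + 5

    # Example: six anonymous reviews of an article by "alice"
    points = {}
    settle_reviews(
        [("r1", 4), ("r2", 5), ("r3", 4), ("r4", 2), ("r5", 4), ("r6", 5)],
        points,
        author="alice",
    )
    print(points)   # the outlier r4 earns nothing; the other reviewers and alice gain points

Running it on six reviews shows the one outlier reviewer earning nothing while the reviewers who agreed, and the author, pick up points; whether that kind of agreement-based scoring rewards good judgment or just conformity is exactly the sort of thing the commenter’s system would have to work out.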

