Is there a better alternative to the peer review system in scientific research?
Asked by nikipedia, December 11th, 2010
When scientists do experiments, the way we communicate our findings is to write a paper outlining what we did and what we found, and send it to a journal. The journal then asks several experts in that particular field to read the paper and decide if what was done was scientifically sound, and if the conclusions are justified. If the reviewers all agree, your paper is published in the journal, and future scientists will refer to your paper and build off your findings.
I think this process tends to work reasonably well, but it has a couple major flaws. For instance, if you’re asked to review the work of someone you know and like, chances are you’re going to go easier on it. Or, if the findings of the paper you’re reviewing reflect well on your OWN findings, you’ll probably look more favorably on it.
And it goes the other way, too. You’ll probably be a lot harder on something that contradicts your own findings/interpretation, or someone you don’t know or don’t like.
But without peer review, people can just say whatever crazy shit they want, send it to the journal, and it’s now “science.”
Is there a better alternative? I can’t think of one, myself, so I’d love to hear some ideas.
14 Answers
I’ve often wondered this myself. I do think there needs to be a way to anonymize papers before peers read and judge them, and I’d also appreciate having an independent research team in place to replicate some findings (I know that’s not feasible to put into practice, but still).
I like the anonymizing idea. Reviewers are (usually) anonymous, so why shouldn’t the authors be?
I honestly wish the system were used elsewhere.
Can you imagine if political campaign claims had to be peer reviewed before they could be presented as fact? I’m sure there are other places it would help, but that’s the one that jumps out.
But on the question itself, the two additional problems I see are:
- people see “published” as “fact” instead of just another step in review (the hype for and backlash against the arsenic-based life finding is a good example)
- “being published” is such a necessary step in becoming known and respected that I wonder if the research is sometimes compromised
To improve on it, I really think you’d have to make the science more accessible, and I don’t know if that’s possible. Having a wider variety of “peers” review the publications would be a start, but it might be even more time-consuming.
Right now, if you publish a paper in a certain field, there are only so many people who are fully qualified to review all your findings, and you probably know those people by name. If the review process included more “general science” review from other fields, it would probably be more complete and would also facilitate a greater exchange between the isolated silos.
Overall it’s a pretty amazing system with (at least from the outside) a really good track record.
I think it could be redesigned, or at least aided by better documentation and practical instructions. Papers are each written in a different style, with no overarching standard. It would be beneficial to document every method and every step taken on video, and then share that directly with other research teams so they can replicate the research. So much information never makes it into the written paper, and that is exactly the problem. Some will say that certain researchers possess “unique knowledge” that isn’t easily transferable to another team. I’d say that’s bullshit: if it’s not transferable, it is not science.
I’d like this method alongside the paper, so that any other research team can either prove the experiment wrong, give it more support through similar data, or shine a spotlight on another variable that the previous team didn’t find or develop a method for.
Right now it takes way too long to redo experiments and, furthermore, to extend successful ones.
@phoebusg: Interesting answer. Replication of experiments has never really been a problem for us… this actually is one of my favorite things about science (or maybe it’s just our field?)—that it’s so collaborative, we never have issues about people sharing methodology. We’ve hosted students from other labs who wanted to learn techniques from us, and of course it has gone the other way, too.
I’m not crazy about the videotaping idea. We work with human subjects, and many of our experiments have to do with measuring stress responses; videotaping is itself a confounding variable that can act as an added stressor. Not to mention the privacy issues.
@nikipedia You can still videotape the experimental setup and run a demo; it doesn’t have to be recorded during the experiment itself. There are so many variables that can escape the write-up or the exchange of methodology.
I can’t come up with any better system. Even when friends act on their prejudice to allow a sloppy piece of research to be published, the real peer review comes in when all the young Turks out there in the world of science seek to make a name for themselves by picking apart a work they see as clearly deficient. The worst outcome is when reviewers don’t like the author or authors, and unfairly reject a paper for publication. But generally some other journal will pick up any worthy work, and hopefully have it reviewed by experts with more open minds.
It’s far from a perfect system, but it is pretty darned good. It effectively excludes most of the hunk science. And I really don’t know how we’d go about improving it greatly.
@ETpro Hunk science? What journals are you reading? :p
Anonymity on both sides seems like an idea that should already be in place. I wonder if there’d be any way to have multiple teams of reviewers, with shifting memberships. No team would be static, so they wouldn’t be able to get set in their ways and/or biases.
Peer reviews usually include written evaluations/feedback.
The (less powerful) alternative is working with citation counts only, but measured on a global scale. In a way, Google’s PageRank algorithm uses a similar approach.
There’s a concept called the impact factor (http://en.wikipedia.org/wiki/Impact_factor), which is a measure reflecting the average number of citations to articles published in science and social science journals. It is frequently used as a proxy for the relative importance of a journal within its field, with journals that have higher impact factors deemed more important than those with lower ones.
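For anyone curious about the arithmetic, here is a minimal sketch in Python of the standard two-year impact factor calculation described above; the journal figures in the example are made up purely for illustration.

```python
# Minimal sketch of the two-year impact factor calculation.
# Definition: citations received in year Y to items published in years
# Y-1 and Y-2, divided by the number of citable items published in those
# two years. The numbers below are hypothetical, for illustration only.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    if citable_items_prev_two_years == 0:
        raise ValueError("No citable items published in the two-year window")
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 150 articles published in 2008-2009,
# cited 600 times during 2010 -> impact factor of 4.0 for 2010.
print(impact_factor(600, 150))
```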
@augustlan Hunk science is what you get when you mix Junk science with a Vicodin-ES to treat a tooth abscess. :-)
This is why the papers are sent to more than one reviewer. We all have biases, whether conscious or otherwise, but having a paper examined by multiple reviewers would tend to cancel those out.
A reviewer can and should also pass on a review if personal biases would tend to color their judgment. Objectivity is at the center of scientific thinking, and one cannot practice science without it.
Some journals have started to open up the review process. That is, what you say as a reviewer will be published for all to see, and your identity will be un-blinded at the end. (Almost all peer review I know of is anonymous the whole way through, although sometimes a reviewer can “out” himself after final decisions are made.) Publishing the review process gives a greater incentive not to be overly nice to your friends or easy on results that support your own research. The public also gets to evaluate the pool of reviewers journals choose from. It can enforce more civility, too, while of course endangering blunt forthrightness.
Peer review is not limited to those experts asked to review the paper prior to acceptance for publication. All readers with sufficient knowledge on the subject are among the peers who can and frequently review the article after publication. Often lively debates take place in the journal, not unlike discussions on fluther, except that the discussion is restricted to scholarly debate based on sound theoretical and methodological points.