Ivan Oransky and Adam Marcus

Ivan Oransky: Ivan Oransky is currently the vice president and global editorial director of MedPage Today. He teaches medical journalism at New York University’s Science, Health, and Environmental Reporting Program and is the vice president of the Association of Health Care Journalists.

Adam Marcus: Adam Marcus is the managing editor of Gastroenterology & Endoscopy News and Anesthesiology News. His freelance articles have appeared in Science, The Economist, The Christian Science Monitor, The Scientist, Birder’s World, Sciam.com, and many other publications and websites.


Interview by Wudan Yan

Who, if anyone, has the final word in science?

Peer review, the process by which a research study is written up as a manuscript, submitted to a journal, and scrutinized by a team of reviewers who ultimately decide whether the study can be published, has been around for over 300 years. Yet mistakes happen: scientists and reviewers are human. Scientists’ draft comments are sometimes accidentally left in the published manuscript. With the rise of the digital age, there are simply more eyes on published papers to spot shortcomings or errors. This is what is now referred to as post-publication peer review in science: a phenomenon whereby other scientists “review” a paper even after it is published, for instance to assess whether they obtain results consistent with those of the original study. In some cases, these inaccuracies, whether unintentional or intentional, lead to retractions. A study published in 2008 notes that the rate of retractions is increasing and that “editors are taking more responsibility for correcting the scientific record.” Yet “more aggressive means of notification to the scientific community appear to be necessary.”

Our science editor, Wudan Yan, spoke with Ivan Oransky and Adam Marcus, the founders of Retraction Watch, about the importance of post-publication peer review in the scientific community.

WY: When did the peer review process for science emerge? How does it work, and, in what ways does the process fall short?

IO: Peer review has been around for several hundred years, since the Royal Society of London was founded. The process differs a bit by discipline, but the essential idea is that before a study is published, a couple of experts review and critique the manuscript and say whether it needs more work, whether it needs to be completely redone, or whether it’s good enough to be published.

AM: The system of peer review is based on trust: trust in the ability of the reviewers to do a good job and to catch potential problems. But very often, though not always, editors ask authors to recommend people in the field who would be good reviewers. For example, the editors may reason that you do very arcane, obscure science and ask you to name three people who would be good reviewers. What we have seen is that some authors make up their own reviews, pretending to be the experts they’ve recommended. This isn’t a widespread problem, but it does expose a vulnerability in the system. We’ve seen well over a hundred retractions related to that.

IO: And that’s a modern phenomenon, because it’s only made possible by the way peer review is done now, using online electronic systems. Peer review is the worst possible system except for every other one we’ve tried, which means it’s a good filter. If we didn’t have peer review, we would be even worse off. But the idea that peer review is a perfect system, that once a manuscript has made it through peer review it must be pristine, that there couldn’t possibly be any fraud, and that everything in it is true, beggars belief. The fact is, people who do the peer review learn about what you’re working on and can do things like delay returning their review so they can do the experiment themselves and beat you, or submit to another journal.

Instead of relying solely on pre-publication peer review, we believe in transitioning to a system where post-publication peer review is also valued. Understanding that the review process doesn’t stop when the paper gets published is a very subtle — albeit important — thing. I don’t think it’s something the lay public thinks about, but it’s something that a lot of scientists are waking up to and understanding.

WY: When do you think the perception that anything that has passed peer review must be true started to change?

AM: It’s not so much that, but rather people are now witnessing the flaws in the system more than they have before. One of the things we have advocated is that a published paper is not this inviolate, sanctified entity that should be the sole marker of scientific productivity. The more we move away from that notion, the less people will invest in a single paper. This will free up scientists to be more productive in some ways because they won’t have to put all their eggs in a single basket all the time.

IO: This point brings us back to incentives: if your whole career is riding on publishing in big journals with a high impact factor, like Cell, Nature, and Science, you’re going to do everything to publish in them. When someone challenges that work, you will do everything you can to defend it and not be as forthright as you could be about potential errors. But that’s the problem: grants, promotions, and tenure are all incentivized based on papers. So it’s not surprising we have a system that makes it hard to retract or correct the scientific literature.

WY: When did post-publication peer review start to emerge?

IO: It’s been more prominent in the last three years, but post-publication peer review has always been around. It can be as simple as someone reading a paper and sending a note to the editor or the scientist; that’s what science is supposed to be. PubPeer, for example, has been around for perhaps two years, and there are sites that focus on very specific niches. We don’t go a week now without finding something on PubPeer that leads to a correction or retraction. The frequency with which we’re able to find corrections is what’s changing.

WY: Why do you think there’s more conversation about post-publication peer review now?

AM: Certainly due to the digital age. We should also note that post-publication peer review is not a universally embraced phenomenon. There are lots of scientists in the science publishing community who are suspicious of sites like PubPeer.

WY: How long has Retraction Watch been around?

IO: We launched in August 2010.

WY: How did the idea come about?

AM: I used to edit at Anesthesiology News, and we covered a very prominent retraction case involving an anesthesiologist called Scott Reuben. At the time, Ivan was editing SciAm.com, and they covered the case after we did. The two of us had known each other for a while, and we spoke on the phone. Ivan said, “We should run a blog about retractions. When I was at The Scientist, we did something similar and it was pretty interesting.”

WY: What was it like starting out? Was it just you two finding retractions by yourselves?

IO: At the beginning, and this is still true, we set up searches and other ways to find a steady stream of cases. What we didn’t know before we started, and realized quickly, is that there are a lot of scientists who are quite disillusioned by their colleagues’ responses to allegations of fraud and misconduct and are not very happy with the way retractions happen. So we get a lot of tips about papers they think should be retracted, but more often, papers that have already been retracted.

Another thing we didn’t know when we started is that we were coming into this during a retraction boom. Between 2001 and 2010, the year we started, the number of retractions rose from 40 to 400 a year, while the number of papers published in that time rose only 44%; in other words, the rate of retraction per paper grew roughly sevenfold.

We didn’t know if we would have much to say, but we were quickly up to two posts per day, sometimes three, and still not keeping up with all the retractions.

WY: The rise from 40 to 400 retractions a year is pretty significant. What do you think contributed to it?

IO: You definitely can’t ignore the role of technology. Plagiarism-detection software became more commonly used then, so we saw a lot of retractions for plagiarism and for duplication of work, which is sometimes inelegantly referred to as “self-plagiarism.”

The other part of this is that people can simply see more stuff. Even if papers are behind paywalls, more eyes are on them, and those eyes will find things most peer reviewers won’t. Those two factors are related and shouldn’t be underestimated.

AM: There’s also been an awakening among journal editors about the ethical vulnerabilities of their publications. There’s been a huge spike in the last three to four years in editorials from journal editors about the importance of research and publishing integrity, and there was almost nothing before 2000. I think our ideas [at Retraction Watch] have also helped inform that.

IO: Adam and I have been cited more than fifty times, so I think it’s great we’ve been a part of that conversation.

WY: Do you think the number of retractions is declining now because editors recognize their stake and are being more judicious?

AM: It looks as though Thomson Scientific and PubMed are going to record fewer retractions than they did last year and the year before, but not by a significant amount. In looking at those numbers, it’s important to note that they don’t include huge swaths of retractions we’ve covered in major journals. I couldn’t find the sixty retractions from July’s peer-review ring, or the 120 retractions of those fake papers last January or February. So there are at least 180 that don’t appear in either of those databases. If there are 500 reported retractions, it’s hard to say how much higher the actual number is; I don’t know if it’s at 1,000 or closer to 700. We might be seeing a bit of a plateau, but it’s also important to remember that retractions don’t happen the year a paper is published, so it can take a few years before a paper is actually retracted.

IO: Also, retraction isn’t a bad thing. Very often it’s a good thing, because it means science is working as advertised by being self-correcting. We like to see researchers rewarded when they retract their own work because of an honest error that invalidates the paper. Colleagues will reward them with more trust in their future work. Retraction shouldn’t be feared by honest researchers.

WY: How was Retraction Watch initially received by academic scientists, and how has their perception of you changed since its inception?

IO: For some time people didn’t know what to make of us, but that’s true of anything new: you reserve judgement. There have been people who weren’t crazy about what we’re doing, but they remain a small minority. Adam and I are being invited to speak everywhere, so there are definitely people who really want to hear from us and to understand the problems so they can help solve them. We receive huge support from grad students, postdocs, and junior faculty. We also receive significant support from senior faculty, but some of them just like the way things work and don’t quite love outsiders questioning them.

WY: Last year, a Japanese researcher took his own life after what would have been a seminal paper on stem cells was retracted from Nature. Michael Eisen at the University of California, Berkeley, wrote about how retractions can be perceived as “witch hunts” in the scientific community. What did you two make of the researcher’s suicide and of how the STAP case was handled by the research community?

IO: We were deeply horrified by that case, not just by the science, but by the fact that someone had taken his own life. Shortly after the suicide, we wrote a post quoting Louis Brandeis: “sunlight is the best disinfectant.” If we’re reporting the truth, and we’re reporting facts, and checking our facts, then inconvenient facts are not going to stop us from doing what we do. If you deal with these cases with appropriate gravitas and give them the respect they require, then it’s important to expose them. The idea that we would have to sweep some things under the rug in science is why people question science. It’s not the crime, it’s the cover-up: what makes people not trust science is that scientists act like there isn’t any fraud when there is. If scientists were more upfront, admitted there is fraud in the field, and explained what was being done to get rid of it, people would trust the institution more.

WY: Has anyone tried to sue you?

IO: We’ve been threatened with lawsuits. I think we’re up to four or five threats, but nobody has filed suit.

WY: Last December, Retraction Watch received a $400,000 grant from the MacArthur Foundation. What do you hope to do with this grant?

IO: The MacArthur Foundation has been incredibly supportive and understands what the issues are in science. The main output is going to be a database of retractions, which doesn’t exist now. We’d want the database to be hooked up to something like Mendeley. We’re also hiring a reporter and an editor to do more of what we’re doing, and we would like to build a Center for Scientific Integrity to produce long-form features and enterprise work. We also published our first paper, a review of the retraction literature, in the Journal of Microbiology and Biology Education.

WY: How do you think post-publication peer review can better inform the peer review process moving forward?

AM: The publication of a paper should be an evolutionary process rather than a terminal process. Post-publication peer review is a really critical way of achieving that.


Image credit: Selena N. B. H. via Flickr

About The Author

Wudan Yan, Hippo Reads Science Editor