It’s not you, it’s me...


In a recent survey, over two-thirds of respondents reported that they have had an article rejected because it was outside the 'aims and scope' of the journal concerned. Mmmmmm, really? Stephen I'Anson discusses.

Introduction

Intelligent and methodical are two words that one might quite reasonably apply to people engaged in the practice of medicine (HCPs) or the research and development of pharmaceuticals (Pharma). It is therefore perhaps surprising that, in a recent survey, more than two-thirds of respondents reported that their work had been rejected for publication because it had failed the very first test, i.e. it was outside the aims and scope of the journal they had selected.

Whether an article is an opinion piece informed by years of experience, or a report of a randomised controlled trial that may have taken an age to conduct and write up, surely 'aims and scope' ought to be an essential criterion when drafting a short-list of target journals for publication? Is the light emitted by journals such as The Lancet and the BMJ so bright that authors, like moths, find it irresistible? Or could journal editors be using 'outside of aims and scope' expediently, as an excuse to save the time and trouble of having to deal with counter-arguments against the real reasons for rejecting an article?

 



"...could journal editors be using 'outside of aims and scope' expediently; as an excuse..."

 



Real reasons for rejection?

A scattering of respondents reported being told that their articles were rejected because of 'space constraints', because their article was 'of insufficient priority or interest', or, a real slap in the face, because the journal staff were 'too busy'. Blunt and no doubt frustrating, but at least these appear to be reasons rather than excuses.

More than twice as many Pharma respondents as HCPs (42.2% v 17.2%) reported "Adds little or nothing to the body of evidence" as the reason given for rejection. That is probably true of most research outcomes, but surely all trial results add something to the body of evidence? Whether they are paid for with Pharma, Government or charity money, surely all clinical trials should be written up, peer reviewed and published to assure us that nothing is being hidden.

Pressure to publish

Globally, the legislation is complex and fractured by territory but, for example, the FDA Amendments Act (2007) essentially mandates the reporting of results of unpublished clinical trials within one year of trial completion (see www.clinicaltrials.gov for the details). However, these reports may comprise no more than a table of values that are not subject to peer review. Surely a properly peer-reviewed and published article is preferable? And when time is pressing, there is all the more reason for authors to consider carefully and coldly where to submit a manuscript so that it has a good chance of being accepted for publication.

Motivation to publish

More research is needed to properly understand what motivates authors and publishers. In the meantime, a look at the policies of the two journals I mentioned earlier may help inform decision-making.

The BMJ states, under the headline 'What does the BMJ publish?':

The BMJ's mission is to lead the debate on health, and to engage, inform, and stimulate doctors, researchers and other health professionals in ways that will improve outcomes for patients. We aim to help doctors to make better decisions.



"...surely all clinical trials should be written up, peer reviewed and published to assure us that nothing is being hidden."

 


To achieve these aims we publish original research articles, review and educational articles, news, letters, investigative journalism, and articles commenting on the clinical, scientific, social, political, and economic factors affecting health.

Laudable aims and plenty of scope to work within there, but the guidance goes on to say, 'We can publish only about 7% of the 7000-8000 articles we receive each year...' and 'We reject about two thirds of all submissions without sending them for external peer review...', without further explanation. Given that the BMJ publishes all research articles open access, space is not the limiting factor, and the £3000 open access fee it levies surely means that it can afford to fund the workflow. So why reject 93% of articles? Not, it seems, because they are outside the aims and scope of the BMJ, or because they fail the much tougher test of peer review.

The Lancet is more specific in its guidance for authors, which includes the following:

The Lancet prioritises reports of original research that are likely to change clinical practice or thinking about a disease...We offer fast-track peer review and publication of randomised controlled trials that we judge of importance to practice or research...

Some authors may be guilty of hubris and submit reports that are clearly not going to change clinical practice or be judged 'important', but over two-thirds?

A recent article in the BMJ (BMJ 2013;346:f2865) is just the latest in a litany that criticises the pharmaceutical industry and calls for the publication of negative clinical trial results and data from abandoned clinical trials. I cannot comment on any specific accusations made against individuals or companies of deliberate suppression of negative trial results. However, I certainly hope that, if and when this has occurred, it is a practice at one extreme. At the other extreme are the results of ground-breaking studies, very likely to change clinical practice or thinking about a disease, that journal editors dream of receiving for publication. In the very broad space in between are the majority of articles, which report everything from 'quite interesting' to 'equivocal' to 'not very interesting' research results. The vast majority of these are not going to be cited much, if at all, and so will inevitably adversely affect a journal's rating, howsoever measured (Impact Factor, individual article metrics, etc.).

What is clear is that the higher a journal's rejection rate, the more attractive it appears to authors. I recently asked the Managing Editor of a well-known traditional (pay-wall) pharmacotherapy journal what they were doing to combat the threat to their business model of open-access publications and the response was 'we are working on upping our rejection rates'.

New approach

So, at a time when the pressure on authors and sponsors to publish any and every scrap of clinical trial data is at an all-time high, the inclination of many journals to publish anything but the most 'important' research results is at a low. In a world where rejection rates and the Impact Factor seem to be regarded by some as ends in themselves, there is clearly a need for a new approach to publishing: one that is open access but does not compromise on peer review; one that demands full disclosure of vested interests and roots out plagiarism; one that aims to publish rather than reject articles; one that is supportive and will help authors to improve how they write up reports of their research; and one that publishes with no pay-wall barrier and no restricted access.

Addendum

The Survey Method

Two cohorts were invited by email to complete a survey on the reasons journals had given them for rejecting their articles. Cohort 1 (Pharma): 1108 people involved in Publication Planning in or for the Pharmaceutical Industry. Of these, 23% opened the email, 33% of whom clicked through to the survey, and 62% of these completed the survey (n=51).

Cohort 2 (HCPs): 682 Published Authors, mainly Healthcare Professionals and Academics. Of these, 29% opened the email, 28% of whom clicked through to the survey, and 65% of these completed the survey (n=35).
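For readers who want to check the arithmetic, the completion counts follow from multiplying each cohort size by the three stages of the response funnel in turn. The short Python sketch below is purely illustrative (the funnel() helper is mine, not part of the survey), and it assumes the percentages quoted above are rounded to the nearest whole number, which accounts for the one-respondent discrepancies.

# Back-of-the-envelope check of the survey response funnel.
# Cohort sizes and percentages are taken from the addendum above;
# small differences from the reported n values are due to rounding.

def funnel(invited, opened_pct, clicked_pct, completed_pct):
    opened = invited * opened_pct / 100        # opened the email
    clicked = opened * clicked_pct / 100       # clicked through to the survey
    completed = clicked * completed_pct / 100  # completed the survey
    return round(completed)

print(funnel(1108, 23, 33, 62))  # Cohort 1 (Pharma): ~52, reported n=51
print(funnel(682, 29, 28, 65))   # Cohort 2 (HCPs): ~36, reported n=35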

Headlines:

Twice as many HCPs as Pharma reported:

1. Over half of the articles they submitted were rejected by their 1st choice journal (17.1% v 7.8%)

2. They had to wait more than 6 months to be informed (22.9% v 11.6%)

Both cohorts reported that the most common reason given for rejection was that the article was outside the Aims and Scope of the journal concerned (67.6% [Pharma] v 72.4% [HCPs]). "Adds little or nothing to the body of evidence" was the reason given for rejection by more than twice as many Pharma respondents as HCPs (42.2% v 17.2%).

Specifically, several respondents in the Pharma cohort reported that Pharma authorship and/or sponsorship was given as the reason for rejection. 'Of insufficient priority or interest', 'space constraints' and 'too busy' were cited by both cohorts.

 

 



About the author:

Stephen I'Anson held Sales and Marketing roles in the Pharmaceutical Industry throughout the 1980s with Napp Laboratories and Janssen Pharmaceuticals. He joined Medical Tribune as General Manager in the early 1990s, from where he bought a number of journals to set up Hayward Medical Communications with three colleagues. He sold his share in the business towards the end of the decade and went on to create and launch Drugs in Context, initially as an independent monograph series, relaunched as a gold open-access, international, peer-reviewed journal in 2012.
