When is a new study NOT a new study?
or, how can I game the academic system and make it look like I am publishing so that I don't perish?
Is it possible that some academics game the system?
Most people are familiar with the old academic adage that says:
“Publish or perish.”
Academics are often judged on their publications by two metrics:
Citation Indexing: The impact of their work assessed by how frequently their work is cited (usually measured through citation indexes like the h-index), and
Publication Count: The total number of publications they have published (irrespective of whether they are first, last or co-author).
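For readers unfamiliar with the h-index mentioned above, it is simple enough to sketch in a few lines. This is a minimal illustration of the metric's definition only, not tied to any particular citation database:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has
    at least h papers, each cited at least h times."""
    h = 0
    # Walk the citation counts from most- to least-cited paper.
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank        # this paper still clears the bar
        else:
            break           # every later paper is cited even less
    return h

# An author with papers cited [10, 8, 5, 4, 3] times has h = 4:
# four papers with at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Note that the metric is insensitive to authorship position, which is part of why both it and the raw publication count are so easy to inflate.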
Several studies have shown that when the first (citation indexing) became a popular way of evaluating academic impact, some academics began artificially inflating their impact scores through prolific self-citation (for example, see here, here and here). There have even been calls for popular websites to show how many of an academic’s citations are self-citations that came from their other works (here).
Until today, I was not aware of anyone truly gaming the raw publication count metric. However, after what I found today, I am sorry to say this may have been nothing more than a failure of imagination on my part. While undertaking a small screening review of other works that proposed risk or outcome prediction models for some of the maternity factors that are elements of my own much larger model (stillbirth, maternal death, gestational hypertension, gestational diabetes, pre-eclampsia, and so on), I came across this paper:
and then this paper:
Each of these papers (and at least two others also published in 2016) is essentially the same paper. Sure, an author was removed or added, the text was slightly modified, or the exclusions and inclusions of the cohort were tweaked - but they essentially perform the same study on the same cohort (patients receiving routine pregnancy care at King’s College Hospital and Medway Maritime Hospital between March 2006 and October 2015), all using the same mathematical modelling approach to predict the same outcome.
In each case, and barring some slight alteration to phraseology, they are predicting stillbirth using the same set of patient characteristics and statistical analysis methods (the Yerlikaya paper is on the left, the Mastrodima paper is on the right). Even the internal validation processes were identical.
And again, barring some cohort and wording tweaks, and how the results were presented, everything else seems ostensibly identical. On a full read, any argument that these two papers are a different study or different papers becomes specious, at best.
The cohort-tweaking trick in this case was this: the Mastrodima paper included patients attending for routine pregnancy care between 11+0 and 13+6 weeks gestation. The Yerlikaya paper included those same patients, plus a small number of patients who went on to also attend between 19+0 and 24+6 weeks gestation.
The craziest thing of all is that the two papers I have highlighted above were published back to back in the same edition of the same journal on the same day. Was the editor asleep at the wheel?
At best, an attentive editor would have suggested that the slight differences be incorporated as an extension or counterfactual within a single paper, one that could also have reported both of the methods used to present the predictive performance results.
Was this a one-off or a pattern of behaviour?
I thought this might have simply been some sort of accident or fluke… until I discovered two more papers (here and here) where the same cohort-tweaking trick was again used to get two more ostensibly identical papers published - once again, back to back, in the same journal, looking at the same outcome (trisomy screening) and published on the same day. The cohort tweak between these papers was that the first only included 10,698 mothers of singleton pregnancies, while the second included the exact same 10,698 mothers along with 438 mothers of twin pregnancies - a 4% difference in cohort size. Not content with publishing this study of cfDNA screening twice in the same journal, on this occasion they somehow managed to reframe the same study, test and outcome and see it in print a third time (here).
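The size of that cohort tweak is worth checking with a line of arithmetic, using the figures from the paragraph above:

```python
singletons = 10_698   # mothers of singleton pregnancies (first paper's cohort)
twins = 438           # additional mothers of twin pregnancies (second paper only)

combined = singletons + twins          # 11,136 mothers in the second cohort
new_fraction = twins / combined        # share of the second cohort that is new
print(f"{new_fraction:.1%}")           # → 3.9%
```

In other words, roughly 96% of the second paper's cohort is identical, patient for patient, to the first paper's.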
I found several other instances of the same behaviour between 2016 and 2022 involving various members of the same author group, but for brevity’s sake, I’ll move on at this point.
What I will say is that two authors were common to every paper - Prof Kypros Nicolaides of King’s College London, and Dr Ranjit Akolekar of Medway NHS Foundation Trust, who also provides some services at hospitals within the King’s Health Trust.
The journal
While some later instances of this cohort tweaking pump-and-dump scheme arise in different journals, several examples, including the two discussed above, were published in the journal Ultrasound in Obstetrics and Gynecology.
When I started to look at the editors and editorial board for the journal, how Nicolaides got away with it started to become clear.
One of Nicolaides and Akolekar’s regular co-authors, Liona (L.C.) Poon is an Editor. Further, Nicolaides himself is on the Editorial Board, along with Mar Gil, another of their sometime co-authors.
You do the math…
Nicolaides has an interesting, if somewhat disturbing, history. He was the gynaecologist who treated Mandy Allwood, the UK’s own Octomum, who lost all eight of her babies. He was brought before the General Medical Council after a professional misconduct complaint was laid by Jenny Sabin, the mother of twins who died during a surgery Nicolaides was performing. He was found to have insulted her and made inappropriate comments about her stripy underwear and sex life, to have made disparaging comments about the city of Newcastle, and to have made sexual and misogynistic comments to a support person who was in the room with Ms Sabin. While he was eventually cleared of misconduct with regard to the surgery, it should be patently clear to anyone that making insulting, inappropriate and sexual comments is not something a professional senior doctor should be doing.
And so yet another brick falls from the once solid wall of peer-reviewed research published in prestigious journals.
As if the Covid years have not already by themselves destroyed the credibility of ‘research’.
I have often seen this re-publication with minimal changes in psychology. One group was related to a famous MRI software package, though I cannot remember which one. I remember getting annoyed at so many articles essentially saying the same thing, and I had to go through them all because they sounded relevant based on title and abstract.