Before being accepted, a scientific paper has to go through critical assessment by expert reviewers, who judge its suitability for publication. The peer review process is intended to guarantee standards of quality and provide credibility. The highest ranked medical journals only accept a small fraction of papers submitted to them for publication.
Clinical medicine relies on the scientific literature. For a clinical cardiologist like myself, this is a key issue. The procedures I decide to perform and the therapy I recommend are, and should be, based on scientific evidence. For the clinician, evidence based medicine is the holy grail.
But what if scientific studies are flawed? What if evidence based medicine relies on erroneous data? Then, obviously, clinical medicine is broken.
Unfortunately, the scientific community is not free from dishonesty and greed. Scientific fraud is hard to deal with.
Faked data exists and is often difficult to expose. However, we should be able to rely on high quality medical journals when it comes to wrong use of statistics, erroneous calculations and wrong conclusions. These journals should guarantee that papers plagued with such problems are not accepted for publication. But, are they up to the task?
The Statins and the Elderly Saga
One of the most important questions facing clinical cardiology today is when to use statin drugs for individuals who have not been diagnosed with cardiovascular disease (CVD). Clinical trials have shown that these drugs lower mortality and reduce the risk of future cardiovascular events among people with CVD. However, in those without established CVD, the magnitude of effect is less clear and it is uncertain when the benefits of therapy outweigh the risks.
Age itself is independently associated with the risk of CVD, and risk factors such as blood pressure, lipid disorders and diabetes are common among the elderly. However, limited clinical research is available addressing statin treatment among healthy people above 65 years old.
Four months ago I read with interest a paper by Gianluigi Savarese and colleagues, published in the Journal of the American College of Cardiology (JACC), presenting a meta-analysis of the benefits of statins in elderly subjects without established CVD. The authors concluded that their meta-analysis provided “the first time evidence that the benefits of statins on major cardiovascular events extend to people above 65 years old”.
In the paper, the authors came to the conclusion that statins significantly reduce the incidence of myocardial infarction (MI) and stroke, but do not significantly prolong survival in the short term.
Their numbers show that 83 patients have to be treated with statins to prevent one case of MI and 142 patients have to be treated with statins to prevent one stroke, for a mean follow-up of 3.5 years. However, they did not present these numbers in their paper. Instead they claimed that 24 patients needed to be treated for 1 year to prevent one MI and that 42 patients needed to be treated for 1 year to prevent one stroke.
I guess anybody with some statistical knowledge will see that the Number Needed to Treat (NNT) for one year should be a higher number than the NNT for 3.5 years. If one is performing a clinical trial in order to test the effect of a drug, a higher number of patients is needed if the study is planned to run for 1 year than if it is supposed to run for 3.5 years. There will be fewer events in 1 year than in 3.5 years, therefore the NNT for 1 year is a higher number than the NNT for 3.5 years. If you’re still doubtful, read my earlier blog post on the issue.
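To make the scaling concrete, here is a minimal sketch in Python, assuming (as the authors themselves do) that events accrue at a constant rate over the follow-up; the function name is mine, for illustration only:

```python
# Minimal sketch of the NNT arithmetic, under the simplifying assumption
# that the absolute risk reduction accrues at a constant rate over time.
def nnt_per_year(nnt_total, followup_years):
    # Fewer events occur in 1 year than over the full follow-up, so the
    # 1-year absolute risk reduction is smaller and the NNT larger:
    # you must MULTIPLY the overall NNT by the follow-up, not divide.
    return nnt_total * followup_years

# Savarese et al.: NNT of 83 (MI) and 142 (stroke) over 3.5 years.
print(nnt_per_year(83, 3.5))   # -> 290.5 patients for 1 year per MI prevented
print(nnt_per_year(142, 3.5))  # -> 497.0 patients for 1 year per stroke prevented
# Dividing instead (83 / 3.5 = 23.7) exaggerates the effect roughly tenfold.
```

Multiplying gives about 290 and 497 patients per year; the paper's figures of 24 and 42 come from dividing, which moves the error in exactly the wrong direction.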
My Letter to the Editor of JACC
After discovering the error in the paper by Savarese and colleagues I wrote a letter to the editor of JACC, which was recently published. I pointed out that the authors appeared to have made an erroneous calculation when reporting the NNT for a period of one year. Using data from their paper, I calculated that the NNT for one year to prevent one MI or one stroke was approximately ten times higher than reported, assuming the treatment effect is constant over time. In other words, the statin effect was exaggerated by a factor of ten.
I also suggested that the most appropriate approach would have been to report the NNT for the mean follow-up of 3.5 years, instead of calculating the NNT for one year.
A correction was published by the authors in JACC on March 25. It’s not very substantial:
The authors report the number needed to treat (NNT) for the entire mean follow-up of studies was 83 and 142 for myocardial infarction and stroke, respectively. The authors apologize for this error.
Strangely, not a word about the erroneous calculation. The wrong NNT numbers per year are left uncorrected.
Two of the authors of the paper, Gianluigi Savarese and Pasquale Perrone-Filardi, responded to my letter. Their response was published in JACC together with my letter. They agreed that it was more appropriate to report the NNT for the mean follow-up of 3.5 years than to present the NNT for one year. Furthermore, they write:
As previous authors did (citation) using the same formula adopted in our meta-analysis, our aim was to calculate the NNT per year dividing the overall NNT calculated for the entire trial duration by the length of the follow up. We agree with dr Sigurdsson that this may represent an oversimplification, since this calculation assumes that the effect of the treatment (relative risk reduction) is constant over time and that events occur at a constant rate over time.
Oversimplification is not the right word. Simply put, this is a completely wrong approach. But to my surprise, Savarese and colleagues don’t seem to realize or understand it. In fact, dividing when you should multiply will produce numbers that are very far from the truth.
Repeating An Error Won’t Make it Right
Of course I was curious to see the paper cited by Savarese and Perrone-Filardi in their response to my letter. It turns out that it’s a paper published in Circulation 2008; “Lipid Management to Reduce Cardiovascular Risk: A New Strategy is Required“, written by H. Roberto Superko and Spencer King III. Dr. King is a world-famous senior cardiologist, a pioneer in cardiac catheterization and coronary angiography.
These two renowned cardiologists address the NNT from a number of statin trials in primary and secondary prevention. Interestingly, they also calculate the NNT per year. The results are published in Table 2. The table shows that the NNT per year is always a higher number than the NNT for the whole study period (which is always longer than one year). For example, the NNT to prevent one MI in the famous 4S (SSSS) trial was 11.7 for the whole study period, but the NNT per year of the study was 63.2. The NNT for the WOSCOPS trial was 44.2 for the whole study period, but 216.6 per year of the study. In fact, this all looks very reasonable and correct.
But the strange thing is that in the paper’s text, Superko and King use a different approach which is in complete disagreement with the table. They write:
… such as the Scandinavian Simvastatin Survival Study (SSSS), which achieved an NNT of 11.7 and an NNT per year of 2.2
And they do this again and again, as if they never saw the table in their own paper. So, Superko and King are dividing the overall NNT by the length of follow-up in order to find the NNT per year. If they continue to do this they will find that the NNT per six months in the 4S-trial was 1.1. This would mean that only one patient had to be treated for six months to prevent one event. Completely absurd.
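The inconsistency is easy to check with a little arithmetic. A minimal sketch, assuming follow-up durations of roughly 5.4 years for 4S and 4.9 years for WOSCOPS (the durations implied by the ratios in Table 2); the function names are mine, for illustration:

```python
# Sketch comparing the two computations against Table 2 of Superko & King,
# assuming follow-ups of ~5.4 years (4S) and ~4.9 years (WOSCOPS).
def per_year_correct(nnt_total, years):
    return nnt_total * years   # what Table 2 actually shows

def per_year_wrong(nnt_total, years):
    return nnt_total / years   # what the paper's text does

print(round(per_year_correct(11.7, 5.4), 1))   # -> 63.2, matches Table 2 (4S)
print(round(per_year_correct(44.2, 4.9), 1))   # -> 216.6, matches Table 2 (WOSCOPS)
print(round(per_year_wrong(11.7, 5.4), 1))     # -> 2.2, the absurd figure in the text
print(round(per_year_wrong(11.7, 5.4) / 2, 1)) # -> 1.1, the "NNT per six months"
```

Multiplying reproduces the table exactly; dividing reproduces the absurd figures in the text, including an NNT of 1.1 for six months of treatment.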
I wonder: do these renowned scientific authors not understand what they’re talking about, or is this just a slight oversight? Whatever it is, it’s serious and unprofessional. How can a respected peer-reviewed journal such as Circulation publish such rubbish? And, five years later, Savarese and colleagues decide it’s time to repeat the error. And now it’s accepted by JACC, another highly respected medical journal.
Surprisingly, Savarese and Perrone-Filardi don’t acknowledge their error. Instead, they cite the old paper where the same error was made, and believe that’ll make it right.
Furthermore, despite the real NNT being roughly ten times higher than the one they reported (meaning the drug effect is ten times smaller), they show no intention of reconsidering the main conclusion of their study. Unfortunately, their mistake was not picked up by the peer reviewers or the editors of JACC.
I must admit I’m deeply disappointed. The medical community expects much more responsibility from the editorial boards of these medical journals. If the medical literature is full of such errors, is our knowledge worthless? Maybe, in this particular context, lying with numbers, whether it’s done on purpose or not, could be called statinistics instead of statistics. Statinistics could be the new word for badly treated statistics.