Why bad research is worse than no research
We are drowning in a sea of poor quality observational studies
One of the most influential figures in the history of medical evidence is Doug Altman. Probably his most famous quote, from his 1994 BMJ editorial “The scandal of poor medical research”, is:
“We need less research, better research, and research done for the right reasons.”
Never has this been more true than today.
Philosophically speaking, more information should always trump less information, so long as it can be interpreted in the correct manner. The key part here is: so long as it can be interpreted in the correct manner. And therein lies the difficulty.
As things stand, we find ourselves awash in a sea of poorly conducted and misinterpreted medical research. We are drowning. Here I will outline the issues with the current flood of poor quality research, and why in most cases it would be better to have nothing at all.
Bad research
This article is not about openly fraudulent research, although there is significantly more of this than you might think. It is obvious why this would be a problem, and we can all agree it is unambiguously wrong. You can read more about the issue here.
This is about the much more insidious problem of the churning out of high volume, poor quality observational research that litters the medical literature. It is a bigger problem because many people do not even seem to realise how bad the quality is, nor why it matters.
As a point of reference, let us consider the randomised clinical trial. These are, deservedly, considered the gold standard in clinical evidence. To quote statistician/epidemiologist Darren Dahly,
A less well appreciated reason why randomised trials remain the best form of clinical evidence is that in countries with highly stringent regulatory agencies (mainly western high income countries), it is actually very difficult to conduct a bad trial. Protocols get dissected and scrutinised to such an extent by regulatory bodies and ethics committees that a truly “bad” clinical trial is almost impossible to get to the point of execution (although there can be discussions about what exactly constitutes a “bad” clinical trial).
This is not the case with observational research. Such research can, of course, be of vital importance, especially for cases where randomised trials are infeasible or impossible due to ethical considerations. Consider the foundational research that determined smoking as a cause of lung cancer.
This type of research - especially when it comes to making causal claims regarding the impact of drug/device interventions - is exceptionally difficult. It is a field fraught with errors due to the non-randomised nature of differing exposures. That is to say, at the outset of the research there are inherent differences between people who do or don’t get exposed to the intervention you are interested in. Most obviously, there is a systematic reason why some people happened to get the drug/intervention, and a reason why some people did not. Untangling the effect of this difference from a possible effect of the intervention is a minefield. It requires detailed, tortuous examination of influential differences which could impact the result, and subsequent attempts to statistically adjust for these.
Even with the most expert epidemiologists and biostatisticians handling the best quality data, you can never be sure that your results have not been impacted by a variable you haven’t even measured (hidden confounding), and even when the known confounders are measured, imperfect measurement and adjustment can leave residual confounding behind. Even on the question of smoking and lung cancer, for many years the two most eminent biostatisticians of the century (Ronald Fisher and Austin Bradford Hill) could not agree whether smoking caused lung cancer or whether there was a common cause of both (such as a genetic problem of the lungs which made you want to smoke and also led to cancer).
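To make this concrete, here is a minimal simulation sketch (entirely invented data and a hypothetical drug, nothing from a real study). Sicker patients are more likely both to receive the drug and to die, so a naive comparison blames a drug that in truth does nothing; adjusting for the measured confounder helps, but an unmeasured one still leaves the estimate biased:

```python
# Minimal sketch of confounding by indication (simulated data, illustrative only).
# The "drug" has NO true effect, but sicker and frailer patients are more likely
# to receive it and more likely to die.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000

severity = rng.normal(size=n)   # measured confounder
frailty = rng.normal(size=n)    # unmeasured (hidden) confounder
treated = rng.binomial(1, 1 / (1 + np.exp(-(severity + frailty))))
death = rng.binomial(1, 1 / (1 + np.exp(-(-2 + severity + frailty))))

def treated_log_odds(columns):
    """Log odds ratio for 'treated' from a logistic regression on the given columns."""
    X = sm.add_constant(np.column_stack(columns))
    return sm.Logit(death, X).fit(disp=0).params[1]

print("Naive estimate:               ", round(treated_log_odds([treated]), 2))
print("Adjusted for severity only:   ", round(treated_log_odds([treated, severity]), 2))
print("Adjusted for both confounders:", round(treated_log_odds([treated, severity, frailty]), 2))
# The naive estimate makes the drug look harmful; adjusting for severity shrinks
# the bias; only adjusting for the hidden variable as well gets near the true zero.
```

In real data that last line is not available, because the hidden confounder was never recorded - which is exactly the problem.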
Unfortunately, most of this research is not conducted by expert epidemiologists and biostatisticians. It is not conducted with data meticulously collected for the specific purpose of addressing a research question. It is conducted by clinical researchers with inadequate methodological training or expertise, with the use of anonymised or open access data, often derived from electronic health records (which are designed for billing people, not for research). There is so little oversight that it is very easy to do extremely poor quality research. Worse, the oversight provided is often from others who lack the methodological training to properly scrutinise the research proposals. The research is presented by people who do not realise they have done it poorly, reviewed by people who do not realise it has been done poorly, and then read and digested by people who do not realise it has been done poorly.
This happens at astonishing volumes. It is not an unhappy accident that every other week in the newspapers, coffee is newly discovered to cause cancer, or prevent cancer, or cause heart attacks, or prevent heart attacks. Or vitamin D is the cure for all ailments from cancer to heart attacks to diabetes, despite subsequent randomised trials of its use showing no evidence of any efficacy.
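Part of the explanation is simple arithmetic. Test enough exposure-outcome pairs where nothing real is going on and a steady stream of “significant” associations appears by chance alone. A small sketch with purely simulated, null data:

```python
# Sketch: 1,000 studies of exposures that truly do nothing, each tested at p < 0.05.
# Roughly 5% will still come out "significant" by chance alone - enough for a year
# of contradictory coffee headlines. (Simulated data, illustrative only.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_people = 1_000, 500

false_positives = 0
for _ in range(n_studies):
    coffee = rng.normal(size=n_people)    # exposure with no real effect
    outcome = rng.normal(size=n_people)   # outcome independent of the exposure
    _, p = stats.pearsonr(coffee, outcome)
    false_positives += p < 0.05

print(f"'Significant' findings out of {n_studies} truly null studies: {false_positives}")
```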
The problem
The damage bad research does extends far beyond academic journals. First, it squanders valuable resources—funding, time, and effort that could be invested in high-quality research, or in providing proper methodological training to the junior researchers doing most of the legwork. But beyond waste, bad research actively distorts the medical literature, creating a “noise” that buries meaningful findings under a mass of unreliable results.
The community are presented with findings which at face value appear legitimate, but in fact have a high probability of being completely wrong. Rather than being knowingly ignorant on a topic, the community think they know something, but what they know is wrong. These flawed studies often become part of systematic reviews and meta-analyses, amplifying their biases and warping broader conclusions about a treatment or intervention. This skewed evidence can influence clinical guidelines and misdirect medical practice, pushing interventions that might be ineffective or even harmful. When flawed research overstates the benefits of a drug or downplays risks, it can lead to misguided treatments, false hope, or misinformed public policy. Bad research doesn’t just mislead; it has real-world consequences that can harm patients and delay the adoption of genuinely beneficial treatments.
Bizarrely, this even stands when the research findings are totally implausible at face value. There were ample examples of this during the pandemic, including multiple ecological studies of mask effectiveness with absurd estimates, or the impact of cabbage consumption on Covid-19 mortality. More are produced each day.
Whilst scientists treat this research waste as if it were some pure, objective truth, the public are often able to see straight through it. This inevitably leads to a breakdown in trust between the public and science/medicine, which is a disastrous outcome when the effectiveness of so many advances in modern biomedicine relies heavily on public uptake and adherence.
Why does this happen?
Education of clinical researchers is poor. This is partly because we are short on time: having to learn to be both expert clinicians and researchers is no easy task. Medical students often only receive education in this field by being made to produce precisely the type of research we are discussing, rushed out in a semester without adequate supervision. But it is also because incentives do not value slow, thorough and methodologically robust research. Today’s research ecosystem rewards quantity over quality. Researchers, under pressure to publish frequently to advance their careers, face a "publish or perish" culture that values a high publication count far more than rigorous or replicable findings. Universities, journals, and funding agencies often prize high-output labs or individuals, favouring metrics like citation counts over scientific reliability. This model encourages researchers to prioritise speed and volume, making it tempting to skip crucial time and effort at the design stage, causing whole projects to be built on foundations of sand.
Then, studies with “positive” findings are more likely to be published and promoted, which can lead to questionable research practices and allow compromised studies to slip into the literature unchecked. The result is a system where superficial, fast, and flashy research is rewarded, and the careful, replicable, high-impact studies we need get left behind. Slow, robust research without exciting or hyperbole-inspiring findings often goes unpublished and unrewarded.
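The effect of that filter is easy to demonstrate. In the sketch below (simulated studies of a hypothetical treatment with a tiny true benefit, not real data), only results that happen to be both positive and statistically significant get “published”, and the published literature ends up suggesting an effect many times larger than the truth:

```python
# Sketch of publication bias: many small studies of a treatment with a tiny true
# effect; only "positive" (significant, favourable) results reach a journal.
# (Simulated data, illustrative only.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, n_per_arm, n_studies = 0.05, 50, 2_000

all_effects, published = [], []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    diff = treated.mean() - control.mean()
    _, p = stats.ttest_ind(treated, control)
    all_effects.append(diff)
    if p < 0.05 and diff > 0:   # only flashy, positive findings get written up
        published.append(diff)

print("True effect:                ", true_effect)
print("Mean effect, all studies:   ", round(float(np.mean(all_effects)), 3))
print("Mean effect, published only:", round(float(np.mean(published)), 3))
# A reader (or meta-analysis) of the published studies alone sees an effect
# several times larger than the one that actually exists.
```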
What is the answer?
There are only two real solutions. The first is to increase the quality and scope of methodological training - initially for researchers themselves, but also for health professionals who are not directly involved in delivering research yet are (or should be) consumers of its output, using up-to-date research to guide their practice. They need to be capable of scrutinising published research and sorting the wheat from the chaff, applying appropriate levels of scepticism to all published, non-randomised findings.
The second is to change the incentives that reward churning out useless observational research to pad CVs and inflate academic metrics. This is a much thornier issue than increasing education and training, as it involves broader changes at institutional levels. The topic is discussed in greater detail in this article.
Neither of these things can or will happen quickly, and sadly the tsunami of research waste will likely continue for the foreseeable future.
Summary
Bad medical research isn’t just unproductive; it’s actively harmful to the progress of medicine and public health. Driven by inadequate training and misaligned incentives, poorly designed studies undermine the credibility of science, waste resources, and mislead practitioners and patients. By creating illusions of certainty and spreading false confidence, bad research distorts clinical decision-making and causes harm.
The answer requires structural reforms that reward accuracy over volume, and a scientific culture that values careful, impactful studies over the quick churn of questionable findings. We need education that equips clinical researchers with strong methodological skills and enables practitioners to critically appraise research output with competence. The future of medical science depends on it.
For more on the “magic” of randomisation, see this excellent editorial from the New England Journal of Medicine https://www.nejm.org/doi/10.1056/NEJMsb1901642
You can subscribe to Darren’s Substack here https://open.substack.com/pub/statsepi