The Munro Report

The Pandemic Evidence Failure

We did so much, under such uncertainty, and learnt so little. We must demand more and do better.

Alasdair Munro
Jul 26

A new article has been published in the journal BMJ Evidence Based Medicine, entitled “Adapt or die: how the pandemic made the shift from EBM to EBM+ more urgent”. The premise of this article is that the pandemic has demonstrated that the field of Evidence Based Medicine should move away from traditional hierarchies of evidence and embrace mechanistic studies as a “high quality” form of evidence for complex systems.

The article steps directly over the massive elephant in the room before proceeding to make its somewhat baffling case, missing the most important point. It is a prime example of our failure to learn the most important lessons regarding the generation and interpretation of evidence during the pandemic.

What is EBM?

For those unfamiliar with the term, Evidence Based Medicine (EBM) is traditionally defined as:

“…the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients”

You can read a wonderful, brief explainer here. EBM emphasises the importance of generating and using high quality evidence to inform decisions on patient therapy, recognising some of the harms caused by historical practices of simply deciding based on anecdote, personal experience, or scientific evidence far removed from actual patients.

The elephant in the room

The introduction to the aforementioned article contains the following paragraph:

“Despite a quarter of a million scientific papers on COVID-19, some basic issues remain contested. How exactly does the virus spread? How effective are non-pharmaceutical interventions—masks, distancing, closure of buildings, remote working and learning, lockdowns—in reducing transmission, and what are their trade-offs? How can we make schools, hospitals and other public buildings safe? How can we protect workers and the public without closing down the economy? How can we reduce the shocking inequalities that have characterised this pandemic?”

The question, of course, is this: following the publication of 250,000 articles, why are these absolutely fundamental questions about pandemic management still contested?

The reason is simple: we failed to generate sufficient high quality evidence to address these questions. One of the primary reasons is precisely the erroneous suggestion that there was no need to generate high quality evidence, because we could simply rely on mechanistic evidence.

The reality is that, after decades of research, we recognise that mechanistic evidence often fails to translate into meaningful effects in the messy real world. Take the example of mask use: in the absence of trial data, what followed was a slew of inconsistent, messy, confounded observational studies showing anything from a 90% effect of masks to no effect at all. Hence, there are still people claiming high quality masks could “end the pandemic”, and others claiming masks make no difference to transmission at all.

Despite masks being probably the most ubiquitous non-pharmaceutical intervention applied during the pandemic, we have only two randomised controlled trials of their use:

  • A study in Denmark (DANMASK), which did not find a significant effect, although it was underpowered to rule out a meaningful impact (the rough power sketch below illustrates why), and its design only really assessed masks as protection for the wearer rather than as source control.

  • A study in Bangladesh, which found that surgical masks, as part of a package of sustained education, may have reduced transmission somewhere between 0% and 18%.
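
To get an intuition for the scale problem, here is a minimal back-of-envelope power calculation in Python (using statsmodels). The baseline attack rate and effect size are assumptions chosen purely for illustration, not figures from either trial.

```python
# Rough sample size needed to detect a modest mask effect in an
# individually randomised trial. All inputs are illustrative assumptions.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_risk = 0.02        # assumed infection risk in the control arm
relative_reduction = 0.15   # assumed true effect of masks (15% relative reduction)
masked_risk = baseline_risk * (1 - relative_reduction)

effect = abs(proportion_effectsize(masked_risk, baseline_risk))  # Cohen's h
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Participants needed per arm: {n_per_arm:,.0f}")
# Under these assumptions, roughly 15,000 per arm: far larger than the
# individually randomised mask trials that were actually run.
```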

The single biggest scientific failure of the pandemic was not generating more high quality evidence on the impact of non-pharmaceutical interventions.

Fortunately, for pharmaceuticals we have learnt this lesson - although the failures of accepting low quality observational or mechanistic evidence were played out on the world stage with drugs like hydroxychloroquine, ivermectin and convalescent plasma.

Despite the article claiming mechanistic evidence has been “mission critical”, it led to widespread recommendations for cloth masking, which has since been dismissed as so ineffective as to be barely worthwhile, and we are still left guessing at how effective the alternatives may be.

Complexity is no excuse

A central tenet of this article is that the simple, or narrow, interpretation of what constitutes high quality evidence is not appropriate for complex systems in the real world. This is completely fallacious; complexity is precisely what warrants stringent evidence, as it is what renders simplistic mechanistic evidence insufficient. It is the same reason why we cannot simply demonstrate that medical drugs or devices have a mechanism which should cause the effect we want: when we apply them to complex systems, unexpected things happen which can render them useless, or worse, harmful. Complexity makes it more difficult to simply extrapolate what “should” work.

It is not enough to rely on rationalism - we must depend on empiricism

Whilst the article is correct in asserting that there are differences in the paradigm for individual patient care compared to population/public health interventions, it is wrong in asserting that this makes randomised trials unnecessary. One of the most important developments in the field of Economics in recent decades has been the use of randomised trials. These have been conducted under circumstances very similar to those of public health interventions, and nowhere is this better demonstrated than by the fact that the study of masks in Bangladesh was carried out by a group of economists! It is also no coincidence that some of the best evidence regarding the impact of school closures on Covid-19 transmission was generated by economists like Emily Oster.

Can these trials be done? Of course! One of the finest examples was a randomised trial of daily contact testing vs mass quarantining for cases within schools. Despite astonishing claims that such a trial would be “unethical” (an objection the article also raises as a limitation of RCTs), no significant difference was found between the two interventions.

Creating false or unnecessary barriers against generating good evidence has been an enormous own goal in its own right.

What has EBM got to do with anything?

Focussing on EBM is an unnecessary distraction, given that it traditionally concerns the management of individual patients, not the mass management of a well population. In this regard, EBM has succeeded, in the form of clinical trials such as RECOVERY, PANORAMIC, the multiple vaccine trials, and more.

The idea that we should demand and act on the highest quality of evidence for populations, not just individuals, is also not new (see Muir’s “Evidence Based Healthcare and Public Health”). Indeed, the biggest failure in the scientific pandemic response has not been sticking to a narrow “evidence based” view of implementation; it has been ignoring it. If the need to act arises in the absence of high quality evidence, you can still act on the current best evidence, and considering the poor quality of evidence (where it existed at all) at the start of the pandemic, there is a lot of room for debate about what this should have looked like.

Where I can see no room for debate is that, after acting, you should instigate emergency measures to ensure that you do not continue to act in an evidence vacuum, but immediately set about generating evidence from trials in the community. Cluster randomised trials of masking, rapid antigen testing, school closing/reopening and so on would have come at a fraction of the cost of the blanket implementation of these measures, and could have put the debate over their efficacy to bed once and for all.
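
To give a sense of the planning such trials need: cluster randomisation inflates the required sample size by the “design effect”, 1 + (m − 1) × ICC, where m is the cluster size and ICC the intra-cluster correlation. A minimal sketch, with both inputs assumed purely for illustration:

```python
# Design effect for a cluster randomised trial. Both inputs are
# illustrative assumptions, not estimates from any real trial.
cluster_size = 30  # assumed average cluster size (e.g. pupils per class)
icc = 0.05         # assumed intra-cluster correlation of infection outcomes

design_effect = 1 + (cluster_size - 1) * icc
n_individual = 15_000  # per-arm figure from an individually randomised design
n_cluster = n_individual * design_effect

print(f"Design effect: {design_effect:.2f}")
print(f"Participants per arm: {n_cluster:,.0f} "
      f"(~{n_cluster / cluster_size:,.0f} clusters per arm)")
```

Large, certainly, but a rounding error next to the cost of implementing these measures blindly across an entire country.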

The most harmful thing you can do in this setting is to claim that high quality evidence is unnecessary, or worse, unethical.

Why didn’t we do better?

There are several reasons why we failed to generate better evidence, including:

  1. Both a reluctance to conduct, and the difficulty of conducting, trials once an intervention has been implemented despite a lack of existing high quality evidence

  2. A lack of experience in the public health and medical communities with complex community randomised trials (economists perhaps have more)

These are by no means insurmountable. We missed valuable opportunities, in the periods when some of these interventions were being removed, to do so in a manner that could have yielded information about their effectiveness. Just as we have “off the shelf” studies ready to go for pharmaceuticals, we should prioritise getting similar study frameworks for community non-pharmaceutical interventions ready to deploy for the next disease outbreak or pandemic.

Summary

Mechanistic evidence is important, as it is what gives us reason to believe that an intervention may be worth implementing. Within the fields of medicine and public health, it is very rarely sufficient on its own to conclude that an intervention is worth doing. When biology and human behaviour have a chance to change the game, all bets are off. Observational evidence is useful, but for complex interventions with marginal effect sizes it is at especially high risk of confounding. It can even be dangerously misleading if the results are wrong.
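
As a toy illustration of that confounding risk (all parameters invented), the following simulation gives masks zero true effect, yet a naive observational comparison still shows them as “protective”, because cautious people both mask more and take other precautions:

```python
# Toy confounding simulation: masks are given ZERO true effect, but
# "cautiousness" drives both mask wearing and lower infection risk.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

caution = rng.random(n)                       # latent confounder, 0 to 1
masked = rng.random(n) < 0.2 + 0.6 * caution  # cautious people mask more
risk = 0.05 * (1 - 0.8 * caution)             # cautious people are safer anyway
infected = rng.random(n) < risk               # note: masks never enter this line

rr = infected[masked].mean() / infected[~masked].mean()
print(f"Observational risk ratio: {rr:.2f}")
# Prints roughly 0.75 (an apparent ~25% "protective effect") even though
# the simulated masks do nothing at all: confounding by behaviour.
```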

If we had wasted less time making excuses for why we can’t do randomised trials, spent less time and money on endless confounded observational studies, and focussed our attention on generating high quality evidence, these issues simply wouldn’t be contentious anymore.

Deciding we can simply consider lower quality evidence to be sufficient is an enormous error, and is indeed, as one reviewer said of the article, “a throwback to the ’70s”.

Thanks for reading The Munro Report! Subscribe to make sure you don’t miss out on new posts.

5 Comments


Unplanned Admissions
Jul 28 · edited Jul 28 · Liked by Alasdair Munro

Fab piece. These two new covid reports published by the BMA today about the public health response to the pandemic are rather wild. They seem to claim that "The decision of the UK governments to close schools to control the spread of COVID-19 was sensible given how little we knew about the transmission or impact of the virus". No proper assessment of the harms of that decision. There is no mention of their ill-judged intervention on vaccine dosing and it glosses over the (lack of) evidence for masking, citing a news story about the Bangladesh study. A triumph of policy-based evidence making.

https://www.bma.org.uk/advice-and-support/covid-19/what-the-bma-is-doing/the-impact-of-the-pandemic-on-population-health-and-health-inequalities

Ewan
Jul 26

Important piece. It echoes my own frustrations with many public health researchers and policy makers. It really shocked me to learn how poorly many people working in this area understand the value of randomisation, or empiricism generally.

A word in favour of observational studies. Resources devoted to these are not a major issue. Having worked on large community based RCTs in cancer screening (tens of thousands of participants), I would guess that observational studies with routinely collected data would typically be several orders of magnitude cheaper. Of course the RCTs still need to be done! But supporting observational evidence doesn't fundamentally alter the resourcing needed and can generate additional knowledge.

To put it a different way, how many large RCTs are required to arrive at the optimal masking policy? 10? 100? There are so many combinations of mask type, rules for when it can be removed, fitting etc. Head-to-head trials for comparison of all options are probably not possible. The best approach may be to establish the efficacy (or not) of masking with 1 or 2 large RCTs, and then attempt to learn at the margins with observational studies.
