Against Models as Propaganda
We examine the misuse of infectious disease models as ideological tools
Mathematical models of infectious diseases have been one of the more controversial aspects of the scientific response to the pandemic. This controversy, however, has often taken inconsistent forms: where one commentator asserts with great confidence that excessive reliance on modelling is the reason the UK did not follow an “obviously correct” zero-Covid strategy like China, another will be equally sure that overly pessimistic scenarios led to unnecessary lockdowns aimed at averting outcomes that could never have come to pass.
In fact, modelling is central to science: it lets us think through the complex consequences of our assumptions, and because numbers are explicit, consistent, and universally understood, it gives us a basis on which consensus can be established. Even technically correct modelling, however, is only as good as the assumptions that go into it.
While dismissing models as useless is wrong, so is undue confidence in the output of models, particularly if these confirm our prejudices.
In the same way that there are issues to bear in mind when reading an experimental study or trial - the strength of controls, possible sources of confounding, the rigour of the methods, and so on - there are issues that must be considered when reading a modelling study. These can be applied even though the technical details of how models are put together (and how their results are best communicated or applied to policy) remain beyond the reach of many members of the public, and even of many scientists from other disciplines.
Recently there has been a proliferation of studies using mathematical modelling of Covid-19 which have been used to argue strongly for certain policies. Whilst the substance of the models is not in question here (although peer review is no guarantee against outright ‘bugs’ in code or even mathematical errors), the interpretation and communication of the results by the studies’ authors have been nothing short of propaganda.
This is bad for the public, bad for science, and bad for modelling.
There are three main issues - though this is far from an exhaustive list - that can arise in modelling studies; below are examples of how each has been misused.
Failure to quantify uncertainty and variability
This study on adolescent vaccination policy in the UK, used as evidence that all teenagers should be vaccinated, did not differentiate between children who are clinically vulnerable (who were always considered a priority for vaccination) and those who are not.
This is an essential source of variability in the population and must be included.
Lumping everyone together as if they have equal risk can be extremely misleading. If, for example, your model shows you can reduce hospitalisations by 80% by vaccinating everyone, but misses the fact that you could reduce hospitalisations by 75% by vaccinating only 5% of the population, that is an absolutely fundamental oversight.
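To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every number is hypothetical, chosen purely to illustrate the arithmetic; none is taken from the study or from real data:

```python
# Illustrative only: every number here is a hypothetical assumption.
population = 1_000_000
high_risk_fraction = 0.05    # assume 5% of people are clinically vulnerable
hosp_risk_high = 0.03        # assumed hospitalisation risk if infected (high-risk)
hosp_risk_low = 0.0002       # assumed hospitalisation risk if infected (low-risk)
vaccine_efficacy = 0.90      # assumed reduction in hospitalisation risk

high_risk = population * high_risk_fraction
low_risk = population - high_risk

def expected_hospitalisations(vaccinate_high: bool, vaccinate_low: bool) -> float:
    """Expected hospitalisations if everyone were infected, under a given strategy."""
    rr_high = (1 - vaccine_efficacy) if vaccinate_high else 1.0
    rr_low = (1 - vaccine_efficacy) if vaccinate_low else 1.0
    return high_risk * hosp_risk_high * rr_high + low_risk * hosp_risk_low * rr_low

baseline = expected_hospitalisations(False, False)
for label, strategy in [("vaccinate everyone", (True, True)),
                        ("vaccinate only the 5% high-risk group", (True, False))]:
    reduction = 1 - expected_hospitalisations(*strategy) / baseline
    print(f"{label}: {reduction:.0%} reduction in hospitalisations")
```

With these made-up figures, vaccinating everyone cuts hospitalisations by about 90%, while vaccinating only the 5% high-risk group cuts them by about 80% - exactly the kind of comparison a model that lumps everyone together cannot make.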
Furthermore, the concerns of the UK vaccination committee (the JCVI) related mainly not to their assessment of the likely benefit of vaccinating children, which they believed to be positive, but rather to how certain they were of this. This model does not seek to capture such uncertainty systematically and as such does not address the main policy issue at hand. Whilst being used as support for a desired policy, it fails to address the two most important factors influencing that decision.
Assuming what is intended to be shown
To model the outcomes of different scenarios, you first need to supply the model with parameter values. For example, if I want to see what impact mask wearing will have on Covid-19 deaths, I have to tell the model what masks do (among the many other relevant factors). I can model masks reducing transmission by 0%, 5%, 50%, 99%, or whatever else I wish.
When the parameter you need to input has a very solid evidence base and low levels of uncertainty, that is not an issue. However, when the parameter has extremely poor evidence and high levels of uncertainty (e.g. mask wearing), that is potentially a very BIG issue.
For example, if I tell my model that mask wearing reduces transmission by 99% and then run my scenarios, afterwards I can claim that my model shows mask wearing would have reduced Covid-19 deaths by 800,000 - but this is basically meaningless. I could just as easily tell the model that mask wearing increases transmission by 10% and report that my model shows mask wearing increases Covid-19 deaths by 200,000.
The result of the model is entirely dependent on what I tell it, and the result cannot be used as proof of what I have told it.
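To see how directly the output follows from the input, here is a deliberately crude sketch: a toy discrete-time SIR model with made-up parameters (not anyone’s published code), in which the only thing we vary is the transmission reduction we assume masks provide:

```python
# Toy SIR sketch: every parameter is an illustrative assumption, not an estimate.
def deaths_from_toy_sir(mask_reduction, r0=3.0, infectious_days=5.0,
                        ifr=0.005, population=60_000_000, days=365):
    """Cumulative deaths from a crude daily-step SIR model.

    mask_reduction is the *assumed* fractional cut in transmission from masks;
    it enters the model directly through the transmission rate beta.
    """
    beta = (r0 / infectious_days) * (1 - mask_reduction)  # the assumption goes in here
    gamma = 1 / infectious_days
    s, i, r = population - 100.0, 100.0, 0.0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s = s - new_infections
        i = i + new_infections - new_recoveries
        r = r + new_recoveries
    return ifr * r  # deaths among everyone ever infected

baseline = deaths_from_toy_sir(mask_reduction=0.0)
for assumed in [0.05, 0.5, 0.99]:
    averted = baseline - deaths_from_toy_sir(mask_reduction=assumed)
    print(f"Assume masks cut transmission by {assumed:.0%}: "
          f"model 'shows' {averted:,.0f} deaths averted")
```

The number of “deaths averted by masks” that comes out is not a finding about masks at all; it is a restatement of whatever effectiveness we chose to put in.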
There have been many examples of this recently.
This modelling study on mask wearing has been used as evidence of how useful masks are, despite the result being entirely dependent on how useful the modellers told the model masks are.
This modelling study on differences in transmission by vaccination status has been used as evidence that transmission from unvaccinated people disproportionately drives infections in vaccinated people, despite the result being entirely dependent on the modellers telling the model how much transmission comes from unvaccinated people compared with vaccinated people.
For a discussion of how the assumptions used could plausibly be completely incorrect (in particular, the assumed differences in transmission between these populations), and of these errors of reasoning, you can read the extensive replies to this study here.
In these examples, the modellers have already decided how big the effect is, input this into the model, and then used the output of the model as evidence of how big the effect is.
This is obviously circular reasoning, and is a total misuse of the model.
Logical inconsistency
This modelling study on the timing of the UK lockdown has been used as evidence that locking down a week earlier in March 2020 would have saved up to 43,000 lives, despite the result being entirely dependent on the modellers telling the model that lockdowns were the only thing that reduced transmission.
This is the same issue of circular reasoning we discussed above; however, the errors made by this paper go deeper still.
In particular, they make the claim that an earlier lockdown could also have finished earlier. This is fundamentally logically inconsistent with the assumption that it is lockdowns alone that control transmission - as soon as a lockdown ends, in the authors’ own model, exponential growth immediately resumes and so there is no meaningful sense in which lockdown can both end early and save lives.
A conclusion from the authors’ assumptions here is that a lockdown more than a week earlier would have saved still more lives and been even shorter - to the extent that we could have locked down for half an hour very early in 2020 and no-one would have died in Wave 1 of the pandemic. Clearly this is not a logically consistent conclusion. It is true that if the aim of lockdown is to achieve truly zero infections, as in New Zealand, Australia or China, then earlier action reduces the necessary duration of restrictions; in this paper, however, lockdowns were ended at a particular non-zero level of infection. A case can be made for earlier action, but this model does not help it.
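A deliberately simple sketch (with hypothetical growth and decay rates, not the paper’s model) shows the problem: if transmission only falls during lockdown and lockdown is lifted once infections drop back to some fixed non-zero level, then an earlier lockdown is indeed shorter, but it hands exactly the same starting point to the post-lockdown wave.

```python
# Toy illustration of the reductio above; all rates are hypothetical assumptions.
import math

growth_rate = 0.2        # assumed daily exponential growth rate outside lockdown
decay_rate = 0.1         # assumed daily exponential decline rate during lockdown
exit_prevalence = 1_000  # lockdown lifted once infections fall to this level

def lockdown_outcome(start_prevalence):
    """Days of lockdown needed, and the prevalence handed to the post-lockdown wave."""
    days_locked = math.log(start_prevalence / exit_prevalence) / decay_rate
    return days_locked, exit_prevalence

for start in [100_000, 10_000, 2_000]:  # locking down later vs. progressively earlier
    days, handed_over = lockdown_outcome(start)
    print(f"Lock down at {start:>7,} infections: {days:5.1f} days of lockdown, "
          f"then unchecked growth resumes from {handed_over:,} infections")
```

Under these assumptions, an earlier and shorter lockdown merely shifts the same wave in time; it can only save lives outright if infections are driven to effectively zero, which is not what the paper modelled.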
Are models bad?
The issue here is not the science or the modelling itself. Models are useful for asking the question:
IF the effect size of wearing masks/lockdowns/unvaccinated transmission etc is X, then how does this affect your outcome?
But you will notice here that there is a big IF.
Communicating these results needs to be absolutely crystal clear. There is a world of difference between saying:
“Our model shows mask use could reduce Covid-19 deaths by X amount”
vs
“Our model shows that if masks are 80% effective at reducing transmission, then if people use them it could reduce Covid-19 deaths by X amount”
In the latter example, it becomes obvious that the entire result rests on the assumption that masks are 80% effective. That means you can now interpret this result with the necessary caveats, and act accordingly.
It is very useful to know how much masks could reduce Covid-19 deaths if they are 80% effective. It is not useful - in fact, it is harmful and misleading - to claim your model simply suggests that mask use can reduce Covid-19 deaths without giving any of the necessary assumptions, variability or uncertainty.
Summary
Models are essential for understanding how various assumptions might affect an outcome of interest. But their conclusions are only as good as those assumptions. Using the output of a model to assert the importance of an underlying assumption is circular reasoning, is misleading, and damages the reputation of this scientific domain. Failing to incorporate known variability or uncertainty into a model's assumptions or conclusions is a fundamental mistake.
To improve the usefulness of, and trust in, modelling, it should be undertaken according to appropriate professional standards, with clear statements about conflicts of interest, and not with the stated intent of delivering a particular policy outcome.
Otherwise, it is just propaganda.