Nice succinct summary of some of the biggest problems in Covid modelling and communication.
Failure to quantify uncertainty in models is a critical problem when the models serve as an input to decision making. And that isn't to say that very uncertain evidence of an intervention's benefits precludes its use. Changing policies on masking etc. make a lot more sense when understood as resulting from changing beliefs about extreme pandemic outcomes, not from any certainty about whether the interventions are really effective.
In health economics, the 'irrelevance of inference' in decision making is fairly well known but it is a major omission in most science communication.
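To make the point concrete, here is a toy sketch (all the numbers are made up purely for illustration): even with a very uncertain estimate of an intervention's effectiveness, a decision based on expected losses, weighted by beliefs about extreme pandemic outcomes, can still favour acting, without any claim of certainty that the intervention works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, for illustration only: a wide, uncertain belief
# about the intervention's relative-risk reduction (it could plausibly
# be near zero), a small probability of an extreme pandemic outcome,
# and an arbitrary cost scale.
effect = rng.beta(1.5, 4.0, size=100_000)   # uncertain effectiveness (mean ~0.27)
p_extreme = 0.05                            # probability of an extreme outcome without action
loss_extreme = 1_000.0                      # loss if that outcome occurs
cost_intervention = 10.0                    # cost of intervening

# Expected losses, averaging over the uncertainty in effectiveness.
# The decision compares expectations; no inference about whether the
# effect is "significantly" different from zero is needed.
loss_without = p_extreme * loss_extreme
loss_with = cost_intervention + p_extreme * loss_extreme * (1 - effect).mean()

print(f"expected loss without intervention: {loss_without:.1f}")
print(f"expected loss with intervention:    {loss_with:.1f}")
```

Note that only the mean of the effect enters the calculation: making the estimate more precise without changing its mean does not change the decision, whereas shifting beliefs about the extreme outcome does. That is roughly the 'irrelevance of inference' point.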
I worked extensively with models in the engineering world, and I was surprised at just how terribly inaccurate all the COVID epidemic modelling was - in terms of cases, hospitalizations, and deaths. My takeaway from that total modelling failure was that we clearly understood almost nothing, from how the virus spread, how effective mitigation methods were, and its seasonality, down to much simpler things like R values, IFR, or IHR. When models make predictions and they don't come true, they still tell us something important, i.e. that our understanding is incomplete, and the more inaccurate the prediction, the more understanding we lack.
Thanks for writing such a clear and easy to understand article.
That's a very clear and helpful piece, thank you.
Experience is the best teacher. But it appears that biases, promoted outcomes, and political influence were valued much more than history.
"Mathematical models for infectious diseases have been one of the more controversial aspects of the scientific response to the pandemic."
Well, that and everything else! Nice post BTW!