How to not be completely wrong about air filters
A new RCT of HEPA filters for residential care homes gets heads spinning
The Covid-19 pandemic brought a renewed interest in an overlooked field of infection prevention: improving the hygiene of the air we breathe. There are benefits beyond reducing the risk of infections, such as reducing exposure to pollutants and improving cognitive performance by reducing CO2, which also make it appealing.
Unfortunately, like many different aspects of the pandemic, this topic seemed to cause people to lose all their senses. The same thing happened with the use of facemasks, for which I wrote a previous post.
I will now attempt to help us all screw our heads back on with regard to air filtration and get to grips with an important topic.
The problem
One of the biggest issues of the pandemic was the failure to generate high quality evidence around the use of non-pharmaceutical interventions. Many things which seem like a good idea require assessing in high quality studies, ideally randomised controlled trials, to help us assess the size of their impact and whether it is worth the time and trouble to implement them.
This is also true for interventions such as air filtering. It seems like a great idea on the surface, but it comes with a significant financial burden. Directing efforts at this is also an opportunity cost away from time and money which could be spent elsewhere.
Like masks, improving air ventilation and filtration is not a magic fix. Despite what some people claim, improving these will not prevent all infections or “end the pandemic”. Some people have likened cleaning air to prevent Covid-19 to cleaning water for the prevention of cholera. This is profoundly misguided, as I have also written about previously.
To demonstrate this in the most graphic way possible, using air cleaning mechanisms for respiratory viruses such as Covid-19 cannot stop someone from coughing directly onto your face or in your mouth. For cholera…well, you get the picture.
There are also people who are wholly unconvinced that introducing such measures will have a meaningful impact at all. All of the above is why high quality evidence is needed. In its absence, all we are left with is a lot of very strong, but ill-informed, opinions.
The evidence
The update to this situation is the publication of a new randomised controlled trial of air filters, installed in the rooms of residents of aged care facilities. This is the population both at highest risk of infection and with the highest exposure to the intervention, as residents spend essentially all of their time within the facility or their rooms. This is therefore where we would expect the intervention to have the largest impact in reducing infections, and where preventing infections would have the biggest impact on preventing severe illness or death.
The study utilised a cross-over design: half the participants were randomised to receive an air purifier without a filter, and the other half an air purifier with a filter. This lasted for three months, after which there was a one-week washout period before the groups with or without a filter in their air purifiers were swapped over. The participants and analysts of the study were blinded as to which group was which to avoid bias.
It all sounds great until we get to the statistical planning. To know how many people you need in a study, a power calculation is performed. To perform a power calculation, you need to determine the “minimal clinically important difference” (MCID), which is essentially the smallest effect size that you think would make the intervention worthwhile. The point of the power calculation is to make sure you don’t miss the smallest difference you would care about.
This study was powered for an effect size of a 50% reduction in infections. That is to say, the smallest effect it could reliably detect was a halving of the rate of infections (and even for this the study only had 80% power, meaning a 20% chance of missing a halving in the rate of respiratory infections).
To put that into perspective, there are exceptionally few interventions in modern medicine or public health which have an effect size anywhere near a 50% reduction in an outcome of interest. Most people would be interested in a far smaller effect size than this, and I am surprised the study made it through ethical review with this sample size.
A quick back-of-the-envelope calculation suggests that for a non-cross-over design, to detect a 25% relative risk reduction (an incidence of 30% compared with a control incidence of 40%) you would need around 350 participants, compared with the 135 enrolled in this study. You would need fewer with the cross-over design, which improves power.
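That back-of-the-envelope figure can be reproduced with a standard two-proportion sample size formula using Cohen's effect size h (the arcsine transformation). This is a simplified sketch, not the trial's own power calculation, and it ignores the extra power the cross-over design would provide:

```python
import math

def n_per_group(p1, p2):
    """Approximate sample size per arm for a two-sided two-proportion
    test at alpha = 0.05 with 80% power, via Cohen's effect size h."""
    z_alpha = 1.959964  # two-sided 5% critical value
    z_beta = 0.841621   # 80% power
    h = 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))
    return math.ceil(((z_alpha + z_beta) / h) ** 2)

n = n_per_group(0.40, 0.30)
print(n, 2 * n)  # → 178 per arm, 356 participants in total
```

With a 40% versus 30% incidence, this gives roughly 178 per arm, or about 350 in total, in line with the figure above.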
What’s more, the primary outcome was simply the proportion of participants who experienced a respiratory infection. This meant any additional infections beyond the first went uncounted, further reducing the power of the study. I am not sure why they did not simply count the number of infections in each group.
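A quick simulation illustrates why counting all infections beats a yes/no outcome. Everything here is invented for illustration (the assumed Poisson infection rates, the 25% reduction, the simple two-rate z-test); it is not the trial's data or its analysis model:

```python
import math
import random

random.seed(1)

def rpois(lam):
    # Knuth's method for Poisson draws; fine for small lambda
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def p_two_sided(z):
    # two-sided p-value from a standard normal z statistic
    return math.erfc(abs(z) / math.sqrt(2))

def simulate(n=175, lam_control=0.5, reduction=0.25, sims=2000):
    """Power of a binary ('any infection') vs count outcome,
    for the same simulated trials."""
    lam_treat = lam_control * (1 - reduction)
    hits_bin = hits_cnt = 0
    for _ in range(sims):
        control = [rpois(lam_control) for _ in range(n)]
        treat = [rpois(lam_treat) for _ in range(n)]
        # binary outcome: pooled two-proportion z-test
        pc = sum(x > 0 for x in control) / n
        pt = sum(x > 0 for x in treat) / n
        pbar = (pc + pt) / 2
        se = math.sqrt(2 * pbar * (1 - pbar) / n)
        if se > 0 and p_two_sided((pc - pt) / se) < 0.05:
            hits_bin += 1
        # count outcome: simple two-rate z-test on total infections
        kc, kt = sum(control), sum(treat)
        if kc + kt > 0 and p_two_sided((kc - kt) / math.sqrt(kc + kt)) < 0.05:
            hits_cnt += 1
    return hits_bin / sims, hits_cnt / sims

power_binary, power_count = simulate()
print(f"power, binary outcome: {power_binary:.2f}")
print(f"power, count outcome:  {power_count:.2f}")
```

Under these assumptions the count outcome detects the same underlying effect noticeably more often, because it uses the information that the binary outcome throws away.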
Finally, there was no plan to adjust for baseline variables in the analysis, which is a simple way of increasing statistical precision without any trade-offs. This is not always done, but there is no reason not to do it, and you may regret the omission afterwards.
What did they find?
Unsurprisingly, the study was not able to detect a statistically significant difference. The proportion of participants who had a respiratory infection was 42% in the control group and 31% in the intervention group. This resulted in an odds ratio of 0.57 (95% CI 0.32–1.04). This narrowly misses the mark of being “statistically significant”, with a p value of 0.07 and the confidence interval going just over 1. Note that the pre-planned analysis of only those participants who completed the entire study showed an odds ratio of 0.53 (95% CI 0.28–1.00), which was statistically significant. Frustratingly, it may be that utilising any of the changes mentioned above (either counting infections or adjusting for baseline covariates) would have been enough to make the primary result “statistically significant”.
There are other analyses and sensitivity analyses reported which I won’t bother with because they tell the same story.
One of the most baffling parts of the manuscript is how they reported these results in the publication’s “key points” section:
“The findings suggest that air purifiers with HEPA-14 filters placed in residents’ rooms do not reduce the incidence of acute respiratory infections among RACF residents.”
Let’s be absolutely clear. This is categorically not what these study results show. If you perform a statistical test looking for a p value <0.05, and the p value is above 0.05, you can say that the study did not find a statistically significant difference. What you CANNOT say is that the study suggests there is no difference, when it is plausible that you only failed to reach statistical significance because your study was underpowered (i.e., it did not include enough participants). P values are extremely sensitive to the number of participants in the study due to the nature of how they are calculated, so this must always be taken into account.
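To see how sensitive p values are to sample size, consider a crude pooled two-proportion z-test on the same observed proportions at different sample sizes. (This is deliberately simplified: the trial itself used a cross-over analysis with more power than this independent-groups test, so the numbers below will not match the paper's p = 0.07.)

```python
import math

def two_prop_p(p1, p2, n):
    """Two-sided p-value from a pooled two-proportion z-test,
    with n participants per group and equal group sizes."""
    pbar = (p1 + p2) / 2
    se = math.sqrt(2 * pbar * (1 - pbar) / n)
    return math.erfc(abs(p1 - p2) / se / math.sqrt(2))

# Identical observed proportions (42% vs 31%), growing sample sizes:
for n in (67, 150, 300):
    print(n, round(two_prop_p(0.42, 0.31, n), 3))
```

The effect estimate never changes; only the number of participants does, yet the p value moves from clearly “non-significant” to clearly “significant”. That is exactly why “p > 0.05” cannot be read as “no effect”.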
The study does not provide conclusive proof of the effectiveness of air filters, but the results are absolutely consistent with them being highly effective; they just also happen to be consistent with them being ineffective. This confusing outcome is simply a matter of the results being highly uncertain.
What to make of it?
This study is clearly not the final word on the effectiveness of air filters for reducing respiratory infections. It does not provide concrete evidence of their effectiveness, but the results look promising, and there is certainly a need for larger, well-powered studies to demonstrate how effective they might be. If similar estimates of effectiveness were demonstrated in bigger studies, this would be good evidence that they are a cost-effective intervention worthy of quicker and larger investment than is currently planned. Whilst it is a shame the study was not able to answer the question on its own, I think it provides excellent justification for future research. The obvious caveat is that the degree of uncertainty is large; hence the need for that future research.
Importantly, trials in this setting, which has the highest probability of impact, will likely put a ceiling on how large an effect we can expect from air filters. Trials in other settings will need to be powered for smaller effect sizes, and so will require more participants.
Summary
Air filtration will never be a cure for respiratory infections, and powering a study for a halving of risk was overly ambitious and a wasted opportunity. That said, the results of this new study are promising and should nudge us a bit closer to believing that air filtration is a worthwhile and potentially very effective mechanism of reducing respiratory infections, especially for the elderly and vulnerable in residential care. Future randomised controlled trials are warranted, including in different settings (such as schools and workplaces) to determine their effect.
Improving air quality through ventilation and filtration is likely the way forwards in building standards for many reasons, not just infection prevention. My personal feeling is air filtration will eventually be determined to be a big step forward in infection prevention in some settings in the community, but we’ll see what future research brings.