I really want to share this post, but am I now intellectually obliged to read the medRxiv paper in question to check it is as you describe?
Ok, now I have skimmed it, and the flaws are as you describe (of course, but I felt I should check!). In defence of the spreaders, the flaws aren't obvious from the abstract. 9,871 tweets of the paper!
The episode convinced me that the arXivs need some kind of integrated peer-commentary function.
From the abstract: “The probability of getting COVID-19 for mask wearers was 7% (97/1463, p=0.002)” - this p-value means absolutely nothing at all! That alone should raise big red flags.
All p-values mean little without a specification of the model used to derive them (and thus the null hypothesis against which they are defined). Many would read this abstract and - wrongly - assume that the model specification is given in the paper, giving the authors the benefit of the doubt.
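A minimal sketch of that point: the same count (97 infections out of 1463 mask wearers) yields wildly different p-values depending on which null proportion you test against. The null values 0.05, 0.07, and 0.10 below are made-up illustrations, not anything taken from the paper, and the normal approximation is just the simplest choice of test.

```python
import math

def two_sided_p(k, n, p0):
    """Normal-approximation two-sided test of the observed proportion k/n
    against a hypothesized null proportion p0."""
    phat = k / n
    se = math.sqrt(p0 * (1 - p0) / n)          # standard error under the null
    z = (phat - p0) / se
    # Phi(z) via the error function; p-value is the two-tailed area
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 97/1463 ~ 6.6%: "significant" or not depends entirely on the unstated null
for p0 in (0.05, 0.07, 0.10):
    print(f"null p0={p0}: p-value = {two_sided_p(97, 1463, p0):.4g}")
```

The printout makes the commenter's complaint concrete: against one null the result looks highly significant, against another it is nowhere near, so a bare "p=0.002" in an abstract carries no information by itself.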
At least within the sphere of quantum computing, there's a website called SciRate that performs this function to some degree. It operates with any paper on arXiv, but of course, you need community adoption for it to be useful. :) People can upvote ('scite') papers they like, and the list of upvoters is public. People can also leave comments, e.g.: https://scirate.com/arxiv/2207.02244#1810
In general, if a paper has > 20 scites and is in my subfield, I should probably at least read the abstract. Of course, scites aren't an indicator of correctness, but if a paper got many scites *and* had glaringly bad errors already in the abstract, I expect people would point out as much in the comments.
I encourage you to try out SciRate in your group, if possible! Some groups use it by 'sciting' papers they want to talk about in group meeting, as a form of bookmarks.
Thanks Alex. Hadn't heard of scirate - interesting!
Better https://pubpeer.com/
why better?
More fields, larger community, more activity...
This is unacceptable behavior by scientists! May I also add a third reason to your report? Even though they knew it was filled with errors, they went ahead and accepted it to support their greedy and controlling narrative. And they banked on most folks' short attention spans and lack of knowledge.
Thank you for sharing this! And for researching it so thoroughly and explaining it so well!
"The paper gives zero- yes literally zero - details of how the papers were selected from the literature search, yielding a total of only 13 from over 1700."
That should not even get published.
I think we should all bookmark the study so that it’s handy when we need it!
That's interesting. The CDC went to hospitals and discussed masking with COVID victims back in 2020. 85% of COVID victims reported 'always' or 'almost always' wearing a mask. Then I looked at all the studies, and there were no statistical differences for flu-like RVIs (published by the CDC, WHO, JAMA, NEJM, Lancet, etc.), nor for COVID, etc.
And masks didn't work. And there has been a LOT of literature, starting in the 1990s, attesting to the futility of masking for RVIs. We're talking HUNDREDS of studies.
And then we have this magic study....
Are there any valid studies of whether N-95 masking is effective? It would be good to know one way or the other after all this time.
It was easy, censorship took out the truth, propaganda remained, people dug in and brainwashed themselves.
Could we get links to the tweets and at least some of the news articles?
I don’t want to amplify them further. If you follow the link to the pre-print and click on “metrics”, you can see all the tweets and newspaper articles where the article has been linked.
Your call, of course. But my own view, including as someone about to teach in this area, is that your piece would be much more useful as a teaching tool if it included links to the specific examples you cite.
Just reading "243 subjects were infected with COVID-19, of whom 97 had been wearing masks and 146 had not" from multiple studies made me realize this was junk. Performing a study in a single large school setting would yield larger numbers than that; in fact, there were 850 cases in a study of 140,000 class meetings at Boston University, where masking and vaccination virtually eliminated classroom transmission (https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2794964).
Perhaps you mean testing everyone for Covid all the time eliminated classroom transmission.
Could you please give the names of famous people who shared the paper?
Topol, Lauterbach,...