If anything, the test used was skewed towards more false negatives than false positives. Yet the authors assumed a normal distribution in their mathematical treatment, and they also gave error bars on the results. So the argument really amounts to claiming that either the range was not calculated correctly or the math used was wrong. I would like to see the tweeter's (an economist's) math.
We shall see how well the study holds up as tests improve in accuracy.
This chart, from Evaluate Vantage, shows the difficulty of obtaining a high PPV† when the tested population has a low true-positive rate—arbitrarily set to 5% in this chart. Under such circumstances, a test with 95% specificity may have little or no practical utility, since it produces as many false-positive results as true-positive results—and even with 99% specificity, roughly 16% of positive results are false.
*Negative Predictive Value: Probability that a subject with a negative test result is truly negative.
†Positive Predictive Value: Probability that a subject with a positive test result is truly positive.
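To see where those numbers come from, here is a minimal sketch (not from the Evaluate Vantage chart itself) of the arithmetic. The 5% prevalence and the 95%/99% specificity values come from the text above; the 95% sensitivity and the function name `predictive_values` are illustrative assumptions.

```python
def predictive_values(prevalence, sensitivity, specificity):
    """Return (PPV, NPV) for a test applied to a population with the given prevalence."""
    true_pos = prevalence * sensitivity                # truly positive, tests positive
    false_pos = (1 - prevalence) * (1 - specificity)   # truly negative, tests positive
    false_neg = prevalence * (1 - sensitivity)         # truly positive, tests negative
    true_neg = (1 - prevalence) * specificity          # truly negative, tests negative
    ppv = true_pos / (true_pos + false_pos)            # P(truly positive | positive result)
    npv = true_neg / (true_neg + false_neg)            # P(truly negative | negative result)
    return ppv, npv

if __name__ == "__main__":
    prevalence = 0.05  # 5% true-positive rate, as in the chart
    for specificity in (0.95, 0.99):
        ppv, npv = predictive_values(prevalence, sensitivity=0.95, specificity=specificity)
        print(f"specificity={specificity:.0%}: PPV={ppv:.1%} "
              f"(false share of positives: {1 - ppv:.1%}), NPV={npv:.1%}")
```

With these inputs, 95% specificity gives a PPV of about 50% (as many false positives as true positives), and 99% specificity gives a PPV of roughly 84%, i.e. about 16% of positive results are false.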
(Note: The previous version of this post had incorrect definitions of NPV and PPV. h/t ‘valencay’ for noticing the error.)