Why, then, have these cheap and rapid tests not become the foundation of our national testing strategy? The answer lies with test sensitivity.
The dilemma is between an imperfect test that is cheap enough to deploy at scale and a diagnostic technology that is more accurate but also more expensive and slower. The trade-off is hard to accept because the ideas of statistics are hard to grok. It is also related to why agile software development doesn’t make sense in a world where perfection has traditionally been rewarded.
If everyone took an antigen test today, even one identifying only 50 percent of positives, we would still catch 50 percent of all current infections in the country: five times the roughly 10 percent of cases we are likely identifying now, because we test so few people. Accuracy could be raised further through repeated testing, and quicker results would flag high viral loads during the most infectious period, so the cases we care most about, those at peak infectiousness, are the least likely to be missed. Even better, we would identify these cases while they are still infectious, rather than 10 days later, when the virus may already have been transmitted repeatedly. Mina and colleagues have shown through modelling that this logic holds up: speed matters much more than test sensitivity in controlling a pandemic.
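The arithmetic behind repeated testing can be sketched briefly. Under the simplifying assumption that each test is an independent trial with the same per-test sensitivity (a deliberate idealization, not a claim from Mina's models), the chance of catching an infection at least once rises quickly with the number of tests:

```python
# Illustrative sketch only: assumes each test result is independent with a
# fixed per-test sensitivity, which real serial testing only approximates.

def effective_sensitivity(per_test_sensitivity: float, n_tests: int) -> float:
    """Probability an infection is detected at least once across n_tests."""
    miss_all = (1.0 - per_test_sensitivity) ** n_tests
    return 1.0 - miss_all

# A 50%-sensitive antigen test taken on three consecutive days:
print(effective_sensitivity(0.5, 1))  # 0.5   (half of infections caught)
print(effective_sensitivity(0.5, 3))  # 0.875 (most infections caught)
```

Even with this crude independence assumption, three rounds of a 50 percent sensitive test catch far more infections than a single round of a near-perfect test that returns results too late to matter.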