How U.S. COVID Testing Strategy Will Extend Pandemic Nightmare
The idea is to allow people to test themselves before attending social events, school, going to work, etc., so they can know, almost in real time, whether they may be infectious.
On its face, this seems like a reasonable approach. Why wouldn’t having more information about possible infections be a good thing?
Here’s why that’s actually a really bad idea. Mass testing of people who are overwhelmingly asymptomatic (showing no symptoms) will in fact inevitably extend this pandemic nightmare for additional months — maybe even years — as “cases” continue to mount from false positives (a test result that incorrectly identifies infection when none exists).
To be clear: Biden’s mass testing approach is exactly the opposite of what is needed right now. We should not be testing asymptomatic people.
The reason for this becomes clear only by looking beyond the headlines that claim skyrocketing cases and deaths from the infection.
The accuracy of the case, hospitalization and death numbers is a function of the accuracy of the screening tests we are implementing. Inaccurate tests will naturally lead to inaccurate data.
However, the distortion of these numbers is more than just a matter of the accuracy of our screening tests, as will be explained below.
Though the public generally understands every test will have some amount of inherent error, we are told that the widely used COVID-19 tests are very accurate and thus we can trust the reports of “NEW CASES” shouted daily from most mainstream media platforms.
The reality is that even when a reasonably accurate test is used on a population that has a low background prevalence of active disease, the majority of positive test results will, in fact, be false.
Why is this the case? We must first examine what is meant by a test’s accuracy.
Sensitivity versus specificity — what’s the difference?
A test’s accuracy is defined by two things: its ability to diagnose a condition when it exists and its ability to rule out a condition when it doesn’t.
A given diagnostic test does not necessarily have an equal ability to rule in and rule out the condition it is designed to identify. For this reason, the accuracy of a test is defined by its sensitivity and specificity.
Sensitivity and specificity have precise definitions. A test’s sensitivity is the proportion of people who have a disease that the test will correctly identify with a positive result. In other words, if a test has a 90% sensitivity it will return a positive result nine times out of 10 when testing people with the disease.
Specificity is the proportion of people who do not have the disease that the test will correctly identify with a negative result. A test with 90% specificity will return a negative result nine times out of 10 when testing people who don’t have the disease.
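These two definitions can be sketched with a small calculation. The counts below are invented purely for illustration (they do not describe any real COVID-19 test); the point is only how the two proportions are computed:

```python
# Sensitivity and specificity from a hypothetical confusion matrix.
# All counts here are made up for illustration.
true_positives = 90    # sick people the test correctly flags
false_negatives = 10   # sick people the test misses
true_negatives = 900   # healthy people the test correctly clears
false_positives = 100  # healthy people the test wrongly flags

# Sensitivity: of all people WITH the disease, what fraction test positive?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of all people WITHOUT the disease, what fraction test negative?
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # Sensitivity: 90%
print(f"Specificity: {specificity:.0%}")  # Specificity: 90%
```

Note that the two numbers are computed from entirely separate groups of people — the sick and the healthy — which is why a test can score well on one and poorly on the other.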
Let’s demonstrate this further using an extreme example. Let’s say our test for diagnosing COVID-19 doesn’t involve PCR or antibody titers or antigen testing.
Instead, the test simply involves confirming that a person is alive.
If a person is alive, then in this hypothetical test, they must have COVID-19. If they are dead, they do not have COVID-19. Our hypothetical test’s sensitivity would be 100% because every person who has COVID-19 will test positive; no COVID-19 case will escape detection.
Obviously, this hypothetical test does not offer any meaningful information because every living person tested will test positive for the disease. Assuming we would test only living people, our test will never return a negative result.
In other words, this test will not identify anyone who doesn’t have the disease.
Another way of stating this is by saying that the specificity of our test is 0% because none of those who do not have COVID-19 will ever be identified.
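The "alive test" above can be run through the same arithmetic. This is a toy sketch with made-up people, but it shows how a test can score 100% sensitivity and 0% specificity at the same time:

```python
# The hypothetical "alive test": every living person tests positive.
def alive_test(person):
    return person["alive"]  # positive if and only if the person is alive

# Everyone we test is alive; only some actually have COVID-19.
people = [{"alive": True, "has_covid": c}
          for c in (True, True, False, False, False)]

tp = sum(1 for p in people if alive_test(p) and p["has_covid"])
fn = sum(1 for p in people if not alive_test(p) and p["has_covid"])
tn = sum(1 for p in people if not alive_test(p) and not p["has_covid"])
fp = sum(1 for p in people if alive_test(p) and not p["has_covid"])

print(tp / (tp + fn))  # 1.0 -- sensitivity 100%: no case is ever missed
print(tn / (tn + fp))  # 0.0 -- specificity 0%: no healthy person is ever cleared
```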
The metric we really need to look at: positive predictive value (PPV)
The sensitivity and specificity of a given test do not change with the prevalence of the disease in the population being tested.
However, the proportion of false positives (people who do not have the disease but test positive) rises as the prevalence of the disease falls.
Though it may seem initially mystifying, this is an inescapable reality with any diagnostic test that is not 100% accurate. This is demonstrated below.
The positive predictive value (PPV) of a test is defined as the ratio of the number of people who test positive and truly have the disease (true positives) to the total number of people who test positive.
Hence, the PPV of a test varies with the true prevalence of the disease in the population being tested.
It is the PPV of a test that indicates the probability that a person who tests positive for a disease actually has the disease.
When one asks, “I tested positive for COVID. What are the chances I actually have the disease?” the PPV of the test is the answer they are looking for.
What happens when a reasonably accurate test is deployed upon a population that has a low incidence of disease?
The U.S. Food and Drug Administration describes it here. Using a test that has an impressive 98% specificity on a population where 1 in 100 actually have the disease (a disease prevalence of 1%) will result in a PPV of 30%.
In other words, 70% of those who test positive will not have the disease. Seven out of 10 will be false positives.
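The arithmetic behind the FDA example can be sketched with Bayes’ rule. The FDA page does not fix a single sensitivity value, so the 90% used here is an assumption for illustration; with it, the result lands close to the ~30% PPV quoted above:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: P(disease | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence             # fraction sick AND flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # fraction healthy but flagged
    return true_pos / (true_pos + false_pos)

# FDA-style example: 98% specificity, 1% prevalence.
# The 90% sensitivity is an assumed value, not taken from the FDA page.
print(f"PPV: {ppv(0.90, 0.98, 0.01):.0%}")  # PPV: 31% -- roughly 7 in 10 positives are false
```

Lowering the prevalence while holding the test’s accuracy fixed drives the PPV down further — try `ppv(0.90, 0.98, 0.001)` — which is the whole point of the argument above.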
Now let us examine what will predictably happen when we deploy 500 million rapid tests on a population of people who are asymptomatic (which will be the case in the vast majority of circumstances in schools, social events, universities, workplaces, when traveling, etc.).
We must first estimate what the true prevalence of active COVID-19 is in the population. There are several ways to do this.
In their vaccine trial study published in the New England Journal of Medicine, Johnson & Johnson investigators found that of 43,783 participants, 238 tested positive by RT-PCR for SARS-CoV-2 infection (Supplemental Index, table S3) upon screening.
This constitutes a background active infection prevalence of 0.5%.
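That prevalence figure is a single division over the trial’s screening counts:

```python
# Background prevalence of active infection at screening in the J&J trial.
positives = 238        # participants positive by RT-PCR at screening
participants = 43_783  # total participants screened

print(f"{positives / participants:.2%}")  # 0.54%, i.e. roughly 0.5%
```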
The authors cite the “… extraordinarily high incidence of SARS-CoV-2 infection” at the time of the trial as one of its strengths.
The trial was conducted during the spring and summer months of 2020. A higher prevalence of background active infections occurred in subsequent months.
Although the sample population is small compared to the population of the U.S. as a whole, national data support this number as well.
At the height of the pandemic, the Centers for Disease Control and Prevention (CDC) reported the seven-day rolling average of daily new COVID-19 cases was 250,440.
If we approximate the average bout of active COVID-19 at 10 days (the original recommended length of quarantine following a positive test), there were approximately 2.5 million active COVID-19 cases in a population of 330 million on Jan. 11, 2021.
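This back-of-envelope estimate can be written out explicitly. The 10-day duration is the assumption stated above; the resulting prevalence is under 1%, consistent with the trial-based figure:

```python
# Back-of-envelope estimate of active-case prevalence from the figures above.
daily_new_cases = 250_440   # CDC seven-day rolling average at the peak
avg_infection_days = 10     # assumed average length of an active infection
us_population = 330_000_000

active_cases = daily_new_cases * avg_infection_days
prevalence = active_cases / us_population

print(f"Active cases: {active_cases:,}")  # Active cases: 2,504,400
print(f"Prevalence: {prevalence:.2%}")    # Prevalence: 0.76%
```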