The True Incidence of Covid in Population

by Chris

There are a LOT more cases out there than show up in the “confirmed” case counts.

It is undeniably true that there are MORE cases out there than show up in case counts.  HOW many more is the $1,200 question.  Thankfully, a family can live on that amount for up to ten weeks, according to Mnuchin.

We all want to know the true incidence.  We all want to know that people who test positive for the SARS2 antibody (Ab) are actually now safe for life from reinfection by SARS2.  That’s probably not the case, sadly, but we can hope.

It would be a mistake to use that handy table from the Salt Lake Tribune as definitive in any way.  Why?  Because of some pesky details that involve testing.

Here we have to dive into the slightly complex world of test specificity and sensitivity.

In medical diagnosis, test sensitivity is the ability of a test to correctly identify those with the disease (true positive rate).

Test specificity is the ability of the test to correctly identify those without the disease (true negative rate).

To translate, if a test has 90% sensitivity, it has a 90% chance of properly delivering a true positive result for an infected person and a 10% chance of delivering a false negative.

If a test has 90% specificity, it has a 90% chance of properly detecting a true negative but a 10% chance of improperly registering a false positive.
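To make those two definitions concrete, here is a minimal sketch using the hypothetical 90%/90% test from above on a made-up population of 1,000 infected and 1,000 healthy people (the head counts are purely illustrative):

```python
# Hypothetical 90%/90% test from the text, applied to a made-up population.
sensitivity = 0.90   # true positive rate
specificity = 0.90   # true negative rate

infected, healthy = 1000, 1000  # illustrative head counts

true_positives  = infected * sensitivity     # infections the test catches
false_negatives = infected - true_positives  # infections the test misses
true_negatives  = healthy * specificity      # healthy people correctly cleared
false_positives = healthy - true_negatives   # healthy people wrongly flagged

print(true_positives, false_negatives, true_negatives, false_positives)
# 900.0 100.0 900.0 100.0
```

Note that the 100 false positives and 100 false negatives land on different groups of people, which is exactly why the two error rates interact once the groups are different sizes.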

How do we make sense of this?

Let’s look at a real example.  The FDA has approved Cellex’s Ab test, which has a sensitivity of 93.8% and a specificity of 95.6%.

Sounds pretty good right?

Well, not so fast.  In a population with an incidence of 100%, those numbers would be pretty solid.  But something goes off the rails when the true incidence is low.

Here’s the math in visual form.  Because of the interplay between the false positives and the false negatives, which involves some Bayesian math, the sensitivity and the false positive rate do not add up to 100%.  (The false positive rate here means the share of positive results that are actually false, and it depends on the true incidence.)  So if that jumps out at you, just let it go.

Let’s begin with a true underlying incidence of 10%:

Wow.  And weird, right? If the true incidence in a population is 10%, this test, with its 93.8%/95.6% stats, is going to cough up positive results of which roughly 30% are false.  So if a survey measured 10% of people as positive, the true rate would be just 7%.
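The 30% and 7% figures can be reproduced directly with Bayes’ rule.  A quick sketch (the 93.8%/95.6% numbers are the Cellex stats cited above; the rest is arithmetic):

```python
sensitivity = 0.938  # Cellex true positive rate
specificity = 0.956  # Cellex true negative rate
prevalence  = 0.10   # assumed true incidence

# Probability that a randomly chosen person tests positive:
p_positive = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# Share of positive results that are actually false (Bayes' rule):
false_share = (1 - prevalence) * (1 - specificity) / p_positive
print(round(false_share * 100, 1))  # ~29.7, i.e. roughly 30% of positives are false

# If a survey reports 10% positive, the implied true rate is roughly:
print(round(0.10 * (1 - false_share) * 100, 1))  # ~7.0
```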

But it gets worse as we nudge down the incidence curve.  Here are the results for a 1% incidence rate:

In a population with a 1% true incidence, even the very good Cellex test is coughing up 82% false positive test results.  For every 100 positives, 82 of them actually weren’t.
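The same arithmetic in head-count form, on a hypothetical population of 100,000 people with a 1% true incidence (the population size is made up; the test stats are the Cellex numbers from above):

```python
population  = 100_000  # illustrative size
prevalence  = 0.01
sensitivity = 0.938    # Cellex stats from above
specificity = 0.956

infected   = population * prevalence  # 1,000 truly infected
uninfected = population - infected    # 99,000 not infected

true_pos  = infected * sensitivity          # ~938 real cases caught
false_pos = uninfected * (1 - specificity)  # ~4,356 healthy people flagged

false_share = false_pos / (true_pos + false_pos)
print(round(false_share * 100))  # 82 -- for every 100 positives, ~82 are wrong
```

The false positives swamp the true positives simply because the uninfected pool is 99 times larger than the infected one.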

Many of the test results in the Salt Lake Tribune table were from earlier Ab test kits that weren’t as ‘good’ as the Cellex test.  So, the summary is: who the hell knows?

I wish we did.

Here’s a very good article on the problem we face here:

There are two key criteria we look for when we’re evaluating the accuracy of an antibody test. One is sensitivity, the ability to detect what it’s supposed to detect (in this case antibodies). The other is specificity, the ability to detect the particular antibodies it is looking for. Scanwell’s chief medical officer, Jack Jeng, says clinical trials in China showed that the Innovita test achieved 87.3% sensitivity and 100% specificity (these results are unpublished). That means it will not target the wrong kind of antibodies and won’t deliver any false positives (people incorrectly deemed immune), but it will not be able to tag any antibodies in 12.7% of all the samples it analyzes—those samples would come up as false negatives (people incorrectly deemed not immune).

By comparison, Cellex, which is the first company to get a rapid covid-19 antibody test approved by the FDA, has a sensitivity of 93.8% and a specificity of 95.6%. Others are also trumpeting their own tests’ vital stats. Jacky Zhang, chairman and CEO of Beroni Group, says his company’s antibody test has a sensitivity of 88.57% and a specificity of 100%, for example. Allan Barbieri of Biomerica says his company’s test is over 90% sensitive. The Mayo Clinic is making available its own covid-19 serological test to look for IgG antibodies, which Elitza Theel, the clinic’s director of clinical microbiology, says has 95% specificity.

The specificity and sensitivity rates work a bit like opposing dials. Increased sensitivity can reduce specificity by a bit, because the test is better able to react with any antibodies in the sample, even ones you aren’t trying to look for. Increasing specificity can lower sensitivity, because the slightest differences in the molecular structure of the antibodies (which is normal) could prevent the test from finding those targets.

“It really depends on what your purpose is,” says Robert Garry, a virologist at Tulane University. Sensitivity and specificity rates of 95% or higher, he says, are considered a high benchmark, but those numbers are difficult to hit; 90% is considered clinically useful, and 80 to 85% is epidemiologically useful. Higher rates are difficult to achieve for home testing kits.

But the truth is, a test that is 95% accurate isn’t much use at all. Even the smallest errors can blow up over a large population. Let’s say coronavirus has infected 5% of the population. If you test a million people at random, you ought to find 50,000 positive results and 950,000 negative results. But if the test is 95% sensitive and specific, it will correctly identify only 47,500 positive results and 902,500 negative results. That leaves 50,000 people who have a false result. That’s 2,500 people who are actually positive—immune—but are not getting an immunity passport and must stay home. That’s bad enough. But even worse is that a whopping 47,500 people who are actually negative—not immune—could incorrectly test positive. Half of the 95,000 people who are told they are immune and free to go about their business might never have been infected yet.

Because we don’t know what the real infection rate is—1%, 3%, 5%, etc.—we don’t know how to truly predict what proportion of the immunity passports would be issued incorrectly. The lower the infection rate, the more devastating the effects of the antibody tests’ inaccuracies. The higher the infection rate, the more confident we can be that a positive result is real.

(Source – Technology Review)
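The million-person example in the quoted passage checks out numerically.  Here is that arithmetic as a short sketch:

```python
population  = 1_000_000
prevalence  = 0.05            # 5% infected, per the example
sensitivity = specificity = 0.95

infected   = population * prevalence  # 50,000 truly infected
uninfected = population - infected    # 950,000 not infected

true_pos  = infected * sensitivity    # ~47,500 correctly flagged
false_neg = infected - true_pos       # ~2,500 immune people told to stay home
true_neg  = uninfected * specificity  # ~902,500 correctly cleared
false_pos = uninfected - true_neg     # ~47,500 wrongly flagged as immune

total_pos = true_pos + false_pos      # ~95,000 would-be "immunity passports"
print(round(false_pos / total_pos, 3))  # 0.5 -- half the passports are wrong
```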