Consider a medical test with a 5% “false positive” error rate, for a disease suffered by 1% of the population. If you test positive, what are the chances that you are actually ill? Surprisingly, it’s less than 17%.

Hey, it’s another post about misleading statistics! Last time I wrote about the law of truly large numbers – today it’s the false positive paradox. I should note at the outset that this isn’t a paradox in the sense of something self-contradictory; instead it describes a result that is counterintuitive at first glance. And because it often comes up in medical tests, it has some pretty serious real-life consequences.
Few diagnostic tests are perfect. Sometimes they return an incorrect negative result (you have a disease but the test says you don’t), or an incorrect positive result (you don’t have a disease but the test says you do). Obviously they’re both bad, but the latter leads to a rather curious statistical misunderstanding.
Say a thousand people take a medical test. Just 1% of them have this disease – 10 people – and the test picks them all up. But the test also returns 5% false positives. Of the 990 who don’t have the disease, around 49 or 50 will get an incorrectly positive result. Putting those two groups together, you have 59 or 60 people with a positive result, but only about one in six of those actually has the disease. Just a little under 17% in all.
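If you want to check the arithmetic yourself, here’s a minimal sketch in Python, using the same assumptions as the example above (1,000 people, 1% prevalence, a 5% false positive rate, and a test that catches every genuine case):

```python
# A quick check of the numbers above. Assumptions: 1,000 people tested,
# 1% prevalence, a 5% false positive rate, no false negatives.
population = 1000
prevalence = 0.01           # 1% of people actually have the disease
false_positive_rate = 0.05  # 5% of healthy people wrongly test positive

true_positives = population * prevalence                                # 10 people
false_positives = (population - true_positives) * false_positive_rate  # 49.5 people

chance_actually_ill = true_positives / (true_positives + false_positives)
print(f"Chance a positive result means illness: {chance_actually_ill:.1%}")
# -> about 16.8%, i.e. just under 17%
```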
This is really an issue of how to communicate risk. If you’re told that a test for a disease is 95% accurate, but not how many people in the population actually have that disease, you can’t work out how likely it is that your positive result is correct. If the disease is really common, the chance that a positive result is a true positive is high. If the disease is rare (as in the example above), the false positives outweigh the true positives. Counterintuitive, but statistically accurate.
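To see how much the prevalence matters, here’s the same calculation written as Bayes’ theorem and run for a few illustrative prevalence values (still assuming the test never misses a genuine case). The chance that a positive result is a true positive climbs from roughly 2% when only 0.1% of people have the disease to over 95% when half of them do:

```python
# The same calculation via Bayes' theorem, for several prevalence values,
# still assuming a 5% false positive rate and no false negatives.
false_positive_rate = 0.05

for prevalence in (0.001, 0.01, 0.1, 0.5):
    p_ill_given_positive = prevalence / (
        prevalence + (1 - prevalence) * false_positive_rate
    )
    print(f"Prevalence {prevalence:6.1%}: a positive result is a true "
          f"positive {p_ill_given_positive:.0%} of the time")
```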
I have yet to see a home pregnancy test that advertises its false positive rate… but you’d think that’s important. Oh well, at least it’s better than using dead rabbits.
I should probably add that some people use the spectre of “false positives” to dismiss the results of, or statistics derived from, any medical test (*cough* Covid-19). That’s pretty crummy fear-mongering in my opinion – especially because the paradox described in this post becomes less pronounced the more prevalent the disease is in a population.
For more on the use (and misuse) of the false positive paradox, see here: https://www.huffingtonpost.co.uk/entry/false-positives-coronavirus_uk_5f686da4c5b6de79b677e909
Yes, I have seen evidence of this paradox in how people downplay Covid – claiming it’s not real, or not as bad as it seems, because of false positives. It’s yet another way of confusing people into believing the ubiquitous theories that we are being lied to.