
The false positive paradox

Consider a medical test for a disease suffered by 1% of the population, which has a 5% “false positive” error rate. If you test positive, what are the chances that you are actually ill? In fact, it’s less than 17%.

[Image: Pregnancy test. Wutthichai Charoenburi, CC BY 2.0, via Wikimedia Commons]

Hey, it’s another post about misleading statistics! Last time I wrote about the law of truly large numbers – today it’s the false positive paradox. I should note at the outset that this isn’t a paradox in the sense of something self-contradictory; instead it describes a result that is counterintuitive at first glance. And because it often comes up in medical tests, it has some pretty serious real-life consequences.

Few diagnostic tests are perfect. Sometimes they return an incorrect negative result (you have a disease but the test says you don’t), or an incorrect positive result (you don’t have a disease but the test says you do). Obviously they’re both bad, but the latter leads to a rather curious statistical misunderstanding.

Say a thousand people take a medical test. Just 1% of them have this disease – 10 people – and the test picks them all up. But the test also returns 5% false positives. Of the 990 who don’t have the disease, around 49 or 50 will get an incorrectly positive result. Putting those two groups together, you have 59 or 60 people with a positive result, but only about one in six of them actually has the disease. Just a little under 17% in all.
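The arithmetic above can be sketched in a few lines of Python (the numbers – 1% prevalence, 5% false positive rate, perfect detection of true cases – are the ones assumed in the example):

```python
# The worked example: 1,000 people, 1% prevalence, a test that
# catches every true case but has a 5% false positive rate.
population = 1000
prevalence = 0.01
false_positive_rate = 0.05

true_positives = population * prevalence                                 # 10 people
false_positives = population * (1 - prevalence) * false_positive_rate    # 49.5 people
total_positives = true_positives + false_positives                       # 59.5 people

# Probability that you are actually ill, given a positive result
p_ill_given_positive = true_positives / total_positives
print(f"{p_ill_given_positive:.1%}")  # a little under 17%
```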

This is really an issue for how to communicate risk. If you’re told that a test for a disease is 95% accurate, but not how many people in the population actually have that disease, you can’t work out how accurate your result is. If the disease is really common, the chance that a positive result is actually positive is high. If the disease is rare (as in the example above), the false positive rate outweighs the actual positives. Counterintuitive, but statistically accurate.
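To see how much the answer depends on prevalence, here is a small sketch that applies Bayes’ theorem across a range of prevalences. The function name and the default parameters (5% false positive rate, 100% sensitivity) are my own assumptions, matching the example above:

```python
def p_ill_given_positive(prevalence, false_positive_rate=0.05, sensitivity=1.0):
    """Bayes' theorem: P(ill | positive result)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# The rarer the disease, the less a positive result tells you
for prev in (0.001, 0.01, 0.1, 0.5):
    print(f"prevalence {prev:>5.1%}: P(ill | positive) = {p_ill_given_positive(prev):.1%}")
```

With a 0.1% prevalence a positive result means a roughly 2% chance of illness; at 50% prevalence it means over 95% – which is why the paradox fades for common diseases.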

I have yet to see an at-home pregnancy test that advertises its false positive rate… but you would think that’s important. Oh well, at least it’s better than using dead rabbits.



Categories: Health & medicine Mathematics & statistics Sciences

The Generalist

I live in Auckland, New Zealand, and am curious about most things.

3 replies

  1. I should probably add that some people use the spectre of “false positives” to ignore the results of or statistics for any medical test (*cough* Covid-19). That’s pretty crummy fear-mongering in my opinion – especially because the paradox described in this post becomes less evident the more prevalent the disease is in a population.


  2. Yes, I’ve seen evidence of this paradox in how people downplay Covid – it’s not real, or it’s not as bad as it seems, because of false positives. Yet another way of confusing people into believing the ubiquitous theories that we are being lied to.

