THE CMG VOICE

Probability, False-Positives, and What Your Lab Test Result Really Means

Nearly everyone with experience as a patient in the health care system in the United States is familiar with the concept that doctors (and ARNPs and PAs) will order tests to help them understand what is going on with you and how you can be treated. These often include both lab tests (testing your urine, or your blood) and imaging studies (x-rays, CTs, MRIs).

On its face, this sounds like a good thing – a health care provider gathering data to support his or her decision-making. However, a recent article argues that those health care providers often misinterpret the results of the tests they order, sometimes leading to patient harm.

The problem involves probability, and the notion of false positives. A false positive test result is one in which the test says you have something you don’t. Given the current state of accuracy in our medical tests, false positives may be more common than you think.

However, it appears health care providers do not account for false positive results as much as they should. So, when a patient tests positive for a particular disease, they are often treated for it even though they don't have it. Sometimes this causes little or no harm, but sometimes the consequences are serious.

Imagine a false-positive test result is acted upon, and surgery is performed, or chemotherapy is initiated. Or the false-positive result leads to more tests, including invasive ones, and injury occurs.

Although the author of the article notes there is no simple solution to this problem, one suggestion is to include on the lab results the test's false-positive rate, along with the probability that a particular patient has the disease being tested for.

So, for example, if you are testing for Disease X, and there is generally a 1 in 1,000 chance of a person having it, that number could be included in the lab test results next to the test's false-positive rate (say that is 5%, or 50 out of 1,000). An algorithm could tailor the estimate even further for this particular patient by weighing factors, such as age or weight, that make the chances bigger or smaller.

Even further, simple math with those numbers can show the true chance of having Disease X:

If 1 out of 1,000 people has it, then about 50 of the remaining 999 don't have it but test positive anyway. So of the 1,000 people, 51 test positive, but only 1 actually has the disease. That means the chance the patient really has Disease X when the test comes back positive is only about 1 in 51, or roughly 2%.
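For readers who like to see the arithmetic laid out, here is a minimal sketch of that calculation. It assumes, as the numbers above imply, a prevalence of 1 in 1,000, a 5% false-positive rate, and a test that always detects the disease when it is present (100% sensitivity); the function name and parameters are illustrative, not from the article.

```python
def positive_predictive_value(prevalence, false_positive_rate, sensitivity=1.0):
    """Chance a patient actually has the disease, given a positive test."""
    # Out of the whole population, the fraction who have the disease
    # and correctly test positive:
    true_positives = prevalence * sensitivity
    # The fraction who do NOT have the disease but test positive anyway:
    false_positives = (1 - prevalence) * false_positive_rate
    # Of everyone who tests positive, how many truly have the disease?
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(prevalence=1 / 1000, false_positive_rate=0.05)
print(f"{ppv:.1%}")  # about 2%: roughly 1 true positive for every 50 false ones
```

Changing the prevalence or false-positive rate in the call shows how quickly the real-world meaning of a "positive" result shifts, which is exactly the context the article suggests printing on the lab report.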

Instead of getting the positive test result back and presuming the patient has Disease X, a health care provider can then appropriately weigh the false-positive rate. This would likely lead to fewer unnecessary tests and treatments, and less patient harm.

Read more here:

[What the tests don’t show: Doctors are surprisingly bad at reading lab results. It’s putting us all at risk.](https://www.washingtonpost.com/news/posteverything/wp/2018/10/05/feature/doctors-are-surprisingly-bad-at-reading-lab-results-its-putting-us-all-at-risk/?utm_term=.861f6ab576fb)