Most doctors miscalculate positive predictive value

By Will Boggs MD

NEW YORK (Reuters Health) - When asked to do a simple calculation of positive predictive value, about three out of four physicians and medical students in Boston got it wrong.

In 1978, Casscells and associates showed that most physicians, house officers, and students overestimated the positive predictive value (PPV) of a laboratory test result when provided the prevalence and false positive rate.

Dr. Sachin H. Jain from Harvard Medical School in Boston, Massachusetts, and colleagues asked 24 attending physicians, 26 house officers, 10 medical students, and one retired physician at a Boston-area hospital the same question posed by Casscells et al.: "If a test to detect a disease whose prevalence is 1/1000 has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person's symptoms and signs?"

Here's the correct calculation (assuming a perfectly sensitive test): prevalence divided by the sum of prevalence and false positive rate, with both expressed as proportions, or (0.001)/(0.001 + 0.05) = 0.001/0.051, which yields a PPV of 1.96%. The authors accepted "1.96%," "2%," or "less than 2%" as correct answers.
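For readers who want to check the arithmetic, the calculation is a direct application of Bayes' theorem; a minimal sketch in Python, assuming (as the authors do) a perfectly sensitive test:

```python
# PPV for the Casscells question, assuming a perfectly sensitive test.
prevalence = 0.001           # disease prevalence: 1 in 1000
false_positive_rate = 0.05   # 5% of disease-free people test positive
sensitivity = 1.0            # assumption stated in the article

# Among those who test positive, what fraction actually has the disease?
true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * false_positive_rate
ppv = true_positives / (true_positives + false_positives)

print(f"PPV = {ppv:.2%}")  # about 1.96%
```

Note that the denominator here uses (1 - prevalence) x false positive rate for the false positives; the article's simpler form, 0.001/0.051, drops the (1 - prevalence) factor, which changes the result only in the fourth decimal place. Either way the answer rounds to roughly 2%, far below the 66% median the respondents gave.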

Overall, 14 of 61 respondents (23%) gave one of the accepted answers, not much different from the result obtained in the original study (18%).

The answers from the respondents ranged from "0.005%" to "96%," with a median of 66% (33 times larger than the true answer), according to the April 21 research letter in JAMA Internal Medicine.

Explanations of wrong answers showed a lack of understanding: one attending cardiologist said PPV doesn't depend on prevalence, and one resident said PPVs are better when the prevalence is low.

"Our results show that the majority of respondents in this single-hospital study could not assess PPV in the described scenario," the researchers say. "Moreover, the most common error was a large overestimation of PPV, an error that could have considerable impact on the course of diagnosis and treatment."

They advocate revising premedical education standards to replace calculus, which is seldom used in clinical practice, with training in statistics, which is commonly used.

Dr. Yong-Geun Choi from Korea University in Seoul recently published a summary of five key statistical concepts for clinicians. He told Reuters Health by email, "I think the lack of statistical skills of clinicians is mostly attributed to the almost complete dependence on statisticians in education and training courses. The current statistics course is too oriented toward the computation of numbers using computer software, interpretation of numbers, and statistical inference from sample to general population."

"However, there are a few teachers who are able to teach linking skills of how to make relationships between medicine and math (statistics)," Dr. Choi said. "The teacher in the future should be a bilingual person who speaks both languages of statistics and clinical science by training and experience."

"The authors of the JAMA article obviously spotted the unimproved pattern of statistical skills and reasoning among clinicians by replicating and comparing with the (study from) 35 years ago," Dr. Choi said. "The report nonetheless also repeated the same error of generalization of the findings from around 60 testees to all clinicians with the use of a convenience sample which is a product of non-representative sampling."

"Thus," Dr. Choi said, "a representative random sample would have been much better to make valid statistical inferences in the study."

Dr. Jain declined to comment on the study.

SOURCE: http://bit.ly/1mIAnRu

JAMA Intern Med 2014.

(c) Copyright Thomson Reuters 2014. Click For Restrictions - http://about.reuters.com/fulllegal.asp