Sunday, November 30, 2008

One problem with drug screening


In a random act of sanity, the Bernards (New Jersey) Board of Education narrowly rejected random drug testing of its students last Monday.

Before I step into the whirling void of passion that passes for rational discourse, let me preface this post with a few things that should be (but are not) needless to say:

1) I do not condone the use of recreational drugs by minors.
2) Ethanol (alcohol, hooch, whatever) is a drug.
3) Nicotine (butts, bogeys, whatever) is a drug.
4) Caffeine (java, joe, whatever) is a drug.
5) I accept the use of recreational drugs (ethanol, nicotine, caffeine) by adults. I'm not saying it's smart.

I am a retired board-certified pediatrician--while that does not make my views sacrosanct, cut me a little slack. I've seen the damage drugs can do. I've also seen the damage thoughtless drug screening can do.

The American Academy of Pediatrics (AAP) opposes involuntary drug screening of adolescents. The AAP certainly does not support young adults hanging out behind the local Wawa sharing a spliff.

You can go all over the web reading the pros and cons of drug screening, and I hope you do. I want to focus on just one piece of the argument, but it's a big one, and one not well understood by many physicians, never mind the general public.

It involves mathematics, and it will turn common sense on its head. So break out your abacus, bear with me, and learn why even a good test can be a lousy one in certain situations.
***

To understand testing, you need to know a couple of terms.

Sensitivity in a drug screen is the percentage of students actually using drugs who come out positive on the test. If 99% of the kids smoking weed behind the Wawa are found positive by the test, the test is 99% sensitive.

Specificity in a drug screen is a little more complicated--it tells you what percent of the students not inhaling are correctly identified as not inhaling. If a test is 99.9% specific and you're not using the drugs tested, there is only a 1 in a thousand chance you will be wrongly identified as a drug user.
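(For the numerically inclined, here's a minimal sketch of both definitions in Python--the function names and the counts are mine, purely for illustration:)

    # Sensitivity: of the students actually using, what fraction test positive?
    def sensitivity(true_positives, false_negatives):
        return true_positives / (true_positives + false_negatives)

    # Specificity: of the students not using, what fraction test negative?
    def specificity(true_negatives, false_positives):
        return true_negatives / (true_negatives + false_positives)

    # Say a test catches 99 of 100 users and clears 999 of 1,000 non-users:
    print(sensitivity(99, 1))   # 0.99  -> 99% sensitive
    print(specificity(999, 1))  # 0.999 -> 99.9% specific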

So, you think, if a test is 99% sensitive and 99% specific, it's a pretty good test. And it is.
Then you say, well, hey, if you test positive, then I can be 99% sure you are using the drugs.
And you'd be 100% wrong.

Huh?

The accuracy of the test depends on what percent of the population is actually using drugs.

Let's suppose you just developed the Spiffy Spliff Test, a cheap, amazing screen that is 100% sensitive and 99% specific.

You need to get FDA approval, so you are looking for a benevolent group to donate their time and urine to your fine nonprofit company incorporated solely to save the yewt of America.

Let's say an order of monks lives on an atoll in the middle of the Pacific. They use drug-sniffing dogs to keep any marijuana from coming onto their island. Still, one of the monks has end-stage cancer, so he has a prescription for medical marijuana, which he uses, um, religiously.

This is a pretty popular place for monks. 10,000 monks live here, and one is a known marijuana user. Let's say for the sake of argument that not one of the other monks has used marijuana in the past decade.

Now let's test them. The test is 100% sensitive, so the one monk using reefer gets identified as such. So far, so good.

There are 9,999 more monks to be tested. If the test is 99% specific, then 1% of the remaining monks will be falsely identified as using mary jane.

1% of a big number is still a lot of monks--about 100 of the remaining 9,999 will falsely test positive.

So now we have 101 positive tests, and only 1 monk has truly used grass. Despite a test that's 100% sensitive and 99% specific, the vast majority (over 99%) of those that tested positive have never used ganja.

What if the test is 99.9% specific? Well, then about 10 monks will be falsely positive. For every true positive (the cancer-stricken monk), we have 10 monks on the verge of getting kicked out of the monastery for "wrong" results.
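(If you'd rather not trust my arithmetic, here's a quick Python sketch of the monk example--the numbers come straight from the story, the variable names are mine:)

    monks = 10_000
    users = 1          # the cancer-stricken monk
    sens = 1.00        # 100% sensitive: every true user tests positive

    for spec in (0.99, 0.999):
        true_pos = users * sens
        false_pos = (monks - users) * (1 - spec)
        ppv = true_pos / (true_pos + false_pos)  # chance a positive is real
        print(f"specificity {spec:.1%}: about {false_pos:.0f} false positives, "
              f"and a positive test is right only {ppv:.0%} of the time")

At 99% specificity a positive test is right about 1% of the time; at 99.9%, still only about 9%.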

I know this is counterintuitive. Still, in order for the test to be accurate, you need a fairly high proportion of the monks to be hanging out behind the Wawa.

What if 20% of the monks are potheads? Let's crunch the numbers again.

20% of 10,000 is 2,000, so right off the bat we have a couple thousand true positive tests. 8,000 monks are left. If the test is 99% specific, then 1% of these 8,000 monks, or 80, will test falsely positive. In this case, only 80 out of the 2,080 positive tests (or about 4%) will be false positives.

Same test, drastically different false positive rate.
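The quantity swinging around here has a name: the positive predictive value, the fraction of positive tests that are true. Here's a sketch of how it moves with the fraction of users (assuming, as above, a hypothetical 100% sensitive, 99% specific test):

    def ppv(prevalence, sens=1.00, spec=0.99):
        # P(actually using | tested positive), via Bayes' rule
        true_pos = prevalence * sens
        false_pos = (1 - prevalence) * (1 - spec)
        return true_pos / (true_pos + false_pos)

    for p in (0.0001, 0.01, 0.20, 0.50):
        print(f"if {p:.2%} of students use, a positive test is right {ppv(p):.0%} of the time")

That prints roughly 1%, 50%, 96%, and 99%: the same test, with wildly different meanings.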

Take home message? The predictive value of a drug screening test, even a really good one, depends on how many kids in the population are actually using drugs.

Until people can wrap their heads around the testing, urine belongs in a toilet, not a test tube.



Coffee and distillery photos from the Google Life collection; the no smoking sign is from the National Archives.

8 comments:

Blogger In Middle-earth said...

Kia ora Michael

Forget about your 99.9% specific red herring – you didn’t do a comparison with that specificity.

You are calculating the falseness of the test with reference to the total number found positive. This is where your logic goes all to pot.

If they were all on the pot there would be absolutely no false tests returned. Right?

The number of false tests will naturally decrease the greater the proportion of crack heads in the sample. It comes down to proportion and the false logic of always calculating your percentage on the number left who are straight.

It’s to do with your sampling for your calculation. When the proportion of crack heads increases, you really should adjust your sample size to be fair. This is because, as you say, the test is 100% sensitive.

So you should base your calculation on the same number of pot-frees every time. It’s got nothing to do with the number of crack heads, for we know they are always positive anyway.

Catchya later
from Middle-earth

doyle said...

Kia ora Ken

It's not false logic--the prevalence of the condition you are looking for (in this case, specific drug use) determines the likelihood that a randomly selected person in that population who tests positive is a true positive.

This is why physicians take histories before doing tests. This is why the CDC came down hard on docs who used HIV screening tests without getting histories first.

You are right that if all were using pot, there would be no false positives; there would also be no need to do the test.

(In schools where half or more of the kids are getting drunk on weekends, it's just as pointless to screen.)

As an aside, a 100% sensitive test is easy--here's one. I will draw your blood and put a drop on the table--if it's red, you're positive. Bingo, 100% sensitivity (though useless since there's no specificity).

The problem is that the percent of false positives gets unacceptably high (we both love puns it seems) if you screen a population with low drug use. The bigger problem is that most people administering the test do not realize this.

You're right that the absolute number of monks testing positive on the island would drop if the prevalence drops--but the false positive rate rises dramatically.

Larger sample sizes won't change the percentages.

Even with a specificity of 99.9%, a positive test means little if the prevalence of drug use is low.

I'm not worried about the users here (though I do worry about them for other reasons); I worry about those falsely accused. Here in the States many employers require a screening test. A false positive can travel a long way in the information world.

Charlie Roy said...

I am the principal of a school that has mandatory drug testing. I find the conversations about false positives to be very thought provoking. We use a hair test that gives a 90-day drug history. We've had the program now for ten years. On average less than 1% of our students test positive each year. In general, the students on annual surveys indicate that the fact that they will be tested at school gives them a reason to say "no".

What I'm curious about is whether this hair test method also produces false positives. I don't want to discipline a student who doesn't deserve it. Although "discipline" may be the wrong word, since our policies for a positive test call for mandatory family and individual counseling--a more reform-based model.

Take a look at their video http://www.drugtestwithhair.com/# . I'd appreciate some input from those more science-savvy than me.

Blogger In Middle-earth said...

Kia ora Michael

I don't doubt that the incidence of false positives is too high. That wasn't my argument at all.

In 10,000 clean people tested you will on average get 10 false positives, if the specificity is 99.9%, no matter HOW you choose the sample of 10,000. This is my point.

It should not be based on the number of actual pot heads in the sample at all. BUT. . .

You are right, 10 false positives is 10 too many, whichever way you look at it, for it does not justify the use of the test, especially if the incidence of actual pot heads is of the same order or less.

Frankly, I don't think ANY argument justifies the use of a testing system that returns that percentage of false positives from a clean sample.

Catchya later

doyle said...

Dear Souly,

I don't know what the specificity and the sensitivity of a particular hair screening test are (I will look at the video when I get a chance tonight), but the false positive rate depends on the prevalence of drug use in your population. The lower the prevalence, the higher the false positive rate.

I agree that if the presence of the test gives people an excuse to say no, it may have some value. I don't know of any studies that show this, but it makes sense. Ironically, one of the nice things about being a college student in Ann Arbor was that since marijuana was essentially legal (a $5 fine), there was no pressure to smoke a joint just to prove you were not a narc.

If you have a concern about the screening test, you can follow it up with tests with higher specificity at more cost. Screening tests are just that--meant to screen. I applaud your counseling program; I think any screening program requires both follow-up testing and a program designed to help fix the problem.

Kia ora Ken

I may be debating a different point. My fear is that those in charge of testing do not grasp what a positive test means in a population that does not generally use the drugs being screened.

You are right--the specificity does not change, but the positive predictive value (the chance that a person who tests positive is actually using drugs) does. In a population with a very low incidence of drug use, most of the positive tests can, in fact, be false positives.

If a test is 99% specific (and assume it's 100% sensitive), and 2% of the population is using the drugs being screened, we get the following results if we test 1000 students.

The 20 actually using the drugs will test positive. We have 980 left. Of those 980, about 10 will test positive. That means the positive tests will be wrong about a third of the time.

If 50% of the population is using the screened drugs, then 500 will test true positive, and about 5 of the remaining 500 will test false positive. The positive tests then are accurate 99% of the time.
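(A few lines of Python, if you want to check that arithmetic--the setup is the same 1,000 students, 100% sensitivity, 99% specificity:)

    students = 1000
    for prevalence in (0.02, 0.50):
        true_pos = students * prevalence                # 100% sensitive
        false_pos = students * (1 - prevalence) * 0.01  # 99% specific
        wrong = false_pos / (true_pos + false_pos)
        print(f"prevalence {prevalence:.0%}: about {false_pos:.0f} false positives, "
              f"{wrong:.0%} of positive tests are wrong")

That works out to about 33% of positives wrong at 2% prevalence, and about 1% wrong at 50%.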

I used to argue with fellow docs chasing their own tails all the time--they'd use a shotgun approach to diagnosis, order a million tests, then get all excited when one of the results came back positive even if the clinical history did not fit the result. They'd argue "95% sensitive, 95% specific, patient has a 95% chance of having the disease"--and another patient goes down the garden path of pointless fear and testing.

Now if we could come up with a cheap screening test that's 100% specific and 100% sensitive, no worries.

(Screening tests are, by their nature, meant to have high sensitivities at low cost of testing--positive screening tests should be followed up with more accurate, and usually more expensive, tests.)

doyle said...

Of course, Ken, I just screwed up my own argument by using an example that shows less than half are false positives, but I hope the larger point was made.

Cheers!

doyle said...

As unseemly as it is to post multiple messages on one's blog, I cannot find your email, Souly, so an update here.

The link gets me to the testing company, but not the video--there is no mention of sensitivity or specificity except in vague terms.

I've been googling around to see what I can find, but so far I've only managed to cough up a couple of articles, both of which require me to pay a fee--and I'm cheap.

I'll get back to you when I learn more.

Charlie Roy said...

@Doyle
Thanks for looking into it. I always knew the Ann Arbor police were pretty hard-nosed. The counseling follow-up is really the cornerstone of the program--ideally preventing minor problems from turning into dependence issues or worse.

Our diocese chose this company due to their 'highly scientific' approach--whatever that means. Our parents can always appeal the test for further follow-up, so I suppose that would eliminate the false positives, but I'd love to be sure.

email is charlieroy1977@gmail.com