
exwannabe

02/18/13 9:48 PM

#112488 RE: investingdog #112480

You explained stat sig well but something needs to be added. The p < 0.05 is a really arbitrarily set number; it is 5%, or roughly the second standard deviation. What that means is that if a drug is approved with p = 0.04999, the chance is still about 5% that the drug really doesn't work and the result of that particular Phase 3 was a statistical fluke. We could have set the standard for approval at the first standard deviation, or the third, or any number in between. So saying that p = 0.04999 is stat sig and p = 0.05001 is not, and sticking strictly to that as a condition for drug approval, is almost ridiculous.


I certainly agree that .05 is an arbitrary number.

But the P value does not give the probability that the drug doesn't work. It is the probability that a placebo (pure chance) could have produced results at least that good.

Thought experiment:

Grab a random stranger in the US and ask them to flip a coin 5 times. If it comes up heads all five times, would you really think the coin is probably two-headed, or would you think it is luck?
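The arithmetic behind the thought experiment is easy to check; a minimal sketch in Python, using the numbers from the example:

```python
# Chance that a fair coin comes up heads on all 5 flips
n_flips = 5
p_value = 0.5 ** n_flips
print(p_value)  # 0.03125 -- about 3%
```

Note that 3% already clears the usual p < 0.05 bar, yet on Earth almost everyone would still call five straight heads luck.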

Repeat the same experiment on Mars. Would you be more likely to suspect the coin had two heads?

The chance that the drug works or not is a Bayesian number, and the FDA stat boys are frequentists, so they will not even discuss it.
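For what it's worth, the Bayesian version of the coin question can be sketched in a few lines. The priors below are made-up numbers purely for illustration, not anything from the FDA:

```python
def posterior_two_headed(prior, n_heads=5):
    """Posterior probability the coin is two-headed after n_heads heads in a row."""
    likelihood_fair = 0.5 ** n_heads   # a fair coin shows all heads this often
    likelihood_rigged = 1.0            # a two-headed coin always shows heads
    return (prior * likelihood_rigged) / (
        prior * likelihood_rigged + (1 - prior) * likelihood_fair)

# Earth: say two-headed coins are 1 in 10,000 (hypothetical prior)
print(posterior_two_headed(1e-4))  # ~0.003 -- almost certainly just luck
# "Mars": suppose half the coins in circulation were rigged
print(posterior_two_headed(0.5))   # ~0.970 -- now the rigged coin is the best bet
```

Same five heads, same p-value, but the sensible conclusion flips with the prior — which is exactly the Earth-versus-Mars point.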


co3aii

02/20/13 5:03 PM

#112914 RE: investingdog #112480

Apologies for the late reply. Yes, it is best to look for at least 95% confidence when rejecting the null hypothesis. In other words, you want to be sure that there is only a 5% or less chance that the results are random and not related to the variable studied. The higher the confidence, the more you can be assured that you are not dealing with chance results.

This same type of logic is applied to the coefficients of regression equations and to samples when determining whether they are statistically valid and what range you are dealing with. By going out enough standard deviations you can be assured that you have captured the mean, but at a cost in precision: "the average height of the population is somewhere between 2 feet and 8 feet" is not very useful information, as opposed to 5'8" plus or minus 2 inches (hypothetically speaking). The latter is more meaningful. Go out too far and you have meaningless stats, though they be "certain".
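The trade-off between certainty and usefulness can be illustrated with the usual normal-approximation interval for a mean. The height numbers below are hypothetical:

```python
import math

def mean_interval(mean, sd, n, z=1.96):
    """Approximate confidence interval for a mean: mean +/- z * sd / sqrt(n)."""
    half_width = z * sd / math.sqrt(n)
    return mean - half_width, mean + half_width

# Hypothetical height data: sample mean 68 inches, sample SD 3 inches
print(mean_interval(68, 3, 9))         # small sample -> wide interval
print(mean_interval(68, 3, 900))       # large sample -> tight, useful interval
print(mean_interval(68, 3, 9, z=6.0))  # go out "too far" -> near-certain but useless
```

Bigger samples tighten the interval; cranking up z (more standard errors) widens it toward the "2 feet to 8 feet" kind of statement: certain, but not informative.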

I keep in mind that the sample sizes we are dealing with are too small, given the population they represent, to be taken as proof, and so may be suspect. But better that they are what they are than that they showed no difference to the good. It pays to keep in mind the old saying, "Statistics don't lie, but statisticians do."