Dew's "Program Survival Bias" is a variant of a common statistical fallacy (one I'm sure has a name, but I don't know it offhand). The most common example I've seen of it involves diagnostic testing. Background:
a) Disease X has a 100% mortality rate. Its symptoms can be confused with those of severe flu.
b) There is a diagnostic test for disease X. It has a 0.1% false negative rate and a 0.2% false positive rate.
c) There is a treatment for disease X. It prevents death from disease X 100% of the time, but it kills 10% of the patients who receive it.
Question: if a person with these symptoms takes the test and it comes back positive, what is the chance that the person really has the disease?
The naive answer is 99.8%, since the false positive rate is only 0.2%. This is the same mistake as assuming that a trial with a p-value of 0.02 has only a 2% chance of being no better than placebo.
The real answer is that you also need to know the prior: what fraction of the people being tested are true negatives versus true positives. Suppose only one person per 20,000 with the symptoms of disease X or severe flu actually has the disease. Then for every 1 true positive you get roughly 0.2% of 20,000, about 40 false positives, so the chance that a person who tests positive really has the disease is only about 1 in 41, roughly 2.5%. And you should never give the test under the conditions above: treating all ~41 positives kills about 4 healthy people (10% of them) to save roughly 1 patient, so you kill more people than you save.
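For concreteness, here is the same arithmetic as a short Python sketch. All numbers are the ones assumed in the example above (a 1-in-20,000 prior, 0.2% false positive rate, 0.1% false negative rate, 10% treatment mortality), not real clinical data:

```python
# A minimal sketch of the Bayes calculation above, using the example's numbers.

prior = 1 / 20_000            # P(disease) among symptomatic patients
fp_rate = 0.002               # P(test positive | no disease)
fn_rate = 0.001               # P(test negative | disease)
treatment_mortality = 0.10    # P(killed by the treatment)

# Bayes' theorem: P(disease | positive test)
p_pos = (1 - fn_rate) * prior + fp_rate * (1 - prior)
p_disease_given_pos = (1 - fn_rate) * prior / p_pos
print(f"P(disease | positive) = {p_disease_given_pos:.3f}")   # ~0.024

# Outcome of treating every positive among 20,000 symptomatic patients
n = 20_000
true_pos = (1 - fn_rate) * prior * n           # ~1 real case caught
false_pos = fp_rate * (1 - prior) * n          # ~40 healthy people flagged
healthy_killed = treatment_mortality * false_pos         # ~4 deaths caused
patients_saved = (1 - treatment_mortality) * true_pos    # ~0.9 lives saved
print(f"killed: {healthy_killed:.1f}, saved: {patients_saved:.1f}")
```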
Similarly, if 10 worthless drugs enter Phase II for every one that is actually worthwhile, then p = 0.04 does not imply a 4% chance that the drug is worthless; the real chance is much higher (I haven't specified the false negative rate, so I can't calculate an exact number).
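The same Bayes arithmetic applies to the trial example. A sketch follows; the 10:1 ratio of worthless to worthwhile drugs comes from the example above, but the 80% power figure and the use of p < 0.05 as the "success" threshold are my own assumptions, purely for illustration:

```python
# Illustrative only: the 80% power and the p < 0.05 threshold are assumed,
# not stated in the comment above.

prior_worthwhile = 1 / 11     # 1 worthwhile drug per 10 worthless Phase II entrants
alpha = 0.05                  # P(p < 0.05 | drug is worthless)
power = 0.80                  # assumed P(p < 0.05 | drug is worthwhile)

p_hit = power * prior_worthwhile + alpha * (1 - prior_worthwhile)
p_worthless_given_hit = alpha * (1 - prior_worthwhile) / p_hit
print(f"P(worthless | p < 0.05) = {p_worthless_given_hit:.2f}")   # ~0.38, not 4-5%
```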
Clark