
biomaven0

05/26/17 11:11 AM

#211533 RE: iwfal #211528

Thanks - appreciate the informed commentary.

I was thinking about these issues some, and wonder if you'd ever seen something along these lines:

Assume you have a 1:1 randomized trial of size 2n, with the members of the control group being C1...Cn and the intervention group I1...In.

You have some primary endpoint measurement you are considering, E. Now create a pre-specified similarity metric M that measures how similar any two subjects are. The baseline measure of E will be the dominant element of this metric, but you can also include other parameters such as age, sex, and the like.

Now pair members of the control group with members of the intervention group in such a way as to minimize the total of M over all pairs. You can potentially do this multiple times, since the optimal matching won't be unique. Assume one such pair is (C2, I3). Now it is simple to compare their respective changes in E - you are comparing like with like, so you don't have issues like regression to the mean or ceiling and floor effects.
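
To make the idea concrete, here is a rough sketch of how the pairing could be done (Python, using scipy's linear_sum_assignment to find the matching that minimizes total M). The weights inside M and the simulated data are placeholders I made up for illustration, not a specification of the actual metric:

# Sketch of the matched-pairs idea: pair controls with intervention
# subjects to minimize total dissimilarity M, then compare within-pair
# changes in the endpoint E. Weights and data are illustrative only.

import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 50

def simulate(effect):
    # Hypothetical per-subject data: baseline E, follow-up E, age, sex (0/1)
    baseline = rng.normal(50, 10, n)
    followup = baseline + rng.normal(effect, 5, n)
    age = rng.integers(40, 80, n)
    sex = rng.integers(0, 2, n)
    return baseline, followup, age, sex

cb, cf, ca, cs = simulate(effect=0.0)    # control C1...Cn
ib, if_, ia, is_ = simulate(effect=3.0)  # intervention I1...In

# Pre-specified similarity metric M: baseline E dominates; age and sex
# contribute with smaller (assumed) weights.
cost = (np.abs(cb[:, None] - ib[None, :])
        + 0.1 * np.abs(ca[:, None] - ia[None, :])
        + 1.0 * (cs[:, None] != is_[None, :]))

# Pair each control with one intervention subject so total M is minimized.
row, col = linear_sum_assignment(cost)

# Compare within-pair changes in E (signed-rank test on paired differences).
delta_control = cf[row] - cb[row]
delta_interv = if_[col] - ib[col]
stat, p = wilcoxon(delta_interv - delta_control)
print(f"median paired difference: {np.median(delta_interv - delta_control):.2f}, p = {p:.3g}")
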

So the only secret sauce here is the pre-specified similarity metric M, which means there are no p-hacking issues that I can think of. I guess it is basically similar to a case-control study, albeit in the context of a randomized trial.

Comments?

Peter

DewDiligence

05/29/17 12:42 PM

#211549 RE: iwfal #211528

Re: Biostatistics blog

…we already have a problem with way, way, way too many published papers with false or very exaggerated positives and moving more in the direction that blog generally suggests would increase that problem substantially.

With respect to seriously underpowered trials, the blogger makes a good point, IMO, about the subtle incentive to misreport a lack of efficacy (“no significant difference was found…”) instead of acknowledging that nothing was learned other than ruling out the likelihood of overwhelming efficacy.