
Re: biomaven0 post# 211533

Friday, May 26, 2017 7:49:53 PM

I was thinking about these issues some, and wonder if you'd ever seen something along these lines:

Assume you have a 1:1 randomized trial of size 2n, with the members of the control group being C1...Cn and the intervention group I1...In.

You have some primary endpoint measurement you are considering, E. Now create a pre-specified similarity metric M that measures how similar any two subjects are. The baseline measure of E will be the dominant element of this metric, but you can also include other covariates such as age and sex.
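For concreteness, here is a minimal sketch of what such a pre-specified M might look like. Everything concrete in it - the feature set, the weights, the field names - is my own illustrative assumption, not something from the above:

```python
def similarity_metric(a, b, w_baseline=0.8, w_age=0.15, w_sex=0.05):
    """Distance between subjects a and b (dicts); smaller = more similar.
    Baseline E dominates, with age and sex as minor contributors.
    Weights are illustrative; in practice they would be pre-specified
    (and the features standardized so the weights mean what you intend)."""
    return (w_baseline * abs(a["baseline_E"] - b["baseline_E"])
            + w_age * abs(a["age"] - b["age"])
            + w_sex * float(a["sex"] != b["sex"]))
```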

Now pair members of the control group with members of the intervention group in such a way as to minimize total M. Since there won't be a unique optimal matching, you can potentially do this multiple times, breaking ties randomly. Assume one such pair is (C2, I3). Now it is simple to compare their respective changes in E - you are comparing like with like, and so avoid issues like regression to the mean and ceiling and floor effects.
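Minimizing total M over all pairings is exactly a minimum-weight bipartite matching (the assignment problem), which the Hungarian algorithm solves. A sketch of the whole pipeline - matching plus the within-pair comparison of changes in E - using the hypothetical similarity_metric above and entirely made-up subject data:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

def make_subject():
    # Entirely synthetic subject record, just to make the sketch runnable.
    baseline = rng.normal(50, 10)
    return {"baseline_E": baseline,
            "final_E": baseline + rng.normal(0, 5),
            "age": float(rng.integers(40, 80)),
            "sex": rng.choice(["M", "F"])}

n = 30
controls = [make_subject() for _ in range(n)]
interventions = [make_subject() for _ in range(n)]

# cost[i, j] = M(C_i, I_j), using similarity_metric from the sketch above.
cost = np.array([[similarity_metric(c, t) for t in interventions]
                 for c in controls])

# Hungarian algorithm: the one-to-one pairing that minimizes total M.
ctrl_idx, int_idx = linear_sum_assignment(cost)

# Within each matched pair, compare the change in E from baseline.
ctrl_delta = np.array([controls[i]["final_E"] - controls[i]["baseline_E"]
                       for i in ctrl_idx])
int_delta = np.array([interventions[j]["final_E"] - interventions[j]["baseline_E"]
                      for j in int_idx])

# Paired test on the matched deltas.
stat, p = ttest_rel(int_delta, ctrl_delta)
print(f"paired t = {stat:.2f}, p = {p:.3f}")
```

One caveat: linear_sum_assignment returns a single optimal matching with deterministic tie-breaking, so to explore the multiple optimal matchings mentioned above you would shuffle the subject order (or perturb tied costs) and re-solve.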

The only secret sauce here is the pre-specified similarity metric M, so there are no p-hacking issues that I can think of. I guess it is basically similar to a case-control study, albeit in the context of a randomized trial.

I am far from an expert in the details of different estimation algorithms - but offhand I see nothing hugely wrong with it. The only obvious weakness is that there are going to be inherent assumptions in how you combine the M's into a global goodness-of-fit. My guess would be that they are less distorting than most regression algorithms (this being a kind of regression), but I'd have to play with it to figure that out.

FWIW, many regressions are OK in my opinion. Yeah, they exaggerate, but not too much. The real problem is when there are multi-parameter internal models inside the regression. Give me one of those and in 24 hours I can turn an utterly failed trial into a success.

Random aside - recently it has become very popular to use MMRM (mixed model for repeated measures) in analyzing longitudinal datasets. This is a good example of the fact that a lot of statisticians (particularly academic statisticians) are always trying to find ways to "increase power". But the FDA, rightly, pushes back. Almost all such power increasers do it by ignoring real-world issues that are, often, not obvious at first. See http://onbiostatistics.blogspot.com/2014/06/is-mmrm-good-enough-in-handling-missing.html . This debate provides a good window into the problems and the camps (essentially academics vs. real-worlders).
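For anyone unfamiliar with it: a true MMRM is a likelihood-based model of the repeated post-baseline measurements, usually fit in SAS or R with an unstructured within-subject covariance. As a rough Python stand-in only - statsmodels' random-intercept mixed model, which is simpler than a true MMRM, run on fully synthetic data - it looks something like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Entirely synthetic long-format trial data: one row per subject-visit.
rng = np.random.default_rng(1)
n, visits = 40, [4, 8, 12]
rows = []
for s in range(n):
    arm = "active" if s % 2 else "placebo"
    baseline = rng.normal(50, 10)
    for v in visits:
        rows.append({"subject": s, "visit": v, "arm": arm,
                     "baseline_E": baseline,
                     "change_E": rng.normal(-2 if arm == "active" else 0, 5)})
df = pd.DataFrame(rows)

# Simplified stand-in for an MMRM: fixed effects for visit, arm, and
# their interaction, adjusted for baseline, plus a random intercept per
# subject. A true MMRM instead models an unstructured within-subject
# covariance across visits (as in SAS PROC MIXED).
model = smf.mixedlm("change_E ~ C(visit) * arm + baseline_E",
                    data=df, groups="subject")
print(model.fit().summary())
```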
