
Re: Steady_T post# 444247

Wednesday, 12/27/2023 12:25:55 PM

Steady, that's just not how it works:

It doesn't matter where the Odds Ratio ranks in the SAP, if such a ranking even exists.

I know this from past conversations with my biostatistician friend. The first-listed statistical analysis for an endpoint in a SAP is given much more weight by the FDA (and, I presume, the EMA) than those lower in the hierarchy. The agency may pay attention to the lower-listed tests if plenty of other evidence in the trial points toward approval, but there is a very strong preference that the first-listed test be passed. She has also said there is no way that a difference-of-means test would not be the first-listed analysis for all three endpoints, given the types of endpoints in the P2a/3.
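One common way this "first-listed gets the weight" preference is formalized is fixed-sequence (hierarchical) testing: each analysis is run at the full alpha, but only if every analysis above it in the list has already succeeded. A minimal sketch of that procedure (my own illustration; the actual hierarchy in this SAP, if any, is not public):

```python
def fixed_sequence(p_values, alpha=0.05):
    """Fixed-sequence (hierarchical) testing.

    Tests are evaluated in their pre-specified order, each at the full
    alpha. Testing stops at the first failure: analyses below that point
    get no credit, no matter how small their p-values are.
    """
    passed = []
    for p in p_values:
        if p <= alpha:
            passed.append(p)
        else:
            break  # first failure closes the gate for everything below
    return passed

# The 0.20 test fails, so the 0.04 below it never counts:
print(fixed_sequence([0.01, 0.03, 0.20, 0.04]))  # -> [0.01, 0.03]
```

The point of the stopping rule is exactly what the post describes: the sponsor cannot rescue an endpoint with a lower-listed analysis after the one it committed to first has failed.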

(Starting here it's all me, without my friend's input.) If a trial sponsor pre-specifies several statistical analyses for the same endpoint, it creates a problem called multiplicity. Choosing among various ways to analyze the data ad hoc is one of several forms of what's called p-hacking. Multiplicity is controlled in part by putting the onus on the sponsor to first-list the analysis it wants the FDA to rely on. If the sponsor can choose among different analyses after the fact, it has a much better chance of finding a significant p-value somewhere, while the other analyses may actually be showing that the trial failed on that endpoint.
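The inflation is easy to put numbers on. Under a true null hypothesis, each analysis run at alpha = 0.05 has a 5% chance of a spurious "significant" result, so being allowed to pick the best of k analyses raises the chance of at least one spurious win to 1 - 0.95^k. A quick back-of-envelope calculation (my own illustration, which assumes the analyses are independent; real alternative analyses of the same data are correlated, which shrinks but does not eliminate the inflation):

```python
def familywise_error(k: int, alpha: float = 0.05) -> float:
    """Chance that at least one of k independent null tests comes
    out 'significant' when the sponsor may choose among them."""
    return 1 - (1 - alpha) ** k

for k in (1, 3, 5, 10):
    print(f"{k:2d} analyses -> {familywise_error(k):.1%} chance of a spurious 'win'")
# Prints:
#  1 analyses -> 5.0% chance of a spurious 'win'
#  3 analyses -> 14.3% chance of a spurious 'win'
#  5 analyses -> 22.6% chance of a spurious 'win'
# 10 analyses -> 40.1% chance of a spurious 'win'
```

With ten candidate analyses, a drug that does nothing still has about a 40% chance of producing at least one "significant" p-value, which is why regulators insist the analysis be nailed down up front.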

Here are a couple quotes to help give a sense of the problem:

[T]here are many different ways to implement multiple imputation, such as including different variables in the imputation model and imputing under different statistical models. Therefore, this approach to pre-specification is ineffective, as it still allows investigators to analyse the data in many different ways before deciding on a final approach. This issue of ‘incomplete’ pre-specification, where methods are pre-specified to some extent but the specification still allows for some degree of p-hacking, is common in clinical trials.

https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-020-01706-7

The second discrepancy, termed an ‘addition’, occurred when the original analysis plan gave the investigators flexibility to subjectively choose the final analysis method after seeing trial data. This could occur if the original analysis plan (i) contained insufficient information about the proposed analysis or (ii) allowed the investigators to subjectively choose between multiple different potential analyses.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7257229/

The football analogy doesn't work. I can't come up with a perfect one, but this is closer: suppose I claim to have a great stock-picking method, list ten stocks I say will go up over the next month, and then ask you to confirm my method by looking only at the winners.
