It's important not to mislead people. Avatar's problem was not that it was "small". Let's review and take it step by step:
Avatar's endpoints failed at week 7. They evidently passed at week 4 — we can infer this because the AUC scores were good, though graphs were never provided. The company used AUC (area under the curve) as the endpoint gauge to "smooth over" the volatility. This statistical technique is valid, but it comes with a compromise: it de-emphasizes the patient's drug benefit at the end of the trial. For a small mid-stage trial with a volatile endpoint, the technique is justifiable. But even though the company called the trial potentially pivotal, it ended up not pivotal, because the endpoint was only met using AUC, not week-7 change. For approval purposes, smoothing over with AUC is too much of a stats 'trick'. We have to show that patients have stat-sig RSBQ and CGI scores at the CONCLUSION of the trial (week 7), not at an arbitrary time point in the middle that gets us to a stat-sig AUC. If we drew a chart, it would show the drug's line separating from placebo at week 4 but trending right back to the placebo line by week 7. That's no good for drug approval.
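To make the AUC point concrete, here's a minimal sketch with invented numbers (not actual trial data) showing how a trapezoidal AUC of change-from-baseline can show a big drug-vs-placebo gap even when the end-of-trial difference is essentially zero:

```python
# Hypothetical illustration (all numbers invented, not trial data):
# AUC of change-from-baseline rewards mid-trial benefit, so a drug
# that separates at week 4 and fades by week 7 still scores well.

weeks          = [0, 2, 4, 7]             # assessment visits
drug_change    = [0.0, -4.0, -6.0, -1.0]  # improves by week 4, fades by week 7
placebo_change = [0.0, -1.0, -1.5, -0.5]  # modest placebo drift

def auc(xs, ys):
    """Trapezoidal area under the change-from-baseline curve."""
    return sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

auc_gap    = auc(weeks, drug_change) - auc(weeks, placebo_change)
week7_gap  = drug_change[-1] - placebo_change[-1]

print(auc_gap)    # -18.0  -> large AUC separation (more negative = more improvement)
print(week7_gap)  # -0.5   -> almost no benefit left at the trial's conclusion
```

That's the chart shape described above in miniature: the AUC gauge "passes" on the strength of week 4, while the week-7 snapshot shows the drug line back on top of placebo.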
Now let's look at Excellence. Excellence had over 90 patients, so AUC was not used; the company used just the end-of-trial values (week-12 change). Nothing is stat-sig at week 12. Apparently RSBQ is stat-sig at week 4, though. Sound familiar? This is just like Avatar. And the problem is not just RSBQ: CGI apparently didn't reach stat-sig anywhere. Nor did ADAMS.
So Avatar was not stellar, nor pivotal (self-evident). And Excellence was like Avatar on RSBQ, and worse on CGI and ADAMS. (ADAMS is particularly disappointing, given Avatar's p=.01. The company didn't even report Excellence's ADAMS scores.)
BTW, does everyone see the parallel between using AUC in Avatar to claim "met endpoint" and using the odds ratio in the AD 2b/3 — and the risks when you do that? Missling has to be watched on endpoints. He can mislead the inexperienced when he's "spinning" results. AUC and odds ratios have to be used appropriately, and understood.
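For anyone unfamiliar with why odds ratios deserve the same scrutiny as AUC, here's a small sketch with invented responder counts (not the actual AD 2b/3 data): when the outcome is common, an odds ratio reads much larger than the plain risk ratio computed from the very same numbers.

```python
# Hypothetical illustration (counts invented, not trial data): the same
# 2x2 responder table yields a modest risk ratio but a flashier odds
# ratio — a gap that widens as the response rate gets more common.

drug_resp, drug_n = 30, 50   # 60% responders on drug (hypothetical)
plac_resp, plac_n = 15, 50   # 30% responders on placebo (hypothetical)

# Risk ratio: ratio of response *rates*.
risk_ratio = (drug_resp / drug_n) / (plac_resp / plac_n)

# Odds ratio: ratio of response *odds* (responders / non-responders).
odds_ratio = ((drug_resp / (drug_n - drug_resp))
              / (plac_resp / (plac_n - plac_resp)))

print(risk_ratio)  # 2.0 -> the drug doubles the response rate
print(odds_ratio)  # 3.5 -> sounds bigger, but it's the same data
```

Both numbers are "true", which is exactly the point: the choice of gauge shapes the headline, so the gauge itself has to be appropriate for the claim being made.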