I suspect the placebo effect had little to do with the missed P value.
It looks to me like it is the high variability of the RSBQ measure, as shown by the very high SE values.
The variability happened to be high enough to swamp the response in the noise. RSBQ is an imprecise tool.
McFarland, commenting as a site manager for an AD trial, said there were people who had headaches, if I remember correctly, and that indicated they were on the drug.
Fine, you have now come up with a different explanation.
I get that the RSBQ is highly variable. The CGI is supposed to be done by doctors, who should be less influenced by expectations. Of course I get the Zoom argument. That applies to the placebo group as well as the treatment arm.
Unfortunately that argument applies equally to both the treatment arm and the placebo arm. Unless you can find a way to explain why the placebo arm would be more encouraged than the treatment arm, that is not the explanation.
That is the number for the 4 week RSBQ measure, not the end of the trial, i.e. total RSBQ number.
George. What was the P value of the Total RSBQ?
Let's take this a bit more carefully and discuss the assumptions in both the question and the answers.
My assumption is that the data from the first trial is indeed accurate and representative. You may argue that, and in fact I think that is what you are arguing. If I understand what you posted, you are saying that another trial might come up with different results simply because it is a different trial. That argument basically rests on the fact that a p < 0.05 result still leaves a 5% chance that random sampling produced the answer.
I'm arguing that another trial with a larger n can detect a smaller change in the primary variable with statistical significance than a smaller trial can. In fact, with a very large trial, a very small change in the primary variable can be detected with statistical significance. This is generally described as the power of the trial.
If there is in fact a real signal in a small trial that is swamped out by the random variations in the placebo response, a sufficiently larger trial will show that real signal with statistical significance.
If there is no there there, it doesn't matter how much larger the trial is, it will not show statistical significance.
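To put rough numbers on that power argument, here is a minimal sketch in Python, assuming a simple two-arm comparison of means at alpha = 0.05. The effect sizes and the ~45-per-arm figure are illustrative assumptions of mine, not numbers from the actual trial.

    # Sketch of trial power vs. sample size (illustrative numbers only).
    from statsmodels.stats.power import TTestIndPower

    power_calc = TTestIndPower()

    # Subjects per arm needed to detect a small effect (d = 0.3)
    # with 80% power at alpha = 0.05: about 175 per arm.
    n_needed = power_calc.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
    print(f"n per arm for d = 0.3: {n_needed:.0f}")

    # With only ~45 per arm, the smallest effect detectable at 80%
    # power is roughly twice as large (d ~ 0.6).
    d_min = power_calc.solve_power(nobs1=45, alpha=0.05, power=0.8)
    print(f"detectable d at n = 45: {d_min:.2f}")

Same point as above: a real but modest signal that a small trial cannot separate from placebo noise becomes detectable once n is large enough.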
I'm looking forward to the TLR and a deeper dive into the data.
We saw the RWE seizure videos. If the trial data supports that reduction in a large number of patients then this trial has a good chance of success. That might result in a conditional approval requiring a follow up P4. We won't know until the full results are available.
Do we know for a fact that the CGI interviews were done via Zoom?
What a schitty week. And next week will just be another. I never learn.
Actually, increasing the size of a trial can turn a failed endpoint into a met endpoint. The larger the trial, the smaller the difference between the placebo and drug groups required for statistical significance.
I believe you are confusing clinical efficacy with statistical significance in this case.
I'm inclined not to think that enthusiasm for 2-73 was a confounding factor. There is no reason to think that the placebo group was any more or less enthusiastic about taking 2-73 than the drug group. After all, the subjects don't know which treatment arm they are in. That is the point of a blinded trial. Each group is led to believe they COULD be taking the drug. So the expectations should be the same in each arm.
To me it seems more likely that there is a high variability in the various measures and if you look at the SE you can see that they are pretty large.
The random variability in the measures broke the wrong way in this trial, or perhaps it is what Anavex suspects: that the placebo arm had a different overall level of disease severity than the drug arm, and that skewed the results.
For example, the FDA discusses the ADAS-Cog measure of AD as not being valid unless taken over a three year period, because of the high variability in the measure. It takes a long time to capture the real decline in function.
People have good days and bad days, and behavioral measures capture that. If you take measures every week, or better, several times a week, then the good day/bad day issue tends to get washed out. But trials don't take measures that frequently. You can imagine the difficulty of getting a Rett girl in a wheelchair dressed and ready for a possibly long car trip to the doctor's office for a series of behavioral tests and observations every week. That is a big burden on the girls and the caretakers.
I agree with that. I hadn't looked at it from that perspective. Good thinking.
Kinda like evaluating your posts?
I suppose it depends on what your definition of long shot is. If by that you mean based on the current pediatric trial I'll agree. If you mean long term I'll disagree. I think worst case is the FDA requires an additional larger trial. My feeling is that trial would succeed.
It all depends on what the FDA says. On that we can agree.
George,
You quoted the document out of order. If you think that is what the PR said so be it.
That is not the way I read it; I re-read it and checked again after reading your post.
Actually Anavex won't pay fees for Rett. It will for AD.
They can use the existing trial design with minor tweaks.
The feedback from the FDA will take a few months as you say.
Recruitment will be the biggest time portion of the overall timeline I think.
What is with your "won't allow an NDA"?
Anavex hasn't filed one. The FDA can't disallow an NDA that hasn't been filed.
**** off.
The trial wasn't that long. 12 weeks with a follow up safety visit at 16 weeks. There was a significant delay in recruiting due to Covid and the Vax requirement in the UK. Those delays won't happen in a new trial.
The CROs are still in place and trained, so that is a big time saver. Anavex might forgo the genetic testing in a new trial to save what I assume would be a considerable amount of time.
If a new trial is required it won't take three years. It could conceivably be done in a year.
Sorry. I didn't understand what you were referring to.
As has been linked, the FDA does sometimes approve drugs with failed endpoints. It's a low probability event, but the analysis of the drugs that were approved showed that 2-73 for Rett matched several of the characteristics of those approvals.
So there is that.
It will all depend on what the FDA says when Anavex has meetings. I think the worst case is the FDA says run another trial with more subjects.
Rett is not dead. It is wounded.
As I read that PR the p value you quote was for the RSBQ data. There is no mention of the results of the CGI other than it failed.
The categories I was referring to were pass & fail.
We will know a lot more when that peer reviewed paper is published. At that point we can make estimates of the probability that the one Anavex MAA will succeed.
I'm not trying to be a downer, just pointing out that the 15% that failed thought their MAA was going to succeed when they filed it.
Anavex clearly has things going for it: real need, low bar, great safety.
The other co-primary endpoint, the Clinical Global Impression – Improvement scale (CGI-I), which represents a less granular assessment by the site investigators using a seven-point scoring (one=“very much improved” to seven=“very much worse”), was not met.
Unfortunately that 85% approval rate doesn't tell which category a single application falls into.
Sort of like knowing that 10% of a group of 100 people will get cancer. That doesn't tell you which individuals in that 100 person group are going to get cancer.
If there is anything to take away from that PR is that Missling suggests that the EMA got to look at more than just TLD and approved going forward. That means there were no glaring issues with the data.
You might also want to mention that the other endpoint was a full on fail according to the PR.
You always have the option to cut your losses and sell. Why torture yourself?
What? You got something against loki_the_bubba?🙄
That's the gist of it. Classy firm.
Ummm. Both trial arms led the participants to believe their daughters were getting the drug. The placebo response applies equally.
So, You're saying there's a chance?
Interesting article. 2-73 ticks a lot of the boxes on the list of those that were approved.
Of the identified 210 NDA approvals in the observed time period, Johnston and colleagues observed 21 (10.0%) that included null findings for ≥1 primary efficacy end point; each of the 21 drugs were approved for unique clinical indications. More than half (n = 11 [52.4%]) were first-in-class approvals; 10 (47.6%) previously received an Orphan Drug Designation; 13 (61.9%) received an expedited review pathway from the FDA. Only 3 (14.3%) required an advisory committee meeting prior to FDA approval.
Knowing something will fail and evaluating something as having a possibility of success are not the same thing.
What the EMA did was look at the data presented and say "this has a chance". So the EMA will take a full look at the data in an application.
As I understand it, in a trial with two co-primary endpoints, in most cases if one fails then the secondary endpoints are not formally tested.
So in the Excellence trial, since the CGI failed to meet its endpoint, the secondary endpoints are not meaningful.
The trial design didn't suck. It was the small number of subjects. A smaller number of subjects increases the potential variability of the results.
Had there been, say, several hundred subjects and an even split between placebo and dosed subjects, the probability of an unequal representation of mild and moderate Rett subjects in either the drug arm or the placebo arm would be reduced.
Having a larger number of subjects with a rare disease like Rett makes getting the required number of subjects into the trial more of a problem. It also costs more money and takes longer to get the trial run. So trial design is always a question of tradeoffs.
The best trial would be to enroll every Rett girl on the planet. Then there is absolutely no question about how well the drug works. Of course that is not possible or desirable.
So a smaller segment of the overall population of Rett girls is enrolled in the trial. Now we have to use statistics to see how well that sample represents the overall Rett population. If the sample is very large, it is less likely to be unrepresentative of the overall population.
Let's take the extreme case and use just 1 subject for the test. How do you know if that one person represents the overall population? It is unlikely that one person captures the wide range of Rett impact across every Rett girl. So we need more subjects in the trial to get a more representative sample of the overall Rett population. Here is where sample statistics come into play. The larger the sample, the smaller the odds that the people selected for the trial don't represent the overall population.
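A quick simulation makes the point. This is a sketch under my own assumptions (a made-up population of 10,000 severity scores, not real Rett data):

    # How far sample means stray from the true mean as n grows.
    import numpy as np

    rng = np.random.default_rng(0)
    population = rng.normal(loc=50, scale=15, size=10_000)  # pretend scores
    true_mean = population.mean()

    for n in (1, 10, 50, 500):
        sample_means = [rng.choice(population, size=n).mean()
                        for _ in range(1000)]
        worst_miss = max(abs(m - true_mean) for m in sample_means)
        print(f"n = {n:3d}: worst miss of the true mean = {worst_miss:.1f}")

With n = 1 a sample can miss the true mean by a huge margin; by n = 500 the misses shrink to a point or two.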
This is really what the p value is about. A p < 0.05 says that if the drug actually did nothing, fewer than 5 trials out of 100 drawn from the overall Rett population would show a difference this large purely from the luck of which subjects were sampled. In other words, it limits how often a non-representative sample alone could fake the result.
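You can check that reading of p < 0.05 directly. A sketch assuming two arms of 45 drawn from the same "no effect" population (my numbers, not the trial's):

    # When the drug truly does nothing, ~5% of trials still come up
    # "significant" purely from the luck of the sampling draw.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    hits = 0
    n_trials = 10_000
    for _ in range(n_trials):
        placebo = rng.normal(0, 1, size=45)  # same distribution for
        drug = rng.normal(0, 1, size=45)     # both arms: no real effect
        if ttest_ind(placebo, drug).pvalue < 0.05:
            hits += 1
    print(f"{hits / n_trials:.1%} of no-effect trials hit p < 0.05")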
A placebo group helps to test for selection bias and other types of biases that can show up in a trial, by having a placebo group that matches the drug group in as many ways as possible. The theory is that the two groups are equal, so the differences between them represent the drug's real effect. In most cases that is how it works.
However, just like the problem of selecting subjects for the trial as a whole, the placebo group can fail to be representative of the overall population and, more importantly, fail to exactly match the drug group. If the two groups are not equal, that introduces a bias into the statistical analysis and you might not get true results.
Subjects are randomly assigned to either the drug group or the placebo group which hopefully eliminates any potential bias. But it is a random process which means it is not always going to be bias free.
Just like flipping a coin. Over many flips the coin will show roughly even numbers of heads and tails. But in a small number of flips you might get 10 or more consecutive heads, throwing off the appearance of a fair coin.
One of the more interesting examples of this was in a stat course I took. The instructor broke the class into two groups. One group flipped a coin 100 times and wrote the results on one board. On another board he had the other group make up what they thought 100 flips would look like, and he left the room, saying that when he came back he could tell which group made up the numbers and which group actually flipped the coin.
He came back to the room and immediately pointed to the real coin group. When asked how he knew which was which, he pointed to the variability of the real coin flips: there were many more strings of heads or tails in a row than in the made-up set.
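His trick is easy to reproduce. A sketch with illustrative numbers:

    # Longest run of identical results in 100 fair coin flips.
    import numpy as np

    def longest_run(flips):
        best = run = 1
        for prev, cur in zip(flips, flips[1:]):
            run = run + 1 if prev == cur else 1
            best = max(best, run)
        return best

    rng = np.random.default_rng(2)
    runs = [longest_run(rng.integers(0, 2, size=100).tolist())
            for _ in range(1000)]
    print(f"median longest run in 100 real flips: {int(np.median(runs))}")

Real sequences typically contain a run of 6 to 8 heads or tails in a row; people writing down what they imagine randomness looks like almost never include runs that long.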
Statistics is never having to say you're certain.
It was a safety follow-up issue, like some of the subjects not making their follow-up visit when they were supposed to. You assumed it was an adverse event that needed to be followed up on. Doesn't look like that was the case.
Your post crap is becoming tiresome. It is board noise.
Probably a bit soon to be talking about the demise of Rett. Today was certainly a set back. No question about that.
The FDA might still accept the totality of the data from all three trials or it might require an additional trial. Either way I doubt that Rett is dead.
This was top line data. There is still a deeper dive into the data coming. We won't know if the data is more supportive or less supportive until we see it.
Ummm. Placebo subjects don't get the drug so how does 2-73 kick it up a notch?