So ASCENT-2 was not blinded then. The investigators or at least the PI, Dr. Scher, should have had a good idea of what was going on long before the halt.
Don't forget that the TTP data will be available at the interim too. It has been relegated to just a secondary endpoint. But if it shows a good trend or achieves stat sig that would be a good signal for the final look.
<(Although I wouldn't have described the CEO's words as 'word games' - just the standard attempt to turn statistics into laymanese.)>
Although that's largely correct, since the p value and the hazard ratio more or less correspond, Dew is right that there is a bit of wordplay here because the p value communicates a good-or-bad verdict that the hazard ratio alone does not. This is because the HR needs to be considered along with its confidence interval: how wide the interval is and whether it crosses 1. For example, the hazard ratio for 9902a wasn't too bad as a numerical value, but its confidence interval crossed 1, hence the bad p value.
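A minimal sketch of that correspondence (the HR and CI numbers below are made up for illustration, not trial data): on the log scale the standard error can be backed out of the 95% CI, and an approximate two-sided p value follows from the normal distribution. A CI that excludes 1 gives p < 0.05; one that crosses 1 gives p > 0.05.

```python
import math

def p_from_hr(hr, lower, upper):
    """Approximate two-sided p value from an HR and its 95% CI."""
    # On the log scale: se = (ln(upper) - ln(lower)) / (2 * 1.96)
    se = (math.log(upper) - math.log(lower)) / (2 * 1.96)
    z = math.log(hr) / se
    # Two-sided p via the standard normal CDF (built from math.erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# CI excludes 1 -> statistically significant
print(p_from_hr(0.70, 0.55, 0.89))  # well under 0.05
# CI crosses 1 -> not significant, despite a decent-looking point estimate
print(p_from_hr(0.80, 0.58, 1.10))  # well over 0.05
```

The second case is exactly the 9902a-type situation: a numerically decent HR whose interval is too wide to rule out no effect.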
IMHO, the effect of Web 2.0 will be many times larger than Web 1.0's in terms of social engineering, but its launch will not be/is not as dramatic either socially or in the market. It's the next logical evolutionary step from Web 1.5 sites such as iHub, Yahoo, iVillage, etc. Little new engineering will be needed, just more opportunities for consumers. Web 1.0, from Tim Berners-Lee on the academic side to the introduction of Netscape on the commercial side, was a paradigm-shifting event. It also made companies such as Cisco, Microsoft and Intel household names.
OT - Facebook - "Is it this decade's Netscape transaction?"
Time will tell. But a major difference here is that Netscape is a tool while facebook is a community. There is no stickiness in the former; there is in the latter.
One issue for companies whose revenue comes from ads is consistently reaching the right people, who will spend money on what they see. Google does a good job of that with sophisticated algorithms that guess a viewer's true interests from search terms and search history. facebook does one better in that they have captured a large chunk of the most desirable segment of the population, from teens to late 20s. These people spend on average half an hour or more a day playing around with their facebook "friends" online. And, the kicker is that they voluntarily reveal their interests in their profiles and their activities. That makes ad matching really simple and effective.
The valuation of 15B is too high for today. But then MSFT only put in 240M for a small stake, pocket change to gain access to the core facebook platform and learn a few things.
I wondered if that harshness was a kneejerk reaction after the relatively soft review of the Provenge BLA and the ensuing mess. Pazdur and CDER were bent on a "we'll show you how it's done here" kind of thing after forcing the Provenge CRL right before their own CDER AC meeting. The published OrBec data actually looked decent so the later extension of the review date for OrBec might have been a small sign of remorse on Pazdur's part. Not saying that this is a strong motivation, but given the ongoing publicity and lawsuits against the FDA, a factor to consider is that a reversal of the negative AC recommendation for OrBec would serve as an example that absolute approval authority for any cancer drug remains with him and the FDA.
The puzzle was that Dr. Provost's talk on 02/06 indicated only around 100 enrollees (slides 18&19 showed that there were 66 treated patients at that time). Of course, her slides could have used old data, but that would be odd given the importance of the meeting. So a wild guess is that those were the patients known for sure to be GS<=7. The trial ramped up substantially after the change to all GS and minimal pain, getting to 179 by Feb 2006, but perhaps those patients were excluded from her presentation.
Feb. 05 - 99 patients
Wall - Did you mean Feb. 06? That's when Nicole Provost gave her talk. There were about 100 patients randomized then per her slides.
You probably can answer that question more precisely and quickly than I with the search feature of iHub.
>Sorry to perpetuate this, but what you said simply means that the pros were wrong prior to the AdCom, ain't they?<
Not at all—it means the advisory panel was wrong.
This is a presumptuous and dangerous line of thinking.
Presumptuous because the advisory committee consisted mostly of people who have spent their lives doing research in the area, while who knows what the so-called "pros" on this board do. In any case, the AC's role was only to give advice to the FDA, not to predict its action. As such, they could not be wrong.
Dangerous because this line of thinking implies that the so-called "pros" here could accurately assess drug efficacy from published data and, further, predict what the FDA would do. In fact, in the other PC case dealt with recently, Satraplatin, the "pros" (or perhaps just "pro"?) were dead wrong with their continuing hype of the hazard ratio in the progression endpoint over the mediocre median difference and survival data.
The main lesson to be learned in both cases was that, even with many publications available, there was more hidden information than was known before the respective ACs.
crou & micro, I guess humor is suspended on a Sunday :).
Dew, feel free to delete this post as well as my other one. If crou didn't get it, it wasn't worth it.
David - Sorry but you cannot be right. Steady state is a well-known math model that always works... unless you are saying that this actual data makes its application here an imitation Naugahyde argument.
However, when Gleason score was used in a Cox model by the FDA statistician, he showed that the p-value was not statistically significant. By not including Gleason score in the Cox model, Dendreon were able to demonstrate that 9902a could be statistically significant.
Gleason probably had little to do with it. It was more likely that the non-stat-sig finding was because of what covariates the FDA statistician left out of the model. In turn, that included more patients in the analysis. The main issue with Dendreon's Cox analysis was that, of the patients excluded due to missing data on certain covariates, the controls lived longer than the treated.
The interim analysis would be unlikely to achieve stat sig for a variety of reasons including low alpha and low number of events.
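To put rough numbers on that, here is a sketch using Schoenfeld's approximation for the log-rank statistic, z ≈ ln(HR)·√events/2. The HR, alpha levels, and event counts below are assumptions chosen purely to illustrate the effect of a small interim alpha and fewer events, not actual trial parameters.

```python
import math

def phi(x):
    """Standard normal CDF built from math.erf."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def inv_phi(p):
    """Crude bisection inverse of phi; fine for a sketch."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power(hr, events, alpha_two_sided):
    """Approximate log-rank power: z ~ ln(HR) * sqrt(events) / 2."""
    z_alpha = inv_phi(1 - alpha_two_sided / 2)
    return phi(abs(math.log(hr)) * math.sqrt(events) / 2 - z_alpha)

# Hypothetical final look: alpha 0.05, 300 events
print(power(0.75, 300, 0.05))  # roughly 0.70
# Hypothetical interim look: alpha 0.01, 180 events -- much lower power
print(power(0.75, 180, 0.01))  # roughly 0.26
```

Under these invented numbers, the same true HR that gives decent power at the final look has only about a one-in-four chance of clearing the interim bar.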
However, the events do not necessarily consist mainly of early deaths. Due to the extremely long enrollment time and the fact that D9902b did not start from scratch but from a mere protocol change to D9902a, there may be many early-enrolled patients who lived a long time but have passed on by now. They would contribute to the latter parts of the curves. David Miller asked a very astute question about closing and reopening centers at the last CC that was related to this phenomenon.
Got it. It's in a table in the BLA. You know your numbers! Thx.
I'm guessing that there were 99 patients enrolled by Feb 2005, and we know that 179 were enrolled by Feb 2006, 294 by Nov 06, and 400 by March 07. We'll have to extrapolate from there, depending on when the assumed 180th death occurs.
Wall - The Provost presentation in Feb 2006 implied that there were about 100 patients enrolled in 9902b then. Where did you get the "179" from?
This is just simple combinatorics but when you have a range from 8 to 57 with only 10 or even 22 data points and you don't know how they distribute or clump along that range, the median is rather unreliable. When you add on the aspects of a subgroup analysis with some excluded patients and a phase-2 trial where the patients are openly selected and cared for, you need to be careful with these numbers.
For example, you may recall that the phase-2 Provenge trial showed 32 weeks median TTP for the patients receiving the same dose used in the phase-3 trials D9901 and D9902a. We know now how TTP went in the phase-3's. Just in terms of trial mechanics, what might have caused the difference? Larger number of patients and blinded randomized enrollment. These parameters ensure that the good results were not solely because the patients were healthy and would have done well regardless.
Note that I am not questioning the efficacy of GVAX, esp. its use jointly with chemo or ipilimumab (which was not the subject of testing in Vital-1 anyway). I am just cautioning blind trust in the quality of these published median numbers. And a final note, you have a style of writing that exudes certainty to your readers. It is good sometimes to question whether that certainty is what you intend.
<CEGE claims that the Ph3 GVAX dosing is at the highest dose level given in its Ph2 trials and that median survival at that dose was 35 months. This GVAX dosing without Taxotere equates to the median survival that Dr. Petrylak reported for the 51 ITT patients in 9901 and 9902a that took Taxotere after Provenge and will be compared to a Taxotere control arm where Taxotere had a median survival of 23 months in the asymptomatic subgroup in its clinical trials.>
When you look at medians, beware of small numbers. The GVAX median survival of 35 months was for 10 patients with a range (8,57). This is way too sparse to be reliable. It would have been nicer if they had reported the survival times of a few patients around the median to give some idea of how far apart they were. As this data is way too optimistic, and given the nature of an open trial with a tiny number of patients, it would not surprise me if Vital-1 yields wildly different results, much more negative than shown here.
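To illustrate how shaky a median from ~10 patients is, here is a toy simulation (the skewed distribution and its parameters are invented, not fit to any trial): repeatedly draw 10 survival times from the same underlying population and watch how widely the sample median swings.

```python
import random
import statistics

random.seed(0)

def sample_median(n=10):
    # Skewed, lognormal-ish survival times in months; parameters invented
    times = [random.lognormvariate(3.0, 0.6) for _ in range(n)]
    return statistics.median(times)

medians = sorted(sample_median() for _ in range(2000))
lo, hi = medians[50], medians[-51]  # middle ~95% of simulated medians
print(f"95% of 10-patient medians fell between {lo:.1f} and {hi:.1f} months")
```

Even though every sample comes from the identical distribution, the 10-patient median routinely varies by a factor of two or more, which is the caution about the 35-month figure above.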
These errors are just simple edit things. It appeared that Liu copied the chart from D9901 to change the numbers for D9902a but missed that one data point that Steve noticed.
<We're in the bitter ex-long phase.>
Neither bitter nor ex-long :). As usual just keeping things straight as I can see. Despite the usual elements of self-interest and extremism in these things, there are people who genuinely feel passionate about making Provenge available to patients. It's something one can feel sympathetic about, hence my continuing interest.
I suspect that management realized early on that the 483 issues were a problem that would cause a delay in approval. However, I also think that they were pleasantly surprised by the panel vote, as we all were, and thus concluded that they would get a minor approvable letter regarding the 483 issues only, which could delay marketing till the end of 2007.
...
The fly in the ointment here is that the 483 letter may also be the reason the FDA felt it could delay approval until it got further confirmation from 9902b. From the FDA viewpoint, since approval was going to be delayed to 2008 anyway because of the 483 issues, why not delay a bit longer and make sure the data is confirmed.
"a bit longer" would not be something I use to describe the difference between a delay to end of 2007 because of minor CMC issues and a delay to some time in 2009 to wait for the interim data and then resubmit new data. Further, CMC issues are certainly fixable while interim data success is not guaranteed. The weight of CR letter lays squarely on the requirement for new efficacy data and has very little to do with the CMC issues.
I would tend to trust Urdal who said something to the effect that the mention of the CMC issues in the CR letter was just a reminder to get it done. DNDN might or might not have done all the technical work to resolve the issues already. But if they have to resubmit a new set of documents in the second half of 2008, there is little urgency to put together a document on CMC now. If they get it all done by year end, that would be fine. The plant is still producing Provenge for new IMPACT patients so whatever the problems were, they probably had little to do with the manufacturing of Provenge itself.
<This would be done in case the alpha allocated for the interim look is in the 0.01-0.015 range, but 9902B's interim p value comes back between 0.015 and 0.05. The FDA would have to bend its Pazdurian standards somewhat to grant approval in this example>
Agree. I said earlier on iVillage that this might be a good reason for DNDN to tread lightly around the FDA despite recent events. ENCY did not manage their relationship with the FDA well.
http://www1.investorvillage.com/smbd.asp?mb=971&mn=137616&pt=msg&mid=2421916
[OT] Actually, ocyan, the iV administrator had nothing to do with it: I deleted the iV post myself.
Excellent! I am glad for you and glad to be wrong. Now you should work on refraining from such thoughts in the first place. We can all discuss more constructively then. Thanks for the info.
Rodenta - There is a difference between name calling and expression of an on-topic opinion. There is also a difference between the owner of a site preserving a modicum of politeness to promote constructive discussion and a moderator who removes posts based solely on his preference, thus excluding relevant information. Sorry if you don't see these differences but I respect your right to speak your opinions.
The relevant issue is what we think the magnitude of this bias is. The bias could be 1% or it could be 91% depending on (in poster iwfal's example) among other parameters, the fraction of the underlying population of drugs that are truly effective (v. bogus ones), the rate of false IDs in these clinical trials, ....
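That dependence can be made concrete with a one-line Bayes computation (all rates below are assumptions for illustration only): the fraction of phase-2 "winners" that are truly effective depends sharply on the base rate of effective drugs and the false-positive rate of the trials.

```python
def true_effective_given_pass(base_rate, power=0.8, false_pos=0.25):
    """P(truly effective | passed phase-2) via Bayes' rule."""
    p_pass = base_rate * power + (1 - base_rate) * false_pos
    return base_rate * power / p_pass

# With a 10% base rate, most phase-2 "successes" are still duds...
print(true_effective_given_pass(0.10))  # about 0.26
# ...but with a 50% base rate, most are real
print(true_effective_given_pass(0.50))  # about 0.76
```

Same trial machinery, very different bias, which is why the abstract phenomenon alone cannot predict any particular instance.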
This is correct. The problem with "program-survival bias" is that it is at too abstract a level to enable making any prediction for a particular instance with some measure of reliability. This thread of discussion really started with Dew's post below:
http://www.investorshub.com/boards/read_msg.asp?message_id=20615339
The problem that he framed was whether or not D9901+02a are good predictors of success for D9902b. The articulated sequence of reasoning steps was this: (1) asserting that the two phase-3 Provenge trials were like phase-2, (2) applying "program-survival bias" and (3) concluding that there is little predictive power from such phase-3's. (2) is a true statement but one must ask whether (1) was justified.
http://www.oncolink.org/conferences/article.cfm?c=3&s=26&ss=155&id=1082
The article that I quoted above, in fact, provides empirical data to guide in assessing such an assertion. These phase-3 trials were randomized, each smallish in size but together of reasonable size. So, empirically, they would be good predictors for similar conclusions in a new trial following the same protocol. So (1) above does not apply. Hence, (3) does not necessarily follow - except by chance.
Just to be clear, I am simply pointing out the fallacy in Dew's reasoning and not saying anything about the success probability of D9902b. There are better empirical ways to assess that probability based on simulation that Clark and others have done and I will do when I next have time to muck with my code.
I was a bit hard on Dew last night because of what he did to posters on the iVillage board, calling them delusional and sociopathic. That post of his was thankfully deleted by iVillage management. But this also serves as a good example of sampling bias in doing statistics. If one were to sample Dew's posts solely from iVillage, one's impression of him would be rather uncharitable despite his obvious knowledge exhibited elsewhere.
Clark - There is no doubt that this phenomenon regarding the predictive relationship between trials exists and could be captured by Bayesian statistics at a high level. But beyond that, there is a reasonable explanation of its root cause based on empirical evidence in the article that I mentioned in post 4353.
The problem with Dew is that he only has a superficial understanding of this phenomenon. But, in his usual blowhard mode, he made a big deal of it by giving it a weird and meaningless but important sounding name then wrongly applying it to the Provenge case by simply asserting the relabeling of two phase-3 trials as phase-2. Even after I showed the above article, he did not try to get the meaning of it, instead just stuttering something incomprehensible in his reply.
Dew isn't entirely stupid but he does not know the limit of what he knows. His nose/ego is too big so he often drips by insulting others gratuitously - witness his latest posts on iVillage. It's sad to see him like that as his natural skepticism and large knowledge of biotechs can be a good constructive force. But he discredits himself by not thinking through his assertions and often devolving into childish acts when not receiving the accolades that he assumes or, worse, being questioned.
Those who look to Dew for his opinions should place the correct weighting factor on them with regard to DNDN. My personal weighting function on Dew's opinions on DNDN is near zero at this point.
This is my last post on this thread. Back to paper writing.
The “cause” is variation relative to the underlying efficacy during phase-2 combined with selective attrition as programs advance from phase-2 to phase-3.
I must not be reading English. I have no idea what that says. What are the elements involved in "selective attrition"? I have a program that can generate better obfuscated sentences than that.
Clark - Your example is a simple one in Bayesian statistics. With regard to what I say about Dew's cockamammily named "program-survival bias" thingy, what's the expression that Americans use sometimes? Yanking something or some other :). We have an expression for his type: His nose is too big, it runs.
Program-survival bias refers only to the underlying efficacy per se—not to the clinical-trial design. To reiterate: the efficacy observed in phase-2 programs that end up advancing to phase-3 is biased on the high side relative to the true efficacy of the drugs in question. This bias is either underestimated or not understood by many biotech investors.
That's just another simpleton-ish tautology as it only restates the assertion that phase-2 trials do not predict phase-3 trials well without any explanation of the cause.
Below is an old article that you likely got the idea of "program-survival bias" from:
http://www.oncolink.org/conferences/article.cfm?c=3&s=26&ss=155&id=1082
Two pertinent points: (1) most phase-2's are not randomized and (2) most phase-2's are small. (1) means selection bias or special treatment can come into play while (2) means imbalances are rampant (perhaps partially due to 1). So any observation from phase-2's would/should be questionable. The article then concluded that "A careful analysis of P2Ts is crucial prior to designing RCTs and future P2Ts should include adequate number of patients. Additionally, randomized P2Ts may provide a more efficient trial design."
Both D9901 and D9902a were randomized trials with smallish but good size. They would be good predictors of how D9902b will turn out. So to say "Actually, DNDN did run a couple of phase-2 trials in HRPC: 9901 and 9902a." (post 4327) to apply the result of most phase-3's failing to replicate phase-2's efficacy observation to conclude that D9902b would likely fail shows either a lack of rigorous thinking or worse, rigorous and deceptive thinking.
Dew - Do you even understand the logic of what you write and how it applies to a particular situation? It is you who are propagating misinformation if you believe in the scenario you described and how it applies to the relationship between D9901+02a and D9902b. Think harder and more honestly. Regards.
Cox is an iterative procedure that continues to add/subtract individual prognostic values to correct for imbalances. So, inherent in its working is an averaging process due to the additions/subtractions. In signal processing, it is well known that averaging is a good filter to reduce noise. That's likely what gives Cox additional power even when the arms are perfectly balanced in all prognostic factors. That is, treatment arms can be balanced even as individual data points fluctuate, i.e., are noisy, and Cox averages out the noise.
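A toy sketch of that noise-averaging effect (not an actual Cox fit; linear outcomes and invented parameters are used just to keep it short): adjusting for a strong prognostic covariate shrinks the spread of the treatment-effect estimate even when the arms are balanced on average.

```python
import random
import statistics

random.seed(1)

def one_trial(n=100, effect=1.0, beta=3.0):
    treat = [i % 2 for i in range(n)]               # balanced arms
    covar = [random.gauss(0, 1) for _ in range(n)]  # prognostic factor
    y = [effect * t + beta * c + random.gauss(0, 1)
         for t, c in zip(treat, covar)]
    # Unadjusted estimate: raw difference in arm means
    unadj = (statistics.mean(y[i] for i in range(n) if treat[i])
             - statistics.mean(y[i] for i in range(n) if not treat[i]))
    # Adjusted estimate: regress out the covariate, then compare arms
    b = (sum(c * v for c, v in zip(covar, y))
         / sum(c * c for c in covar))
    resid = [v - b * c for v, c in zip(y, covar)]
    adj = (statistics.mean(resid[i] for i in range(n) if treat[i])
           - statistics.mean(resid[i] for i in range(n) if not treat[i]))
    return unadj, adj

results = [one_trial() for _ in range(500)]
sd_unadj = statistics.stdev(r[0] for r in results)
sd_adj = statistics.stdev(r[1] for r in results)
print(f"SD of estimate, unadjusted: {sd_unadj:.2f}, adjusted: {sd_adj:.2f}")
```

Both estimators target the same true effect; the adjusted one is just much less noisy, which is the sense in which covariate adjustment acts like a filter.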
Cox will worsen the p-value if treatment performs at least as well as placebo and there were significant imbalances in favor of the treatment arm. Witness the p-value of Cox model V in the FDA's stat review of D9901. That's the case with only one missing data point, so it's closest to ITT, yet the p-value worsened to .04 from the log-rank .01 because there was an imbalance in Gleason scores favoring the treatment arm.
You are talking about a different phenomenon. "Program-survival bias" as Dew has been using assumes that a phase-3 trial is designed based on one or more successful phase-2 trials and says something about the success rate of the phase-3. It's just his cockamammy way of putting together a few big sounding words to describe a trivial phenomenon (ie, any prediction with incomplete information has a chance of failing).
anyone betting long on the outcome of 9902b who is not at least somewhat apprehensive about program-survival bias is probably overestimating the likelihood of success.
Some of us do have the ability to compute. So when we invest or not, it's a little less than blind betting. Anyway, do you have any intrinsic definition of "program-survival bias" other than being just a fancy name for "phase-2's don't reliably predict phase-3's"?
Ok. Your assertion of "interim analysis stuff doomed to fail" implies knowledge and that got me curious. So I checked to see if you have something more concrete about the data, interim alpha, etc. Sounds like not.
this interim analysis stuff which is doomed to fail
Could you share your insights on how you came to that strong assertion?
<Ocyan, was that your post on the PSA-Rising website???>
Yes. I gave Dr. Scher a lot of slack until the NOVC-SGP partnership. Just didn't smell right.
At some point, I'll try to run a few simulations to see how the interim look might come out. Too busy at the moment. I can look at data but no promise on any quick return.
<Biowatch, I'm replying to your comment #msg-20151385 here...>
I posted earlier tonight on the Biotech Values board to reply to biowatch's message about the same point you made on DNDN investors. I also addressed the fallacy that the protest of the approval delay of Provenge has been just by investors. Instead, as the ABC broadcast showed, many patients were upset with the FDA decision.
That post was deleted by the moderator, DewDiligence, or one of his assistants. The possibility and now reality (to my experience) of this sort of censorship by a small mind is what keeps me from having a paid membership to iHub. Wall, keep up the good work. But I probably won't post much any more on this venue.
IO - The ABC piece also mentioned the meeting with Von E. but nothing on its outcome.
About the PS, I am currently not too occupied with investment - happy with what's already in place and happily busy. But, as you see, I am still partial to Dendreon for their science. Provenge works.
IO - Wall can always exercise his moderation but I really don't mind leaving Dew's posts there. People like him often show more of their inner self than they are aware of in their juvenile posts.
Back to topic, the ABC news piece on the DC protest today was great. They hit all the right points on how the FDA has turned too conservative on delaying Provenge. They even mentioned the panel votes 17-0 on safety and 13-4 on efficacy.