Raf - Sorry, I was not clear. Since they did not say anything in the abstract about looking at combinations of lipid measures to define groups, it is unclear what they did, but it seems doubtful they created groups by trying to categorize on multiple lipid measures simultaneously (too awkward). It would be more common to treat high TG as the primary predictor and use propensity score matching to statistically equalize everyone with regard to the other lipid parameters. This would remove the influence of the other lipids on the results they reported. My other comment had to do with whether the RWE results could be interpreted as indicating that high TGs cause MACE events (which would more strongly imply that reducing TGs would reduce MACE events). Their description does not make clear whether their RWE was cross-sectional (TGs and MACE events assessed at the same time) or prospective (TGs assessed early, with MACE events assessed later). The latter, which would allow a stronger causal interpretation, is possible but much more complicated.
I think it is safe to say that one thing NWBO will NOT be presenting as an abstract at ASCO is the blinded, blended data. You are not allowed to submit abstracts of previously published data, which would seem to be the case here, as the publication is expected any time. If they submit an abstract (LBA or otherwise), it would have to be topline data, and that does not seem possible given LP's statement in January about the required 3-month data-scrubbing process, which would occur in the spring. I suppose they could announce topline results (not ready until the last minute) in an industry theater talk. Much as I wish they would announce topline results in a poster or peer-reviewed talk at ASCO, I just don't see how it is possible.
You are correct - Once the trial is unblinded, any data collected after that is treated differently. In their topline publication, they will only be able to include the data that were collected as designed, in a blinded trial. Waiting longer to unblind therefore lets them maximize the OS (and PFS?) for this topline publication, which will be critical to how DCVAX impacts medical care (and probably reimbursement as well). There is nothing to stop NWBO from doing unblinded follow-up of their study patients for years after this, but that will have to be reported in a separate publication. Optune can go on collecting follow-up data and reporting updates as if it were part of the original trial because the trial was never blinded, so there are no unblinding issues. It was always a lousy design (only option was a control group wearing tinfoil placebo hats). The fact that it got approved anyway should give us all hope.
Raf - Unless they looked at the records over time (TG at baseline and then MACE events later in the medical records), you cannot necessarily infer causation. Regardless, they can handle these other potentially confounding variables statistically using something called propensity score matching, and they may have done this even if it was not a true prospective study.
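For anyone unfamiliar with propensity score matching, here is a toy sketch of the idea. Everything here is hypothetical (made-up data, a single confounder, a hand-rolled logistic fit): estimate each patient's probability of being in the "exposed" (high TG) group from a confounder such as LDL, then pair each exposed patient with the unexposed patient whose score is closest, so the matched groups are balanced on that confounder.

```python
import random
from math import exp

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# Synthetic cohort: exposed (high-TG) patients tend to have higher LDL,
# so LDL confounds any exposed-vs-unexposed comparison.
exposed   = [(1, random.gauss(130, 15)) for _ in range(50)]
unexposed = [(0, random.gauss(110, 15)) for _ in range(200)]
cohort = exposed + unexposed

# Fit logistic regression P(exposed | LDL) by plain gradient ascent.
b0 = b1 = 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for y, ldl in cohort:
        z = (ldl - 120) / 15            # center/scale for stability
        p = sigmoid(b0 + b1 * z)
        g0 += y - p
        g1 += (y - p) * z
    b0 += 0.05 * g0 / len(cohort)
    b1 += 0.05 * g1 / len(cohort)

def score(ldl):
    """Estimated propensity (probability of exposure) for a given LDL."""
    return sigmoid(b0 + b1 * (ldl - 120) / 15)

# 1:1 nearest-neighbor matching on the propensity score, without replacement.
pool = [(score(ldl), ldl) for _, ldl in unexposed]
matched = []
for _, ldl in exposed:
    s = score(ldl)
    j = min(range(len(pool)), key=lambda i: abs(pool[i][0] - s))
    matched.append((ldl, pool.pop(j)[1]))

mean_exposed = sum(e for e, _ in matched) / len(matched)
mean_matched = sum(c for _, c in matched) / len(matched)
mean_all_ctl = sum(ldl for _, ldl in unexposed) / len(unexposed)
print(mean_exposed, mean_matched, mean_all_ctl)
```

After matching, the matched controls' mean LDL sits much closer to the exposed group's than the full unexposed pool's does, which is the whole point: any remaining MACE difference is harder to blame on LDL.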
If a paper is accepted with minor revisions after the initial submission (not typical), 100 days is not unusual. I think that given the topic of the blinded paper and the challenges involved (being convincing in interpretation despite ongoing blinding), they would have to do at least one round of major revisions (with re-review), which would take longer.
AVI - I appreciate your balanced critiques.
FWIW - I can tell you that with all articles I have been involved in, the "date received" refers to the initial submission. Usually, it will also list the date any revision was received, and then the acceptance date.
Flip - I may be wrong on this, but based on a nice description of the topline result process for another stock I own, it seems that once the final event is called and the database is locked, the DMC (if there is one) no longer has a major role. Their primary role, while the study is in process, is to make sure that the patients are protected and the data are monitored for integrity. The company controls the analysis once the database is locked.
Not sure why - maybe they are just playing it safe just in case? This just does not seem to be much of a change, so I don't think it reflects any new "bad news" about the publication. If the article had been rejected recently, they would need a lot longer than an extra month for the warrants to be extended. Who knows, the publication might come out March 10 or June 30, but it is coming out eventually, which is what really matters to me.
How are those 2 things connected in any way? June has always been the end of the line. Moving it out a couple of weeks does not change the publication date, and the company could not yet know a publication date that far in advance.
Not much point in updating the blinded paper. Unblinded topline data will likely be available for publication by next Fall.
FWIW - The acknowledgments in Bosch's Sept 1 presentation where he said the blinded manuscript was "being finalized" listed 60 investigators (by my count). Presumably, these will be the authors on the blinded publication.
Linda M. Liau, MD, PhD; Robert M. Prins, PhD; Jian Li Campain, MD, PhD; John Trusheim, MD; Charles Cobbs, MD; Keyoumars Ashkan, MD; Jason Heth, MD; Sarah Taylor, MD; Stacy D’Andre, MD; Fabio M. Iwamoto, MD; Yaron Moshel, MD, PhD; Kevin A. Walter, MD; Clement Pillainayagam, MD; Edward J. Dropcho, MD; Rekha Chaudhary, MD; Samuel Goldlust, MD; Michael Gruber, MD; Tobias Walbert, MD; Paul Duic, MD; Jay Grewal, MD; Daniela Bota, MD, PhD; Kevin O. Lillehei, MD; Heinrich Elinzano, MD; Steven R. Abram, MD; Andrew Brenner, MD; Jana Portnow, MD; Simon Khagi, MD; Steven Brem, MD; Reid C. Thompson, MD; William G. Loudon, MD; Lyndon J. Kim, MD; Andrew E. Sloan, MD; Karen L. Fink, MD, PhD; David E. Avigan, MD; Julian K. Wu, MD; Scott M. Lindhorst, MD; Gabriele Schackert, MD; Dietmar Krex, MD; Jose Lutzky, MD; Hans-Jorg Meisel, MD, PhD; Minou Nadji-Ohl, MD; Arnold B. Etame, MD, PhD; Raphael Davis, MD; Christopher Duma, MD; David Piccioni, MD, PhD; David Mathieu, MD; Erin Dunbar, MD; Timothy J. Pluard, MD; Michel Lacroix, MD; David S. Baskin, MD; Victor C. Tse, MD; Sven-Axel May, MD; John Lee Villano, MD, PhD; James D. Battiste, MD, PhD; Michael Pearlman, MD, PhD; Paul Mulholland, MD; Michael Schulder, MD; Manfred Westphal, MD, PhD; Timothy F. Cloughesy, MD;
Flip - If I understand this correctly, I think 2 is referring to the median OS as compared to the total trial duration. So, if mOS is 22 months, the overall trial duration should be at least 44 months. This requirement would seem to make the mOS more accurate and stable.
Didn't doubt you Kiwi - just had not seen that myself. "Claim" was probably not the right word.
I guess this explains why the PPS is where it is right now...
"...and not anticipated to have a major effect on the outcome of the study..."
Doc - In terms of reality, you are likely correct.
Mr Main - Look at Table 2 #13 re: exclusion criteria: "Use of non-study-drug-related, nonstatin lipid-altering medications, dietary supplements, or foods...or plans to use during the treatment/follow-up period" Jardiance might fall under that.
Doc - If you want to get out there in speculation, another possibility is that if NWBO was in BO negotiations, they might hold off starting new trials to give the purchasing company full flexibility in proceeding however they see fit. Plausible, but don't know how likely that is.
Agreed - Prohibiting changes in drug regimens during trials is standard practice, because such changes make results for the intervention impossible to interpret. The only way this happens is if people start taking them from outside MDs and lie about it to study staff.
Bio Turtle - I like that! And as we know from the kids' story, the turtle wins the race (against the hare).
Raf - Complicated hypothetical, but in this situation I suppose there might be bias. I would just question why fewer dropouts due to side effects would occur in the V group if both groups had a similar % taking Drug X and all had the same side effects. In your scenario, if Drug X works to reduce MACE events and all people dropping out due to side effects stop taking Drug X, then the bias would actually favor the V group (more people stayed in the study in the V group who also got Drug X, which reduces MACE). I could live with that.
Would assume that as PYs increase, so do events, in roughly parallel fashion over short time periods like this. Shouldn't change much.
As long as any new medication use is similar in both randomized groups (which I assume it would be), I am not sure it would bias things in either direction.
Kiwi - Well, they have published the design and noted multiple times in their presentations that R-It was designed for a placebo rate of 5.2% (after they re-powered it). The designed placebo rate is not in question. What Granowitz meant in stating otherwise, I have no idea. Since they are still blinded, they could not know that the observed placebo rate was exactly 5%.
My modelling is based on the fact that if we know both the patient-years and the number of MACE events at the same point in time (recent PR, 60% IA, and 80% IA announcements), you can calculate the overall event rate, and from that, determine what the V event rate would be for any given placebo rate. Applying this approach using the recent PR data and the 80% IA information, we seem to be able to hit a 15% RRR as long as we see a placebo rate of 5.2%. AVI recently pointed out that this may paint a better picture than will actually be the case, due to the possibility (fact?) that patients who have an event continue to accrue patient-years. I re-did my model pulling out about 3,000 patient-years as a fudge factor, and the key target placebo rate becomes about 5.3%. None of my models showed that a placebo rate of exactly 5% led to an RRR of at least 15%. I still think Granowitz was giving a rough estimate rather than saying they expect the actual placebo rate to be exactly 5%. It just doesn't make sense.
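The arithmetic behind that kind of model can be sketched as follows. The function name and every number below are hypothetical placeholders (not the actual trial figures), and it assumes 1:1 randomization with roughly equal patient-years per arm:

```python
# Back-of-the-envelope: from the pooled (blinded) annualized event rate and
# an assumed placebo-arm rate, back out the implied relative risk reduction.
def implied_rrr(total_events, total_patient_years, placebo_rate):
    pooled = total_events / total_patient_years   # blinded overall event rate
    v_rate = 2 * pooled - placebo_rate            # equal patient-year split
    return 1 - v_rate / placebo_rate

# Made-up example: 1,000 events over 22,000 patient-years, with an assumed
# placebo rate of 5.2% per patient-year.
print(round(implied_rrr(1000, 22000, 0.052), 3))  # -> 0.252
```

Note how sensitive the answer is to the assumed placebo rate: because the pooled rate is fixed by the blinded data, every bit shaved off the placebo assumption comes straight out of the implied V-arm benefit.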
Anders - Just joking - I meant that nothing ever happens fast with NWBO
Ex - I agree that the blinded data paper may not move the needle much on the PPS initially, unless it is in a JAMA-, NEJM-, or Lancet-level journal. If the OS results look the way the company has suggested (and with OS essentially reflecting all patients receiving DCVAX), what the publication will do is give me high confidence that the final top-line DCVAX results, at least for the secondary OS outcome, will be good, that they will be published, and that FDA approval will likely follow. Granted, it may take another 2 years, but it will happen. The peer-reviewed blinded publication of more or less final OS numbers might also be useful as a tool in BO negotiations with a BP. Just saying....
Kiwi - If Granowitz literally meant that the placebo rate was exactly 5%, then their modelling would certainly show the trial will fail to meet an RRR = 15% target. Even my simple modelling shows this. If they knew R-It would fail, going to the expense of the planned advertising before R-It results would make little sense. I take the "5%" comment with a grain of salt, as an off-the-cuff estimate from someone on a phone call.
Raf -
FWIW - The PPS prior to the 80% IA hit its low almost exactly a month before it hit its peak on the run-up prior to the announced continuation. I think right now this is manipulation or "the weather," and no real sustained upward movement will happen until early summer.
Tas and MRM - I noticed in the conference call transcript that Granowitz said that the RWE studies to be presented (announced today) found that the MACE rate in their large clinical sample with high TGs was similar to what they assumed as a placebo rate in planning R-It. I thought this was good news, as it was based on >23,000 patients with TGs >= 150 who were taking statins (just like R-It).
I am guessing #3 at this point also (acceptance pending minor revisions). Even at this point, they could not announce acceptance because nothing is final until you get the official acceptance letter.
Ex - I agree. Substantive changes in a trial in mid-stream would never seem to be a good idea, even if you thought it was an improvement. Only way I could see this happening is if they got prior approval from the FDA to make this change.
Anders - True, but keep in mind that the paper is being done by the scientists, not the management. The scientists publish all the time, and they know where they are in the publication process. They have surely communicated this to the management, but given some prior experiences I have had in parallel situations, the management may not understand the nuances of publishing, so may make incorrect assumptions about the finality and timing of the publication process (which gets reflected in overly optimistic management guidance about the publication).
FWIW - Journals have several categories of editorial decisions they send out after reviewers send in their responses to the submission:
1) Reject (and don't send it back)
2) Major revisions (with no guarantee of acceptance, has to be sent out for re-review)
3) Minor revisions (will be accepted after minor changes are made)
4) Accept as is
It is common for articles that are eventually accepted to start out as a #2, then there is a #3, and then finally #4. That is why the process often drags out.
Flip - If they have an acceptance of the paper in hand (via e-mail to the corresponding author), then they have an approximate idea of when it will come out (or at least be available online). It would likely be hard for them to know an exact date until shortly before it did get published. My read, FWIW, on the conferences and drop outs is that you cannot present at a conference unless you propose a talk, with proposals for talks typically needing to be sent in 6 months or more in advance. So they have been setting themselves up to talk IF the paper was accepted in time. When acceptance did not come by a given conference date, they drop out because they have nothing to say yet. So, the fact that they are even signing up for conferences tells me they are far enough along the publication process that they think they might have something to present by the conference date. So in a weird way, their cancellations encourage me that they DO in fact have something to say (just not yet).
You are correct - LP and company have no control over the publication timeline.
Flip - As an alternative to preconditioning, "getting that damn paper out" might reflect frustration of someone who thought they had the paper accepted after making initial changes (and then scheduled the ASM), and then unexpectedly had to do additional revisions prior to acceptance (and let everyone down at the ASM).
Journals live and die by their "impact factor." The way to increase your impact factor is to publish widely cited articles. If we assume that the blinded results really are "very interesting" even though blinded, and it seems like something that would be a widely cited study, journals would have a strong incentive for wanting to publish this study despite its weaknesses.
Ex -
Why have they not voted to raise the share limit?
Anders - No. There are ways of handling conflict-of-interest issues for editors who are also authors. They usually just assign the handling of the paper to an associate editor. I do think LL's status as former editor-in-chief and as a current associate editor would help ensure that the paper got accepted there eventually (if that is where they submitted it). I think her stepping down had more to do with being overcommitted given her new role as department chair.