Beartrap - Possible, but there are many people qualified to do these analyses. Even if they were fixated on one specific person, I can see that delaying things maybe a month but not 6 months. They would have had this person lined up in advance. The analyses themselves would not take more than a few weeks once started. I am still hopeful that the holdup is waiting for journal acceptance. We will get TLD much sooner that way.
True -- Not totally impossible, and I hope to be pleasantly surprised. The question in my mind is, if this were true, why has it been 6 months from data lock with no publication. For comparison, Amarin's TLD were published in NEJM within 4 months of data lock. I am not being negative about DCVAX results - the trial will succeed using the new outcomes and it should be approved by the FDA according to the new guidelines. I think we just differ in how likely we think the best case scenario is to occur.
If so, that would be a likely tell that they are submitting to a top tier journal.
Flip - I am certainly mentally prepared for that still-good "worst case scenario", but hope to be pleasantly surprised. If DCVAX works reasonably well in recurrent GBM, which is only an assumption, there is virtually no way for the original OS outcome to be significant. There are just too few people who did not cross over and receive the intervention ("everyone is getting better"). The dots I am connecting on this are: 1) If the original OS outcome was significant, this would warrant publication in a top tier journal like NEJM, 2) To publish in a top tier journal, the endpoints reported have to match what is listed on CT.gov (the editors will check this), 3) CT.gov has not been updated so their new endpoints do not match what they now plan to report as their key outcomes (new SAP) in a TLD paper, 4) TLD will only be able to be published in a lower tier (but still good) journal. While management may not appreciate this, I cannot imagine the SAB missing out on a chance for a super high impact publication if this were possible. They would certainly have pressured the company to update CT.gov in this scenario.
PFS is more iffy in my view -- It might be significant or it might not. To what extent pseudoprogression confounds PFS is unknown (unlike OS, there are no blinded data available to use for making inferences). It is certainly possible that if the original PFS outcome were significant, they could publish this in a top tier journal, focusing on what is now a lower tier endpoint in their new SAP but still calling it the "primary outcome" (per the CT.gov listing). That would be weird, but possible.
Senti -
There is a lot of crazy talk these days about NWBO. The ongoing lack of information updates leaves an ambiguous situation that is easy to interpret through one's own biases and fears. I was hoping to post a few bits of reality, if for no other reason than to make myself feel better about the situation. Here's what we have:
1) Statements in yesterday's filing about COVID delays appear to be interpreted by some as indicating that there is an ongoing delay, resulting in the company (and advisory board) still being blinded 6 months after data lock. I can conceive of no set of circumstances where that is remotely plausible, even if they did have to "validate" certain data more after data lock. The latter situation also goes against the whole rationale for having a data lock -- it is the final dataset for analysis and cannot be changed further. They announced data lock 6 months ago. Ergo, the statisticians have been analyzing and the company and SAB have seen the data at some point since data lock. Analyses do NOT take anywhere close to 6 months no matter how detailed they are. That is impossible.
2) If the trial had totally failed (original endpoints and comparisons versus historical controls), there would be no reason to drag this out for 6 months "pending publication and release of TLD" as their PRs have indicated. Blinded data presented previously support DCVAX as being effective compared to historical controls. So, they are publishing a TLD article that at worst shows significant benefits compared to historical controls, and at best shows the original endpoints succeeded. Given that they are clearly delaying TLD for release with a simultaneous publication (the only theory that makes sense), we can infer that the data probably do not show the original endpoints succeeded but all evidence suggests that historical comparisons will succeed. They want to provide appropriate context for the TLD to avoid misinterpretation.
3) Some question whether the key endpoints have really been changed to historical comparisons, citing lack of announced "FDA approval" of that change and failure to update clinicaltrials.gov with the new endpoints. The following are relevant: a) the endpoints HAVE been officially changed by the agencies responsible for posting these changes outside the US, without a shred of doubt. b) The FDA never formally "approves" (on the record) protocol changes unless done as part of a special protocol assessment (not relevant to NWBO). They will either say informally "that looks perfect" or (more likely) say "that change will raise several issues and we cannot guarantee how the review committee will respond." c) If there is no formal FDA approval of changes, why would the company update CT.gov (which is their responsibility, unlike Europe)? CT.gov is relevant to publication in top tier journals, but has nothing to do with the FDA approval process. I am not even sure administratively whether it is possible to change the primary endpoints on CT.gov after a study has ended (it defeats the whole transparency purpose of the website). All changes have to go through a series of reviews - the company cannot just make them.
4) An interview with LL recently posted made clear that she feels historical controls are necessary and justified in trials like this, and also highlighted the importance of GBM treatment focusing on mechanistically relevant subtypes (no one size fits all treatment). We can infer from this that TLD analyses might show success only in particular GBM subgroups (e.g., methylated). Clearly she is totally on board with the revised SAP though. The fact that her UAB presentation included a slide with pooled historical control OS values across the most relevant studies makes pretty clear that they are using this approach in the new SAP (and were far enough along in analyses to have the comparison data available for that slide).
5) Many complain about lack of a stated timeline for TLD, and blame this on poor communication by the company. They may indeed be historically bad at this, but the company in this current situation cannot give a timeline because they are at the mercy of the journal that will publish the results. They really do not know. It is possible they may have to send it to 2-3 journals to get it published, and each journal starts a new clock on the process. This article will definitely get published, but it is not an easy sell despite the potential clinical importance. This is no longer a randomized controlled trial (it is not being analyzed that way). It is a case-comparison study, which in truth is a fairly mediocre research methodology, even if it is fully justified given the circumstances (changes over time in state of the art, confounding due to crossover, pseudoprogression). The company and the SAB have elected to use a less compelling analysis because they believe it is important to get the findings out there (indicating that they are assuming success) and find a way to show DCVAX works in spite of all the methodology limitations. This type of study is unlikely to get accepted by top tier journals, but if I were them, I would at least try this route first even if unlikely to be successful. As a result, the publication process has been extended, and the company cannot tell us how long it will take. It will be over when it's over.
6) Somebody posted that a psychic told them the trial results would be good, but not announced until around May. I thought that long a process was ridiculous at the time, but maybe that psychic was on to something.
Personally, my wife and I have 180,000 shares long and we are holding based on the information above regardless of what the AFs of the world say.
You can also go to the NIH Reporter website, search for Linda Liau, and see for yourself.
NIH Reporter Website
Dan - Thanks and agree. That is exactly what I meant. Maybe it wasn't clear in my earlier post.
You are probably correct if you mean by "data are bad" that the study failed its original primary PFS and/or secondary OS endpoints. That would not come as a surprise to most people here. However, those are no longer the official study outcomes (now it is historical controls, and results for these outcomes are all but certain to be positive). I agree that the motivation for announcing results through a publication has to be to provide adequate context for the complexities of the trial and how this impacted on the analytic approach and the study results.
Anders - Top journals like NEJM and Lancet have a 2 step review process. When you submit it, the manuscript undergoes an initial editorial review by a group of Associate Editors who decide if the submission is likely to get eventually published. If they decide it probably is not publishable, this initial review will trigger a fast rejection letter (within a week or two of submission). If they think it could be publishable, it gets sent out to reviewers which is when the waiting game begins. No matter how "famous" you are in your area, phone calls to the journal editor will not change this process - they are very strict in sticking to it.
Anders - I think the late Feb -- mid-March timeline Senti stated today is reasonable. There is just a lot we don't know. They probably did reach for the stars and send this to a NEJM/Lancet level journal just to see if they could get it in there despite the fact that the study no longer has internal placebo controls (a major weakness). If successful, we should see that publication soon. If not successful, my guess is that to avoid further delays in simultaneous TLD-Pub release, they would next send this to a good oncology journal, where I think their chances of getting published are much higher. That does unfortunately reset the clock on publication, which could extend the timeline for acceptance into April depending on how fast the journal's editorial process is. I am confident that this will get published in a good (if not great) journal, but as our prior JTM publication experience shows, it can take a long time unless the stars align just right.
Fox -
It is DCVAX. Here is the abstract of one project of their NCI Spore Grant that is supporting the combo trials:
Understood. I was thinking of the URI measure event rates.
HK - Good points. However, I am not sure the R-It event rate has any relevance to expected event rates in this study given the different outcome.
HK - FWIW - The analysis is probably propensity score matching, which is a well-accepted stats method. You can read about it online if interested.
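For anyone curious what propensity score matching actually does mechanically, here is a minimal sketch of greedy 1:1 nearest-neighbor matching. All names and numbers below are hypothetical; in practice the propensity scores would come from a logistic regression of treatment assignment on baseline covariates, and real analyses use more sophisticated matching than this toy version.

```python
# Toy greedy 1:1 nearest-neighbor propensity-score matching.
# Scores are assumed precomputed (e.g., from a logistic regression of
# treatment assignment on baseline covariates). All data are made up.

def match_nearest(treated, controls, caliper=0.05):
    """Pair each treated patient with the closest unused control.

    treated, controls: lists of (patient_id, propensity_score) tuples.
    caliper: maximum allowed score difference for a valid match.
    """
    available = dict(controls)  # id -> score, controls not yet used
    pairs = []
    for t_id, t_score in treated:
        if not available:
            break
        # find the unused control with the closest propensity score
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        if abs(available[c_id] - t_score) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]  # each control can be used at most once
    return pairs

# Hypothetical example: 3 treated vs. 4 historical-control patients.
treated = [("T1", 0.62), ("T2", 0.35), ("T3", 0.80)]
controls = [("C1", 0.60), ("C2", 0.33), ("C3", 0.90), ("C4", 0.45)]
print(match_nearest(treated, controls))
```

In this made-up example, T3 has no control within the caliper and goes unmatched, which is exactly the trade-off of matching: better covariate balance at the cost of dropping patients.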
Raf - For me, 3 month scripts are $27 and 1 month are $9. Not sure why it makes no difference.
FWIW - Just picked up my first 2021 V script from the pharmacy. Navitus (my Aetna pharmacy benefits manager) is still covering V as tier 1 with no generic substitution ($9 a month).
I have to wonder whether some of the people making big bucks on CVM might put some of their profits into NWBO, hoping for a twofer. This could drive increased volume and buying pressure, so maybe this recent increase in PPS while waiting for TLD could be more sustained. Where's Sushifishman?
Diver -
I agree. If the trial results the company has seen (and they certainly have seen them by now) are anything short of a total slam dunk (i.e., even the original randomized PFS and OS outcomes are significant), I think the company might have opted to have a TLD PR that coincides with release of a peer-reviewed publication. With the recent change in endpoints from what is listed on CT.gov and the primary analyses now being non-traditional in terms of FDA approval (use of historical controls), they probably feel (correctly) that the results will be received a lot more positively if they are fully detailed and explained (way more than can be said in a typical TLD PR). I can see NO other reason for TLD taking this long, so I am comfortable just waiting on the publication to be accepted (which, as I posted previously, can be a 3-6 month process from submission except under ideal circumstances). No way the historical control analyses (at minimum) aren't significant, so no worries about results ultimately being viewed as positive.
Flip - That's true, and if they write the TLD paper to focus on the original randomized trial PFS and OS endpoints, NEJM or Lancet would absolutely be interested in publishing results, whether positive or negative. However, the changed endpoints make me think that the TLD publication will be framed as an historical control study, not an RCT. That is the type of paper that I am suggesting NEJM/Lancet is more likely to pass on. NEJM is one of those journals that will actually check clinicaltrials.gov and require that the primary and secondary endpoints listed there match what is reported in the TLD paper. This is where NWBO's changed endpoints and failure to update CT.gov can actually hurt them. Hypothetically, if either of their original endpoints were significant, they might find a way to present TLD as both an RCT and an historical control comparison to satisfy NEJM reviewers. That would be my dream study.
Senti - NEJM and Lancet pass on a lot of very high quality studies (including actual RCTs) for various reasons. The DCVAX trial with the new endpoints is not an RCT, which is typically a design requirement for studies they publish. The topic will likely be very interesting to them, but I do not think the odds are high that this high interest will be enough to outweigh their concerns about the design limitations. Not impossible, just not likely. I think results will be good, but I am trying to be a realist regarding the study limitations and the impact that will have on the eventual publication outlet.
Bullish on results too but I tend to agree on publication outlet for the reasons stated. It will end up in a good journal, but not likely top tier. If the company publicizes the results well, I am not even sure it will make much difference to the stock price.
Viking -
Even if they did not start writing until after DL, announcement of acceptance of an article anytime from about mid-January onward is plausible. This is based on the speed with which "hot topic" articles can be written by motivated investigators and the speed of the review process at many top journals (e.g., see timing of Amarin's Reduce-It results in NEJM about 3 months after DL).
Mary Ann (Dawn Wells) just died of Covid last week. Weirdly, I actually had Christmas dinner with her once at a friend's house. She was a really nice person.
Probably having difficulties getting their office door closed...
Pablo - Maybe they are improvising based on the complexity of the results they found once the analyses were completed.
Happy - I am not saying results will be a failure whitewashed to look like success. What I am saying is we will likely have success based on the new primary endpoints (which regulators seem to agree with), but possibly failure based on the original ITT PFS and OS endpoints (now secondary endpoints). We don't even know that failure of the latter is the case until we know whether DCVAX worked significantly in recurrent GBM and what adjudicated PFS looks like. So I am arguing for success that will require significant explanation (beyond 2-3 paragraphs) for readers to comprehend it accurately.
FWIW - I know a cardiologist at my major medical center who said they are seeing a lot of young patients with severe Covid-related inflammation who are going into cardiac failure as a result. I just sent her the recent Bhatt study on V's benefits on inflammation and Covid symptoms. She knew of Vascepa but had not seen the study. She said she would forward it to the group at our medical center which compiles all current info on potential Covid treatments. Any of you in similar positions to alert those on the frontlines treating Covid patients may want to do the same thing. I doubt many know of these recent findings.
Anders - Just my opinion but here it goes (sorry for how long this is). The ever lengthening amount of time since data lock with no TLD announcement increasingly makes me think they are planning to announce TLD via a simultaneous peer-reviewed publication. This is not the "normal" way of doing TLD announcements, but this is not a normal dataset. I think the multiple analyses planned (potentially getting at the same issue using multiple statistical approaches), the multiple endpoints, the use of historical controls which needs to be fully explained, the possible failure of original endpoints despite apparent trial success compared to other similar trials, etc. all may require more detailed context than is possible in the typical standard 2-3 paragraph TLD press release. This context will be critical to making sure that the results come across positively rather than incorrectly making the trial look like it failed or is questionable, so it is definitely worth the wait to do things right (as DI has also apparently stated). If the statisticians have been moving cautiously to make sure that all of the analyses are complete and correct, I think they should have had all results, tables, and figures needed to begin writing a publication available by approximately mid-November (possibly sooner if they have been dedicated solely to working on this one project). We know from LL's recent talk that they do in fact already have the survival curves (individually and pooled) necessary to do what I suspect might be the most time consuming part of the primary analysis - comparison to external controls. It may have taken them some time to identify and agree on the right comparison trials and construct these survival curves (I'm not sure whether they had easy access to the raw patient-level trial data from other trials). So, I assume they had the complete results together by about mid-November.
I further assume that LL will be first author (as in the blinded/blended paper) and that she would be the one drafting the paper, possibly with the primary statistician simultaneously working on the statistical methods and results sections. LL always seems to be quite busy, but given that she knows this literature quite well without having to do an extensive literature search, I can see her finishing an initial draft in as short as 2-3 weeks (beginning of December). Next, she would have to send the draft out to 68 or so co-authors for comments and edits. If I were in her shoes, I would give the co-authors a two week deadline (which is a realistic minimum for busy physicians) to send back comments. Every time I have written a paper with a lot of authors, it has been like herding cats, and there are always a couple of people who don't respond until they have been reminded multiple times. LL will be limited in her ability to finalize and submit the paper until every author responds, because all authors will be required at the time of submission to click a link saying they did indeed contribute to and agree with what is presented in the submitted paper and they all will also have to fill out online copyright assignment and conflict of interest forms. The article will not be sent out for review until all authors do this. Someone will always be slow to take care of these requirements, and with 60+ co-authors, I can see this taking up to a couple of weeks between submission and actually being sent out for review. So my guess is that by today they would have gotten to the point of having an article submitted. As I have said in prior posts, if their original planned PFS and OS outcomes (DCVAX vs. the internal placebo group) are not significant and the only significant findings are the comparisons to historical controls, getting this accepted by NEJM or Lancet may be challenging. However, I assume they would want to try one or both of these top journals first.
These high end journals do an internal editorial board review of submissions before the paper is sent out for peer-review. If the editorial board thinks the paper is unlikely to get accepted eventually, they will typically within a week or so of submission contact the corresponding author and reject the paper before it is even sent out for review. I think LL would at least try to send this paper in to one or both of these top journals. If they get editorially triaged at a top journal, they would then immediately submit the paper unchanged to another journal. If NEJM or Lancet did send it out for full review, I think it would be on a rapid timeline, where the authors would get reviews back within 2 weeks. No paper is ever accepted without revisions, so there would be a process of another approximately 2 weeks (if they are trying to rush it) to revise the paper and get everyone to ok the changes, followed by another 2 week review process (and hopefully acceptance). The best case scenario assuming they submitted the article by 12/15 would be formal acceptance by one of these two top journals by mid-January (although holidays often will slow this process down). Once the company has an acceptance letter in hand, they could in theory announce TLD in a PR saying the paper has been accepted by XX journal, and provide some additional context in the PR based on what is written in the article. My guess however is that they would want to wait to release TLD until the manuscript actually has come out in electronic preprint form (which could be as short as 3-4 weeks after formal acceptance based on my experience). That is the best case scenario at a top journal. If the first submission is not successful, each time the article is rejected we are probably looking at adding a minimum of another 4 weeks to the timeline above. 
My guess is that this paper will be accepted by some reasonable quality journal by around the end of January, because the trial is unique in how long it ran (5 year survival is for all practical purposes actual survival rather than K-M estimates), how deadly the disease is, and because it suggests a rare success of a vaccine approach to cancer (so it is quite novel). So, there is a lot of guesswork in this, but considering all information available, I think the publication could come out by mid-January at the absolute earliest, and possibly as late as end of February if they end up on a tortuous path to acceptance. This is all informed guesswork. It is always possible that if the publication process looks like it will not be smooth, they could change plans and release TLD in a PR earlier. If we have not heard anything about TLD by the first week in January, I think it will be pretty certain that they are waiting on a journal acceptance to release TLD.
BSB -
I have no special expertise in this area (I do biomedical rather than financial research), but here is my take on the two issues you raised:
1) Illegal shorts - No supporting facts that I can see for this, but I would guess there are not a lot of naked shorts currently if there are any.
2) MM impact on price -- Absolutely exists. I think MMs are manipulating PPS both up and down as we wait, moving it in whichever direction maximizes their profits at the time. There just have been too many days with relatively large percentagewise downward or upward movements in PPS on relatively low volume to have real supply and demand alone be driving the price. Although I have always been a bit skeptical about the utility of TA as a predictor of future stock price movements, I have to say that in NWBO's case, Soj and others' TA posts have seemed to predict with some accuracy when the MM algorithms are switching from price-down to price-up strategies. However, I think the big rise in price in October was actually driven by real buying rather than MM manipulation, based on the much higher volume and the real news context at the time (Flaskworks purchase, assumption of impending data lock). Given the absence of real news recently, I think we are back to having MM algorithms drive much of the daily price changes.
I am a "Full Professor" too [By the way, that is an actual term used in academics to more clearly distinguish Professor from Assistant and Associate Professor].
Fox and AVII are correct that the new primary outcomes cannot really be called "results of a randomized placebo controlled trial." This can only be said about the new secondary outcomes (the original primary and secondary outcomes) that will compare those randomized initially to SOC + DCVAX to those randomized initially to SOC + placebo in terms of PFS and OS.
The new primary outcomes are best described as a controlled case comparison study with external controls, which is in fact not considered a very methodologically strong design (although it was necessitated in this case by the need for a crossover option to get people facing almost certain death to enroll). Because of this suboptimal design, I think everyone needs to temper their expectations about publishing these findings in NEJM or Lancet, which have VERY high methodological standards. They might end up getting the results published there simply because they are intriguing in suggesting possible vaccine efficacy against a very deadly cancer, but certainly not a slam dunk. If their analyses of the original randomized placebo controlled endpoints show that PFS and/or OS are significant, then publication in NEJM or Lancet is virtually assured. In the end, where it is published will not matter a huge amount to the stock price as long as the journal has a decent impact factor, is PubMed listed, and the company/journal do a good job of PRing the publication.
Because of the nature of the new endpoint analyses, the results will not go unquestioned, and for legitimate methodological reasons. One poster here recently mentioned the value of a mechanistic analysis, linking the magnitude of immune response (or even pseudoprogression) to positive outcomes. Great point! That is probably the company's best way to show that the results of their historical control analyses are real treatment effects rather than spurious findings due to differences in selection criteria. These study limitations are potentially addressable by fully using all of the data they have.
While we wait, I think it helps to remember that:
1) GBM is an FDA-designated orphan condition without adequate treatments, and the FDA is so serious about finding treatments for orphan conditions that they have their own division within FDA.
2) Consistent with the goal above, the FDA has explicitly changed their guidance in the past year regarding drug development for orphan conditions to make it easier to approve treatments that may be effective but where efficacy may be hard to show using traditional RCTs. Such RCTs in the orphan context are made difficult by the small patient populations (and sample size issues) and ethical concerns (and associated recruitment disadvantages) related to assigning patients to receive placebo only. Patients won't sign up for that. This latter issue is why the DCVAX trial morphed many years ago from a straight DCVAX vs. Placebo RCT to a crossover design in which everyone could get the potentially life-saving treatment. This crossover design also meant that the most clinically-salient endpoint, OS (which is a "real outcome" rather than a surrogate outcome like PFS), could not really be analyzed using traditional treatment vs. placebo comparisons in an unconfounded way. These issues driven by the unique nature of trying to do orphan condition studies are why historical controls are now considered by the FDA to be ok as a comparison group specifically in the orphan condition context. The old RCT rules do not apply.
3) The FDA is concerned with BOTH safety and efficacy. While efficacy may still be unclear, we know DCVAX is very safe.
4) The FDA wants to be able to approve new effective treatments for orphan conditions, and will focus on the risk/benefit ratio. For a treatment that is extremely safe like DCVAX, they will likely not have to show amazing efficacy for the risk/benefit ratio to be viewed as sufficient to warrant approval. Some signal of efficacy (e.g., advantage of 4-5 months on OS) vs. historical controls with no side effects should be exactly what the FDA is looking for to advance the treatment of GBM.
ATL --
I took the "3 different analyses" comment to mean standard K-M curve analysis, a simple comparison of milestone OS rates (e.g., 3-year survival), and one other analysis that they feel will be particularly sensitive to capturing the long tail (not sure what though).
Thanks ATL. I had missed that graphic from LL's recent talk. Great news, as it indicates they have been able to construct individual and pooled K-M curves for SOC groups in these prior similar trials, and that these (most likely the pooled curve) will probably serve as the historical comparison data in their TLD. Sad to have to rely on this "tea leaf," but I think it does indicate they are well along in the analysis process -- they would probably not have had these curves in a graphic otherwise.
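For those not familiar with how those curves get built, here is a toy Kaplan-Meier estimator in plain Python. This is only to illustrate the product-limit mechanics; the survival times and censoring flags below are completely made up and have nothing to do with actual trial data.

```python
# Toy Kaplan-Meier (product-limit) estimator. Data are invented for
# illustration only -- times in months, events: 1 = death, 0 = censored.

def kaplan_meier(times, events):
    """Return [(t, S(t))] pairs for the K-M survival estimate."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)   # events at time t
        n_t = sum(1 for tt, _ in data if tt == t)      # all leaving at t
        if deaths:
            # product-limit step: multiply by fraction surviving time t
            s *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, s))
        n_at_risk -= n_t
        i += n_t
    return curve

# Made-up mini-cohort: deaths at 5, 8, and 12 months; censoring at 8 and 15.
print(kaplan_meier([5, 8, 8, 12, 15], [1, 1, 0, 1, 0]))
```

A pooled historical-control curve would just apply the same estimator to the combined patient-level data from the comparison trials, which is presumably why having the individual curves in hand matters.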
Regarding your table, I think that the most directly interpretable comparisons are the specific Methylated and Unmethylated data (which do look positive for DCVAX). That is because the proportion of methylated/unmethylated differs substantially in the combined samples across studies, and this will influence the overall OS in the combined groups. The DCVAX trial has about 40% methylated, whereas Gilbert is only 32% and Stupp is about 45%.
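A quick toy calculation shows why the methylated/unmethylated mix matters. The subgroup medians below are invented, and treating medians as if they averaged linearly is only a rough approximation (medians don't truly combine that way), but it illustrates the direction of the effect:

```python
# Invented numbers, and a deliberately crude approximation: treat the
# combined median as a weighted average of subgroup medians to show how
# subgroup mix alone can shift the overall figure between trials.

def combined(p_meth, median_meth, median_unmeth):
    """Approximate combined median OS given fraction methylated."""
    return p_meth * median_meth + (1 - p_meth) * median_unmeth

# Same hypothetical subgroup medians (months) in both trials:
m_meth, m_unmeth = 30.0, 16.0

print(round(combined(0.40, m_meth, m_unmeth), 1))  # ~40% methylated mix
print(round(combined(0.32, m_meth, m_unmeth), 1))  # ~32% methylated mix
```

Even with identical subgroup outcomes, the trial with more methylated patients shows a longer combined median, which is why subgroup-specific comparisons are cleaner.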
ATL - Here is my prior post. It was before we learned of the new endpoints, so the numbers reflect combining new and recurrent GBM:
FWIW - Some back of the envelope calculations to estimate unblinded 3 year OS in the DCVAX ITT group using very conservative assumptions. We don't have info to estimate 5 year OS.
These calculations are based on the following:
1) Overall blinded median OS from the JTM article is 24.2 months (based on those actually in the trial for 36 months at the time of JTM publication)
2) Assume all crossover patients did exactly as well as those assigned initially to DCVAX
3) The 30 control patients who never crossed over live to the standard of care median OS of 17 months.
Using this input, the median 3 year OS for the ITT DCVAX group alone increases to 25 months (from 24.2 months in the combined group). This tells us that if DCVAX works just as well for rGBM as nGBM, what the JTM article reported likely represents a fairly accurate picture of what median 3 year OS would be in the unblinded DCVAX ITT group (i.e., splitting out the controls does not really change things much). This is the lower limit of 3 year OS for the ITT DCVAX group that we could expect in the TLD.
What is important to recognize though is that these unblinded DCVAX ITT median OS numbers increase further to the extent that crossovers (rGBM) do not do as well as those originally treated with DCVAX (which to me does not seem to be an unreasonable assumption). Just as an example, if we assume that unblinded 3 year median OS in the crossover group was 20 months (rather than 17 month SOC), the unblinded median OS for the ITT DCVAX group becomes 26.8 months (an increase of 2.6 months over the blinded JTM data). This to me seems to be a pretty realistic assumption. Regardless, it doesn't seem that unblinded median 3 year OS will increase dramatically from what has been reported based on blinded data unless DCVAX simply fails to work in rGBM. Caveat - Everything above does not address what 5 year median OS will look like on an unblinded basis, or what the KM analysis would show.
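The first of those back-of-the-envelope steps can be reproduced in a few lines. Treating medians as if they averaged linearly is only the same rough approximation the calculation above uses, and the total N of 331 is the trial's reported enrollment (the 30 never-crossed controls and the 17-month SOC median are the stated assumptions):

```python
# Crude what-if: solve for the median OS of the remaining patients after
# splitting out the 30 never-crossed controls, using a weighted-average
# approximation of medians (not strictly valid, but matches the
# back-of-envelope logic above). N = 331 is the trial's reported
# enrollment; other inputs are the stated assumptions.

N_total = 331           # total ITT enrollment
combined_median = 24.2  # blinded median OS from the JTM article (months)
n_controls = 30         # placebo patients who never crossed over
soc_median = 17.0       # assumed SOC-only median OS (months)

# combined = (x * (N - 30) + 17 * 30) / N  -->  solve for x:
n_rest = N_total - n_controls
rest_median = (combined_median * N_total - soc_median * n_controls) / n_rest
print(round(rest_median, 1))  # comes out just under 25 months
```

Raising the assumed crossover median (e.g., to 20 months for a partial rGBM benefit) works the same way and pushes the implied DCVAX ITT median higher, as described above.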
ATL -- I did a "what if" analysis of unblinded OS on the assumption that the 30 or so initial placebo patients who did not cross over showed the typical OS reported in the literature. Taking these people out did not make a huge difference (1-2 months) in the observed OS among the remaining patients if we assumed that the crossovers got some benefit. Unless crossovers got no benefit from DCVAX at all, the blinded OS is a reasonable approximation of the final unblinded result.
Means nothing. CME requirements for presenters at talks typically state that brand names of drugs should not be used. Not using the brand name "DCVAX" therefore is totally consistent with this requirement.
Cherry - I agree that results using their new endpoints will likely look very good. My sense is that even though the field is changing (as reflected in recent FDA guidance), there is still likely to be a bias at the best journals for highly rigorous studies methodologically. A trial with outside historical controls rather than within-study randomized controls is inherently a much weaker design (although I totally agree with the reasons for NWBO choosing this new option). I will be pleasantly surprised if this trial is published in NEJM or Lancet, but I would never totally rule it out. I have no doubt it will get published somewhere with a reasonably good impact factor regardless. The results will just be too cool for some journal to pass up.
Dr. Bala - Absolutely correct. Totally fits with my experience.
No - I have not heard of that. The following is informed speculation only. Traditionally, there is a TLD news release followed later by a journal article detailing the results. The only reason I can see NWBO skipping this traditional TLD news release before any journal article is accepted is that there is so much nuance/context required to interpret the results that they think a brief TLD release would be misleading or inadequate to capture what the trial found. That might fit with the unexplained delay in TLD release (waiting on journal acceptance) and DI's implications that they want to make sure everything is rock solid.
Unless the original primary (PFS) and secondary (OS in ITT analysis) outcomes are significant, I think they may have trouble publishing this in a top journal (e.g., NEJM) due to the post hoc changed endpoints and lack of within-study controls for analyses. I don't think this necessarily invalidates findings of likely DCVAX efficacy, but their final study design with the changed endpoints is a much weaker design than before. I can see this getting into a second tier specialty journal pretty easily given the novelty of the findings, even if the study is suboptimal. Journals are ranked by their impact factor, which is a number determined by the number of citations articles in the journal receive. The prestige of journal editors comes from having a high impact factor, and I know from having been Associate Editor of several journals that editors sometimes chase impact factors. So, I can see some editor wanting to take this paper despite its flaws simply to get the study published in the journal so it is cited a lot, with the benefit to the journal being a bump in their impact factor. If LL knows the journal editor or action editors handling the submission, this could increase its chances of publication. Well-known expert authors get cut a little more slack sometimes even when reviews are critical. I have also been involved in handling a few submissions where the final decision to accept was subtly influenced by the editor wanting the article accepted.
All of the above speculation could be wrong. What may be happening is just that the statisticians and company are very slow in finalizing all of the results. Maybe the approach they proposed in the revised SAP is more complicated to carry out than they anticipated. Sixty days to get to TLD after data lock is not unheard of. If we do not get TLD before Christmas, that may indicate that journal publication is the issue, in which case we may not hear anything for a while depending on how smoothly the submission goes.