
petemantx

05/01/19 11:21 AM

#261637 RE: 1oldprof #261636

Here are two blogs, dated Sept 2018 and April 2019, posted by IPIX on their website comparing Brilacidin-OM to competitors. Pretty clear Brilacidin is the most effective treatment going, not to mention its safety and ease of delivery. Does anybody really believe the company would put out such data unless they knew they had the most effective product?

http://www.ipharminc.com/new-blog/2018/9/24/brilacidin-for-oral-mucositis-at-a-glance-comparative-data-presentation-with-other-investigational-om-drugs


http://www.ipharminc.com/press-release/2018/4/9/innovation-pharmaceuticals-phase-2-oral-mucositis-trial-additional-data-show-brilacidin-om-demonstrated-a-significant-reduction-in-the-incidence-of-severe-oral-mucositis

PlentyParanoid

05/01/19 3:51 PM

#261684 RE: 1oldprof #261636

Hi 1oldprof, I appreciate your insights. I suspect an answer from SS is not forthcoming. Maybe I can pipe in. Below is the most honest comparison I can come up with.



I guess some explanations are needed.

First: In general, the FDA (and I even more so) frowns upon ratios based not on actual ITT populations but on some 'evaluable' group. Bad practice, especially when dealing with a preventive treatment, where any 'evaluable' group is hard to justify. But biopharma does what it does best until the time comes to file for FDA approval.

Hence, only the values for Soligenix and for Galera's overall performance are calculated per actual ITT population. Even that took some undesired work: Soligenix probably did not notice that sufficient info could be pieced together from Clinicaltrials.gov and an appendix to an article. Galera made a slip and included a slide with SOM swimlanes showing the actual ITT population for the placebo and 90 mg groups. The rest of Galera's numbers are based on 'evaluables', which don't allow backtracking to per-actual-ITT values. It is easy to see that headcounts derived from the percentages Galera reported (in a table touting ITT counts) for SOM subjects in subgroups do not add up to the SOM counts from the swimlanes. Hmm... IPIX's numbers do add up, but are still based on 'evaluables'. Hmm... Let's put it this way: Galera is feeding us the usual press release / presentation fudge, and IPIX is probably doing the same.

Some observations:
1. Both Galera and Soligenix are currently recruiting for P3 trials. Take a look at Dusquetide's performance. If that warrants a P3 ...

2. The cisplatin once-weekly (Q1W) placebo group in the IPIX trial seems to be an anomaly. Its SOM rate is significantly different from the corresponding groups in the other trials; p-values from Fisher's exact test were below 0.05 when tested.
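For anyone who wants to check that kind of comparison themselves, here is a minimal pure-Python sketch of a two-sided Fisher's exact test for a 2x2 table (SOM vs. no SOM, trial A vs. trial B). The counts in the test below are made up for illustration — they are not the actual trial numbers.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def p_table(x):
        # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)
    hi = min(row1, col1)
    # tiny tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

This is the classic conditional test; it agrees with standard implementations that use minimum-likelihood ordering for the two-sided p-value.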

3. Galera's trial should be considered to involve a different subject population from the others. See the differences in placebo risk ratios: somehow subjects on a low weekly dose of cisplatin fare worse than those on a high cisplatin dose every three weeks. Actually, the risk ratios for Brilacidin and Dusquetide are not included in the 95% CI for GC4419. A bit inconclusive, but hmm...
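As a reference point for how such risk ratios and their 95% CIs are typically computed, here is a sketch using the common log-scale (Katz) approximation. I am not claiming this is the exact method behind the table above, and the counts below are hypothetical:

```python
from math import exp, log, sqrt

def risk_ratio_ci(x1, n1, x0, n0, z=1.96):
    """Risk ratio (treated vs. placebo) with an approximate 95% CI
    using the standard log-scale (Katz) method.

    x1/n1: events/subjects in the treated arm
    x0/n0: events/subjects in the placebo arm
    """
    rr = (x1 / n1) / (x0 / n0)
    # standard error of log(RR)
    se = sqrt(1 / x1 - 1 / n1 + 1 / x0 - 1 / n0)
    lo = exp(log(rr) - z * se)
    hi = exp(log(rr) + z * se)
    return rr, lo, hi
```

With small arms the interval is wide, which is exactly why a risk ratio falling outside another drug's 95% CI is suggestive but, as said, a bit inconclusive.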

In the table there are only two indications of a statistically significant difference: one for GC4419's performance in the cisplatin Q1W group, the other for Brilacidin in the cisplatin Q3W group. BTW: the latter 'achievement' (zero not included in the CI) is based on a confidence interval for the rate difference. Some dudes plenty better than me consider significance based on confidence intervals less sensitive to small changes, and therefore a more reliable method, when dealing with small samples. This, of course, does not mean that P3 would be a breeze, but worthwhile - yes, in my opinion.

For those interested, the methods used:
p-value: Barnard's test.
Confidence intervals (CI) for a single rate: Jeffreys CI, which has nice coverage properties at 95%.
Confidence intervals for the rate difference: Miettinen-Nurminen score-based CI, for the same reason as the Jeffreys CI.

I guess this should take care of my posting quota for the month of May.

arvitar

05/01/19 8:51 PM

#261706 RE: 1oldprof #261636

The FDA views p-values as necessary, but not sufficient, for the interpretation of study results. For example, they consider bias (in the statistical sense) to be at least as important. Sources of bias include study design, randomization strategy, conduct (blinding, informative censoring, missing data), analysis (e.g. changing pre-specified goals), reporting and interpretation (e.g. the intent of protocols and amendments), and the like.

That is, there are many reasons why the FDA might consider the results of a study dead on arrival even though satisfactory cookbook p-values can be calculated.