The EMA and FDA can approve a drug if one of two primary endpoints reaches the significance level (p ≤ 0.025)
and the other primary endpoint is not critical; the drug can then be approved for the indication supported by the significant endpoint.
When a clinical trial is designed with two primary endpoints, the trial's success typically hinges on whether both endpoints or at least one of them meet a pre-specified significance threshold. If only one of the two primary endpoints reaches the significance level (p ≤ 0.025), the following procedure is generally followed:
The analysis below is from ChatGPT:
### 1. **Pre-Trial Planning:**
- **Statistical Analysis Plan (SAP):** Before the trial begins, a SAP should outline how the primary endpoints will be analyzed, including how the significance of the endpoints will be interpreted if only one meets the threshold.
- **Multiple Endpoints Consideration:** The SAP should also address how to adjust for multiple endpoints, such as through a Bonferroni correction, a hierarchical testing procedure, or a gatekeeping strategy, to control for Type I error (false positives).
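As a rough, generic illustration of what such an adjustment looks like in practice (the endpoint names and p-values below are invented and are not from any trial's SAP), Bonferroni and Holm corrections for two endpoints can be computed like this:

```python
# Toy illustration of multiplicity adjustment for two primary endpoints.
# The p-values are made up; real values would come from the SAP-defined analysis.
p_values = {"ADAS-Cog": 0.018, "ADCS-ADL": 0.041}
alpha = 0.05  # overall (familywise) significance level

# Bonferroni: test each endpoint at alpha / number_of_endpoints
bonferroni_level = alpha / len(p_values)
for name, p in p_values.items():
    print(f"Bonferroni: {name} significant? {p <= bonferroni_level} "
          f"(p={p}, threshold={bonferroni_level})")

# Holm step-down: compare the smallest p to alpha/m, the next to alpha/(m-1), ...
m = len(p_values)
for rank, (name, p) in enumerate(sorted(p_values.items(), key=lambda kv: kv[1])):
    threshold = alpha / (m - rank)
    if p <= threshold:
        print(f"Holm: {name} significant (p={p} <= {threshold:.4f})")
    else:
        print(f"Holm: {name} not significant (p={p} > {threshold:.4f}); stop testing")
        break
```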
### 2. **Interim Analysis (if applicable):**
- **Early Stopping Rules:** If the trial includes interim analyses, stopping rules for efficacy may be defined. If one endpoint reaches significance and the other does not, the Data Monitoring Committee (DMC) will consider whether to continue, modify, or stop the trial based on the pre-specified rules.
### 3. **Final Analysis of Primary Endpoints:**
- **Primary Endpoint 1 (Significant):** If one primary endpoint achieves a p-value ≤ 0.025, this result is considered statistically significant, suggesting the drug has an effect on this endpoint.
- **Primary Endpoint 2 (Not Significant):** If the second primary endpoint does not reach the significance threshold, it suggests that the drug may not be effective for this outcome.
### 4. **Handling the Results:**
- **Regulatory Implications:**
- **Approval Based on One Endpoint:** Regulatory agencies like the FDA or EMA will evaluate the significance and clinical relevance of the endpoint that met the threshold. If the successful endpoint is clinically critical, it may still support drug approval, particularly if the trial was designed with the expectation that meeting one of the two primary endpoints would be sufficient.
- **Consideration of Unmet Endpoint:** The fact that the second endpoint did not reach significance may lead regulators to request additional analyses, studies, or to limit the drug's indication.
- **Multiplicity Adjustments:** If the SAP included adjustments for multiple endpoints, these would be applied to ensure that the observed p-value for the successful endpoint is appropriately interpreted. This may involve correcting the p-value threshold for significance.
### 5. **Secondary and Exploratory Endpoint Analysis:**
- **Secondary Endpoints:** The analysis of secondary endpoints will proceed according to the SAP. The results of these endpoints might provide additional context or support for the findings on the primary endpoints.
- **Exploratory Analyses:** Post-hoc analyses may be conducted to explore why the second primary endpoint was not met, but these analyses are generally considered hypothesis-generating rather than confirmatory.
### 6. **Clinical and Statistical Interpretation:**
- **Clinical Relevance:** The clinical significance of the successful endpoint will be assessed, considering factors like effect size, safety, and the overall risk-benefit profile.
- **Publication and Reporting:** Results should be transparently reported, with a clear explanation of why only one endpoint was significant and the implications for clinical practice.
### 7. **Next Steps:**
- **Further Studies:** Depending on the importance of the second primary endpoint and the regulatory feedback, additional trials may be required to further explore the drug’s efficacy.
- **Labeling and Indications:** If approved, the drug's labeling may be specific to the condition related to the successful primary endpoint, with limitations noted regarding the second endpoint.
This procedure ensures a balanced approach to interpreting efficacy in a situation where only one of the two primary endpoints meets the significance threshold, with considerations for both statistical rigor and clinical relevance.
Is it necessary that ADCS-ADL co-primary endpoint be met?
Not according to the Company and this publication on NIH PUB. I have more confidence in AVXL’s experts than the mavens on this board. The authors have developed statistical methods to derive rules to evaluate endpoints collectively.
“Therefore, it may not be necessary to require all the co-primary endpoints to be statistically significant at the 1-sided 0.025 level to control the error rate….”
AVXL’s rationale for using alternate methods for evaluating co-primary endpoints may be explained here: NIH PUB
Evaluating co-primary endpoints collectively in clinical trials.
Often a treatment is assessed by co-primary endpoints so that a comprehensive picture of the treatment effect can be obtained. Co-primary endpoints can be different medical assessments angled at different aspects of a disease, therefore, are used collectively to strengthen evidence for the treatment effect. It is common sense that if a treatment is ineffective, the chance to show that the treatment is effective in all co-primary endpoints should be small. Therefore, it may not be necessary to require all the co-primary endpoints to be statistically significant at the 1-sided 0.025 level to control the error rate of wrongly approving an ineffective treatment. Rather it is reasonable to allow certain variation for the p-values within a range close to 0.025. In this paper, statistical methods are developed to derive decision rules to evaluate co-primary endpoints collectively. The decision rules control the error rate of wrongly accepting an ineffective treatment at the level of 0.025 for a study and the error rate at a slightly higher level for a treatment that works for all the co-primary endpoints except perhaps one. The decision rules also control the error rates for individual endpoints. Potential applications in clinical trials are presented.
https://pubmed.ncbi.nlm.nih.gov/19219905/
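To see why requiring both co-primary endpoints at one-sided 0.025 is conservative, and what the abstract means by "a slightly higher level for a treatment that works for all the co-primary endpoints except perhaps one," here is a toy Monte Carlo sketch of a hypothetical relaxed rule. This is my own illustration with a made-up rule, not the decision rules actually derived in the paper:

```python
# Toy Monte Carlo sketch of a hypothetical "collective" decision rule for two
# co-primary endpoints (illustration only, NOT the rule from the cited paper).
# Rule: success if both one-sided p-values <= 0.025, OR one <= 0.025 and the other <= 0.05.
import numpy as np

rng = np.random.default_rng(0)
n_sims = 200_000

def success(p1, p2):
    strict = (p1 <= 0.025) & (p2 <= 0.025)
    relaxed = ((p1 <= 0.025) & (p2 <= 0.05)) | ((p1 <= 0.05) & (p2 <= 0.025))
    return strict | relaxed

# Scenario A: treatment ineffective on both endpoints -> p-values ~ Uniform(0, 1)
pA1, pA2 = rng.uniform(size=n_sims), rng.uniform(size=n_sims)
print("False-success rate, both endpoints null:", success(pA1, pA2).mean())

# Scenario B: treatment clearly works on endpoint 1, null on endpoint 2
pB1 = rng.uniform(0, 0.001, size=n_sims)   # endpoint 1 nearly always significant
pB2 = rng.uniform(size=n_sims)
print("False-success rate, one endpoint null :", success(pB1, pB2).mean())
```

With both endpoints null, the relaxed rule wrongly declares success far less than 0.025 of the time; when one endpoint genuinely works and only the other is null, the error rate rises toward 0.05, matching the "slightly higher level" the abstract describes.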
The FUDSTERS don’t want you to know that the co-primary endpoints can be separated in a special statistical way to prove effectiveness of treatment and AVXL can get this approved with the data they have.
This paper, cited above, shows how AVXL may be using this method.
Massive finds
Very good drill results:
Thanks for the reply Investor2014. I wanted to show that an alternative method is possible here. Placebo is one of the two groups needed to run the comparison, so no problem; they are comparing two data samples. MWU has no problem with placebo.
Doc328, thanks for the reply. I wanted to show that they used MWU and that the data is also being analyzed as nonparametric, so there is an alternative exit.
The Mann-Whitney U test can produce odds ratios
Some people think they are doing just that:
https://www.ahajournals.org/doi/pdf/10.1161/STROKEAHA.113.003151?download=true
Effect Size Measures and Their Relationships in Stroke Studies
Volker W. Rahlfs, PhD, CStat; Helmuth Zimmermann, Dipl. Math.; Kennedy R. Lees, MD, FRC
Mann–Whitney Measure and Odds Ratio
The Mann–Whitney measure and the odds ratio (OR) are 2 sides of the same coin. One can be derived from the other, which is useful for the interpretation of study results; these in turn can be transformed into other measures of effect size, including standardized difference or NNT. The formula for obtaining MW from the OR is as follows, as could be demonstrated by our research team (H.Z. and V.R.):
MW = OR · [(OR − 1) − ln(OR)] / (OR − 1)²
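Taking the quoted formula at face value (numerator OR·[(OR − 1) − ln(OR)], denominator (OR − 1)²), a minimal sketch of the conversion looks like this; the example odds ratios are invented:

```python
# Sketch of the odds ratio -> Mann-Whitney measure conversion quoted above.
import math

def mann_whitney_from_or(odds_ratio: float) -> float:
    """Convert an odds ratio to the Mann-Whitney probability measure."""
    if odds_ratio == 1.0:
        return 0.5  # no treatment effect: 50:50 chance
    return odds_ratio * ((odds_ratio - 1.0) - math.log(odds_ratio)) / (odds_ratio - 1.0) ** 2

# Example odds ratios (made up, for illustration only):
for odds in (1.0, 1.5, 2.0, 3.0):
    print(f"OR = {odds:.1f}  ->  MW = {mann_whitney_from_or(odds):.3f}")
```

As a sanity check, the value is 0.5 when OR = 1 (no effect) and approaches 1 as OR grows, which is what a probability-type effect measure should do.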
This was a typo; the phrase should read:
one- and two-tailed t-tests, and it is more suited for nonparametric samples
The important takeaway is the use of the MWU test to crunch the data; it gives them flexibility to use small samples with missing data, and MWU is acceptable to the FDA. One can derive odds ratios, which it appears they are doing.
I suspect the MWU test is in the SAP, so the FDA is on board, and they used the Mann-Whitney U test in Slide #6 to correlate mRNA with positive ADCS-ADL scores. MWU is acceptable for nonparametric data and small sample sizes.
https://www.sciencedirect.com/sdfe/pdf/download/eid/3-s2.0-B9780123694928500145/first-page-pdf
9.1 Why Nonparametric Tests?
The methods studied in the previous chapter were mostly concerned with data from a normal distribution. In many situations the data may consist of a number of ordered categories such as a subjective rating of the amount of pain relief (none, a little, a lot, total) a patient perceives after receiving a treatment. In other cases the data may simply be the presence or absence of a condition. In such cases the investigator may be unwilling to use a numerical scale but still wants to test a hypothesis related to the effect of a treatment or to the effects of two different treatments. The sign test discussed in this chapter can be used for situations with two outcomes. Other methods in this chapter are used with ordered data or with numerical data that do not follow the normal distribution.
The methods for testing the mean and proportion in the previous chapter are based on normality assumptions. If there is obvious nonnormality in the data, distribution-free methods can be used. In some cases we may suspect that the data do not follow the normal distribution, but we cannot determine the lack of normality for sure because the sample size is too small. Distribution-free methods can be used then and are also often used for small samples when the central limit theorem may not apply.
https://www.cuemath.com/data/non-parametric-test/
Reasons to Use Non-Parametric Tests
It is important to assess when to apply parametric and non-parametric tests in order to arrive at the correct statistical inference. The reasons to use a non-parametric test are given below:
When the distribution is skewed, a non-parametric test is used. For skewed distributions, the mean is not the best measure of central tendency, hence, parametric tests cannot be used.
If the size of the data is too small then validating the distribution of the data becomes difficult. Thus, in such cases, a non-parametric test is used to analyze the data.
If the data is nominal or ordinal, a non-parametric test is used. This is because a parametric test can only be used for continuous data.
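As a concrete illustration of what such a test looks like in practice (the two groups below are invented score changes, not trial data), SciPy's mannwhitneyu can be run on small, unequal, non-normal samples:

```python
# Minimal sketch of a Mann-Whitney U test on two small, unequal samples.
# The numbers are invented for illustration; they are NOT trial data.
from scipy.stats import mannwhitneyu

treated = [-1.0, 0.5, 2.0, 3.5, 4.0, 6.0, 7.5]   # e.g. score change, drug arm
placebo = [-5.0, -3.0, -2.5, -2.0, -0.5, 1.0]    # e.g. score change, placebo arm

# One-sided test: is the treated group's distribution shifted upward vs placebo?
u_stat, p_value = mannwhitneyu(treated, placebo, alternative="greater")
print(f"U = {u_stat}, one-sided p = {p_value:.4f}")
```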
I suspect they are using MWU for all the data, not just slide #6 from JPM 1/12/23, as they have missing data and the groups are unpaired.
In this chapter, we introduced several of the more frequently used nonparametric tests for continuous data. The nonparametric tests are attractive because they do not require an assumption of the normal distribution. Even when the data do come from normal distributions, these nonparametric tests do not sacrifice much power in comparison to tests based on the normality assumption. Although these tests were designed to be used with continuous data, they are often used with ordered data as well. Their use with ordered data can create problems as there are likely to be more ties for ordered data than for continuous data. In the next chapter, we introduce methods for testing hypotheses about ordered or nominal data, as well as about continuous data that are grouped into categories.
https://www.sciencedirect.com/topics/mathematics/mann-whitney-u-test
It appears they are using the Mann-Whitney U test to crunch the data; it requires smaller sample sizes to produce p-values and can be used with missing data points. If this is true, all the criticism about one- and two-tailed t-tests is not correct. The Mann-Whitney U test can produce odds ratios and one- and two-tailed tests, and it is more suited for nonparametric samples.
The JPM conference of 1/12/23, page #6, reveals they used the Mann-Whitney U test to crunch the ADCS-ADL data to show S1 mRNA levels correlate with positive outcomes in ADCS-ADL scores.
https://www.sciencedirect.com/topics/mathematics/mann-whitney-u-test
It appears AVXL has used the Mann-Whitney U test instead of the t-test; odds ratios and two-tailed tests can be derived from it.
Slide #6 from the 1/12/23 JPM conference states they used the Mann-Whitney U test to correlate S1 mRNA with positive results in 10 mg to 50 mg patients. If they used it here, this must be in the SAP.
https://www.sciencedirect.com/topics/biochemistry-genetics-and-molecular-biology/mann-whitney-u-test
raja48185, have you seen Slide #6 from JPM 1/12/23? It reveals the data for 10 mg to 50 mg for n=20 patients show S1 mRNA correlates with positive outcome, p=0.015. They also reveal that the Mann-Whitney U test was used to crunch the data, not a 2-sample t-test.
https://www.sciencedirect.com/topics/biochemistry-genetics-and-molecular-biology/mann-whitney-u-test
Analyzing relationships between two variables
Joann G. Elmore MD, MPH, in Jekel's Epidemiology, Biostatistics, Preventive Medicine, and Public Health, 2020
3.1 Mann-Whitney U-test
The test for ordinal data that is similar to the two-sample t-test is the Mann-Whitney U-test (often referred to as the Wilcoxon rank-sum test). U, similar to t, designates a probability distribution. In the Mann-Whitney test, all the observations in a study of two samples (e.g., experimental and control groups) are ranked numerically from the smallest to the largest without regard to whether the observations came from the experimental group or from the control group. Next, the observations from the experimental group are identified, the values of the ranks in this sample are summed, and the average rank and the variance of those ranks are determined. The process is repeated for the observations from the control group.
The Mann-Whitney test defines the null hypothesis as a 50:50 chance that a randomly selected observation from one population (x) would be larger than an observation from the other population (y). If the null hypothesis is true, the average ranks of the two samples should be similar. If the average rank of one sample is considerably greater than that of the other sample, the null hypothesis probably can be rejected, but a test of significance is needed to be sure.
https://www.sciencedirect.com/topics/biochemistry-genetics-and-molecular-biology/mann-whitney-u-test
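To connect the description above to actual numbers, here is a small sketch (invented data) that pools the two groups, ranks every observation, sums the ranks per group, and derives the U statistic from the experimental group's rank sum:

```python
# Sketch of the ranking step described above: pool both groups, rank from smallest
# to largest, then sum the ranks within each group. Data are invented.
experimental = [12, 15, 18, 20]
control = [9, 11, 13, 16]

pooled = sorted((value, group)
                for group, values in (("exp", experimental), ("ctrl", control))
                for value in values)

rank_sums = {"exp": 0, "ctrl": 0}
for rank, (value, group) in enumerate(pooled, start=1):  # ranks 1..N (no ties here)
    rank_sums[group] += rank

n_exp = len(experimental)
# U for the experimental group: its rank sum minus the minimum possible rank sum.
u_exp = rank_sums["exp"] - n_exp * (n_exp + 1) / 2
print("Rank sums:", rank_sums, " U (experimental):", u_exp)
```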
Mann-Whitney used in place of t-test
ANAVEX LIFE SCIENCES
Presenting
Thursday, January 12, 2023 at 08:15 AM PST
[Webcast slide thumbnails 4–8; runtime 38:38]
No, Mann-Whitney can be used in place of the t-test and appears to be better for these complex variables.
https://www.statisticssolutions.com/free-resources/directory-of-statistical-analyses/mann-whitney-u-test-2/
georgejjl, what do you make of Slide #6? They used the Mann-Whitney U test, which according to the Statistics Solutions website can be used in place of the t-test, two-tailed or one-tailed. This seems to be the company’s tack in the SAP.
https://www.statisticssolutions.com/free-resources/directory-of-statistical-analyses/mann-whitney-u-test-2/
https://www.statisticssolutions.com/free-resources/directory-of-statistical-analyses/mann-whitney-u-test/
Slide 6 uses the Mann-Whitney U test on 10–50 mg patients; what's up with that?
Mann-Whitney U test is the non-parametric alternative test to the independent sample t-test. It is a non-parametric test that is used to compare two sample means that come from the same population, and used to test whether two sample means are equal or not. Usually, the Mann-Whitney U test is used when the data is ordinal or when the assumptions of the t-test are not met.
Sometimes the Mann-Whitney U is difficult to interpret because the results are presented in group rank differences rather than group mean differences.
Assumptions of the Mann-Whitney:
The Mann-Whitney U test is a non-parametric test, so it does not make assumptions about the distribution of scores. There are, however, some assumptions:
1. The sample drawn from the population is random.
2. Independence within the samples and mutual independence is assumed. That means that an observation is in one group or the other (it cannot be in both).
3. Ordinal measurement scale is assumed.
georgejjl the session time on Anavex web link is earlier.
Anavex .......today announced that it will present at the 41st Annual J.P. Morgan Healthcare Conference on Thursday, January 12th 2023, at the Westin St. Francis in San Francisco, CA. Christopher U Missling, PhD, President & Chief Executive Officer will present the Company in a session scheduled 08:15 AM - 08:55 AM (Pacific Time).
sab63090, abew4me is correct about the nutrient cocktail. I would add Tollovid by Todos Medical. My wife and I had mild Covid last Jan, and I believe it was mild because of those nutraceuticals and Tollovid. Tollovid is a 3CL protease inhibitor; we take it daily now to keep away Long Covid.
Grimmer is the key; it'll all be about biomarkers at the end of the day.
I have been here since 2015. I believe 2-73 works, but showing statistical significance with the present tools (ADAS-Cog, etc.) will not demonstrate significance.
My guess is Dr. Grimmer is the key; his knowledge of biomarkers is the direction for the next Phase 3 with biomarker endpoints.
amstock82's response to the FDA text:
..." Back to the text from the FDA that you highlighted in blue. The FDA is discussing the need to have more than one endpoint - i.e, have an ADAS-Cog and ADCS-ADL. It is not at all about what statistical test is used. In other words, Anavex uses ADAS-Cog Primary endpoint and ADCS-ADL co-Primary end-point therefore meeting the guidance of the FDA."
The cited FDA text:
Regarding the significance factor for multiple primary endpoints, the FDA guidance that has previously been posted explicitly covers co-primary endpoints for Alzheimer’s trials (see section copied below). It guides to use co-primary endpoints that both have to be met for the trial to be a success, and that therefore a significance level of 0.05 is appropriate for each individual co-primary endpoint. As far as I can see from the PR and the slides, AVXL haven’t said that they have co-primary endpoints. However, this guidance was published in Jan 2017 and the Ph2b/3 trial was first posted on clinicaltrials.gov in Jan 2019. So although AVXL doesn’t have a SPA or any such direct involvement from the FDA to rubber stamp the trial design before it started, I think it is fairly safe to assume that AVXL will have taken the guidance into account and specified the dual endpoints as co-primary. The fact that AVXL themselves seem to have used a significance criterion of p<0.05 would appear to confirm that co-primary endpoints are being used.
A second kind of circumstance in which a demonstration of an effect on two endpoints is needed is when there is a single identified critical feature of the disorder, but uncertainty as to whether an effect on the endpoint alone is clinically meaningful. In these cases, two endpoints are often used. One endpoint is specific for the disease feature intended to be affected by the drug but not readily interpretable as to the clinical meaning, and the second endpoint is clinically interpretable but may be less specific for the intended action of the test drug. A demonstration of effectiveness is dependent upon both endpoints showing a drug effect. One endpoint ensures the effect occurs on the core disease feature, and the other ensures that the effect is clinically meaningful.

An example illustrating this second circumstance is development of drugs for treatment of the symptoms of Alzheimer’s disease. Drugs for Alzheimer’s disease have generally been expected to show an effect on both the defining feature of the disease, decreased cognitive function, and on some measure of the clinical impact of that effect. Because there is no single endpoint able to provide convincing evidence of both, co-primary endpoints are used. One primary endpoint is the effect on a measure of cognition in Alzheimer’s disease (e.g., the Alzheimer’s Disease Assessment Scale-Cognitive Component), and the second is the effect on a clinically interpretable measure of function, such as a clinician’s global assessment or an Activities of Daily Living Assessment.

Trials of combination vaccines are another situation in which co-primary endpoints are applicable. These vaccine trials are typically designed and powered for demonstration of a successful outcome on effectiveness endpoints for each pathogen against which the vaccine is intended to provide protection.

As discussed in section II.E, multiplicity problems occur when there is more than one way to determine that the study is a success. When using co-primary endpoints, however, there is only one result that is considered a study success, namely, that all of the separate endpoints are statistically significant. Therefore, testing all of the individual endpoints at the 0.05 level does not cause inflation of the Type I error rate; rather, the impact of co-primary endpoint testing is to increase the Type II error rate. The size of this increase will depend on the correlation of the co-primary endpoints. In general, unless clinically very important, the use of more than two co-primary endpoints should be carefully considered because of the loss of power.
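A quick simulation (my own sketch; the effect size, sample size, and correlations are invented for illustration) shows the power loss the guidance describes: each endpoint is tested one-sided at 0.025 (equivalent to two-sided 0.05), and the chance of winning on both co-primaries is lower than on either alone, with the loss shrinking as the endpoints become more correlated:

```python
# Sketch: Type II error inflation for co-primary endpoints vs. their correlation.
# Effect size, sample size, and correlations are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_per_arm, effect, n_sims = 100, 0.3, 2000   # standardized effect 0.3 on both endpoints

for rho in (0.0, 0.5, 0.9):
    cov = np.array([[1.0, rho], [rho, 1.0]])
    wins_both = 0
    for _ in range(n_sims):
        drug = rng.multivariate_normal([effect, effect], cov, size=n_per_arm)
        plac = rng.multivariate_normal([0.0, 0.0], cov, size=n_per_arm)
        p1 = ttest_ind(drug[:, 0], plac[:, 0], alternative="greater").pvalue
        p2 = ttest_ind(drug[:, 1], plac[:, 1], alternative="greater").pvalue
        wins_both += (p1 <= 0.025) and (p2 <= 0.025)
    print(f"correlation {rho}: power to win on BOTH co-primaries ~ {wins_both / n_sims:.2f}")
```

The numbers are illustrative only; the point is the direction of the effect, not its size.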
amstocks82, on Investors Village, explains there is no need for two tails, as one tail showing significance obviates the other in these 2 populations:
amstock82
..... "But first to the test. With books of the t-test, when you have a change in two populations over time that are being tested, a right-handed t-test or left-handed t-test is used. This is essentially a one-sided t-test.
In the case we have, the population that took A2-73 changed from the population that took the placebo.This is apparent in the data. We want to prove if the change in the population is unlikely to be random (p .05 or better) or it is statistically significant.
In t-tests, it is classically used for a few things. The one we are interested in is to show that one group's mean is different from another group's mean. Or the group that took Anavex 2-73 changed from the group that took the placebo.
A one sample t-test in this case is a statistical test where the critical area of a distribution is one-sided so that the alternative hypothesis is accepted if the tested population parameter is either greater than or less than the original population or placebo population, but not both. To put it another way, the placebo population should reflect the normal untreated Alzheimer's disease population that is being tested. To make sure, (double-blind) random selection of treatment patients versus placebo patients occurred. The test population is taking Anavex 2-73. The hypothesis being tested is that the overall population of patients taking 30 or 50 mg of A 2-73 will show a reduction in cognitive decline. Therefore, the population taking A 2-73 essentially on the graph moves/changes in one direction from the population taking the placebo. One of the classic equations for this is below. Note, there are some that are a bit different, but the following is good.".....
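The "classic equation" referred to above was an image and is not reproduced here, but the statistic being described is presumably the standard two-sample t, t = (mean1 − mean2) / sqrt(s1²/n1 + s2²/n2), tested against a one-sided alternative. A minimal sketch with invented change scores (not trial data):

```python
# Sketch of the kind of one-sided two-sample t-test the post describes.
# Data are invented; Welch's version (unequal variances) is used here.
from scipy.stats import ttest_ind

a2_73_change = [-1.2, 0.3, 1.1, 2.5, 0.8, 1.9, -0.4, 2.2]   # e.g. change scores, drug arm
placebo_change = [-3.1, -2.4, -1.9, -0.2, -2.8, -1.1, -0.7]  # e.g. change scores, placebo arm

# One-sided alternative: the drug arm's mean change is greater than placebo's.
result = ttest_ind(a2_73_change, placebo_change, equal_var=False, alternative="greater")
print(f"t = {result.statistic:.2f}, one-sided p = {result.pvalue:.4f}")
```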
Clarifying the actual date for data may limit the rise in price till after Dec 1. They will need to leak tidbits of data or tease the market with PDD or Rett news, or we'll be in a range below $13.00.
Seems Dr. M is locking in 2-73 value beyond the normal FDA approval by building a patent moat around the pipeline. I'm sure we'll see the same for 3-71 or any other molecule we use in the future.
So glad you're getting better; hoping for your complete recovery! I'm a bit afraid to go off the daily supplement. My son is recovering and is on the daily supplement and thinks we should stay on it until something better comes along.
Which product were you taking? The difference in mg dose is huge: 600 mg for the Daily product and 11,800 mg/day for the 20-day high dose if you are Covid-positive. I've been positive since 1/21/22, taking the 20-day dose; very mild symptoms, age 78.
Being vaccinated does not reduce one's risk of hospitalization or reinfection, compared to natural infection.
This is probably why Israel is experiencing a large wave of reinfections.
The unvaccinated have better immunity, without question!
Dr. Campbell breaks it down in these 2 videos:
I tested Cov19 positive 1/21/22. My wife and I have been taking “Tollovid Daily,” a low-dose version of the clinical dose, since Nov/10/21. All my symptoms were mild; my wife never tested positive. She's with me constantly. I'm now taking the Tollovid high dose and believe it's working. I'm high risk, age 78; if God wills, I'll see 79 next week.
As a side note: I think I picked up Cov19 last year around Jan, with mild symptoms, but never tested positive. However, after two vaccine shots in March '21 I began to experience Irritable Bowel Syndrome starting in August 2021, 5 months after the vaccine shot. The IBS lasted 3 months, until after 4 days of Tollovir, and hasn't returned.
I may have been a Cov long hauler? Hope I don’t have to take Tollovir forever!
The skin has ACE2 receptors; the Cov19 virus attaches to ACE2.
https://idpjournal.biomedcentral.com/articles/10.1186/s40249-020-00662-x
https://crimsonpublishers.com/rpn/fulltext/RPN.000590.php
Full disclosure I have been vaccinated twice and am elderly (78 years)
You need to come to the realization that the vaccines have failed to prevent reinfection and spread. Up to 40% of the fully vaccinated are infecting others. Only a small percentage of the population, those having co-morbidities and who are unvaccinated, are dying or getting serious complications. Taken as a whole, ONLY 21.2 percent of the population is at risk of serious illness or dying; they are the ones needing the vaccine. The vaccines, with their present efficacy, are unjustified for 79% of us.
“With Delta, we saw 43 percent of patients needing to be hospitalized while just over 5 percent died. With Omicron, although it is still early, we are seeing just under 15 percent of patients need hospitalization, and thus far, just under 1 percent have died,” said Long
https://www.advisory.com/daily-briefing/2020/07/13/covid-risk
"The majority of people who become infected with coronavirus are not expected to become seriously ill, but a large segment of the U.S. adult population – one third (37.6 percent) of adults ages 18 and older – have a higher risk of serious illness if they do become infected due to their age or underlying medical condition."
Overall, only 21.2 percent of adults ages 18 to 64 are at risk for serious illness from COV19. This is among the unvaccinated.
https://www.kff.org/coronavirus-covid-19/issue-brief/how-many-adults-are-at-risk-of-serious-illness-if-infected-with-coronavirus/
Charted: Who's at highest risk of dying from Covid-19?
Study details
For the study, published in an early form on Wednesday in Nature, researchers analyzed data from the United Kingdom's National Health Service on 17,278,392 adults who were tracked for three months. During that time, 10,926 died from Covid-19, the disease caused by the novel coronavirus, or from complications related to the disease.
The researchers found that patients above the age of 80 were at least 20 times likelier to die from Covid-19 than patients in their 50s, and hundreds of times likelier to die from Covid-19 than patients younger than 40.
The researchers also found that men were about 59% more likely to die from Covid-19 than women. In addition, patients of racial and ethnic minorities—who made up around 11% of all patients tracked for the study—had a higher risk of dying from Covid-19 than white patients. The risk of dying from Covid-19 was especially high among Black and South Asian patients when compared with others, the researchers found.
The researchers also concluded that patients with underlying medical conditions—including respiratory disease, chronic heart disease, diabetes, and obesity—were more likely to die from Covid-19.
It is mutagenic to the HOST, according to this publication: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8136050/?fbclid=IwAR1BQ09Up_X1Kc4fAVt4Lr8j3u25nACyNNW-CsfrSM7RjFnMI5F-hogi0pc#sup2
The ambiguous base-pairing of rNHC after incorporation places it in the class of mutagenic compounds targeting incorporation into viral RNA (along with favipiravir [FAV], a base analog [6], and ribavirin [RBV], a ribonucleoside analog [7]). Here we considered the antiviral activity against SARS-CoV-2 of rNHC, FAV, and RBV in a head-to-head comparison of viral inhibition and the ability to induce mutations in the viral genome. Due to their mechanism of action, mutagenic ribonucleoside analogs could be metabolized by the host cell to the 2'-deoxyribonucleotide form by ribonucleotide reductase and then incorporated into DNA, leading to mutagenesis of the host. Thus, we also examined mutagenesis of host DNA using a modified hypoxanthine phosphoribosyltransferase (HPRT) gene mutation assay [8]. We found that rNHC has potent antiviral activity far beyond FAV and RBV but is also mutagenic to the host in the HPRT mutagenesis assay.
Hi NTBob, thanks for the heads up; been a long time. One must be patient with mining stocks; that post was from 2011!
Looks like a good deal for PLG fills the need for liquidity to move the other projects.
Phase II clinical trial data locked end of June; start of data analysis in late July
We should be hearing results very soon!
TORONTO, Aug. 05, 2021 (GLOBE NEWSWIRE) -- Arch Biopartners Inc. (“Arch” or the “Company”) (TSX Venture: ARCH and OTCQB: ACHFF), a clinical stage company developing new drug candidates for treating organ damage caused by inflammation, today provided an update that the analysis of the results of the Phase II trial of its lead drug LSALT peptide (Metablok) is ongoing and will be disclosed to the public following third-party, scientific peer review.
After the recruitment of the last patient into the trial in early May, the Company’s Phase II data team reconciled the patient data collected from the seven clinical sites that participated in the trial. The database containing the trial data was then locked at the end of June, which enabled the start of data analysis in late July. All of the Phase II clinical sites in Canada, the U.S. and Turkey have been officially closed.
In Phase II of COVID-19 clinical trials
https://clinicaltrials.gov/ct2/show/NCT04402957
From 8K, period ending 06/30/21
ARCH BIOPARTNERS INC.
Notes to Condensed Interim Consolidated Financial Statements Nine Months Ended June 30, 2021 and 2020 (Unaudited - See Notice of No Auditor Review)
1. DESCRIPTION OF OPERATIONS
Arch Biopartners Inc. (the “Company”) is a portfolio based biotechnology company focused on the development of innovative technologies that have the potential to make a significant medical or commercial impact. The Company works closely with the scientific community, universities and research institutions to advance and build the value of select preclinical technologies, develop the most promising intellectual property, and create value for its investors.
At present, the Company is focused on the clinical development of its lead drug candidate Metablok TM.
• Metablok TM - or ‘LSALT peptide’, has the potential to treat or prevent dipeptidase-1 (DPEP-1) mediated organ inflammation in the lungs, liver or kidneys which often results in organ damage or failure, including in the case of sepsis and COVID-19;
The Company has three additional technology platforms in its portfolio under development:
• AB569 - a new drug candidate for treating or preventing antibiotic resistant bacterial infections, primarily in the lungs, and wounds;
• Borg: Peptide-Solid Surface Interface - binding of proprietary peptides to solid metal and plastic surfaces to inhibit biofilm formation and reduce corrosion; and
• MetaMx TM - proprietary synthetic molecules that target brain tumour initiating cells and invasive glioma cells.
The Company owns, or has exclusive licensing rights on the intellectual property ("IP") emanating from the programs listed above.
The corporate headquarters are located in Toronto, Ontario.
Merger; 1-for-75 reverse split after trading today
8K filed today
https://www.otcmarkets.com/filing/conv_pdf?id=15265726&guid=eETwknHfISvOvth
LYRA, ready to move into Phase III with Significant Efficacy (p=0.03) in a $50 B market, 80 million patients in China alone
https://investors.lyratherapeutics.com/news-releases/news-release-details/lyra-therapeutics-announces-positive-outcome-end-phase-2-meeting
Analyst: William Blair Focus Conference
https://wsw.com/webcast/blair59/lyra/1950784