Dear all,

Today the EMA published the “Draft guideline on the responsibilities of the sponsor with regard to handling and shipping of investigational medicinal products for human use in accordance with Good Clinical Practice and Good Manufacturing Practice”. The consultation ends on 31 August 2018.

Also have a look at the applicable guidelines on Good Distribution Practice (GDP).

Hi John,

I'm afraid so, yes: the lower test strengths have to pass in vitro comparative dissolution testing vs. the test strength of the biobatch (the one which passed the BE study vs. the RLD).
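For reference, the f2 similarity factor behind these comparisons can be sketched in R; the profiles below are made-up numbers, just to show the arithmetic:

```r
# f2 similarity factor for two dissolution profiles;
# R and T are mean % dissolved at the same time points (hypothetical data)
f2 <- function(R, T) {
  msd <- sum((R - T)^2) / length(R)  # mean squared difference
  50 * log10(100 / sqrt(1 + msd))
}
R <- c(41, 62, 78, 89, 95)           # reference profile
T <- c(38, 59, 76, 88, 94)           # test profile
round(f2(R, T), 2)                   # 80.91; f2 >= 50 indicates similarity
```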

regards,

Osama

Hi libaiyi,

For two tests, it could.

Try this (conventional BE-limits, T/R-ratio 0.95, target power 0.8, 2×2×2 crossover):

```r
library(PowerTOST)

CV1   <- 0.3 # Cmax
CV2   <- 0.2 # AUC0-t
n1    <- sampleN.TOST(CV=CV1, print=FALSE)[["Sample size"]]
n2    <- sampleN.TOST(CV=CV2, print=FALSE)[["Sample size"]]
n     <- max(n1, n2)
nsims <- 1e6 # simulations per correlation
rho.min  <- 0.25
rho.max  <- 0.75
rho.step <- 0.25
rho <- c(1 - .Machine$double.eps,     # rho must be strictly < 1
         seq(rho.max, rho.min, -rho.step),
         .Machine$double.eps)         # and strictly > 0
pwr0.1 <- power.TOST(CV=CV1, n=n)
pwr0.2 <- power.TOST(CV=CV2, n=n)
pwr1   <- numeric(length(rho))
res1   <- character(length(rho))
f      <- "%.4f"
for (j in seq_along(rho)) {
  pwr1[j] <- power.2TOST(CV=c(CV1, CV2), n=n, rho=rho[j], nsims=nsims)
  res1[j] <- paste0("\n  rho = ", sprintf(f, rho[j]), ": ",
                    sprintf(f, pwr1[j]))
}
res0 <- paste0("\nCV1: ", sprintf(f, CV1), ", CV2: ", sprintf(f, CV2),
               "\nSample size based on max(CV1, CV2) = ",
               sprintf(f, max(CV1, CV2)), ": ", n,
               "\nPower for two independent TOSTs",
               "\n  PK metric with CV = ", sprintf(f, CV1), ": ",
               sprintf(f, pwr0.1),
               "\n  PK metric with CV = ", sprintf(f, CV2), ": ",
               sprintf(f, pwr0.2),
               "\n  overall power (p1\u00D7p2)  : ",
               sprintf(f, pwr0.1 * pwr0.2),
               "\nPower for two simultaneous TOSTs (",
               formatC(nsims, format="d", big.mark=","),
               " simulations each)")
cat(res0, paste0(res1), "\n")
```

```
CV1: 0.3000, CV2: 0.2000
Sample size based on max(CV1, CV2) = 0.3000: 40
Power for two independent TOSTs
  PK metric with CV = 0.3000: 0.8158
  PK metric with CV = 0.2000: 0.9848
  overall power (p1×p2)  : 0.8035
Power for two simultaneous TOSTs (1,000,000 simulations each)
  rho = 1.0000: 0.8162
  rho = 0.7500: 0.8151
  rho = 0.5000: 0.8111
  rho = 0.2500: 0.8071
  rho = 0.0000: 0.8041
```

If ρ = 1, power is approximately that of the TOST for the PK metric with the higher CV. Implicitly, people assume a perfect correlation when estimating the sample size based on the PK metric with the higher CV.

If ρ = 0, power ≈ p1×p2 of the two independent TOSTs.
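A quick sketch of that worst case: if the k TOSTs were independent, the overall power would be the product of the individual powers, so each metric would need power target^(1/k). This is plain arithmetic, not a simulation:

```r
# With k independent TOSTs, overall power = product of individual powers;
# hence each metric needs power target^(1/k) to hit the overall target
target <- 0.80
sapply(2:3, function(k) round(target^(1/k), 4))  # 0.8944 (k=2), 0.9283 (k=3)
# Conversely, 0.92 for each of three independent metrics gives only
round(0.92^3, 4)                                 # 0.7787, below the target
```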

Likely similar for three tests, since AUCt and AUCinf are highly correlated.

Another example. Since CV1 scratches the limit of HVD(P)s, it might be a good idea to be cautious and assume a T/R-ratio of 0.90 (and, if you are courageous, a nicer one for AUC of 0.925). Let’s try a 4-period full replicate design:

```r
library(PowerTOST)

CV1      <- 0.3 # Cmax
CV2      <- 0.2 # AUC
design   <- "2x2x4"
theta0.1 <- 0.90
theta0.2 <- 0.925
n1 <- sampleN.TOST(CV=CV1, theta0=theta0.1, design=design,
                   print=FALSE)[["Sample size"]]
n2 <- sampleN.TOST(CV=CV2, theta0=theta0.2, design=design,
                   print=FALSE)[["Sample size"]]
n  <- max(n1, n2)
nsims    <- 1e6
rho.min  <- 0.25
rho.max  <- 0.75
rho.step <- 0.25
rho <- c(1 - .Machine$double.eps,
         seq(rho.max, rho.min, -rho.step),
         .Machine$double.eps)
pwr0.1 <- power.TOST(CV=CV1, theta0=theta0.1, n=n, design=design)
pwr0.2 <- power.TOST(CV=CV2, theta0=theta0.2, n=n, design=design)
pwr1   <- numeric(length(rho))
res1   <- character(length(rho))
f      <- "%.4f"
for (j in seq_along(rho)) {
  pwr1[j] <- power.2TOST(CV=c(CV1, CV2), theta0=c(theta0.1, theta0.2),
                         n=n, design=design, rho=rho[j], nsims=nsims)
  res1[j] <- paste0("\n  rho = ", sprintf(f, rho[j]), ": ",
                    sprintf(f, pwr1[j]))
}
res0 <- paste0("\nCV1     : ", sprintf(f, CV1),
               ", CV2     : ", sprintf(f, CV2),
               "\ntheta0.1: ", sprintf(f, theta0.1),
               ", theta0.2: ", sprintf(f, theta0.2),
               "\nSample size based on max(CV1, CV2) = ",
               sprintf(f, max(CV1, CV2)), ": ", n, " (", design, " design)",
               "\nPower for two independent TOSTs",
               "\n  PK metric with CV = ", sprintf(f, CV1), ": ",
               sprintf(f, pwr0.1),
               "\n  PK metric with CV = ", sprintf(f, CV2), ": ",
               sprintf(f, pwr0.2),
               "\n  overall power (p1\u00D7p2)  : ",
               sprintf(f, pwr0.1 * pwr0.2),
               "\nPower for two simultaneous TOSTs (",
               formatC(nsims, format="d", big.mark=","),
               " simulations each)")
cat(res0, paste0(res1), "\n")
```

```
CV1     : 0.3000, CV2     : 0.2000
theta0.1: 0.9000, theta0.2: 0.9250
Sample size based on max(CV1, CV2) = 0.3000: 40 (2x2x4 design)
Power for two independent TOSTs
  PK metric with CV = 0.3000: 0.8100
  PK metric with CV = 0.2000: 0.9985
  overall power (p1×p2)  : 0.8088
Power for two simultaneous TOSTs (1,000,000 simulations each)
  rho = 1.0000: 0.8101
  rho = 0.7500: 0.8105
  rho = 0.5000: 0.8102
  rho = 0.2500: 0.8096
  rho = 0.0000: 0.8094
```

With the less optimistic theta0.2 <- 0.90 for both metrics instead:

```
CV1     : 0.3000, CV2     : 0.2000
theta0.1: 0.9000, theta0.2: 0.9000
Sample size based on max(CV1, CV2) = 0.3000: 40 (2x2x4 design)
Power for two independent TOSTs
  PK metric with CV = 0.3000: 0.8100
  PK metric with CV = 0.2000: 0.9819
  overall power (p1×p2)  : 0.7953
Power for two simultaneous TOSTs (1,000,000 simulations each)
  rho = 1.0000: 0.8101
  rho = 0.7500: 0.8089
  rho = 0.5000: 0.8041
  rho = 0.2500: 0.7992
  rho = 0.0000: 0.7958
```


Thank you for the reply. I am afraid I did not state it clearly. I still want to clarify: do you mean that for the estimation of the sample size the power needs to be calculated as

Overall power = (power of AUCt × power of AUCinf × power of Cmax),

like 0.8 = (0.92 × 0.92 × 0.92)?

And could it not be simplified to overall power = (power of AUCt × power of Cmax), to decrease the power needed for each, given the lack of ρ?

Thanks again.

Hi guys and girls,

Question: when one does the f2 comparisons for a biowaiver on lower strengths, they are usually test strength 1 vs. RLD strength 1, test strength 2 vs. RLD strength 2, etc. If one passes these comparisons but fails f2 for some of the lower-strength vs. highest-strength comparisons of the test products themselves, would that be an issue?

Thanks

John

Thank you guys.

John

Hi Ohlbe and Helmut,

I kinda like the format with the requirements being tabulated. Easier to find things...

I knew that 7% ISR is not enough and they admitted it in a seminar later.

John

:rotfl:

Hi Biostats,

see also this post #3 --> 16205, specifically:

If you are interested in regulatory requirements, please consult the guidance collection first.

If you can’t find what you’re looking for, then start a new thread and hopefully others will find that thread useful in the future. If it is clear that you have done basic background research, you are far more likely to get an informative response.

It seems that some of our members do not take this advice seriously enough. One hint: the average number of replies in the category Regulatives / Guidelines is less than three. In other words, many posts are essentially of the type “Where can I find regulation X?” with the reply “There!”… Please do your homework first.

Guidance, Section 2.1:

The measured drug content of the lots of the test and reference products, used in the study (expressed as percent of the label claim) should be within 5% of each other. Certificates of analysis documenting potency should be generated within 6 months prior to the start of the study.

In exceptional cases where a reference batch with a measured drug content differing less than 5% from the test product cannot be found, potency correction may be accepted. If potency correction is to be used, this intention should be pre-specified in the protocol and justified. The results from the potency assay of the test and reference products should be submitted. In such cases, the applicable bioequivalence standards should be met on both potency-corrected and uncorrected data.
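A minimal sketch of the usual potency-correction arithmetic (all numbers below are hypothetical): the point estimate and its confidence limits are multiplied by the ratio of the measured contents, reference over test. Check the applicable guidance before relying on this:

```r
# Hypothetical measured contents, % of label claim
content.T <- 98.0
content.R <- 102.5
corr <- content.R / content.T   # correction factor R/T
# hypothetical uncorrected point estimate and 90% CI of the T/R ratio
pe <- 0.9200; lo <- 0.8600; hi <- 0.9850
round(100 * corr * c(lower=lo, PE=pe, upper=hi), 2)
# lower 89.95, PE 96.22, upper 103.02
```

As the guidance requires, the uncorrected results would be reported alongside these corrected values.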

Are you

Dear All,

How do we calculate potency-corrected BE results for a Canadian submission?

Also, in what situations are potency-corrected BE results required?

Is there any formula or reference literature?

Please guide me.

Dear EKY,

Well, a bracketing approach validates whatever is between the brackets... If you use HQC and LQC working solutions you will not validate the stability of working solutions below the LQC or above the HQC. So I would definitely recommend using the LLOQ and ULOQ.

This may seem illogical compared with the stability in matrix, where the LQC and HQC are used rather than the LLOQ and ULOQ. But there is no other choice: when you analyse LLOQ samples, part of your results will be BLQ. Same for ULOQ samples: part of the results will be above the ULOQ. You would need to extrapolate on both sides of the curve, with no information on the resulting precision and accuracy. Not something you want to do to generate pivotal information.

Hi libaiyi,

AUCt and AUCinf are indeed highly correlated.

If you think about sample size estimation I would go a step further and consider only the PK metric with the highest variability, which generally* is the one of Cmax. You can use `power.2TOST()` of the R package `PowerTOST` to explore various correlations (ρ). This issue is a little bit academic because ρ is rarely known.

* An early truncated partial AUC can be highly variable as well.

Hi, all

I want to ask about power calculation. In BE, Cmax, AUCt, and AUCinf are all needed, and the power of each needs to be bigger than the overall power because the individual powers multiply. But AUCinf and AUCt are highly correlated, so could I decrease the individual power and consider only AUCt and Cmax when setting the power?

Thanks in advance.

The "Guideline on bioanalytical method validation" (EMEA/CHMP/EWP/192217/2009 Rev. 1 Corr. 2, 21 July 2011), topic "4.1.9 Stability", states about the stability of stock and working solutions: 'It is not necessary to study the stability at each concentration level of working solutions and a bracketing approach can be used.'

Which concentrations should be used for the bracketing study in this experiment? Do they need to be at the ULOQ and LLOQ, or not?

Can the high and low QC concentrations (HQC, LQC) be used for this experiment, as for the other stability evaluations (in matrix)?

Edit: category changed [Ohlbe]

Dear balakotu,

ANVISA has adopted the EMA's approach. However, to use this approach you need to convince them that your drug is highly variable. I hope I have helped you.

Best Regards

Weidson C. de Souza

Edit: Post moved. [Helmut]

Dear ElMaestro,

I don't think it would prevent re-analysing samples as part of an investigation (e.g. to identify a root cause or an explanation for anything weird), as long as:

- the repeat result is not used for PK calculations

- the repeats are clearly identified in advance as investigation repeats, not to be used for PK

- the whole process is described in an SOP.

Well... Yes and no...

You may not find an assignable cause for the IS variation (was it a spiking error? A blocked or dysfunctional SPE column? Strict application of Murphy's Law?). But IS variations could be considered an assignable cause for analytical repeats.

Unfortunately I was not able to attend the WRIB this year. They had a full day on IS variation, with folks from the FDA and EU agencies. Last year there was a presentation from the FDA which talked about the importance of monitoring the IS response. Actually the guideline states

Thanks Ohlbe,

for this document.

Now a million dollar question:

Are you, on the basis of the wording on page 13, allowed to do a repeat analysis as part of an investigation? Is, for example, an absent or low IS response (without obviously poor chromatography) itself an assignable cause? If it is, then that would negate the need for an investigation, right?

Common sense takes me a bit in the opposite direction: I would any day prefer to talk about reported and unreported repeats on the basis of the presence of a cause.

Hi Ohlbe,

THX! What a monster!

At least replaced

Looked over the fence and realised that it is a good idea to assess carry-over. Still the whacky recovery…

Hi Irene,

The ones we compared in our paper should do (except Kinetica, of course). I guess that at least Statistica, SPSS, Stata, and JMP (

If you can afford the license, no.

Hi Kotu,

Replicate designs – although for average bioequivalence – have been acceptable to ANVISA for ages (they are even mentioned in the guidance).

If you mean reference-scaling, the latter (actually the WHO’s). See this post --> 16334. Confirmed by a Brazilian colleague earlier this month.