Hi Yatish Gosai

and all posters on this thread,

to be honest I think this is a great, great question, and one that made me think very hard for quite a while.

The mere fact that such a question is asked should give rise to a lot of reflection on the part of regulators. What is obvious to a regulator or someone from one geographical region is obviously not obvious to others. At the end of the day guidelines are not intended solely for regulators but for all involved parties.

So here's something to work with.

These are not "opinions" (i.e., subject to matters of taste or open to discussion); these are facts. And it's amazing (to say the least) that such a question arises in the first place in a forum of BA/BE experts…

Thanks for the valuable opinion.

Yatish Gosai

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! [Helmut]


Hi Yatish,

Look up the definition of an impartial witness:

A literate person, who is independent of the research and would not be unfairly influenced by people involved with the study, who attends the informed consent process if the participant […] cannot read, and understand the informed consent form and any other written information supplied to the participant.

Dear Yatish Gosai,

I could not find a definition of an "impartial witness" in the New Drugs and Clinical Trials Rules, 2019. ICH GCP § 1.26 defines the impartial witness as quoted above.

In a Phase 3 trial I would say yes, as there is no financial benefit. In a BA/BE trial, one may object that the husband has a direct interest in having his wife participate (he will benefit from the money). The husband is not impartial as he may be influenced by the financial incentive… The wife's consent may not be freely given (particularly in the case of illiterate subjects, which is the situation where you would need an impartial witness, and where wives are used to obeying their husbands).

Can a husband act as an impartial witness for the screening and enrollment of his wife in a BA/BE study?

Yatish Gosai

QA Professional

Edit: Category changed; see also this post #1 --> 16205. [Helmut]

Dear All,

Population bioequivalence (PBE) analysis is required for in-vitro studies as per FDA guidance for a few products, e.g., Iron Sucrose Injection, Cyclosporine Ophthalmic Emulsion, Azelastine Nasal Spray, etc.

For performing a PBE analysis there are two statistical methods:

1. One-sided PBE

2. Traditional PBE

Which of the two methods is preferable?

If the study meets the criteria as per the one-sided PBE analysis but does not meet the criteria as per the traditional PBE analysis, will that study be accepted by the regulators?

Which method for PBE analysis will the regulators accept?

Thanks & Regards

Kotu.
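For orientation, the traditional (aggregate) PBE criterion can be sketched in a few lines of R, using the constants of the FDA's 2001 statistical guidance. Everything below is a made-up illustration of the arithmetic only: all means and variances are invented, a real analysis needs the 95% upper confidence bound of the criterion (not just the point estimate), and product-specific guidances for in-vitro PBE may use different constants.

```r
# Aggregate (traditional) PBE criterion, constant- or reference-scaled.
# All numbers are hypothetical; they only illustrate the arithmetic.
mT <- log(100); mR <- log(102) # log-scale means of Test and Reference
s2T <- 0.045;   s2R <- 0.040   # total (log-scale) variances of T and R
sigma0 <- 0.2                  # regulatory standardizing SD (sigma_T0)
thetaP <- (log(1.25)^2 + 0.02) / sigma0^2 # aggregate PBE limit, ~1.745
denom  <- max(s2R, sigma0^2)   # reference-scaled only if s2R > sigma0^2
crit   <- ((mT - mR)^2 + s2T - s2R) / denom
crit <= thetaP                 # point estimate passes in this toy example
```

Whether an agency accepts the one-sided variant instead is exactly the regulatory question above; the code does not answer that.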

```r
# Get the data according to Shuanghe
emadata <- "https://dl.dropboxusercontent.com/u/7685360/permlink/emabe4p.csv"
ema     <- read.csv(file = emadata, stringsAsFactors = FALSE)
```

Dear Detlew,

Could you share the .csv raw data file used in the program? I am not able to download it from the link used in the program.

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! Subject line changed; see also this post #2 --> 16205. [Helmut]

Dear all,

by chance I discovered that in the East African Community (Burundi, Kenya, Rwanda, South Sudan, Tanzania, and Uganda) ABEL for C_{max} …

Hello, I need your advice please on Naproxen Oral Suspension.

Should the BE study be conducted with a 500 mg dose or a 25 mg dose? The strength of the product is 25 mg/mL. We intend to submit the data to the USA.

Thanks a lot.

Dear Helmut, dear Olivbood,

To repeat the recommendation in this post --> 19956:

We can mimic the df's, at least

(TaaTP: Two-at-a-Time Principle)

But it seldom makes a difference worth thinkin' about:

```r
library(PowerTOST)
sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)
#   Design alpha  CV theta0 theta1 theta2 Sample size Achieved power Target power
# 1  2x2x2  0.05 0.3   0.95    0.8   1.25          52      0.9019652          0.9
sampleN.TOST(CV=0.3, design="3x6x3", targetpower=0.9, robust=TRUE, print=FALSE)
#   Design alpha  CV theta0 theta1 theta2 Sample size Achieved power Target power
# 1  3x6x3  0.05 0.3   0.95    0.8   1.25          54      0.9112411          0.9
```

Two subjects more, but only due to the balanced design, i.e., the sample size has to be a multiple of 6.

If we go for an unbalanced design, we also have a power of 90% with 52 subjects:

```r
power.TOST(CV=0.3, design="3x6x3", robust=TRUE, n=52)
# Unbalanced design. n(i)=9/9/9/9/8/8 assumed.
# [1] 0.9005174
```
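As a package-free cross-check of these numbers, the power of a single 2×2×2 TOST can be approximated by brute force. A minimal Monte-Carlo sketch (assumptions: lognormal data, CV 0.30, θ0 0.95, n 52, as above):

```r
# Monte-Carlo power of TOST in a 2x2x2 crossover (sketch, no packages)
set.seed(42)
n  <- 52; CV <- 0.30; theta0 <- 0.95
df <- n - 2
sigma <- sqrt(log(CV^2 + 1))          # within-subject SD on the log scale
se    <- sigma * sqrt(2 / n)          # SE of the T-R difference (balanced)
nsim  <- 1e5
pe <- rnorm(nsim, log(theta0), se)    # simulated point estimates
s2 <- sigma^2 * rchisq(nsim, df) / df # simulated variance estimates
hw <- qt(0.95, df) * sqrt(s2 * 2 / n) # half-width of the 90% CI
pass <- (pe - hw >= log(0.8)) & (pe + hw <= log(1.25))
mean(pass)                            # close to the 0.90 reported above
```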

Hi Olivbood,

`sampleN.TOST(design="3x6x3", ...)[["Sample size"]]` would give me the sample size necessary for a power of 90% to detect BE for at least one given comparison (either A vs B or C vs B) if I evaluated all data at the same time, while `sampleN.TOST(design="2x2x2", ...)[["Sample size"]]` would give me the sample size for a given comparison if I evaluated the data two at a time…

Correct.

Oops! So no incentive in terms of sample size in this example.

If n = `sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)[["Sample size"]]`, then `power.TOST(CV=0.3, design="2x2x2", n=n) * power.TOST(CV=0.3, design="2x2x2", n=n)` would evaluate to less than 0.9.

The other way ’round. If you assume no correlation, overall power ≈ power of the part driving the sample size (higher CV and/or worse PE). If correlation = 1, then overall power = power…

Try `sampleN.2TOST()` in this case…

```r
library(PowerTOST)
CV.2     <- CV.1     <- 0.30
theta0.2 <- theta0.1 <- 0.95
target   <- 0.90
# rho = 1 throws an error in earlier versions
if (packageVersion("PowerTOST") > "1.4.7") {
  rho <- c(0, 1)
} else {
  rho <- c(0, 1 - .Machine$double.eps)
}
res <- data.frame(fun=c(rep("sampleN.TOST", 2), rep("sampleN.2TOST", 2)),
                  CV.1=rep(CV.1, 4), CV.2=rep(CV.2, 4),
                  theta0.1=rep(theta0.1, 4), theta0.2=rep(theta0.2, 4),
                  rho=signif(rep(rho, 2), 4), n1=NA, n2=NA, n=0,
                  pwr.1=NA, pwr.2=NA, pwr=0, stringsAsFactors=FALSE)
n.1 <- sampleN.TOST(CV=CV.1, theta0=theta0.1, targetpower=target,
                    design="2x2x2", print=FALSE)[["Sample size"]]
n.1 <- n.1 + (6 - n.1 %% 6) %% 6 # round up to the next multiple of 6
n.2 <- sampleN.TOST(CV=CV.2, theta0=theta0.2, targetpower=target,
                    design="2x2x2", print=FALSE)[["Sample size"]]
n.2 <- n.2 + (6 - n.2 %% 6) %% 6
n     <- max(n.1, n.2)
pwr.1 <- power.TOST(CV=CV.1, theta0=theta0.1, design="2x2x2", n=n)
pwr.2 <- power.TOST(CV=CV.2, theta0=theta0.2, design="2x2x2", n=n)
# rho = 0
if (n.1 > n.2) {
  pwr <- pwr.1
} else {
  pwr <- pwr.2
}
res[1, 7:12] <- c(n.1, n.2, n, pwr.1, pwr.2, pwr)
# rho = 1
pwr <- pwr.1 * pwr.2
res[2, 7:12] <- c(n.1, n.2, n, pwr.1, pwr.2, pwr)
for (j in seq_along(rho)) {
  x <- suppressWarnings(sampleN.2TOST(CV=c(CV.1, CV.2),
                                      theta0=c(theta0.1, theta0.2),
                                      targetpower=target, rho=rho[j], nsims=1e6,
                                      print=FALSE))
  res[2+j, "n"]   <- x[["Sample size"]]
  res[2+j, "pwr"] <- x[["Achieved power"]]
}
print(res, row.names=FALSE)
```

… which gives

```
           fun CV.1 CV.2 theta0.1 theta0.2 rho n1 n2  n     pwr.1     pwr.2       pwr
  sampleN.TOST  0.3  0.3     0.95     0.95   0 54 54 54 0.9117921 0.9117921 0.9117921
  sampleN.TOST  0.3  0.3     0.95     0.95   1 54 54 54 0.9117921 0.9117921 0.8313649
 sampleN.2TOST  0.3  0.3     0.95     0.95   0 NA NA 64        NA        NA 0.9093240
 sampleN.2TOST  0.3  0.3     0.95     0.95   1 NA NA 52        NA        NA 0.9018890
```

Maybe the ratios and CVs are not identical. Try:

```r
CV.1     <- 0.20 # fasting might be lower
CV.2     <- 0.30 # fed/fasting might be higher
theta0.1 <- 0.95 # optimistic
theta0.2 <- 0.90 # might be worse
```

Then you end up with this:

```
           fun CV.1 CV.2 theta0.1 theta0.2 rho n1  n2   n     pwr.1   pwr.2       pwr
  sampleN.TOST  0.2  0.3     0.95      0.9   0 30 114 114 0.9999994 0.91402 0.9140200
  sampleN.TOST  0.2  0.3     0.95      0.9   1 30 114 114 0.9999994 0.91402 0.9140195
 sampleN.2TOST  0.2  0.3     0.95      0.9   0 NA  NA 108        NA      NA 0.9004570
 sampleN.2TOST  0.2  0.3     0.95      0.9   1 NA  NA 108        NA      NA 0.9000410
```

Depends on what you want. If you want to show similarity of both (BE and lacking food effects), no. If not, base the sample size on the BE-part.

Correct. Has some strange side-effects (see there --> 19946).

No. These tests aim at completely different targets. Hence, the order is not relevant (though in practice, if the BE part fails, the fed/fasting comparison is history).

Not sure what you mean here.

We are developing a prolonged release tablet for the European market. This product has a single strength. The therapeutic dose is achieved with this single strength by increasing the number of tablets per day, given in three divided doses. The dose should be attained slowly, by a step-wise increase of one tablet every two weeks.

We are required to conduct a single dose fasting, a single dose fed, and a multiple dose study in the fed condition. Since this product has a single strength and the dose is increased to the therapeutic level step-wise, can we conduct the multiple dose study at the given strength, considering it the sensitive strength?

Edit: Please follow the Forum’s Policy. [Helmut]

Dear Imad(ov),

The EMA GL says the test biobatch should be representative of the product to be marketed.

As long as the biobatch is less than 100,000 units, scaling up is not allowed.

Best regards,

Osama

The question is:

If we prepare a quantity of granules for the biobatch equivalent to 100,000 finished product units and compress only about 30% of the final blend of granules (i.e., we get about 30,000 to 35,000 tablets without completing the compression stage, while continuing with the film coating and the other QC testing of the produced tablets), can we consider this compressed quantity a representative batch, justified for use in the bioequivalence study?

Thanks

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 --> 16205! [Helmut]

Thanks again for your answers Helmut, that's really helpful!

Unfortunately I am not sure I completely understand the following:

In `sampleN.TOST()` use the argument `design="2x2x2"`, not `design="3x6x3"`. You will get a small reward:

```r
sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9,
             print=FALSE)[["Sample size"]]
```

`sampleN.TOST(CV=0.3, design="3x6x3", targetpower=0.9, print=FALSE)[["Sample size"]]`

would give me the sample size necessary for a power of 90% to detect BE for at least one given comparison (either A vs B or C vs B) if I evaluated all data at the same time, while `sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)[["Sample size"]]`

would give me the sample size for a given comparison if I evaluated the data two at a time (keeping in mind that I should potentially increase the result so that the sample size is a multiple of 6). However, what should I do if I want the sample size for a power of 90% to detect BE for both comparisons? If I assume that there is no correlation between the comparisons, and let n =

`sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)[["Sample size"]]`

, then `power.TOST(CV=0.3, design="2x2x2", n=n) * power.TOST(CV=0.3, design="2x2x2", n=n)` would evaluate to less than 0.9.

Is it correct to calculate the sample size based on only one comparison?

Furthermore, while the two parts of the trial will be evaluated as incomplete block designs, it seems to me that the original sequences and periods are preserved (e.g., an observation from period 3 is still coded as period 3), so the degrees of freedom would not be the same as for the conventional 2×2×2 crossover, no?

Lastly, when you said no multiplicity adjustment procedure was needed, did you imply that I should specify that BE between the clinical and marketed formulations will be tested first, before the food effect (i.e., hierarchical testing)? This is just to be in accordance with EMA and FDA.

Hi Olivbood,

My pleasure.

No idea. If you have variability/ies you could use an upper confidence limit of the CV (that’s what I do). The higher the sample size of the previous study/ies, the lower the uncertainty of the CV and hence, the tighter the CI of the CV. Various approaches in `PowerTOST`:

```r
library(PowerTOST)
CV     <- 0.25    # estimated in previous study
n      <- 24      # its (total) sample size
theta0 <- 0.95    # assumed T/R-ratio
target <- 0.90    # target (desired) power
design <- "2x2x2" # or "paired"
alpha  <- c(NA, 0.05, 0.20, 0.25, NA) # 0.05: default, 0.25: Gould
if (design == "2x2x2") df <- n - 2 else df <- n - 1
alpha.not.NA <- which(!is.na(alpha))
res <- data.frame(CV=rep(CV, length(alpha)), df=rep(df, length(alpha)),
                  approach=c("'carved in stone'",
                             paste0("CI.", 1:length(alpha.not.NA)),
                             "Bayesian"), alpha=alpha, upper.CL=NA, n=NA,
                  power=NA, stringsAsFactors=FALSE)
res$approach[alpha.not.NA] <- paste0(100*(1-alpha[alpha.not.NA]), "% CI")
for (j in seq_along(alpha)) {
  if (j == 1) {               # 'carved in stone'
    x <- sampleN.TOST(CV=CV, theta0=theta0, targetpower=target,
                      print=FALSE)
  } else if (j < nrow(res)) { # based on upper CL of CV
    res$upper.CL[j] <- signif(CVCL(CV=CV, df=df,
                                   alpha=res$alpha[j])[["upper CL"]], 4)
    x <- sampleN.TOST(CV=res$upper.CL[j], theta0=theta0, targetpower=target,
                      print=FALSE)
  } else {                    # Bayesian
    x <- expsampleN.TOST(CV=CV, theta0=theta0, targetpower=target,
                         prior.type="CV", prior.parm=list(df=df),
                         print=FALSE)
  }
  res$n[j]     <- x[["Sample size"]]
  res$power[j] <- signif(x[["Achieved power"]], 4)
}
print(res)
```

```
    CV df          approach alpha upper.CL  n  power
1 0.25 22 'carved in stone'    NA       NA 38 0.9089
2 0.25 22            95% CI  0.05   0.3379 66 0.9065
3 0.25 22            80% CI  0.20   0.2919 50 0.9051
4 0.25 22            75% CI  0.25   0.2836 48 0.9085
5 0.25 22          Bayesian    NA       NA 42 0.9039
```

The default `alpha` of the function `CVCL()` calculates a one-sided 95% CI. Pretty conservative (row 2 of the table). In the spirit of a producer’s risk of 20% I try to convince my clients to use an 80% CI (row 3). Gould* recommends a 75% CI (row 4). You can walk the Bayesian route as well (row 5).

Up to you. ;-)
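For the record, the chi-squared construction behind `CVCL()` can be reproduced without the package (a minimal sketch; the result matches the upper.CL of the 95% CI row in the table above):

```r
# Upper confidence limit of a CV from the chi-squared distribution
CV <- 0.25; df <- 22; alpha <- 0.05
s2 <- log(CV^2 + 1)                      # log-scale variance
s2.upper <- s2 * df / qchisq(alpha, df)  # upper (1-alpha) CL of the variance
CV.upper <- sqrt(exp(s2.upper) - 1)
round(CV.upper, 4)                       # 0.3379, as in the table
```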

Well… If already known that you will write sumfink like

Unfortunately your story does not end here. Actually that’s only the start. Power (and therefore, the sample size) is sensitive to your assumptions:

```r
library(PowerTOST)
CV     <- 0.25    # estimated in previous study
n      <- 24      # its (total) sample size
theta0 <- 0.95    # assumed T/R-ratio
target <- 0.90    # target (desired) power
design <- "2x2x2" # or "paired"
x <- pa.ABE(CV=CV, theta0=theta0,
            targetpower=target, design=design)
plot(x, pct=FALSE)
print(x)
```

… which gives

```
Sample size plan ABE
 Design alpha   CV theta0 theta1 theta2 Sample size Achieved power
  2x2x2  0.05 0.25   0.95    0.8   1.25          38      0.9088902

Power analysis
CV, theta0 and number of subjects which lead to min. acceptable power of at least 0.7:
 CV= 0.3362, theta0= 0.9064
 N = 23 (power= 0.7173)
```

You have no data about fed/fasting so far. What now? With the Bayesian approach you could take the uncertainty of the PE into account as well:

```r
library(PowerTOST)
CV     <- 0.25    # estimated in previous study
n      <- 24      # its (total) sample size
theta0 <- 0.95    # assumed T/R-ratio
target <- 0.90    # target (desired) power
design <- "2x2x2" # or "paired"
expsampleN.TOST(CV=CV, theta0=theta0, targetpower=target,
                prior.type="both", prior.parm=list(m=n, design=design),
                details=FALSE)
```

```
++++++++++++ Equivalence test - TOST ++++++++++++
    Sample size est. with uncertain CV and theta0
-------------------------------------------------
Study design: 2x2 crossover
log-transformed data (multiplicative model)

alpha = 0.05, target power = 0.9
BE margins = 0.8 ... 1.25
Ratio = 0.95 with 22 df
CV = 0.25 with 22 df

Sample size (ntotal)
   n   exp. power
  98     0.901208
```

No. You have to show BE for all PK metrics with α 0.05 each; the Intersection-Union Test (IUT) maintains the type I error. Just estimate the sample size for the nastiest PK metric. I feel a little bit guilty since I was such a nuisance to Detlew and Benjamin. They worked hard to implement `power.2TOST()`. The idea sounds great but the correlation of, say, AUC and C_{max} …

Yes.

- Gould AL. *Group Sequential Extensions of a Standard Bioequivalence Testing Procedure.* J Pharmacokinet Biopharm. 1995;23(1):57–86. doi:10.1007/BF02353786.

Hi Helmut,

Thanks for your instructive answer!

I see, would you know of any guidance or "good practice" on how to determine this safety margin?

That's interesting! As far as I know no BE study is planned in the fed state (since the clinical formulation is only administered in the fasting state), but I'll pass on the note.

No idea either :)

I see, so I would only need to power the study for (supposedly) C_{max}, and not use, for instance, the function `power.2TOST()` to calculate the power of the two TOST procedures for C_{max} and AUC (either 0–∞ or 0–t)?

So, no need to perform a multiplicity adjustment, is that right? I tried to find some publications on the issue, but since it is neither a joint decision rule nor a multiple decision rule situation, so far no luck…

Thanks again for your help!

Hi Olivbood,

Some obstacles. That was a…

The most important CV is the one of C_{max}.

In the crossover you will have one degree of freedom less than in the paired design (the CI will be wider). Granted, peanuts.

Now it gets nasty. In many cases the variability in fed state is (much) higher than in fasted state. I have seen too many studies of generics (sorry) where the fasting study passed (“We perform it first because that’s standard.”) – only to face a failed one in fed state. Oops! Hence, I always recommend to perform the fed study first. For you that’s tricky since you have only data of fasted state. What about a pilot or a two-stage design? I prefer the latter because with the former you throw away information. In a TSD you can stop the arms which are already BE in the first stage (likely the fasting part) and continue the others to the second stage.

- EMA
  - B *vs.* A: AUC_{0–t} and C_{max}. If modified release, *additionally* AUC_{0–∞} (once you cross flip-flop PK with k_{a} ≤ k_{e}, the late part of the profile represents absorption).
  - C *vs.* B: As above.
- FDA
  - B *vs.* A: AUC_{0–t}, AUC_{0–∞}, and C_{max}.
  - C *vs.* B: Tricky. I always prefer AUC_{0–t} over AUC_{0–∞} due to its intrinsically lower variability. I never understood why the FDA asks for AUC_{0–∞} *at all*. There are a couple of papers (one even by authors of the FDA…) showing that once absorption is essentially complete (for IR 2×t_{max}), the PE is stable and only its variability increases. I would initiate a controlled correspondence arguing in this direction (what does *“when appropriate”* mean?).

*“Or C_{max}”* is strange. I can only *speculate* that the FDA prefers C_{max} for MR formulations (where dose-dumping is more likely than for IR). But what about efficacy? Why not AUC *and* C_{max}? I’m afraid you have to clarify that with the FDA as well. For the sample size estimation it’s not relevant since you have to power the study for the BE part anyhow.

BTW, you were referring to the FDA’s guidance of Dec 2002 (page 7). The current draft guidance for INDs/NDAs of Feb 2019 (page 9) states “… **and** C_{max}”.


If you are not within the limits (C_{max} …

You only have to observe the one with the highest variability / largest deviation from unity. Yep, doable in `PowerTOST`. ;-)

Although you plan for a 3×6×3 Williams’ design, the two parts will be evaluated separately. In `sampleN.TOST()` use the argument `design="2x2x2"`, not `design="3x6x3"`. You will get a small reward:

```r
library(PowerTOST)
sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9,
             print=FALSE)[["Sample size"]]
# [1] 52
sampleN.TOST(CV=0.3, design="3x6x3", targetpower=0.9,
             print=FALSE)[["Sample size"]]
# [1] 54
```

You do. See also this thread --> 18888.

Yep, that’s fine.]]>