Olivbood
France,
2019-05-08 18:16

Posting: # 20273

 Sample Size Calculation for Drug Effect and Food Effect study [Power / Sample Size]

Hello Everyone,

I would like to perform a sample size calculation for a bioequivalence study which would aim at assessing drug effect and food effect at the same time (in order to bridge the clinical formulation with the market formulation) with a 3-treatment Williams design (3×6×3).

The study would include 3 treatments:

A = the clinical formulation (reference) under fasting status,
B = the market formulation (test 1) under fasting status,
C = the market formulation (test 2) under fed status

The goal of the study is to assess the FE on the market formulation in addition to BE between the market and clinical formulations, so that we would test C vs B and A vs B.

Primary PK parameters for the study would be AUC0-inf, AUC0-t, and Cmax, for which I have intra-subject CV estimates from a previous single-sequence study in which subjects received a single dose of the clinical formulation in the first period and then repeated doses of the clinical formulation in the second period.

My first question would be: do I need to power the study to get average bioequivalence for each of the PK parameters, or is one sufficient?

The FDA guidance "Food-Effect Bioavailability and Fed Bioequivalence Studies" notes that:

"For an NDA, an absence of food effect on BA is not established if the 90 percent CI for the ratio of population geometric means between fed and fasted treatments, based on log-transformed data, is not contained in the equivalence limits of 80-125 percent for either AUC0-inf (AUC0-t when appropriate) or Cmax".

If I had to compute power to get bioequivalence for all the PK parameters, how should I proceed? If possible using the package PowerTOST.

Then, since I am interested in the comparisons C vs B and A vs B, how should I proceed to get power estimates, likewise using the package PowerTOST?

If I remember correctly the EMA advises analysing the comparisons one at a time, so should I perform a multiplicity adjustment? Since only the marketed formulation would then be prescribed to the patients I would not use a multiplicity correction, but since I plan to interpret both comparisons independently I'm not sure...

Many thanks for your input!
Helmut
Vienna, Austria,
2019-05-08 20:02

@ Olivbood
Posting: # 20274

 Tricky question, lengthy answer

Hi Olivbood,

❝ […] bioequivalence study which would aim at assessing drug effect and food effect at the same time (in order to bridge the clinical formulation with the market formulation) with a 3-Treatment Williams Design (3x6x3).


❝ A = the clinical formulation (reference) under fasting status,

❝ B = the market formulation (test 1) under fasting status,

❝ C = the market formulation (test 2) under fed status


❝ The goal of the study is to assess the FE on the market formulation in addition to BE between the market and clinical formulations, so that we would test C vs B and A vs B.


❝ Primary PK parameters for the study would be AUC0-inf, AUC0-t, and Cmax, for which I have intra-subject CV estimates from a previous single-sequence study where subjects took the clinical formulation in a single dose for the first period, and then took repeated doses of the clinical formulation in the second period.


Some obstacles. That was a paired design. Not unusual, but you have to assume no period effect. This comparison only makes sense for AUC0–τ (steady state) vs. AUC0–∞ (SD), i.e., if you were within some pre-specified limits you demonstrated linear PK. Since the CVintra of AUC0–∞ generally is larger than the one of AUC0–t, no worries. On the other hand, what you got from this study is pooled from AUC0–∞ and AUC0–τ, where quite often the variability in steady state is lower than the one after a SD. Hence, allow for a safety margin.
The most important CV is the one of Cmax. Here I would allow for an even larger safety margin (in steady state its variability might be substantially lower than after a SD). In other words, the lower CV in steady state dampens the pooled CV and the one you will observe after a SD likely will be higher.
In the crossover you will have one degree of freedom less than in the paired design (the CI will be wider). Granted, peanuts.
Now it gets nasty. In many cases the variability in fed state is (much) higher than in fasted state. I have seen too many studies of generics (sorry) where the fasting study passed (“We perform it first because that’s standard.”) – only to face a failed one in fed state. Oops! Hence, I always recommend to perform the fed study first. For you that’s tricky since you have only data of fasted state. What about a pilot or a two-stage design? I prefer the latter because with the former you throw away information. In a TSD you can stop the arms which are already BE in the first stage (likely the fasting part) and continue the others to the second stage.
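As an aside, the cost of that single lost degree of freedom can be checked directly in PowerTOST; the CV and sample size below are made-up illustration values, not estimates from the study discussed here:

```r
library(PowerTOST)

# Made-up illustration values: CV 25%, 24 subjects, default theta0 = 0.95
p.paired <- power.TOST(CV = 0.25, n = 24, design = "paired") # df = n - 1
p.xover  <- power.TOST(CV = 0.25, n = 24, design = "2x2x2")  # df = n - 2
round(c(paired = p.paired, crossover = p.xover), 4)
```

The two powers differ only marginally — hence, peanuts.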

❝ My first question would be: do I need to power the study to get average bioequivalence for each of the PK parameters, or is one sufficient?


❝ The FDA guidance "Food-Effect Bioavailability and Fed Bioequivalence Studies" notes that:


❝ "For an NDA, an absence of food effect on BA is not established if the 90 percent CI for the ratio of population geometric means between fed and fasted treatments, based on log-transformed data, is not contained in the equivalence limits of 80-125 percent for either AUC0-inf (AUC0-t when appropriate) or Cmax".

  • EMA
    • B vs. A
AUC0–t and Cmax. If modified release, additionally AUC0–∞ (once you cross flip-flop PK with ka ≤ ke, the late part of the profile represents absorption).
    • C vs. B
      As above.
  • FDA
    • B vs. A
      AUC0–t, AUC0–∞, and Cmax.
    • C vs. B
Tricky. I always prefer AUC0–t over AUC0–∞ due to its intrinsically lower variability. I never understood why the FDA asks for AUC0–∞ at all. There are a couple of papers (one even by authors from the FDA…) showing that once absorption is essentially complete (for IR, 2×tmax), the PE is stable and only its variability increases. I would initiate a controlled correspondence arguing in this direction (what does “when appropriate” mean?).
“Or Cmax” is strange. I can only speculate that the FDA prefers Cmax for MR formulations (where dose-dumping is more likely than for IR). But what about efficacy? Why not AUC and Cmax? I’m afraid you have to clarify that with the FDA as well. For the sample size estimation it’s not relevant, since you have to power the study for the BE part anyhow.
      BTW, you were referring to the FDA’s guidance of Dec 2002 (page 7). The current draft guidance for INDs/NDAs of Feb 2019 (page 9) states “… and Cmax”.
If you are not BE (B vs. A), bad luck.
If you are not within the limits (C vs. B) the food effect goes into the SmPC (EMA) or the label (FDA).

❝ If I had to compute power to get bioequivalence for all the PK parameters, how should I proceed? If possible using the package PowerTOST.


You only have to observe the one with the highest variability / largest deviation from unity. Yep, doable in PowerTOST. ;-)
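A minimal sketch of that strategy (the three CVs are invented placeholders, not the estimates from the previous study): estimate the sample size for each PK metric separately and power the study for the largest one.

```r
library(PowerTOST)

# Invented CVs for illustration; Cmax is typically the most variable metric
CVs <- c(Cmax = 0.30, AUC0.t = 0.22, AUC0.inf = 0.25)
n   <- sapply(CVs, function(cv)
         sampleN.TOST(CV = cv, theta0 = 0.95, targetpower = 0.90,
                      design = "2x2x2", print = FALSE)[["Sample size"]])
n       # per-metric sample sizes
max(n)  # the most variable metric (here Cmax) drives the study: 52
```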

❝ Then, since I am interested in the comparisons C vs B and A vs B, how should I proceed to get power estimates, likewise using the package PowerTOST?


Although you plan for a 3×6×3 Williams’ design, the two parts will be evaluated as incomplete block designs (IBD), having the same degrees of freedom as the conventional 2×2×2 crossover. Hence, in sampleN.TOST() use the argument design="2x2x2" and not design="3x6x3". You will get a small reward:

library(PowerTOST)
sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9,
             print=FALSE)[["Sample size"]]
[1] 52
sampleN.TOST(CV=0.3, design="3x6x3", targetpower=0.9,
             print=FALSE)[["Sample size"]]
[1] 54


❝ If I remember correctly the EMA advises to analyse the comparisons one at a time,…


You do. See also this thread.

❝ … so should I perform a multiplicity adjustment? Since only the marketed formulation would then be prescribed to the patients I would not use multiplicity correction, but since I plan to interpret both comparisons independently I'm not sure...


Yep, that’s fine.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Olivbood
France,
2019-05-08 21:20

@ Helmut
Posting: # 20275

 Tricky question, lengthy answer

Hi Helmut,

Thanks for your instructive answer!

❝ The most important CV is the one of Cmax. Here I would allow for an even larger safety margin (in steady state its variability might be substantially lower than after a SD). In other words, the lower CV in steady state dampens the pooled CV and the one you will observe after a SD likely will be higher.

❝ In the crossover you will have one degree of freedom less than in the paired design (the CI will be wider). Given, peanuts.


I see, would you know of any guidance or "good practice" on how to determine this safety margin?

❝ Now it gets nasty. In many cases the variability in fed state is (much) higher than in fasted state. I have seen too many studies of generics (sorry) where the fasting study passed (“We perform it first because that’s standard.”) – only to face a failed one in fed state. Oops! Hence, I always recommend to perform the fed study first. For you that’s tricky since you have only data of fasted state. What about a pilot or a two-stage design? I prefer the latter because with the former you throw away information. In a TSD you can stop the arms which are already BE in the first stage (likely the fasting part) and continue the others to the second stage.


That's interesting! As far as I know no BE study is planned in the fed state (since the clinical formulation is only administered in the fasting state), but I'll pass on the note.

❝ (what does “when appropriate” mean?).


No idea either :)

❝ You only have to observe the one with the highest variability / largest deviation from unity. Yep, doable in PowerTOST. ;-)


I see, so I would only need to power the study for (presumably) Cmax, and not use, for instance, the function power.2TOST to calculate the power of the two TOST procedures for Cmax and AUC (either 0-inf or 0-t)?

❝ ❝ Then, since I am interested in the comparisons C vs B and A vs B, how should I proceed to get power estimates, likewise using the package Power.TOST?


❝ ❝ … so should I perform a multiplicity adjustment? Since only the marketed formulation would then be prescribed to the patients I would not use multiplicity correction, but since I plan to interpret both comparisons independently I'm not sure...


❝ Yep, that’s fine.


So, no need to perform a multiplicity adjustment, is that right? I tried to find some publications on the issue, but since it is neither a joint decision rule nor a multiple decision rule situation, so far no luck...

Thanks again for your help!
Helmut
Vienna, Austria,
2019-05-09 02:47

@ Olivbood
Posting: # 20276

 Tricky question, lengthy answer

Hi Olivbood,

❝ Thanks for your instructive answer !


My pleasure.

❝ ❝ The most important CV is the one of Cmax. Here I would allow for an even larger safety margin […].


❝ I see, would you know of any guidance or "good practice" on how to determine this safety margin?


No idea. If you have variability/ies you could use an upper confidence limit of the CV (that’s what I do). The higher the sample size of the previous study/ies, the lower the uncertainty of the CV and hence, the tighter the CI of the CV. Various approaches in PowerTOST:

library(PowerTOST)
CV     <- 0.25    # Estimated in previous study
n      <- 24      # Its (total) sample size
theta0 <- 0.95    # Assumed T/R-ratio
target <- 0.90    # Target (desired) power
design <- "2x2x2" # or "paired"
alpha  <- c(NA, 0.05, 0.20, 0.25, NA) # 0.05: default, 0.25: Gould
if (design == "2x2x2") df <- n-2 else df <- n-1
alpha.not.NA <- which(!is.na(alpha))
res <- data.frame(CV=rep(CV, length(alpha)), df=rep(df, length(alpha)),
                  approach=c("\'carved in stone\'",
                           paste0("CI.", 1:length(alpha.not.NA)),
                           "Bayesian"), alpha=alpha, upper.CL=NA, n=NA,
                  power=NA, stringsAsFactors=FALSE)
res$approach[alpha.not.NA] <- paste0(100*(1-alpha[alpha.not.NA]),"% CI")
for (j in seq_along(alpha)) {
  if (j == 1) {               # 'carved in stone'
    x <- sampleN.TOST(CV=CV, theta0=theta0, targetpower=target,
                      print=FALSE)
  } else if (j < nrow(res)) { # based on upper CL of CV
    res$upper.CL[j] <- signif(CVCL(CV=CV, df=df,
                              alpha=res$alpha[j])[["upper CL"]], 4)
    x <- sampleN.TOST(CV=res$upper.CL[j], theta0=theta0, targetpower=target,
                      print=FALSE)
  } else {                    # Bayesian
    x <- expsampleN.TOST(CV=CV, theta0=theta0, targetpower=target,
                         prior.type="CV", prior.parm=list(df=df),
                         print=FALSE)
  }
  res$n[j]     <- x[["Sample size"]]
  res$power[j] <- signif(x[["Achieved power"]], 4)
}
print(res)

    CV df          approach alpha upper.CL  n  power
1 0.25 22 'carved in stone'    NA       NA 38 0.9089
2 0.25 22            95% CI  0.05   0.3379 66 0.9065
3 0.25 22            80% CI  0.20   0.2919 50 0.9051
4 0.25 22            75% CI  0.25   0.2836 48 0.9085
5 0.25 22          Bayesian    NA       NA 42 0.9039

The 1st row gives the ‘carved in stone’ results (i.e., one believes that in the next study the CV will be like in the previous study – or lower). Used by many but risky and IMHO, not a good idea.
The default alpha of function CVCL() calculates a one-sided 95% CI. Pretty conservative (2nd row).
In the spirit of a producer’s risk of 20% I try to convince my clients to use an 80% CI (3rd row).
Gould* recommends a 75% CI (4th row).
You can walk the Bayesian route as well (5th row).
Up to you. ;-)

❝ ❝ Now it gets nasty. In many cases the variability in fed state is (much) higher than in fasted state. [:blahblah:].


❝ That's interesting ! As far as I know no BE study is planned in the fed state (since the clinical formulation is only administered in fasting state), but I'll pass on the note.


Well… If already known that you will write sumfink like “X has to be administered on an empty stomach” in the SmPC/label, why bother about trying to demonstrate lacking food effects? If you find one, write in the SmPC/label “Food de/increases the absorption of X by y%”. If you want – only for marketing purposes (better compliance) – to write “Food does not influence the absorption of X. However, administration on an empty stomach is recommended.” you have to power the study to be within the limits in order to support such a claim.

Unfortunately your story does not end here. Actually that’s only the start. Power (and therefore, the sample size) is much more dependent on the PE than on the CV. Try this:

library(PowerTOST)
CV     <- 0.25    # Estimated in previous study
n      <- 24      # Its (total) sample size
theta0 <- 0.95    # Assumed T/R-ratio
target <- 0.90    # Target (desired) power
design <- "2x2x2" # or "paired"
x      <- pa.ABE(CV=CV, theta0=theta0,
                 targetpower=target, design=design)
plot(x, pct=FALSE)
print(x)

Sample size plan ABE
 Design alpha   CV theta0 theta1 theta2 Sample size Achieved power
  2x2x2  0.05 0.25   0.95    0.8   1.25          38      0.9088902

Power analysis
CV, theta0 and number of subjects which lead to min. acceptable power of at least 0.7:
 CV= 0.3362, theta0= 0.9064
 N = 23 (power= 0.7173)


You have no data about fed/fasting so far. What now? With the Bayesian approach you could take the uncertainty of the PE into account as well:

library(PowerTOST)
CV     <- 0.25    # Estimated in previous study
n      <- 24      # Its (total) sample size
theta0 <- 0.95    # Assumed T/R-ratio
target <- 0.90    # Target (desired) power
design <- "2x2x2" # or "paired"
expsampleN.TOST(CV=CV, theta0=theta0, targetpower=target,
                prior.type="both", prior.parm=list(m=n, design=design),
                details=FALSE)

++++++++++++ Equivalence test - TOST ++++++++++++
  Sample size est. with uncertain CV and theta0
-------------------------------------------------
Study design:  2x2 crossover
log-transformed data (multiplicative model)

alpha = 0.05, target power = 0.9
BE margins = 0.8 ... 1.25
Ratio = 0.95 with 22 df
CV = 0.25 with 22 df

Sample size (ntotal)
 n   exp. power
98   0.901208

Crazy sample size, but still speculative because you have no clue about the food effect yet.

❝ ❝ You only have to observe the one with the highest variability / largest deviation from unity. Yep, doable in PowerTOST. ;-)


❝ I see, so I would only need to power the study for (supposedly) Cmax, and not use for instance the function power.2TOST to calculate the power of two TOST procedure for Cmax and AUC (either 0-inf or 0-t)?


No. You have to show BE for all PK metrics with α 0.05 each; the Intersection-Union Test (IUT) maintains the type I error. Just estimate the sample size for the nastiest PK metric. I feel a little bit guilty since I was such a nuisance to Detlew and Benjamin. They worked hard to implement power.2TOST(). The idea sounds great, but the correlation of, say, AUC and Cmax is unknown in practice. See here and there. For a while we were even considering removing it from PowerTOST…
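To see why no adjustment is needed, evaluate the ‘power’ of a single TOST at a true ratio sitting exactly on the upper BE limit — that is its type I error; with the IUT the overall type I error cannot exceed it. The CV and n below are only illustrative:

```r
library(PowerTOST)

# Type I error of one TOST: probability of passing when theta0 = 1.25,
# i.e., the true ratio lies exactly on the upper BE limit (illustrative CV/n)
TIE <- power.TOST(CV = 0.30, n = 52, theta0 = 1.25, design = "2x2x2")
TIE  # close to, and not above, the nominal alpha = 0.05
```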

❝ So, no need to perform a multiplicity adjustment, is that right?


Yes.


  • Gould AL. Group Sequential Extensions of a Standard Bioequivalence Testing Procedure. J Pharmacokinet Biopharm. 1995;23(1):57–86. doi:10.1007/BF02353786.

Olivbood
France,
2019-05-10 23:11

(edited by Olivbood on 2019-05-11 19:40)
@ Helmut
Posting: # 20277

 Tricky question, lengthy answer

Thanks again for your answers Helmut, that's really helpful!

Unfortunately I am not sure I completely understand the following:

❝ Although you plan for a 3×6×3 Williams’ design, the two parts will be evaluated as incomplete block designs (IBD), having the same degrees of freedom as the conventional 2×2×2 crossover. Hence, in sampleN.TOST() use the argument design="2x2x2" and not design="3x6x3". You will get a small reward:

library(PowerTOST)

❝ sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9,

❝              print=FALSE)[["Sample size"]]

[1] 52

❝ sampleN.TOST(CV=0.3, design="3x6x3", targetpower=0.9,

❝              print=FALSE)[["Sample size"]]

[1] 54


If my understanding is correct,
sampleN.TOST(CV=0.3, design="3x6x3", targetpower=0.9, print=FALSE)[["Sample size"]] would give me the sample size necessary for a power of 90% to detect BE for at least one given comparison (either A vs B or C vs B) if I evaluated all data at the same time, while sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)[["Sample size"]] would give me the sample size for a given comparison if I evaluated the data two at a time (keeping in mind that I should potentially increase the result so that the sample size is a multiple of 6).

However, what should I do if I want the sample size for a power of 90% to show BE for both comparisons? If I assume that there is no correlation between the comparisons, and let n = sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)[["Sample size"]], then:
power.TOST(CV=0.3, design="2x2x2", n=n) * power.TOST(CV=0.3, design="2x2x2", n=n) would evaluate to less than 0.9.
Is it correct to calculate the sample size based on only one comparison?

Furthermore, while the two parts of the trial will be evaluated as incomplete block designs, it seems to me that the original sequences and periods are preserved (e.g. an observation from period 3 is still coded as period 3), so that the degrees of freedom would not be the same as for the conventional 2x2x2 crossover, no?


Lastly, when you said no multiplicity adjustment procedure was needed, was it implied that I should specify that BE between the clinical and marketed formulations will be tested first, before the food effect (i.e. hierarchical testing)? It is just to be in accordance with the EMA and FDA.
Helmut
Vienna, Austria,
2019-05-14 16:11

@ Olivbood
Posting: # 20281

 Tricky question, lengthy answer

Hi Olivbood,

❝ If my understanding is correct,

sampleN.TOST(design="3x6x3", ...[["Sample size"]] would give me the sample size necessary for a power of 90% to detect BE for at least one given comparison (either A vs B or C vs B) if I evaluated all data at the same time, while sampleN.TOST(design="2x2x2", ...)[["Sample size"]] would give me the sample size for a given comparison if I evaluated the data two at a time…


Correct.

❝ … (keeping in mind that I should potentially increase the result so that the sample size is a multiple of 6).


Oops! So no incentive in terms of sample size in this example.

❝ However, what should I do if I want the sample size for a power of 90% to show BE for both comparisons? If I assume that there is no correlation between the comparisons, and let n = sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)[["Sample size"]], then:

❝ power.TOST(CV=0.3, design="2x2x2", n=n) * power.TOST(CV=0.3, design="2x2x2", n=n) would evaluate to less than 0.9.


Correct so far. If you assume no correlation, both parts must pass independently, so overall power = power1 × power2 (less than 0.9, as you note). If the correlation = 1, overall power ≈ the power of the part driving the sample size (higher CV and/or worse PE). I suggest to use the function sampleN.2TOST in this case…
library(PowerTOST)
CV.2 <- CV.1 <- 0.30
theta0.2 <- theta0.1 <- 0.95
target   <- 0.90
# rho 1 throws an error in earlier versions
if (packageVersion("PowerTOST") > "1.4.7") {
  rho    <- c(0, 1)
} else {
  rho    <- c(0, 1-.Machine$double.eps)
}
res      <- data.frame(fun=c(rep("sampleN.TOST", 2), rep("sampleN.2TOST", 2)),
                       CV.1=rep(CV.1, 4), CV.2=rep(CV.2, 4),
                       theta0.1=rep(theta0.1, 4), theta0.2=rep(theta0.2, 4),
                       rho=signif(rep(rho, 2), 4), n1=NA, n2=NA, n=0,
                       pwr.1=NA, pwr.2=NA, pwr=0, stringsAsFactors=FALSE)
n.1      <- sampleN.TOST(CV=CV.1, theta0=theta0.1, targetpower=target,
                         design="2x2x2", print=FALSE)[["Sample size"]]
n.1      <- n.1 + (6 - n.1 %% 6) %% 6 # round up to the next multiple of 6
n.2      <- sampleN.TOST(CV=CV.2, theta0=theta0.2, targetpower=target,
                         design="2x2x2", print=FALSE)[["Sample size"]]
n.2      <- n.2 + (6 - n.2 %% 6) %% 6
n        <- max(n.1, n.2)
pwr.1    <- power.TOST(CV=CV.1, theta0=theta0.1, design="2x2x2", n=n)
pwr.2    <- power.TOST(CV=CV.2, theta0=theta0.2, design="2x2x2", n=n)
# rho = 0: independent parts; both must pass, hence overall power = product
pwr <- pwr.1*pwr.2
res[1, 7:12] <- c(n.1, n.2, n, pwr.1, pwr.2, pwr)
# rho = 1: perfectly correlated; overall power = power of the driving part
if (n.1 > n.2) {
  pwr <- pwr.1
} else {
  pwr <- pwr.2
}
res[2, 7:12] <- c(n.1, n.2, n, pwr.1, pwr.2, pwr)
for (j in seq_along(rho)) {
  x <- suppressWarnings(sampleN.2TOST(CV=c(CV.1, CV.2),
                       theta0=c(theta0.1, theta0.2),
                       targetpower=target, rho=rho[j], nsims=1e6,
                       print=FALSE))
  res[2+j, "n"]   <- x[["Sample size"]]
  res[2+j, "pwr"] <- x[["Achieved power"]]
}
print(res, row.names=FALSE)


… which gives

          fun CV.1 CV.2 theta0.1 theta0.2 rho n1 n2  n     pwr.1     pwr.2       pwr
 sampleN.TOST  0.3  0.3     0.95     0.95   0 54 54 54 0.9117921 0.9117921 0.8313649
 sampleN.TOST  0.3  0.3     0.95     0.95   1 54 54 54 0.9117921 0.9117921 0.9117921
sampleN.2TOST  0.3  0.3     0.95     0.95   0 NA NA 64        NA        NA 0.9093240
sampleN.2TOST  0.3  0.3     0.95     0.95   1 NA NA 52        NA        NA 0.9018890


Maybe the ratios and CVs are not identical. Try:

CV.1     <- 0.20 # fasting might be lower
CV.2     <- 0.30 # fed/fasting might be higher
theta0.1 <- 0.95 # optimistic
theta0.2 <- 0.90 # might be worse


Then you end up with this:

          fun CV.1 CV.2 theta0.1 theta0.2 rho n1  n2   n     pwr.1   pwr.2       pwr
 sampleN.TOST  0.2  0.3     0.95      0.9   0 30 114 114 0.9999994 0.91402 0.9140195
 sampleN.TOST  0.2  0.3     0.95      0.9   1 30 114 114 0.9999994 0.91402 0.9140200
sampleN.2TOST  0.2  0.3     0.95      0.9   0 NA  NA 108        NA      NA 0.9004570
sampleN.2TOST  0.2  0.3     0.95      0.9   1 NA  NA 108        NA      NA 0.9000410

❝ Is it correct to calculate the sample size only based on one comparison?


Depends on what you want. If you want to show similarity of both (BE and lacking food effects), no. If not, base the sample size on the BE-part.

❝ Furthermore, while the two parts of the trial will be evaluated as incomplete block designs, it seems to me that the original sequences and periods are preserved (e.g. an observation from period 3 is still coded as period 3), so that the degrees of freedom would not be the same as for the conventional 2x2x2 crossover, no?


Correct. Has some strange side-effects (see there).

❝ Lastly, when you said no multiplicity adjustment procedure was needed, was it implied that I should specify that BE between the clinical and marketed formulation will be tested first, before the food effect (i.e. hierarchical testing)?


No. These tests aim at completely different targets. Hence, the order is not relevant (though in practice, if the BE part fails, the fed/fasting comparison is history).

❝ It is just to be in accordance with EMA and FDA.


Not sure what you mean here.

d_labes
Berlin, Germany,
2019-05-14 18:01

@ Helmut
Posting: # 20282

 Degrees of freedom of TaaTP

Dear Helmut, dear Olivbood,

❝ ...

❝ ❝ Furthermore, while the two parts of the trial will be evaluated as incomplete block designs, it seems to me that the original sequences and periods are preserved (e.g. an observation from period 3 is still coded as period 3), so that the degrees of freedom would not be the same as for the conventional 2x2x2 crossover, no?


❝ Correct. Has some strange side-effects (see there).


To repeat the recommendation in this post:
The degrees of freedom are different for the 2x2 design and the design of the TaaTP.
We can mimic the df's, at least approximately, if we use the robust df's.

(TaaTP: Two-at-a-Time Principle)

But it seldom makes a difference worth thinkin’ about:
library(PowerTOST)
sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)
  Design alpha  CV theta0 theta1 theta2 Sample size Achieved power Target power
1  2x2x2  0.05 0.3   0.95    0.8   1.25          52      0.9019652          0.9
sampleN.TOST(CV=0.3, design="3x6x3", targetpower=0.9, robust=TRUE, print=FALSE)
  Design alpha  CV theta0 theta1 theta2 Sample size Achieved power Target power
1  3x6x3  0.05 0.3   0.95    0.8   1.25          54      0.9112411          0.9

2 subjects more, but only due to the balanced design, i.e. the sample size has to be a multiple of 6.
If we go for an unbalanced design we also have a power of 90% with 52 subjects:
power.TOST(CV=0.3, design="3x6x3", robust=TRUE, n=52)
Unbalanced design. n(i)=9/9/9/9/8/8 assumed.
[1] 0.9005174

Regards,

Detlew
Olivbood
France,
2019-05-24 00:30

(edited by Olivbood on 2019-05-24 09:43)
@ d_labes
Posting: # 20298

 Use of incomplete block design?

Many thanks to both of you !

I come back yet again with another question.

Since the number of treatments is relatively high, the number of blood samples drawn from each subject would be quite high and the study would be time-consuming (thus subjects would be more likely to drop out).

In such a case, a balanced incomplete block design with 2 periods and 6 sequences (where each subject would take only 2 treatments) is appealing.
However, I guess going with such a design has its own disadvantages, such as an increase in sample size, and imbalance between the sequences would be more problematic.

Do you know if such a design is practicable and likely to be accepted by the regulatory agencies? I did some research but did not find anything convincing.

Also, how should I proceed to power such a study, if possible using the package PowerTOST? Would I be bound to perform simulations and base my sample size on expected power?

Thanks !
Helmut
Vienna, Austria,
2019-05-24 13:28

@ Olivbood
Posting: # 20299

 Radio Yerevan answers

Hi Olivbood,

❝ Since the number of treatments is relatively high, the number of blood samples drawn from each subject would be quite high and the study would be time-consuming (thus subjects would be more likely to drop out).


Yes and no. The loss in power due to dropouts is overrated by many. Try the function pa.ABE() in PowerTOST. Unless the drug is nasty (i.e., drug-related AEs leading to withdrawals) it should not hurt. Concerning the blood volume: I just performed a five (!) period Xover with 19 samples/period. Total blood volume (including pre- / post-study clin. chemistry) was 440 mL. In short: Develop a better bioanalytical method. ;-)
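One way to see that dropouts rarely hurt as much as feared (made-up numbers, not from this study: CV 30%, theta0 0.95, 54 dosed subjects): recompute power for a few plausible numbers of completers.

```r
library(PowerTOST)

# Made-up scenario: 54 subjects dosed, CV 30%, theta0 0.95, 2x2x2 crossover
completers <- c(54, 50, 46)
pwr <- sapply(completers, function(n)
         power.TOST(CV = 0.30, theta0 = 0.95, design = "2x2x2", n = n))
round(setNames(pwr, completers), 4)  # power erodes only gradually
```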

❝ In such a case, a balanced incomplete block design with 2 periods and 6 sequences (where each subject would take only 2 treatments) is appealing.

❝ However, I guess going with such a design has its own disadvantages, such as an increase in sample size, and imbalance between the sequences would be more problematic.


Maybe. Though BIBDs are mentioned in most textbooks of BE, I didn’t see a single one in my entire career. Phase III is another story, of course.

❝ Do you know if such a design is practicable and likely to be accepted by the regulatory agencies?


(1) Dunno and (2) why not? However, expect questions, since assessors likely are not familiar with their application in BE.

❝ Also, how should I proceed to power such a study, if possible using the package PowerTOST? Would I be bound to perform simulations and base my sample size on expected power?


Good question, next question.



The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz