Tricky question, lengthy answer [Power / Sample Size]

posted by Helmut – Vienna, Austria, 2019-05-14 16:11 – Posting: # 20281

Hi Olivbood,

❝ If my understanding is correct,

❝ sampleN.TOST(design="3x6x3", ...)[["Sample size"]] would give me the sample size necessary for a power of 90% to show BE for a given comparison (either A vs B or C vs B) if I evaluated all data at the same time, while sampleN.TOST(design="2x2x2", ...)[["Sample size"]] would give me the sample size for a given comparison if I evaluated the data two treatments at a time…


Correct.
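
For illustration, only a sketch borrowing the CV 0.30, theta0 0.95, and 90% target power from your example below:

library(PowerTOST)
# pooled evaluation: all data of the 3x6x3 Williams design at once
sampleN.TOST(CV=0.30, theta0=0.95, targetpower=0.90,
             design="3x6x3", print=FALSE)[["Sample size"]]
# two-at-a-time evaluation: one pairwise comparison as a 2x2x2 crossover
sampleN.TOST(CV=0.30, theta0=0.95, targetpower=0.90,
             design="2x2x2", print=FALSE)[["Sample size"]]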

❝ … (keeping in mind that I should potentially increase the result so that the sample size is a multiple of 6).


Oops! So no incentive in terms of sample size in this example.
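
Rounding up to a multiple of six is a one-liner anyway (just a sketch, reusing the assumptions from above):

n <- sampleN.TOST(CV=0.30, theta0=0.95, targetpower=0.90,
                  design="2x2x2", print=FALSE)[["Sample size"]]
n <- 6 * ceiling(n / 6) # keep complete sequences of the 6-sequence Williams design
n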

❝ However, what should I do if I want the sample size for a power of 90% to show BE for both comparisons? If I assume that there is no correlation between the two comparisons, and let n = sampleN.TOST(CV=0.3, design="2x2x2", targetpower=0.9, print=FALSE)[["Sample size"]], then:

❝ power.TOST(CV=0.3, design="2x2x2", n=n) * power.TOST(CV=0.3, design="2x2x2", n=n) would evaluate to less than 0.9.


Correct. If you assume no correlation, overall power = power1 × power2, which is exactly why the product drops below the target. At the other extreme (correlation = 1) the two comparisons stand or fall together, and overall power ≈ the power of the part driving the sample size (higher CV and/or worse PE). I suggest using the function sampleN.2TOST in this case…
library(PowerTOST)
CV.2 <- CV.1 <- 0.30
theta0.2 <- theta0.1 <- 0.95
target   <- 0.90
# rho 1 throws an error in earlier versions
if (packageVersion("PowerTOST") > "1.4.7") {
  rho    <- c(0, 1)
} else {
  rho    <- c(0, 1-.Machine$double.eps)
}
res      <- data.frame(fun=c(rep("sampleN.TOST", 2), rep("sampleN.2TOST", 2)),
                       CV.1=rep(CV.1, 4), CV.2=rep(CV.2, 4),
                       theta0.1=rep(theta0.1, 4), theta0.2=rep(theta0.2, 4),
                       rho=signif(rep(rho, 2), 4), n1=NA, n2=NA, n=0,
                       pwr.1=NA, pwr.2=NA, pwr=0, stringsAsFactors=FALSE)
n.1      <- sampleN.TOST(CV=CV.1, theta0=theta0.1, targetpower=target,
                         design="2x2x2", print=FALSE)[["Sample size"]]
n.1      <- 6 * ceiling(n.1 / 6) # round up to a multiple of 6
n.2      <- sampleN.TOST(CV=CV.2, theta0=theta0.2, targetpower=target,
                         design="2x2x2", print=FALSE)[["Sample size"]]
n.2      <- 6 * ceiling(n.2 / 6)
n        <- max(n.1, n.2)
pwr.1    <- power.TOST(CV=CV.1, theta0=theta0.1, design="2x2x2", n=n)
pwr.2    <- power.TOST(CV=CV.2, theta0=theta0.2, design="2x2x2", n=n)
# rho = 0: independent comparisons, overall power = product of the two powers
pwr <- pwr.1 * pwr.2
res[1, 7:12] <- c(n.1, n.2, n, pwr.1, pwr.2, pwr)
# rho = 1: the comparisons stand or fall together; overall power = power of
# the part driving the sample size (the one needing the larger n)
if (n.1 > n.2) {
  pwr <- pwr.1
} else {
  pwr <- pwr.2
}
res[2, 7:12] <- c(n.1, n.2, n, pwr.1, pwr.2, pwr)
for (j in seq_along(rho)) {
  x <- suppressWarnings(sampleN.2TOST(CV=c(CV.1, CV.2),
                       theta0=c(theta0.1, theta0.2),
                       targetpower=target, rho=rho[j], nsims=1e6,
                       print=FALSE))
  res[2+j, "n"]   <- x[["Sample size"]]
  res[2+j, "pwr"] <- x[["Achieved power"]]
}
print(res, row.names=FALSE)


… which gives

          fun CV.1 CV.2 theta0.1 theta0.2 rho n1 n2  n     pwr.1     pwr.2       pwr
 sampleN.TOST  0.3  0.3     0.95     0.95   0 54 54 54 0.9117921 0.9117921 0.8313649
 sampleN.TOST  0.3  0.3     0.95     0.95   1 54 54 54 0.9117921 0.9117921 0.9117921
sampleN.2TOST  0.3  0.3     0.95     0.95   0 NA NA 64        NA        NA 0.9093240
sampleN.2TOST  0.3  0.3     0.95     0.95   1 NA NA 52        NA        NA 0.9018890
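
In real life the correlation will lie somewhere between these extremes. To see what a given (already rounded) n buys you under an assumed correlation, the companion function power.2TOST() can be used; the rho of 0.75 below is nothing but a guess:

power.2TOST(CV=c(0.30, 0.30), theta0=c(0.95, 0.95), rho=0.75,
            n=54, design="2x2x2", nsims=1e6)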


Maybe the ratios and CVs are not identical. Try:

CV.1     <- 0.20 # fasting might be lower
CV.2     <- 0.30 # fed/fasting might be higher
theta0.1 <- 0.95 # optimistic
theta0.2 <- 0.90 # might be worse


Then you end up with this:

          fun CV.1 CV.2 theta0.1 theta0.2 rho n1  n2   n     pwr.1   pwr.2       pwr
 sampleN.TOST  0.2  0.3     0.95      0.9   0 30 114 114 0.9999994 0.91402 0.9140195
 sampleN.TOST  0.2  0.3     0.95      0.9   1 30 114 114 0.9999994 0.91402 0.9140200
sampleN.2TOST  0.2  0.3     0.95      0.9   0 NA  NA 108        NA      NA 0.9004570
sampleN.2TOST  0.2  0.3     0.95      0.9   1 NA  NA 108        NA      NA 0.9000410
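
Since the true correlation is unknown, it may be worth scanning a range of rho values as a sensitivity analysis (again only a sketch, reusing the assumptions above):

CVs     <- c(0.20, 0.30)
theta0s <- c(0.95, 0.90)
rhos    <- c(0, 0.25, 0.50, 0.75, 1) # rho = 1 needs PowerTOST > 1.4.7 (see above)
for (r in rhos) {
  x <- suppressWarnings(sampleN.2TOST(CV=CVs, theta0=theta0s,
                                      targetpower=0.90, rho=r,
                                      nsims=1e6, print=FALSE))
  cat("rho =", r, ": n =", x[["Sample size"]],
      " power =", signif(x[["Achieved power"]], 4), "\n")
}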


❝ Is it correct to calculate the sample size based on only one comparison?


Depends on what you want. If you want to demonstrate both (BE of the two formulations and absence of a food effect), no. If not, base the sample size on the BE part alone.
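
If you base the sample size on the BE part alone, you can still report the power this leaves for the food-effect comparison (a sketch with the assumptions used above):

CV.1     <- 0.20 # BE part (both treatments fasting)
CV.2     <- 0.30 # food-effect part (fed vs fasting)
theta0.1 <- 0.95
theta0.2 <- 0.90
n <- sampleN.TOST(CV=CV.1, theta0=theta0.1, targetpower=0.90,
                  design="2x2x2", print=FALSE)[["Sample size"]]
n <- 6 * ceiling(n / 6)                 # complete sequences of the Williams design
power.TOST(CV=CV.2, theta0=theta0.2, design="2x2x2", n=n) # power of the food-effect part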

❝ Furthermore, while the two parts of the trial will be evaluated as incomplete block designs, it seems to me that the original sequences and periods are preserved (e.g., an observation from period 3 is still coded as period 3), so that the degrees of freedom would not be the same as for the conventional 2x2x2 crossover, no?


Correct. This has some strange side effects (see there).

❝ Lastly, when you said no multiplicity adjustment procedure was needed, was it implied that I should specify that BE between the clinical and marketed formulation will be tested first, before the food effect (i.e. hierarchical testing)?


No. These tests aim at completely different targets. Hence, the order is not relevant (though in practice, if the BE part fails, the fed/fasting comparison is history).

❝ It is just to be in accordance with EMA and FDA.


Not sure what you mean here.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
