Replicate designs: Pros & Cons [Design Issues]

posted by Helmut – Vienna, Austria, 2017-01-31 13:42 – Posting: # 17000

Hi VSL,

Your terminology is unfortunate. Generally "x-way" (x: 2, 3, 4, …) refers to the number of treatments (or formulations). The common 2×2×2 design (for simplicity a.k.a. 2×2; TR|RT, AB|BA) has two treatments, two sequences, and two periods. If we have more than two treatments we talk about a "higher-order crossover"; examples are the 3×3 and 4×4 Latin Squares or a 3×6×3 Williams' design.
In none of these designs is any treatment replicated.
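
As a side note: if you want to see which design codes PowerTOST understands (including the Latin Squares and Williams' designs mentioned above), a one-liner does the job – just a minimal sketch:

library(PowerTOST)
# list all design codes known to PowerTOST, e.g. "2x2x2", "3x3",
# "3x6x3" (Williams'), "2x2x4", "2x2x3", "2x3x3", ...
known.designs()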

❝ 1. Is there any thumb rule (apart from %CV more than 30), when to use three way against four way cross over design?


IMHO, a three-period replicate is the better choice only if the sample volume is limited. In a four-period replicate the chance of dropouts is higher than in a three-period replicate. However, the loss of power due to dropouts is overrated by many (see the sketch below).
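
To get an idea of how small that loss actually is, compare power for the planned sample size with power after a couple of dropouts. A minimal sketch for unscaled ABE (assuming CV 30%, GMR 0.95, 2×2×4 dosed with 26 subjects; the numbers are only for illustration):

library(PowerTOST)
# power (%) of a 2x2x4 for CV 30%, GMR 0.95 with 26, 24, and 22 eligible subjects
sapply(c(26, 24, 22), function(n)
  round(100 * power.TOST(CV = 0.30, theta0 = 0.95, n = n, design = "2x2x4"), 2))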

❝ 2. […] four way design reduces the sample size compared to three way designs,


Correct. But to get the same power, the number of treatments (and hence the number of biosamples, which mainly drives the study's costs) is similar. Sample sizes (n) and numbers of treatments (t) for GMR 0.90, target power 80%:

           FDA’s RSABE
      2×2×4   2×2×3   2×3×3
CV%   n  t    n  t    n  t 
 30  32 128  46 138  45 135
 40  24  96  38 114  33  99
 50  22  88  34 102  30  90
 60  24  96  36 108  33  99

           EMA’s ABEL
      2×2×4   2×2×3   2×3×3
CV%   n  t    n  t    n  t
 30  34 136  50 150  54 162
 40  30 120  46 138  42 126
 50  28 112  42 126  39 117
 60  32 128  48 144  48 144

In all cases (independent of the scaling method) the four-period full replicate requires the fewest treatments – which is preferable both ethically and economically.
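
If you want to reproduce these numbers yourself, sampleN.RSABE() (FDA's RSABE) and sampleN.scABEL() (EMA's ABEL) do the job; the number of treatments is simply n × periods. A minimal sketch – both functions are simulation-based, so the results might deviate by a couple of subjects depending on the package version and its defaults:

library(PowerTOST)
CV      <- c(0.30, 0.40, 0.50, 0.60)
designs <- c("2x2x4", "2x2x3", "2x3x3")
for (d in designs) {
  for (cv in CV) {
    # sample sizes for GMR 0.90, target power 80%
    n.fda <- sampleN.RSABE(CV = cv, theta0 = 0.90, targetpower = 0.80,
                           design = d, details = FALSE, print = FALSE)[["Sample size"]]
    n.ema <- sampleN.scABEL(CV = cv, theta0 = 0.90, targetpower = 0.80,
                            design = d, details = FALSE, print = FALSE)[["Sample size"]]
    cat(sprintf("%s  CV %.0f%%  RSABE: n = %3.0f  ABEL: n = %3.0f\n",
                d, 100 * cv, n.fda, n.ema))
  }
}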

❝ […] is there any additional advantage which four way design give


You’ll get CVwT in addition to CVwR (the latter is required for reference-scaling) – more about that at the end of this post. CVwT is nice to know, and it is required for the FDA’s RSABE approach for NTIDs.

❝ (in terms of overall outcome of the study)?


If you mean the chance of passing BE, no.

❝ 3. Even though variability is less than 30%, we perform three or four ways cross over study, would it enhances chance of passing the study (i.e forced BE) compared to simple two way design?


Generally not. The sample size is not directly accessible – only power is. The sample size is iteratively adjusted until at least the target power is reached. Example:

library(PowerTOST)
# sample size and expected power for ABE
design <- c("2x2x2", "2x2x4", "2x2x3", "2x3x3")
CV     <- seq(0.15, 0.3, 0.05)
theta0 <- 0.95 # expected T/R-ratio
target <- 0.90 # target power
res    <- matrix(nrow=length(CV), ncol=2*length(design)+1,
                 byrow=FALSE, dimnames=NULL)
colnames(res) <- c("CV%", paste0(rep(design, each=2), c(": n", ": pwr")))
for (j in seq_along(CV)) {
  res[j, 1] <- 100*CV[j]
  for (k in seq_along(design)) {
    x <- sampleN.TOST(CV=CV[j], theta0=theta0, targetpower=target,
                      design=design[k], details=FALSE, print=FALSE)
    res[j, k*2] <- x[["Sample size"]]
    res[j, k*2+1] <- signif(100*x[["Achieved power"]], 4)
  }
}
print(as.data.frame(res), row.names=FALSE)

Should give:

 CV% 2x2x2: n 2x2x2: pwr 2x2x4: n 2x2x4: pwr 2x2x3: n 2x2x3: pwr 2x3x3: n 2x3x3: pwr
  15       16      92.60        8      93.29       12      93.36       12      93.36
  20       26      91.76       12      90.15       18      90.18       18      90.18
  25       38      90.89       20      92.43       28      90.76       30      92.44
  30       52      90.20       26      90.43       40      91.09       39      90.44

In conventional (unscaled) ABE the sample sizes for the 4-period full replicate designs are ~½ of the 2×2×2 and the ones for the 3-period replicates ~¾ of the 2×2×2.
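
If you want to verify these ratios directly from the code above (quick sketch at CV 25%, GMR 0.95, target power 90%):

library(PowerTOST)
# sample sizes relative to the conventional 2x2x2
n <- sapply(c("2x2x2", "2x2x4", "2x2x3"), function(d)
       sampleN.TOST(CV = 0.25, theta0 = 0.95, targetpower = 0.90,
                    design = d, print = FALSE)[["Sample size"]])
round(n / n[["2x2x2"]], 2) # roughly 1, 0.5, 0.75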

The number of periods in replicate designs is not that important.
You have to decide whether you use one of the full replicates (two sequences; four periods: 2×2×4 or RTRT|TRTR; three periods: 2×2×3 or RTR|TRT) or the partial replicate (three sequences, three periods: 2×3×3 or RRT|RTR|TRR). The latter is a lousy design (since T is not repeated and the FDA's model is over-specified). In the worst case the study is done and the optimizer fails to converge (independent of the software). Then you are in a bad situation: lots of money spent, no result at all… Please avoid it; don't follow guidelines blindly.

If you opt for one of the full replicates (which I hope), you should perform the pilot study in a full replicate design as well. In many cases the variability of T is lower than that of R – which will lead to a lower sample size for the pivotal study. If the pilot study was performed in the partial replicate, you have to assume that CVwT = CVwR.

In other words, if you "know" that T shows a lower intra-subject variability than R, you will be rewarded in terms of the required sample size (same scaling based on CVwR, but a narrower CI based on CVw pooled from CVwT and CVwR). Example for GMR 0.90, target power 80%, four-period full replicate – try this code for the EMA's ABEL:

library(PowerTOST)
CVwT <- 0.35 # CV of T
CVwR <- 0.50 # CV of R
PE   <- 0.9
# pooled CV used for the calculation of the CI
CVw  <- mse2CV(mean(c(CV2mse(CVwT), CV2mse(CVwR))))
# scaled BE limits
AR   <- 100*scABEL(CV=CVwR, regulator="EMA")
# sample size estimated assuming CVwT = CVwR; 90% CI based on the pooled CVw
n1   <- sampleN.scABEL(CV=CVwR, theta0=PE, targetpower=0.8, design="2x2x4",
                       details=FALSE, print=FALSE)[["Sample size"]]
CI1  <- round(100*CI.BE(pe=PE, CV=CVw, n=n1, design="2x2x4"), 2)
if (CI1[["lower"]] >= AR[["lower"]] & CI1[["upper"]] <= AR[["upper"]] &
    PE >= 0.8 & PE <= 1.25) {
  concl1 <- paste("(passes BE with", n1, "subjects)")
} else {
  concl1 <- paste("(fails BE with", n1, "subjects)")
}
# sample size estimated knowing CVwT < CVwR; 90% CI based on the pooled CVw
n2   <- sampleN.scABEL(CV=c(CVwT, CVwR), theta0=PE, targetpower=0.8, design="2x2x4",
                       details=FALSE, print=FALSE)[["Sample size"]]
CI2  <- round(100*CI.BE(pe=PE, CV=CVw, n=n2, design="2x2x4"), 2)
if (CI2[["lower"]] >= AR[["lower"]] & CI2[["upper"]] <= AR[["upper"]] &
    PE >= 0.8 & PE <= 1.25) {
  concl2 <- paste("(passes BE with", n2, "subjects)")
} else {
  concl2 <- paste("(fails BE with", n2, "subjects)")
}
cat(sprintf("\nCVwT = %.2f%%, CVwR = %.2f%%, CVw = %.2f%%",
            100*CVwT, 100*CVwR, 100*CVw),
    "\nAR    :", sprintf("%.2f - %.2f%%", AR[["lower"]], AR[["upper"]]),
    "\n90% CI:", sprintf("%.2f - %.2f%%", CI1[["lower"]], CI1[["upper"]]),
    concl1,
    "\n90% CI:", sprintf("%.2f - %.2f%%", CI2[["lower"]], CI2[["upper"]]),
    concl2, "\n")

You should get:

CVwT = 35.00%, CVwR = 50.00%, CVw = 42.96%
AR    : 69.84 - 143.19%
90% CI: 79.07 - 102.44% (passes BE with 28 subjects)
90% CI: 77.74 - 104.20% (passes BE with 22 subjects)


Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
