Hi Laura,
❝ If there is a compound that we already know is highly variable, do you think that making a 2x2 pilot is robust enough to later, using the pilot results, make a 2x4 pivotal or would it be more reliable to make a 2x4 pilot?
The latter. In a 2×2 pilot you get only the pooled within-subject CVw; its components CVwT and CVwR of T and R remain unknown. See this article on why that is not a good idea for planning a replicate design. In the sample size estimation of the pivotal study you would have to assume CVwT = CVwR. That’s both ethically and economically questionable. See also this article.
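To illustrate the point (a minimal sketch with arbitrary numbers): if the reference-scaling functions of PowerTOST are given a single CV, they assume CVwT = CVwR; both components can only be specified after a replicate pilot.
library(PowerTOST)
# scalar CV: CVwT = CVwR is assumed (all you can do after a 2×2 pilot)
sampleN.scABEL(CV = 0.40, design = "2x2x4", details = FALSE)
# CV = c(CVwT, CVwR): possible only with estimates from a replicate pilot
sampleN.scABEL(CV = c(0.36, 0.44), design = "2x2x4", details = FALSE)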
Let’s explore a hypothetical example (R-script at the end). Say you have three pilot studies.
- A 2×2 with CVw 40%. You have to assume CVwT = CVwR in planning the pivotal.
- A 2×4 replicate with a variance ratio swT²/swR² of ≈0.67, i.e., CVwT < CVwR (how the pooled CVw is split is shown in the sketch after this list). It is not uncommon that CVwT < CVwR. Then you get a bonus in planning the pivotal, i.e., you require a smaller sample size than with CVwT = CVwR.
- Another 2×4 replicate with a variance ratio of 1.5, i.e., CVwT > CVwR. Rare, but possible. You will need a larger sample size than in the two other cases.
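The split of the pooled CVw by an assumed variance ratio is what CVp2CV() does (the same call is used in the script at the end):
library(PowerTOST)
CVp2CV(CV = 0.40, ratio = 2 / 3) # ≈ 0.35507 (CVwT), 0.44153 (CVwR)
CVp2CV(CV = 0.40, ratio = 3 / 2) # ≈ 0.44153 (CVwT), 0.35507 (CVwR)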
You design the pivotals assuming a T/R-ratio of 0.9 (recommended for HVDs) and target ≥ 80% power for the EMA’s Average Bioequivalence with Expanding Limits (ABEL). Note that these are the defaults in the reference-scaling functions of PowerTOST and therefore don’t have to be specified.
pivotal studies based on pilots
pilot CVw CVwT CVwR s2.ratio L U n power
2×2 0.40000 0.40000 0.40000 1.00 74.62% 134.02% 30 0.80656
1st 2×4 0.40000 0.35507 0.44153 ~0.67 72.56% 137.81% 24 0.81029
2nd 2×4 0.40000 0.44153 0.35507 1.50 76.96% 129.94% 42 0.81378
L and U are the expanded limits in ABEL based on CVwR:$$\small{\eqalign{s_\text{wR}&=\sqrt{\log_e(CV_\text{wR}^2+1)}\\
\left\{L,U\right\}&=100\exp(\mp0.76\cdot s_\text{wR})}}$$
n are the estimated sample sizes based on CVwT, CVwR, the T/R-ratio, target power, and the design.
The confidence interval depends on the pooled variance of T and R$$\small{\eqalign{s_\text{w}^2&=\log_e(CV_\text{w}^2+1)\\
&=\log_e(0.4^2+1)\approx0.14842\ldots\textsf{,}}}$$which is identical in all our cases.
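A quick numerical check of the first 2×4 case, with the values from the table above (scABEL() applies the same expansion formula):
library(PowerTOST)
swR <- sqrt(log(0.44153^2 + 1))             # s_wR of the 1st 2×4 case
round(100 * exp(c(-1, +1) * 0.76 * swR), 2) # 72.56 137.81 (L and U)
round(100 * scABEL(CV = 0.44153), 2)        # same via scABEL()
log(0.4^2 + 1)                              # pooled s_w^2, ~0.14842 in all three cases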
power of pivotals compared to planned based on 2×2 with 30 subjects
pivotal power power.30
2 0.81029 0.87752
3 0.81378 0.69853
If you plan the pivotal based on the 2×2 pilot with 30 subjects:
- If in reality CVwT < CVwR, you gain power (≈88% instead of ≈81%, because you can expand more than assumed and have 30 subjects instead of the required 24) but waste money.
- If in reality CVwT > CVwR, the study will be underpowered (≈70% instead of ≈81%, because you can expand less than assumed and have only 30 subjects instead of the required 42); see the quick check after this list.
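These numbers can be reproduced with power.scABEL() and the CV components from the table (the script at the end does the same in a loop):
library(PowerTOST)
# 30 subjects planned from the 2×2 pilot, but the true split is...
power.scABEL(CV = c(0.35507, 0.44153), design = "2x2x4", n = 30) # CVwT < CVwR: ~0.88
power.scABEL(CV = c(0.44153, 0.35507), design = "2x2x4", n = 30) # CVwT > CVwR: ~0.70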
Quoting Section 3.5 of ICH E9:
The number of subjects in a clinical trial should always be large enough to provide a reliable answer to the questions addressed.
Statistics are not exactly one of the strengths of ethics committees, but I [sic] would not accept a protocol for ABEL based on the results of a 2×2 pilot study.
A final hint: If you don’t have your own replicate design pilot (preferable anyway) but only the results of another study (report, publication), you can back-calculate CVwR from the upper expanded limit U. For our examples:
U CVwR
134.02 0.4000
137.81 0.4415
129.94 0.3551
Hope that helps.
library(PowerTOST)
CVw <- 0.4
pilots <- c("2×2", "1st 2×4", "2nd 2×4")
ratios <- c(1, 2 / 3, 3 / 2)
CV <- data.frame(T = NA_real_, R = NA_real_)
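# split the pooled CVw of the pilot into CVwT and CVwR for each assumed variance ratio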
for (j in 1:3) {
CV[j, 1:2] <- CVp2CV(CV = CVw, ratio = ratios[j])
}
pivotals <- data.frame(pilot = pilots, CVw = CVw, CVwT = CV[, "T"], CVwR = CV[, "R"],
s2.ratio = c(sprintf("%5.2f ", ratios[1]),
sprintf("~%.2f ", ratios[2]),
sprintf("%5.2f ", ratios[3])),
L = NA_real_, U = NA_real_, n = NA_integer_, power = NA_real_)
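# expanded limits, estimated sample size, and achieved power of the pivotal for each pilot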
for (j in 1:3) {
pivotals[j, 2:4] <- sprintf("%.5f", c(CVw, unlist(CV[j, 1:2])))
pivotals[j, 6:7] <- sprintf("%.2f%%", 100 * scABEL(CV = CV[j, "R"]))
# using the defaults: theta0 = 0.9 and targetpower = 0.8
tmp <- sampleN.scABEL(CV = as.numeric(CV[j, 1:2]), design = "2x2x4",
details = FALSE, print = FALSE)
pivotals$n[j] <- tmp[["Sample size"]]
pivotals$power[j] <- tmp[["Achieved power"]]
}
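# power of the 2nd and 3rd pivotal if only the 30 subjects planned from the 2×2 are dosed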
comp <- data.frame(pivotal = 2:3, power = pivotals$power[2:3], power.30 = NA_real_)
for (j in 1:2) {
comp$power.30[j] <- power.scABEL(CV = as.numeric(CV[j + 1, 1:2]), design = "2x2x4",
n = pivotals$n[1])
}
txt <- c("pivotal studies based on pilots\n",
         paste("\npower of pivotals compared to planned based on 2×2 with",
               pivotals$n[1], "subjects\n"))
cat(txt[1]); print(pivotals, row.names = FALSE); cat(txt[2]); print(comp, row.names = FALSE)
# back-calculate CVwR from the upper expanded limit U
# (it has to be 1.2500 < U < 1.4319)
back <- data.frame(U = c(134.02, 137.81, 129.94), CVwR = NA_real_)
for (j in 1:3) {
back$CVwR[j] <- sprintf("%.4f", CVwRfromU(U = back$U[j] / 100))
}
print(back, row.names = FALSE)