Tightening the limits reduces power for a given sample size [🇷 for BE/BA]
❝ Thank you for the response.
❝ It's about a study that passed with the 80–125% limits.
❝ Then our regulator asked us to tighten the limits to 90–111%.
❝ Do we have to verify that the power is above 80% after tightening the limits?
Post hoc power (\(\small{\widehat{\pi}}\)) is – completely – irrelevant in BE. Only a priori power (\(\small{\pi}\)) is important in designing a study.
Even if all of your assumptions (CV, T/R-ratio, eligible subjects) based on the conventional limits of 80–125% were exactly realized (then \(\small{\widehat{\pi}=\pi}\)), for the tightened limits power will be lower than planned. As long as you pass BE, it does not matter. If you fail – which is very likely – start an argument with the agency, but please not based on post hoc power.
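Just to show what that number would be – and that it adds nothing to the decision – here is a minimal sketch of how one would compute it, plugging the observed CV, point estimate, and sample size into power.TOST(). All values below are hypothetical placeholders, not your study’s results.

library(PowerTOST)
# Hypothetical observed values – substitute the ones of your study.
CV.obs <- 0.175 # observed (within-subject) CV
pe.obs <- 0.95  # observed point estimate (T/R)
n.obs  <- 16    # eligible subjects
# 'Post hoc' power for the tightened limits: a mere re-expression of the
# data already summarized by the 90% CI – irrelevant for the BE decision.
power.TOST(CV = CV.obs, theta0 = pe.obs, n = n.obs, design = "2x2x2",
           theta1 = 0.90, theta2 = 1 / 0.90)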
Example:
library(PowerTOST)
CV <- 0.175
design <- "2x2x2"
theta0 <- 0.95
target <- 0.80
theta1 <- c(0.80, 0.90)
theta2 <- c(1.25, 1 / 0.90)
n <- sampleN.TOST(CV = CV, design = design, theta0 = theta0,
                  theta1 = theta1[1], theta2 = theta2[1],
                  targetpower = target, print = FALSE)[["Sample size"]]
CI <- CI.BE(CV = CV, pe = theta0, n = n, design = design)
comp <- data.frame(theta1 = theta1, theta2 = theta2, n = n, PE = theta0,
                   lower.CL = CI[["lower"]], upper.CL = CI[["upper"]],
                   BE = "fail", power = NA)
for (j in 1:nrow(comp)) {
  if (comp$lower.CL[j] >= as.numeric(comp$theta1[j]) &
      comp$upper.CL[j] <= as.numeric(comp$theta2[j])) comp$BE[j] <- "pass"
  comp$power[j]  <- power.TOST(CV = CV, design = design, theta0 = theta0,
                               theta1 = theta1[j], theta2 = theta2[j], n = n)
  # show limits and PE with four decimals (coerces these columns to character)
  comp$theta1[j] <- sprintf("%.4f", as.numeric(comp$theta1[j]))
  comp$theta2[j] <- sprintf("%.4f", as.numeric(comp$theta2[j]))
  comp$PE[j]     <- sprintf("%.4f", as.numeric(comp$PE[j]))
}
comp[, c(5:6, 8)] <- round(comp[, c(5:6, 8)], 4)
names(comp)[c(1:2, 7)] <- c("L", "U", "BE?")
txt <- paste("Study designed based on the conventional limits",
             "\n{L = 0.8000, U = 1.2500}; all assumptions are",
             "\nexactly realized in the study.\n")
cat(txt); print(comp, row.names = FALSE)
Study designed based on the conventional limits
{L = 0.8000, U = 1.2500}; all assumptions are
exactly realized in the study.
      L      U  n     PE lower.CL upper.CL  BE?  power
 0.8000 1.2500 16 0.9500   0.8526   1.0585 pass 0.8401
 0.9000 1.1111 16 0.9500   0.8526   1.0585 fail 0.0694
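Sticking with the same – hypothetical – assumptions (CV 0.175, T/R-ratio 0.95), a quick sketch of the sample size the study would have needed if it had been planned for the tightened limits in the first place:

library(PowerTOST)
# Same hypothetical assumptions as above, but the study is planned for
# the tightened limits {0.90, 1.1111} instead of {0.80, 1.25}.
sampleN.TOST(CV = 0.175, design = "2x2x2", theta0 = 0.95,
             theta1 = 0.90, theta2 = 1 / 0.90, targetpower = 0.80)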
For the tightened limits much larger sample sizes would be required – unless the T/R-ratio is closer to 1. That’s why the FDA requires stricter batch-release specifications and why the default in the function sampleN.NTID() is theta0 = 0.975. See also this article.

CV <- seq(0.1, 0.2, 0.01)
design <- "2x2x2"
theta0 <- 0.95 # not a good idea for NTIDs
target <- 0.80
theta1 <- c(0.80, 0.90)
theta2 <- c(1.25, 1 / 0.90)
comp <- data.frame(CV = CV,
                   L1 = rep(theta1[1], length(CV)),
                   U1 = rep(theta2[1], length(CV)), n1 = NA, pwr1 = NA,
                   L2 = rep(theta1[2], length(CV)),
                   U2 = rep(theta2[2], length(CV)), n2 = NA, pwr2 = NA)
for (j in seq_along(CV)) {
  tmp <- sampleN.TOST(CV = CV[j], design = design, theta0 = theta0,
                      theta1 = theta1[1], theta2 = theta2[1],
                      targetpower = target, print = FALSE)
  if (tmp[["Sample size"]] < 12) {
    comp$n1[j]   <- 12 # force to minimum acc. to guidelines
    comp$pwr1[j] <- power.TOST(CV = CV[j], design = design, theta0 = theta0,
                               theta1 = theta1[1], theta2 = theta2[1], n = 12)
  } else {
    comp$n1[j]   <- tmp[["Sample size"]]
    comp$pwr1[j] <- tmp[["Achieved power"]]
  }
  tmp <- sampleN.TOST(CV = CV[j], design = design, theta0 = theta0,
                      theta1 = theta1[2], theta2 = theta2[2],
                      targetpower = target, print = FALSE)
  comp$n2[j]   <- tmp[["Sample size"]]
  comp$pwr2[j] <- tmp[["Achieved power"]]
}
comp <- round(comp, 4)
names(comp)[2:9] <- rep(c("L", "U", "n", "power"), 2)
print(comp, row.names = FALSE)
   CV   L    U  n  power   L      U   n  power
 0.10 0.8 1.25 12 0.9883 0.9 1.1111  44 0.8040
 0.11 0.8 1.25 12 0.9724 0.9 1.1111  54 0.8115
 0.12 0.8 1.25 12 0.9476 0.9 1.1111  62 0.8007
 0.13 0.8 1.25 12 0.9148 0.9 1.1111  74 0.8083
 0.14 0.8 1.25 12 0.8753 0.9 1.1111  84 0.8022
 0.15 0.8 1.25 12 0.8305 0.9 1.1111  96 0.8018
 0.16 0.8 1.25 14 0.8487 0.9 1.1111 110 0.8055
 0.17 0.8 1.25 14 0.8057 0.9 1.1111 122 0.8003
 0.18 0.8 1.25 16 0.8204 0.9 1.1111 138 0.8045
 0.19 0.8 1.25 18 0.8294 0.9 1.1111 152 0.8014
 0.20 0.8 1.25 20 0.8347 0.9 1.1111 168 0.8015
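If one can assume a T/R-ratio closer to 1 – as the FDA does with the default theta0 = 0.975 of sampleN.NTID() – the picture changes considerably. A minimal sketch over the same range of CVs; the T/R-ratio of 0.975 is an assumption, attainable only with strict batch-release specifications:

library(PowerTOST)
# Sketch: tightened limits {0.90, 1.1111}, assumed T/R-ratio 0.975.
res <- data.frame(CV = seq(0.1, 0.2, 0.01), n = NA, power = NA)
for (j in seq_along(res$CV)) {
  tmp          <- sampleN.TOST(CV = res$CV[j], design = "2x2x2",
                               theta0 = 0.975, theta1 = 0.90,
                               theta2 = 1 / 0.90, targetpower = 0.80,
                               print = FALSE)
  res$n[j]     <- tmp[["Sample size"]]
  res$power[j] <- tmp[["Achieved power"]]
}
print(round(res, 4), row.names = FALSE)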
Helmut Schütz