Dear R-Users,
we have had the discussion a couple of times whether R can be used in regulated environments, and the answer was always: Yes – if the code is validated.
When it comes to the package PowerTOST, in the /tests sub-folder of the library you will find scripts which you can use to black-box validate your installation, i.e., compare the results with the respective reference tables.
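If PowerTOST is installed with its test scripts, base R’s system.file() can locate them – a sketch (depending on how the package was installed, the folder may only be present when installed with --install-tests or in the source tarball):

```r
# Locate the validation scripts shipped with PowerTOST (if installed).
# system.file() returns "" when the package or sub-folder is not found.
path <- system.file("tests", package = "PowerTOST")
if (nzchar(path)) {
  print(list.files(path, pattern = "[.]R$"))
} else {
  message("tests folder not found (package missing or installed without tests)")
}
```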
If you are an expert, you can inspect the code for correctness (white-box validation). Granted, that’s pretty difficult.
Another approach is to perform simulations, like in ElMaestro’s famous EFG.
I started with sample sizes for ABE.
library(PowerTOST)

compl.seq <- function(n, size) {
  # round n up to the next multiple of size (complete sequences)
  size <- as.integer(size)
  return(size * (n %/% size + as.logical(n %% size)))
}
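To see what the helper does, it can be exercised on its own (repeated here so the snippet is self-contained):

```r
# round n up to the next multiple of size (complete sequences)
compl.seq <- function(n, size) {
  size <- as.integer(size)
  return(size * (n %/% size + as.logical(n %% size)))
}
compl.seq(33, 2)  # 34: one subject added to balance a 2-sequence design
compl.seq(34, 2)  # 34: already balanced
compl.seq(31, 3)  # 33: rounded up to a multiple of 3 sequences
```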
sampleN.TOST.sim <- function(alpha = 0.05, CV, theta0 = 0.95,
                             theta1 = 0.80, theta2 = 1.25,
                             targetpower = 0.80, design = "2x2x2",
                             nsims = 1e5) {
  # sample size by simulations which gives at least targetpower
  if (missing(CV)) stop("CV must be given!")
  if (missing(theta2) && !missing(theta1)) theta2 <- 1/theta1
  if (missing(theta1) && !missing(theta2)) theta1 <- 1/theta2
  design <- as.character(design)
  fun1 <- function(x) { # the workhorse: simulated power minus target
    suppressWarnings(
      suppressMessages(
        power.TOST.sim(alpha = alpha, n = x, CV = CV, theta1 = theta1,
                       theta2 = theta2, theta0 = theta0, design = design,
                       nsims = nsims))) - targetpower
  }
  tmp <- uniroot(fun1, interval = c(4L, 64L), extendInt = "upX", tol = 0.1)
  n   <- ceiling(tmp$root)                  # next integer
  n   <- compl.seq(n, substr(design, 3, 3)) # ensure balanced sequences
  pwr <- power.TOST.sim(alpha = alpha, n = n, CV = CV, theta1 = theta1,
                        theta2 = theta2, theta0 = theta0, design = design,
                        nsims = nsims)
  res <- data.frame(Design = design, alpha = alpha, CV = CV, theta0 = theta0,
                    theta1 = theta1, theta2 = theta2, n = n, pwr = pwr,
                    target = targetpower, nsims = nsims,
                    Iterations = tmp$iter)
  names(res)[7:9] <- c("Sample size", "Achieved power", "Target power")
  return(res)
}
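Using uniroot() on a simulated power curve works because power is (apart from simulation noise) monotone in n; extendInt = "upX" lets the search interval grow upwards until a sign change is found. A hypothetical stand-in for ‘power minus target’ illustrates the trick without needing PowerTOST:

```r
# Hypothetical monotone step function crossing zero at x = 5,
# standing in for 'simulated power minus targetpower'.
f   <- function(x) floor(x) / 10 - 0.45
tmp <- uniroot(f, interval = c(1, 3), extendInt = "upX", tol = 0.1)
tmp$root           # close to 5 (within tol)
ceiling(tmp$root)  # the 'next integer' step used above
```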
Comparison with sampleN.TOST(), the exact method:
CV <- 0.4
design <- "2x2x4"
print(sampleN.TOST(CV = CV, design = design,
                   print = FALSE, details = FALSE), row.names = FALSE)
Design alpha CV theta0 theta1 theta2 Sample size Achieved power Target power
2x2x4 0.05 0.4 0.95 0.8 1.25 34 0.8193438 0.8
print(sampleN.TOST.sim(CV = CV, design = design), row.names = FALSE)
Design alpha CV theta0 theta1 theta2 Sample size Achieved power Target power nsims Iterations
2x2x4 0.05 0.4 0.95 0.8 1.25 34 0.81812 0.8 1e+05 10
print(sampleN.TOST.sim(CV = CV, design = design, nsims = 1e6), row.names = FALSE)
Design alpha CV theta0 theta1 theta2 Sample size Achieved power Target power nsims Iterations
2x2x4 0.05 0.4 0.95 0.8 1.25 34 0.819263 0.8 1e+06 10
Well, these simulations are based on the test statistics (lognormal-distributed PE, χ²-distributed s²). If you believe that only the ‘gold standard’ of subject simulations is valid, we can misuse the function sampleN.scABEL.sdsims() – only for the 3- and 4-period full replicates and the partial replicate:
# define a reg_const where all scaling conditions are ‘switched off’
abe <- reg_const("USER", r_const = NA, CVswitch = Inf,
                 CVcap = Inf, pe_constr = FALSE)
CV <- 0.4
design <- "2x2x4"
print(sampleN.scABEL.sdsims(CV = CV, theta0 = 0.95, design = design,
                            regulator = abe, print = FALSE,
                            details = FALSE)[1:10], row.names = FALSE)
Design alpha CVwT CVwR theta0 theta1 theta2 Sample size Achieved power Target power
2x2x4 0.05 0.4 0.4 0.95 0.8 1.25 34 0.81737 0.8
print(sampleN.scABEL.sdsims(CV = CV, theta0 = 0.95, design = design,
                            regulator = abe, nsims = 1e6, print = FALSE,
                            details = FALSE)[1:10], row.names = FALSE)
Design alpha CVwT CVwR theta0 theta1 theta2 Sample size Achieved power Target power
2x2x4 0.05 0.4 0.4 0.95 0.8 1.25 34 0.819161 0.8
Since the sample sizes obtained by all simulation approaches match the exact method, we can be confident that it works correctly. As usual, power gets closer to the exact value with a higher number of simulations.
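The last point can be checked directly by comparing simulated with exact power for increasing nsims (a sketch; the exact sizes of the differences depend on the seed, which power.TOST.sim() sets by default):

```r
# Simulated power should approach the exact one as nsims grows.
# Skipped gracefully if PowerTOST is not installed.
diffs <- numeric(0)
if (requireNamespace("PowerTOST", quietly = TRUE)) {
  exact <- PowerTOST::power.TOST(CV = 0.4, n = 34, design = "2x2x4")
  for (ns in c(1e4, 1e5, 1e6)) {
    sim   <- PowerTOST::power.TOST.sim(CV = 0.4, n = 34, design = "2x2x4",
                                       nsims = ns)
    diffs <- c(diffs, abs(sim - exact))
    cat(sprintf("nsims = %.0e  simulated power = %.6f  |diff| = %.2e\n",
                ns, sim, diffs[length(diffs)]))
  }
}
```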