assumptions vs. realizations [Power / Sample Size]

posted by Helmut – Vienna, Austria, 2019-09-18 13:04 – Posting: # 20609

Hi Rocco,

❝ sampleN.TOST(CV=.3, theta0=1.0, theta1= 0.8, theta2=1.25, logscale=TRUE, alpha=0.05, targetpower=0.9, design="parallel")

❝ I get a minimum sample size of 78 in total.


Correct.¹

❝ But then if I run CI.BE(pe=1.0, CV=.3, design="parallel", n=24)

❝ I get CI = approx [0.81, 1.23].


Correct again.

❝ This confuses me. Shouldn’t I need to enter an n of at least 78 in order to get a CI within [0.8, 1.25]?


Nope. CV and theta0 in sampleN.TOST() are assumptions (made before the study), whereas CV and pe in CI.BE() are realizations (observed in the study). I’m not a fan of post hoc (a posteriori, retrospective) power, but let’s check it:

library(PowerTOST)
# post hoc power with the realized values of the CI.BE() call above
power.TOST(CV = 0.3, theta0 = 1, n = 24, design = "parallel")
# [1] 0.1597451

That’s much lower than the 0.9 you targeted and than what you would have gotten with 78 subjects. Try this one:

n      <- c(24, 78)                           # realized and estimated sample sizes
theta0 <- seq(0.80, 1.25, length.out = 201)   # grid over the entire BE range
res    <- data.frame(n = rep(n, each = length(theta0)),
                     theta0 = rep(theta0, length(n)),
                     power = NA)
j      <- 0
for (k in seq_along(n)) {       # power for every combination of n and theta0
  for (l in seq_along(theta0)) {
    j <- j + 1
    res$power[j] <- power.TOST(CV = 0.3, n = res$n[j],
                               theta0 = res$theta0[j],
                               design = "parallel")
  }
}
plot(theta0, rep(1, length(theta0)), type = "n",     # empty plot frame
     ylim = c(0, 1), log = "x", yaxs = "i",
     xlab = "theta0", ylab = "power", las = 1)
grid()
abline(h = c(0.05, 0.9), col = "lightgrey", lty = 2) # alpha and target power
box()
col <- c("red", "blue")
for (k in seq_along(n)) {       # one power curve per sample size,
  lines(x = theta0, y = res$power[res$n == n[k]],    # labeled with its maximum
        lwd = 3, col = col[k])
  text(x = 1, y = max(res$power[res$n == n[k]]), pos = 3,
       labels = signif(max(res$power[res$n == n[k]]), 4))
}
legend("topright", legend = rev(paste("n =", n)),
       bg = "white", col = rev(col), lwd = 3)
# n = 78 peaks at the targeted ~0.9 (at theta0 = 1),
# whereas n = 24 never exceeds the ~0.16 calculated above.


❝ In other words, what are the implications if a study *meets* bioequivalence but is underpowered?


None. Sample size estimation is always based on assumptions. If they turn out to be wrong (higher CV, PE worse than theta0, more dropouts than anticipated), you might still meet BE by luck. As ElMaestro once wrote:
Being lucky is not a crime.
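
To put a rough number on “luck”, a minimal sketch (assuming the realized CV stays at 0.30 and n = 24; the grid of point estimates is hypothetical): which observed PEs would still give a 90% CI entirely within [0.80, 1.25]?

library(PowerTOST)
pe   <- seq(0.95, 1.05, 0.001)            # hypothetical observed point estimates
pass <- sapply(pe, function(x) {
  ci <- CI.BE(pe = x, CV = 0.30, n = 24, design = "parallel")
  ci[["lower"]] >= 0.80 & ci[["upper"]] <= 1.25
})
range(pe[pass])                           # only PEs very close to 1 survive

With the 78 subjects of the sample size estimation the same check should accept a considerably wider range of point estimates; that is what the targeted power buys.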

But any confirmatory study (like BE) requires an appropriate sample size estimation. There are two problems which should already lead to rejection of the protocol by the IEC.
  1. You assume a CV of 0.3 and a theta0 of 1 (I would not be that optimistic) and suggest a sample size of 24 in the protocol. You can also press the Boss Button in FARTSSIE only to be told that
    “90% power not attainable even if the means are equal. The highest attainable power is 12.7%.”²
    Not a good idea.
  2. Not an issue in your case of a new drug, but if the IEC knows CVs from other studies, don’t try to cheat by “assuming” a much lower one (i.e., 0.16 instead of 0.30) in order to end up with just 24 subjects; see the check below.
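
A quick check of that temptation (a sketch, assuming theta0 = 1 and the target power of 0.9 from above, relying on the package defaults mentioned in the first footnote):

library(PowerTOST)
sampleN.TOST(CV = 0.30, theta0 = 1, targetpower = 0.9, design = "parallel") # honest CV
sampleN.TOST(CV = 0.16, theta0 = 1, targetpower = 0.9, design = "parallel") # “assumed” CV
# the second call should end up at roughly the two dozen subjects mentioned
# above instead of the 78 the honest CV of 0.30 requires

A reviewer who knows the CVs from other studies will spot the discrepancy at once.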


  1. See the man-pages of the functions in PowerTOST. In many cases there is no need to specify the defaults (alpha=0.05, theta1=0.8, theta2=1.25, logscale=TRUE). Makes your life easier.
  2. Calculated by the non-central t-distribution. You can confirm that by running power.TOST(..., method = "nct").
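
Regarding the second footnote, a one-line check (a sketch; same realized values as in the post above):

library(PowerTOST)
# approximation by the non-central t-distribution instead of the exact method
power.TOST(CV = 0.3, theta0 = 1, n = 24, design = "parallel", method = "nct")
# should reproduce FARTSSIE’s “highest attainable power” of ~12.7%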

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz
