Bioequivalence Limit, Power and interpretation [Regulatives / Guidelines]

posted by Helmut Schütz – Vienna, Austria, 2020-09-22 15:59 – Posting: # 21931

Hi Pharma88,

❝ 1. What to conclude if we get a result where the study fails on the lower side, i.e., below 80%...

❝ 2. What to conclude if we get a result where the study fails on the upper side, i.e., above 125%...


See this post and scroll down to formula (7.1). The text below (esp. 2. and footnote h.) will answer your questions.
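In short (mirroring the classification used in the script below): if the entire 90% CI lies outside the 80–125% range, conclude inequivalence; if the CI merely crosses one of the limits, the outcome is indecisive (BE not demonstrated). A minimal sketch in R, where the CV, PE, and n are hypothetical numbers chosen for illustration:

library(PowerTOST)
# 90% CI entirely below 80%: inequivalent
round(100 * CI.BE(CV = 0.20, pe = 0.70, n = 24), 2) # ≈ 63.45  77.22
# 90% CI crossing the lower limit: indecisive (BE not demonstrated)
round(100 * CI.BE(CV = 0.30, pe = 0.88, n = 24), 2) # ≈ 76.08 101.78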

❝ 3. What to conclude when the study's post hoc power is less than 80 or 85%?


Nothing useful. Post hoc (a.k.a. a posteriori, retrospective) power is completely irrelevant for the BE assessment. Stop estimating it. See also the vignette of the R package PowerTOST.
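To see why it is irrelevant: study #5 in the simulations further down passed BE although its post hoc power was only ~45%. A quick check with the (rounded) values from that row of the table:

library(PowerTOST)
# values of study #5 taken from the table below (rounded to 4 significant digits)
suppressMessages(round(100 * CI.BE(CV = 0.3626, pe = 0.9731, n = 27), 2))
# ≈ 82.6 114.6: CI within 80–125%, the study passes
suppressMessages(power.TOST(CV = 0.3626, theta0 = 0.9731, n = 27))
# ≈ 0.45: low post hoc 'power', yet the study passed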

❝ What are the main factors associated with failure, apart from human or instrument error?


You planned the study based on assumptions (T/R-ratio, variability, dropout rate) and for a desired (target) power. If at least one of the assumptions is not fulfilled (see the post mentioned above), you may lose power and the chance of failing increases. Even if all assumptions turn out to be exactly realized in the study, the chance of failing is \(\small{\beta=1-\pi}\), where \(\small{\beta}\) is the probability of the Type II Error (producer’s risk) and \(\small{\pi}\) the desired power.
In other words, if you plan studies for 80% power, one out of five will fail by pure chance. That’s life.
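As a quick check with the planning values used in the example below (a 2×2 design, CV 0.25, T/R-ratio 0.95, and the sample size of 28 that sampleN.TOST() returns):

library(PowerTOST)
pi.achieved <- power.TOST(CV = 0.25, theta0 = 0.95, n = 28) # achieved power
signif(c(power = pi.achieved, beta = 1 - pi.achieved), 4)   # ≈ 0.8074, 0.1926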

Simple example in R:

library(PowerTOST)
set.seed(123456)# for a reproducible example
CV      <- 0.25 # assumed CV
theta0  <- 0.95 # assumed T/R-ratio
do.rate <- 0.10 # anticipated dropout rate (10%)
# defaults of sampleN.TOST(): targetpower = 0.80, design = "2x2"
studies <- 20   # number of simulated 'actual' studies
N       <- sampleN.TOST(CV = CV, theta0 = theta0, details = FALSE,
                        print = FALSE)[["Sample size"]]
# first row: the study as planned; the following 20 rows: studies where
# the CV, the T/R-ratio, and (dropouts!) the sample size deviate by
# chance from the assumptions
res     <- data.frame(study = c("as planned", 1:studies),
                      CV = c(CV, rnorm(mean = CV, n = studies, sd = 0.05)),
                      theta0 = c(theta0, rnorm(mean = theta0, n = studies, sd = 0.05)),
                      n = c(N, round(runif(n = studies, min = N*(1-do.rate), max = N))),
                      CL.lower = NA, CL.upper = NA, BE = "fail", assessment = NA,
                      power = NA, stringsAsFactors = FALSE) # no factors in R < 4.0
for (j in 1:nrow(res)) {
  # 90% confidence interval in percent
  res[j, 5:6]  <- round(100*CI.BE(CV = res$CV[j], pe = res$theta0[j],
                                  n = res$n[j]), 2)
  if (res$CL.lower[j] >= 80 & res$CL.upper[j] <= 125) {
    res$BE[j]         <- "pass"           # CI entirely within 80-125%
    res$assessment[j] <- "equivalent"
  } else {
    if (res$CL.lower[j] > 125 | res$CL.upper[j] < 80) {
      res$assessment[j] <- "inequivalent" # CI entirely outside 80-125%
    } else {
      res$assessment[j] <- "indecisive"   # CI crosses one of the limits
    }
  }
  # post hoc power (messages for unbalanced sequences suppressed)
  res$power[j] <- suppressMessages(
                     power.TOST(CV = res$CV[j], theta0 = res$theta0[j],
                                n = res$n[j]))
}
res[, c(2:3, 9)] <- signif(res[, c(2:3, 9)], 4) # round for display
txt <- paste(sprintf("%1.0f%%", 100*length(which(res$BE == "fail"))/studies),
             "of actual studies failed\n")
cat(txt); print(res, row.names = FALSE)

Gives

25% of actual studies failed
      study     CV theta0  n CL.lower CL.upper   BE assessment  power
 as planned 0.2500 0.9500 28    84.91   106.28 pass equivalent 0.8074
          1 0.2917 0.9123 26    79.66   104.48 fail indecisive 0.4729
          2 0.2362 1.0130 25    90.47   113.39 pass equivalent 0.8923
          3 0.2322 0.9519 28    85.75   105.68 pass equivalent 0.8647
          4 0.2544 0.9595 28    85.60   107.55 pass equivalent 0.8274
          5 0.3626 0.9731 27    82.64   114.59 pass equivalent 0.4513
          6 0.2917 0.9286 26    81.09   106.35 pass equivalent 0.5496
          7 0.3156 0.9508 26    82.15   110.05 pass equivalent 0.5534
          8 0.3751 0.9852 27    83.23   116.63 pass equivalent 0.4154
          9 0.3084 0.9986 27    86.80   114.88 pass equivalent 0.6819
         10 0.2287 0.9190 26    82.56   102.29 pass equivalent 0.6927
         11 0.2002 0.9072 26    82.58    99.67 pass equivalent 0.7181
         12 0.1943 0.9535 26    87.02   104.47 pass equivalent 0.9386
         13 0.2472 0.8977 26    79.97   100.77 fail indecisive 0.5040
         14 0.3087 0.8126 26    70.42    93.76 fail indecisive 0.0712
         15 0.3027 0.8935 26    77.64   102.83 fail indecisive 0.3582
         16 0.2529 0.9069 25    80.38   102.33 pass equivalent 0.5300
         17 0.2132 1.0280 28    93.38   113.17 pass equivalent 0.9547
         18 0.2965 1.0010 27    87.44   114.54 pass equivalent 0.7285
         19 0.3334 1.0020 26    85.91   116.91 pass equivalent 0.5542
         20 0.2780 0.8942 26    78.56   101.78 fail indecisive 0.4108

A little bit of cheating, because neither the T/R-ratio nor the CV actually follows a normal distribution. But you get the idea. Some of the studies pass even with low post hoc power. An example is #5, where the “worse” CV is counteracted by a “better” T/R-ratio. The post hoc power of just ~45% is not relevant; it was a lucky punch.
When you increase the number of simulated studies, you will sooner or later end up with roughly 20% failing.
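A rough confirmation of that (a hypothetical sketch, not part of the script above): simulate many studies in which the assumptions are exactly realized; the long-run failure rate approaches \(\small{1-\pi}\), here ≈ 19%, i.e., roughly one in five.

library(PowerTOST)
set.seed(123456)
CV     <- 0.25              # true (exactly realized) CV
theta0 <- 0.95              # true T/R-ratio
n      <- 28                # no dropouts, balanced 2x2 crossover
mse    <- CV2mse(CV)        # within-subject variance on the log scale
se     <- sqrt(2 * mse / n) # SE of the difference of treatment means
df     <- n - 2
sims   <- 1e4L              # number of simulated studies
pass   <- logical(sims)
for (j in seq_len(sims)) {
  pe.sim  <- exp(rnorm(1, mean = log(theta0), sd = se)) # simulated PE
  CV.sim  <- mse2CV(mse * rchisq(1, df) / df)           # simulated CV
  ci      <- CI.BE(CV = CV.sim, pe = pe.sim, n = n)     # 90% CI (lower, upper)
  pass[j] <- ci[1] >= 0.80 & ci[2] <= 1.25
}
mean(!pass) # close to 1 - power.TOST(CV = CV, theta0 = theta0, n = n), ≈ 0.19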

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
