d_labes

Berlin, Germany,
2013-03-15 15:56

Posting: # 10202

 ‘alpha’ of scaled ABE? [RSABE / ABEL]

Dear All!

From time to time we have had the question which combination of CVwR and GMR to apply in power calculations in order to find alpha ≤ 0.05. No one was really sure, except for the statement that power calculations at the conventional BE limits of 1.25 (or 0.8), as for the ABE test, would not be appropriate.

A rather natural choice IMHO would be to calculate the power at the border(s) of the widened BE acceptance limits.
Thus I employed power.scABEL() and power.RSABE() from R-package PowerTOST.

Here the results for each method on its own:
GMR=1.25 if CV ≤ 0.3, or GMR=exp(0.8925742·CV2se(CV)) if CV > 0.3, for the FDA RSABE
GMR=1.25 if CV ≤ 0.3, or GMR=exp(0.7601283·CV2se(CV)) if CV > 0.3, for the EMA scABEL, with the widening capped if CV > 0.5
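As a quick sketch these boundary GMRs can be reproduced in base R alone (cv2se is a stand-in I am assuming for PowerTOST's CV2se(), i.e. sqrt(log(CV²+1))):

```r
# Stand-in for PowerTOST::CV2se(): log-scale SD from a CV (assumption)
cv2se <- function(CV) sqrt(log(CV^2 + 1))

# GMR at the upper border of the (widened) acceptance range
gmr.ema <- function(CV) {            # EMA scABEL: widening capped at CV 0.5
  if (CV <= 0.3) return(1.25)
  exp(0.7601283 * cv2se(min(CV, 0.5)))
}
gmr.fda <- function(CV) {            # FDA RSABE: no cap
  if (CV <= 0.3) return(1.25)
  exp(0.8925742 * cv2se(CV))
}

gmr.fda(0.302)   # jumps to ~1.3017 right above CV 0.3 (the discontinuity)
gmr.ema(0.600)   # stays at ~1.4320 (cap applied at CV 0.5)
```

Take this as a sketch only; the actual simulations below use these values via PowerTOST.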

    CV      GMR pBE_ABEL     GMR2 pBE_RSABE
 0.200 1.250000 0.050220 1.250000  0.050153
 0.225 1.250000 0.050632 1.250000  0.051969
 0.250 1.250000 0.053143 1.250000  0.061009
 0.275 1.250000 0.061006 1.250000  0.082647
 0.300 1.250000 0.076460 1.250000  0.116120 ┐
 0.302 1.251782 0.075453 1.301734  0.048548 ┘ FDA discontinuity
 0.325 1.272352 0.067721 1.326887  0.047741
 0.350 1.294853 0.064379 1.354484  0.046156
 0.375 1.317485 0.062291 1.382326  0.043941
 0.400 1.340231 0.059506 1.410391  0.041041
 0.425 1.363072 0.055282 1.438658  0.037799
 0.450 1.385991 0.049795 1.467104  0.034401
 0.475 1.408971 0.043420 1.495709  0.031150
 0.500 1.431997 0.036960 1.524452  0.028097
 0.525 1.431997 0.040644 1.553312  0.025383
 0.550 1.431997 0.043412 1.582270  0.022875
 0.575 1.431997 0.045423 1.611308  0.020699
 0.600 1.431997 0.046647 1.640406  0.018660
 0.625 1.431997 0.047379 1.669548  0.016948
 0.650 1.431997 0.047754 1.698717  0.015457
 0.675 1.431997 0.047698 1.727897  0.014135
 0.700 1.431997 0.047192 1.757073  0.012861
 0.725 1.431997 0.046282 1.786230  0.011799
 0.750 1.431997 0.045020 1.815356  0.010865
 0.775 1.431997 0.043338 1.844438  0.010057
 0.800 1.431997 0.041320 1.873464  0.009425

[image]

Interesting! At least for me :yes:.

Both methods start at low CVs with pBE ≈ 0.05. Classical ABE evaluation controls alpha ≤ 0.05 :cool:.
Approaching CV=0.3 the probability of (falsely?) accepting BE rises, for the FDA method up to 0.116. This is understandable, since an increasing part of our simulated studies will have sample CVs >0.3, which are then evaluated with widened limits or the down-scaled linearized scaled-ABE criterion.
At CV=0.3 the pBE for the FDA method drops sharply to or below 0.05, due to the known discontinuity in the widened BE limits, and becomes lower and lower with rising CV (dominated by the point-estimator criterion? conservative?). For the EMA method the pBE stays above 0.05 up to CV=0.45. The cap at CV≥0.5 then prevents the decrease for higher CVs seen in the FDA method: pBE first rises slightly with increasing CV. This behaviour (power increasing with increasing CV) we have already seen here. But then a decrease is seen (due to the point-estimator criterion?).

If we are really estimating the type I error here, then both methods are cases of alpha inflation!? The FDA method in the vicinity of CV ≤ 0.3, and the EMA method there as well, but additionally up to CV = 0.45. And this 'alpha inflation' is much more pronounced than the one that led the EMA to question Potvin method C.

Regards,

Detlew
Helmut
Vienna, Austria,
2013-03-15 17:27

@ d_labes
Posting: # 10203

 ‘alpha’ of scaled ABE?

Dear Detlew!

» A rather natural choice IMHO would be to calculate the power at the border(s) of the widened BE acceptance limits.

Agree.

» GMR=1.25 if CV<=0.3 or GMR=exp(0.8925742*CV2se(CV)) if CV>0.3 for the FDA RSABE
» GMR=1.25 if CV<=0.3 or GMR=exp(0.7601283*CV2se(CV)) if CV>0.3 for the EMA scABEL and cap on if CV>0.5

<nitpicking>

I would prefer a GMR of 0.80 since PowerTOST employs simulations – remember the “rounding thread”? ;-)

</nitpicking>

Which designs / sample sizes have you used? I’m struggling to reproduce your numbers.

» Interesting! At least for me :yes:.

For me as well. I will ask the two Lászlós and maybe Panos Macheras.

Cheers,
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes

Berlin, Germany,
2013-03-16 20:10

@ Helmut
Posting: # 10208

 ‘alpha’ of scaled ABE: design

Dear Helmut!

» <nitpicking>
» I would prefer a GMR of 0.80 since PowerTOST employs simulations – remember the “rounding thread”? ;-)
» </nitpicking>

No rounding employed at all. Not even of the 0.7601283… constant.

» Which designs / sample sizes have you used? I’m struggling reproducing your numbers.

No explicit design was written down in the code, which means the default → "2x3x3". n=48 if I remember correctly.

Regards,

Detlew
d_labes

Berlin, Germany,
2013-03-18 08:11
(edited by d_labes on 2013-03-18 09:52)

@ d_labes
Posting: # 10216

 N=24, 48

Dear Helmut!

» No explicit design written down in the code. That means default -> "2x3x3". n=48 if I remember correctly.

I must correct myself: N=24 in the calculations above; nsims=1E6.

N=48 comes here:
    CV      GMR pBE_ABEL     GMR2 pBE_RSABE
 0.200 1.250000 0.050071 1.250000  0.050197
 0.225 1.250000 0.050103 1.250000  0.050458
 0.250 1.250000 0.050896 1.250000  0.055809
 0.275 1.250000 0.057014 1.250000  0.084654
 0.300 1.250000 0.076539 1.250000  0.147126
 0.302 1.251782 0.075132 1.301734  0.045037
 0.325 1.272352 0.066421 1.326887  0.044747
 0.350 1.294853 0.063700 1.354484  0.040058
 0.375 1.317485 0.061786 1.382326  0.033091
 0.400 1.340231 0.058847 1.410391  0.026019
 0.425 1.363072 0.054528 1.438658  0.019674
 0.450 1.385991 0.048974 1.467104  0.014709
 0.475 1.408971 0.042496 1.495709  0.010821
 0.500 1.431997 0.035153 1.524452  0.008119
 0.525 1.431997 0.041243 1.553312  0.006059
 0.550 1.431997 0.045210 1.582270  0.004553
 0.575 1.431997 0.047477 1.611308  0.003430
 0.600 1.431997 0.048748 1.640406  0.002631
 0.625 1.431997 0.049389 1.669548  0.002035
 0.650 1.431997 0.049721 1.698717  0.001592
 0.675 1.431997 0.049877 1.727897  0.001291
 0.700 1.431997 0.049954 1.757073  0.001048
 0.725 1.431997 0.049987 1.786230  0.000879
 0.750 1.431997 0.049998 1.815356  0.000727
 0.775 1.431997 0.050014 1.844438  0.000621
 0.800 1.431997 0.050007 1.873464  0.000538

[image]

Regards,

Detlew
martin

Austria,
2013-03-25 20:47

@ d_labes
Posting: # 10279

 adaptive design without adjustment

dear all!

this behavior may be due to an adaptive choice of the statistical method in the face of the data, without any "adjustment" for this adaptation. this reminds me of the "flowchart" approach many CROs are using, which should be avoided.

best regards

martin

PS.: no idea at the moment how to account for this.
PPS.: why flowcharts should not be used: link.
Helmut
Vienna, Austria,
2013-03-26 15:21

@ martin
Posting: # 10286

 α adjustment?

Dear all!

» this behavior may be due to an adaptive choice of a statistical method in face of the data without any "adjustment" for this adaptation.

Yep. The acceptance range is data-driven (a random variable depending on CVWR).

» PS.: no idea at the moment how to account for this.

Adjust α? Quick shot:

library(PowerTOST)
reg   <- "EMA"   # "EMA" for scABEL or "FDA" for RSABE
CV    <- 0.300   # intra-subject CV of reference
n     <- 24      # total sample size
                 # in case of imbalanced sequences use a vector:
                 # e.g., c(n1,n2,n3)
des   <- "2x3x3" # partial: "2x3x3" full: "2x2x4"
print <- FALSE   # intermediate results
if (reg == "EMA") { # scABEL
  ifelse (CV <= 0.5, GMR <- exp(0.7601283*CV2se(CV)),
    GMR <- exp(0.7601283*CV2se(0.5)))
  if (CV <= 0.3) GMR <- 1.25
    } else {        # RSABE
  ifelse (CV > 0.3, GMR <- exp(0.8925742*CV2se(CV)), GMR <- 1.25)
}
nsims <- 1e6
prec  <- 1e-8     # precision of bisection algo
x     <- 0.05     # target alpha
sig   <- binom.test(x*nsims, nsims,
           alternative='less')$conf.int[2] # significance limit of sim’s
nom   <- c(0.001, x)                # from conservative to target alpha
lower <- min(nom); upper <- max(nom)
delta <- upper - lower              # interval
ptm   <- proc.time()                # start timer
iter  <- 0
while (abs(delta) > prec) {         # until precision reached
  iter  <- iter + 1
  x.new <- (lower+upper)/2          # bisection
  if (reg == "EMA") {
    pBE <- power.scABEL(alpha=x.new, theta0=GMR, CV=CV, n=n,
                        design=des, nsims=nsims)
    } else {
    pBE <- power.RSABE(alpha=x.new, theta0=GMR, CV=CV, n=n,
                       design=des, nsims=nsims)
  }
  if (print) cat(iter, x.new, pBE, "\n")
  if (abs(pBE - sig) <= prec) break # precision reached
  if (pBE > sig) upper <- x.new     # move limit downwards
  if (pBE < sig) lower <- x.new     # move limit upwards
  delta <- upper - lower            # new interval
}
if (print) cat("run-time:", round((proc.time()[3]-ptm[3]),1),
  "seconds  iterations:", iter, "\n")
cat("regulator:", reg, " CV:", CV, " n:", n, " design:", des,
  "\ntarget alpha:", x, " adjusted alpha:", x.new, " pBE:", pBE, "\n")

  CV   adj alpha pBE_scABEL  adj alpha pBE_RSABE
0.200  0.050000  0.050220    0.050000  0.050153
0.225  0.049730  0.050360    0.048467  0.050359
0.250  0.047370  0.050359    0.040790  0.050360
0.275  0.040833  0.050360    0.028606  0.050359
0.300  0.031124  0.050359    0.017999  0.050360
0.302  0.031612  0.050359    0.050000  0.048548
0.325  0.035545  0.050360    0.050000  0.047741
0.350  0.037636  0.050360    0.050000  0.046156
0.375  0.039190  0.050360    0.050000  0.043941
0.400  0.041417  0.050360    0.050000  0.041041
0.425  0.045221  0.050360    0.050000  0.037799
0.450  0.050000  0.049795    0.050000  0.034401
0.475  0.050000  0.043420    0.050000  0.031150
0.500  0.050000  0.036960    0.050000  0.028097
0.525  0.050000  0.040644    0.050000  0.025383
0.550  0.050000  0.043412    0.050000  0.022875
0.575  0.050000  0.045423    0.050000  0.020699
0.600  0.050000  0.046647    0.050000  0.018660
0.625  0.050000  0.047379    0.050000  0.016948
0.650  0.050000  0.047754    0.050000  0.015457
0.675  0.050000  0.047698    0.050000  0.014135
0.700  0.050000  0.047192    0.050000  0.012861
0.725  0.050000  0.046282    0.050000  0.011799
0.750  0.050000  0.045020    0.050000  0.010865
0.775  0.050000  0.043338    0.050000  0.010057
0.800  0.050000  0.041320    0.050000  0.009425


[image]

[image]

Take my code with a grain of salt. Cases where no adjustment is necessary agree with Detlew’s results.


Note: Any result <0.05035995 is not significantly >0.05 in 1E6 simulations. However, if you want to get a more conservative estimate, replace

sig   <- binom.test(x*nsims, nsims,
           alternative='less')$conf.int[2]

by

sig   <- 0.05
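For reference, the significance limit quoted in the note can be reproduced with base R alone, no PowerTOST required:

```r
# Upper limit of the one-sided 95% binomial (Clopper–Pearson) interval for
# 0.05 * 1E6 'successes' in 1E6 simulations: empiric alphas below this
# value are not significantly above the nominal 0.05
nsims <- 1e6
x     <- 0.05
sig   <- binom.test(x*nsims, nsims, alternative='less')$conf.int[2]
sig   # ~0.05036
```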



Edit: Much faster code in this post.

Cheers,
Helmut Schütz
martin

Austria,
2013-03-26 16:19
(edited by martin on 2013-03-26 16:51)

@ Helmut
Posting: # 10287

 iteratively adjusted alpha

dear helmut!

iteratively adjusted alpha – super idea!

you may find it of interest that two weirdos published a paper about iterative alpha adjustment a few years ago (paper), solely based on simulation results (not skilled enough to provide evidence via a mathematical proof).

best regards

martin
Helmut
Vienna, Austria,
2013-03-26 18:02

@ martin
Posting: # 10289

 throw away our sample size tables?

Hi Martin!

» iteratively adjusted alpha – super idea!

If the inflation of the consumer’s risk is real (based on our assumption of estimating pBE with a GMR at the borders of the scaled acceptance range), regulators should revise their methods. This would increase the sample size and IMHO requires a multi-step procedure:
  1. Estimate the sample size for unadjusted α
  2. Estimate adjusted α for this n
  3. Estimate the sample size for adjusted α
  4. Repeat steps 2–3 until you are in limbo

» […] two weirdos published a paper […]

Not their only crime. ;-)

Cheers,
Helmut Schütz
martin

Austria,
2013-03-26 19:44
(edited by martin on 2013-03-26 20:13)

@ Helmut
Posting: # 10290

 if real

dear Helmut!

totally agree: if the inflation of the type I error is real, it is a serious risk to public health (to quote a famous colleague of mine).

best regards

martin

PS.: IMHO an a-priori specification of the method should control the type I error (i.e., the CV from historical data or, more stringently, its lower confidence limit should be greater than 30% to allow scaling).
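The idea in the PS could be sketched as follows (all names hypothetical; a one-sided chi-squared interval on the within-subject log-variance, with the degrees of freedom assumed known from the historical study):

```r
# Sketch (assumption): one-sided lower confidence limit of CVwR,
# based on df*s2wR/sigma2wR ~ chi-squared with df degrees of freedom
cv.lower.cl <- function(CV, df, alpha = 0.05) {
  s2    <- log(CV^2 + 1)                    # observed within-subject variance
  s2.lo <- s2 * df / qchisq(1 - alpha, df)  # lower confidence limit of variance
  sqrt(exp(s2.lo) - 1)                      # back-transform to a CV
}

# scaling would only be claimed if even the lower limit exceeds 0.30:
cv.lower.cl(CV = 0.40, df = 33)
```

With few degrees of freedom the lower limit can fall well below the observed CV, so a historical CVwR of 40% may not suffice to justify scaling a priori.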
Helmut
Vienna, Austria,
2013-03-26 21:18

@ martin
Posting: # 10291

 regulators don’t care?

Hi Martin!

» totally agree: If the inflation of the type-I-error is real = serious risk to public health (to quote a famous colleague of mine).

Is this the same guy who is (in)famous for publicly calling some approaches “bullshit”?

BTW, α-inflation is not unknown. See the figures and table 1 of the two Lászlós.¹

» PS.: IMHO - a-priori specification of the method should control the type-I-error (i.e. CV from historical data or more stringent the corresponding lower confidence limit should be greater than 30% to allow scaling).

I would also say so. Already noted in the past:

Besides, it is not clear at present what kind of regulatory policy will be followed when several σWR estimates (i.e. results of previous submissions) are available to a drug regulatory agency. If regulators use all available data, then they can get an improved estimate for σWR, and possibly can draw a different conclusion from the sponsor.²


But likely it is wishful thinking to expect a list of acceptance ranges or CVs of reference formulations.


  1. Endrényi L, Tóthfalusi L. Regulatory Conditions for the Determination of Bioequivalence of Highly Variable Drugs. J Pharm Pharmaceut Sci. 2009;12(1):138–49. free online
  2. Tóthfalusi L, Endrényi L, García Arieta A. Evaluation of Bioequivalence for Highly Variable Drugs with Scaled Average Bioequivalence. Clin Pharmacokinet. 2009;48(11):725–43. doi:10.2165/11318040-000000000-00000

Edit: EMA’s data set I (full replicate, n1 36, n2 37, CVWR 0.4696) needs no α-adjustment, pBE 0.012061. If the two outliers are excluded (n1 34, n2 37, CVWR 0.3216) we require an adjusted α of 0.03149 in order to maintain the consumer’s risk. The 93.70% CIs of 106.03–126.16% (Method A) and 106.10–126.24% (Method B) still lie within the scaled acceptance range of 78.79–126.93%, but it is a close shave…

Cheers,
Helmut Schütz
Helmut
Vienna, Austria,
2013-08-11 16:19

@ Helmut
Posting: # 11262

 adjusting α

Hi all!

Slow code to estimate sample sizes for adjusted α:

library(PowerTOST)
CV    <- 0.30    # intra-subject CV of reference
reg   <- "EMA"   # "EMA" for scABEL or "FDA" for RSABE
des   <- "2x2x4" # full: "2x2x4"; partial: "2x3x3"
target<- 0.80    # target power
TR    <- 0.90    # expected T/R ratio
alpha <- 0.05    # target alpha
print <- FALSE   # intermediate results
nsims <- 1e6
if (reg == "EMA") { # scABEL (rounded switching cond. acc. to GL)
  ifelse (CV <= 0.5, GMR <- exp(0.76*CV2se(CV)),
                     GMR <- exp(0.76*CV2se(0.5)))
  if (CV <= 0.3) GMR <- 1.25
    } else {        # RSABE (exact switching cond.: 0.8925742…)
  ifelse (CV > 0.3, GMR <- exp(log(1.25)/0.25*CV2se(CV)),
                    GMR <- 1.25)
}
if (reg == "EMA") { # first estimate (unadjusted alpha)
  res <- sampleN.scABEL(alpha=alpha, theta0=TR, CV=CV,
                        targetpower=target, design=des,
                        print=FALSE, details=FALSE)
  } else {
  res <- sampleN.RSABE(alpha=alpha, theta0=TR, CV=CV,
                       targetpower=target, design=des,
                       print=FALSE, details=FALSE)
}
n1   <- res[["Sample size"]]
pwr  <- res[["Achieved power"]]
infl <- NULL
if (reg == "EMA") {
  pBE <- power.scABEL(alpha=alpha, CV=CV, theta0=GMR,
                      n=n1, design=des, nsims=nsims)
  } else {
  pBE <- power.RSABE(alpha=alpha, CV=CV, theta0=GMR,
                     n=n1, design=des, nsims=nsims)
}
if (pBE > alpha) infl <- "alpha-inflation!"
cat("regulator:", reg, " design:", des,
  "\nexpected CV:", CV, "expected PE:", TR,
  " target power:", target, " achieved power:", pwr,
  "\nn:", n1, " target alpha:", alpha, " pBE at extended limit:",
  round(pBE, 5), infl, "\n")
n.adj <- n1      # start of search: sample size with unadjusted alpha
pwr   <- 0.5     # that's rather low!
prec  <- 1e-8    # precision of bisection algo
iter1 <- 0
while (pwr < target) {
  infl  <- NULL
  iter1 <- iter1 + 1
  start <- alpha   # unadjusted alpha
  stop  <- 0.05    # target alpha
  nom   <- c(0.001, start)           # from conservative to unadjusted alpha
  lower <- min(nom); upper <- max(nom)
  delta <- upper - lower             # interval
  iter2 <- 0
  while (abs(delta) > prec) {        # until precision reached
    iter2  <- iter2 + 1
    adj <- (lower+upper)/2           # bisection
    if (reg == "EMA") {
      pBE <- power.scABEL(alpha=adj, theta0=GMR, CV=CV,
                          n=n.adj, design=des, nsims=nsims)
      } else {
      pBE <- power.RSABE(alpha=adj, theta0=GMR, CV=CV,
                         n=n.adj, design=des, nsims=nsims)
    }
    if (abs(pBE - stop) <= prec) break # precision reached
    if (pBE > stop) upper <- adj       # move limit downwards
    if (pBE < stop) lower <- adj        # move limit upwards
    delta <- upper - lower             # new interval
  }
  if (reg == "EMA") {                  # current alpha and power
    pwr <- power.scABEL(alpha=adj, theta0=TR, CV=CV,
                        n=n.adj, design=des, nsims=nsims)
    pBE <- power.scABEL(alpha=adj, theta0=GMR, CV=CV,
                        n=n.adj, design=des, nsims=nsims)
    } else {
    pwr <- power.RSABE(alpha=adj, theta0=TR, CV=CV,
                       n=n.adj, design=des, nsims=nsims)
    pBE <- power.RSABE(alpha=adj, theta0=GMR, CV=CV,
                       n=n.adj, design=des, nsims=nsims)
  }
  if (print) {
    cat("iteration:", iter1, " n:", n.adj, " power:", pwr,
        " adj. alpha:", adj, "\n")
    if (.Platform$OS.type == "windows") flush.console()
  }
  if (pBE > alpha) infl <- "alpha-inflation!"
  if (!is.null(infl) | pwr < target) { # increase sample size if inflation or insufficient power
    ifelse (des == "2x2x4", n.adj <- n.adj+2,
                            n.adj <- n.adj+3)
  }
}
if (n.adj > n1) {
cat("regulator:", reg, " design:", des,
    "\nexpected CV:", CV, "expected PE:", TR,
    " target power:", target, " achieved power:", pwr,
    "\nn:", n.adj, " target alpha:", alpha, " adj. alpha:",
    round(adj, 5), " pBE at extended limit:", round(pBE, 5), infl,
    "\nSample size penalty due to adjusted alpha of",
    round(adj, 5), "( ~", round(100*(1-2*adj), 2),
    "% confidence interval):\n", n.adj-n1, "(",
    round(100*(n.adj-n1)/n1, 1), "% )\n")
  } else {
cat("regulator:", reg, " design:", des,
    "\nexpected CV:", CV, "expected PE:", TR,
    " target power:", target, " achieved power:", pwr,
    "\nn:", n.adj, " target alpha:", alpha, " adj. alpha:",
    round(adj, 5), " pBE at extended limit:", round(pBE, 5), infl,
    "\nNo sample size penalty due to adjusted alpha of", round(adj, 5),
    "(~", round(100*(1-2*adj), 2), "% confidence interval)\n")
}

[image]

I set the expected ratio to 0.9 (more realistic than 0.95 for HVDs/HVDPs). Observations: the sample-size penalty due to αadj is higher in the fully replicated design. Generally there is no penalty any more for CVWR > ~40%.
Bonus question: αadj depends on the CVWR observed in the study, which regulators probably don’t like. In a deficiency letter on Potvin’s method C the Dutch MEB stated:

“Adapting the confidence intervals based upon power is not acceptable and also not in accordance with the EMA guideline. Confidence intervals should be selected a priori, without evaluation of the power.”

(my emphases)
Let’s say in study planning we expect a CVWR of 40% (T/R 0.9, 90% power). In a fully replicate design we would estimate a sample size of 42 – but α will be inflated to 0.05555. So we adjust α to 0.04377 (91.25% CI), which would need 44 subjects. In the study the CVWR turned out to be 35%, which would call for a more conservative αadj of 0.03812 (92.38% CI). If we used the αadj from study planning, we would face an inflation to 0.05681. Maybe it is acceptable to regulators to apply a lower α (wider CI) than in study planning, but what if it is the other way round? If it is not acceptable to use a higher α (narrower CI), we would lose power.

Cheers,
Helmut Schütz
Helmut
Vienna, Austria,
2013-03-27 15:38

@ martin
Posting: # 10297

 target α in sim’s?

Hi Martin,

a technical question (see the note at the end of my previous post). Though any result <0.05035995 is not statistically significantly >0.05, shouldn’t I set the target (sig) in the algo to 0.05?

Cheers,
Helmut Schütz
martin

Austria,
2013-03-27 16:36

@ Helmut
Posting: # 10298

 target α in sim’s?

Hi Helmut!

IMHO: yes, as this is a constant and should not depend on the number of simulation runs. however, differences should be rather small when using 1E6 runs.

best regards

martin

Ps.: ... and yes, it’s the same guy who is famous for publicly calling some approaches “bullshit” :-D
Helmut
Vienna, Austria,
2013-03-28 23:20

@ martin
Posting: # 10305

 If it is real: results

Hi Martin!

» IMHO: yes as this is a constant and should not dependent on the number of simulation runs.

Oh yes, sure. Sorry for my stupidity.

» Ps.: ... and yes, it’s the same guy who is famous for publicly calling some approaches “bullshit” :-D

I always thought he should work on his social skills.

Here are the results of my sim’s for target α 0.05: partial and full replicate designs, n=24/48.

[image]

[image]

The FDA’s method would lead to a large α-inflation (as presented by Detlew above) for CVWR ≤30%. This could be corrected by adjusting α. However, since scaling is only allowed for CVWR >30%, nothing has to be done. Like the conventional TOST, the test gets more conservative with increasing CV (and n).

[image]

[image]

The EMA’s method is a different pot of tea. α-inflation is seen up to 45%! The test is closer to the nominal level than the FDA’s.
The behavior of the partial replicate with n=24 is interesting. :confused:


Code (10–25 iterations to reach convergence, runtime on my machine 15–45 seconds depending on design, n, CV):
require(PowerTOST)
reg   <- "EMA"   # "EMA" for scABEL or "FDA" for RSABE
CV    <- 0.300   # intra-subject CV of reference
n     <- 24      # total sample size
                 # in case of imbalanced sequences use a vector:
                 # e.g., c(n1,n2,n3)
des   <- "2x3x3" # partial: "2x3x3" full: "2x2x4"
                 # for others see known.designs()
print <- TRUE    # intermediate results
if (reg == "EMA") { # scABEL
  ifelse (CV <= 0.5, GMR <- exp(0.7601283*CV2se(CV)),
                     GMR <- exp(0.7601283*CV2se(0.5)))
  if (CV <= 0.3) GMR <- 1.25
  } else {          # RSABE
  ifelse (CV > 0.3, GMR <- exp(0.8925742*CV2se(CV)),
                    GMR <- 1.25)
}
nsims <- 1e6
prec  <- 1e-8    # precision of bisection algo
x     <- 0.05    # target alpha
nom   <- c(0.001, x)                # from conservative to target alpha
lower <- min(nom); upper <- max(nom)
delta <- upper - lower              # interval
ptm   <- proc.time()                # start timer
iter  <- 0
while (abs(delta) > prec) {         # until precision reached
  iter  <- iter + 1
  x.new <- (lower+upper)/2          # bisection
  if (reg == "EMA") {
    pBE <- power.scABEL(alpha=x.new, theta0=GMR, CV=CV,
                        n=n, design=des, nsims=nsims)
    } else {
    pBE <- power.RSABE(alpha=x.new, theta0=GMR, CV=CV,
                       n=n, design=des, nsims=nsims)
  }
  if (print) {                      # show progress
    if (iter == 1) cat(" i  adj. alpha       pBE\n")
    cat(format(iter, digits=2, width=2),
        format(x.new, digits=6, width=11, nsmall=7),
        format(pBE, digits=6, width=9, nsmall=6), "\n")
    if (.Platform$OS.type == "windows") flush.console()
  }
  if (abs(pBE - x) <= prec) break   # precision reached
  if (pBE > x) upper <- x.new       # move upper limit downwards
  if (pBE < x) lower <- x.new       # move lower limit upwards
  delta <- upper - lower            # new interval
}
if (print) cat("run-time:", round((proc.time()[3]-ptm[3]), 1),
               "seconds  iterations:", iter, "\n")
cat("regulator:", reg, " CV:", CV, " n:", n, " design:", des,
    "\ntarget alpha:", x, " adjusted alpha:", x.new,
    " pBE:", pBE, "(empiric alpha)\n")

Cheers,
Helmut Schütz
martin

Austria,
2013-03-28 15:08

@ Helmut
Posting: # 10304

 having alpha

Hi Helmut & simulants!

What do you think about a kind of adaptive group-sequential design with one interim analysis (i.e., estimating the CV), allowing a decision between average BE and scaled BE in the final analysis? This may solve the problem.

best regards

martin
Helmut
Vienna, Austria,
2013-03-28 23:34

@ martin
Posting: # 10306

 fixed swr

Hi Martin!

» What do you think about a kind of adaptive group-sequential design with one interim analysis (i.e. estimate CV) allowing to make a decision regarding average BE or scaled BE in the final analysis? This may solve the problem.

Nope. Actually the methods are (hidden) sequential.
Most clear in FDA’s guidance:

Determine sWR, the within-subject standard deviation (SD) of the reference product, for the pharmacokinetic (PK) parameters AUC and Cmax.

  1. If sWR < 0.294,¹ ² use the two one-sided tests procedure to determine bioequivalence (BE) for the individual PK parameter(s)
  2. If sWR ≥ 0.294,² use the reference-scaled procedure to determine BE for the individual PK parameter(s)
However, once scaling is justified (sWR ≥ 0.294), the extent of scaling is still data-driven (dependent on the sWR from the study). IMHO the problem would only vanish if regulators published a fixed value of sW for each reference, which has to be used independently of the findings in the study.


  1. sWR 0.294 ~ CVWR 0.300469…
  2. We see the worst inflation at CVWR 0.3. Maybe it would be better to use ABE if sWR ≤ 0.294 and RSABE if sWR > 0.294.
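The conversion in footnote 1 is straightforward in base R (se2cv mirrors PowerTOST's se2CV(), assumed here as sqrt(exp(se²) − 1)):

```r
# CV corresponding to the regulatory switching SD of sWR = 0.294
se2cv <- function(se) sqrt(exp(se^2) - 1)
se2cv(0.294)  # ~0.300469, i.e. CVwR ~30%
```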

Cheers,
Helmut Schütz
martin

Austria,
2013-03-29 15:07
(edited by martin on 2013-03-29 15:23)

@ Helmut
Posting: # 10311

 fixed swr

Hi Helmut !

Yep; you are totally right (I should have done my homework first ;-)). what about defining the extent of scaling after an interim analysis of the first-stage data (CV estimated in the 1st stage) within a group-sequential adaptive design?

Drawback: BUT what if the CV at interim is smaller than 0.3 ... :confused:

best regards

martin
Helmut
Vienna, Austria,
2013-03-29 16:10

@ martin
Posting: # 10312

 Yes but no but yes but no but…

Hi Martin!

» what about defining the extent of scaling after interim analysis of the first stage data (CV estimated in 1st stage) within a group-sequential adaptive design?

Theoretically OK. ;-)

» Drawback: BUT what if the CV at interim is smaller than 0.3 ... :confused:

or it is larger than 0.3 in stage 1 and in the pooled data set (n↑ = a more precise estimate) you find one which is ≤0.3? If the inflation is real (still not sure) we have a problem – only with the EMA’s method – in the CV range 0.3–0.5. IMHO there are two options:
  1. Iteratively adjust α. :-D
  2. Use a fixed α of ~0.030 for partial replicates and ~0.025 for full replicates if CVWR ≤0.5 (see plots above). This should cover the entire range of designs and sample sizes up to 96. The maximum adjustment (at 0.3) is almost independent of the sample size (see plot below). Conservative for any CV >0.3…

    [image]

Cheers,
Helmut Schütz
The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz