unique_one
☆    

India,
2014-07-08 10:58
(3810 d 20:19 ago)

Posting: # 13234
Views: 17,046
 

 Results Analysis [Study Assessment]

Dear All,

Results obtained from a 2-way crossover study in 28 volunteers are given below:

90% CI AUC: 98.35 – 132.50
90% CI Cmax: 95.38 – 127.80

T/R ratio of AUC: 112.2
T/R ratio of Cmax: 109.5

Intrasubject CV% AUC: 27.3
Intrasubject CV% Cmax: 28.2

Questions:
  1. What is the primary reason for the BE failure?
  2. Would reformulation be needed?
Regards,
unique_one.


Edit: Category changed. [Helmut]
ElMaestro
★★★

Denmark,
2014-07-08 12:08
(3810 d 19:09 ago)

@ unique_one
Posting: # 13235
Views: 15,149
 

 Results Analysis

Hi unique_one,

Excellent example.

❝ Questions:


❝ 1. What is the primary reason for the BE failure?


No one can tell. It might be a bioinequivalent product, or it might be a bioequivalent product. Your result cannot distinguish the two.

❝ 2. Would reformulation be needed?


If you think the two products are bioinequivalent, then obviously yes.
If you think the two products are bioequivalent, with the estimated GMR reflecting true product performance, then you might not be able to afford the sample size necessary for another study – in which case reformulation is necessary.
You might opt for a repeat study, too. You can probably assume any GMR within the CI (although assuming a GMR close to one of the extremes is likely a bit unwise and unethical; Hötzi rants are a possibility).
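To illustrate with the R package PowerTOST (which shows up later in this thread) – a hedged sketch; the second GMR is an arbitrary value closer to the upper extreme, with the reported CV of 27.3%:

library(PowerTOST)
sampleN.TOST(CV = 0.273, theta0 = 1.122, print = FALSE)[["Sample size"]] # at the PE
sampleN.TOST(CV = 0.273, theta0 = 1.20,  print = FALSE)[["Sample size"]] # near the extreme: n explodes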

There is little literature covering such situations, but I know people are working on it - publication within 12 months or so...:-D

Pass or fail!
ElMaestro
unique_one
☆    

India,
2014-07-08 12:42
(3810 d 18:35 ago)

@ ElMaestro
Posting: # 13236
Views: 15,117
 

 Results Analysis

Hello ElMaestro,

Thank you very much for the info!

However, you mentioned that, considering the obtained T/R ratio, a higher sample size would be required.

Could you please let me know the approximate sample size if the study is repeated?

Secondly, our formulation experts mentioned that since the T/R ratio is close to 110% for both parameters, reformulation might not be required in this case.

Please advise.

Thanks,
unique_one.
Helmut
★★★
Vienna, Austria,
2014-07-08 14:00
(3810 d 17:17 ago)

@ unique_one
Posting: # 13237
Views: 15,258
 

 Don’t hurry; explore consequences

Hi!

❝ However, you mentioned that, considering the obtained T/R ratio, a higher sample size would be required.

❝ Could you please let me know the approximate sample size if the study is repeated?


I strongly suggest not to use values obtained in the study hoping that you get exactly the same ones in a repeated one. They are estimates, not “carved in stone”. Consider reading one of my presentations about sample size estimation. You will see that in your case (similar CVs of AUC and Cmax) the ratio is more important. To give you a first idea (using exactly the values for AUC): 78 subjects for 80% power and 108 for 90%.
Get R and the package PowerTOST and run a sensitivity analysis according to ICH E9:

The method by which the sample size is calculated should be given in the protocol, together with the estimates of any quantities used in the calculations (such as variances, mean values, […] difference to be detected). The basis of these estimates should also be given. It is important to investigate the sensitivity of the sample size estimate to a variety of deviations from these assumptions and this may be facilitated by providing a range of sample sizes appropriate for a reasonable range of deviations from assumptions.
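For instance, a minimal sketch with PowerTOST (the grid of deviations is an illustrative choice, not a recommendation):

library(PowerTOST)
# Reproducing the numbers above (AUC: CV 27.3%, GMR 1.122), 2x2 design:
sampleN.TOST(CV = 0.273, theta0 = 1.122)                     # 78 for 80% power
sampleN.TOST(CV = 0.273, theta0 = 1.122, targetpower = 0.9)  # 108 for 90% power
# Sensitivity analysis in the spirit of ICH E9: a grid of deviations
# from the assumptions.
grid   <- expand.grid(CV = c(0.25, 0.273, 0.30), theta0 = c(1.10, 1.122, 1.15))
grid$n <- sapply(seq_len(nrow(grid)), function(j)
            sampleN.TOST(CV = grid$CV[j], theta0 = grid$theta0[j],
                         print = FALSE)[["Sample size"]])
print(grid, row.names = FALSE)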


❝ unique_one.


Always remember that you are absolutely unique.
   Just like everyone else.
     Margaret Mead

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
unique_one
☆    

India,
2014-07-08 16:01
(3810 d 15:16 ago)

@ Helmut
Posting: # 13240
Views: 15,095
 

 Don’t hurry; explore consequences

Dear Helmut,

Thanks for the info!


I just wanted to add one more piece of information!

In the study, six subjects dropped out due to different reasons.

Therefore, only 22 evaluable subjects were available for PK calculations and BE evaluation.

Could this have resulted in higher intrasubject variability, due to which the study may have failed?

Looking forward to hearing from you.


regards,
unique_one


Edit: Please don’t open new posts if you want to add something; I merged your follow-up post. You can edit your posts for 24 hours (see also the Forum’s Policy). [Helmut]
Helmut
★★★
Vienna, Austria,
2014-07-08 19:39
(3810 d 11:39 ago)

@ unique_one
Posting: # 13244
Views: 15,141
 

 The smaller a study, the more uncertain its results

Hi,

❝ In the study, six subjects dropped out due to different reasons.

❝ Therefore, only 22 evaluable subjects were available for PK calculations and BE evaluation.


6/28 – that’s a lot! Which were the “different reasons”? How many subjects completed each sequence? The latter information is only important if n1 ≠ n2 (imbalanced sequences).

❝ Could this have resulted in higher intrasubject variability, due to which the study may have failed?


Possible, but unlikely. With your remaining sample size the CVs and ratios should be pretty “stable”. To give you an impression we can calculate confidence limits of the CV. Let’s assume two cases: a CV of 28% obtained from 28 and from 22 subjects; upper CL, α = 0.2:
   n  CL of CV (%) 
 ──────────────────
  28     32.26     
  22     33.03     

As expected the estimate from 28 subjects is more precise (lower upper CL), but the gain compared to 22 subjects is not dramatic.
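This can be reproduced with PowerTOST’s CVCL(); in a 2×2 crossover the residual degrees of freedom are n – 2:

library(PowerTOST)
# upper confidence limit of a CV of 28% at alpha = 0.2
CVCL(CV = 0.28, df = 28 - 2, side = "upper", alpha = 0.2)  # ~32.26%, as in the table
CVCL(CV = 0.28, df = 22 - 2, side = "upper", alpha = 0.2)  # ~33.03%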

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
unique_one
☆    

India,
2014-07-09 08:59
(3809 d 22:19 ago)

@ Helmut
Posting: # 13246
Views: 14,997
 

 The smaller a study, the more uncertain its results

Hi Helmut,

Thank you very much for sharing your ideas!

The dropout reasons were as follows:
2 subjects withdrew for personal reasons.
2 subjects dropped out due to protocol non-compliance.
1 subject dropped out due to an adverse event.
1 subject did not report for the second period of the study.

Regarding the imbalanced sequences, please find the details below:

12 subjects completed the RT sequence
10 subjects completed the TR sequence

Looking forward to your feedback.

Regards,
unique_one.
unique_one
☆    

India,
2014-07-10 09:09
(3808 d 22:09 ago)

@ unique_one
Posting: # 13252
Views: 15,015
 

 The smaller a study, the more uncertain its results

Dear Helmut,

Any update on the above query?

I just wanted to know the impact of the dropout rate and the imbalanced sequences on the study.

Thanks and Regards,
unique_one.
d_labes
★★★

Berlin, Germany,
2014-07-08 14:03
(3810 d 17:15 ago)

@ unique_one
Posting: # 13238
Views: 15,222
 

 156 ?

Dear unique_one.

❝ Could you please let me know the approximate sample size if the study is repeated?


Simple enough using PowerTOST:
Approach: The Believer’s ("Carved in stone")
AUC
sampleN.TOST(CV=0.273, theta0=1.122, print=F)
  Design alpha    CV theta0 theta1 theta2 Sample size Achieved power Target power
1    2x2  0.05 0.273  1.122    0.8   1.25          78       0.802053          0.8
Cmax
sampleN.TOST(CV=0.282, theta0=1.095, print=F)
  Design alpha    CV theta0 theta1 theta2 Sample size Achieved power Target power
1    2x2  0.05 0.282  1.095    0.8   1.25          56      0.8039018          0.8


Approach: The Conservative’s
AUC
sampleN.TOST(CV=0.3, theta0=1.15, print=F)
  Design alpha  CV theta0 theta1 theta2 Sample size Achieved power Target power
1    2x2  0.05 0.3   1.15    0.8   1.25         156      0.8030697          0.8
Cmax
sampleN.TOST(CV=0.3, theta0=1.1, print=F)
  Design alpha  CV theta0 theta1 theta2 Sample size Achieved power Target power
1    2x2  0.05 0.3    1.1    0.8   1.25          68      0.8073325          0.8


Thus a relatively conservative estimate of the sample size (even more conservative assumptions about the CV and GMR are imaginable) would be 156. I have seldom seen such a high sample size in BE studies.

Regards,

Detlew
unique_one
☆    

India,
2014-07-08 16:03
(3810 d 15:14 ago)

@ d_labes
Posting: # 13241
Views: 15,049
 

 156 ?

Dear d_labes,

Thanks for sharing information!

Regards,
unique_one
Shuanghe
★★  

Spain,
2014-07-08 14:57
(3810 d 16:21 ago)

@ unique_one
Posting: # 13239
Views: 15,112
 

 Results Analysis

Hi unique_one,

❝ Secondly, our formulation experts mentioned that since the T/R ratio is close to 110% for both parameters, reformulation might not be required in this case.


I don't think going forward with the same formulation is a good idea.

Apart from what Helmut and Detlew said, you can try bootstrapping data sets with a sample size of 28, repeated about 10,000 times, to get a sense of your data (ratio, ISCV, etc.), and also bootstrap the larger sample sizes estimated by Helmut/Detlew to see your chance of success.

All the best,
Shuanghe
unique_one
☆    

India,
2014-07-08 16:08
(3810 d 15:09 ago)

@ Shuanghe
Posting: # 13243
Views: 15,155
 

 Results Analysis

Dear Shuanghe,

Thanks for sharing info!

regards,
unique_one
Samaya B
☆    

India,
2014-07-09 09:58
(3809 d 21:19 ago)

@ unique_one
Posting: # 13247
Views: 15,040
 

 Results Analysis

Dear Shuanghe,

Can you please explain in some more detail how to use a bootstrapped data set to get a sense of the data, and how to bootstrap with a larger sample size?

Thanks in advance.

Regards,
Samaya.
Shuanghe
★★  

Spain,
2014-07-09 20:26
(3809 d 10:52 ago)

@ Samaya B
Posting: # 13250
Views: 15,094
 

 bootstrap

Hi Samaya,

❝ Can you please explain in some more detail how to use a bootstrapped data set to get a sense of the data, and how to bootstrap with a larger sample size?


Many programs can perform a bootstrap. If you use SAS, try PROC SURVEYSELECT: METHOD=URS gives sampling with replacement; specify the sample size (say 28) with SAMPSIZE=28* and the number of replicates (say 1000) with REPS=1000; use the subject as the selection unit with SAMPLINGUNIT subject;.

* Note that the sample size can be greater than in your original data, since this is sampling with replacement.

Now you have 1000 data sets, each with 28 subjects. You can do the BE analysis as usual (add a BY replicate statement to your SAS code), so for each replicate you'll have the T/R ratio, 90% CI, ISCV, etc. – 1000 of each. Obviously the BE result of each replicate will differ slightly; now you can see how they vary.

However, there are some drawbacks, such as repeated subject numbers and sometimes extremely unbalanced sequences (e.g., in a certain replicate you might have 20 subjects with TR and 8 with RT).

You can generate dummy subject numbers to replace the original ones, and remove replicates with extremely unbalanced sequences (whatever you think is unlikely to occur in real life; for example, in a BE study with 100 subjects (50 TR + 50 RT), after 15 dropouts 45 TR + 40 RT might be likely but 50 TR + 35 RT might not) before doing the BE analysis.

Also, you can try adding random noise to your log-PK data based on the intrasubject variability of the formulation before the analysis. I'll leave that for you to try.

Please read the SAS manual for details.
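For readers without SAS, here is a rough base-R analogue of the resampling and renumbering steps described above (assumptions: a data.frame bedata with one row per subject and period and a column subject; the names are hypothetical):

set.seed(42)
reps  <- 1000                        # analogue of REPS=1000
nsamp <- 28                          # analogue of SAMPSIZE=28
ids   <- unique(bedata$subject)
boot  <- do.call(rbind, lapply(seq_len(reps), function(r) {
  drawn <- sample(ids, size = nsamp, replace = TRUE)  # METHOD=URS
  do.call(rbind, lapply(seq_along(drawn), function(k) {
    tmp           <- bedata[bedata$subject == drawn[k], ]
    tmp$subject   <- k               # dummy subject number, as described above
    tmp$replicate <- r               # analogue of the BY replicate variable
    tmp
  }))
}))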

All the best,
Shuanghe
Weidson
☆    

Brazil,
2021-07-03 03:36
(1259 d 03:42 ago)

@ Shuanghe
Posting: # 22455
Views: 6,919
 

 bootstrap

Dear Shuanghe,

Would it be possible for you to publish the SAS code (or R code, if available) for your proposal? I would be very grateful. At the moment I am studying bioequivalence with alternative methods such as the bootstrap for the 2×2 crossover design. I wasn't able to find any SAS code available for bootstrapping crossover experiments (I'm not a programmer). I intend to use it to verify the robustness of a conclusion of bioinequivalence. :ok:

Does any member of this forum have experience with bootstrap methods? Do we have any code that can be applied to failed crossover designs? :confused:

Best regards.
Helmut
★★★
Vienna, Austria,
2021-07-08 15:50
(1253 d 15:27 ago)

@ Weidson
Posting: # 22463
Views: 6,703
 

 Bootstrapping BE: An attempt in 🇷

Hi Weidson,

❝ I wasn't able to find any SAS code available for bootstrapping crossover experiments.


No idea about SAS; a lengthy (157 lines) R script is at the end.
I simulated a failed study (underpowered). Output:

Simulation goalposts
  n     :  12
  theta0:  95.00%
  CV    :  22.00%
  Power :  45.78%
  n.tgt :  22
Simulated data set
  PE    :  95.76%
  90% CI:  79.49% – 115.37%
  CV    :  25.58%
  Power :  29.84%
Sample size estimation based on simulated data set
  n     :  28
  Power :  81.73%
10,000 bootstrapped studies (each n = 22)
  PE (median)      :  95.55%
    95% CI         :  83.71% – 110.84%
  lower CL (median):  84.43%
    95% CI         :  75.52% –  97.12%
  upper CL (median): 108.07%
    95% CI         :  92.31% – 126.57%
  CV (median)      :  23.31%
    95% CI         :  16.89% –  28.05%
  Empiric power    :  78.07%


[Histogram of the 10,000 bootstrapped studies]

Note that we would need 22 subjects based on the simulation goalposts but 28 would be required based on the simulated data set. Though the PE was slightly ‘better’ (95.76% instead of 95%), the CV was ‘worse’ (25.58% instead of 22%). Hence, the bootstrapped result (with n = 22) is expected to be slightly underpowered (empiric power is the number of passing studies / the number of simulations).

Show the first six rows of the simulated data and the bootstrapped studies as well as the structure of the data.frames:

print(head(data), row.names = FALSE); str(data)
 subject period sequence treatment             PK
       1      1       RT         R 0.987956758728
       1      2       RT         T 1.138788480791
       2      1       RT         R 1.290885185744
       2      2       RT         T 0.894663486095
       3      1       RT         R 1.257311382884
       3      2       RT         T 0.879437588333
'data.frame':   24 obs. of  5 variables:
 $ subject  : Factor w/ 12 levels "1","2","3","4",..: 1 1 2 2 3 3 4 4 5 5 ...
 $ period   : Factor w/ 2 levels "1","2": 1 2 1 2 1 2 1 2 1 2 ...
 $ sequence : Factor w/ 2 levels "RT","TR": 1 1 1 1 1 1 1 1 1 1 ...
 $ treatment: Factor w/ 2 levels "R","T": 1 2 1 2 1 2 1 2 1 2 ...
 $ PK       : num  0.988 1.139 1.291 0.895 1.257 ...

print(head(res), row.names = FALSE); str(res)
     PE  lower  upper    BE             CV
  84.56  75.84  94.27 FALSE 0.203303322180
  90.14  80.44 101.01  TRUE 0.217749281620
  88.38  78.42  99.60 FALSE 0.232894145006
 121.50 108.45 136.13 FALSE 0.221183626922
  84.70  77.56  92.50 FALSE 0.169932184060
  95.79  84.69 108.33  TRUE 0.239046540681
'data.frame':   10000 obs. of  5 variables:
 $ PE   : num  84.6 90.1 88.4 121.5 84.7 ...
 $ lower: num  75.8 80.4 78.4 108.5 77.6 ...
 $ upper: num  94.3 101 99.6 136.1 92.5 ...
 $ BE   : logi  FALSE TRUE FALSE FALSE FALSE TRUE ...
 $ CV   : num  0.203 0.218 0.233 0.221 0.17 ...


Dunno why we need bootstrapping at all. We could directly use the results of the data set:

simple <- 100 * CI.BE(pe = PE / 100, CV = CVw, n = 22)
cat(sprintf("%.2f%% (%.2f%% \u2013 %.2f%%)", PE, simple[1], simple[2]), "\n")
95.76% (84.01% – 109.15%)

IMHO, that’s sufficiently close to the bootstrap’s 95.55% (84.43% – 108.07%). Granted, if real-world data deviate from the lognormal (e.g., contain discordant outliers), the bootstrapped results (being nonparametric) may be more reliable.

❝ I intend to use it to verify the robustness of a conclusion of bioinequivalence. :ok:


Not sure what you mean by that. For any failed study the bloody post hoc power will be <50%. When you give the PE and CV in the function sampleN.TOST() you immediately get the number of subjects you would have needed to demonstrate BE (see the example above and lines 91–92 of the script).
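For instance, with the PE and CV of the simulated data set above:

library(PowerTOST)
sampleN.TOST(CV = 0.2558, theta0 = 0.9576)  # should essentially reproduce the n = 28 above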

❝ Does any member of this forum have experience with bootstrap methods?


ElMaestro. ;-)

❝ Do we have any code that can be applied to failed crossover designs? :confused:


I hope the one at the end helps (never done that before, bugs are possible). If you want to use your own data, ignore lines 19–44 and provide a data.frame of results (named data). The column names are mandatory: subject (integer), period (integer), sequence ("RT" or "TR"), treatment ("R" or "T"), PK (untransformed).
Once you have that, use this code

cols       <- c("subject", "period", "sequence", "treatment")
data[cols] <- lapply(data[cols], factor)

to convert the columns to factors needed in the model.


library(PowerTOST)
empty <- function(n, design = "2x2x2") {
  # returns empty factorized data.frame
  if (!design %in% c("2x2x2", "2x2"))
    stop("Only simple crossover implemented.")
  data <- data.frame(subject   = rep(1:n, each = 2),
                     period    = rep(1:2L, n),
                     sequence  = c(rep(c("RT"), n),
                                   rep(c("TR"), n)),
                     treatment = c(rep(c("R", "T"), n/2),
                                   rep(c("T", "R"), n/2)),
                     PK = NA_real_)
  cols       <- c("subject", "period", "sequence", "treatment")
  data[cols] <- lapply(data[cols], factor)
  return(data)
}

alpha   <- 0.05 # guess ;-)
CV      <- 0.22 # within-subject CV
theta0  <- 0.95 # T/R-ratio
n       <- 12L  # for these values underpowered (22 needed for >=80% power)
n.new   <- 22L  # bootstrap sample size
nsims   <- 1e4L # number of simulations
setseed <- TRUE # for reproducibility (FALSE for other ones)
data    <- empty(n = n) # gets the structure (balanced sequences)
if (setseed) {
  set.seed(123456) # fixed seed
} else {
  set.seed(NULL)   # random seed
}
# PK responses in raw scale based on the lognormal distribution
sd      <- CV2se(CV)
T       <- data.frame(subject = 1:n, PK = exp(rnorm(n = n,
                                                    mean = log(theta0),
                                                    sd = sd)))
R       <- data.frame(subject = 1:n, PK = exp(rnorm(n = n,
                                                    mean = log(1),
                                                    sd = sd)))
for (j in 1:nrow(data)) { # assign responses
  if (data$treatment[j] == "R")
    data$PK[j] <- R$PK[which(R$subject == data$subject[j])]
  if (data$treatment[j] == "T")
    data$PK[j] <- T$PK[which(T$subject == data$subject[j])]
}
ow    <- options("digits")
options(digits = 12) # more digits for ANOVA
on.exit(options(ow)) # ensure that options are reset if an error occurs
# keep it simple (nested model with "subject %in% sequence" is bogus)
orig  <- lm(log(PK) ~ sequence + subject + period + treatment,
                      data = data)
CVw   <- mse2CV(anova(orig)["Residuals", "Mean Sq"])
PE    <- 100 * exp(coef(orig)[["treatmentT"]])
CI    <- 100 * exp(confint(orig, "treatmentT",
                           level = 1 - 2 * alpha))
res   <- data.frame(PE = rep(NA_real_, nsims),
                    lower = NA_real_, upper = NA_real_,
                    BE = FALSE, CV = NA_real_)
pb    <- txtProgressBar(0, 1, 0, char = "\u2588", width = NA, style = 3)
dummy <- data.frame(subject   = factor(NA_integer_, levels = 1:n.new),
                    period    = factor(NA_integer_, levels = c(1, 2)),
                    sequence  = factor(NA_character_, levels = c("RT", "TR")),
                    treatment = factor(NA_character_, levels = c("R", "T")),
                    PK = NA_real_)
for (j in 1L:nsims) {
  boot <- dummy
  idx  <- sample(1:n, size = n.new, replace = TRUE) # the workhorse
  for (k in 1:n.new) {
    tmp         <- data[data$subject == idx[k], ]
    tmp$subject <- k
    boot        <- rbind(boot, tmp)    # append values
  }
  boot <- boot[!is.na(boot$subject), ] # remove first row with NAs
  options(digits = 12)
  on.exit(options(ow))
  mod         <- lm(log(PK) ~ sequence + subject + period + treatment,
                              data = boot)
  res[j, 5]   <- mse2CV(anova(mod)["Residuals", "Mean Sq"])
  # rounding acc. to GLs
  res[j, 1]   <- round(100 * exp(coef(mod)[["treatmentT"]]), 2)
  res[j, 2:3] <- round(100 * exp(confint(mod, "treatmentT",
                                        level = 1 - 2 * alpha)), 2)
  if (res$lower[j] >= 80 & res$upper[j] <= 125) res$BE[j] <- TRUE
  setTxtProgressBar(pb, j/nsims)
} # bootstrapping in progress...
close(pb)
probs   <- c(0.025, 0.5, 0.975) # nonparametric 95% CI and median
PE.q    <- quantile(res$PE,    probs = probs)
lower.q <- quantile(res$lower, probs = probs)
upper.q <- quantile(res$upper, probs = probs)
CV.q    <- quantile(res$CV,    probs = probs)
tmp     <- sampleN.TOST(CV = CVw, theta0 = PE / 100,
                        details = FALSE, print = FALSE)
txt     <- paste0("Simulation goalposts",
                  "\n  n     : ", sprintf("%3.0f", n),
                  "\n  theta0: ", sprintf("%6.2f%%", 100 * theta0),
                  "\n  CV    : ", sprintf("%6.2f%%", 100 * CV),
                  "\n  Power : ", sprintf("%6.2f%%",
                  100 * power.TOST(CV = CV, theta0 = theta0, n = n)),
                  "\n  n.tgt : ", sprintf("%3.0f",
                  sampleN.TOST(CV = CV, theta0 = theta0,
                               details = FALSE, print = FALSE)[["Sample size"]]),
                  "\nSimulated data set",
                  "\n  PE    : ", sprintf("%6.2f%%", PE),
                  "\n  90% CI: ", sprintf("%6.2f%% \u2013 %6.2f%%",
                  CI[1], CI[2]),
                  "\n  CV    : ", sprintf("%6.2f%%", 100 * CVw),
                  "\n  Power : ", sprintf("%6.2f%%",
                  100 * power.TOST(CV = CVw, theta0 = PE / 100, n = n)),
                  "\nSample size estimation based on simulated data set",
                  "\n  n     : ",
                  sprintf("%3.0f", tmp["Sample size"]),
                  "\n  Power : ",
                  sprintf("%6.2f%%", 100* tmp["Achieved power"]),
                  "\n", formatC(nsims, big.mark = ","),
                  " bootstrapped studies (each n = ", n.new, ")",
                  "\n  PE (median)      : ",
                  sprintf("%6.2f%%", PE.q[2]),
                  "\n    95% CI         : ",
                  sprintf("%6.2f%% \u2013 %6.2f%%",
                  PE.q[1], PE.q[3]),
                  "\n  lower CL (median): ",
                  sprintf("%6.2f%%", lower.q[2]),
                  "\n    95% CI         : ",
                  sprintf("%6.2f%% \u2013 %6.2f%%",
                  lower.q[1], lower.q[3]),
                  "\n  upper CL (median): ",
                  sprintf("%6.2f%%", upper.q[2]),
                  "\n    95% CI         : ",
                  sprintf("%6.2f%% \u2013 %6.2f%%",
                  upper.q[1], upper.q[3]),
                  "\n  CV (median)      : ",
                  sprintf("%6.2f%%", 100 * CV.q[2]),
                  "\n    95% CI         : ",
                  sprintf("%6.2f%% \u2013 %6.2f%%",
                  100 * CV.q[1], 100 * CV.q[3]),
                  "\n  Empiric power    : ",
                  sprintf("%6.2f%%", 100 * sum(res$BE / nsims)), "\n")
h       <- hist(res$PE, breaks = "FD", plot = FALSE)
plot(h, freq = FALSE, axes = FALSE, font.main = 1,
     main = paste("Histogram of",
                  formatC(nsims, big.mark = ","),
                  "bootstrapped studies"), xlab = "Point estimate",
                  col = "bisque", border = "darkgrey")
axis(1, at = pretty(h$breaks),
     labels = sprintf("%.0f%%", pretty(h$breaks)))
axis(2, las = 1)
axis(3, at = c(lower.q[2], PE.q[2], upper.q[2]),
     labels = sprintf("%.2f%%", c(lower.q[2], PE.q[2], upper.q[2])),
     tick = FALSE, line = -0.75)
abline(v = c(lower.q[2], PE.q[2], upper.q[2]),
       col = "blue", lwd = c(1, 2, 1))
abline(v = c(80, 125), lty = 2, col = "red")
legend("topright", lty = c(2, 1, 1), lwd = c(1, 2, 1), bg = "white",
       col = c("red", "blue", "blue"), cex = 0.9,
       legend = c("BE limits", "Median", "Nonparametric 90% CI"))
box()
cat(txt)



Edit: Couldn’t resist. I introduced an ‘outlier’ (divided the PK-value of subject 1 after R by three). Homework: Find out where to insert the following lines.
data$PK[1] <- data$PK[1] / 3
outl       <- data.frame(sim = 1L:nsims, count = 0L)
  outl$count[j] <- length(which(boot$PK == data$PK[1]))
max.ol     <- max(outl$count)
num.ol     <- data.frame(outliers = 0L:max.ol, bootstraps = 0L)
for (j in 1:nrow(num.ol)) {
  num.ol$bootstraps[j] <- formatC(sum(outl$count == num.ol$outliers[j]),
                                 big.mark = ",")
}
extremes       <- res[outl$sim[outl$count == max.ol], ]
extremes$width <- extremes$upper - extremes$lower
extremes$CV    <- sprintf("%.2f%%", 100 * extremes$CV)
print(num.ol, row.names = FALSE)
print(extremes[with(extremes, order(width, PE)), ], row.names = FALSE)


I got:

Simulated data set
  PE    : 104.94%
  90% CI:  80.28% – 137.19%
  CV    :  37.43%
  Power :   4.38%
Sample size estimation based on simulated data set
  n     :  58
  Power :  81.29%
10,000 bootstrapped studies (each n = 22)
  PE (median)      : 104.50%
    95% CI         :  86.72% – 129.29%
  lower CL (median):  87.58%
    95% CI         :  75.87% – 106.79%
  upper CL (median): 124.98%
    95% CI         :  97.89% – 157.60%
  CV (median)      :  33.99%
    95% CI         :  20.77% –  43.83%
  Empiric power    :  37.89%


[Histogram of the 10,000 bootstrapped studies with the ‘outlier’]

The simple (parametric) approach gave 104.94% (86.93% – 126.69%), which is wider than the bootstrapped 104.50% (87.58% – 124.98%). Reasonable at first look, because ANOVA is sensitive to outliers. Is this what you are interested in?
However, since we want to bootstrap larger studies, we have to sample with replacement (line 66). Some studies contain the ‘outlier’ as well, others none, or even more than one. In my example:

 outliers bootstraps
        0      1,452
        1      2,959
        2      2,870
        3      1,678
        4        704
        5        240
        6         77
        7         16
        8          2
        9          2

These are the most extreme bootstraps with 9 ‘outliers’ (note the PE, the CV, and the width of the CI):

     PE  lower  upper    BE     CV width
 150.07 120.72 186.55 FALSE 40.51% 65.83
 143.55 109.21 188.68 FALSE 49.51% 79.47

The bootstrapped distribution should resemble the original one. Whereas in the original we have 1/12 (8.33%) outliers, due to sampling with replacement in these extreme cases we have 9/22 (40.9%).



If we desire a substantially larger sample size, I’m not convinced that bootstrapping is useful at all.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2021-07-09 16:15
(1252 d 15:02 ago)

@ Helmut
Posting: # 22464
Views: 6,532
 

 Bootstrapping BE: An attempt in 🇷

Hi Weidson, hi all,

❝ ❝ I intend to use it to verify the robustness of a conclusion of bioinequivalence. :ok:


❝ Not sure what you mean by that. For any failed study the bloody post hoc power will be <50%. When you give the PE and CV in the function sampleN.TOST() you immediately get the number of subjects you would have needed to demonstrate BE (see the example above and lines 91–92 of the script).


❝ ❝ Does any member of this forum have experience with bootstrap methods?


❝ ElMaestro. ;-)


I do bootstrapping once in a while. Not often, just sometimes.

But I did not quite understand what you were trying to achieve, Weidson. The term "robustness" lost its meaning to me some months ago, following an opinion expressed by a regulator in relation to robustness and simulation.

Anyways, in dissolution trials we can often do a simple f2 comparison; if it "fails", bootstrapping the same data is the final attempt, and it may sometimes actually lead to approval. Perhaps your idea is to do something along similar lines with a BE trial?
In that case, Helmut's code above goes a long way towards this goal; you can probably easily extend it with BCa-derived intervals. But, let me be frank: in the current regulatory climate here and there, I have no particular reason to think it will lead to regulatory acceptance, regardless of the numerical result.

An area where I think bootstrapping is totally OK and very useful is when you want to derive a sample size and you have pilot trial data. If the residual is totally weirdly distributed in the pilot trial, then a sample size calculation in the classical fashion can be wasted time and effort, even though the final study always has to be evaluated in the usual parametric fashion involving the assumption of a normal residual. This is where a bootstrap sample size approach can be very justified and useful. But it, too, has some shortcomings, such as the assumption of the GMR: you can't easily make provisions for assuming a GMR other than what you have seen in the pilot. Nasty. :-D
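A hedged sketch of that idea in R (assumptions: a data.frame pilot in the format of Helmut's script above – columns subject, period, sequence, treatment, PK; the target GMR of 0.95 remains a plain assumption):

library(PowerTOST)
set.seed(1234)
ids <- unique(pilot$subject)
CVs <- replicate(1000, {                  # bootstrap the within-subject CV
  drawn <- sample(ids, size = length(ids), replace = TRUE)
  boot  <- do.call(rbind, lapply(seq_along(drawn), function(k) {
    tmp <- pilot[pilot$subject == drawn[k], ]
    tmp$subject <- k                      # dummy subject number
    tmp
  }))
  m <- lm(log(PK) ~ sequence + factor(subject) + period + treatment, data = boot)
  mse2CV(anova(m)["Residuals", "Mean Sq"])
})
# plan with a conservative (say, the 80th percentile) bootstrapped CV
sampleN.TOST(CV = quantile(CVs, probs = 0.8), theta0 = 0.95)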

But I digress. :-)

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2021-07-09 16:50
(1252 d 14:27 ago)

@ ElMaestro
Posting: # 22465
Views: 6,446
 

 Bootstrapping BE: Desultory thoughts

Hi ElMaestro,

❝ The term "robustness" lost its meaning to me some months ago following an opinion expressed by a regulator in relation to robustness and simulation.


Oh dear! Any details?

❝ […] in dissolution trials we can often do a simple f2 comparison and if it "fails" then bootstrapping of the same data is the final attempt and it may sometimes actually lead to approval.


Regrettably the other way ’round (see there). An ƒ2 ≥50 does not guarantee acceptance any more; the lower bootstrapped CL is preferred to be ≥50 as well.
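A minimal sketch of that bootstrap (hypothetical data: matrices dissR and dissT of individual % dissolved, one row per unit, columns at common time points; plain percentile limit, not a bias-corrected one):

f2 <- function(R, T) 50 * log10(100 / sqrt(1 + mean((R - T)^2)))  # standard f2
set.seed(7)
boot.f2 <- replicate(5000, {
  r <- dissR[sample(nrow(dissR), replace = TRUE), , drop = FALSE] # resample units
  t <- dissT[sample(nrow(dissT), replace = TRUE), , drop = FALSE]
  f2(colMeans(r), colMeans(t))                                    # f2 of mean profiles
})
quantile(boot.f2, probs = 0.05)  # lower bootstrap confidence limit, preferably >= 50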

❝ An area where I think bootstrapping is totally ok and very useful is when you want to derive a sample size and you have pilot trial data. If the residual is totally weirdly distributed in the pilot trial, then a sample size calc. in the classical fashion can be wasted time and effort even though the final study always has to be evaluated in the usual parametric fashion involving the assumption of a normal residual.


Agree – in principle.

❝ This is where a bootstrap sample size approach can be very justified and useful. But it, too, has some shortcomings. Such as the assumption of the GMR. You can't easily make provisions for assuming some other than what you have seen in the pilot. Nasty. :-D


Yep. Another issue is ‘outliers’ like in my example. Does it make sense to assume we will face them in the pivotal study as well? I hope not. Then what? Drop them from the pilot data and bootstrap that?

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
ElMaestro
★★★

Denmark,
2021-07-09 18:29
(1252 d 12:49 ago)

@ Helmut
Posting: # 22466
Views: 6,485
 

 Bootstrapping BE: Desultory thoughts

Hi Hötzi,

❝ Oh dear! Any details?


Read here.

"The Applicant should demonstrate that the consumer risk is not inflated above 5% with the proposed design and alpha expenditure rule, taking into account that simulations are not considered sufficiently robust and analytical solutions are preferred."

:lol3:

❝ Yep. Another issue is ‘outliers’ like in my example. Does it make sense to assume we will face them in the pivotal study as well? I hope not. Then what? Drop them from the pilot data and bootstrap that?


If you believe an observation is an outlier, for one reason or another, it probably does not make sense to include that observation in the planning. Or at least this sounds like a healthy argument. On the other hand, if you do take the outliers into consideration for the subsequent steps, then often the sample size just gets larger.
At any rate, that aberrant value – whether you call it an outlier or not – is more or less what causes the residual to have a bonkers distribution. There may be several of them in the worst case. And then of course there's the issue of a positive and negative residual of equal magnitude subject-wise in a 222BE design. This is more of a triviality-by-design. When it rains it pours. :-)

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2021-07-09 23:04
(1252 d 08:13 ago)

@ ElMaestro
Posting: # 22467
Views: 6,422
 

 Bootstrapping BE: Desultory thoughts

Hi ElMaestro,

❝ ❝ Oh dear! Any details?


Read here.


❝ "The Applicant should demonstrate that the consumer risk is not inflated above 5% with the proposed design and alpha expenditure rule, taking into account that simulations are not considered sufficiently robust and analytical solutions are preferred."


:lol3:


Jesusfuckingchrist!

Full ACK to everything else you wrote.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Achievwin
★★  

US,
2021-08-17 21:26
(1213 d 09:51 ago)

@ unique_one
Posting: # 22524
Views: 5,998
 

 Results Analysis

❝ 90% CI AUC: 98.35 – 132.50; T/R ratio of AUC: 112.2

❝ 90% CI Cmax: 95.38 – 127.80; T/R ratio of Cmax: 109.5


❝ Questions:

❝ 1. What is the primary reason for the BE failure?


Looking at your confidence intervals, it seems your ISCV computation may not be accurate; I get 33.6% for AUC. Your study is underpowered!
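For the record, the back-calculation with PowerTOST (assuming the CI was computed from all 28 dosed subjects rather than the 22 completers):

library(PowerTOST)
# back-calculate the ISCV from the reported 90% CI of AUC
CVfromCI(lower = 0.9835, upper = 1.3250, n = 28, design = "2x2")  # ~33.6%, as stated above
CVfromCI(lower = 0.9835, upper = 1.3250, n = 22, design = "2x2")  # with the 22 completers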

Assuming an upper bound of 34% ISCV and a ratio of 112%, you need about 136 subjects (completers) for a standard 2×2 design.

To improve the probability of success, go for a 3-way (if your RLD’s CV is tight) or 4-way crossover replicated design with the RSABE option. Then you have a shot at demonstrating bioequivalence.
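A hedged sketch of the sample size for such a replicate design (PowerTOST’s sampleN.RSABE simulates the FDA’s RSABE; CV and GMR as assumed above):

library(PowerTOST)
sampleN.RSABE(CV = 0.34, theta0 = 1.12, design = "2x2x4")  # 4-period full replicate
sampleN.RSABE(CV = 0.34, theta0 = 1.12, design = "2x3x3")  # 3-period partial replicate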

Hope this helps