Sereng
USA,
2022-05-12 19:40
Posting: # 22977

 Back Calculating Sample Size [Power / Sample Size]

Dear colleagues, if you have data from a completed BE study in which the upper bound of the 90% CI of Cmax was outside the acceptance range, is it possible to calculate a new sample size that would likely meet the BE requirements (or declare futility)? The details of the completed BE study follow; note that the reference drug’s Cmax had almost twice the CV of the test drug.

Parallel Group Design
Two Groups (n=70/group)
Cmax (Ref): 478 +/- 434 (mean +/- SD)
Cmax (Test): 489 +/- 909 (mean +/- SD)
Ratio (90% CI): 109.00 (87.00-135.00)

Regards,

Biostatistically Challenged CEO
ElMaestro
Denmark,
2022-05-13 00:29
@ Sereng
Posting: # 22981

 Back Calculating Sample Size

Hi Sereng,

❝ Parallel Group Design

❝ Two Groups (n=70/group)

❝ Ratio (90% CI): 109.00 (87.00-135.00)


I am getting around CV = 156% (pooled variance estimate).

Pass or fail!
ElMaestro
Helmut
Vienna, Austria,
2022-05-13 01:10
@ ElMaestro
Posting: # 22982

 Back Calculating Sample Size

Hi ElMaestro & Sereng,

❝ ❝ Parallel Group Design

❝ ❝ Two Groups (n=70/group)

❝ ❝ Ratio (90% CI): 109.00 (87.00-135.00)


❝ I am getting around CV = 156% (pooled variance estimate).


Hhm…

library(PowerTOST)
CV <- CI2CV(lower = 0.87, upper = 1.35, n = 140, design = "parallel")
sampleN.TOST(CV = CV, theta0 = sqrt(0.87 * 1.35), design = "parallel")

+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design: 2 parallel groups
log-transformed data (multiplicative model)

alpha = 0.05, target power = 0.8
BE margins = 0.8 ... 1.25
True ratio = 1.083744, 
CV = 0.9227379

Sample size (total)
 n     power
750   0.800246
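A rough plausibility check of this result (a normal-approximation sketch, not PowerTOST’s exact noncentral-t computation; all numbers are taken from the output above):

```python
from math import log
from statistics import NormalDist

alpha, beta = 0.05, 0.20           # 90% CI, target power 0.8
CV, theta0  = 0.9227379, 1.083744  # from the output above
sigma2 = log(CV**2 + 1)            # log-scale variance implied by the CV
z = NormalDist().inv_cdf(1 - alpha) + NormalDist().inv_cdf(1 - beta)
# total N for 2 parallel groups (design constant 4);
# uses the nearer margin (1.25, since theta0 > 1)
N = 4 * z**2 * sigma2 / (log(1.25) - log(theta0))**2
print(round(N))  # ≈ 748, close to the exact n = 750
```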


❝ ❝ is it possible to calculate a new sample size that would likely meet the BE requirements …


See above.

❝ ❝ … (or declare futility)?


If this is not a blockbuster and/or you don’t have a large budget: yes.
Furthermore, there is no guarantee that you will observe exactly the same T/R-ratio and CV in another study. The T/R-ratio especially is nasty. PowerTOST implements a Bayesian method which takes the uncertainties of the estimated T/R-ratio and CV of the previous study into account.

library(PowerTOST)
m      <- 140
CV     <- 0.9227379
theta0 <- 1.083744
design <- "parallel"
res    <- data.frame(method = c("naïve",
                                "uncertain CV",
                                "uncertain T/R-ratio",
                                "both uncertain"),
                     n = NA_integer_, power = NA_real_)
res[1, 2:3] <- sampleN.TOST(CV = CV, theta0 = theta0, targetpower = 0.8,
                            design = design, print = FALSE)[7:8]
res[2, 2:3] <- expsampleN.TOST(CV = CV, theta0 = theta0,
                               targetpower = 0.80,
                               design = design,
                               prior.parm = list(m = m, design = design),
                               prior.type = "CV",
                               details = FALSE, print = FALSE)[9:10]
res[3, 2:3] <- expsampleN.TOST(CV = CV, theta0 = theta0,
                               targetpower = 0.80,
                               design = design,
                               prior.parm = list(m = m, design = design),
                               prior.type = "theta0",
                               details = FALSE, print = FALSE)[9:10]
res[4, 2:3] <- expsampleN.TOST(CV = CV, theta0 = theta0,
                               targetpower = 0.80,
                               design = design,
                               prior.parm = list(m = m, design = design),
                               prior.type = "both",
                               details = FALSE, print = FALSE)[9:10]
print(res, row.names = FALSE)

              method     n     power
               naïve   750 0.8002443
        uncertain CV   760 0.8008385
 uncertain T/R-ratio 13764 0.8000035
      both uncertain 14858 0.8000001

Terrible.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
ElMaestro
Denmark,
2022-05-13 03:38
@ Helmut
Posting: # 22983

 Back Calculating Sample Size

Hi Hötzi,

How embarrassing. I forgot it was parallel. My bad. :sleeping:

Sereng
USA,
2022-05-18 07:32
@ Helmut
Posting: # 22997

 Back Calculating Sample Size

Hi Helmut, pardon my ignorance, but I believe you are stating n = 750 per group in this parallel-group study (total n = 1,500), using the assumptions from the completed study. Correct? Thanks!

Biostatistically Challenged CEO
Helmut
Vienna, Austria,
2022-05-18 10:55
@ Sereng
Posting: # 23000

 PowerTOST: Total sample size

Hi Sereng,

❝ […] I believe you are stating n=750 per group in this parallel group study (total n=1500) using the assumptions from the completed study? Correct?


Nope. Pasted from my previous post:

Sample size (total)
 n     power
750   0.800246

The sample-size functions of PowerTOST always give the total sample size. If you are interested in the background, see this article.

dshah
India,
2022-05-13 13:54
@ ElMaestro
Posting: # 22985

 Back Calculating Sample Size

Hi All!

I am also getting around CV = 156% (pooled variance estimate) when N = 140 (n = 70/group) is considered.
Only when N = 70 is considered do I get CV ≈ 92%.

Dear Sereng! From the details Cmax (Ref): 478 +/- 434 (mean +/- SD) and Cmax (Test): 489 +/- 909 (mean +/- SD), it can be seen that the reference Cmax is less variable than the test, which is the opposite of what is stated in the original post. Or is it a typo?

Regards,
Divyen
Helmut
Vienna, Austria,
2022-05-16 17:01
@ dshah
Posting: # 22990

 Algebra

Hi Divyen!

❝ Even I am getting around CV = 156% (pooled variance estimate) considering N=140 (N=70/group).

❝ Only when N=70 is considered, I am getting CV~92%.


How did you calculate it?

The lower and upper limits \(\small{\{L,U\}}\) of a \(\small{1-2\,\alpha}\) confidence interval for given sample sizes \(\small{n_1,\,n_2}\)1 are calculated by $$\{L,U\}=\exp\left(\log_e(PE)\mp t_{1-\alpha,n_1+n_2-2}\cdot \sqrt{MSE\cdot(1/n_1+1/n_2)} \right)\textsf{.}\tag{1}$$ Hence, for given \(\small{1-2\,\alpha}\), \(\small{\{L,U\}}\), and \(\small{n_1,\,n_2}\) we can derive the \(\small{MSE}\) and subsequently calculate the \(\small{CV}\):
  1. Calculate the point estimate \(\small{PE}\) and \(\small{\Delta_\text{CL}}\):$$PE=\sqrt{L\cdot U}\tag{2}$$$$\Delta_\text{CL}=\log_{e}PE-\log_{e}L\;\small{\textsf{or}}\tag{3a}$$$$\Delta_\text{CL}=\log_{e}U-\log_{e}PE\textsf{,}\phantom{\small{\textsf{or}}}\tag{3b}$$ where \(\small{\Delta_\text{CL}}\) is the difference between one of the confidence limits and the point estimate in \(\small{\log_{e}\textsf{-}}\)scale (aka the ‘log half-width’). In practice select \(\small{(3\text{a})}\) or \(\small{(3\text{b})}\) which is based on the most significant digits.

  2. \(\small{\log_{e}}\)-transform \(\small{(1)}\):$$\log_e\{L,U\}=\log_e(PE)\mp t_{1-\alpha,n_1+n_2-2}\cdot \sqrt{MSE\cdot(1/n_1+1/n_2)}\tag{4}$$
  3. Rearrange \(\small{(3)}\), substitute in \(\small{(4)}\), and cancel \(\small{\log_e(PE)}\):$$\require{cancel}\eqalign{\Delta_\text{CL}&=\cancel{\log_e(PE)-\log_e(PE)+}t_{1-\alpha,n_1+n_2-2}\cdot \sqrt{MSE\cdot(1/n_1+1/n_2)}\\
    &=t_{1-\alpha,n_1+n_2-2}\cdot \sqrt{MSE\cdot(1/n_1+1/n_2)}}\tag{5}$$
  4. Solve \(\small{(5)}\) for \(\small{MSE}\): $$\eqalign{\sqrt{MSE\cdot(1/n_1+1/n_2)}&=\Delta_\text{CL}/t_{1-\alpha,n_1+n_2-2}\\
    MSE\cdot(1/n_1+1/n_2)&=\left(\Delta_\text{CL}/t_{1-\alpha,n_1+n_2-2}\right)^2\\
    MSE&=\frac{\left(\Delta_\text{CL}/t_{1-\alpha,n_1+n_2-2}\right)^2}{1/n_1+1/n_2}\\
    \small{\color{Blue}{\textsf{Note that if }}}\color{Blue}{n_1=n_2:}MSE&=\frac{\color{Blue}{N}\cdot\left(\Delta_\text{CL}/t_{1-\alpha,\color{Blue}{N}-2}\right)^2}{\color{Blue}{4}}}\tag{6}$$ If only the total sample size \(\small{N}\) is known and \(\small{n_1 \neq n_2}\), the calculated \(\small{MSE}\) will be larger than the true one. Hence, if used in a sample size estimation it will be conservative.

  5. Finally as usual:$$CV=\sqrt{\exp(MSE)-1}\tag{7}$$
This algorithm is implemented in the function CI2CV() of PowerTOST.

With Sereng’s numbers:$$\{L,U\}=\{0.87,1.35\},\,n_1=n_2=70,\,N=n_1+n_2=140\\\nu=N-2=138,\,\alpha=0.05,\,t_{1-\alpha,\nu}=1.65597$$ $$PE\approx\sqrt{0.87\times1.35}\approx1.08374\tag{←2}$$ $$\Delta_\text{CL}\approx\log_e1.35-\log_e1.08374\approx\log_e1.08374-\log_e0.87\approx0.21968\tag{←3}$$ $$MSE\approx\frac{\left(0.21968/1.65597\right)^2}{1/70+1/70}\approx\frac{140\times\left(0.21968/1.65597\right)^2}{4}\approx0.61595\tag{←6}$$ $$CV\approx\sqrt{\exp(0.61595)-1}\approx\color{DarkGreen}{0.92272}\tag{←7}$$ Check whether the recalculated \(\small{MSE}\) is correct: $$\{L,U\}\approx100\exp\left(\log_e 1.08374\mp 1.65597\times \sqrt{0.61595\times(1/70+1/70)}\right)\sim\color{DarkGreen}{\{87,135\}}\;\tiny{\blacksquare}\tag{←1}$$ If results don’t match:
  • Check the data you used;
  • if the correct data was used, the confidence interval was calculated by the Welch-test and not by the conventional t-test (see this post). I haven’t tried yet to derive the \(\small{MSE}\) from it.
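As a cross-check outside of R, the five steps condense to a few lines (a Python sketch; the t-value 1.65597 for ν = 138 is taken from above, so no t-quantile function is needed):

```python
from math import exp, log, sqrt

L, U, n1, n2 = 0.87, 1.35, 70, 70
t = 1.65597                              # t_{0.95, 138}, from above
pe    = sqrt(L * U)                      # (2)
delta = log(U) - log(pe)                 # (3b), the log half-width
mse   = (delta / t)**2 / (1/n1 + 1/n2)   # (6)
cv    = sqrt(exp(mse) - 1)               # (7)
# round trip (1): the back-calculated CI must reproduce {87, 135}
half = t * sqrt(mse * (1/n1 + 1/n2))     # equals delta by construction
ci   = [round(100 * exp(log(pe) + s * half), 2) for s in (-1, 1)]
print(round(cv, 4), ci)  # 0.9227 [87.0, 135.0]
```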
I get your \(\small{CV}\) for a balanced 2×2×2 crossover design with \(\small{N=140}\). In such a case the denominator in the last line of \(\small{(6)}\) would be \(\small{2}\) instead of \(\small{4}\).2,3
Consequently, the \(\small{MSE}\) doubles and the \(\small{CV}\) is much larger than the correct \(\small{\approx92\%}\): $$\eqalign{MSE&\approx\frac{140\times\left(0.21968/1.65597\right)^2}{\color{Red}{2}}\approx1.23193\\
CV&\approx100\sqrt{\exp(1.23193)-1}\approx\color{Red}{156\%}
}$$ Try:

library(PowerTOST)
print(known.designs()[c(1, 3), c(2:3, 6, 9)], row.names = FALSE)
   design  df bk              name
 parallel n-2  4 2 parallel groups
    2x2x2 n-2  2   2x2x2 crossover

The first column gives the design argument to be used in the functions CI2CV() and CI.BE(), the second the degrees of freedom (where n is the total sample size), and bk the ‘design constant’, i.e., the denominator used in \(\small{(6)}\).
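The effect of the design constant can be sketched directly; this reproduces both CVs from the thread (a Python sketch; the t-value for ν = 138 is again taken from above):

```python
from math import exp, log, sqrt

L, U, N, t = 0.87, 1.35, 140, 1.65597
delta = log(U) - log(sqrt(L * U))        # log half-width, (3b)
for design, bk in (("parallel", 4), ("2x2x2", 2)):
    mse = N * (delta / t)**2 / bk        # last line of (6)
    cv  = sqrt(exp(mse) - 1)             # (7)
    print(f"{design:8s} CV = {cv:.4f}")  # ≈ 0.9227 and ≈ 1.5582
```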
You will see that with your \(\small{CV}\) the backcalculated confidence interval does not match the original \(\small{\left\{87,135\right\}}\):

CV  <- 1.558152
MSE <- log(CV^2 + 1)                                                # trivial
CI  <- round(100 * exp(log(1.083744) + c(-1, +1) *
                       1.65597 * sqrt(MSE * (1 / 70 + 1 / 70))), 2) # (1)
cat("CI =", paste0("{", paste(CI, collapse = ", ") , "}\n"))
isTRUE(all.equal(c(87, 135), CI))                                   # check
CI = {79.43, 147.86}
[1] FALSE


In PowerTOST without rounding:

library(PowerTOST)
CV <- CI2CV(lower = 0.87, upper = 1.35, n = c(70, 70), design = "parallel")
CI <- 100 * CI.BE(CV = CV, pe = sqrt(0.87 * 1.35), n = c(70, 70), design = "parallel")
cat("CV =", CV, "\nCI =", paste0("{", paste(CI, collapse = ", ") , "}\n"))
isTRUE(all.equal(100 * c(0.87, 1.35), as.numeric(CI)))             # check
CV = 0.9227379
CI = {87, 135}

[1] TRUE


If you don’t trust the functions of PowerTOST, use the formulas from above in base R:

L        <-  87 / 100
U        <- 135 / 100
n1       <- n2 <- 70
N        <- n1 + n2
PE       <- sqrt(L * U)                                  # (2)
signif.L <- nchar(as.character(signif(L * 100, 12)))
signif.U <- nchar(as.character(signif(U * 100, 12)))
if (signif.L >= signif.U) {
  Delta.CL <- log(PE) - log(L)                           # (3a)
} else {
  Delta.CL <- log(U) - log(PE)                           # (3b)
}
alpha    <- 0.05
nu       <- N - 2
t.value  <- qt(p = 1 - alpha, df = nu)
MSE      <- (Delta.CL / t.value)^2 / (1 / n1 + 1 / n2)   # (6)
CV       <- sqrt(exp(MSE) - 1)                           # (7)
CI       <- exp(log(PE) + c(-1, +1) *
                t.value * sqrt(MSE * (1 / n1 + 1 / n2))) # (1)
txt      <- paste(
            paste0("\n", paste(rep("—", 30), collapse = "")),
            "\nGiven",
            paste0("\n", paste(rep("—", 30), collapse = "")),
            "\nn1     =",   sprintf("%3.0f", n1),
            "\nn2     =", sprintf("%3.0f", n2),
            "\nalpha  =", sprintf("%6.5g", alpha),
            "\nL      =",  sprintf("%6.2f%%", 100 * L),
            "\nU      =", sprintf("%6.2f%%", 100 * U),
            paste0("\n", paste(rep("—", 30), collapse = "")),
            "\nCalculated             Formula",
            paste0("\n", paste(rep("—", 30), collapse = "")),
            "\nN      =", sprintf("%3.0f", N),
            "\nPE     =", sprintf("%6.2f%%", 100 * PE),
            "          (2)",
            "\nDelta  =", sprintf("%9.5f", Delta.CL),
            "        (3)",
            "\nnu     =", sprintf("%3.0f", nu),
            "\nt      =", sprintf("%9.5f", t.value),
            "\nMSE    =", sprintf("%9.5f", MSE),
            "        (6)",
            "\nCV     =", sprintf("%6.2f%%", 100 * CV),
            "          (7)",
            paste0("\n", 100 * (1 - 2 * alpha), "% CI ="),
            sprintf("{%.2f%%,", 100 * CI[1]),
            sprintf("%.2f%%} (1)", 100 * CI[2]),
            paste0("\n", paste(rep("—", 30), collapse = "")))
if (isTRUE(all.equal(100 * c(L, U), round(100 * CI, 2)))) {
  txt <- paste0(txt, "\nThe calculated CI agrees with\nthe given one.\n")
} else {
  txt <- paste0(txt, "\nThe calculated CI does not\nagree with given one! ",
               "Please\ncheck.\n")
}
cat(txt)
——————————————————————————————
Given
——————————————————————————————
n1     =  70
n2     =  70
alpha  =   0.05
L      =  87.00%
U      = 135.00%
——————————————————————————————
Calculated             Formula
——————————————————————————————
N      = 140
PE     = 108.37%           (2)
Delta  =   0.21968         (3)
nu     = 138
t      =   1.65597
MSE    =   0.61597         (6)
CV     =  92.27%           (7)
90% CI = {87.00%, 135.00%} (1)
——————————————————————————————
The calculated CI agrees with
the given one.



  1. In a parallel design the indices refer to the groups and in a crossover to the sequences.
  2. Schütz H. Sample Size Estimation for BE Studies. Bucharest, 19 March 2013. Slides 26–29.
  3. Yuan J, Tong T, Tang M-L. Sample Size Calculation for Bioequivalence Studies Assessing Drug Effect and Food Effect at the Same Time With a 3-Treatment Williams Design. Ther Innov Regul Sci. 2013; 47(2): 242–7. doi:10.1177/2168479012474273.

d_labes
Berlin, Germany,
2022-05-16 20:58
@ Helmut
Posting: # 22993

 Algebra

:clap:
Algebra rules.

Regards,

Detlew
Helmut
Vienna, Austria,
2022-05-17 16:17
@ Sereng
Posting: # 22994

 Parallel designs: Don’t use the (conventional) t-test!

Hi Sereng,

❝ […] the reference drug Cmax had almost twice the CV of the Test drug.

❝ Parallel Group Design

❝ Two Groups (n=70/group)

❝ Ratio (90% CI): 109.00 (87.00-135.00)


In this post I could reproduce your results (based on the t-test assuming equal variances). However, according to the FDA’s guidance (Section IV.B.1.d.):

For parallel designs, the confidence interval for the difference of means in the log scale can be computed using the total between-subject variance.1 […] equal variances should not be assumed.
    (my emphasis)


Though you had equally sized groups, variances were not equal.
This calls for the Welch-test with Satterthwaite’s approximation2 of the degrees of freedom:3,4 $$\eqalign{\nu&\approx\frac{\left(\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}\right)^2}{\frac{s_1^4}{n_1^2\,(n_1-1)} + \frac{s_2^4}{n_2^2\,(n_2-1)}}\\
&\approx\frac{\left(\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}\right)^2}{\frac{1}{n_1-1}\left(\frac{s_1^2}{n_1}\right)^2 + \frac{1}{n_2-1}\left(\frac{s_2^2}{n_2}\right)^2}}
$$ For good reasons it is the default in R, SAS, and other software packages.
  • Using a pre-test (F-test, Levene’s test, Bartlett’s test, Brown–Forsythe test) – as recommended in the past – is bad practice because it will inflate the Type I Error.5
  • If \({s_{1}}^{2}={s_{2}}^{2}\;\wedge\;n_1=n_2\), the formula given above reduces to the simple \(\nu=n_1+n_2-2\) anyhow.
  • In all other cases the Welch-test is conservative, which is a desirable property.
In SPSS both the conventional t-test and the Welch-test are performed. Always use the second row of the table of results.
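The formula above is easy to sketch. A small check (Python; welch_df is a hypothetical helper; the SDs are the original-scale values from the first post, used here only to illustrate how unequal variances shrink the degrees of freedom; a BE analysis would use log-scale variances):

```python
def welch_df(s1, n1, s2, n2):
    """Satterthwaite's approximate degrees of freedom (first form above)."""
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# equal variances and equal group sizes: reduces to n1 + n2 - 2
print(round(welch_df(1, 70, 1, 70), 6))      # 138.0
# the posted original-scale SDs: df shrinks well below 138
print(round(welch_df(434, 70, 909, 70), 1))  # ≈ 98.9
```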

@Divyen: If the confidence interval based on my derivation does not match the reported one, it is evident that the Welch-test was used. In such a case calculating the \(\small{MSE}\) is not that trivial. Maybe I will try it later.


  1. Misleading terminology. There is no ‘total between-subject variance’. In a parallel design only the total variance – which is pooled from the between- and within-subject variances – is accessible.
  2. Satterthwaite FE. An Approximate Distribution of Estimates of Variance Components. Biom Bull. 1946; 2(6): 110–4. doi:10.2307/3002019.
  3. Both formulas are given in the literature. They are equivalent.
  4. Allwood M. The Satterthwaite Formula for Degrees of Freedom in the Two-Sample t-Test. College Board. 2008. Open access.
  5. Zimmermann DW. A note on preliminary tests of equality of variances. Br J Math Stat Psychol. 2004; 57(1): 173–81. doi:10.1348/000711004849222.

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz