Bebac user

Egypt,
2022-03-23 10:38

Posting: # 22860

 Sample size and Replicated studies [Power / Sample Size]

Partially replicated study: I use this code for sample size estimation:

sampleN.scABEL(CV=0.3532)

+++++++++++ scaled (widened) ABEL +++++++++++
            Sample size estimation
   (simulation based on ANOVA evaluation)
---------------------------------------------
Study design: 2x3x3 (partial replicate)
log-transformed data (multiplicative model)
1e+05 studies for each step simulated.

alpha  = 0.05, target power = 0.8
CVw(T) = 0.3532; CVw(R) = 0.3532
True ratio = 0.9
ABE limits / PE constraint = 0.8 ... 1.25
EMA regulatory settings
- CVswitch            = 0.3
- cap on scABEL if CVw(R) > 0.5
- regulatory constant = 0.76
- pe constraint applied


Sample size search
 n     power
45   0.7829
48   0.8045



But I know that I can't use more than 40 volunteers in any BE study.

In these cases, what should I do?
dshah

India/United Kingdom,
2022-03-23 12:30

@ Bebac user
Posting: # 22861

 Sample size and Replicated studies

Dear Bebac user!

As your ISCV is high, you are opting for a partial/full replicate design.
But why is the 90% BE range still 80.00–125.00% when you know that you can apply the HVD concept to widen the limits?
Regards,
Divyen Shah
Helmut
Vienna, Austria,
2022-03-23 13:43

@ dshah
Posting: # 22863

 Output of sampleN.scABEL()

Hi Divyen,

❝ But why the 90% BE range is still 80.00-125.00% where you know that you can apply the HVD concept to widen the boundary?


Seemingly the output of the function sampleN.scABEL() is not clear enough. It gives:

ABE limits / PE constraint = 0.8 ... 1.25
EMA regulatory settings
- CVswitch            = 0.3
- cap on scABEL if CVw(R) > 0.5
- regulatory constant = 0.76
- pe constraint applied

What does it mean? The sample size is based on simulations (by default 100,000 studies; see this article). In a certain fraction of the simulated studies \(\small{CV_\textrm{wR}\le30\%}\) and hence, ABE has to be applied, where the conventional limits are 80.00–125.00%. Only if \(\small{CV_\textrm{wR}>30\%}\) can ABEL be applied, but in addition to assessing whether the 90% CI lies within the now expanded limits, the point estimate has to lie within 80.00–125.00%.
See this article for the decision scheme.
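The decision scheme can be sketched in a few lines of base R (a minimal sketch of the EMA settings only; in a real study the observed \(\small{CV_\textrm{wR}}\) decides, which is exactly why the sample size has to be simulated):

```r
# EMA decision on the limits for a given *observed* CVwR (base-R sketch)
ema.limits <- function(CVwR) {
  if (CVwR <= 0.30) return(c(lower = 0.80, upper = 1.25)) # conventional ABE
  CVcap <- min(CVwR, 0.50)         # upper cap of scaling at CVwR = 50%
  swR   <- sqrt(log(CVcap^2 + 1))  # CV to standard deviation (log-scale)
  upper <- exp(0.76 * swR)         # regulatory constant 0.76
  c(lower = 1 / upper, upper = upper)
}
round(ema.limits(0.3532), 4) # 0.7706 1.2977 (expanded limits)
round(ema.limits(0.2500), 4) # 0.8000 1.2500 (conventional limits)
round(ema.limits(0.6000), 4) # 0.6984 1.4319 (capped at CVwR = 50%)
```

`scABEL()` of PowerTOST does the same (and more, e.g., the Health Canada settings); the sketch only spells out the arithmetic behind the settings printed above.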

In coding the function sampleN.scABEL.ad() I tried to be more specific (see also my post below) and it gives for \(\small{CV_\textrm{wT}=CV_\textrm{wR}=0.3532}\):

Regulatory settings: EMA (ABEL)
Switching CVwR     : 0.3
Regulatory constant: 0.76

Expanded limits    : 0.7706 ... 1.2977
Upper scaling cap  : CVwR > 0.5
PE constraints     : 0.8000 ... 1.2500

If you use the function with a substantially lower \(\small{CV_\textrm{wR}}\) you would get:

Regulatory settings: EMA (ABE)
Switching CVwR     : 0.3
BE limits          : 0.8000 ... 1.2500
Upper scaling cap  : CVwR > 0.5
PE constraints     : 0.8000 ... 1.2500


Note that the sample size tables of ‘The Two Lászlós’* don’t reach below \(\small{CV_\textrm{wR}=30\%}\). They state:

»In view of the consequences of the mixed approach, it could be judicious to consider larger numbers of subjects at variations fairly close to 30%.«


You could assess at which \(\small{CV_\textrm{wR}}\) (on average) we will switch in the simulations.

library(PowerTOST)
fun <- function(x) {
  n.1 <- sampleN.TOST(CV = x, theta0 = theta0,
                      targetpower = target,
                      design = design, details = FALSE,
                      print = FALSE)[["Sample size"]]
  n.2 <- sampleN.scABEL(CV = x, theta0 = theta0,
                        targetpower = target,
                        design = design, details = FALSE,
                        print = FALSE)[["Sample size"]]
  return(n.1 - n.2)
}
theta0 <- 0.90
target <- 0.80
design <- "2x3x3"
cat(design, "design:",
    sprintf("Equal sample sizes for ABE and ABEL at CV = %.2f%%.",
            100 * uniroot(fun, interval = c(0.3, 0.4), extendInt = "yes")$root), "\n")

2x3x3 design: Equal sample sizes for ABE and ABEL at CV = 27.90%.



  • Tóthfalusi L, Endrényi L. Sample Sizes for Designing Bioequivalence Studies for Highly Variable Drugs. J Pharm Pharmaceut Sci. 2011; 15(1): 73–84. doi:10.18433/j3z88f. Open access.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes

Berlin, Germany,
2022-03-23 17:33

@ Helmut
Posting: # 22865

 Output of sampleN.scABEL() - expanded limits ?

Dear Helmut,

❝ In coding the function sampleN.scABEL.ad() I tried to be more specific (see also my post below) and it gives for \(\small{CV_\textrm{wT}=CV_\textrm{wR}=0.3532}\):

Regulatory settings: EMA (ABEL)
Switching CVwR     : 0.3
Regulatory constant: 0.76
Expanded limits    : 0.7706 ... 1.2977
Upper scaling cap  : CVwR > 0.5
PE constraints     : 0.8000 ... 1.2500
...


The expanded limits are only valid if the CVs are exactly the assumed ones.
They don’t play any role in the simulations because the applied expansion depends on the CVwR of the actual study, which differs from study to study within the simulation.
That was the reason why the ugly programmer of the function sampleN.scABEL() abstained from giving these numbers :cool:.

Regards,

Detlew
Helmut
Vienna, Austria,
2022-03-23 17:45

@ d_labes
Posting: # 22866

 Output of sampleN.scABEL() - expanded limits ?

Dear Detlew,

❝ ❝ In coding the function sampleN.scABEL.ad() I tried to be more specific (see also my post below) and it gives for \(\small{CV_\textrm{wT}=CV_\textrm{wR}=0.3532}\):

❝ ❝ Regulatory settings: EMA (ABEL)
❝ ❝ Expanded limits    : 0.7706 ... 1.2977
❝ ❝ Upper scaling cap  : CVwR > 0.5
...


❝ The expanded limits are only valid if the CVs are exactly the assumed ones.

❝ They don’t play any role in the simulations because the applied expansion depends on the CVwR of the actual study, which differs from study to study within the simulation.


Of course, you are right.

❝ That was the reason why the ugly programmer of the function sampleN.scABEL() abstained from giving these numbers :cool:.


Don’t know which one of us is uglier. Beauty lies in the eye of the beholder. When it comes to coding, I’m notorious for producing ‘Spaghetti viennese’. I decided to give the expanded limits because in the simulations of the empirical Type I Error theta0 is set to the upper one.

dshah

India/United Kingdom,
2022-03-23 19:08

@ Helmut
Posting: # 22867

 Output of sampleN.scABEL()

Thank you Helmut!
But my point for Bebac user was: he/she should widen the limits based on swR and then calculate the sample size. I am still a fan of Dave’s FARTSSIE and use an older version.
Although I agree that a full replicate design would be more useful for Bebac user!
On the other hand, if the assumed PE is 0.9126–1.096, a sample size of 40 could still be good, assuming no dropouts (using the Boss button ;-)).
Regards,
Divyen
Helmut
Vienna, Austria,
2022-03-23 20:42

@ dshah
Posting: # 22868

 Don’t use FARTSSIE for SABE

Hi Divyen,

❝ But my point for Bebac user was: he/she should widen the limits based on swR and then calculate the sample size. I am still a fan of Dave’s FARTSSIE and use an older version.


See this post and the following ones. Even if you are a fan of Dave (as I am), that’s not even wrong.
  • The partial replicate design is not supported – though I don’t miss it.
  • The sample size is estimated for the 2×2×2 design and then adjusted according to the selected design (⅔ for the 3-period full replicate and ½ for the 4-period full replicate).
  • But:
    • In ABEL the limits are not fixed as in ABE but depend on the (observed, realized) \(\small{\widehat{s}_\textrm{wR}}\) instead of an assumed one. You can only assume a \(\small{\widehat{\sigma}_\textrm{wR}}\) and simulate a reasonably large number of studies with different ones.
      Hint: \(\small{\widehat{s}_\textrm{wR}^2}\) follows a \(\small{\chi^2\textsf{-}}\)distribution with \(\small{n-2}\) degrees of freedom and \(\small{\theta_0}\) follows a lognormal distribution.
    • Additionally, you have to take the upper cap of scaling (50% for the EMA and ≈57.4% for Health Canada) and the PE constraint (within 80.00–125.00%) into account.
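The first point can be illustrated without PowerTOST. A minimal base-R sketch (the assumed \(\small{CV_\textrm{wR}}\) is the one of the opening post; n = 40 and the \(\small{n-2}\) degrees of freedom of the hint above are simplifying assumptions):

```r
set.seed(123456)
CVwR  <- 0.3532                   # assumed (true) CVwR
n     <- 40                       # illustration only
swR2  <- log(CVwR^2 + 1)          # true within-subject variance (log-scale)
# observed variances follow swR2 * chi-square(df) / df
s2    <- swR2 * rchisq(1e5, df = n - 2) / (n - 2)
CVsim <- sqrt(exp(s2) - 1)        # observed CVwR of each simulated study
mean(CVsim <= 0.30)               # fraction falling back to conventional ABE
```

Even with a true CVwR of ≈35%, roughly one in ten simulated studies shows an observed CVwR ≤ 30% and hence has to be evaluated by conventional ABE. Fixed, pre-computed widened limits cannot capture that.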
Let’s try to reproduce what FARTSSIE does:

library(PowerTOST)
balance <- function(n, seq) {
  return(as.integer(seq * (n %/% seq + as.logical(n %% seq))))
}
sampleN.FARTSSIE <- function(CV, theta0, targetpower, design) {
  admin   <- as.integer(substr(design, 1, 1)) *
             as.integer(substr(design, 3, 3)) *
             as.integer(substr(design, 5, 5))
  LU      <- setNames(scABEL(CV), c("theta1", "theta2"))
  n.cross <- sampleN.TOST(CV = CV, theta0 = theta0,
                          theta1 = LU[["theta1"]],
                          theta2 = LU[["theta2"]],
                          targetpower = targetpower,
                          design = "2x2x2",
                          method = "noncentral",
                          print = FALSE)[["Sample size"]]
  # Note: FARTSSIE leaves rounding up to
  #       balanced sequences to the user

  return(balance(n.cross * 8 / admin,
                 as.integer(substr(design, 3, 3))))
}
theta0      <- 0.90
targetpower <- 0.80
CV          <- seq(0.3, 0.5, 0.05)
# The partial replicate design (TRR|RTR|RRT) is not supported in FARTSSIE
# and Balaam’s design (TR|RT|TT|RR) is not supported in sampleN.scABEL().

res         <- data.frame(CV = CV, design = rep(c("2x2x3", "2x2x4"),
                                                each = length(CV)),
                          n.FARTSSIE = NA_integer_, pwr.FARTSSIE = NA_real_,
                          n.scABEL = NA_integer_, pwr.scABEL = NA_real_,
                          pwr.SABE = NA_real_, pwr.PE = NA_real_,
                          pwr.ABE = NA_real_)
for (j in 1:nrow(res)) {
  res$n.FARTSSIE[j]   <- sampleN.FARTSSIE(CV = res$CV[j], theta0 = theta0,
                                          targetpower = targetpower,
                                          design = res$design[j])
  res$pwr.FARTSSIE[j] <- power.scABEL(CV = res$CV[j], theta0 = theta0,
                                      n = res$n.FARTSSIE[j], design = res$design[j])
  tmp                 <- sampleN.scABEL(CV = res$CV[j], theta0 = theta0,
                                        targetpower = targetpower,
                                        design = res$design[j], details = FALSE,
                                        print = FALSE)
  res$n.scABEL[j]     <- tmp[["Sample size"]]
  res$pwr.scABEL[j]   <- tmp[["Achieved power"]]
  # retrieve components: pure SABE, PE within constraints, ABE
  res[j, 7:9]         <- suppressMessages(
                           power.scABEL(CV = res$CV[j], theta0 = theta0,
                                        design = res$design[j],
                                        n = res$n.scABEL[j],
                                        details = TRUE)[2:4])
}
names(res)[c(4, 6)] <- "power"
res[, 4:9] <- signif(res[, 4:9], 4)
print(res, row.names = FALSE)

   CV design n.FARTSSIE  power n.scABEL  power pwr.SABE pwr.PE pwr.ABE
 0.30  2x2x3         54 0.8242       50 0.8016   0.8016 0.9900  0.7450
 0.35  2x2x3         44 0.7634       50 0.8037   0.8039 0.9776  0.6351
 0.40  2x2x3         36 0.7246       46 0.8073   0.8086 0.9548  0.5164
 0.45  2x2x3         34 0.7198       42 0.8002   0.8006 0.9266  0.4127
 0.50  2x2x3         32 0.6880       42 0.8035   0.8035 0.9065  0.3484
 0.30  2x2x4         40 0.8525       34 0.8028   0.8028 0.9906  0.7517
 0.35  2x2x4         32 0.7929       34 0.8118   0.8119 0.9786  0.6424
 0.40  2x2x4         28 0.7829       30 0.8066   0.8074 0.9528  0.5069
 0.45  2x2x4         26 0.7846       28 0.8112   0.8116 0.9266  0.4107
 0.50  2x2x4         24 0.7536       28 0.8143   0.8143 0.9065  0.3476

Only for CV 30% is the sample size higher than required (money lost). For higher CVs, studies will be underpowered. What we also see: the PE constraint becomes increasingly important.

In your screenshot of FARTSSIE (which version?) I see limits of 75.00–133.33% for CV 35%. That’s not correct for ABEL, where the limits are 77.23–129.48% because $$\small{\eqalign{
s_\textrm{wR}&=\sqrt{\log_{e}(CV_\textrm{wR}^2+1)}\\
\left\{L,U\right\}&=100\exp(\mp 0.760\,\cdot s_\textrm{wR})
}}$$ For the conditions of the Gulf Cooperation Council1 (namely widened limits if \(\small{CV_\textrm{wR}>30\%}\)) you would need …

sampleN.scABEL(CV = 0.35, design = "2x2x3", regulator = "GCC", details = FALSE)

+++++++++++ scaled (widened) ABEL +++++++++++
            Sample size estimation
   (simulation based on ANOVA evaluation)
---------------------------------------------
Study design: 2x2x3 (3 period full replicate)
log-transformed data (multiplicative model)
1e+05 studies for each step simulated.

alpha  = 0.05, target power = 0.8
CVw(T) = 0.35; CVw(R) = 0.35
True ratio = 0.9
ABE limits / PE constraint = 0.8 ... 1.25
Widened limits = 0.75 ... 1.333333
Regulatory settings: GCC

Sample size
 n     power
42   0.8139

… and not only 34 – which would be correct only if fixed widened limits were applied independently of the CV. Such an approach is currently recommended for HVD(P)s only by the League of Arab States2 and in South Africa.3

sampleN.TOST(CV = 0.35, theta0 = 0.9, theta1 = 0.75, design = "2x2x3")

+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design: 2x2x3 (3 period full replicate)
log-transformed data (multiplicative model)

alpha = 0.05, target power = 0.8
BE margins = 0.75 ... 1.333333
True ratio = 0.9,  CV = 0.35

Sample size (total)
 n     power
34   0.811011
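As a cross-check of the formula above (a base-R one-liner; no PowerTOST needed): for CV 35% the EMA expansion gives 77.23–129.48%, not the fixed widened limits of 75.00–133.33%.

```r
swR <- sqrt(log(0.35^2 + 1))                 # CV 35% -> SD on the log-scale
round(100 * exp(c(-1, +1) * 0.76 * swR), 2)  # 77.23 129.48
```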


❝ On the other hand, if the assumed PE is 0.9126–1.096, a sample size of 40 could still be good, assuming no dropouts (using the Boss button ;-)).


Nope (see above). Let’s be silent about the Boss Button, please.


  1. Executive Board of the Health Ministers’ Council for GCC States. The GCC Guidelines for Bioequivalence. May 2021. Online.
  2. League of Arab States, Higher Technical Committee for Arab Pharmaceutical Industry. Harmonised Arab Guideline on Bioequivalence of Generic Pharmaceutical Products. Cairo. March 2014. Online.
  3. Medicines Control Council. Registration of Medicines: Biostudies. Pretoria. June 2015. Online.

Helmut
Vienna, Austria,
2022-03-23 12:43

@ Bebac user
Posting: # 22862

 Sample size larger than clinical capacity

Hi Bebac user,

❝ Partially replicated study,…


If possible, avoid the partial replicate design. If you want only three periods (say, you are concerned about a potentially higher dropout-rate or larger sampled blood volume in a four-period full replicate design), use one of the two-sequence three-period full replicate designs (TRT|RTR or TRR|RTT). Why?
  • The sample size will be similar. For a comparison of study costs, see this article.
  • In addition to \(\small{CV_\textrm{wR}}\) you can estimate \(\small{CV_\textrm{wT}}\).
    • Although agencies are only concerned about ‘outliers’ of the reference, here you can assess the test as well. If the study fails and you have fewer ‘outliers’ after the test than after the reference, you get ammunition for an argument. An example is dasatinib, where the reference is a terrible formulation (\(\small{CV_\textrm{wR}\gg CV_\textrm{wT}}\), more ‘outliers’, substantial batch-to-batch variability).
    • If the study fails but \(\small{CV_\textrm{wT}<CV_\textrm{wR}}\) (quite often, since pharmaceutical technology improves), you can use this information in planning the next one. The sample size will be smaller than one based on the partial replicate, where you have to assume \(\small{CV_\textrm{wT}=CV_\textrm{wR}}\). See also this article.

❝ Sample size search
❝  n     power
❝ 45   0.7829
❝ 48   0.8045

❝ But I know that I can't use more than 40 volunteers in any BE study.

❝ In these cases, what should I do?


Two options.
  1. Perform the study in one of the two-sequence four-period full replicate designs (TRTR|RTRT or TRRT|RTTR or TTRR|RRTT).

    library(PowerTOST)
    sampleN.scABEL(CV = 0.3532, design = "2x2x4", details = FALSE)

    +++++++++++ scaled (widened) ABEL +++++++++++
                Sample size estimation
       (simulation based on ANOVA evaluation)
    ---------------------------------------------
    Study design: 2x2x4 (4 period full replicate)
    log-transformed data (multiplicative model)
    1e+05 studies for each step simulated.

    alpha  = 0.05, target power = 0.8
    CVw(T) = 0.3532; CVw(R) = 0.3532
    True ratio = 0.9
    ABE limits / PE constraint = 0.8 ... 1.25
    Regulatory settings: EMA

    Sample size
     n     power
    34   0.8137


    If you are concerned about the inflation of the Type I Error (see this article), adjust the \(\small{\alpha}\) – which will slightly negatively impact power – or in order to preserve power, increase the sample size after adjustment accordingly.

    sampleN.scABEL.ad(CV = 0.3532, design = "2x2x4", details = TRUE)

    +++++++++++ scaled (widened) ABEL ++++++++++++
                Sample size estimation
            for iteratively adjusted alpha
       (simulations based on ANOVA evaluation)
    ----------------------------------------------
    Study design: 2x2x4 (4 period full replicate)
    log-transformed data (multiplicative model)
    1,000,000 studies in each iteration simulated.

    Assumed CVwR 0.3532, CVwT 0.3532
    Nominal alpha      : 0.05
    True ratio         : 0.9000
    Target power       : 0.8
    Regulatory settings: EMA (ABEL)
    Switching CVwR     : 0.3
    Regulatory constant: 0.76
    Expanded limits    : 0.7706 ... 1.2977
    Upper scaling cap  : CVwR > 0.5
    PE constraints     : 0.8000 ... 1.2500


    n  34, nomin. alpha: 0.05000 (power 0.8137), TIE: 0.0652

    Sample size search and iteratively adjusting alpha
    n  34,   adj. alpha: 0.03647 (power 0.7758), rel. impact on power: -4.67%

    n  38,   adj. alpha: 0.03629 (power 0.8131), TIE: 0.05000
    Compared to nominal alpha's sample size increase of 11.8% (~study costs).


    Keep in mind that we plan the study for the worst-case condition. If reference-scaling for AUC is not acceptable (many jurisdictions apply ABEL only to Cmax), you may run into trouble (see this article). In your case the maximum \(\small{CV_\textrm{w}}\) for ABE in a partial replicate design is 24.2%. If it is larger, you would exhaust the clinical capacity and have to opt for #2 anyway.

  2. Split the study into groups. I recommend having one group at the maximum capacity of the clinical site. For the partial replicate design that means 40|8 and for the two-sequence three-period full replicate 40|10. Why not use equally sized groups?
    • If for any crazy reason an agency does not allow pooling the data, you still have some power to show BE in the large group. In your case you get 74% power with 40 subjects but only 55% with each of two equally sized groups of 24 subjects.
    • Furthermore, what would you do – if you are lucky – when one group passes but the other one fails (likely)? Present only the passing one to the agency? I bet that you will be asked for the other one as well. In your case the chance that both groups pass is \(\small{0.55^2}\), i.e., just ≈31%.
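The chance that both groups pass follows from multiplying the individual powers (assuming independent groups; a sketch with the rounded powers given above):

```r
p.large <- 0.74 # power of the single large group of 40 subjects
p.equal <- 0.55 # power of each of two equal groups of 24 subjects
p.equal^2       # both equally sized groups pass: ~0.30 (≈31% unrounded)
```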

    In general, European agencies are fine with pooling the data and the use of the conventional model $$\small{sequence,\,subject(sequence),\,period,\,treatment}\tag{1}$$ I recommend giving a justification in the protocol: subjects with similar demographics, all randomized at study outset, groups dosed within a limited time interval.
    If you are wary (the ‘belt plus suspenders’ approach), pool the data but modify the model to take the group effects into account, i.e., use $$\small{group,\,sequence,\,subject(group\times sequence),\,period(group),\,group\times sequence,\,treatment}\tag{2}$$ With \(\small{(2)}\) you have only \(\small{N_\textrm{Groups}-1}\) degrees of freedom fewer than in \(\small{(1)}\) and hence, the loss in power is practically negligible.
    What you must not do: perform a pre-test on the \(\small{group\times treatment}\) interaction and use \(\small{(2)}\) only if \(\small{p(G\times T)\ge 0.1}\). As in any test at level \(\small{\alpha=0.1}\), you will face a false-positive rate of 10%. Then what? Furthermore, any pre-test inflates the Type I Error.
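The 10% false-positive rate of such a pre-test can be demonstrated with a quick simulation (base-R sketch: under the null hypothesis p-values are uniformly distributed):

```r
set.seed(42)
p.GxT <- runif(1e5) # p-values of the G-by-T pre-test under the null
mean(p.GxT < 0.1)   # fraction of spurious 'significant' interactions, ~10%
```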
For background see this article. Good luck.

Brus

Spain,
2022-03-23 15:25

@ Bebac user
Posting: # 22864

 Sample size and Replicated studies

Dear Bebac user,

Why can’t you use more than 40 volunteers in any BE study?

Does this appear in any guideline?

Best regards,
The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz