Sample size larger than clinical capacity [Power / Sample Size]

posted by Helmut – Vienna, Austria, 2022-03-23 12:43 – Posting: # 22862

Hi Bebac user,

❝ Partially replicated study,…


If possible, avoid the partial replicate design. If you want only three periods (say, you are concerned about a potentially higher dropout rate or the larger blood volume sampled in a four-period full replicate design), use one of the two-sequence three-period full replicate designs (TRT|RTR or TRR|RTT). Why?
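The numbers below are a sample-size search for the partial replicate design. As a minimal sketch (assuming the same CVwR = 0.3532 and true T/R ratio of 0.90 as in the examples further down), such output could be produced with PowerTOST:

    library(PowerTOST)
    # Hedged sketch: ABEL sample size for the partial replicate design
    # (TRR|RTR|RRT); details = TRUE shows the steps of the search.
    sampleN.scABEL(CV = 0.3532, theta0 = 0.90, design = "2x3x3", details = TRUE)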

    Sample size search
     n     power
    45   0.7829
    48   0.8045


❝ But, I know that I can't use more than 40 volunteers in any BE study


❝ In these cases, what should I do ?


Two options.
  1. Perform the study in one of the two-sequence four-period full replicate designs (TRTR|RTRT or TRRT|RTTR or TTRR|RRTT).

    library(PowerTOST)
    sampleN.scABEL(CV = 0.3532, design = "2x2x4", details = FALSE)

    +++++++++++ scaled (widened) ABEL +++++++++++
                Sample size estimation
       (simulation based on ANOVA evaluation)
    ---------------------------------------------
    Study design: 2x2x4 (4 period full replicate)
    log-transformed data (multiplicative model)
    1e+05 studies for each step simulated.

    alpha  = 0.05, target power = 0.8
    CVw(T) = 0.3532; CVw(R) = 0.3532
    True ratio = 0.9
    ABE limits / PE constraint = 0.8 ... 1.25
    Regulatory settings: EMA

    Sample size
     n     power
    34   0.8137


    If you are concerned about the inflation of the Type I Error (see this article), adjust \(\small{\alpha}\) (which will slightly reduce power) or, in order to preserve power, increase the sample size after the adjustment accordingly.

    sampleN.scABEL.ad(CV = 0.3532, design = "2x2x4", details = TRUE)

    +++++++++++ scaled (widened) ABEL ++++++++++++
                Sample size estimation
            for iteratively adjusted alpha
       (simulations based on ANOVA evaluation)
    ----------------------------------------------
    Study design: 2x2x4 (4 period full replicate)
    log-transformed data (multiplicative model)
    1,000,000 studies in each iteration simulated.

    Assumed CVwR 0.3532, CVwT 0.3532
    Nominal alpha      : 0.05
    True ratio         : 0.9000
    Target power       : 0.8
    Regulatory settings: EMA (ABEL)
    Switching CVwR     : 0.3
    Regulatory constant: 0.76
    Expanded limits    : 0.7706 ... 1.2977
    Upper scaling cap  : CVwR > 0.5
    PE constraints     : 0.8000 ... 1.2500


    n  34, nomin. alpha: 0.05000 (power 0.8137), TIE: 0.0652

    Sample size search and iteratively adjusting alpha
    n  34,   adj. alpha: 0.03647 (power 0.7758), rel. impact on power: -4.67%
    n  38,   adj. alpha: 0.03629 (power 0.8131), TIE: 0.05000
    Compared to nominal alpha's sample size increase of 11.8% (~study costs).


    Keep in mind that we plan the study for the worst-case condition. If reference-scaling for AUC is not acceptable (many jurisdictions apply ABEL only to Cmax), you may run into trouble (see this article). In your case the maximum \(\small{CV_\textrm{w}}\) for ABE in a partial replicate design within the 40-subject cap is 24.2%. If it is larger, you would exhaust the clinical capacity and have to opt for #2 anyway.
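    A minimal sketch of how this limit could be checked with PowerTOST (assumptions: the T/R ratio of 0.90 and the 80% target power used throughout, and that a balanced partial replicate within the 40-subject cap means at most 39 subjects):

    library(PowerTOST)
    # Hedged sketch: largest CVw for which plain ABE (no reference-scaling)
    # still gives >= 80% power in the partial replicate design within the
    # clinical capacity; 39 is the largest multiple of 3 not exceeding 40.
    CV <- seq(0.20, 0.30, 0.001)
    n  <- sapply(CV, function(x)
                 sampleN.TOST(CV = x, theta0 = 0.90, design = "2x3x3",
                              print = FALSE, details = FALSE)[["Sample size"]])
    max(CV[n <= 39]) # should be close to the 24.2% stated above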

  2. Split the study into groups. I recommend having one group at the maximum capacity of the clinical site. For the partial replicate design that means 40|8 and for the two-sequence three-period full replicate 40|10. Why not use equally sized groups?
    • If, for any crazy reason, an agency does not allow pooling the data, you still have some power to show BE in the large group. In your case you get 74% power with 40 subjects but only 55% with each of the groups of 24 subjects.
    • Furthermore, what would you do if you are lucky and one group passes but the other one fails (which is likely)? Present only the passing one to the agency? I bet that you will be asked for the other one as well. In your case the chance that both groups pass is \(\small{0.55^2}\) or just ≈ 31% (see the sketch below).
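    A minimal sketch of where such numbers could come from (assuming the partial replicate design, the CVwR = 0.3532 and T/R ratio of 0.90 used above, and the hypothetical 40|8 vs. 24|24 split of 48 subjects):

    library(PowerTOST)
    # Hedged sketch: ABEL power of the large group alone vs. one of two
    # equal groups (partial replicate; sequences as balanced as possible).
    p.large <- power.scABEL(CV = 0.3532, theta0 = 0.90, design = "2x3x3",
                            n = c(14, 13, 13))           # 40 subjects
    p.equal <- power.scABEL(CV = 0.3532, theta0 = 0.90, design = "2x3x3",
                            n = 24)                       # one group of 24
    # chance that both equal groups pass, assuming independence
    c(large = p.large, equal = p.equal, both.pass = p.equal^2)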

    In general, European agencies are fine with pooling the data and using the conventional model $$\small{sequence,\,subject(sequence),\,period,\,treatment}\tag{1}$$ I recommend giving a justification in the protocol: subjects with similar demographics, all randomized at the outset of the study, groups dosed within a limited time interval.
    If you are wary (the ‘belt plus suspenders’ approach), pool the data but modify the model to take the group effects into account, i.e., use $$\small{group,\,sequence,\,subject(group\times sequence),\,period(group),\,group\times sequence,\,treatment}\tag{2}$$ With \(\small{(2)}\) you have only \(\small{N_\textrm{Groups}-1}\) degrees of freedom less than in \(\small{(1)}\) and hence, the loss in power is practically negligible. A sketch of how such a model could be coded is given below.
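    As a minimal sketch (not a validated script), model \(\small{(2)}\) could be coded as an all-fixed-effects ANOVA in R; the data frame and column names (BE, logPK, group, sequence, subject, period, treatment) are hypothetical:

    # Hedged sketch: model (2) with all effects fixed, assuming factors
    # group, sequence, subject, period, treatment and the log-transformed
    # response logPK in a hypothetical data frame 'BE'.
    m2 <- lm(logPK ~ group + sequence + group:sequence +
                     group:sequence:subject +   # subject(group x sequence)
                     group:period +             # period(group)
                     treatment,
             data = BE)
    anova(m2)
    exp(confint(m2, level = 0.90))              # back-transformed 90% CIs
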
    What you must not do: perform a pre-test on the \(\small{group\times treatment}\) interaction and use \(\small{(2)}\) only if \(\small{p(G\times T)\ge 0.1}\). As in any test at level \(\small{\alpha=0.1}\) you will face a false positive rate of 10%. Then what? Furthermore, any pre-test inflates the Type I Error.
For background see this article. Good luck.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! (Long live Ukraine!)
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
