Achievwin

US,
2024-02-20 20:49

Posting: # 23869

 Power TOST question [Power / Sample Size]

Naïve question:

When we use pa.ABE for parallel-study sample size determination, the output gives N for a certain power (e.g., N = 424 for 85% power). Is this N the total (N = 424) or per arm (212 + 212)?

Kindly explain.
d_labes

Berlin, Germany,
2024-02-22 10:54

@ Achievwin
Posting: # 23871

 Power TOST question

Dear Achievwin,

❝ When we use pa.ABE for parallel-study sample size determination, the output gives N for a certain power (e.g., N = 424 for 85% power). Is this N the total (N = 424) or per arm (212 + 212)?


Open the man page of the function sampleN.TOST() and read under Details:

"The estimated sample size gives always the total number of subjects (not subject/sequence in crossovers or subjects/group in parallel designs – like in some other software packages)."
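
In R the man page can be opened with

library(PowerTOST)
?sampleN.TOST   # or help("sampleN.TOST", package = "PowerTOST")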


That is valid for all the other sample-size estimation functions as well.

The power-calculating functions accept a single number as the sample size, which is again the total. They also accept a vector of sample sizes, which is then interpreted as the sizes of the groups (or sequences).
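
For example, with the default 2×2 crossover (CV and n chosen arbitrarily for illustration; both calls give the same result):

library(PowerTOST)
power.TOST(CV = 0.30, n = 24)        # total sample size
power.TOST(CV = 0.30, n = c(12, 12)) # vector of sequence sizes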

Regards,

Detlew
Achievwin

US,
2024-02-22 19:34

@ d_labes
Posting: # 23872

 Power TOST question

Thank you
Helmut
Vienna, Austria,
2024-02-23 11:06

@ Achievwin
Posting: # 23877

 Total sample size vs. vector of group sizes

Hi Achievwin,

diving deeper into the matter: I guessed the conditions behind your example and show the syntax Detlew mentioned above:

library(PowerTOST)
power.TOST(CV = 0.73, design = "parallel", n = 424)         # total sample size
[1] 0.8511768
power.TOST(CV = 0.73, design = "parallel", n = c(212, 212)) # vector of group sizes
[1] 0.8511768

Identical…

If you are interested in post hoc power (I hope you aren't…), always give the vector of group sizes. If you give only the total sample size, the functions of PowerTOST try to keep groups (or sequences in crossovers) as balanced as possible. If your study was actually unbalanced, they will calculate a power which is slightly higher than the true one.

library(PowerTOST)
n <- sampleN.TOST(CV = 0.73, targetpower = 0.85, design = "parallel")[["Sample size"]]

+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design: 2 parallel groups
log-transformed data (multiplicative model)

alpha = 0.05, target power = 0.85
BE margins = 0.8 ... 1.25
True ratio = 0.95,  CV = 0.73

Sample size (total)
 n     power
424   0.851177

So far, so good. But what about dropouts? Generally you dose more subjects and the number of eligible subjects will not be exactly as anticipated…

do.rate  <- 0.1           # anticipated dropout-rate 10% in both groups
n.adj    <- ceiling(n / (1 - do.rate)) # adjust the sample size and round up (conservative)
dosed    <- rep(n.adj / 2, 2)          # dosed subjects per group (balanced)
do.act   <- c(0.12, 0.08) # actual dropout-rates in groups
eligible <- round(n.adj / 2 * (1 - do.act)) # eligible subjects per group
cat("total sample size       :", n, paste0("(", n / 2, " per group)"),
    "\nanticipated dropout-rate:",
    sprintf("%.3g%% (in both groups)", 100 * do.rate),
    "\ndosed subjects          :",
    sum(dosed), paste0("(", n.adj / 2, " in both groups)"),
    "\nactual dropout-rates    :",
    sprintf("%.3g%% total", 100 * (1 - sum(eligible) / n.adj)),
    paste0("(", paste(sprintf("%.3g%%", 100 * do.act), collapse = " and "),
    " in the groups)"),
    "\ntotal eligible subjects :", sum(eligible),
    paste0("(", paste(eligible, collapse = " and "), " in the groups)"), "\n")

total sample size       : 424 (212 per group)
anticipated dropout-rate: 10% (in both groups)
dosed subjects          : 472 (236 in both groups)
actual dropout-rates    : 9.96% total (12% and 8% in the groups)
total eligible subjects : 425 (208 and 217 in the groups)


power.TOST(CV = 0.73, design = "parallel", n = sum(eligible)) # guessing
Unbalanced design. n(i)=213/212 assumed.
[1] 0.8519593

Note the message!

power.TOST(CV = 0.73, design = "parallel", n = eligible)      # exact
[1] 0.8518124


The functions of PowerTOST cannot ‘know’ whether a study was unbalanced. They guess an (almost) balanced split and issue a message only if the total sample size is odd (parallel design) or not a multiple of the number of sequences (crossovers). Otherwise you don’t get a message like the one above, because in a parallel design n1 = n2 = n/2 is assumed. Try this one:

do.rate  <- 0.10            # same anticipated dropout-rate as above
do.act   <- c(0.125, 0.095) # different actual dropout-rates in the groups
# rerun the lines for n.adj, dosed, eligible, and the cat() from the block above

total sample size       : 424 (212 per group)
anticipated dropout-rate: 10% (in both groups)
dosed subjects          : 472 (236 in both groups)
actual dropout-rates    : 11% total (12.5% and 9.5% in the groups)
total eligible subjects : 420 (206 and 214 in the groups)


power.TOST(CV = 0.73, design = "parallel", n = sum(eligible))
[1] 0.8479974

No message, and the estimate is slightly too high because n1 = n2 = n/2 = 210 is assumed!

power.TOST(CV = 0.73, design = "parallel", n = eligible)
[1] 0.8478754
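
The same holds in crossovers, where a vector gives the number of subjects per sequence and the split is guessed only if the total is not a multiple of the number of sequences. A minimal sketch (CV and sample sizes chosen arbitrarily, output not shown):

library(PowerTOST)
power.TOST(CV = 0.30, design = "2x2", n = 25)        # total only: n(i)=13/12 assumed (with a message)
power.TOST(CV = 0.30, design = "2x2", n = c(14, 11)) # actual sequence sizes: exact post hoc power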


Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz