## Total sample size vs. vector of groups sizes [Power / Sample Size]

Diving deeper into the matter: I guessed your example to show the syntax Detlew mentioned above:

```r
library(PowerTOST)
power.TOST(CV = 0.73, design = "parallel", n = 424)         # total sample size
# [1] 0.8511768
power.TOST(CV = 0.73, design = "parallel", n = c(212, 212)) # vector of group sizes
# [1] 0.8511768
```

If you are interested in *post hoc* power (I hope you aren’t…), always give the vector of group sizes. If you give only the total sample size, the functions of `PowerTOST` try to keep groups (or sequences in crossovers) as balanced as possible. If your study was more unbalanced than assumed, they will calculate a power which is slightly higher than the true one.
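To illustrate the point, one can compare different splits of the same total sample size; the unbalanced splits below are arbitrary numbers of my choosing, picked only to show the direction of the effect:

```r
library(PowerTOST)
# Same total (424) split in different ways. Since the standard error grows
# with imbalance (it is proportional to sqrt(1/n1 + 1/n2)), power decreases
# as the split departs from 212/212, so the balanced guess is optimistic.
power.TOST(CV = 0.73, design = "parallel", n = c(212, 212)) # balanced
power.TOST(CV = 0.73, design = "parallel", n = c(190, 234)) # unbalanced
power.TOST(CV = 0.73, design = "parallel", n = c(150, 274)) # heavily unbalanced
```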

```r
n <- sampleN.TOST(CV = 0.73, targetpower = 0.85,
                  design = "parallel")[["Sample size"]]
```

```
+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design: 2 parallel groups
log-transformed data (multiplicative model)
alpha = 0.05, target power = 0.85
BE margins = 0.8 ... 1.25
True ratio = 0.95, CV = 0.73

Sample size (total)
 n      power
424   0.851177
```

So far, so good. But what about dropouts? Generally you dose more subjects and the number of eligible subjects will not be exactly as anticipated…

```r
do.rate  <- 0.1                        # anticipated dropout-rate 10% in both groups
n.adj    <- ceiling(n / (1 - do.rate)) # adjust the sample size and round up (conservative)
dosed    <- rep(n.adj / 2, 2)          # dosed subjects per group
do.act   <- c(0.12, 0.08)              # actual dropout-rates in the groups
eligible <- round(n.adj / 2 * (1 - do.act))
cat("total sample size       :", n, paste0("(", n / 2, " per group)"),
    "\nanticipated dropout-rate:",
    sprintf("%.3g%% (in both groups)", 100 * do.rate),
    "\ndosed subjects          :",
    sum(dosed), paste0("(", n.adj / 2, " in both groups)"),
    "\nactual dropout-rates    :",
    sprintf("%.3g%% total", 100 * (1 - sum(eligible) / n.adj)),
    paste0("(", paste(sprintf("%.3g%%", 100 * do.act), collapse = " and "),
           " in the groups)"),
    "\ntotal eligible subjects :", sum(eligible),
    paste0("(", paste(eligible, collapse = " and "), " in the groups)"), "\n")
```

```
total sample size       : 424 (212 per group)
anticipated dropout-rate: 10% (in both groups)
dosed subjects          : 472 (236 in both groups)
actual dropout-rates    : 9.96% total (12% and 8% in the groups)
total eligible subjects : 425 (208 and 217 in the groups)
```

```r
power.TOST(CV = 0.73, design = "parallel", n = sum(eligible)) # guessing
# Unbalanced design. n(i)=213/212 assumed.
# [1] 0.8519593
```

Note the message!

```r
power.TOST(CV = 0.73, design = "parallel", n = eligible)      # exact
# [1] 0.8518124
```

The functions of `PowerTOST` cannot ‘know’ whether a study was unbalanced. They *guess* an imbalance only if the total sample size is odd (parallel design) or not a multiple of the number of sequences (crossovers). Otherwise, you don’t get a message like the one above, because in a parallel design it is assumed that *n*₁ = *n*₂ = *n*/2. Try this one:

```r
do.rate <- 0.10
do.act  <- c(0.125, 0.095)
# …then rerun the calculations above with these values
```

```
total sample size       : 424 (212 per group)
anticipated dropout-rate: 10% (in both groups)
dosed subjects          : 472 (236 in both groups)
actual dropout-rates    : 11% total (12.5% and 9.5% in the groups)
total eligible subjects : 420 (206 and 214 in the groups)
```

```r
power.TOST(CV = 0.73, design = "parallel", n = sum(eligible))
# [1] 0.8479974
```

No message, and the estimate is too high because *n*₁ = *n*₂ = *n*/2 = 210 is assumed!

```r
power.TOST(CV = 0.73, design = "parallel", n = eligible)
# [1] 0.8478754
```
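The moral is to never rely on the balanced guess for *post hoc* power. As a sketch, one could wrap `power.TOST()` so that a scalar total is rejected outright; the wrapper name `power.exact` is hypothetical, not part of `PowerTOST`:

```r
library(PowerTOST)
# Hypothetical convenience wrapper (not part of PowerTOST): refuse a scalar
# total so the silent balanced-groups assumption can never kick in.
power.exact <- function(CV, n, design = "parallel", ...) {
  if (length(n) < 2)
    stop("give the vector of group (or sequence) sizes, not the total")
  power.TOST(CV = CV, design = design, n = n, ...)
}
power.exact(CV = 0.73, n = c(206, 214)) # OK: vector of group sizes
```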

*Dif-tor heh smusma* 🖖🏼 Long live Ukraine!


Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮


### Complete thread:

- Power TOST question Achievwin 2024-02-20 19:49 [Power / Sample Size]
  - Power TOST question d_labes 2024-02-22 09:54
    - Power TOST question Achievwin 2024-02-22 18:34
      - Total sample size vs. vector of groups sizes Helmut 2024-02-23 10:06