## ABEL is a framework (decision scheme) [Power / Sample Size]

❝ […]: why does the sample size estimation in *R* with the package *PowerTOST* differ between `sampleN.TOST()` and `sampleN.scABEL()` when CV = 30%?

ABEL is a framework (a decision scheme) depending on the *realized* (observed) \(\small{CV_\text{wR}}\), the upper constraint *uc* = 50%, and the point estimate. Therefore, simulations are needed. At a *true* \(\small{CV_\text{wR}=30\%}\) we have – roughly – a 50% chance that in the actual study \(\small{CV_\text{wR}>30\%}\). Then we could expand the limits, gain power for a given sample size, or need fewer subjects for a certain power than we would need in ABE.
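For orientation, the EMA's decision scheme for the limits can be sketched in a few lines of base R (*PowerTOST* provides this as `scABEL()`; the sketch below assumes the EMA constants: switching condition 30%, regulatory constant 0.76, cap at *uc* = 50%):

```r
# Minimal base-R sketch of the EMA's ABEL limits; the real
# implementation is scABEL() in PowerTOST. Assumed constants:
# CVswitch = 30%, regulatory constant k = 0.76, upper cap uc = 50%.
scABEL.sketch <- function(CVwR) {
  if (CVwR <= 0.30) {                   # below the switch: no expansion
    L.U <- c(0.80, 1.25)
  } else {
    CVwR <- min(CVwR, 0.50)             # cap at the upper constraint uc
    swR  <- sqrt(log(CVwR^2 + 1))       # CV -> SD on the log-scale
    L.U  <- exp(c(-1, +1) * 0.76 * swR) # expanded limits
  }
  setNames(round(L.U, 4), c("lower", "upper"))
}
scABEL.sketch(0.30)  # 0.8000 1.2500 (conventional limits)
scABEL.sketch(0.40)  # expanded limits
scABEL.sketch(0.55)  # capped at the limits for CVwR = 50%
```

At exactly CV 30% the conventional limits apply; only a *realized* \(\small{CV_\text{wR}}\) above 30% triggers expansion – hence the need for simulations.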

❝ So why do the sample size estimations differ? They have the same arguments (`design = "2x3x3"`, `theta0 = 0.95`, …). CV is 30%, so there should be no scaling (conventional BE limits, i.e., 80.00–125.00%).

You could also remove all scaling conditions. Then you get the same sample size as with `sampleN.TOST()`:

    reg <- reg_const(regulator = "USER", r_const = 1, CVswitch = Inf,
                     CVcap = Inf, pe_constr = FALSE)
    sampleN.scABEL(CV = 0.3, theta0 = 0.95, design = "2x3x3",
                   regulator = reg, details = FALSE)

    +++++++++++ scaled (widened) ABEL +++++++++++
                Sample size estimation
       (simulation based on ANOVA evaluation)
    ---------------------------------------------
    Study design: 2x3x3 (partial replicate)
    log-transformed data (multiplicative model)
    1e+05 studies for each step simulated.

    alpha  = 0.05, target power = 0.8
    CVw(T) = 0.3; CVw(R) = 0.3
    True ratio = 0.95
    ABE limits / PE constraint = 0.8 ... 1.25
    USER defined regulatory settings
    - CVswitch = Inf
    - no cap on scABEL
    - regulatory constant = 1
    - no pe constraint

    Sample size
     n    power
    30   0.8196

❝ Does it have anything to do with the simulations? But even when I increase the number of simulations, the differences aren't that big.

❝ Or, to reformulate the question, why don't the following powers match:

❝ a) `power.TOST(CV = 0.3, theta0 = 0.95, design = "2x3x3", n = 30)`
❝ `[1] 0.8204004`

❝ b) `power.scABEL(CV = 0.3, theta0 = 0.95, design = "2x3x3", n = 30)`
❝ `[1] 0.85977`

You asked the wrong question in b), because for ABEL the estimated sample size is 27 – not 30. At n = 30 ABEL exceeds the target power because in roughly half of the simulated studies the realized \(\small{CV_\text{wR}}\) is above 30% and the limits are expanded:

`power.scABEL(CV = 0.3, theta0 = 0.95, design = "2x3x3", `**`n = 27`**`)`
`[1] 0.82566`
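A quick base-R Monte-Carlo illustrates the point (a sketch under the simulation model described above, where \(\small{s_\text{wR}^2}\) follows \(\small{\sigma^2\,\chi^2_{df}/df}\) with \(\small{df=n-2}\); the function name `p.expand` is made up for illustration):

```r
# Sketch (base R): chance that the realized CVwR exceeds the EMA's
# switching condition of 30%. Assumes s2wR ~ sigma2 * chisq(df) / df
# with df = n - 2 (the simulation model of the variance).
p.expand <- function(CV, n = 30, nsims = 1e5) {
  df   <- n - 2
  s2   <- log(CV^2 + 1)               # true log-scale variance
  s2wR <- s2 * rchisq(nsims, df) / df # simulated realized variances
  mean(sqrt(exp(s2wR) - 1) > 0.30)    # fraction of studies with expansion
}
set.seed(1234567)
p.expand(0.30)  # roughly one half
p.expand(0.25)  # much smaller
```

At a true \(\small{CV_\text{wR}}\) of 25% (the next example) expansion is far less likely, and ABEL behaves almost like ABE.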

Let’s consider an example where expanding the limits is less likely.

**ABE (exact)**

`sampleN.TOST(CV = 0.25, theta0 = 0.95, design = "2x3x3")`

    +++++++++++ Equivalence test - TOST +++++++++++
                Sample size estimation
    -----------------------------------------------
    Study design: 2x3x3 (partial replicate)
    log-transformed data (multiplicative model)

    alpha = 0.05, target power = 0.8
    BE margins = 0.8 ... 1.25
    True ratio = 0.95,  CV = 0.25

    Sample size (total)
     n    power
    21   0.814342

**ABEL (simulations)**

Note that for the partial replicate design you should use subject simulations by the function `sampleN.scABEL.sdsims()` instead of simulating the associated statistics by the function `sampleN.scABEL()`.

`sampleN.scABEL.sdsims(CV = 0.25, theta0 = 0.95, design = "2x3x3", details = FALSE)`

    +++++++++++ scaled (widened) ABEL +++++++++++
                Sample size estimation
       (simulation based on ANOVA evaluation)
    ---------------------------------------------
    Study design: 2x3x3 (partial replicate)
    log-transformed data (multiplicative model)
    1e+05 studies for each step simulated.

    alpha  = 0.05, target power = 0.8
    CVw(T) = 0.25; CVw(R) = 0.25
    True ratio = 0.95
    ABE limits / PE constraint = 0.8 ... 1.25
    Regulatory settings: EMA

    Sample size
     n    power
    21   0.8223

BTW, I would *never ever* assume `theta0 = 0.95` for a HVD(P). That's why `theta0 = 0.90` is the default in the reference-scaling functions of `PowerTOST`.

The simulated variance \(\small{s_\text{wR}^2}\) depends on its associated \(\small{\chi^2}\)-distribution with \(\small{n-2}\) degrees of freedom. The \(\small{\chi^2}\)-distribution is skewed to the right. Hence, when simulating any \(\small{CV_\text{wR}=x}\), the distribution of realized values is asymmetric: its peak (the mode) lies below \(\small{x}\), and the tail towards large values is longer.

Try this to show the asymmetric distribution:

    n  <- 1e6                      # number of chi-squared random samples
    df <- 27 - 2                   # degrees of freedom (n = 27 subjects)
    set.seed(1234567)              # for reproducibility
    x  <- rchisq(n = n, df = df)
    y  <- hist(x, breaks = "FD", plot = FALSE)
    plot(y, freq = FALSE, col = "lightblue", border = NA,
         xlim = c(0, max(y$mids)), cex.main = 1, las = 1,
         xlab = bquote("Number of random samples of "*chi^2 == .(n)),
         main = bquote(italic(df) == .(df)))
    abline(v = y$mids[y$density == max(y$density)], col = "blue") # mode
    box()
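The asymmetry can also be shown numerically: for a right-skewed distribution the mode lies below the median, which lies below the mean (for \(\small{\chi^2}\) with \(\small{df}\) degrees of freedom the mode is \(\small{df-2}\) and the mean is \(\small{df}\)). A one-liner in base R, with the same degrees of freedom as in the plot:

```r
# Mode, median, and mean of the chi-squared distribution with df = 25:
# mode < median < mean confirms the right skew.
df <- 27 - 2
round(c(mode = df - 2, median = qchisq(0.5, df), mean = df), 2)
```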

*Dif-tor heh smusma* 🖖🏼 Long live Ukraine!


Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮

Science Quotes

### Complete thread:

- sampleN.TOST vs. sampleN.scABEL BEQool 2024-01-29 11:53 [Power / Sample Size]
    - ABEL is a framework (decision scheme) Helmut 2024-01-29 13:52
        - ABEL is a framework (decision scheme) BEQool 2024-01-30 10:52
            - ABEL vs. ABE Helmut 2024-01-30 13:08
                - ABEL vs. ABE BEQool 2024-02-04 19:24
                    - Being able to read does not hurt… Helmut 2024-02-04 21:12
                        - Being able to read does not hurt… BEQool 2024-02-05 13:20
                            - Being able to read does not hurt… Helmut 2024-02-05 16:01
                                - Vet BE Guideline mittyri 2024-02-05 16:49
