sampleN.TOST vs. sampleN.scABEL [Power / Sample Size]
Hello!
I have searched the forum but couldn't find the answer to the following question: why does sample size estimation in R with the package PowerTOST differ between sampleN.TOST and sampleN.scABEL when CV = 30%? Let's take a look at the following example:
a) Sample size estimation with sampleN.TOST:
sampleN.TOST(CV=0.3, theta0=0.95, design="2x3x3")
+++++++++++ Equivalence test - TOST +++++++++++
Sample size estimation
-----------------------------------------------
Study design: 2x3x3 (partial replicate)
log-transformed data (multiplicative model)
alpha = 0.05, target power = 0.8
BE margins = 0.8 ... 1.25
True ratio = 0.95, CV = 0.3
Sample size (total)
n power
30 0.820400
b) Sample size estimation with sampleN.scABEL:
sampleN.scABEL(CV=0.3, theta0=0.95, design="2x3x3")
+++++++++++ scaled (widened) ABEL +++++++++++
Sample size estimation
(simulation based on ANOVA evaluation)
---------------------------------------------
Study design: 2x3x3 (partial replicate)
log-transformed data (multiplicative model)
1e+05 studies for each step simulated.
alpha = 0.05, target power = 0.8
CVw(T) = 0.3; CVw(R) = 0.3
True ratio = 0.95
ABE limits / PE constraint = 0.8 ... 1.25
EMA regulatory settings
- CVswitch = 0.3
- cap on scABEL if CVw(R) > 0.5
- regulatory constant = 0.76
- pe constraint applied
Sample size search
n power
24 0.7814
27 0.8257
So why do the sample size estimations differ? They use the same arguments (design="2x3x3", theta0=0.95, …). The CV is 30%, so there should be no scaling (conventional BE limits, i.e., 80.00-125.00). Does it have anything to do with the simulations? But even when I increase the number of simulations, the differences aren't that big.
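As a side note, the limits that apply at a given CV can be checked directly; a minimal sketch, assuming the PowerTOST package is installed (scABEL() is its helper returning the BE acceptance limits for a given CVwR):

```r
# Requires the PowerTOST package: install.packages("PowerTOST")
library(PowerTOST)

# BE acceptance limits for a given CVwR under the default (EMA) settings.
scABEL(CV = 0.30)  # at CVswitch: conventional limits 0.80 ... 1.25
scABEL(CV = 0.35)  # above CVswitch: widened limits
```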
Or, if I reformulate the question: why don't the following powers match?
a)
power.TOST(CV=0.3, theta0=0.95, design="2x3x3", n=30)
[1] 0.8204004
b)
power.scABEL(CV=0.3, theta0=0.95, design="2x3x3", n=30)
[1] 0.85977
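For reference, the number of simulated studies in the scABEL functions can be raised via the nsims argument (its default is 1e5); a sketch, again assuming PowerTOST is installed:

```r
library(PowerTOST)

# Exact (analytical) power of conventional ABE:
power.TOST(CV = 0.3, theta0 = 0.95, design = "2x3x3", n = 30)

# Simulated power of the ABEL decision scheme; nsims controls the
# number of simulated studies (default 1e5).
power.scABEL(CV = 0.3, theta0 = 0.95, design = "2x3x3", n = 30,
             nsims = 1e6)
```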
Regards
BEQool
Edit: Category changed. [Helmut]
Complete thread:
- sampleN.TOST vs. sampleN.scABEL BEQool 2024-01-29 11:53 [Power / Sample Size]
- ABEL is a framework (decision scheme) Helmut 2024-01-29 13:52
- ABEL is a framework (decision scheme) BEQool 2024-01-30 10:52
- ABEL vs. ABE Helmut 2024-01-30 13:08
- ABEL vs. ABE BEQool 2024-02-04 19:24
- Being able to read does not hurt… Helmut 2024-02-04 21:12
- Being able to read does not hurt… BEQool 2024-02-05 13:20
- Being able to read does not hurt… Helmut 2024-02-05 16:01
- Vet BE Guideline mittyri 2024-02-05 16:49