## power.TOST.sim and uncertainty [Outliers]

Dear BEQool!

❝ Let's say I plan a study with theta0=0.95, CV=0.20, design="2x2", targetpower=0.8 (all except CV are default settings in PowerTOST), so with sampleN.TOST I get N=20 and power=0.834680

> sampleN.TOST(theta0=0.95, CV=0.20, design="2x2", targetpower=0.8, print=FALSE)

❝  Design alpha  CV theta0 theta1 theta2 Sample size Achieved power Target power
❝     2x2  0.05 0.2   0.95    0.8   1.25          20      0.8346802          0.8

❝ If all of my assumptions in a study are exactly realized, doesn't that mean that I would always (100% of the time) get bioequivalent formulations (and the study would never fail)?

❝ If all of my assumptions in a study are exactly realized (pe=0.95, CV=0.20, design="2x2", n=20) then I would get the following confidence interval:

> CI.BE(pe=0.95,CV=0.20,n=20)

❝     lower     upper
❝ 0.8522362 1.0589787

I think "exactly realized" means something a bit different.
Take a look at the power.TOST.sim() function in the PowerTOST package.
The description says:
Power is calculated by simulations of studies (PE via its normal distribution, MSE via its associated χ2 distribution) and application of the two one-sided t-tests. Power is obtained via ratio of studies found BE to the number of simulated studies.
So "exactly realized" only means that the variances and centers of the distributions used in the power estimation were close to the true values; nothing more. Each simulated study still draws its own PE and MSE, so some studies fail even when the true values equal the assumed ones.
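The description above maps to a short Monte-Carlo sketch. Here is a hypothetical Python port (not the package's code; I assume a balanced 2×2 crossover with df = n − 2, SE = √(2·MSE/n), and MSE = log(CV² + 1)):

```python
import numpy as np
from scipy import stats

def power_tost_sim(n=20, CV=0.2, theta0=0.95, nsims=100_000, seed=123):
    """Simulate 2x2 crossover studies; fraction passing TOST (90% CI in 0.80-1.25)."""
    rng = np.random.default_rng(seed)
    df = n - 2                               # residual degrees of freedom
    mse = np.log(CV**2 + 1)                  # true within-subject variance (log scale)
    se = np.sqrt(2 * mse / n)                # SE of the treatment difference
    # PE via its normal distribution
    pe = rng.normal(np.log(theta0), se, nsims)
    # MSE via its associated chi-squared distribution
    mse_hat = mse * rng.chisquare(df, nsims) / df
    # 90% CI half-width per simulated study, then the BE decision
    hw = stats.t.ppf(0.95, df) * np.sqrt(2 * mse_hat / n)
    be = (pe - hw >= np.log(0.8)) & (pe + hw <= np.log(1.25))
    return be.mean()

print(power_tost_sim())   # close to the exact power 0.8347, never exactly 1
```

With 100,000 simulated studies the estimate settles near 0.835; roughly one study in six fails despite the assumptions being exactly true.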

> power.TOST.sim(n = 20, CV = 0.2, theta0 = 0.95, nsims = 1000)
[1] 0.832
# easy to find a non-BE replicate
> set.seed(5)
> power.TOST.sim(n = 20, CV = 0.2, theta0 = 0.95, nsims = 1, setseed = FALSE)
[1] 0
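As a cross-check, the CI.BE() result quoted above can be reproduced from first principles. A sketch in Python (assuming a balanced 2×2, df = n − 2, and the shortest 90% CI via the t-quantile):

```python
import numpy as np
from scipy import stats

def ci_be(pe=0.95, CV=0.2, n=20, alpha=0.05):
    """90% CI for the T/R ratio in a balanced 2x2 crossover."""
    df = n - 2
    mse = np.log(CV**2 + 1)           # CV -> variance on the log scale
    se = np.sqrt(2 * mse / n)         # SE of the log point estimate
    hw = stats.t.ppf(1 - alpha, df) * se
    return np.exp(np.log(pe) - hw), np.exp(np.log(pe) + hw)

lo, hi = ci_be()
print(lo, hi)   # agrees with the CI.BE() output quoted above
```

Note that this CI is only what you observe if the *estimates* happen to land exactly on the assumed values; in a real study they will not.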

Kind regards,
Mittyri