Power<50% [Power / Sample Size]
❝ CI.BE() gives us the realized results (i.e., the values observed in a particular study), whereas power.scABEL() gives the results of 10⁵ simulations. Both the CV and PE are skewed to the right and, therefore, I would expect the simulated power to be lower than what we see when assessing whether a particular study passes. However, such a large discrepancy is surprising to me.
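For context, a minimal sketch contrasting the two calls from the quote, assuming the CV = 1.1, n = 36, 2×2×3 scenario used below and the EMA's ABEL conditions (an assumption on my part):

library(PowerTOST)
CV <- 1.1; n <- 36; des <- "2x2x3"   # assumed scenario (as in the example below)
CI.BE(pe=0.95, CV=CV, n=n, design=des)          # realized result: 90% CI of one particular study
scABEL(CV=CV, regulator="EMA")                  # expanded (here capped) acceptance limits for this CVwR
power.scABEL(theta0=0.95, CV=CV, n=n, design=des, regulator="EMA")  # power based on 1e5 simulated studies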
What about the function power.TOST()? I think that it doesn't give us a result based on simulations? Am I right? And this calculated power is also way below 50% (similar to the power calculated above):
power.TOST(theta0=0.95,n=36,design="2x2x3",CV=1.1, theta1=0.6983678, theta2=1.4319102)
0.2355959
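If I understand it correctly, power.TOST() computes power analytically (i.e., exactly), not by simulations. A sketch of a cross-check against the simulation-based power.TOST.sim(), using the same values (the expanded limits are the ones assumed in the call above, not defaults):

library(PowerTOST)
power.TOST(theta0=0.95, n=36, design="2x2x3", CV=1.1,
           theta1=0.6983678, theta2=1.4319102)     # exact (analytical) power
power.TOST.sim(theta0=0.95, n=36, design="2x2x3", CV=1.1,
               theta1=0.6983678, theta2=1.4319102,
               nsims=1e5)                          # simulated; should agree closely with the exact value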
I can find more such examples. All are, as expected, close to the limit (see the combined check sketched after the examples):
a)
CI.BE(pe=0.95,n=36,design="2x2x3",CV=0.5)
lower upper
0.8089197 1.1156855
power.TOST(theta0=0.95,n=36,design="2x2x3",CV=0.5)
0.4273464
b)
CI.BE(pe=0.95,n=36,design="2x2",CV=0.44)
lower upper
0.8033552 1.1234134
power.TOST(theta0=0.95,n=36,design="2x2",CV=0.44)
0.3786147
c)
CI.BE(pe=0.94,n=18,design="2x2",CV=0.28)
lower upper
0.8011079 1.1029725
power.TOST(theta0=0.94,n=18,design="2x2",CV=0.28)
0.4259032
...
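A quick combined check of a)–c) (a sketch, assuming the conventional limits 0.80–1.25, i.e. the defaults of the power.TOST() calls above): in each case the 90% CI lies within the limits, so the study would pass, yet the post-hoc power is below 50%.

library(PowerTOST)
scenarios <- data.frame(pe     = c(0.95, 0.95, 0.94),
                        n      = c(36, 36, 18),
                        CV     = c(0.50, 0.44, 0.28),
                        design = c("2x2x3", "2x2", "2x2"),
                        stringsAsFactors = FALSE)
for (i in seq_len(nrow(scenarios))) {
  s  <- scenarios[i, ]
  ci <- CI.BE(pe = s$pe, CV = s$CV, n = s$n, design = s$design)          # realized 90% CI
  pw <- power.TOST(theta0 = s$pe, CV = s$CV, n = s$n, design = s$design) # post-hoc power
  cat(sprintf("%s) %-5s CI %.4f-%.4f, passes = %s, power = %.4f\n",
              letters[i], s$design, ci[["lower"]], ci[["upper"]],
              ci[["lower"]] >= 0.80 & ci[["upper"]] <= 1.25, pw))
}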
❝ If any of the metrics shows power <50%, the study will fail (see this article).
Is it possible that the opposite is always true and this isn't?
So that if the study fails, the power will always be <50%; but if the power is <50%, it doesn't always mean that the study failed (or will fail)?