PowerTOST vs. simulation on ABEL [Power / Sample Size]
Dear Helmut, dear all,
Here are the results of sample size estimation using PowerTOST with expanded acceptance limits according to the EMA guidance for the 3-period partial replicate design
(example call
library(PowerTOST)

CV <- 0.35
# widened acceptance limits, rounded to 4 decimals to take the guidance to the letter
# (for CV >= 0.5 use CV2se(0.5), i.e. the widening is capped at CV = 50%)
theta1 <- round(exp(-0.76*CV2se(CV)), 4)
theta2 <- round(exp(+0.76*CV2se(CV)), 4)
sampleN.TOST(CV=CV, theta0=1.00, theta1=theta1, theta2=theta2, design="2x3x3")
) compared to those based on simulation (sim = sample size based on the simulations, PT = sample size from PowerTOST):

power=80%
GMR   0.85      0.90      0.95      1.00      1.05      1.10      1.15      1.20
CV   sim   PT  sim   PT  sim   PT  sim   PT  sim   PT  sim   PT  sim   PT  sim   PT
30%  194  219   53   60   27   30   22   24   26   30   45   51  104  117 >201  483
35%  127  120   51   48   29   27   25   24   29   27   45   42   84   70 >201  189
40%   90   84   44   42   29   27   27   24   30   27   42   39   68   60  139  114
45%   77   66   40   36   29   27   27   24   29   27   37   33   57   51  124   84
50%   75   57   40   36   30   27   28   24   30   27   37   33   53   45  133   69
55%   81   66   42   42   32   30   30   30   32   30   40   39   56   54  172   81
60%   88   75   46   48   36   36   33   33   36   36   44   45   63   63 >201   93
65%   99   87   53   54   40   39   37   36   40   39   50   51   71   69 >201  108
70%  109   99   58   60   45   45   41   42   45   45   56   57   80   78 >201  120
75%  136  108   67   66   50   51   46   48   50   51   62   63   89   87 >201  135
80%  144  120   72   75   54   57   51   51   55   54   68   69   97   99 >201  150

power=90%
GMR   0.85      0.90      0.95      1.00      1.05      1.10      1.15      1.20
CV   sim   PT  sim   PT  sim   PT  sim   PT  sim   PT  sim   PT  sim   PT  sim   PT
30% >201  303   74   81   36   39   28   30   36   39   62   69  147  162 >201  666
35%  181  165   70   66   39   36   32   30   39   36   63   57  117  108 >201  258
40%  130  114   61   57   38   36   33   30   39   36   57   51   94   84 >201  159
45%  132   90   55   51   37   33   33   30   38   33   51   48   85   69 >201  117
50%  158   75   55   48   39   33   34   30   38   33   51   42   84   63 >201   93
55%  178   90   59   54   41   39   37   36   41   39   53   51   97   72 >201  111
60%  199  105   64   63   45   45   41   42   46   45   60   60  112   84 >201  129
65% >201  120   72   72   51   51   46   48   51   51   67   66  125   96 >201  147
70% >201  135   82   81   57   57   52   51   57   57   76   75  141  108 >201  165
75% >201  150   93   90   66   66   58   57   64   63   85   84  161  120 >201  186
80% >201  168  100  102   70   72   63   66   71   72   93   93  176  135 >201  207
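For reference, the PT columns above can be reproduced by looping the example call over the CV/GMR grid. A minimal sketch (the loop and the object names are mine; only CV2se() and sampleN.TOST() come from PowerTOST); use targetpower=0.90 for the second table:

library(PowerTOST)

CVs  <- seq(30, 80, 5)/100
GMRs <- c(0.85, 0.90, 0.95, 1.00, 1.05, 1.10, 1.15, 1.20)

PT <- sapply(GMRs, function(gmr) {
  sapply(CVs, function(cv) {
    # widened limits, capped at CV=50%, rounded to 4 decimals
    wl <- round(exp(c(-1, +1)*0.76*CV2se(min(cv, 0.50))), 4)
    sampleN.TOST(CV=cv, theta0=gmr, theta1=wl[1], theta2=wl[2],
                 design="2x3x3", targetpower=0.80,
                 print=FALSE)[["Sample size"]]
  })
})
dimnames(PT) <- list(CV=paste0(100*CVs, "%"), GMR=GMRs)
PT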
Considering that:
- PowerTOST does not consider the peculiarities of the EMA method, such as the mixed procedure around CV=30% and the additional GMR constraint (a sketch follows below)
- PowerTOST gives sample sizes only for balanced designs (step size of 3 here)
- the simulated power has a simulation error associated with it

Some observations:
- At CV=30% the PowerTOST results are around 10% higher than the simulated ones. I guess this is due to the mixed procedure (no widening if the estimated CV is ≤30%, widening if the estimated CV is >30%, which occurs in the simulated data in roughly half of the cases each)
- In the GMR range 0.9 - 1.1 and CVs >30% up to 50% the PowerTOST results are around 10% too low.
- In the GMR range 0.9 - 1.1 and CVs >50% the agreement is nearly perfect (max. difference ±2).
- Outside the GMR range 0.9 - 1.1 (GMR = 0.85, 1.15, and 1.2) the sample sizes based on simulation are consistently much higher. I guess this is due to the additional GMR constraint, which only here has an appreciable influence.
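To make the first caveat concrete, here is a minimal sketch of the EMA decision scheme the simulations follow but which a plain sampleN.TOST() call cannot reflect (the function names ema.limits() and ABEL() are mine; only CV2se() comes from PowerTOST):

library(PowerTOST)

# Widened limits: no widening for CVwR <= 30%, scaling by exp(+/-0.76*swR)
# above 30%, capped at CVwR = 50% (i.e., 69.84 - 143.19%).
ema.limits <- function(CVwR) {
  if (CVwR <= 0.30) return(c(0.80, 1.25))
  swR <- CV2se(min(CVwR, 0.50))
  round(exp(c(-1, +1)*0.76*swR), 4)
}

# ABEL decision: 90% CI within the (possibly widened) limits plus the
# additional constraint on the point estimate (GMR within 0.80 - 1.25).
ABEL <- function(pe, lower, upper, CVwR) {
  lim   <- ema.limits(CVwR)
  ci.ok <- (lower >= lim[1]) && (upper <= lim[2])
  pe.ok <- (pe >= 0.80) && (pe <= 1.25)
  ci.ok && pe.ok
}

ABEL(pe=0.93, lower=0.77, upper=1.12, CVwR=0.40)  # TRUE; limits 0.7462 - 1.3402

Whether the estimated CVwR of a simulated trial lands below or above 30% then decides which limits apply; that is the mixed procedure mentioned above.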
—
Regards,
Detlew