Jay ☆ India, 2016-01-08 07:49 (2819 d 15:37 ago) Posting: # 15802 Views: 12,868 

Dear all, As discussed in previous posts, post hoc power is usually not given much consideration. But when the post hoc power of AUCt and AUCinf is around 90–100% while the post hoc power of Cmax is less than 20%, what could be the probable reason for that? Regards, Jay 
ElMaestro ★★★ Denmark, 2016-01-08 16:07 (2819 d 07:20 ago) @ Jay Posting: # 15805 Views: 11,587 

Hi Jay, ❝ As discussed in previous posts, post hoc power is usually not given much consideration. ❝ But when the post hoc power of AUCt and AUCinf is around 90–100% while the post hoc power of Cmax is less than 20%, what could be the probable reason for that? First of all, I think you should mention how you calculate post hoc power: do you use GMR=0.95, GMR=1.0, or the observed GMR? Anyway, low post hoc power could be due to a deviating GMR (if you use the observed GMR, this likely indicates a product failure), to high variability, or to both. If you have a situation with a deviating Cmax point estimate but an acceptable AUCt point estimate, then you are facing a challenge. Having said all that, I must admit I am also in the camp of people who think post hoc power is either rubbish or of very limited value. — Pass or fail! ElMaestro 
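ElMaestro's point that the assumed GMR drives the answer can be sketched with a large-sample (normal) approximation of TOST power. Everything here is an illustrative assumption, not the thread's actual calculation: a 2×2 crossover, CV 20%, n 24, and the normal approximation itself (it ignores the t-distribution's degrees of freedom and is therefore slightly optimistic compared to an exact calculation such as PowerTOST's).

```python
from math import log, sqrt
from statistics import NormalDist

def posthoc_power(gmr, cv, n, alpha=0.05):
    """Large-sample (normal) approximation of TOST power, 2x2 crossover.

    gmr: the GMR assumed for the calculation (0.95, 1.0, or the observed PE);
    cv:  intra-subject CV; n: total number of subjects.
    """
    se = sqrt(log(cv**2 + 1.0)) * sqrt(2.0 / n)  # SE of the log-difference
    z = NormalDist().inv_cdf(1.0 - alpha)        # one-sided critical value
    phi = NormalDist().cdf
    power = (phi((log(1.25) - log(gmr)) / se - z)
             + phi((log(gmr) - log(0.80)) / se - z) - 1.0)
    return max(0.0, power)

# The assumed GMR drives the number (illustrative: CV 20%, n 24):
for gmr in (0.95, 1.00, 1.20):
    print(f"GMR {gmr:.2f}: power ~ {posthoc_power(gmr, 0.20, 24):.3f}")
```

With the same CV and n, the "post hoc power" changes drastically depending on which GMR is plugged in, which is exactly why the question "how do you calculate it?" matters.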
Helmut ★★★ Vienna, Austria, 2016-01-09 15:21 (2818 d 08:05 ago) @ Jay Posting: # 15807 Views: 11,437 

Hi Jay, I agree with ElMaestro. Can you give us (both for AUC and C_{max}) the point estimate, the intra-subject CV, and the design/sample size?
— Diftor heh smusma 🖖🏼 Довге життя Україна! _{} Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
Jay ☆ India, 2016-01-11 11:59 (2816 d 11:27 ago) @ Helmut Posting: # 15810 Views: 11,256 

Dear Helmut, It is a pilot study with N=18. The GMR, IS%CV, and observed post hoc power are:
C_{max}: GMR 127.8%, IS%CV 21.01%, post hoc power 2.4%
AUC_{t}: GMR 105.1%, IS%CV 16.71%, post hoc power 92.15%
AUC_{inf}: GMR 105.49%, IS%CV 16.73%, post hoc power 91.21%
Regards, Jay 
Helmut ★★★ Vienna, Austria, 2016-01-11 15:02 (2816 d 08:24 ago) @ Jay Posting: # 15812 Views: 11,384 

Hi Jay, ❝ It is a pilot study with N=18. The GMR and IS%CV values for C_{max}, AUC_{t} and AUC_{inf} are 127.8 & 21.01, 105.1 & 16.71, and 105.49 & 16.73. The post hoc power observed is 2.4, 92.15 and 91.21, respectively. I guess this was a 3×3 Latin Square or a 6×3 Williams' design? Otherwise I don't come close to your numbers. Post hoc power of a pilot study is even more useless than that of a pivotal study. However, with a PE of C_{max} outside the acceptance range I would strongly suggest reformulating.
Jay ☆ India, 2016-01-13 09:24 (2814 d 14:02 ago) @ Helmut Posting: # 15819 Views: 11,293 

Dear Helmut, You are right, it was a partial replicate design. But what may be the probable reason for such a huge difference between the post hoc power of Cmax and that of AUC?
Regards, Jay 
d_labes ★★★ Berlin, Germany, 2016-01-13 12:59 (2814 d 10:27 ago) @ Jay Posting: # 15821 Views: 11,190 

Dear Jay, ❝ ... But what may be the probable reason for such huge difference between post hoc power of Cmax and AUC. Simple enough: the PE for AUC is not outside the acceptance range, whereas the PE for Cmax is. For Cmax, with a PE outside the acceptance range, you could even have a very large number of subjects in your study without raising your post hoc power; on the contrary, power even decreases (try it with power.TOST() from library(PowerTOST)). Post hoc power is useless because it is a 1:1 function of the BE test.
Edit: power.TOST() calls changed to the partial replicate design after Helmut's post below. (Count up: post 992) — Regards, Detlew 
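The effect Detlew describes (power falling, not rising, as n grows once the PE lies outside the acceptance range) can be sketched by simulation. This Python sketch is an assumption-laden stand-in for his elided power.TOST() calls: it assumes a 2×2 crossover rather than the study's partial replicate (so the numbers are only indicative), takes the PE 1.278 and CV 21.01% from Jay's post, and hard-codes the one-sided 5% t-quantiles.

```python
import random
from math import log, sqrt

def sim_power(gmr, cv, n, tcrit, nsim=30000, seed=1):
    """Monte Carlo power of TOST (90% CI within 0.80-1.25), 2x2 crossover."""
    rng = random.Random(seed)
    sd = sqrt(log(cv**2 + 1.0))      # within-subject SD on the log scale
    se = sd * sqrt(2.0 / n)          # true SE of the estimated log-difference
    df = n - 2
    lo, hi, mu = log(0.80), log(1.25), log(gmr)
    passed = 0
    for _ in range(nsim):
        m = rng.gauss(mu, se)                              # estimated log-GMR
        s2 = sd**2 * rng.gammavariate(df / 2.0, 2.0) / df  # residual variance
        hw = tcrit * sqrt(s2) * sqrt(2.0 / n)              # CI half-width
        if lo <= m - hw and m + hw <= hi:
            passed += 1
    return passed / nsim

# qt(0.95, df) hard-coded from tables: df=16 -> 1.7459, df=1000 -> 1.6464
p18   = sim_power(1.278, 0.2101, 18,   1.7459)
p1002 = sim_power(1.278, 0.2101, 1002, 1.6464)
print(p18, p1002)  # power shrinks as n grows when the true GMR sits outside the AR
```

With a true GMR of 1.278 the chance that the entire 90% CI lands inside 0.80–1.25 is small at n=18 and essentially vanishes at n=1002: more subjects only tighten the CI around a failing point estimate.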
Helmut ★★★ Vienna, Austria, 2016-01-13 13:38 (2814 d 09:48 ago) @ Jay Posting: # 15822 Views: 11,140 

Hi Jay, ❝ You are right, it was partial replicate design. No, I was wrong: 3×3 Latin Square ∨ 3×6×3 Williams' design ≠ partial replicate. In the former two designs three treatments are compared, in the latter only two (where R is administered twice). In the future please give all necessary information; we don't have a crystal ball. See Detlew's comments. When we are considering TOST, it's simple: power curves for any sample size intersect at the limits of the BE acceptance range with power exactly 0.05. That's the rule of the game. If the PE is outside the AR, power will be <0.05. No software is needed at all. Thus, given your PE: reformulate. The jury is still out when it comes to RSABE.
d_labes ★★★ Berlin, Germany, 2016-01-13 14:21 (2814 d 09:05 ago) @ Helmut Posting: # 15824 Views: 11,196 

Dear Helmut, ❝ ... When we are considering TOST, it's simple: Power curves for any sample size intersect at the limits of the BE acceptance range with power exactly 0.05. Really? Have a look at this: power.TOST(CV=0.3, theta0=1.25, n=1002, design="2x3x3") It seems your statement "... any sample size ... with power exactly 0.05." is not entirely correct. Think about "The TOST procedure can be quite conservative, i.e. the achieved Type I error rate may be lower than the nominal 0.05 significance level"^{1} or "TOST is not the most powerful test". ^{1} Jones B, Kenward MG. Design and Analysis of Cross-over Trials. Boca Raton: Chapman & Hall/CRC; 3^{rd} ed. 2014. Chapter 13, Introduction. (Count up: post 993) — Regards, Detlew 
Helmut ★★★ Vienna, Austria, 2016-01-14 02:29 (2813 d 20:57 ago) @ d_labes Posting: # 15825 Views: 11,133 

Dear Detlew, ❝ Seems your statement "... any sample size ... with power exactly 0.05." is not thoroughly correct. As usual you are correct. It seems that I've forgotten my own “Don't panic!” slide. In the dark ages I used to write in my reports “α ≤0.05” and “≥90% CI” and earned questions. ❝ Think about "The TOST procedure can be quite conservative, i.e. the achieved Type I error rate may be lower than the nominal 0.05 significance level"^{1} Righty-right. For the 2×2×2 crossover, CV 30%, n 12–48: at GMR 0.8 and sample sizes <49, power starts to drop below 0.05; with n=12 it is only 0.0337. ❝ or "TOST is not the most powerful test". Leaving the question open: which is the most powerful test?
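Helmut's 0.0337 can be checked empirically. The Monte Carlo sketch below estimates the type I error of TOST with the true ratio sitting exactly on the lower acceptance limit; a 2×2 crossover and the hard-coded qt(0.95, 10) = 1.8125 are assumptions of this sketch, not part of the original post.

```python
import random
from math import log, sqrt

def sim_tie(cv, n, tcrit, nsim=50000, seed=7):
    """Empirical type I error of TOST at the lower AR limit, 2x2 crossover."""
    rng = random.Random(seed)
    sd = sqrt(log(cv**2 + 1.0))   # within-subject SD on the log scale
    se = sd * sqrt(2.0 / n)       # true SE of the estimated log-difference
    df = n - 2
    lo, hi = log(0.80), log(1.25)
    mu = lo                       # true ratio sits exactly on the AR limit
    passed = 0
    for _ in range(nsim):
        m = rng.gauss(mu, se)
        s2 = sd**2 * rng.gammavariate(df / 2.0, 2.0) / df
        hw = tcrit * sqrt(s2) * sqrt(2.0 / n)
        if lo <= m - hw and m + hw <= hi:
            passed += 1
    return passed / nsim

# qt(0.95, 10) = 1.8125 hard-coded; for CV 30%, n 12 the error stays below 0.05
tie = sim_tie(cv=0.30, n=12, tcrit=1.8125)
print(tie)
```

The simulated error lands around 0.03 rather than 0.05: with only 10 degrees of freedom the CI is so wide that a passing study needs a point estimate far from the limit, which is exactly the conservatism Jones and Kenward describe.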
d_labes ★★★ Berlin, Germany, 2016-01-15 21:08 (2812 d 02:18 ago) @ Helmut Posting: # 15834 Views: 10,692 

Dear Helmut, ❝ Leaving the question open: Which is the most powerful test? From the early days of (bio)equivalence testing there were some candidates:
Berger RL, Hsu JC. Bioequivalence Trials, Intersection-Union Tests and Equivalence Confidence Sets. Statist Sci. 1996; 11(4): 283–319.
Hsu JC et al. Confidence intervals associated with tests for bioequivalence. Biometrika. 1994; 81(1): 103–114.
Brown LD, Hwang JTG, Munk A. An Unbiased Test for the Bioequivalence Problem. Ann Stat. 1997; 25(6): 2345–2367.
and some more papers of this sort. But unfortunately none of these made it. BTW: I don't think I'm telling you news with these citations. (Count up: post 994) — Regards, Detlew 
zizou ★ Plzeň, Czech Republic, 2016-01-24 23:12 (2803 d 00:14 ago) @ Helmut Posting: # 15850 Views: 10,282 

❝ At GMR 0.8 and sample sizes <49 power starts to drop below 0.05. With n=12 it is only 0.0337. Very nice and interesting 3D figure. Nevertheless, in practice power is almost equal to 0.05 at GMR 0.8000: if you replace the axis "sample size" by "CV" in the figure and calculate the points (for each CV) for the minimum required sample size, the curve with power 0.05 (an almost straight line) will lie approximately at GMR 0.80. So alpha could be seen as less than 0.05 (to two decimals' accuracy) only in pilot studies or in really underpowered studies. (If I am not mistaken.) ❝ Seems that I've forgotten my own “Don't panic!” slide. In the dark ages I used to write in my reports “α ≤0.05” and “≥90% CI” and earned questions. To the slide: is the figure for the classical 2×2 design? It seems strange to me, especially the curve alpha=0.0500, which is rising, and I'm not getting why it's decreasing with n>56. Try recalculation at n=60, CV=35% (last minor tick) ... with GMR 0.80 the power will be 0.0500 (so this point should be under the curve 0.0500 and not above), or am I wrong somewhere? Maybe I am just not understanding the slide without the comments presented in Barcelona. 
Helmut ★★★ Vienna, Austria, 2016-01-25 15:15 (2802 d 08:11 ago) @ zizou Posting: # 15851 Views: 10,288 

Hi zizou, ❝ Nevertheless in practice power is almost equal to 0.05 at GMR 0.8000 (if you replace axis "sample size" by "CV" in the figure and the points (for each CV) will be calculated as for minimum required sample size, the curve with power 0.05 (almost straight line) will be approximately at GMR 0.80). Correct. Your wish is my command. For every CV/GMR combo the sample size was estimated for an expected GMR of 0.95 and a target power of 80%. ❝ So alpha could be seen as less than 0.05 (with two decimals accuracy) only in pilot studies or in really underpowered studies. (If I am not mistaken.) Correct. ❝ To the slide, the figure is for classical 2x2 design? Yep. ❝ It seems strange to me, especially curve alpha=0.0500, which is rising and I'm not getting why it's decreasing with n>56. Try recalculation at n=60, CV=35% (last minor tick) ... with GMR 0.80 the power will be 0.0500 (so this point should be under the curve 0.0500 and not above), or am I wrong somewhere? No, you aren't (see below). ❝ Maybe I am just not understanding the slide without the comments presented in Barcelona. The context was two-stage designs. Since I started to present TIE contours in 2012, attendees were worried about the 'large red area' in 'Type 2' TSDs (Method C, D, …) and TIEs close to 0.05 for low to moderate CVs. In the same presentation compare slide 16 ('Type 1', Method B) with slide 23 ('Type 2', Method C). In the former we get small TIEs caused by the fraction of studies which stop in the first stage (BE or not) being evaluated with α 0.0294, whereas in the latter the evaluation might be done with α 0.05. Since in TSDs the TIE is based on simulations and R's filled.contour() uses the raw data, the plots looked awful. Therefore, I opted for thin plate splines (package fields) to smooth the data. Unfortunately, when I decided to show the 2×2×2 fixed-sample design as well, I was lazy and used the same code (although no smoothing is required). Hence, the shape of the contours at the extremes is an artifact. 
My fault.