Jay

India,
2016-01-08 06:49

Posting: # 15802

 Post Hoc Power [Power / Sample Size]

Dear all,

As per the discussions in previous posts, post hoc power is not given much consideration.
But when the post hoc power of AUCt and AUCinf is observed around 90–100% while the post hoc power of Cmax is less than 20%, what would be the probable reason?

Regards,
Jay
ElMaestro

Denmark,
2016-01-08 15:07

@ Jay
Posting: # 15805

 Post Hoc Power

Hi Jay,

» As per the discussions in previous posts, post hoc power is not given much consideration.
» But when the post hoc power of AUCt and AUCinf is observed around 90–100% while the post hoc power of Cmax is less than 20%, what would be the probable reason?

First of all, I think you should mention how you calculate post-hoc power. Do you use GMR=0.95, GMR=1.0, or the observed GMR?
Anyway, low post-hoc power could be due to a deviating GMR (in case you use the observed GMR; this is likely a product failure), or it could be due to high variability, or both.
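These two causes can be put into numbers. The sketch below is an editorial addition in Python rather than the forum's R/PowerTOST, using the common noncentral-t approximation of TOST power for a 2×2×2 crossover; the function name `power_tost` is my own, not PowerTOST's API:

```python
from math import log, sqrt
from scipy.stats import nct, t

def power_tost(cv, theta0, n, alpha=0.05):
    """Approximate TOST power for a 2x2x2 crossover on the log scale,
    acceptance range 0.80-1.25 (noncentral-t approximation)."""
    sd = sqrt(log(cv ** 2 + 1.0))      # intra-subject SD from the CV
    se = sd * sqrt(2.0 / n)            # SE of the T-R difference of log-means
    df = n - 2
    tcrit = t.ppf(1.0 - alpha, df)
    ncp_lo = (log(theta0) - log(0.80)) / se
    ncp_hi = (log(theta0) - log(1.25)) / se
    # P(both one-sided tests reject); the approximation can go slightly
    # negative when power is tiny, hence the clamp at zero
    return max(nct.cdf(-tcrit, df, ncp_hi) - nct.cdf(tcrit, df, ncp_lo), 0.0)

print(power_tost(cv=0.20, theta0=0.95, n=24))  # well-behaved study: high power
print(power_tost(cv=0.20, theta0=1.15, n=24))  # deviating GMR: power drops
print(power_tost(cv=0.40, theta0=0.95, n=24))  # high variability: power drops
```

With both a deviating GMR and high variability the drop compounds, matching the "or both" above.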

If you have a situation with a deviating Cmax point estimate but an OK AUCt point estimate, then you are facing a challenge.

Having said all that, I must admit I am also in the camp of people who think post-hoc power is either rubbish or of very limited value.

I could be wrong, but...
Best regards,
ElMaestro
Helmut
Vienna, Austria,
2016-01-09 14:21

@ Jay
Posting: # 15807

 Post hoc Power

Hi Jay,

I agree with ElMaestro. Can you give us (both for AUC and Cmax):
  • The GMR and CV you used in study planning.
  • The target power and resulting sample size.
  • The eligible subjects in the study, GMR, and CV. If sequences were unbalanced, additionally subjects/sequence.

Cheers,
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
Jay

India,
2016-01-11 10:59

@ Helmut
Posting: # 15810

 Post hoc Power

Dear Helmut,

It is a pilot study with N=18. The GMR and intra-subject %CV values for Cmax, AUCt, and AUCinf are 127.8 & 21.01, 105.1 & 16.71, and 105.49 & 16.73, respectively. The observed post hoc power is 2.4%, 92.15%, and 91.21%, respectively.

Regards,
Jay
Helmut
Vienna, Austria,
2016-01-11 14:02

@ Jay
Posting: # 15812

 PE outside the acceptance range

Hi Jay,

» It is a pilot study with N=18. The GMR and intra-subject %CV values for Cmax, AUCt, and AUCinf are 127.8 & 21.01, 105.1 & 16.71, and 105.49 & 16.73, respectively. The observed post hoc power is 2.4%, 92.15%, and 91.21%, respectively.

I guess this was a 3×3 Latin Square or a 6×3 Williams’ design? Otherwise I don’t come close to your numbers. Post hoc power of a pilot study is even more useless than that of a pivotal study. However, with a PE of Cmax outside the acceptance range I would strongly suggest reformulating.

Cheers,
Helmut Schütz
Jay

India,
2016-01-13 08:24

@ Helmut
Posting: # 15819

 PE outside the acceptance range

Dear Helmut,

You are right, it was a partial replicate design.

The probable reasons for the low power of Cmax may be the following:
  • Low sample size
  • PE outside the acceptance range
But what may be the probable reason for such a huge difference between the post hoc power of Cmax and AUC?

Regards,
Jay
d_labes

Berlin, Germany,
2016-01-13 11:59
(edited by d_labes on 2016-01-13 12:47)

@ Jay
Posting: # 15821

 Power for PE outside the acceptance range

Dear Jay,

» ... But what may be the probable reason for such a huge difference between the post hoc power of Cmax and AUC?

Simple enough: the PE for AUC is not outside the acceptance range.

For Cmax, with the PE outside the acceptance range, you could even have a very large number of subjects in your study without raising your post-hoc power:
library(PowerTOST)
power.TOST(CV=0.2101, theta0=1.278, n=18, design="2x3x3")
[1] 0.02239529
power.TOST(CV=0.2101, theta0=1.278, n=102, design="2x3x3")
[1] 0.005853529
power.TOST(CV=0.2101, theta0=1.278, n=1002, design="2x3x3")
[1] 5.443279e-06

On the contrary, power even decreases.

The post-hoc power is useless because it is a 1:1 function of the BE test:
  • BE for AUCt, AUCinf proven
  • BE for Cmax not proven
Full stop.

(Count up: post 992)

Edit: power.TOST() calls changed to partial replicate design after Helmut's post below

Regards,

Detlew
Helmut
Vienna, Austria,
2016-01-13 12:38

@ Jay
Posting: # 15822

 PE at border of the AR: power ≡ 0.05

Hi Jay,

» You are right, it was a partial replicate design.

No, I was wrong: 3×3 Latin Square ∨ 3×6×3 Williams’ design ≠ partial replicate. In the former two designs three treatments are compared, and in the latter only two (where R is administered twice). In the future please give all necessary information. We don’t have a crystal ball.

See Detlew’s comments. When we are considering TOST, it’s simple: Power curves for any sample size intersect at the limits of the BE acceptance range with power exactly 0.05. That’s the rule of the game. If the PE is outside the AR, power will be <0.05. No software needed at all. Thus, given your PE: reformulate.
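This boundary behaviour is easy to check numerically. Below is a hedged editorial sketch in Python (my own `power_tost`, a noncentral-t approximation for a 2×2×2 crossover, not the forum's R/PowerTOST and not Detlew's 2x3x3 design, so the absolute values differ from his): at theta0 = 1.25 power approaches 0.05 from below as n grows, and with Jay's Cmax figures (PE outside the limit) it stays below 0.05 and keeps falling with n.

```python
from math import log, sqrt
from scipy.stats import nct, t

def power_tost(cv, theta0, n, alpha=0.05):
    """Approximate TOST power for a 2x2x2 crossover on the log scale,
    acceptance range 0.80-1.25 (noncentral-t approximation)."""
    sd = sqrt(log(cv ** 2 + 1.0))      # intra-subject SD from the CV
    se = sd * sqrt(2.0 / n)            # SE of the T-R difference of log-means
    df = n - 2
    tcrit = t.ppf(1.0 - alpha, df)
    ncp_lo = (log(theta0) - log(0.80)) / se
    ncp_hi = (log(theta0) - log(1.25)) / se
    return max(nct.cdf(-tcrit, df, ncp_hi) - nct.cdf(tcrit, df, ncp_lo), 0.0)

# PE exactly at the upper limit: power stays at or below alpha for every n
for n in (12, 24, 1000):
    print(n, power_tost(cv=0.30, theta0=1.25, n=n))

# PE outside the limit (Jay's Cmax figures): power < alpha and falls with n
for n in (18, 102, 1002):
    print(n, power_tost(cv=0.2101, theta0=1.278, n=n))
```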

The jury is out when it comes to RSABE. ;-)

Cheers,
Helmut Schütz
d_labes

Berlin, Germany,
2016-01-13 13:21

@ Helmut
Posting: # 15824

 PE at border of the AR: power ≡ 0.05 ?

Dear Helmut,

» ... When we are considering TOST, it’s simple: Power curves for any sample size intersect at the limits of the BE acceptance range with power exactly 0.05.

Really? Have a look at this:
power.TOST(CV=0.3, theta0=1.25, n=1002, design="2x3x3")
[1] 0.05
power.TOST(CV=0.3, theta0=1.25, n=102, design="2x3x3")
[1] 0.05
power.TOST(CV=0.3, theta0=1.25, n=30, design="2x3x3")
[1] 0.0499998
power.TOST(CV=0.3, theta0=1.25, n=18, design="2x3x3")
[1] 0.0497759
power.TOST(CV=0.3, theta0=1.25, n=15, design="2x3x3")
[1] 0.04881418
power.TOST(CV=0.3, theta0=1.25, n=12, design="2x3x3")
[1] 0.04446453

Seems your statement "... any sample size ... with power exactly 0.05." is not entirely correct :cool:.
Think about "The TOST procedure can be quite conservative, i.e. the achieved Type I error rate may be lower than the nominal 0.05 significance level"1 or "TOST is not the most powerful test".

(Count up: post 993)

1Jones B, Kenward MG.
Design and analysis of cross-over trials.
Boca Raton: Chapman & Hall/CRC; 3rd ed 2014
Chapter 13, Introduction


Regards,

Detlew
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2016-01-14 01:29

@ d_labes
Posting: # 15825

 PE at border of the AR: power ≤0.05!

Dear Detlew,

» Seems your statement "... any sample size ... with power exactly 0.05." is not entirely correct :cool:.

As usual you are correct. :ok:

Seems that I’ve forgotten my own “Don’t panic!” slide. In the dark ages I used to write in my reports “α ≤0.05” and “≥90% CI” and earned questions.

» Think about "The TOST procedure can be quite conservative, i.e. the achieved Type I error rate may be lower than the nominal 0.05 significance level"1

Rightyright. 2×2×2 crossover, CV 30%, n 12–48:

[image]

At GMR 0.8 and sample sizes <49 power starts to drop below 0.05. With n=12 it is only 0.0337.

» or "TOST is not the most powerful test".

Leaving the question open: Which is the most powerful test?

Cheers,
Helmut Schütz
d_labes

Berlin, Germany,
2016-01-15 20:08

@ Helmut
Posting: # 15834

 UMP – uniformly most powerful test

Dear Helmut,

» Leaving the question open: Which is the most powerful test?

From the early days of (bio)equivalence testing there were some candidates:

Berger RL, Hsu JC.
Bioequivalence Trials, Intersection–Union Tests and Equivalence Confidence Sets.
Statist. Sci. 1996; 11(4): 283–319.

Hsu JC et al.
Confidence intervals associated with tests for bioequivalence.
Biometrika 1994; 81(1): 103–114.

Brown LD, Hwang JTG, Munk A.
An Unbiased Test for the Bioequivalence Problem.
Ann. Stat. 1997; 25(6): 2345–2367.

and some more papers of this sort.

But unfortunately none of these made it.

BTW: I don’t think these citations are news to you :cool:.


Count up: post 994

Regards,

Detlew
zizou

Plzeň, Czech Republic,
2016-01-24 22:12

@ Helmut
Posting: # 15850

 PE at border of the AR: power ≤0.05!

» At GMR 0.8 and sample sizes <49 power starts to drop below 0.05. With n=12 it is only 0.0337.

Very nice and interesting 3D figure.
Nevertheless, in practice power is almost equal to 0.05 at GMR 0.8000. If you replace the axis "sample size" by "CV" in the figure and calculate the points (for each CV) at the minimum required sample size, the curve with power 0.05 (an almost straight line) will lie approximately at GMR 0.80.
So alpha could be seen as less than 0.05 (to two-decimal accuracy) only in pilot studies or in really underpowered studies. (If I am not mistaken.)
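This point can be illustrated with a small editorial sketch in Python (my own helpers `min_n` and `power_tost`, using a noncentral-t approximation for the 2×2×2 crossover instead of PowerTOST's `sampleN.TOST`): estimate the smallest sample size giving 80% power at an expected GMR of 0.95, then evaluate power at the acceptance limit 1.25; it comes out as 0.05 to well beyond two decimals for common CVs.

```python
from math import log, sqrt
from scipy.stats import nct, t

def power_tost(cv, theta0, n, alpha=0.05):
    """Approximate TOST power for a 2x2x2 crossover on the log scale,
    acceptance range 0.80-1.25 (noncentral-t approximation)."""
    sd = sqrt(log(cv ** 2 + 1.0))
    se = sd * sqrt(2.0 / n)
    df = n - 2
    tcrit = t.ppf(1.0 - alpha, df)
    ncp_lo = (log(theta0) - log(0.80)) / se
    ncp_hi = (log(theta0) - log(1.25)) / se
    return max(nct.cdf(-tcrit, df, ncp_hi) - nct.cdf(tcrit, df, ncp_lo), 0.0)

def min_n(cv, theta0=0.95, target=0.80):
    """Smallest even n (balanced sequences) reaching the target power."""
    n = 4
    while power_tost(cv, theta0, n) < target:
        n += 2
    return n

# Adequately powered studies: power at the acceptance limit is ~0.05
for cv in (0.10, 0.20, 0.30):
    n = min_n(cv)
    print(cv, n, round(power_tost(cv, 1.25, n), 4))
```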

» Seems that I’ve forgotten my own “Don’t panic!” slide. In the dark ages I used to write in my reports “α ≤0.05” and “≥90% CI” and earned questions.

To the slide: is the figure for the classical 2x2 design? It seems strange to me, especially the curve alpha=0.0500, which is rising, and I don't get why it's decreasing for n>56. Try recalculation at n=60, CV=35% (last minor tick): with GMR 0.80 the power will be 0.0500 (so this point should be under the 0.0500 curve and not above it), or am I wrong somewhere?
Maybe I am just not understanding the slide without the comments presented in Barcelona.
Helmut
Vienna, Austria,
2016-01-25 14:15

@ zizou
Posting: # 15851

 Smoothing artifact

Hi zizou,

» Nevertheless, in practice power is almost equal to 0.05 at GMR 0.8000. If you replace the axis "sample size" by "CV" in the figure and calculate the points (for each CV) at the minimum required sample size, the curve with power 0.05 (an almost straight line) will lie approximately at GMR 0.80.

Correct. Your wish is my command. For every CV/GMR combination the sample size was estimated for an expected GMR of 0.95 and a target power of 80%:
[image]

» So alpha could be seen as less than 0.05 (to two-decimal accuracy) only in pilot studies or in really underpowered studies. (If I am not mistaken.)

Correct.

» To the slide: is the figure for the classical 2x2 design?

Yep.

» It seems strange to me, especially the curve alpha=0.0500, which is rising, and I don't get why it's decreasing for n>56. Try recalculation at n=60, CV=35% (last minor tick): with GMR 0.80 the power will be 0.0500 (so this point should be under the 0.0500 curve and not above it), or am I wrong somewhere?

No, you aren’t (see below).

» Maybe I am just not understanding the slide without the comments presented in Barcelona.

The context was two-stage designs. Since I started presenting TIE contours in 2012, attendees were worried about the ‘large red area’ in ‘Type 2’ TSDs (Methods C, D, …) and TIEs close to 0.05 for low to moderate CVs. In the same presentation compare slide 16 (‘Type 1’ – Method B) with slide 23 (‘Type 2’ – Method C). In the former we get small TIEs because the fraction of studies which stop in the first stage (BE or not) is evaluated with α 0.0294, whereas in the latter the evaluation might be done with α 0.05.
Since in TSDs the TIE is based on simulations and R’s filled.contour() uses the raw data, the plots looked awful. Therefore, I opted for thin plate splines (package fields) to smooth the data. Unfortunately, when I decided to show the 2×2×2 fixed-sample design as well, I was lazy and used the same code (although no smoothing is required). Hence, the shape of the contours at the extremes is an artifact. My fault.

Cheers,
Helmut Schütz
Bioequivalence and Bioavailability Forum