VSL

India,
2015-07-01 21:37

Posting: # 15008
 

 post hoc power [Power / Sample Size]

Dear all,
It has been seen that in protocols a larger sample size is selected just to pass the study, without any scientific justification, or a high T/R-ratio is assumed in the sample-size calculation. Therefore, in many cases, post hoc power is more than 100% or nearly 100%. It is like forced bio-equivalence.
Does the regulator seek justification for this?
Thanks and Regards...
VSL


Edit: Category changed; see also post #1. [Helmut]
ElMaestro

Denmark,
2015-07-01 22:08

@ VSL
Posting: # 15009
 

 post hoc power

Hi VSL,

❝ It has been seen that in protocols a larger sample size is selected just to pass the study, without any scientific justification, or a high T/R-ratio is assumed in the sample-size calculation. Therefore, in many cases, post hoc power is more than 100% or nearly 100%. It is like forced bio-equivalence.

❝ Does the regulator seek justification for this?


Power > 100% is a stretch :-D But I know what you mean – power is higher than it perhaps needs to be if you are content with e.g. the usual 80% or 90%. The study will likely pass if you have good control over the T/R. Power is usually the applicant's problem. It is difficult to argue on the basis of ethics and science that 85% power is OK while 95% power is not, or the other way around. And so forth.
You'll see that some companies do not feel extremely confident that their true T/R is e.g. 0.95; it is the true T/R that determines power together with variability, yet the true T/R is never known. Thus, there is a point in sometimes having a seemingly high sample size when you are not totally in the driver's seat regarding the T/R. And when the trial passes with flying colours and a T/R close to 1.0, it looks like the trial was over-powered, and in hindsight you're thinking the sample size was way too high. But you didn't know that in advance, and therefore such a post hoc power consideration can be unjustified.
Remember that even after the trial the true T/R is not known.
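The point above can be sketched numerically. Below is a minimal Python illustration with hypothetical numbers (n = 40, CVintra = 25%): at a fixed sample size, TOST power swings strongly with the true T/R, which is never known. It uses the common noncentral-t approximation for a 2×2 crossover with BE limits 0.80–1.25; exact methods (e.g. Owen's Q, as in PowerTOST) differ only marginally.

```python
# Hypothetical illustration: TOST power vs. the (unknown) true T/R-ratio.
# Noncentral-t approximation, 2x2 crossover, BE limits 0.80-1.25.
from math import log, sqrt
from scipy import stats

def power_tost(gmr, cv, n, alpha=0.05):
    """Approximate power of the TOST procedure (balanced 2x2 crossover)."""
    sw = sqrt(log(1 + cv**2))           # within-subject SD on the log scale
    se = sw * sqrt(2 / n)               # SE of the log(T/R) estimate
    df = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    ncp1 = (log(gmr) - log(0.80)) / se  # noncentrality vs. lower BE limit
    ncp2 = (log(gmr) - log(1.25)) / se  # noncentrality vs. upper BE limit
    return max(0.0, stats.nct.cdf(-tcrit, df, ncp2) - stats.nct.cdf(tcrit, df, ncp1))

for gmr in (1.00, 0.95, 0.90):
    print(f"true T/R {gmr:.2f}: power {power_tost(gmr, cv=0.25, n=40):.1%}")
```

With these assumed numbers, power drops from well above 90% at a true T/R of 1.00 to somewhere below 80% at 0.90 – which is exactly why a sponsor unsure of the true T/R may plan a seemingly "too large" study.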

Pass or fail!
ElMaestro
Helmut
Vienna, Austria,
2015-07-02 03:18

@ VSL
Posting: # 15010
 

 I’m sick of “100.0% power”

Hi VSL,

first of all: I agree with what ElMaestro wrote…

❝ […] in many cases, post hoc power is more than 100%


Implying a negative (!) producer’s risk, since \(\small{\beta=1-power}\)? :-D

❝ or nearly 100%. It is like forced bio-equivalence.


“Forced BE” is an unofficial term describing large sample sizes where an extremely “bad” T/R-ratio is assumed beforehand and an attempt is made to “squeeze” the confidence interval to within the acceptance range. It can be an issue for the ethics committee (EC), which has to deal with the risk for the subjects in the study. If I were a member of an EC and knew of a couple of studies with 24–36 subjects (same drug/reference), I would for sure raise my hand against a protocol suggesting 120.
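To see how a pessimistic assumed T/R-ratio inflates the planned sample size, here is a rough sketch with hypothetical numbers (CVintra = 25%, target power 80%). The helper iterates the sample size in steps of two (balanced sequences) using the noncentral-t approximation; the exact method in PowerTOST may differ by a step of 2.

```python
# Hypothetical illustration of "forced BE": required n for target power 80%
# as a function of the assumed T/R-ratio (2x2 crossover, CV = 25%).
from math import log, sqrt
from scipy import stats

def power_tost(gmr, cv, n, alpha=0.05):
    """Approximate TOST power (balanced 2x2 crossover, BE limits 0.80-1.25)."""
    sw = sqrt(log(1 + cv**2))
    se = sw * sqrt(2 / n)
    df = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    ncp1 = (log(gmr) - log(0.80)) / se
    ncp2 = (log(gmr) - log(1.25)) / se
    return max(0.0, stats.nct.cdf(-tcrit, df, ncp2) - stats.nct.cdf(tcrit, df, ncp1))

def sample_size(gmr, cv, target=0.80):
    n = 12                               # common minimum sample size
    while power_tost(gmr, cv, n) < target:
        n += 2                           # keep the sequences balanced
    return n

for gmr in (0.95, 0.90):
    print(f"assumed T/R {gmr:.2f}: n = {sample_size(gmr, cv=0.25)}")
```

Moving the assumed T/R from 0.95 to 0.90 roughly doubles the required sample size at this CV; assuming something still worse drives n into the three digits the EC anecdote describes.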
Post hoc power is completely irrelevant. Stop calculating it!

On another note: are you sure the ~100% power is correct? I stopped counting the reports on my desk where the number was just wrong. Completely wrong. Sorry to say, but some CROs are notorious for that. Example (one of the most renowned CROs, study report hot off the press, June 2015): 2×2 crossover, one PK metric failed: 90% CI 112.44–127.73%, n=71 (36/35), CVintra 23.08%, “power” reported as 100.0%. Software SAS, code not given but obviously perfectly wrong – or as ElMaestro once called it:

Garbage in, garbage out.
It’s very very simple.

If power really had been 100% (producer’s risk zero!), why the heck did the study fail? The correct (again and again and again: irrelevant!) post hoc power for the TOST procedure is just 29%. I assume (!) that the CRO estimated “the power of the t-test to show a difference of 20%”. This was coined the “80/20 rule” by the FDA back in 1972 (‼)1 …
  • Conclude BE if the Test is not significantly different from Reference and
  • there is ≥80% power to detect a 20% difference.
… which was shown2 to be crap and rightly thrown into the statistical garbage bin decades ago.3
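As a sanity check, the ~29% can be recomputed from the figures reported above (90% CI 112.44–127.73%, n = 71 with sequences 36/35, CVintra 23.08%). This sketch uses the noncentral-t approximation for the TOST procedure and takes the point estimate as the geometric mean of the CI bounds; it lands near the quoted 29%, nowhere near the report's "100.0%".

```python
# Recompute the post hoc TOST power for the failed study from its reported
# summary statistics (noncentral-t approximation, 2x2 crossover).
from math import log, sqrt
from scipy import stats

alpha = 0.05
n1, n2 = 36, 35                           # subjects per sequence
df = n1 + n2 - 2                          # 69
cv = 0.2308                               # reported CV_intra
sw = sqrt(log(1 + cv**2))                 # within-subject SD on the log scale
se = sw * sqrt((1 / n1 + 1 / n2) / 2)     # SE of log(T/R) in a 2x2 crossover

pe = sqrt(1.1244 * 1.2773)                # point estimate = geometric mean of CI bounds
tcrit = stats.t.ppf(1 - alpha, df)
ncp1 = (log(pe) - log(0.80)) / se         # noncentrality vs. lower BE limit
ncp2 = (log(pe) - log(1.25)) / se         # noncentrality vs. upper BE limit
power = max(0.0, stats.nct.cdf(-tcrit, df, ncp2) - stats.nct.cdf(tcrit, df, ncp1))
print(f"post hoc TOST power: {power:.1%}")
```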

❝ Does the regulator seek justification for this?


Not that I recall. Their job is to assess the patient’s risk. If you are asked for post hoc power, tell them to study their own guidelines. None (!) requires it. As you cannot be punished for being lucky (low “power”) the same is true for wasting money (very high “power”). However, in almost all cases I have seen it’s a plain miscalculation.



  1. Skelly JP. A History of Biopharmaceutics in the Food and Drug Administration 1968–1993. AAPS J. 2010;12(1):44–50. doi:10.1208/s12248-009-9154-8. Free full text.
  2. Schuirmann DJ. A comparison of the Two One-Sided Tests Procedure and the Power Approach for Assessing the Equivalence of Average Bioavailability. J Pharmacokin Biopharm. 1987;15(6):657–80. doi:10.1007/BF01068419.
  3. FDA/CDER. Guidance for Industry. Statistical Procedures for Bioequivalence Studies Using a Standard Two-Treatment Crossover Design. Jul 1992. Internet Archive.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮