VSL
Junior

India,
2015-07-01 19:37

Posting: # 15008
Views: 6,559
 

 post hoc power [Power / Sample Size]

Dear,
It has been seen that in protocols a larger sample size is selected just to pass the study, without any scientific justification, or a high T/R ratio is assumed in the sample-size calculation. Therefore, in many cases the post hoc power is more than 100% or nearly 100%. It is like forced bioequivalence.
Do regulators seek justification for this?
Thanks and Regards...
VSL


Edit: Category changed. [Helmut]
ElMaestro
Hero

Denmark,
2015-07-01 20:08

@ VSL
Posting: # 15009
Views: 5,226
 

 post hoc power

Hi VSL,

» It has been seen that in protocols a larger sample size is selected just to pass the study, without any scientific justification, or a high T/R ratio is assumed in the sample-size calculation. Therefore, in many cases the post hoc power is more than 100% or nearly 100%. It is like forced bioequivalence.
» Do regulators seek justification for this?

Power > 100% is a stretch :-D But I know what you mean – power is higher than it would perhaps need to be if you are content with e.g. the usual 80% or 90%. The study will likely pass if you have good control over the T/R. Power is usually the applicant's problem. It is difficult to argue on the basis of ethics and science that 85% power is OK while 95% power is not, or the other way around. And so forth.
You'll see that some companies do not feel extremely confident that their true T/R is e.g. 0.95; it is the true T/R, together with the variability, that determines power, yet the true T/R is never known. Thus, there is a point in sometimes having a seemingly high sample size when you are not totally in the driver's seat regarding the T/R. And when the trial passes with flying colours and a T/R close to 1.0, it looks like the trial was over-powered, and in hindsight you're thinking the sample size was way too high. But you didn't know that in advance, and therefore such a post hoc power consideration can be unjustified.
Remember that even after the trial the true T/R is not known.
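How strongly power hinges on the true (unknowable) T/R can be illustrated numerically. The following Python sketch is not from the thread; it is only a noncentral-t approximation of TOST power for a balanced 2×2 crossover on the log scale (R's PowerTOST computes this exactly via Owen's Q, but the approximation agrees to a few decimals in typical cases):

```python
import math
from scipy import stats

def tost_power(cv, n1, n2, gmr, theta1=0.80, theta2=1.25, alpha=0.05):
    """Approximate power of the TOST procedure for a 2x2 crossover,
    log-transformed data, via the noncentral-t distribution."""
    sigma_w = math.sqrt(math.log(1.0 + cv ** 2))        # within-subject SD (log scale)
    se = sigma_w * math.sqrt(0.5 * (1 / n1 + 1 / n2))   # SE of the treatment difference
    df = n1 + n2 - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    d1 = (math.log(gmr) - math.log(theta1)) / se        # noncentrality vs lower margin
    d2 = (math.log(gmr) - math.log(theta2)) / se        # noncentrality vs upper margin
    return max(stats.nct.sf(tcrit, df, d1) + stats.nct.cdf(-tcrit, df, d2) - 1, 0.0)

# 40 subjects (balanced), CV = 30%: power drops fast as the true T/R moves away from 1
for gmr in (1.00, 0.95, 0.90):
    print(f"T/R = {gmr:.2f}: power = {tost_power(0.30, 20, 20, gmr):.3f}")
```

With CV = 30% and n = 40, an assumed T/R of 0.95 gives roughly 82% power, but if the true T/R is 0.90 power falls to roughly 55% – which is exactly why a sponsor unsure of the T/R may choose a seemingly high sample size.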

if (3) 4                 # a non-zero number is treated as TRUE, so this returns 4

x = c("Foo", "Bar")
b = data.frame(x)        # before R 4.0.0 strings became factors by default
typeof(b[,1])            ## aha, integer? (the factor's underlying codes)
b[,1] + 1                ## then let me add 1 – NA with a warning: '+' not meaningful for factors



Best regards,
ElMaestro

"(...) targeted cancer therapies will benefit fewer than 2 percent of the cancer patients they’re aimed at. That reality is often lost on consumers, who are being fed a steady diet of winning anecdotes about miracle cures." New York Times (ed.), June 9, 2018.
Helmut
Hero
Vienna, Austria,
2015-07-02 01:18

@ VSL
Posting: # 15010
Views: 5,441
 

 I’m sick of “100.0% power”

Hi VSL,

first of all: I agree with what ElMaestro wrote…

» […] in many cases, post hoc power is more than 100%

Implying a negative (!) producer’s risk, since β = 1 – power? :-D

» or nearly 100%. It is like forced bio-equivalence.

“Forced BE” is an unofficial term describing large sample sizes where an extremely “bad” T/R-ratio is assumed beforehand and an attempt is made to “squeeze” the confidence interval into the acceptance range. This can be an issue for the ethics committee, which has to deal with the risk to the subjects in the study. If I were a member of an EC and knew of a couple of studies with 24–36 subjects (same drug/reference), I would for sure raise my hand against a protocol suggesting 120.
Post hoc power is completely irrelevant. Stop calculating it!
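The mechanism behind “forced BE” – a pessimistic T/R assumption inflating the sample size – is easy to demonstrate. The Python sketch below is mine, not from the post; it is a noncentral-t approximation of what R's PowerTOST function sampleN.TOST() computes exactly:

```python
import math
from scipy import stats

def tost_power(cv, n, gmr, theta1=0.80, theta2=1.25, alpha=0.05):
    """Approximate TOST power, balanced 2x2 crossover, log scale."""
    sigma_w = math.sqrt(math.log(1.0 + cv ** 2))  # within-subject SD (log scale)
    se = sigma_w * math.sqrt(2.0 / n)             # SE for a balanced design, total n
    df = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    d1 = (math.log(gmr) - math.log(theta1)) / se
    d2 = (math.log(gmr) - math.log(theta2)) / se
    return max(stats.nct.sf(tcrit, df, d1) + stats.nct.cdf(-tcrit, df, d2) - 1, 0.0)

def sample_size(cv, gmr, target=0.80):
    """Smallest even total n reaching the target power."""
    n = 4
    while tost_power(cv, n, gmr) < target and n < 1000:
        n += 2
    return n

# CV = 30%: assuming a worse T/R roughly doubles the required sample size
print(sample_size(0.30, 0.95))  # moderate assumption
print(sample_size(0.30, 0.90))  # pessimistic assumption
```

At CV = 30% and 80% target power, moving the assumed T/R from 0.95 to 0.90 roughly doubles the required n – and if the true T/R then turns out to be close to 1, the study looks grossly “over-powered” in hindsight.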

On another note: are you sure the ~100% power is correct? I stopped counting reports on my desk where the number was just wrong. Completely wrong. Sorry to say, but some CROs are notorious for that. Example (one of the most renowned CROs, study report hot off the press, June 2015): 2×2 crossover, one PK metric failed: 90% CI 112.44–127.73%, n=71 (36/35), CVintra 23.08%, “power” reported as 100.0%. Software SAS, code not given, but obviously perfectly wrong – or as ElMaestro once called it:

Garbage in, garbage out.
It’s very very simple.

If the power had really been 100% (producer’s risk zero!), why the hell did the study fail? The correct (again and again and again: irrelevant!) post hoc power for the TOST is just 29%. I assume (!) that the CRO estimated “the power of the t-test to show a difference of 20%”. This was coined the “80/20 rule” by the FDA in the 1980s:
  • Conclude BE if the Test is not significantly different from the Reference and
  • there is ≥80% power to detect a 20% difference.
… which rightly was thrown into the statistical garbage bin decades ago (IIRC by the FDA in 1992).
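For the curious, both numbers can be reproduced approximately. The Python sketch below is my own, not from the post; it uses a noncentral-t approximation and takes the point estimate as the geometric mean of the CI limits. It recovers the correct ~29% TOST power for the failed example, and shows that the obsolete “power to detect a 20% difference” is indeed ≈100% for the same data – consistent with the assumption about what the CRO computed:

```python
import math
from scipy import stats

# Data from the failed example: 90% CI 112.44-127.73%, n = 36/35, CVintra = 23.08%
n1, n2, cv = 36, 35, 0.2308
pe = math.sqrt(1.1244 * 1.2773)                   # point estimate of T/R
sigma_w = math.sqrt(math.log(1.0 + cv ** 2))      # within-subject SD (log scale)
se = sigma_w * math.sqrt(0.5 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2
tcrit = stats.t.ppf(0.95, df)

# Correct post hoc power of the TOST (irrelevant, but at least right)
d1 = (math.log(pe) - math.log(0.80)) / se
d2 = (math.log(pe) - math.log(1.25)) / se
tost = stats.nct.sf(tcrit, df, d1) + stats.nct.cdf(-tcrit, df, d2) - 1
print(f"TOST power = {tost:.2f}")                 # ≈ 0.29

# The obsolete "80/20" calculation: power of a two-sided t-test
# to detect a 20% difference (delta taken as log 1.25 on the log scale)
ncp = math.log(1.25) / se
t975 = stats.t.ppf(0.975, df)
p8020 = stats.nct.sf(t975, df, ncp) + stats.nct.cdf(-t975, df, ncp)
print(f"'80/20' power = {p8020:.4f}")             # ≈ 1 (the reported "100.0%")
```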

» Does regulator seek justification for this?

Not that I recall. Their job is to assess the patient’s risk. Just as you cannot be punished for being lucky (low “power”), the same is true for wasting money (very high “power”). However, in almost all cases I have seen, it’s a plain miscalculation.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼


The BIOEQUIVALENCE / BIOAVAILABILITY FORUM is hosted by
BEBAC Ing. Helmut Schütz