## Kudos! [Power / Sample Size]

Hi zizou,

what a fantastic post deserving a barnstar! First comments (others later, time allowing).

» FDA […] states in section II. BACKGROUND:

» C. Bioequivalence
» Bioequivalence is defined in § 320.1 as:
» the absence of a significant difference in the rate and extent to which the active ingredient or active moiety in pharmaceutical equivalents or pharmaceutical alternatives becomes available at the site of drug action when administered at the same molar dose under similar conditions in an appropriately designed study.

» When value 1 is outside of 90% CI of GMR then there is a significant difference (maybe except for some border cases). - End of fun, of course this statistically significant difference is not relevant for FDA according to other statements requiring the confidence interval in the BE limits.

The guidance quotes the definition given in 21 CFR 320.1. Hence, it refers to a law. Difficult to change (if you are not Mr Trump). I guess it goes back to the 1980s, when no statistics were used at all (the crazy 75/75-rule). Therefore, I also guess that significant is not meant in the statistical sense. Merriam-Webster tells me:
1. having meaning; especially: suggestive <a significant glance>
2. a. having or likely to have influence or effect: important <a significant piece of legislation>; also: of a noticeably or measurably large amount <a significant number of layoffs> <producing significant profits>
   b. probably caused by something other than mere chance <statistically significant correlation between vitamin deficiency and disease>
Does the FDA’s definition say “statistically significant”? Nope. The definition means 2.a. and not 2.b.

» Nevertheless the EMA's statement "The number of subjects to be included in the study should be based on an appropriate sample size calculation." should be sufficient (to keep "Forced BE" away), if controlled correctly.

This “appropriate” drives me nuts. Borrowed from ICH-E9. All previous versions of the BE-GL (or NfG) were better in this respect, IMHO.

» (Many of regulatory authorities think that sample size is issue of ethical committees. Many of ethical committees are not able to assess the sample size estimation.)

Yes to both.

» Should be the requirement of the inclusion of 100% in 90% CI included in the sample size estimation? (I would bet it has never been used.) ... It's quite problematic task.
» (So this post is only for interest to know. - I can not imagine the actual use after I made some power analysis.)

You are not the first. Another hero of Danish ancestry (ElMaestro) came up with some nice compiled C code back in 2009. Download EFG and play with it.
T/R 0.95, CV 30%, n 40; 1 mio sim’s give:
- EFG 2.01: brute force power for BE 81.6035, power in Denmark 76.5942
- PowerTOST: power.TOST 81.5845, power.TOST.sim 81.5615
- your code for the ‘PE-rule’: 76.5145
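For completeness, the exact and the simulated conventional power come straight from PowerTOST (a minimal sketch, assuming the package’s default 2×2×2 crossover design):

```r
library(PowerTOST)
# exact power of the conventional TOST procedure
power.TOST(CV = 0.3, theta0 = 0.95, n = 40)
# ~0.8158 (the 81.5845 above)
# simulation-based counterpart; 1 mio sim's as above
power.TOST.sim(CV = 0.3, theta0 = 0.95, n = 40, nsims = 1e6)
```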
Quick & extremely dirty:

```r
library(PowerTOST)
100 * power.TOST(alpha = 0.5, CV = 0.3, theta0 = 0.95, n = 40,
                 theta1 = 0.8, theta2 = 1) - 1.25
# [1] 76.57964
```
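For those without a C compiler, the brute force can be sketched in a few lines of base R. This is my own quick re-implementation (the function name `power.DK.sim` is made up, and the rule is coded as I understand it: BE shown by the 90% CI within the limits *and* the CI additionally includes 1), not ElMaestro’s code:

```r
# Simulate 2x2x2 crossover studies on the log-scale and count how often
# (a) the 90% CI lies within the BE limits (conventional power) and
# (b) additionally the CI includes 1 (the 'PE-rule' / power in Denmark).
power.DK.sim <- function(CV = 0.3, theta0 = 0.95, n = 40,
                         nsims = 1e6, alpha = 0.05,
                         lBEL = 0.8, uBEL = 1.25, setseed = TRUE) {
  if (setseed) set.seed(123456)
  s2w <- log(CV^2 + 1)                  # within-subject log-variance
  df  <- n - 2                          # residual df of the 2x2x2 crossover
  sem <- sqrt(2 * s2w / n)              # standard error of the log-PE
  pe  <- rnorm(nsims, mean = log(theta0), sd = sem)  # simulated log-PEs
  mse <- s2w * rchisq(nsims, df) / df   # simulated residual variances
  hw  <- qt(1 - alpha, df) * sqrt(2 * mse / n)       # CI half-width
  lo  <- pe - hw
  hi  <- pe + hw
  BE  <- lo >= log(lBEL) & hi <= log(uBEL)
  DK  <- BE & lo <= 0 & hi >= 0         # CI must also include 1
  c(power.BE = mean(BE), power.DK = mean(DK))
}
power.DK.sim()
# with 1 mio sim's: power.BE ~0.816, power.DK ~0.766 (cf. the numbers above)
```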

I had some problems in Denmark with drugs of extremely low variability (CV < 10%). Studies were ‘overpowered’ even with the minimum acceptable sample size of n = 12. I stated in the protocol that I expected a significant difference. In my case it was not about BE (line extensions, formulation changes) and patients are dose-titrated (high between-subject variability). No questions from the Vikings ever since.
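A hypothetical illustration with PowerTOST (CV 8% and T/R 0.95 are picked by me only as an example of ‘extremely low’ variability, not from an actual study):

```r
library(PowerTOST)
# even with the minimum acceptable sample size the study is 'overpowered';
# a significant difference (1 outside the 90% CI) is then quite likely
power.TOST(CV = 0.08, theta0 = 0.95, n = 12)  # > 0.99
```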

»   power_sim[i,j]=power.sim2x2(CV=CV[ i ]/100,GMR=0.95,n=n[j],nsims=1E4, alpha=0.05, lBEL=0.8, uBEL=1.25)

You discovered that spaces are necessary in CV[ i ].
Now you learned the hard way why I made it my habit to start loops with the index j.
The letter i in square brackets is one of the BBCodes used in the forum’s scripts (see also here).

Dif-tor heh smusma 🖖
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes