Denmark Curiosa (1 in 90% CI in 0.8-1.25) [Power / Sample Size]

posted by zizou – Plzeň, Czech Republic, 2017-02-08 21:03  – Posting: # 17040
Views: 19,468

Dear d_labes,
thanks for the comments (especially the second one).

» One minor comment to your code:
» The number of sims you used (1e4 as far as I could see) is a little bit low. Especially if you come into regions with power <=0.05.

You are right. I wrote the code quickly for testing and unfortunately forgot to increase the number of simulations before producing the figures. Nevertheless, it was sufficient for the purpose of the post (which was a side topic anyway). Additionally, all values <=0.05 were black in the levelplots, and I was much more interested in power around 80% or close to 90%.

» Another comment to the Danish requirement:
» ... that another construction of the CI may be used, namely

CI+ = (low, 1)    if high < 1
CI+ = (1, high)   if low > 1
CI+ = (low, high) if low <= 1 and high >= 1


» where low and high are the conventional confidence interval limits.
Fascinating but nasty - it seems like a tool to make the results nicer:
» This observation makes the Danish requirement statistical nonsense.
I think the observation with the alternative CI is not the right reason for classifying the requirement as nonsense - the alternative CI is a 95% CI. The EMA guideline and the discussed Danish requirement both specify the 90% confidence interval, so it is not possible to use this construction (same alpha = 0.05, but the CI widened to 95%). I had a similar theoretical idea of adjusting alpha, i.e. widening the CI until it contains 100% (valid for a GMR in the interval from sqrt(0.8) to 1/sqrt(0.8)). Nevertheless, the Danish requirement states a 90% CI - there is no option for widening the CI (even with the same alpha as in the reference).
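For illustration, the quoted alternative construction can be written as a small R function (the function name is mine, not from the referenced post):

```r
# Extend the conventional (low, high) 90% CI at one side so that it contains 1,
# following the quoted construction rules.
ci.plus <- function(low, high) {
  if (high < 1) c(low, 1)          # CI entirely below 1: extend upward to 1
  else if (low > 1) c(1, high)     # CI entirely above 1: extend downward to 1
  else c(low, high)                # CI already contains 1: unchanged
}
ci.plus(0.85, 0.95)  # -> c(0.85, 1.00)
ci.plus(1.02, 1.18)  # -> c(1.00, 1.18)
ci.plus(0.92, 1.08)  # -> c(0.92, 1.08)
```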

The property of extending the CI to include 100% without adjusting alpha is interesting. I had to remind myself what a confidence interval actually means. My mind always slides toward probability, which is a common misunderstanding - citation:

"A 95% confidence interval does not mean that for a given realised interval calculated from sample data there is a 95% probability the population parameter lies within the interval. Once an experiment is done and an interval calculated, this interval either covers the parameter value or it does not; it is no longer a matter of probability. The 95% probability relates to the reliability of the estimation procedure, not to a specific calculated interval."

If it were a probability (but it is not), the change from a 90% CI would give something between a 90% and a 95% CI. For example, take a 90% CI of 87-98%: there would then be a 5% probability that the true GMR lies below the lower limit and 5% that it lies above the upper limit. If we keep the lower limit of the 90% CI and add the values up to 100% to create a new CI, there would still be a 5% probability that the true GMR is lower, and several percent probability that the true GMR is higher than 100%. - The text above is wrong (it shows how the CI may be wrongly understood).

1) When we construct the classical 90% CI (alpha = 0.05), 90% of hypothetically observed confidence intervals contain the true GMR.
2) When we construct the alternative 95% CI (alpha = 0.05), 95% of hypothetically observed alternative confidence intervals contain the true GMR,
where the alternative 95% CI is the classical 90% CI extended to include the value 100%.

So when we use the second CI construction according to the references mentioned by d_labes (the standard 90% CI extended to include 100%), we get a 95% CI - under the assumption that the true GMR is not equal to 100%. That assumption is not limiting: in that case we get the best possible CI, i.e. a 100% CI.

» See also here.

I tried the simulations (with patience, and with reading the steps to find out what was done there); nevertheless, I failed to reproduce the reported results (with the seed given after the # symbol).

mean(coverage.ci90) # ci90 is the classical 90% CI for the difference T-R
#[1] 0.90086
mean(coverage.ci95) # ci95 is the classical 90% CI extended at one side to zero
#[1] 1
# For T100 = R100 it is obviously always equal to 1. Wow, a 100% CI! Great!
# Because the true difference T-R is a fixed value (zero), all simulated
# (hypothetically observed) extended confidence intervals contain it.
# Otherwise (i.e. T100 differs from R100) it is about 0.95 - "about" only
# because of the simulations; the exact value is 0.95. (My shame: I was
# expecting something between 0.90 and 0.95 until I reminded myself of the
# definitions.)
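A self-contained sketch of such a coverage simulation - my own construction, not the code from the linked post; names, seed, and settings are hypothetical:

```r
# Coverage of the classical 90% CI for the difference T-R, and of the same CI
# extended at one side so that it always contains zero.
set.seed(123456)                 # arbitrary seed, not the one from the post
nsims <- 1e5                     # more sims than the 1e4 criticised above
n     <- 12                      # number of simulated subjects (assumed)
delta <- 0.1                     # true difference T-R; set to 0 for T = R
covered.ci90 <- covered.ci95 <- logical(nsims)
for (i in seq_len(nsims)) {
  d  <- rnorm(n, mean = delta, sd = 0.2)   # simulated differences T-R
  # classical two-sided 90% CI based on the t distribution
  ci <- mean(d) + c(-1, 1) * qt(0.95, n - 1) * sd(d) / sqrt(n)
  covered.ci90[i] <- ci[1] <= delta && delta <= ci[2]
  # extend the interval at one side so that it contains zero
  ci.ext <- c(min(ci[1], 0), max(ci[2], 0))
  covered.ci95[i] <- ci.ext[1] <= delta && delta <= ci.ext[2]
}
mean(covered.ci90)  # close to 0.90
mean(covered.ci95)  # close to 0.95 (and exactly 1 if delta = 0)
```

The extension rescues exactly the 5% of misses where the whole interval lies on the far side of zero relative to the true difference, which is why the coverage lands at 0.95 rather than somewhere between 0.90 and 0.95.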


The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz