Package MBESS and Power curiosity [R for BE/BA]

posted by d_labes – Berlin, Germany, 2009-10-15 16:57 – Posting: # 4366

Dear Helmut! Dear R adepts!

Helmut, as you suggested, I have contacted Kem Phillips.
He has confirmed that sigma in the case of logscale=TRUE is sqrt(log(1+CV^2)).
He has also confirmed the error for n>344 and will change the MBESS code accordingly.
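In R notation the conversion is a one-liner (values chosen for illustration only):

  CV    <- 0.65
  sigma <- sqrt(log(1 + CV^2))   # about 0.5938 for CV = 0.65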

With respect to the power curiosity in my post "The Power at limits" above, we are still in an ongoing discussion. Kem's first supposition was numerical problems, but considering the results I will present below he is having second thoughts.

Because PASS2008 gives the same results, I have confronted Jerry Hintze with that curiosity. Here is his answer:
"This is surprising to me, but since it occurs in a range that is of little interest, I don't see the need to investigate it further. It could be due to a number of numerical problems that occur when dealing with such a small sample size. However, since as you say, it matches SAS Analyst, I would conjecture that it is probably due to the definition of the equivalence test--that is, the definition probably leads to this result."

For my part, I saw the need to investigate it further. :-P
I performed an "in silico" experiment to obtain an empirical power estimate.
I simulated data sets for the 2x2 cross-over with equal numbers of subjects in the two sequences using my beasty number cruncher SAS, performed the TOST, and recorded how often bioequivalence was concluded.
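For R adepts who want to repeat this without SAS, here a minimal sketch. Instead of raw data sets it simulates the sufficient statistics (point estimate and residual variance), which is equivalent under the usual normality assumptions; the function name, seed and number of simulations are of course arbitrary:

  emp.power <- function(n, CV = 0.65, ratio0 = 0.95,
                        theta1 = 0.8, theta2 = 1.25,
                        alpha = 0.05, nsims = 50000) {
    sigma <- sqrt(log(1 + CV^2))
    se    <- sigma * sqrt(2 / n)               # true SE of the treatment difference
    df    <- n - 2
    ldiff <- rnorm(nsims, log(ratio0), se)     # simulated point estimates
    s2    <- sigma^2 * rchisq(nsims, df) / df  # simulated residual variances
    sem   <- sqrt(s2 * 2 / n)                  # their standard errors
    tcrit <- qt(1 - alpha, df)
    # TOST: both one-sided tests must reject at level alpha
    BE <- (ldiff - log(theta1)) / sem >  tcrit &
          (ldiff - log(theta2)) / sem < -tcrit
    mean(BE)                                   # empirical power
  }
  set.seed(42)
  emp.power(n = 12)                            # somewhere near 0.0008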
Here are my results:

ratio0=0.95, CV=0.65, BE limits 0.8 ... 1.25

                Power         SAS        EatMyShorts,     empirical
     Power   (R-code with   magic DL    Apfelstrudel,       power
 n  (MBESS)    Owen's Q)   (Owen's Q)   trapez meffud  (50,000 simul.)
----------------------------------------------------------------------
 4  0.004542   0.004542     0.004542      0.004542        0.00452
 6  0.001620   0.001620     0.001620      0.001620        0.00146
 8  0.000976   0.000976     0.000976      0.000976        0.00086
10  0.000799   0.000799     0.000799      0.000799        0.00080
12  0.000797   0.000797     0.000797      0.000797        0.00074
14  0.000907   0.000907     0.000907      0.000907        0.00084
16  0.001131   0.001131     0.001131      0.001131        0.00110
18  0.001501   0.001501     0.001501      0.001501        0.00132
20  0.002077   0.002077     0.002078      0.002077        0.00230
22  0.002952   0.002952     0.002952      0.002952        0.00352
24  0.004253   0.004253     0.004253      0.004253        0.00468

30  0.012708   0.012708     0.012708      0.012708        0.01216
40  0.058717   0.058717     0.058717      0.058715        0.05666
50  0.157587   0.157587     0.157587      0.157587        0.15790

BTW: ElMaestro, can you tell me how the Apfelstrudel found its way to Gent/Belgium :confused:?

As you can see, all results from the different implementations show the same behavior. So Jerry Hintze's presumption seems to be true: this behavior is inherent to the definition of the test.
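For the record: the same numbers can be obtained without Owen's Q at all, by integrating the conditional probability that both one-sided tests reject over the distribution of the residual SD. A sketch under my parameterisation (balanced sequences, log scale; the function name and the integration shortcut are mine), which should reproduce the exact columns of the table above up to numerical integration error:

  exact.power <- function(n, CV = 0.65, ratio0 = 0.95,
                          theta1 = 0.8, theta2 = 1.25, alpha = 0.05) {
    sigma <- sqrt(log(1 + CV^2))
    se    <- sigma * sqrt(2 / n)   # true SE of the treatment difference
    df    <- n - 2
    tcrit <- qt(1 - alpha, df)
    # u = estimated residual SD relative to the true one (s/sigma);
    # above umax the (1-2*alpha) CI cannot fit within the BE limits at all
    umax  <- (log(theta2) - log(theta1)) / (2 * tcrit * se)
    integrand <- function(u) {
      lo <- (log(theta1) - log(ratio0)) / se + tcrit * u
      hi <- (log(theta2) - log(ratio0)) / se - tcrit * u
      p  <- pmax(pnorm(hi) - pnorm(lo), 0)   # P(both tests reject | u)
      p * dchisq(u^2 * df, df) * 2 * u * df  # times the density of u
    }
    integrate(integrand, 0, umax)$value
  }
  sapply(c(4, 12, 24, 40, 50), exact.power)  # compare with the table

The cut-off umax also makes the mechanism visible: for small n the confidence interval can only fit within the BE limits if the observed residual SD is well below the true one, which is why the power stays so close to zero at the left end of the table.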

BTW: The empirical power in that region converges badly to a stationary value. One has to use a huge number of simulated data sets to obtain a reasonably reliable value; 50,000 is obviously still too low.
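To put a number on it: the standard error of a simulated proportion is sqrt(p*(1-p)/N); for p around 0.0008 and N = 50,000 that is about 0.00013, i.e. a relative error of roughly 16%, which explains the scatter in the last column of the table.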

Greetings to all other pettifoggers (hair-splitting needs sharp tools, and I haven't got anything better to do :-D).

Regards,

Detlew
