d_labes
★★★

Berlin, Germany,
2013-08-09 16:49
(4280 d 11:38 ago)

Posting: # 11254
Views: 7,040
 

 Partial replicate design, EMA evaluation, Type I error [Power / Sample Size]

Dear All!

Today is my birthday and, in the tradition of the hobbits, here is my birthday gift for you - a long sermon.

During development of the power / sample-size functions for scaled ABE in PowerTOST I noticed that for the 2x3x3 crossover design, using the EMA-recommended evaluation, the power values calculated via subject-data sims were markedly higher in case of CVwT<CVwR and markedly lower in case of CVwT>CVwR, compared to the simulations of the key statistics pe, mse and σ2wR used for high-speed simulations.

To explore this, the question arose: “How does the EMA-recommended evaluation of the replicate designs (“Use the same ANOVA model as for the classical 2x2x2 crossover”) perform for deciding ABE in terms of type I error (alpha), especially if the homoscedasticity assumption is not true?”
The following table summarizes the simulated alpha values via subject data:

CVwT   CVwR   pooled CV n  'alpha'  power.TOST
0.3    0.3    0.3     12   0.0440   0.0445
                      24   0.0505   0.0500
                      36   0.0500   0.0501
0.4    0.4    0.4     12   0.0164   0.0164
                      24   0.0483   0.0482
                      36   0.0500   0.0500
0.5    0.5    0.5     12   0.0027   0.0028
                      24   0.0323   0.0324
                      36   0.0484   0.0484
0.3    0.4    0.3690  12   0.0202   0.0251
                      24   0.0361   0.0495
                      36   0.0363   0.0500
0.3    0.5    0.4407  12   0.0068   0.0084
                      24   0.0267   0.0443
                      36   0.0285   0.0498
0.4    0.3    0.3359  12   0.0434   0.0354
                      24   0.0659   0.0499
                      36   0.0663   0.0500
0.5    0.3    0.3754  12   0.0319   0.0232
                      24   0.0757   0.0493
                      36   0.0783   0.0500

power.TOST results calculated with pooled CV (= mse2CV((CV2mse(CVwT) + 2* CV2mse(CVwR))/3))
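For illustration, this can be reproduced with the PowerTOST helpers, e.g. for the row CVwT=0.3 / CVwR=0.4, n=24 (the little wrapper pooledCV() below is only for illustration, not part of PowerTOST):

library(PowerTOST)

# pooled within-subject CV in the 2x3x3 partial replicate:
# Test contributes one within-subject mse, the replicated Reference two
pooledCV <- function(CVwT, CVwR) mse2CV((CV2mse(CVwT) + 2*CV2mse(CVwR))/3)

pooledCV(0.3, 0.4)        # 0.3690, cf. the table
# 'alpha' = power at the upper acceptance limit (true GMR = 1.25)
power.TOST(CV = pooledCV(0.3, 0.4), n = 24, theta0 = 1.25, design = "2x3x3")
# should come out close to the 0.0495 given in the table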

Wow! While it performs as expected if σ2wT = σ2wR, comparable to the evaluation via power.TOST(), the empirical alpha values from the subject-data simulations are much too conservative in case of CVwT<CVwR. In case of CVwT>CVwR they are too liberal, up to a considerable alpha inflation!

This observation resembles well-known results for one-way or two-way ANOVA, showing that the usual F-tests of the effects are no longer valid (they may be too liberal or too conservative) if the assumption of equal variances is violated.

The only explanation could be that the distributional assumption (“the mse is chi-squared distributed”) no longer holds if homoscedasticity is not true. As far as I know there is no way out here, since there is no solution for the distribution of the mse within the crossover ANOVA in the case of heteroscedasticity, besides using mixed-model software. Moreover, the EMA forces us to use this fixed-effects ANOVA without allowing a mixed-models evaluation.

Thus we are stuck with subject-data simulations, with the burden of very long simulation run-times if we wish to calculate the empirical power / alpha for the EMA method within a 2x3x3 design with all bells and whistles.

So far, so bad. Anybody out there to prove me wrong?

BTW: Do you remember the regulatory body abandoning Potvin C for 2-stage studies because a maximum empirical alpha of 0.0510 was reported in the Potvin et al. paper, claiming an alpha inflation for that method :cool:?

Regards,

Detlew
ElMaestro
★★★

Denmark,
2013-08-09 17:29
(4280 d 10:58 ago)

@ d_labes
Posting: # 11255
Views: 5,552
 

 Partial replicate design, EMA evaluation, Type I error

Dear d_labes,

happy birthday :party:

Very interesting. Why not aim for publication of these data? They could be of interest to a very wide audience.

Btw: Talking about CVwT sounds odd for a 3-period study where just the ref is replicated. Can you elaborate on how these sims were performed?

Pass or fail!
ElMaestro
d_labes
★★★

Berlin, Germany,
2013-08-09 17:56
(4280 d 10:32 ago)

@ ElMaestro
Posting: # 11256
Views: 5,744
 

 How to sim

Dear ElMaestro,

❝ Btw: Talking about CVwT sounds odd for a 3-period study where just the ref is replicated. Can you elaborate on how these sims were performed?


In very short: All effects fixed, that means without loss of generality no period, no sequence and no subject effects (! according to EM :-D) i.e. all set to zero. Normal distribution of the log(PKmetric) with µT = log(GMR) + µR and s2wT for the period a subject receives Test and µR and s2wR for the periods a subject receives Reference. s2wX as usual = log(1.0 + CVwX^2).
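For illustration only, a bare-bones R sketch of this generation step (the function name and the hard-coded sequences TRR/RTR/RRT are arbitrary; this is not the PowerTOST code itself):

# simulate one 2x3x3 study (sequences TRR | RTR | RRT) on the log scale;
# all period / sequence / subject effects set to zero, muR = 0 w.l.o.g.;
# n assumed to be a multiple of 3
sim_one_study <- function(n = 24, GMR = 0.95, CVwT = 0.3, CVwR = 0.4) {
  swT  <- sqrt(log(1 + CVwT^2))   # within-subject sd under Test
  swR  <- sqrt(log(1 + CVwR^2))   # within-subject sd under Reference
  seqs <- rep(c("TRR", "RTR", "RRT"), each = n/3)
  out  <- NULL
  for (i in seq_along(seqs)) {
    trts <- strsplit(seqs[i], "")[[1]]          # treatment in periods 1..3
    mu   <- ifelse(trts == "T", log(GMR), 0)
    sw   <- ifelse(trts == "T", swT, swR)
    out  <- rbind(out, data.frame(subject = i, sequence = seqs[i],
                                  period = 1:3, treatment = trts,
                                  logPK = rnorm(3, mean = mu, sd = sw)))
  }
  out
}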

Apply the EMA evaluation, i.e. calculate the 90% CIs and count the studies with the CI within the usual acceptance range (note we are talking here about the ABE term).
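And a sketch of the evaluation step, with all effects fixed as the EMA demands (again the function name is arbitrary; simulate with the true GMR at 1.25 to get the empirical alpha):

# fixed-effects ANOVA, 90% CI for T vs. R, ABE decision 0.80 ... 1.25
eval_EMA <- function(data) {
  data$subject   <- factor(data$subject)
  data$sequence  <- factor(data$sequence)
  data$period    <- factor(data$period)
  data$treatment <- factor(data$treatment, levels = c("R", "T"))
  m  <- lm(logPK ~ sequence + subject + period + treatment, data = data)
  ci <- exp(confint(m, "treatmentT", level = 0.90))
  (ci[1] >= 0.80) & (ci[2] <= 1.25)
}

# empirical alpha: fraction of 'passing' studies with the true GMR at 1.25
# mean(replicate(1e4, eval_EMA(sim_one_study(n = 24, GMR = 1.25, CVwT = 0.4, CVwR = 0.3))))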

Regards,

Detlew
ElMaestro
★★★

Denmark,
2013-08-09 21:49
(4280 d 06:38 ago)

@ d_labes
Posting: # 11257
Views: 5,542
 

 How to sim

Hi d_labes,

❝ In very short: All effects fixed, that means without loss of generality no period, no sequence and no subject effects (! according to EM :-D) i.e. all set to zero.


He is right, that's a bit easier than adding constants that later on cancel out anyway. What a clever person.

❝ Normal distribution of the log(PKmetric) with µT = log(GMR) + µR and s2wT for the period a subject receives Test and µR and s2wR for the periods a subject receives Reference. s2wX as usual = log(1.0 + CVwX^2).


I can't really see if that's correct. I would do
logPK = F + S + error
where F is the treatment effect, S is the subject effect, and even though we evaluate it as a fixed effect we model it as random here (this is justified, although it perhaps sounds silly). So basically S is normal with mean zero and variance s2b.

Let's say a particular subject is in seq RRT:
Derive one random Gaussian with variance s2b (it is the same one you re-use for that subject).
Derive three withins, two for Ref and one for Test. To generate the three observations, add the formulation effects to the three different withins and the between for the subject. Done. It may come to the same thing as what you did, I can't tell...
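In code that would be something like this, just a rough sketch for one subject in sequence RRT (names arbitrary, s2b = between-subject variance):

# between-subject effect S drawn once and re-used for all three periods;
# within-subject errors with s2wR, s2wR, s2wT; formulation effects 0, 0, log(GMR)
sim_subject_RRT <- function(GMR = 0.95, s2b = 0.1, CVwT = 0.3, CVwR = 0.4) {
  S   <- rnorm(1, mean = 0, sd = sqrt(s2b))
  swT <- sqrt(log(1 + CVwT^2))
  swR <- sqrt(log(1 + CVwR^2))
  F   <- c(0, 0, log(GMR))                          # R, R, T
  e   <- rnorm(3, mean = 0, sd = c(swR, swR, swT))  # the three withins
  F + S + e                                         # logPK for periods 1..3
}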

Hang on... I need to think more about this. I have a feeling I might be editing this post again soon :-D

Pass or fail!
ElMaestro