ElMaestro
Denmark, 2014-09-30 13:23
Posting: # 13619

Proc power [Power / Sample Size]

Hi all,

I am not a SAS user but have noted that with SAS/Proc Power you can calculate sample size expressed as number of comparisons (or comparison pairs) for a given level of power while taking correlation into consideration. This seems relevant since in a 222BE-study the data are generally positively correlated within subject.

I wonder what the equations then look like? I tried the online manuals etc., but clearly the power to know is sending on frequencies at which I can't receive. So... can anyone shed a little light on this?
Many thanks in advance and have a great day.

Pass or fail!
ElMaestro
d_labes
Berlin, Germany, 2014-10-01 11:26
@ ElMaestro
Posting: # 13628

Proc power

Dear ElMaestro,

❝ I am not a SAS user but have noted that with SAS/Proc Power you can calculate sample size expressed as number of comparisons (or comparison pairs) for a given level of power while taking correlation into consideration. This seems relevant since in a 222BE-study the data are generally positively correlated within subject.


Could you please elaborate? I haven't yet discovered such things in Proc Power.

❝ I wonder what the equations then look like? I tried the online manuals etc., but clearly the power to know is sending on frequencies at which I can't receive. ...


Me too :-D.

Regards,

Detlew
jag009
NJ, 2014-10-01 22:55
@ ElMaestro
Posting: # 13639

Proc power

Hi ElMaestro,

❝ I am not a SAS user but have noted that with SAS/Proc Power you can calculate sample size expressed as number of comparisons (or comparison pairs) for a given level of power while taking correlation into consideration. This seems relevant since in a 222BE-study the data are generally positively correlated within subject.


Example output? I have poor imagination :-)

John
ElMaestro
Denmark, 2014-10-01 23:02
@ jag009
Posting: # 13640

Screen shot

Hi D_labes and jag,


Example output:
[screenshot of SAS Proc Power output]

Pass or fail!
ElMaestro
d_labes
Berlin, Germany, 2014-10-02 15:00
@ ElMaestro
Posting: # 13642

Paired design

Hi Großer Meister,

this is the power/sample-size calculation for the so-called "paired design",
i.e. you have, for instance, measurements before/after on the same subject and need a ratio/difference of before vs. after.

N pairs is here the number of such paired observations.
It has nothing to do with the number of comparisons, whatever you want to compare.
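A minimal sketch of what analysing such paired data might look like (hypothetical numbers; the usual paired t-test on the log scale):

before <- c(12.1, 9.8, 15.2, 11.0, 13.4, 10.6)   # e.g. AUC before, one value per subject
after  <- c(13.0, 10.5, 14.1, 12.2, 14.0, 11.1)  # AUC after, same subjects
d      <- log(after) - log(before)               # within-subject log-ratios
n      <- length(d)                              # N pairs = number of subjects
ci     <- mean(d) + c(-1, 1) * qt(0.95, n - 1) * sd(d) / sqrt(n)
exp(ci)                                          # 90% CI for the after/before ratio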

Hope I have understood you correctly.

Regards,

Detlew
ElMaestro
Denmark, 2014-10-02 15:33
@ d_labes
Posting: # 13643

Paired design

Thanks a lot d_labes,

❝ this is the power/sample-size calculation for the so-called "paired design",

❝ i.e. you have, for instance, measurements before/after on the same subject and need a ratio/difference of before vs. after.


❝ N pairs is here the number of such paired observations.

❝ It has nothing to do with the number of comparisons, whatever you want to compare.



I am not sure how to get my head around it. 'Before' and 'after' are, in the sense of a fixed effect, exactly the same as 'T' and 'R': a column of sneaky 1's and 0's here and a column of sexy 1's and 0's there;
in either case we'd be working on y = Blah + e,
where Blah has two levels (before and after, or test and reference), leaving out the other fixed stuff here because it doesn't affect power.
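Just to make those columns concrete, a tiny sketch (hypothetical coding; the design matrix looks the same whether the two levels are T/R or before/after):

trt <- factor(c("R", "T", "R", "T"))   # or "before"/"after" - same structure
model.matrix(~ trt)                    # intercept plus one column of 0's and 1's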

Could you possibly check what a result from Proc Power would then look like for corr=0.0 and e.g. gmr=0.95, CV=0.25, n=18 or something? Would that coincide with the usual power calculations from the world's finest package for power calculations, widely known as PowerTOST?

Many thanks for your help.

Pass or fail!
ElMaestro
Helmut
Vienna, Austria, 2014-10-02 15:43
@ ElMaestro
Posting: # 13644

Paired design

My Capt’n,

sorry to interfere.

❝ in either case we'd be working on y = Blah + e,

❝ where Blah has two levels (before and after, or test and reference), leaving out the other fixed stuff here because it doesn't affect power.


In your OP you wrote about a 222 design, which has period + sequence in the model. The only application of paired designs I know of (in BA, not BE) is one where you compare PK metrics in steady state to those after a single dose (e.g., AUCτ to AUC∞ in order to assess deviation from linear PK). Naturally we have no sequence here (MD always after SD) and have to assume no period effect.


Edit: Due to +1 df, the paired model is always more powerful than the cross-over.

library(PowerTOST)
known.designs()[c(3, 12), ]
    no design  df df2 steps bk bknif bkni            name
3    1  2x2x2 n-2 n-2     2  2   1/2  0.5 2x2x2 crossover
12 100 paired n-1 n-1     1  2   2/1  2.0    paired means

power.TOST(CV=0.2, n=18, design="2x2x2")
[1] 0.7912399
power.TOST(CV=0.2, n=18, design="paired")
[1] 0.793409

Helmut Schütz
nobody
2014-10-02 16:00
@ Helmut
Posting: # 13645

Paired design

power.TOST(CV=0.25, n=18, design="2x2x2")
[1] 0.5801284

power.TOST(CV=0.25, n=18, design="paired")
[1] 0.5831445


...just to add the data for the requested CV of 25% ;-)

Kindest regards, nobody
ElMaestro
Denmark, 2014-10-02 16:25
@ Helmut
Posting: # 13646

Paired design

Hi Hötzi,

thanks for your input.
I am OK with the fixed effects; they are not really what is behind the question.
Let us take the long way around my question:

I consider a 222BE study a paired design - it just has the added complexity of some fixed factors, which are constants in the model. Fixed factors are just constants, no bother.
That is the reason why (to put it in perspective a little):
  1. You can omit the ANOVA and linear model and just apply the simple t-test formulae as given by Chow & Liu 2009, eqn. 3.3.9. Yes, it looks funny under the square-root sign, but that's just because of the presence of the two sequences, which affects the way we calculate the residual. We could just as well work with one sequence (before and after), and 3.3.9 would in that case be rewritten and look more appealing.
  2. Linear models and mixed models for 222BE give the same results as long as you only look at the pairs.
  3. Potvin and Karalis and even the nutcase Fuglsang can get away with simulating differences of pairs when they look at power in 222BE without spending as much as a tenth of a second's thought on sequence or period interference in the muddle, as long as we keep in mind that they steal a little df (see the sketch below).
So shouldn't the Proc Power approach alluded to above give the same result as PowerTOST at corr=0, possibly with the single little difference that a fair comparison would necessitate the df allocation n-2 and not n-1, as we'd do with an uncomplicated pairwise comparison?
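A little simulation sketch of point 3, under the usual assumptions (my own sketch: log-normal data, within-subject CV 25%, GMR 0.95, n = 18, with the n-2 df borrowed from the 2x2x2 ANOVA):

library(PowerTOST)
set.seed(1)
CV    <- 0.25                        # within-subject CV
n     <- 18                          # subjects
gmr   <- 0.95                        # true T/R ratio
sd.d  <- sqrt(2 * log(1 + CV^2))     # SD of a within-subject log-difference
tcrit <- qt(0.95, df = n - 2)        # n-2 df, as in the 2x2x2 ANOVA
pass  <- replicate(1e4, {
  d  <- rnorm(n, mean = log(gmr), sd = sd.d)   # simulated log(T) - log(R) per subject
  se <- sd(d) / sqrt(n)
  ci <- mean(d) + c(-1, 1) * tcrit * se
  ci[1] > log(0.8) && ci[2] < log(1.25)        # 90% CI within the BE limits?
})
mean(pass)                                     # simulated power
power.TOST(CV = CV, n = n, theta0 = gmr, design = "2x2x2")  # analytical, for comparison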

Pass or fail!
ElMaestro
ElMaestro
Denmark, 2014-10-04 20:13
@ ElMaestro
Posting: # 13648

A wee little hole in the coconut

Aaaaaaah,

finally got a little crack in the coconut.
Google was my friend but it was hard work: Check this out.

Anyone got the Phillips paper from 1990?


Edit half an hour later:
Using the world's best package for power calculation known as power.TOST:

cv=sqrt(exp(0.1003) - 1.0)
sampleN.TOST(CV=cv, theta0=1.1, targetpower=.95)


+++++++++++ Equivalence test - TOST +++++++++++
            Sample size estimation
-----------------------------------------------
Study design:  2x2 crossover
log-transformed data (multiplicative model)

alpha = 0.05, target power = 0.95
BE margins        = 0.8 ... 1.25
Null (true) ratio = 1.1,  CV = 0.3248115

Sample size (total)
 n     power
136   0.952220


Had a moment of panic as the numbers apparently didn't match, but they do. Example 3 gave 68 subjects per sequence or 136 in total. Me likey. Gosh I am such a jerk.

Pass or fail!
ElMaestro
Helmut
Vienna, Austria, 2014-10-04 21:23
@ ElMaestro
Posting: # 13649

A wee little hole in the coconut

Dear Apprentice of power estimations,

❝ Anyone got the Phillips paper from 1990?


I’ll send you a scan. Note that the paper deals with untransformed data (AR 0.80–1.20). More relevant is the one by Diletti et al.*

❝ Using the world's best package for power calculation known as power.TOST:


cv=sqrt(exp(0.1003) - 1.0)


Another guy not reading the man-pages. ;-)
Try this goodie:
sampleN.TOST(CV=mse2CV(0.1003), theta0=1.1, targetpower=.95)
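For the record, the two routes should agree, since mse2CV() is just the MSE-to-CV conversion CV = sqrt(exp(MSE) - 1); a quick check:

library(PowerTOST)
mse2CV(0.1003)           # PowerTOST's converter
sqrt(exp(0.1003) - 1)    # the manual conversion from the previous post
# both give the CV of 0.3248115 shown in the sample-size output above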


  • Diletti E, Hauschke D, Steinijans VW. Sample size determination for bioequivalence assessment by means of confidence intervals. Int J Clin Pharm Ther Toxicol. 1991;29(1):1–8. PMID: 2004861.

Helmut Schütz
Ben
2014-10-05 12:43
@ ElMaestro
Posting: # 13650

Paired design

Dear Maestro/All,

❝ Could you possibly check what a result from Proc Power would then look like for corr=0.0 and e.g. gmr=0.95, CV=0.25, n=18 or something? Would that coincide with the usual power calculations from the world's finest package for power calculations, widely known as PowerTOST?


proc power;
   pairedmeans test=equiv_ratio
      lower = 0.8
      upper = 1.25
      meanratio = 0.95
      corr = 0
      cv = 0.25
      npairs = 18
      power = .;
run;


gives 0.583. This is the same as power.TOST(CV=0.25, n=18, design="paired"). If we change corr = 0 to, for example, corr = 0.6, proc power gives 0.940.
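One way to probe what corr= does (an assumption about PROC POWER's internals on my part, not something taken from its documentation): if cv= is read as the CV of each member of the pair and the correlation shrinks the variance of the log-difference by a factor (1 - corr), then corr = 0 collapses to the power.TOST "paired" call above, and the corr = 0.6 case can be checked against the reported 0.940:

library(PowerTOST)
corr   <- 0.6
s2     <- log(1 + 0.25^2)                  # log-scale variance per observation (CV = 0.25)
CV.eff <- sqrt(exp(s2 * (1 - corr)) - 1)   # CV giving the same SD of the log-difference
power.TOST(CV = CV.eff, n = 18, design = "paired", theta0 = 0.95)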

Interesting... I went back to equation (3.5) in Patterson and Jones* and calculated the variance of the estimator of the treatment difference when taking the correlation into account (IMHO equation (3.5) assumes sB² = 0, i.e. rho = 0). When doing so I end up with 2/n * sT² * (1 - rho). But as we know, sT² * (1 - rho) is exactly equal to sW². So the variance of the estimator of the treatment effect remains the same, even when taking the correlation into account (:confused:). Thus the power is always the same, regardless of the correlation. In light of the observation above from proc power this does not make sense...

On the other hand: a higher correlation will result in a higher sB², which is compensated for via the subject effect in the model. Also, when simulating data sets with correlated outcomes and checking the power, the same result was obtained (i.e. the power was always the same).
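A minimal version of such a simulation (my own sketch: log-normal two-period data with a subject effect; sW² is held fixed at the CV = 0.25 value and rho is varied through sB², so the subject effect, and with it the correlation, cancels in the within-subject difference):

set.seed(42)
sim.power <- function(rho, CV = 0.25, n = 18, gmr = 0.95, nsims = 1e4) {
  sW2   <- log(1 + CV^2)                # within-subject (log-scale) variance
  sB2   <- rho / (1 - rho) * sW2        # between-subject variance giving this rho
  tcrit <- qt(0.95, df = n - 1)
  pass  <- replicate(nsims, {
    subj <- rnorm(n, sd = sqrt(sB2))    # subject effects
    logT <- log(gmr) + subj + rnorm(n, sd = sqrt(sW2))
    logR <-            subj + rnorm(n, sd = sqrt(sW2))
    d    <- logT - logR                 # the subject effect cancels here
    ci   <- mean(d) + c(-1, 1) * tcrit * sd(d) / sqrt(n)
    ci[1] > log(0.8) && ci[2] < log(1.25)
  })
  mean(pass)
}
sapply(c(0, 0.3, 0.6), sim.power)       # roughly the same power, whatever rho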

Any thoughts? Apparently there must be some error somewhere...?

Best, Ben

* Patterson S, Jones B. Bioequivalence and Statistics in Clinical Pharmacology.
ElMaestro
Denmark, 2014-10-05 16:01
@ Ben
Posting: # 13652

Paired design

Hi Ben,

❝ So the variance of the estimator of the treatment effect remains the same, even when taking the correlation into account (:confused:).


Basically, if you know the ratio and you know the variance estimate from it, then you shouldn't have any degree of freedom to choose a level of correlation between T and R, no?
Perhaps this SAS stuff provides an answer to a question that is slightly different from the one we think we are asking.

Pass or fail!
ElMaestro