Helmut
★★★  Vienna, Austria,
2006-12-28 21:33

Posting: # 422
Views: 31,176

## R-Code for Power in 2×2 Cross-over [R for BE/BA]

Dear all,

following this post, and just out of curiosity, I implemented SAS code by Jones and Kenward (2000) [1] in R (v2.4.1).
To quote the beginning of Jones'/Kenward's SAS code (which is copyrighted material and therefore not given here; but surely you can afford to buy a book):
```
/*** WARNING : PROGRAM OFFERED FOR USE WITHOUT ANY GUARANTEES    ***/
/*** NO LIABILITY IS ACCEPTED FOR ANY LOSS RESULTING FROM USE OF ***/
/*** THIS SET OF SAS INSTRUCTIONS                                ***/
```

Their example (pp 336-337) yields, for a within-subject standard deviation of 0.355 in 58 subjects and a point estimate of 1.00, a power of 90.4%.
The following R code gives 90.3988%:
```
# significance level
a      <- 0.05
# variance of difference of two observations on log scale
# sigmaW = within-subjects standard deviation
sigmaW <- 0.355
s      <- sqrt(2)*sigmaW
# total number of subjects (needs to be a multiple of 2)
# for power calculation
n      <- 58
# error degrees of freedom for AB/BA crossover with
# n subjects in total
n2     <- n-2
# ratio = mu_T/mu_R
ratio  <- 1.00
# calculate power
t1     <- qt(1-a,n2)
t2     <- -t1
nc1    <- sqrt(n)*(log(ratio)-log(0.8))/s
nc2    <- sqrt(n)*(log(ratio)-log(1.25))/s
df     <- sqrt(n2)*(nc1-nc2)/(2*t1)
prob1  <- pt(t1,df,nc1)
prob2  <- pt(t2,df,nc2)
answer <- prob2-prob1
power  <- answer*100
power
```
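Out of convenience, the calculation above can be wrapped into a function (same formulas and constants as the code above; the name `jk.power` is only illustrative):

```r
# power of the TOST procedure in a 2x2 cross-over,
# wrapping the Jones/Kenward-style calculation shown above
jk.power <- function(sigmaW, n, ratio, alpha = 0.05) {
  s   <- sqrt(2) * sigmaW                    # SD of a within-subject difference
  n2  <- n - 2                               # error degrees of freedom
  t1  <- qt(1 - alpha, n2)
  nc1 <- sqrt(n) * (log(ratio) - log(0.8))  / s
  nc2 <- sqrt(n) * (log(ratio) - log(1.25)) / s
  df  <- sqrt(n2) * (nc1 - nc2) / (2 * t1)   # Jones/Kenward degrees of freedom
  100 * (pt(-t1, df, nc2) - pt(t1, df, nc1)) # power in percent
}

jk.power(sigmaW = 0.355, n = 58, ratio = 1.00)   # 90.3988 (as above)
```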

The following code reproduces the power curves for n=8/12/18/24/30/36/48/60 and CV=20% given in Fig. 1c by Diletti et al. (1991) [2]:
```
a       <- 0.05       # alpha
CV      <- 0.20       # intra-subject coefficient of variation
# calculate within-subjects standard deviation
# from CV
sigmaW  <- sqrt(log(CV^2+1))
s       <- sqrt(2)*sigmaW
Theta1  <- 0.80       # lower acceptance limit
Theta2  <- 1/Theta1   # upper acceptance limit
Step    <- 0.005      # Interval
ssx     <- 0.65       # x-coordinate of sample size labels in %
CVLabel <- paste("CV =",as.character(CV*100),"%" )
# vector of sample sizes
for (n in c(8,12,18,24,30,36,48,60))
  {
  n2    <- n-2
  # vector of ratios from 'Theta1' to 'Theta2', interval 'Step'
  assign("ratio",seq(Theta1,Theta2,Step))
  t1    <- qt(1-a,n2)
  t2    <- -t1
  nc1   <- sqrt(n)*(log(ratio)-log(Theta1))/s
  nc2   <- sqrt(n)*(log(ratio)-log(Theta2))/s
  df    <- sqrt(n2)*(nc1-nc2)/(2*t1)
  prob1 <- pt(t1,df,nc1)
  prob2 <- pt(t2,df,nc2)
  power <- prob2-prob1
  if (n == 8)     # set-up of plot, first curve
    {
      plot(ratio, power, axes=F, type="l", lwd=2, col=2,
        xlim=c(Theta1,Theta2), ylim=c(0,1),
        main="2×2 Cross-over", xlab="µT/µR", ylab="Power" )
      text(1, 0.4, CVLabel, cex=1.25, adj=0.5 )
      axis(1, at=seq(Theta1,Theta2,Step*10),
        labels=seq(Theta1,Theta2,Step*10), pos=0 )
      axis(2, at=seq(0,1,0.1), labels=seq(0,1,0.1), las=1 )
      abline(h=0.05, col = "black", lty = "dotted" )
      abline(h=seq(0.6,1,0.1), col = "black", lty = "dotted" )
      abline(v=seq(Theta1,Theta2,Step*10), col = "black", lty = "dotted" )
      box()
    }
  else            # add curves to plot
    lines(ratio, power, lwd=2, col=2 )
    if (ratio[as.integer(length(ratio)*ssx)] )
      text(ratio[as.integer(length(ratio)*ssx)],
        power[as.integer(length(ratio)*ssx)], n, adj=0.5 )
  }
```

Values calculated by the R code for n=24 match Diletti et al.'s Fig. 1c:
```
+------+-------+
|  PE  | Power |
+------+-------+
| 0.85 | 28.16 |
| 0.90 | 64.08 |
| 0.95 | 89.19 |
| 1.00 | 96.28 |
| 1.05 | 89.90 |
| 1.10 | 70.02 |
| 1.15 | 41.92 |
| 1.20 | 18.30 |
+------+-------+
```

Please note that this code is not validated; it is only a toy you can play around with!

With some ambition you can modify the code (essentially varying the point estimate instead of the sample size) in order to also match Diletti et al.'s Fig. 2c. Then it should be only a small step to re-calculate their table of sample sizes…
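That last step can be sketched as a simple search over even total sample sizes (not validated either; `sampleN` is only an illustrative name, and the power calculation is the same as in the code above):

```r
# smallest even total sample size with power >= target,
# using the same power calculation as the code above
sampleN <- function(GMR, CV = 0.20, target = 0.80, alpha = 0.05) {
  s <- sqrt(2) * sqrt(log(CV^2 + 1))   # SD of a within-subject difference
  for (n in seq(4, 1000, 2)) {
    n2  <- n - 2
    t1  <- qt(1 - alpha, n2)
    nc1 <- sqrt(n) * (log(GMR) - log(0.8))  / s
    nc2 <- sqrt(n) * (log(GMR) - log(1.25)) / s
    df  <- sqrt(n2) * (nc1 - nc2) / (2 * t1)
    if (pt(-t1, df, nc2) - pt(t1, df, nc1) >= target) return(n)
  }
  NA
}

sampleN(GMR = 0.95)   # 20
sampleN(GMR = 1.00)   # 16
```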
1. Jones B and Kenward MG
Design and Analysis of Cross-Over Trials
Chapman & Hall/CRC, Boca Raton (2nd edition 2000)
2. Diletti E, Hauschke D and Steinijans VW
Sample size determination for bioequivalence assessment by means of confidence intervals
Int J Clin Pharmacol Ther Toxicol 29(1), 1-8 (1991)

Cheers,
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼
Helmut
★★★  Vienna, Austria,
2007-01-01 23:59
(edited by HS on 2007-06-04 12:41)

@ Helmut
Posting: # 428
Views: 28,948

## Power in Diletti's sample size table

Dear all,

I performed some comparisons of the code based on Jones/Kenward implemented in R, Diletti's table and results obtained from StudySize (v2.0.1). Quite interesting...

All comparisons were done for CV=20% with ratios of 0.85-1.20. Diletti et al. reported sample sizes to obtain ≥80% power; calculated odd sample sizes were rounded up to the next even number:
```
+======+=====+===========+===========+
|  GMR |  n  |  R-code   | StudySize |
+------+-----+-----------+-----------+
| 0.85 | 134 | 0.8014178 |  0.80167  |
| 0.90 |  38 | 0.8140704 |  0.81536  |
| 0.95 |  20 | 0.8300156 |  0.83451  |
| 1.00 |  16 | 0.8214263 |  0.83305  |
| 1.05 |  18 | 0.7950343 |  0.79996  |
| 1.10 |  32 | 0.8084890 |  0.80992  |
| 1.15 |  72 | 0.8035456 |  0.80411  |
| 1.20 | 294 | 0.8017617 |  0.80182  |
+======+=====+===========+===========+
```

Power with the sample size given by Diletti et al. at a GMR of 1.05 was below 80%, calculated both with R and StudySize…

A Monte Carlo simulation (1,000,000 runs) for GMR=1.05 and 18 subjects in StudySize resulted in:
Power `0.8006` (95% coverage probability: `0.7998-0.8013`).
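Such a simulation is easy to mimic in R; a minimal brute-force sketch simulating the point estimate and the residual variability directly from the log-normal model (an independent toy check, not StudySize's implementation):

```r
set.seed(123)                              # reproducibility
n     <- 18                                # total number of subjects
GMR   <- 1.05                              # true ratio
CV    <- 0.20                              # intra-subject CV
alpha <- 0.05
nsims <- 1e5                               # simulated studies
s     <- sqrt(2) * sqrt(log(CV^2 + 1))     # SD of a within-subject difference
df    <- n - 2
tcrit <- qt(1 - alpha, df)
# simulate the estimated treatment difference and the estimated SD
m     <- rnorm(nsims, mean = log(GMR), sd = s / sqrt(n))
s.hat <- s * sqrt(rchisq(nsims, df) / df)
lo    <- m - tcrit * s.hat / sqrt(n)       # lower 90% CI limit (log scale)
hi    <- m + tcrit * s.hat / sqrt(n)       # upper 90% CI limit (log scale)
pwr   <- mean(lo > log(0.80) & hi < log(1.25))
pwr                                        # ~0.80
```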

Differences may be due to the implementation of the algorithm to obtain the value of the noncentral t-distribution by numeric integration...

Maybe somebody of you has access to SAS or software specialized in power analysis (e.g., PASS or nQuery Advisor) and would like to check these results?

Cheers,
Helmut Schütz
Helmut
★★★  Vienna, Austria,
2007-01-06 15:17

@ Helmut
Posting: # 441
Views: 28,959

## Approximate Power

Dear all,

Dieter Hauschke informed me that Jones'/Kenward's code uses Satterthwaite's approximation for degrees of freedom (for heterogeneous variances), which was corrected in a recent book [1].
In power calculations degrees of freedom are fixed by the number of subjects (and sequences).

If you want to play around again, please replace code line #22

`df    <- sqrt(n2)*(nc1-nc2)/(2*t1)`

with

`df    <- n2`

Power (CV=20%) by the old and the new code for n=24:
```
+======+=======+=======+
|  GMR |  old  |  new  |
+------+-------+-------+
| 0.85 | 28.16 | 26.86 |
| 0.90 | 64.08 | 63.73 |
| 0.95 | 89.19 | 89.60 |
| 1.00 | 96.28 | 96.72 |
| 1.05 | 89.90 | 90.32 |
| 1.10 | 70.02 | 69.89 |
| 1.15 | 41.92 | 40.85 |
| 1.20 | 18.30 | 17.04 |
+======+=======+=======+
```

Sample sizes for power >80% (CV=20%) by the old and the new code:
```
+======+=====+===========+===========+
|  GMR |  n  |    old    |    new    |
+------+-----+-----------+-----------+
| 0.85 | 134 | 0.8014178 | 0.8017723 |
| 0.90 |  38 | 0.8140704 | 0.8154940 |
| 0.95 |  20 | 0.8300156 | 0.8346802 |
| 1.00 |  16 | 0.8214263 | 0.8331980 |
| 1.05 |  18 | 0.7950343 | 0.8001854 |
| 1.10 |  32 | 0.8084890 | 0.8100682 |
| 1.15 |  72 | 0.8035456 | 0.8042181 |
| 1.20 | 294 | 0.8017617 | 0.8019247 |
+======+=====+===========+===========+
```

Power is >80% for all combinations tested.
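For convenience, the corrected calculation (df fixed at n-2) can be wrapped into a function; a sketch with the illustrative name `power.corr`, reproducing the 'new' values above:

```r
# power with fixed error degrees of freedom (df = n-2)
power.corr <- function(CV, GMR, n, alpha = 0.05) {
  s   <- sqrt(2) * sqrt(log(CV^2 + 1))   # SD of a within-subject difference
  df  <- n - 2                           # fixed by subjects (and sequences)
  t1  <- qt(1 - alpha, df)
  nc1 <- sqrt(n) * (log(GMR) - log(0.8))  / s
  nc2 <- sqrt(n) * (log(GMR) - log(1.25)) / s
  100 * (pt(-t1, df, nc2) - pt(t1, df, nc1))
}

round(power.corr(CV = 0.20, GMR = 0.95, n = 24), 2)   # 89.60, as in the table above
round(power.corr(CV = 0.20, GMR = 1.00, n = 24), 2)   # 96.72
```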

» Maybe somebody of you has access to SAS or software specialized in power analysis (e.g., PASS or nQuery Advisor) and would like to check these results?

According to Dieter Hauschke Owen’s exact method is implemented in nQuery Advisor.

Since there is still some confusion about different methods for sample size estimation in bioequivalence (software, tables, approximations), an entire chapter of a new book [2] will be devoted to this issue.
1. S Patterson and B Jones
Bioequivalence and Statistics in Clinical Pharmacology
Chapman & Hall/CRC, Boca Raton, pp 230 (2006)
2. D Hauschke, VW Steinijans and I Pigeot
Bioequivalence Studies in Drug Development: Methods and Applications
Wiley, New York (2007)

Cheers,
Helmut Schütz
mathews
★

India,
2008-04-03 13:19

@ Helmut
Posting: # 1757
Views: 28,095

## Power Calculation for replicate crossover design

Dear all,

Please tell me how to calculate power for a replicate crossover design (2 treatment 2 sequence 4 period).

Regards

Matz
Helmut
★★★  Vienna, Austria,
2008-04-03 13:49

@ mathews
Posting: # 1758
Views: 28,637

## Power Calculation for replicate crossover design

Dear Matz,

you will get the same power in N/2 subjects in a 4-period replicate design (TRRT|RTTR) as compared to a 2×2×2 design in N subjects (half the sample size, but the same number of treatments!). If you perform the study in a 2-treatment 2-sequence 3-period design (e.g., TRT|RTR), the same power is obtained in ¾N subjects.[1]
1. László Tóthfalusi
Scaled Average Bioequivalence to Evaluate Bioequivalence of Highly Variable Drugs
Presentation given at “Dissolution Testing, Bioavailability & Bioequivalence Conference”
Budapest, May 24th, 2007

Cheers,
Helmut Schütz
mathews
★

India,
2008-04-04 13:21

@ Helmut
Posting: # 1763
Views: 27,585

## Power Calculation for replicate crossover design

Dear HS,

Thanks for your reply.

How can we modify the above R code for a replicate crossover design?
The above code uses the within-subject s.d. for a 2×2 design, but in a replicate crossover design we will get within-subject s.d.s for both test and reference.

So which value should be used?

Regards,

Matz
Helmut
★★★  Vienna, Austria,
2009-09-25 19:20

@ Helmut
Posting: # 4255
Views: 26,259

## Exact Power

Dear all,

I made an investment and was able to compare results from R with the current version 7 of nQuery Advisor:*
```
+======+=====+===========+===================+
|  GMR |  n  |  R-code   | nQuery Advisor 7  |
+------+-----+-----------+--------+----------+
| 0.85 | 134 | 0.8017723 | 80.17  | 80.17722 |
| 0.90 |  38 | 0.8154940 | 81.54  | 81.54941 |
| 0.95 |  20 | 0.8346802 | 83.46  | 83.46802 |
| 1.00 |  16 | 0.8331980 | 83.32  | 83.32001 |
| 1.05 |  18 | 0.8001854 | 80.01  | 80.01854 |
| 1.10 |  32 | 0.8100682 | 81.00  | 81.00682 |
| 1.15 |  72 | 0.8042181 | 80.42  | 80.42180 |
| 1.20 | 294 | 0.8019247 | 80.19  | 80.19246 |
+======+=====+===========+========+==========+
```

nQuery requires sqrt(MSE) rather than CV as input, i.e., sqrt(ln(CV²+1)). The maximum precision is 6 decimal places; therefore 0.1980422 is used rather than the value in full precision (0.1980422004353650284627675024887) as in R. But you see this value only if you click the respective cell; both displayed and printed is 0.198042. nQuery gives power in percent with a maximum of two decimal digits. That's a pity, and the values in the first column were the best one could get (display/print). If you click in each cell, power is given in percent to 5 decimal digits (7 significant digits); second column. It's clear that results are not rounded, but truncated. This looked weird at first, but at least for power it makes sense.
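That conversion is easy to verify in R at full double precision:

```r
CV      <- 0.20
sqrtMSE <- sqrt(log(CV^2 + 1))   # nQuery's input: sqrt(MSE) on the log scale
sprintf("%.16f", sqrtMSE)        # "0.1980422004353650"
```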
Values agree nicely with the exception of a small discrepancy at GMR=1 (83.31980% vs. 83.32001%).
If I recall correctly, Dieter Hauschke wrote something about numerical problems in power estimation at T/R=1 in one of his papers. I love validating software.

* Janet D Elashoff
nQuery Advisor Version 7.0
Statistical Solutions, Cork, Ireland (2007)

Cheers,
Helmut Schütz
ElMaestro
★★★

Denmark,
2009-09-27 23:27

@ Helmut
Posting: # 4256
Views: 26,208

## Exact Power

Hi HS,

» [Blahdeeblah] Therefore 0.1980422 rather than the
» value in full precision (0.1980422004353650284627675024887) [more blah]

Try to put yourself in the shoes of the software writer. There are two known options for calculating power: one is the brute-force approach, the other goes via probabilities of the noncentral t-distribution.
The former can be cumbersome and is generally not preferred; it takes time, but it converges asymptotically to the true power with the number of resamples if the random number generator works (and trust me, it is not that hard to make one that does!). So there is a preference for the latter. Now, actually programming a stable Al Gore Rhythm which gets the probs from the nct dist is a nightmare, so the developer of course googles it and finds some really neat source code. This is what the developers of R did, and a dollar gets ten that this is what others do as well.
And here's the deal: if you look at that source you can easily see that it is based on some (necessary) helper constants and short-cuts and convergence criteria and whatnot. In other words, whatever the algo does, it is not exact. It is approximate, but how good that approximation actually is is unknown (the unknown error will be a function of the conditions, by the way).
We know R's power is an approximation since it uses an adapted ASA243. So when two or more applications get the same result it could be either because they are both right, or because they both make the same errors (as would be the case if they are based on mods of the same algo, etc.), or it could be pure chance. And we have absolutely no way of telling which one is right until some clever guy works out the integrals exactly. I'll buy you a Mozartkugel if you do it.

» I love validating software.

Decompile your recent acquisition and check if they one way or another use the ASA243. If they do, why validate?

EM.
Helmut
★★★  Vienna, Austria,
2009-09-28 15:21

@ ElMaestro
Posting: # 4260
Views: 26,775

## Software validation

Hi ElMaestro!

» Try to put yourself in the shoes of the software writer.

Oh, I put these boots on many years ago, and they got muddy almost immediately. I tried to get a numerical approximation of the cdf of the t-distribution, which led me to the gamma function and headaches. Finally I ended up with numerical approximations.[1,2]

» […] whatever the algo does it is not exact. It is approximate, but how good that approximation actually is is unknown.

Sure.

» […] we have absolutely no way of telling which one is the right one until some clever guy works out the integrals exactly.

Definitely not me. I’m getting more stupid every day.

» I'll buy you a Mozartkugel if you do it.

THX, no need.

» Decompile your recent acquisition and check if they one way or another use the ASA243.

I guess the SW was built using the Borland C++ compiler. You know that reverse engineering of software is a breach of the license? BTW, according to the manual the 'Illinois method' (Algorithm AS 184, FORTRAN source) is used. I assume AS 243 is 'better' than AS 184, but to what extent? Is it my job to go through the pros and cons of algorithms?

» If they do, why validate?

First, different algos are used. Second, I have to validate software - or not?
At the product's site I found a nice statement:

These sets of solutions were reviewed by Janet Elashoff, who checked for consistency, face validity, and for computational accuracy against other sources.

Face validity?

What I actually wanted to do was to check my R code (which is also part of bear) against a piece of SW of high reputation. Checking the sample size I wasn't satisfied (only integers), therefore I opted to do it the other way 'round (power). It made me angry that there is no (easy) way to obtain more than 4 significant digits; this effectively prevents external validation.
Another example from the dirt track of SW validation: Diletti et al. published exact sample size tables for BE ranges of 0.9-1.11 and 0.7-1.43.[5] When trying to validate my code against these tables I found small discrepancies (only at low CVs and close to the acceptance range). By trial and error I got the solution: I could reproduce the tables only if setting the upper acceptance limit not to the reciprocal of the lower AL (1/0.9, 1/0.7) but to exactly 1.1111 or 1.4286…
BTW, nitpicking Diletti’s Table 1 (power 70%, T/R=1, CV=7.5%) and nQuery 7 give n=4, whilst my R code, StudySize 2.0.1, and FARTSSIE 1.6 come up with n=6. Now what? Go with a democratic vote of 3:2 for 6?

1. Gardiner DA and Bombay BF
An Approximation to Student's t
Technometrics 7(4), 71-2 (1965)
2. Abramowitz M and Stegun IA
Handbook of Mathematical Functions
National Bureau of Standards, Applied Mathematics Series 55, Washington, D.C., pp 948-9 (10th printing with corrections 1972)
3. Lenth RV
Statistical Algorithms. Algorithm AS 243: Cumulative Distribution Function of the Non-Central t Distribution
Applied Statistics 38(1), 185-9 (1989)
4. Bohrer R, Schervish M and Sheft J
Statistical Algorithms. Algorithm AS 184: Non-Central Studentized Maximum and Related Multiple-t Probabilities
Applied Statistics 31(1), 309-17 (1982)
5. Diletti E, Hauschke D and Steinijans VW
Sample size determination: Extended tables for the multiplicative model and bioequivalence ranges of 0.9 to 1.11 and 0.7 to 1.43
Int J Clin Pharmacol Ther Toxicol 30(Suppl 1), S59-62 (1992)
6. Diletti E, Hauschke D and Steinijans VW
Sample size determination for bioequivalence assessment by means of confidence intervals
Int J Clin Pharmacol Ther Toxicol 29(1), 1-8 (1991)

Cheers,
Helmut Schütz
ElMaestro
★★★

Denmark,
2009-09-28 19:12

@ Helmut
Posting: # 4261
Views: 26,040

## All the important answers are here!

Dear HS,

» Oh, I put these boots on many years ago, and they got muddy almost immediately. I tried to get a numerical approximation of the cdf of the t-distribution, which led me to the gamma function and headaches.
» Finally I ended up with numerical approximations.[1,2]

Let me highfive you for this. I did the same, in C, and it took me quite some time, only to find out later that C actually comes with a beautiful function called lgamma which does it and returns the logarithm. I wasted a good part of my youth reinventing the wheel, it seems.

» BTW, nitpicking Diletti's Table 1 (power 70%, T/R=1, CV=7.5%) and nQuery 7 give n=4, whilst my R code, StudySize 2.0.1, and FARTSSIE 1.6 come up with n=6. Now what? Go with a democratic vote of 3:2 for 6?

Two solutions:
1. Ask dlabes
2. Get your power from a program that bases its evaluation on brute force rather than on numerical integration.
EM.
Helmut
★★★  Vienna, Austria,
2009-09-28 19:52

@ ElMaestro
Posting: # 4263
Views: 26,053

## All the important answers are here!

Hi ElMaestro!

» Let me highfive you for this.

BTW, great subject line! 42? (or look what Google has to say).

» I wasted a good part of my youth reinventing the wheel

Welcome to the club of wheel re-inventors.

» » BTW, nitpicking Diletti's Table 1 […]
» Two solutions:
» 1. Ask dlabes

Ha! Have you read his post?

With that code you can reproduce the block for power=80% […] with some minor deviations for CV=5% and 7.5%.

» 2. Get your power from a program that bases its evaluation on brute force rather than on numerical integration.

Do you have one handy?

Cheers,
Helmut Schütz
ElMaestro
★★★

Denmark,
2009-09-28 20:18

@ Helmut
Posting: # 4264
Views: 26,035

## All the important answers are here!

Hi HS,

» » 2. Get your power from a program that bases its evaluation on brute force rather than on numerical integration.
»
» Do you have one handy?

I think I can make you an elmaestrolophystic piece of software for this; will take "some time"* to write it, I guess.

EM.

*: Remember Alan Turing? He proved that there is no way of knowing if an algorithm converges until it converges. If it converges. Unfortunately it works the same way when I fire up my C compiler. In spite of good intentions and fancy plans I may get personally stuck in an infinite loop.
By the way, it is pretty much the same with women and telephones, but that's a completely different story. And another forum, I guess.
yjlee168
★★  Kaohsiung, Taiwan,
2009-09-28 21:30
(edited by yjlee168 on 2009-09-28 22:00)

@ Helmut
Posting: # 4265
Views: 26,026

## All the important answers are here!

Dear Helmut,

I just tried PASS and got the following (for power 70%, T/R=1, CV=7.5%).
```
Power Analysis of 2x2 Cross-Over Design for Testing Equivalence Using Ratios
Page/Date/Time   1    2009/9/29 03:32:58

Numeric Results for Testing Equivalence Using a Cross-Over Design

                   Lower     Upper
         Total     Equiv.    Equiv.            Coefficient
         Sample    Limit     Limit    True     of
         Size      of Ratio  of Ratio Ratio    Variation
Power    (N)       (RL)      (RU)     (R1)     (COV)      Alpha   Beta
0.7290     4       0.8000    1.2500   1.0000   0.0750     0.0500  0.2710
0.8434     6       0.8500    1.1765   1.0000   0.0750     0.0500  0.1566
0.7780    10       0.9000    1.1111   1.0000   0.0750     0.0500  0.2220
```

N = 4. Sorry for my previous message (I set power= 0.8, not 0.7!).
» [...]
» Do you have one handy?

All the best,
---Yung-jin Lee
bear v2.8.4:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
Helmut
★★★  Vienna, Austria,
2009-09-28 23:18

@ yjlee168
Posting: # 4266
Views: 26,150

## All the important answers are here!

Good morning, bears!

Thanks for the PASS-run. Now the game is tied at 3:3 for sample sizes of 4 or 6 (target power 70%, T/R=1, CV=7.5%).

```
                  power %      n
Diletti's table   >70          4
PASS              72.90        4
nQuery 7          71.559       4
                  97.943       6
StudySize 2.0.1   15.9         NA (4: <70)
                  70           6 (4.927)
                  72.69        6 (5)
                  93.201       6
R-code            66.674       NA (4: <70)
                  93.096       6 (5)
                  98.697       6
FARTSSIE 1.6      66.674       NA (4: <70)
                  93.096       6 (5)
                  98.697       6
```

Sample sizes are rounded up to the next even integer (calculated results in parentheses). From the manual I would say that PASS also uses AS 243.

Cheers,
Helmut Schütz
d_labes
★★★

Berlin, Germany,
2009-09-29 13:37

@ Helmut
Posting: # 4267
Views: 25,986

## 42! SASophylistic power oracle

Dear guys,

let me take part in your hairsplitting party.
I hope you have been waiting for me already.

Here are the results of the SAS Proc Power.
Code can be found here, of course appropriately adapted.

```
                The POWER Procedure
        Equivalence Test for Paired Mean Ratio

               Fixed Scenario Elements

       Distribution                   Lognormal
       Method                             Exact
       Lower Equivalence Bound              0.8
       Upper Equivalence Bound             1.25
       Alpha                               0.05
       Geometric Mean Ratio                   1
       Coefficient of Variation           0.075
       Correlation                            0

                 Computed Power
                                N
              Index    Pairs    Power
                 1        3    0.604
                 2        4    0.867
                 3        5    0.968
                 4        6    0.993
```
The power is given with 3 decimals only by default; there is no simple way to change this.

Seems the power is calculated totally wrong in this case!
This is astonishing enough, because the same Proc Power code can reproduce most of the sample sizes given in Diletti et al., at least for power 80%.
Ok, Proc Power is called 'experimental' by SAS, which usually means the user shall do the testing and report bugs. Others call such software beta.

My own hand-rolled code (relying on Owen's Q-functions, undocumented but available in SAS) gives:
```
       Coeff. of
       variation  'true'  lower BE  upper BE        achieved  required
alpha     (%)     µ(T/R)   bound     bound      N    power     power
 0.05     7.5       1       0.8      1.25       4   0.729014    0.7
 0.05     7.5       1       0.8      1.25       6   0.986992    0.7
 0.05     7.5       1       0.8      1.25       8   0.999577    0.7
```
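For R users, the same exact power can be obtained by numerically integrating the conditional acceptance probability over the distribution of the estimated standard deviation; in principle this is equivalent to Owen's Q approach (a sketch only, not the SAS code above; `exact.power` is an illustrative name):

```r
# exact power of TOST in a 2x2 cross-over: integrate the conditional
# acceptance probability over the scaled-chi distribution of S/sigma
exact.power <- function(CV, GMR, n, alpha = 0.05) {
  s    <- sqrt(2) * sqrt(log(CV^2 + 1))   # SD of a within-subject difference
  df   <- n - 2
  tval <- qt(1 - alpha, df)
  nc1  <- sqrt(n) * (log(GMR) - log(0.8))  / s
  nc2  <- sqrt(n) * (log(GMR) - log(1.25)) / s
  f <- function(u) {                      # u = S/sigma, where U^2 ~ chi^2(df)/df
    p <- pnorm(-nc2 - tval * u) - pnorm(-nc1 + tval * u)
    pmax(p, 0) * dchisq(df * u^2, df) * 2 * df * u   # density via change of variables
  }
  integrate(f, 0, Inf)$value
}

exact.power(CV = 0.075, GMR = 1, n = 4)   # 0.729014, as in the table above
exact.power(CV = 0.075, GMR = 1, n = 6)   # 0.986992
```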

Regards,

Detlew
Helmut
★★★  Vienna, Austria,
2009-09-29 14:52

@ d_labes
Posting: # 4269
Views: 26,372

## Al Gore Rhythms

Dear D Labes!

» Hope you have already waited for me.

Yesss!

» Here are the results of the SAS Proc Power.
» Seems the power is calculated in this case totally wrong!

Fascinating.

» My own rolled code (relying on Owens Q-functions, undocumented but available in SAS) gives:

That’s the more interesting part (homebrew)!

```
                        n  % power    n  % power
------------------------------------------------
#1 Diletti et al.       4 >70         6 >90
#2 PASS                 4  72.90
#3 D Labes' SAS code    4  72.9014    6  98.6992
#4 nQuery Advisor 7     4  71.559     6  97.943
#5 R-code               4  66.674     6  98.697
#6 FARTSSIE 1.6         4  66.674     6  98.697
#7 StudySize 2.0.1      4  15.9 (!)   6  93.201
```

n=4 takes the lead by 4:3! Observations:
- #5 (based on Algorithm AS 243) and #6 give identical results. This is not surprising, because the VBA routine within FARTSSIE was written by Russell V. Lenth, the author of AS 243.
- #2 and #3 agree; for n=6 power is comparable with #5/6.
- #4 calculates lower power than #1-3, but would still suggest n=4.
- The difference between #1-3 and #5/6 might be due to different algorithms for estimating the noncentral t-distribution (#1/3: ?, #2: AS 243?, #4: AS 184, #5/6: AS 243).
- #7 uses approximations (Ref. 2 in this post); however, it yields conservative results.

Maybe I should contact Dieter Hauschke to join the party…

Cheers,
Helmut Schütz
ElMaestro
★★★

Denmark,
2009-09-29 18:14

@ Helmut
Posting: # 4270
Views: 25,964

## Al Gore Rhythms

Dear HS,

» n=4 takes the lead by 4:3!

Eighth observation: I made a little application that goes under the codename EFG (which is short for ElF#ckupGrande) and which implements brute force in conjunction with Mersenne Twister and some very ugly elmaestrolophystic code clutter.

At N=4, CV=0.075 and T/R=1.0 I get power = 72.7191% with 2000000 iterations.
At N=6 power is 98.97435%.

EM.
yjlee168
★★  Kaohsiung, Taiwan,
2009-09-29 23:55

@ Helmut
Posting: # 4272
Views: 26,000

## Al Gore Rhythms

Dear Helmut,

Please read my other message in this thread. Sorry for my mistake. I find that PASS can reach results that are pretty close to D Labes's SAS run.

» That's the more interesting part (homebrew)!

```
                        n  % power    n  % power
------------------------------------------------
#1 Diletti et al.       4 >70         6 >90
#2 PASS                 4  72.90      6  98.70 <-- YJ
#3 D Labes' SAS code    4  72.9014    6  98.6992
#4 nQuery Advisor 7     4  71.559     6  97.943
#5 R-code               4  66.674     6  98.697
#6 FARTSSIE 1.6         4  66.674     6  98.697
#7 StudySize 2.0.1      4  15.9 (!)   6  93.201
```

All the best,
---Yung-jin Lee
yjlee168
★★  Kaohsiung, Taiwan,
2009-09-29 23:49

@ yjlee168
Posting: # 4271
Views: 25,876

## All the important answers are here!

Dear Helmut,

I have corrected the PASS run for the power when N = 6. Please update your collected data. Thanks.

```
Power Analysis of 2x2 Cross-Over Design for Testing Equivalence Using Ratios

Numeric Results for Testing Equivalence Using a Cross-Over Design

                   Lower     Upper
         Total     Equiv.    Equiv.            Coefficient
         Sample    Limit     Limit    True     of
         Size      of Ratio  of Ratio Ratio    Variation
Power    (N)       (RL)      (RU)     (R1)     (COV)      Alpha   Beta
0.7290     4       0.8000    1.2500   1.0000   0.0750     0.0500  0.2710
0.9870     6       0.8000    1.2500   1.0000   0.0750     0.0500  0.0130
```

All the best,
---Yung-jin Lee
KDA
☆

India,
2019-01-23 04:24

@ Helmut
Posting: # 19800
Views: 2,391

## R-Code for Power in 2×2 Cross-over

Dear all,

I am an amateur in the BA/BE area and am trying to learn from this forum. I would appreciate it if anyone could kindly answer a few of my queries about the R code used to reproduce Fig. 1c by Diletti et al. (1991).

Q1) In a 2×2 cross-over study, test and reference drugs are given only once. Can we get the true intra-subject coefficient of variation (CV) from a 2×2 cross-over study?

Q2) If it is possible to calculate the intra-subject CV from a 2×2 cross-over study, does this value come from the reference drug only?

Q3) What is the equation for calculating this intra-subject CV?

Q4) In the equation `s <- sqrt(2)*sigmaW`, what does 's' stand for, and why is '2' square-rooted?

Q5) I have used the following R code for calculating the point estimate and CI limits of AUC:

```
A1 <- data$AUC[which(data$TRT=="R1" & data$PER==1)]
B2 <- data$AUC[which(data$TRT=="R2" & data$PER==2)]
B1 <- data$AUC[which(data$TRT=="R2" & data$PER==1)]
A2 <- data$AUC[which(data$TRT=="R1" & data$PER==2)]

AB <- log(B2)-log(A1)
BA <- log(B1)-log(A2)
n1 <- length(AB)
mAB <- mean(AB)
vAB <- var(AB)
n2 <- length(BA)
mBA <- mean(BA)
vBA <- var(BA)
mD  <- mean(log(c(B1,B2))) - mean(log(c(A1,A2)))
MSE <- (((n1-1)*vAB + (n2-1)*vBA)/(n1+n2-2))/2
alpha <- 0.05

lo <- mD - qt(1-alpha,n1+n2-2)*sqrt(MSE)*sqrt(1/(2*n1) + 1/(2*n2))
hi <- mD + qt(1-alpha,n1+n2-2)*sqrt(MSE)*sqrt(1/(2*n1) + 1/(2*n2))

PE      <- 100*exp(mD)
CI90.lo <- 100*exp(lo)   # renamed: '90_CI_Lower' is not a valid R name
CI90.hi <- 100*exp(hi)
SE <- sqrt(MSE)*sqrt(1/(2*n1) + 1/(2*n2))
SD <- sqrt(MSE)
```

Can I calculate the CV with the following equation?

CV = SD/exp(mD)

Based on my understanding, this calculated CV is not the same as the 'CV' that represents the intra-subject coefficient of variation used to reproduce Fig. 1c by Diletti et al. (1991).

What code/formula should I use if I want to calculate the intra-subject coefficient of variation?
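For reference, the conversion used earlier in this thread (`sigmaW <- sqrt(log(CV^2+1))`, i.e., MSE = log(CV²+1) on the log scale) can be inverted to get the intra-subject CV from the MSE (`mse2cv` is only an illustrative name):

```r
# inverse of the conversion used earlier in this thread:
# MSE (log scale) = log(CV^2 + 1)  =>  CV = sqrt(exp(MSE) - 1)
mse2cv <- function(MSE) sqrt(exp(MSE) - 1)

mse2cv(log(0.20^2 + 1))   # recovers 0.20
```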

Regards
KDA
Ohlbe
★★★

France,
2019-01-27 18:24

@ KDA
Posting: # 19814
Views: 2,075

## R-Code for Power in 2×2 Cross-over

Dear KDA,

» Q1) In a 2×2 cross-over study, test and reference drugs are given only once. Can we get the true intra-subject coefficient of variation (CV) from a 2×2 cross-over study?

No. You need a replicate design study.

I'll let others answer Q5.

Regards
Ohlbe