d_labes
★★★

Berlin, Germany,
2011-01-14 16:21

Posting: # 6417
 

 Power for unbalanced cross-over [Power / Sample Size]

Dear all,

the following may be a little bit of hair-splitting, but that's how you know me :cool::

The formulas for power and sample size given in many references, and obviously also implemented in software for power calculation and sample size estimation, are formulated in terms of the total sample size.
The implicit assumption behind these formulas is that the sequence groups are balanced, i.e. that the same number of subjects is randomized to each. For the classical 2x2 cross-over design this would imply that only even total numbers of subjects are reasonable.

Nevertheless, odd numbers of subjects necessary to achieve a target power given the CV and true ratio are sometimes reported. See for instance Helmut's last famous lecture, slide 33. These may be in error, or at least based on incorrect power values calculated via the formulas assuming a balanced design.

The key terms in the power calculations are the so-called non-centrality parameters, which in the case of the classical 2x2 cross-over with log-transformed PK metrics are given as
  delta1 = (µT - µR - ln(lBE))/(se*sqrt(2/N))
  delta2 = (µT - µR - ln(uBE))/(se*sqrt(2/N))

with µT and µR the means for Test and Reference, se=sqrt(MSE), lBE and uBE the lower and upper acceptance limits, and N the total number of subjects.
If these parameters are to be calculated for an unbalanced cross-over, i.e. with different numbers of subjects in the sequence groups, the term sqrt(2/N) has to be replaced by
  sqrt((1/n1+1/n2)/2)
which gives different non-centrality parameters and in turn different power values if n1 not equal n2.
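
To make this concrete, here is a minimal R sketch (my own illustration, not PowerTOST code) that evaluates both non-centrality parameters for the example below; a true ratio of 0.95 and acceptance limits 0.80–1.25 (the usual defaults) are assumed, since they are not stated explicitly:

```r
# Non-centrality parameters for an unbalanced 2x2 cross-over (log scale).
# Assumed (not stated above): true ratio T/R = 0.95 and acceptance
# limits 0.80 ... 1.25, i.e. the usual defaults.
CV <- 0.10
n1 <- 3; n2 <- 4                     # sequence groups, N = 7
se  <- sqrt(log(CV^2 + 1))           # se = sqrt(MSE) on the log scale
sem <- se * sqrt((1/n1 + 1/n2)/2)    # replaces se*sqrt(2/N)
delta1 <- (log(0.95) - log(0.80))/sem
delta2 <- (log(0.95) - log(1.25))/sem
round(c(delta1, delta2), 3)          # roughly 3.190 and -5.094
```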

Here is an example:
CV=10%, N=7, which can at best (least unbalanced) be realised with n1=3 and n2=4; which of the sequences TR or RT is group 1 or 2 is unimportant.
# power.TOST also uses formulas based on assuming balanced sequence groups
# n is here the total number

> power.TOST(CV=0.1, n=7)
[1] 0.8625377
# modified code to account for "unbalancedness"
[1] 0.8560221
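
Since the modified code is not shown, here is a self-contained sketch of such a calculation (my own hedged reconstruction, not Detlew's actual code): the exact TOST power, obtained by integrating the conditional rejection probability over the distribution of the residual standard error, assuming the usual defaults alpha=0.05, true ratio 0.95 and limits 0.80–1.25:

```r
# Sketch: exact TOST power for a 2x2 cross-over with sequence groups
# n1, n2 (a reconstruction; PowerTOST itself works via Owen's Q, which
# is equivalent to this integral).
# Assumed defaults: alpha = 0.05, true ratio 0.95, limits 0.80 ... 1.25.
power.TOST.unbal <- function(CV, n1, n2, alpha = 0.05, theta0 = 0.95,
                             theta1 = 0.80, theta2 = 1.25) {
  sigma <- sqrt(log(CV^2 + 1))        # SD on the log scale
  df    <- n1 + n2 - 2
  cf    <- sqrt((1/n1 + 1/n2)/2)      # replaces sqrt(2/N)
  tval  <- qt(1 - alpha, df)
  lo <- log(theta1); hi <- log(theta2); th <- log(theta0)
  smax  <- (hi - lo)/(2*tval*cf)      # largest SE still allowing BE
  f <- function(s) {                  # P(90% CI within limits | s) * density of s
    (pnorm((hi - tval*s*cf - th)/(sigma*cf)) -
     pnorm((lo + tval*s*cf - th)/(sigma*cf))) *
      dchisq(df*s^2/sigma^2, df) * 2*df*s/sigma^2
  }
  integrate(f, 0, smax)$value
}
power.TOST.unbal(CV = 0.1, n1 = 3, n2 = 4)  # should reproduce 0.8560221
```

Calling it with n1=2, n2=5 gives the value 0.791085 quoted further below.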


Ok, the difference is not that big, and I must confess that I have not found any instance where an odd number reported for the 2x2 cross-over is invalidated by power values calculated taking the "unbalancedness" into account, as long as the imbalance is only 1 subject.

But I think for cases where the power itself is of interest, f.i. in the first step of the evaluation of a 2-stage design via Potvin's Method C where the power has to be checked, it can make a difference if the imbalance is large relative to the sample size.
# modified code to account for "unbalancedness"
# CV=10%, n1=2, n2=5 -> N=7

[1] 0.791085
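
A quick way to see why the power drops further here: the factor replacing sqrt(2/N) grows with the imbalance, which shrinks both non-centrality parameters. A small sketch for N=7:

```r
# Illustration: the term replacing sqrt(2/N) grows with the imbalance,
# shrinking the non-centrality parameters and hence the power.
N <- 7
cf_bal <- sqrt(2/N)               # balanced formula
cf_34  <- sqrt((1/3 + 1/4)/2)     # n1 = 3, n2 = 4 (least unbalanced)
cf_25  <- sqrt((1/2 + 1/5)/2)     # n1 = 2, n2 = 5 (more unbalanced)
round(c(cf_bal, cf_34, cf_25), 4)
```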


Power: That which statisticians are always calculating but never have.
Stephen Senn


Regards,

Detlew
Helmut
★★★
Vienna, Austria,
2011-01-14 17:16

@ d_labes
Posting: # 6418
 

 Great post!

Dear D. Labes!

❝ […] See for instance Helmut's last famous lecture


THX for calling it "famous".

❝ […] the term sqrt(2/N) has to be replaced by

  sqrt((1/n1+1/n2)/2)

❝ which gives different non-centrality parameters and in turn different power values if n1 not equal n2.


Right.

# modified code to account for "unbalancedness"


Great. I guess you are considering updating your package. :-D

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2011-12-22 10:30

@ Helmut
Posting: # 7806
 

 Santa is coming

Dear Helmut! Dear All!

❝ […] the term sqrt(2/N) has to be replaced by

  sqrt((1/n1+1/n2)/2)

❝ which gives different non-centrality parameters and in turn different power values if n1 not equal n2.


"Was lange währt wird gut" (Good things come to those who wait).

Let me dedicate PowerTOST version 0.9-2 (on CRAN now, with release date 24-Dec-2011) to you as my Christmas gift. Of course also to all of you who don't celebrate this festival or don't go there (but to Florida :pirate:).
But! Do not open before Xmas :cool:!

Have a look into the NEWS and notice especially the function power2.TOST().
I hope that at least all adepts of 2-stage designs will find this gift useful to some extent :wink:.

❝ But I think for cases where the power itself is of value, f.i. for the first step of the evaluation of a 2-stage design via Potvin's Method C where the power has to be checked, it can make a difference



The power to know isn't necessarily SAS.

Best Christmas wishes and a Happy New Year,
Detlew Labes
Ben
★    

2012-01-02 18:39

@ d_labes
Posting: # 7851
 

 PowerTOST

Dear D. Labes,

First, I want to say thanks for the great package PowerTOST!
I saw that you added some new features, in particular the df2 degrees of freedom. The latter brings me to my second point. In your excerpt (or the manual) you wrote "The df2 are also more appropriate if the planning of sample size is done based on CV’s originating from real mixed model analysis (via Proc MIXED in SAS or lme() in R)". Why is that? Is there a reference where this is mentioned (haven't seen it explicitly in Senn's "Cross-over Trials in Clinical Research")?

Thank you!

Best regards,
Ben

PS: Of course, everyone else can comment on this as well
PPS: Why don't you include balanced incomplete block designs (e.g. the 3x4x3 design mentioned in Chow and Liu's 'Design and Analysis of Bioavailability and Bioequivalence Studies') into PowerTOST?


Edit: Moved to this thread. [Helmut]
Helmut
★★★
Vienna, Austria,
2012-01-02 18:53

@ Ben
Posting: # 7852
 

 BIBDs

Dear Ben!

❝ PPS: Why don't you include balanced incomplete block designs (e.g. the 3x4x3 design mentioned in Chow and Liu's 'Design and Analysis of Bioavailability and Bioequivalence Studies') into PowerTOST?


I can only guess: pragmatism? Personally I haven’t seen a single one in my entire career. ;-)

@Detlew: THX again. I sneaked only into the help files before Christmas (it was tough to keep my hands off) and unwrapped the gift a few days later. How nice! In the help files you wrote:


Note

Scripts for creation of these data.frame's can be found in the \test sub-directory of the package. Comparing the results of that scripts to the corresponding data.frames can be used for validation purposes



Hhm. I don’t have such a directory. :confused:

R
└ R-2.14.1
  └ library
    └ PowerTOST
      ├ data
      ├ doc
      ├ help
      ├ html
      ├ Meta
      └ R


d_labes
★★★

Berlin, Germany,
2012-01-17 12:49

@ Helmut
Posting: # 7955
 

 Uuups

Dear Helmut!

Back from some additional weeks of summer spent warming my old bones on Lanzarote :cool:, I noticed your post only now. Sorry.

❝ ❝ Scripts for creation of these data.frame's can be found in the \test sub-directory of the package. Comparing the results of that scripts to the corresponding data.frames can be used for validation purposes


❝ Hhm. I don’t have such a directory. :confused:


Uuuupppps :ponder:.
You are right. Installing PowerTOST or updating it will not result in such a subdirectory, although it is present in the tarball I uploaded to CRAN.

Seems I hadn't really understood what the purpose of the \tests subdirectory of an R package is and how it functions. Any R guru out there who won't let me die in blithe ignorance?
Until I have fixed this issue you can download the source code tarball to get access to the scripts.

Have (or had) a nice start into the New Year 2012 to All.
And remember! It's the last year before the final end (21 DEC 2012) :lol3:.

Regards,

Detlew
yjlee168
★★★
Kaohsiung, Taiwan,
2013-06-12 01:03

(edited on 2013-06-12 22:47)
@ d_labes
Posting: # 10773
 

 the \tests subdirectory in a R-package

Dear Detlew,

Just happened to drop by. I see these are all ready-to-run scripts under the .../tests folder. You may consider putting these .R scripts under the PowerTOST/R folder and then creating a /PowerTOST/demo folder with a file called '00Index', which should contain the following lines:
RatioF_test     test for blabla...
parallel_test   test for blabla...
...

Besides the file '00Index' under /PowerTOST/demo, you will need the following files under /PowerTOST/demo, depending on how many test files you have:
RatioF_test.R
parallel_test.R
...


Taking 'RatioF_test.R' as an example (similar for the others), it contains only one line:
test_RatioF()

After the package is compiled and installed, users can easily access the test files from the R console.
library(PowerTOST)
demo(RatioF_test)     ### will run test_RatioF.R
demo(parallel_test)   ### will run test_parallel.R
...


Currently, users have to go to the library/PowerTOST/tests folder, open an .R script with an ASCII editor, select all of the script, copy it, and then paste it into the R console to run it. In this case the R console may look messy because the code and the output are mixed together. Using demo(...) is much, much better: it won't show any R scripts or code, only the outputs. Clean and beautiful.

❝ [...]

❝ Seems I hadn't really understood what the purpose of the \tests subdirectory of an R package is and how it functions. Any R guru out there who won't let me die in blithe ignorance?

❝ [...]


All the best,
-- Yung-jin Lee
bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
d_labes
★★★

Berlin, Germany,
2013-06-14 15:28

@ yjlee168
Posting: # 10790
 

 To demo() or not

Dear Yung-jin!

Nice to see you now more often here :cool:.

You answered a question that was not really asked :-D.
Thanks nevertheless for your suggestion; I will give it a try if my time allows.

But some buts:
- The scripts are not intended to demonstrate something in PowerTOST but to test the results for correctness. Thus it seems illogical to me to turn them into demos.

- To avoid the clutter of mixed R statements and output (which occurs if one works as you described) there is a simpler way: use the R function source() in the console, f.i.
source("C:/path/name/to/scripts/test_2x2.R")
and you will not see any echo of the statements in the script, only the output.
If you use not the plain console or the simple RGui but an IDE (integrated development environment), as every serious R user should, you can even run scripts with a single mouse click.

As an IDE I recommend RStudio, or Eclipse with the add-in StatET, which is well known among developers of all kinds.

Regards,

Detlew
yjlee168
★★★
Kaohsiung, Taiwan,
2013-06-16 00:04

@ d_labes
Posting: # 10802
 

 demo() is not demo

Dear Detlew,

❝ Nice to see you now more often here :cool:.


Thank you. I dive more often than surf. Really enjoy this forum.

❝ But some buts:

❝ -The scripts are not intended to demonstrate something in PowerTOST but to test the results for correctness. Thus it seems illogical to me to create demos.


Yes, I know. Sorry for confusing you a little bit here. I was just trying to provide an easy-to-access way to run the R scripts that you put under the .../tests folder. It doesn't mean that these R scripts are all demos.

❝ -To avoid the clutter between R statements and output (occurs if one works as you described) there is a simpler way: use the R function source() in the console, f.i.

source("C:/path/name/to/scripts/test_2x2.R")

❝ and you will not see any echo of the statements in the script but only the output.


Yes, you're right. How could I forget this function! I learned it from an R book (The Art of R Programming: A Tour of Statistical Software Design by Norman Matloff, 2011) a while ago.

❝ As IDE I recommend RStudio or the among many developers of all kind well known Eclipse with the add-in StatET.


I used RStudio before and it is a very nice IDE. However, I ran into some forced-to-close situations when I used it with R v3.0.0 alpha and beta, so I switched back to a plain editor. I still have RStudio installed on my computer and will try Eclipse with StatET later. Thank you for the information.

All the best,
-- Yung-jin Lee
yjlee168
★★★
Kaohsiung, Taiwan,
2013-09-29 01:23

@ d_labes
Posting: # 11574
 

 OT: issue of '\tests' subdir again

Dear Detlew,

Recently I added a '\tests' subdirectory to my packages because the CRAN maintainer asked me to add tests. This reminded me of your previous post. I saw that you put the '\tests' subdir under '\inst', not at the same level as '\R'. In this case there are two subdirs under '\inst' in your PowerTOST package: '\tests' and '\doc', and after installation both will appear under the package directory.

If you want these R scripts to be run as tests (not as demos), you should probably move the subdir from '\inst\tests' to '\tests', i.e. one level up, at the same level as '\R' (reference: Writing R Extensions). Then, when you check your package with the command line R CMD check --as-cran pkg-name, you will see whether all test code is OK. For example, when I check my package stab, I get the following message:

F:\R>R CMD check --as-cran stab

* using log directory 'F:/R/stab.Rcheck'
* using R version 3.0.2 (2013-09-25)
* using platform: i386-w64-mingw32 (32-bit)
...[cut many lines here]
* checking Rd \usage sections ... OK
* checking Rd contents ... OK
* checking for unstated dependencies in examples ... OK
* checking examples ... NONE
* checking for unstated dependencies in tests ... OK
* checking tests ...
  Running 'stab_test_run.R'
 OK
      <--- here shows all results of test runs (I just had one test file for this example)
* checking PDF version of manual ... OK

See
  'F:/R/stab.Rcheck/00check.log'
for details.


Under the directory '\stab.Rcheck' there is a subdir called '\tests', where all results (outputs) of the test runs can be found, with the file extension .Rout. In this case the subdir '\tests' is not included in the binary package file, but it is still included in the source tarball; that is, '\tests' will not appear under the package directory after installation. I don't know whether this is what you want or not. I guess the CRAN maintainer relies on this to examine whether any tests are included with a package. If you put '\tests' under '\inst', you will not see the '* checking tests ...' messages.

ps. Helmut could not find the '\tests' subdir in his previous post. That was because you added the subdir '\inst\tests' only in v0.9-10; at that moment (2012-01-02) Helmut had installed v0.9-0 or an even earlier version, so of course there was no '\tests' subdir yet. The CRAN archives show:
...
PowerTOST_0.9-0.tar.gz  2011-Dec-15 18:07:47   255.8K   application/x-gzip
PowerTOST_0.9-10.tar.gz 2012-Jul-20 14:14:36   299.7K   application/x-gzip
...


Helmut should be able to find '\tests' subdir now.

❝ ...

❝ Seems I hadn't really understood what the purpose of the \tests subdirectory of an R package is and how it functions. Any R guru out there who won't let me die in blithe ignorance?

❝ Until I have fixed this issue you can download the source code tarball to get access to the scripts.

❝ ...


All the best,
-- Yung-jin Lee
d_labes
★★★

Berlin, Germany,
2012-01-17 12:53

@ Ben
Posting: # 7956
 

 PowerTOST robust

Dear Ben,

❝ First, I want to say thanks for the great package PowerTOST!


Thanx for the flowers.

❝ I saw that you added some new features, in particular the df2 degrees of freedom. The latter brings me to my second point. In your excerpt (or the manual) you wrote "The df2 are also more appropriate if the planning of sample size is done based on CV’s originating from real mixed model analysis (via Proc MIXED in SAS or lme() in R)". Why is that? Is there a reference where this is mentioned (haven't seen it explicitly in Senn's "Cross-over Trials in Clinical Research")?


Sorry, no reference available. It's my personal observation from looking at the results of the FDA code for replicate studies (which uses Proc MIXED, as you know).

Regards,

Detlew


The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz