Helmut
★★★
Vienna, Austria,
2015-06-17 14:33
(3206 d 23:21 ago)

Posting: # 14964
Views: 13,887
 

 Adjusting for expected dropout-rate [Power / Sample Size]

Dear all,

I regularly come across interesting methods for adjusting the sample size. Strangely enough, many people apply this formula:
\(n_{\textrm{adj}}=n_{\textrm{des}}\times (100+dor)/100 \tag{1}\)
where nadj is the adjusted sample size (dosed subjects), ndes the desired sample size (as estimated for the desired power), and dor the expected dropout-rate in percent. nadj is rounded up to give balanced sequences (crossover) or equal group sizes (parallel). This formula is flawed – especially for high sample sizes and/or high dropout-rates.
Example: 2×2 crossover and expected dropout-rate 15%

 ndes  nadj   n
  12    14   12
  24    28   24
  30    36   31
  36    42   36
  48    56   48
  64    74   63
  72    84   71
  96   112   95
 120   138  117
 144   166  141

If one applies this formula, the number of eligible subjects n might be lower than desired – resulting in a potential drop in power. Don’t shoot yourself in the foot. Use this one instead:
\(n_{\textrm{adj}}=100\times n_{\textrm{des}}/ (100-dor) \tag{2}\)
which gives:

 ndes  nadj   n
  12    16   14
  24    30   26
  30    36   31
  36    44   37
  48    58   49
  64    76   65
  72    86   73
  96   114   97
 120   142  121
 144   170  145

If the dropout-rate is ≈ as expected, the desired power will always be preserved (since n ≥ ndes).
(1) is flawed because the actual dropout-rate is based on the dosed subjects (i.e., calculated downwards from nadj – not upwards from ndes).
On the other hand, for low sample sizes and/or dropout-rates (2) might be overly conservative. Of course you could “pick out the best” from both (i.e., the nadj which will lead to the lowest n ≥ ndes).
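The two formulas and the tables above can be reproduced with a short sketch. This is an illustration in Python, not the forum's R; the rounding rules (nadj rounded up to the next even number for balanced 2×2 sequences, eligible n rounded half-up) are assumptions reverse-engineered from the tables:

```python
import math

def n_adj(n_des, dor, formula):
    """Adjust the desired sample size for an expected dropout-rate (dor in %).
    formula 1: multiply by (100 + dor)/100  -- the flawed approach
    formula 2: divide by (100 - dor)/100    -- preserves the desired power
    Result is rounded up to the next even number (balanced 2x2 crossover)."""
    if formula == 1:
        x = n_des * (100 + dor) / 100
    else:
        x = n_des * 100 / (100 - dor)
    return 2 * math.ceil(x / 2)

def n_eligible(n_dosed, dor):
    """Eligible subjects remaining after dropouts (rounded half-up)."""
    return math.floor(n_dosed * (100 - dor) / 100 + 0.5)

# reproduce both tables for a 15% expected dropout-rate
for n_des in (12, 24, 30, 36, 48, 64, 72, 96, 120, 144):
    n1 = n_adj(n_des, 15, 1)
    n2 = n_adj(n_des, 15, 2)
    print(n_des, n1, n_eligible(n1, 15), n2, n_eligible(n2, 15))
```

For ndes = 12 this gives nadj = 14 with only n = 12 eligible under formula (1), but nadj = 16 with n = 14 ≥ ndes under formula (2), matching the tables.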

A nice statement* [about an anticipated dropout rate of 15%]:

Note a very common mistake when calculating the total sample size is to multiply the evaluable sample size by 1.15 and not divide by 0.85.


An all too common error. In daily life many people fail to calculate the net value from the total amount and VAT as well: if the total is 110.– and the VAT is 10%, the net value is 100.– (i.e., 110/1.10) and not 99.– (110×0.90)… :-D
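The same divide-don't-multiply arithmetic, as a quick check (a Python one-off using the example's numbers):

```python
def net_from_total(total, vat_percent):
    # Correct: divide the gross total by (1 + VAT rate)...
    return total / (1 + vat_percent / 100)

print(round(net_from_total(110, 10), 2))  # 100.0
print(round(110 * 0.90, 2))               # 99.0 -- the common (wrong) shortcut
```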


  • Julious SA. Sample Sizes for Clinical Trials. Boca Raton: CRC Press; 2010. p.53.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! (Long live Ukraine!)
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2015-06-18 11:02
(3206 d 02:51 ago)

@ Helmut
Posting: # 14965
Views: 11,518
 

 Percentage calculation

Dear Helmut!

Many, many people (including myself) were asleep or ill when percentage calculation was taught. :-D

Regards,

Detlew
zizou
★    

Plzeň, Czech Republic,
2015-08-19 01:34
(3144 d 12:19 ago)

@ Helmut
Posting: # 15295
Views: 10,762
 

 Adjusting for expected dropout-rate - Nitpicking

Dear Helmut.

Should the possibility of an unbalanced design after adjusting for the expected dropout-rate be considered? See the example below.
Let's plan a 2×2 crossover study with the following assumptions/requirements:
CV = 0.262
alpha = 0.05
power >= 0.80
ratio = 0.95
acceptance range: 0.80-1.25

It implies minimum sample size ndes = 30.
Expected dropout-rate = 15%.
nadj=36 (n1=18, n2=18)

Now let's see that, although all assumptions are respected, the power falls below the desired 80% in the unbalanced cases.

In the first line of the next table "TR" stands for the number of dropouts from sequence TR, "RT" likewise, and "Probability" stands for the probability of the respective case given that exactly 6 dropouts occur.

TR RT Probability     Power
6  0  p_1=0.009530792 <80%
5  1  p_2=0.07917889  <80%
4  2  p_3=0.2403645   <80%
3  3  p_4=0.3418517   >80%
2  4  p_5=0.2403645   <80%
1  5  p_6=0.07917889  <80%
0  6  p_7=0.009530792 <80%

Note: Probability was calculated using combination without repetition p_1=("18 choose 6")/("36 choose 6")=18/36*17/35*16/34*15/33*14/32*13/31=0.009530792 .
# R code:
p_1=choose(18,6)/choose(36,6)
p_2=(choose(18,5)*choose(18,1))/choose(36,6)
p_3=(choose(18,4)*choose(18,2))/choose(36,6)
p_4=(choose(18,3)*choose(18,3))/choose(36,6)
2*p_1+2*p_2+2*p_3+p_4 # Check that sum of p_i is equal to 1.


In case of 3 dropouts with TR and 3 with RT, the power is 80.1%.
Otherwise the power is lower than 80% - not much lower, of course. In the unbalanced 4:2 dropout case the power is 79.9%; as it gets more unbalanced, the power decreases further (as you know).

So in this example we have a 65.8% (100×(1−p_4)) probability that we shoot ourselves in the foot :no:.
Maybe not such a big shot - the planned number of subjects finishing will be achieved :-) - but the sample size wasn't estimated with the possibility of becoming unbalanced in mind, even though there is an almost 2/3 probability that the study will end up unbalanced (if the assumptions are right: no better GMR than expected, no lower CV estimated from the residual mean square than expected, and no lower dropout-rate than expected).
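The hypergeometric probabilities above can be cross-checked outside of R as well. A small sketch in Python (same combinatorics as the R code; the 7 values cover all splits of 6 dropouts over the two sequences of 18):

```python
from math import comb

N, n_seq, do = 36, 18, 6  # dosed subjects, subjects per sequence, total dropouts

# P(exactly k of the 6 dropouts come from sequence TR): hypergeometric
p = [comb(n_seq, k) * comb(n_seq, do - k) / comb(N, do) for k in range(do + 1)]

print(round(p[6], 9))              # 6:0 split, 0.009530792
print(round(p[3], 9))              # balanced 3:3 split, 0.341851697
print(round(sum(p), 9))            # sanity check: probabilities sum to 1
print(round(100 * (1 - p[3]), 1))  # 65.8% chance of an unbalanced outcome
```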

Best regards,
zizou

"Remember, with great power comes great responsibility."
Helmut
★★★
Vienna, Austria,
2015-08-19 04:20
(3144 d 09:34 ago)

@ zizou
Posting: # 15296
Views: 10,839
 

 Good point!

Hi zizou,

good points! First of all I could reproduce your numbers; nice excursion into combinatorics.

library(PowerTOST)
CV  <- 0.262
dor <- 0.15
n.des  <- sampleN.TOST(CV=CV, details=F, print=F)[["Sample size"]]
n.adj  <- round(n.des/(1-dor)/2)*2
do     <- n.adj-n.des
n.TR   <- seq(n.adj/2-do, n.adj/2, 1)
n.RT   <- seq(n.adj/2, n.adj/2-do, -1)
res    <- matrix(nrow=length(n.TR), ncol=7, byrow=T, dimnames=list(NULL,
            c("n", "TR", "RT", "TR.do", "RT.do", "prob", "power")))
res[, 1] <- n.TR+n.RT
res[, 2] <- n.TR
res[, 3] <- n.RT
res[, 4] <- n.adj/2-n.TR
res[, 5] <- n.adj/2-n.RT
for (j in seq_along(n.TR)) {
  res[j, 6] <- sprintf("%.9f",
                (choose(n.adj/2, n.adj/2-n.TR[j])*
                choose(n.adj/2, n.adj/2-n.RT[j]))
                /choose(n.adj, do))
  res[j, 7] <- sprintf("%.5f", power.TOST(CV=CV, n=c(n.TR[j], n.RT[j])))
}
Sum.p  <- sum(as.numeric(res[, 6]))
res    <- data.frame(res)
print(res, row.names=F); cat("Sum p:", Sum.p, "\n")

  n TR RT TR.do RT.do        prob   power
 30 12 18     6     0 0.009530792 0.78425
 30 13 17     5     1 0.079178886 0.79345
 30 14 16     4     2 0.240364474 0.79877
 30 15 15     3     3 0.341851697 0.80051
 30 16 14     2     4 0.240364474 0.79877
 30 17 13     1     5 0.079178886 0.79345
 30 18 12     0     6 0.009530792 0.78425
Sum p: 1


❝ So in this example we have 65.8% (100*(1-p_4)) probability that we shoot ourselves in the foot :no:.


Depends on how rigid the bullet is. Not even the 6/0 (or 0/6) cases (power 78.4%) are full metal jackets. The others (with a much higher probability) are soft balls.

❝ Maybe not so big shot - planned number of subjects finished will be achieved :-), but sample size wasn't estimated with option of getting unbalanced in mind. Even though there is almost 2/3 probability that it will ends unbalanced (if assumptions are right, and no better GMR than expected, no lower CV estimated from residual mean square than expected, and no lower dropout-rate than expected).


Correct. I would be wary of assuming very different dropout-rates in the sequences. Theoretically they should occur at random (nTR ~ nRT). If we expect different dropout-rates a priori, IMHO this would also imply that we expect a true (!) sequence effect – which would confound the treatment effect. Opening a can of worms. :-D

"Remember, with great power comes great responsibility."


Power. That which statisticians are always calculating
but never have.
    Stephen Senn (Statistical Issues in Drug Development. Wiley 2004, p. 197)

intuitivepharma
☆    

India,
2015-08-19 09:47
(3144 d 04:06 ago)

@ Helmut
Posting: # 15297
Views: 10,584
 

 Good point!

Dear Helmut,

I am getting the following error while executing the R code.

Error in res[j, 7] <- sprintf("%.5f", power.TOST(CV = CV, n = c(n.TR[j],  :
  number of items to replace is not a multiple of replacement length


And the output is

n TR RT TR.do RT.do        prob power
 30 12 18     6     0 0.009530792  <NA>
 30 13 17     5     1        <NA>  <NA>
 30 14 16     4     2        <NA>  <NA>
 30 15 15     3     3        <NA>  <NA>
 30 16 14     2     4        <NA>  <NA>
 30 17 13     1     5        <NA>  <NA>
 30 18 12     0     6        <NA>  <NA>


Am I missing something during copy-paste?

Thanks & Regards,
IP.
d_labes
★★★

Berlin, Germany,
2015-08-19 12:12
(3144 d 01:42 ago)

@ intuitivepharma
Posting: # 15300
Views: 10,621
 

 Copy paste error

Dear intuitivepharma,

❝ Am I missing some thing during copy paste.


seems so. On my machine the code works.
Try it once more.

Regards,

Detlew
intuitivepharma
☆    

India,
2015-08-19 13:18
(3144 d 00:36 ago)

@ d_labes
Posting: # 15302
Views: 10,651
 

 Copy paste error

Dear Detlew,

Thanks for the reply. I have uploaded a screenshot. It seems that there is no copy-paste error.
[image]

It's lunch time in India :hungry:, hence I'm susceptible to postprandial syndrome.

Thanks & Regards,
IP.
Helmut
★★★
Vienna, Austria,
2015-08-19 13:44
(3144 d 00:09 ago)

@ intuitivepharma
Posting: # 15304
Views: 10,634
 

 PowerTOST <1.2.7?

Hi IP,

the line
   Loading required package: mvtnorm
is a hint that you are using an outdated version of PowerTOST. Since the first five columns are correct, try this:

library(PowerTOST)
power.TOST(CV=0.262, n=30)        # case 1
# [1] 0.8005133                       my result
power.TOST(CV=0.262, n=c(15, 15)) # case 2
# [1] 0.8005133                       my result


If the first case “works” and you get an error in the second case, try

power2.TOST(CV=0.262, n=c(15, 15))


If you get the correct result this time I suspect that your version is 1.2-5. Try

packageVersion("PowerTOST")

My code requires ≥1.2.6 (2015-01-23). power2.TOST() was deprecated in 1.2.6 and removed in 1.2.7 (2015-06-03); the current version is 1.2.8 (2015-07-10).

In my posts I always presume the current version of R and libraries.

intuitivepharma
☆    

India,
2015-08-26 16:41
(3136 d 21:13 ago)

@ Helmut
Posting: # 15339
Views: 10,266
 

 PowerTOST <1.2.7?

Dear Helmut,

Sorry for the delayed update. Due to admin restrictions and firewall policies I was unable to update online [R & PowerTOST]. After a lot of logistics I got PowerTOST [v1.2.8] downloaded offline, installed it, and ran the code. It gave concordant results; however, I received a warning

Warning message: In sampleN.TOST(CV = CV, details = F, print = F) : bytecode version mismatch; using eval

I am still on an older version of R [3.1.2 (2014-10-31)], due to which I received another warning:

Warning message:package ‘PowerTOST’ was built under R version 3.2.2.

I will update R too, with the help of my admin.

Thanks a lot for your time and guidance.

Thanks & Regards,
IP.