xpresso (United States), 2023-09-19 15:40
Posting: # 23720

Sequential Sample Size and Power Estimate in R [Power / Sample Size]

Hi there! I am looking for insight into how to calculate power and estimate the sample size separately for each PK endpoint (AUClast, AUCinf, etc.). In particular, I want to randomly draw subsets of different sizes and estimate the power to pass bioequivalence for each given sample size. For example: with 30 subjects (15 per arm), the power of this subset to pass bioequivalence would be ___ %.


I have already simulated a 1:1 parallel trial of 1,000 individuals and repeated the trial 10,000 times (denoted by the rep column below). I have been using the rxode2 package in R for the simulation and want to calculate power and estimate the sample size based on its "Weight-based dosing example", seen here: https://nlmixrdevelopment.github.io/RxODE-manual/examples.html.

Below is what the dataset currently looks like:
[screenshot of the simulated dataset]
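To make the idea concrete, this is roughly what I have in mind (a minimal sketch; the column names trt and auclast are placeholders for my actual dataset, with trt having the levels "R" and "T", and sim being the simulated data):

# draw n/2 subjects per arm, run the TOST on log(auclast), and count
# the fraction of draws whose 90% CI lies within 0.80-1.25
empirical.power <- function(dat, n, draws = 1000) {
  pass <- logical(draws)
  for (i in seq_len(draws)) {
    sub <- do.call(rbind,
                   lapply(split(dat, dat$trt),
                          function(d) d[sample(nrow(d), n / 2), ]))
    fit <- lm(log(auclast) ~ trt, data = sub)       # parallel groups
    ci  <- exp(confint(fit, "trtT", level = 0.90))  # 90% CI, T vs R
    pass[i] <- ci[1] >= 0.80 && ci[2] <= 1.25
  }
  mean(pass)
}
# e.g., for one rep: empirical.power(subset(sim, rep == 1), n = 30)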
Helmut (Vienna, Austria), 2023-09-19 16:04
@ xpresso · Posting: # 23721

IUT

Hi xpresso & welcome to the club!

I think that you are overdoing it.
If you have multiple PK metrics and all must pass (that’s the regulatory requirement) at level α (in general 0.05), you are protected against inflation of the type I error by the intersection-union principle [1]. No need for simulations [2]: simply calculate power for the given T/R-ratio, CV, and sample size (a sketch follows the references below). If any of the metrics shows power <50%, the study will fail (see this article).


  1. Berger RL, Hsu JC. Bioequivalence Trials, Intersection-Union Tests and Equivalence Confidence Sets. Stat Sci. 1996; 11(4): 283–302. JSTOR:2246021.
  2. Zeng A. The TOST confidence intervals and the coverage probabilities with R simulation. March 14, 2014. Online.
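For illustration, a minimal sketch with PowerTOST (the T/R-ratios, CVs, and the sample size are made-up assumptions): power is calculated directly for each metric, and by the IUT the overall power cannot exceed that of the weakest metric.

library(PowerTOST)
metrics <- data.frame(metric = c("AUClast", "AUCinf", "Cmax"),
                      theta0 = c(0.95, 0.95, 0.92),  # assumed T/R-ratios
                      CV     = c(0.25, 0.26, 0.32))  # assumed total CVs
# direct calculation of power for each metric (parallel design, n = 48)
metrics$power <- mapply(function(theta0, CV)
                          power.TOST(theta0 = theta0, CV = CV, n = 48,
                                     design = "parallel"),
                        metrics$theta0, metrics$CV)
print(metrics, row.names = FALSE)
# overall power <= min(metrics$power): plan on the weakest metric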

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
xpresso (United States), 2023-09-19 16:18
@ Helmut · Posting: # 23722

IUT

❝ Hi xpresso & welcome to the club!

Hi! Thank you for the warm welcome and response! I have been reading the materials in the forum over the past couple of months and finally made an account.

❝ I think that you are overdoing it. […]

This is good to know! I have considered the intersection-union principle after reading previous posts in the forum. I was hoping to compute the predictive power for different numbers of subjects for each trial and was using bootstrapping to generate this value. Does this make more sense? Would you happen to have any resources/examples? I am new to this topic and working on this project out of pure interest (forgive me if this is not the place for these questions). Thank you for your help :-D


Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary for understanding your answer; see also this post #5. [Helmut]
Helmut (Vienna, Austria), 2023-09-19 17:46
@ xpresso · Posting: # 23723

power is analytically accessible

Hi xpresso,

❝ […] I have considered the intersection-union principle after reading previous posts in the forum. I was hoping to compute the predictive power for different numbers of subjects for each trial and was using bootstrapping to generate this value. Does this make more sense? Would you happen to have any resources/examples?


Of course, you could do bootstrapping, but I still think that it’s over the top. For a given T/R-ratio, CV, n, and design, power can be calculated directly (contrary to the sample size, where you need an iterative procedure). If you are interested in some combinations, see these examples for a parallel design and a 2×2×2 crossover.
In short: whatever you want, simply set up nested loops and assign the results to a data frame (or an array if more than two dimensions); see the sketch below.
As a single point metric, Cmax is in general the most variable one. Hence, assessing power for it should cover the other metrics as well. I like the IUT.
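A minimal sketch of the nested loops (the grid of CVs and sample sizes and the T/R-ratio of 0.95 are made-up assumptions; parallel design as in your setup):

library(PowerTOST)
CVs <- seq(0.25, 0.45, 0.05)  # assumed total CVs
ns  <- seq(24, 72, 12)        # assumed total sample sizes
res <- data.frame(CV    = rep(CVs, each = length(ns)),
                  n     = rep(ns, times = length(CVs)),
                  power = NA_real_)
j <- 0
for (CV in CVs) {             # outer loop: CV
  for (n in ns) {             # inner loop: sample size
    j <- j + 1
    res$power[j] <- power.TOST(theta0 = 0.95, CV = CV, n = n,
                               design = "parallel")
  }
}
print(signif(res, 4), row.names = FALSE)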

❝ I am new to this topic and working on this project out of pure interest (forgive me if this is not the place for these questions).


That’s the best reason. It’s the right place.

BEQool, 2024-03-01 15:26 (edited 2024-03-01 15:36)
@ Helmut · Posting: # 23887

Power <50%

❝ If any of the metrics shows power <50%, the study will fail (see this article).


Hello, is the statement above always true?

For example, in a hypothetical (and very unrealistic :-D) scenario for Cmax with PE = 95%, CV = 110%, and 36 subjects in a 2x2x3 design, we get the following 90% CI:
library(PowerTOST)
CI.BE(pe = 0.95, n = 36, design = "2x2x3", CV = 1.1)
   lower    upper
0.701628 1.286294


The widened acceptance limits (here at their maximum) are:
scABEL(CV = 1.1)
    lower     upper
0.6983678 1.4319102


So we pass the study although the power is <50%:
power.scABEL(theta0 = 0.95, n = 36, design = "2x2x3", CV = 1.1)
  0.23718


Why do we pass the study if the power is below 50%?

BEQool
Helmut (Vienna, Austria), 2024-03-01 15:44
@ BEQool · Posting: # 23888

Power <50%

Hi BEQool,

❝ Why do we pass the study if the power is below 50%?

Let’s dive deeper into your example:

library(PowerTOST)
# sample size for the default 80% target power, then step down to n = 36
n   <- sampleN.scABEL(CV = 1.1, theta0 = 0.95, design = "2x2x3",
                      details = FALSE, print = FALSE)[["Sample size"]]
n   <- seq(n, 36, -2)
res <- data.frame(n = n, lower = NA_real_, upper = NA_real_, power = NA_real_)
for (j in seq_along(n)) {
  res[j, 2:3] <- CI.BE(CV = 1.1, pe = 0.95, n = n[j],
                       design = "2x2x3")             # realized 90% CI
  res[j, 4]   <- power.scABEL(CV = 1.1, theta0 = 0.95, n = n[j],
                              design = "2x2x3")      # simulated power
}
print(signif(res, 4), row.names = FALSE)

  n  lower upper  power
 88 0.7838 1.151 0.8080
 86 0.7821 1.154 0.7974
 84 0.7803 1.157 0.7866
 82 0.7784 1.159 0.7750
 80 0.7764 1.162 0.7626
 78 0.7744 1.165 0.7503
 76 0.7723 1.169 0.7360
 74 0.7701 1.172 0.7226
 72 0.7679 1.175 0.7070
 70 0.7655 1.179 0.6911
 68 0.7631 1.183 0.6743
 66 0.7606 1.187 0.6567
 64 0.7579 1.191 0.6373
 62 0.7551 1.195 0.6180
 60 0.7522 1.200 0.5972
 58 0.7492 1.205 0.5745
 56 0.7460 1.210 0.5504
 54 0.7426 1.215 0.5267
 52 0.7391 1.221 0.5008
 50 0.7353 1.227 0.4733
 48 0.7314 1.234 0.4453
 46 0.7272 1.241 0.4134
 44 0.7227 1.249 0.3816
 42 0.7180 1.257 0.3490
 40 0.7129 1.266 0.3122
 38 0.7075 1.276 0.2751
 36 0.7016 1.286 0.2372

Strange. I have to think about it… :ponder:
CI.BE() gives us the realized results (i.e., the values observed in a particular study), whereas power.scABEL() gives the results of 10^5 simulations. Both the CV and the PE are skewed to the right and, therefore, I would expect simulated power to be lower than the assessment of whether a particular study passes. However, such a large discrepancy is surprising to me.

BEQool, 2024-03-02 00:02 (edited 2024-03-02 08:47)
@ Helmut · Posting: # 23889

Power <50%

❝ CI.BE() gives us the realized results (i.e., the values observed in a particular study), whereas power.scABEL() gives the results of 10^5 simulations. […]


What about the function power.TOST()? I think that it doesn’t give us a result based on simulations; am I right? And this calculated power is also way below 50% (similar to the power calculated above):

power.TOST(theta0 = 0.95, n = 36, design = "2x2x3", CV = 1.1,
           theta1 = 0.6983678, theta2 = 1.4319102)
0.2355959
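If I read the PowerTOST documentation correctly, power.TOST() is indeed analytical (exact, via Owen’s Q), not simulation-based. As a cross-check, a minimal brute-force sketch (assuming a balanced "2x2x3" design, homoscedasticity, and the design constants bk = 1.5 and df = 2n-3 used by PowerTOST for this design) reproduces its value and makes explicit what power averages over, namely the sampling distributions of the PE and the MSE:

library(PowerTOST)
set.seed(123456)
n      <- 36
CV     <- 1.1
theta0 <- 0.95
mse    <- CV2mse(CV)            # within-subject variance on the log scale
se     <- sqrt(1.5 * mse / n)   # SE of T - R (design constant bk = 1.5)
df     <- 2 * n - 3
nsims  <- 1e5
pe.sim <- exp(rnorm(nsims, mean = log(theta0), sd = se))  # simulated PEs
se.sim <- sqrt(1.5 * mse * rchisq(nsims, df) / df / n)    # simulated SEs
lower  <- pe.sim * exp(-qt(0.95, df) * se.sim)
upper  <- pe.sim * exp(+qt(0.95, df) * se.sim)
mean(lower >= 0.6983678 & upper <= 1.4319102)  # ~0.236, close to 0.2355959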


I can find more such examples :-) All are, as expected, close to the limit:
a)
CI.BE(pe = 0.95, n = 36, design = "2x2x3", CV = 0.5)
    lower     upper
0.8089197 1.1156855

power.TOST(theta0 = 0.95, n = 36, design = "2x2x3", CV = 0.5)
0.4273464

b)
CI.BE(pe = 0.95, n = 36, design = "2x2", CV = 0.44)
    lower     upper
0.8033552 1.1234134

power.TOST(theta0 = 0.95, n = 36, design = "2x2", CV = 0.44)
0.3786147

c)
CI.BE(pe = 0.94, n = 18, design = "2x2", CV = 0.28)
    lower     upper
0.8011079 1.1029725

power.TOST(theta0 = 0.94, n = 18, design = "2x2", CV = 0.28)
0.4259032

...

❝ If any of the metrics shows power <50%, the study will fail (see this article).


Is it possible that the opposite is always true, while this isn’t?
That is: if the study fails, the power will always be <50%; but if the power is <50%, it doesn’t necessarily mean that the study will fail?
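A quick numeric check of this conjecture in one scenario (example c above; a scan over point estimates, not a proof):

library(PowerTOST)
pe  <- seq(0.80, 1.25, 0.005)  # scanned point estimates
res <- data.frame(pe = pe, pass = NA, power = NA_real_)
for (j in seq_along(pe)) {
  ci           <- CI.BE(pe = pe[j], n = 18, design = "2x2", CV = 0.28)
  res$pass[j]  <- ci[["lower"]] >= 0.80 & ci[["upper"]] <= 1.25
  res$power[j] <- power.TOST(theta0 = pe[j], n = 18, design = "2x2",
                             CV = 0.28)
}
subset(res, !pass & power >= 0.5)  # no rows: here every failing CI comes with power < 50%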