xpresso ☆ United States, 2023-09-19 15:40 Posting: # 23720
Hi there!

I am looking for insight into how to calculate power and estimate the sample size for separate PK endpoints (AUClast, AUCinf, etc.). In particular, I would like to randomly draw subsets of different sizes and estimate the power to pass bioequivalence for each given sample size. For example, with 30 subjects (15 per arm) the power of this subset to pass bioequivalence would be ___ %. I have already simulated a 1:1 parallel trial of 1,000 individuals and repeated the trial 10,000 times (denoted by the rep column below). I have been using the rxode2 package in R to simulate, and want to calculate the power and sample size based on its "Weight-based dosing example", seen here: https://nlmixrdevelopment.github.io/RxODE-manual/examples.html. Below is what the dataset currently looks like:
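Roughly, the resampling I have in mind looks like this (only a sketch; the data frame sim and the columns rep, trt, and AUClast are placeholders for my simulated data):

# Resample n subjects per arm from one simulated trial, run the test on the
# log scale, and count how often the 90% CI lies within 0.80-1.25.
subset_power <- function(sim, n_per_arm, nboot = 1000) {
  pass <- logical(nboot)
  for (i in seq_len(nboot)) {
    x <- log(sample(sim$AUClast[sim$trt == "T"], n_per_arm))
    y <- log(sample(sim$AUClast[sim$trt == "R"], n_per_arm))
    ci <- t.test(x, y, conf.level = 0.90)$conf.int # Welch interval, log scale
    pass[i] <- exp(ci[1]) >= 0.80 && exp(ci[2]) <= 1.25
  }
  mean(pass) # empirical power for this subset size
}
# e.g., power for 15 per arm within the first replicate:
# subset_power(subset(sim, rep == 1), n_per_arm = 15)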
Helmut ★★★ Vienna, Austria, 2023-09-19 16:04 @ xpresso Posting: # 23721
Hi xpresso & welcome to the club!

I think that you are overdoing it. If you have multiple PK metrics and all must pass (that's the regulatory requirement) at level \(\small{\alpha}\) – in general 0.05 – you are protected against inflation of the type I error by the intersection-union principle.¹ No need for simulations,² simply calculate power for the given T/R-ratio, CV, and sample size. If any of the metrics shows power <50%, the study will fail (see this article).
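For example, with PowerTOST (the T/R-ratio, CV, and n below are placeholders, not recommendations):

library(PowerTOST)
# direct power calculation for a 1:1 parallel design
power.TOST(CV = 0.30, theta0 = 0.95, n = 30, design = "parallel")
# iterative sample-size estimation for 80% target power
sampleN.TOST(CV = 0.30, theta0 = 0.95, targetpower = 0.80, design = "parallel")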
xpresso ☆ United States, 2023-09-19 16:18 @ Helmut Posting: # 23722
❝ Hi xpresso & welcome to the club!
❝ I think that you are overdoing it. […]

This is good to know! I have considered the intersection-union principle after reading previous posts in the forum. I was hoping to compute the predictive power for different numbers of subjects for each trial and was using bootstrapping to generate this value. Does this make more sense? Would you happen to have any resources/examples? I am new to this topic and working on this project out of pure interest (forgive me if this is not the place for these questions). Thank you for your help!

[Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! Helmut]
Helmut ★★★ Vienna, Austria, 2023-09-19 17:46 @ xpresso Posting: # 23723
Hi xpresso,

❝ […] I have considered the intersection-union principle after reading previous posts in the forum. I was hoping to compute the predictive power for different numbers of subjects for each trial and was using bootstrapping to generate this value. Does this make more sense? Would you happen to have any resources/examples?

Of course, you could do bootstrapping, but I still think that it's over the top. For a given T/R-ratio, CV, n, and design, power can be calculated directly (contrary to the sample size, where you need an iterative procedure). If you are interested in some combinations, see these examples for a parallel design and a 2×2×2 crossover. In short, whatever you want: simply set up nested loops and assign the result to a data frame (or a matrix if more than two dimensions). As a single point metric, Cmax is in general the most variable one. Hence, assessing power for it should cover the other metrics as well. I like IUT.

❝ I am new to this topic and working on this project out of pure interest (forgive me if this is not the place for these questions).

That's the best reason. It's the right place.
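Something along these lines (the grid values are arbitrary):

library(PowerTOST)
# power for every combination of CV and n in a parallel design
grid <- expand.grid(CV = seq(0.20, 0.40, 0.05), n = seq(24, 48, 4))
grid$power <- mapply(function(CV, n)
                       power.TOST(CV = CV, n = n, theta0 = 0.95,
                                  design = "parallel"),
                     grid$CV, grid$n)
grid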
BEQool ★ 2024-03-01 15:26 @ Helmut Posting: # 23887
❝ If any of the metrics shows power <50%, the study will fail (see this article).

Hello,

is the statement above always true? For example, in a hypothetical (and very unrealistic) scenario for Cmax with PE = 95%, CV = 110%, and 36 subjects in a 2x2x3 design we get the following 90% CI:

library(PowerTOST)
CI.BE(pe = 0.95, n = 36, design = "2x2x3", CV = 1.1)

The widened acceptance limits are (the maximum ones):

scABEL(CV = 1.1)

So we pass the study although the power is <50%:

power.scABEL(theta0 = 0.95, n = 36, design = "2x2x3", CV = 1.1)

Why do we pass the study if the power is below 50%?

BEQool
Helmut ★★★ Vienna, Austria, 2024-03-01 15:44 @ BEQool Posting: # 23888
Hi BEQool,

❝ Why do we pass the study if the power is below 50%?

CI.BE() gives us the realized results (i.e., the values observed in a particular study), whereas power.scABEL() gives the results of 10⁵ simulations. Both the CV and the PE are skewed to the right and, therefore, I would expect simulated power to be lower than the assessment of whether a particular study passes. However, such a large discrepancy is surprising to me.
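As an aside, the number of simulations can be increased via the nsims argument (10⁵ is the default; with the default setseed = TRUE, results are reproducible):

power.scABEL(theta0 = 0.95, n = 36, design = "2x2x3", CV = 1.1, nsims = 1e6)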
BEQool ★ 2024-03-02 00:02 @ Helmut Posting: # 23889
❝ What about function power.TOST()?

I think that it doesn't give us the result based on simulations? Am I right? And this calculated power is also way below 50% (similar to the power calculated above):

power.TOST(theta0 = 0.95, n = 36, design = "2x2x3", CV = 1.1,
           theta1 = 0.6983678, theta2 = 1.4319102)

I can find more such examples. All are, as expected, close to the limit:

a) CI.BE(pe = 0.95, n = 36, design = "2x2x3", CV = 0.5)
b) CI.BE(pe = 0.95, n = 36, design = "2x2", CV = 0.44)
c) CI.BE(pe = 0.94, n = 18, design = "2x2", CV = 0.28)
...

❝ If any of the metrics shows power <50%, the study will fail (see this article).

Is it possible that the opposite is always true and this one isn't? So that if the study fails, the power will always be <50%; but if the power is <50%, it doesn't always mean that the study failed (will fail)?
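A small loop over these examples for illustration (assuming the conventional 0.80–1.25 limits for the pass/fail check):

library(PowerTOST)
ex <- data.frame(pe = c(0.95, 0.95, 0.94), n = c(36, 36, 18),
                 design = c("2x2x3", "2x2", "2x2"), CV = c(0.50, 0.44, 0.28))
for (i in seq_len(nrow(ex))) {
  ci <- CI.BE(pe = ex$pe[i], n = ex$n[i], design = ex$design[i], CV = ex$CV[i])
  pw <- power.TOST(theta0 = ex$pe[i], n = ex$n[i],
                   design = ex$design[i], CV = ex$CV[i])
  cat(sprintf("%s, n = %2d, CV = %.2f: CI %.4f-%.4f, pass = %-5s, power = %.3f\n",
              ex$design[i], ex$n[i], ex$CV[i], ci[1], ci[2],
              ci[1] >= 0.80 && ci[2] <= 1.25, pw))
}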