xpresso ☆ United States, 2023-09-19 15:40 (307 d 12:27 ago) Posting: # 23720 Views: 2,829

Hi there! I am looking for insight into how to calculate power and estimate the sample size for separate PK endpoints (AUClast, AUCinf, etc.). In particular, I want to randomly draw different sample sizes and estimate the power to pass bioequivalence for each given sample size. For example, with 30 subjects (15 per arm) the power for this subset of samples to pass bioequivalence would be ___ %. I have already simulated the 1:1 parallel trial with 1,000 individuals and repeated the trial 10,000 times (denoted by the rep column below). I have been using the rxode2 package in R to simulate, and want to calculate the power and sample size estimates based on its "Weight-based dosing example", seen here: https://nlmixrdevelopment.github.io/RxODEmanual/examples.html. Below is what the dataset currently looks like:
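A subsampling scheme along the lines described above could be sketched as follows. This is only a sketch under assumptions: it presumes a long-format data frame `sims` with columns `rep`, `id`, `treat` (`"T"`/`"R"`), and `AUClast` (these column names are illustrative, not taken from the post), and it uses a Welch 90% CI on the log scale as a stand-in for the TOST assessment:

```r
# Hypothetical sketch: estimate power for a given per-arm sample size by
# repeatedly drawing subsets from the simulated trials and checking whether
# the 90% CI of the T/R-ratio lies within the conventional BE limits.
est_power <- function(sims, n_per_arm, metric = "AUClast", nboot = 500) {
  pass <- replicate(nboot, {
    r   <- sample(unique(sims$rep), 1)           # pick one simulated trial
    d   <- sims[sims$rep == r, ]
    ids <- c(sample(d$id[d$treat == "T"], n_per_arm),
             sample(d$id[d$treat == "R"], n_per_arm))
    sub <- d[d$id %in% ids, ]
    y   <- log(sub[[metric]])
    ci  <- t.test(y[sub$treat == "T"], y[sub$treat == "R"],
                  conf.level = 0.90)$conf.int    # 90% CI of the log-difference
    all(exp(ci) > 0.80) && all(exp(ci) < 1.25)   # within 80.00-125.00%?
  })
  mean(pass)                                     # fraction of passing subsets
}
```

With the full set of 10,000 replicates, something like `est_power(sims, 15)` would then approximate the power for 15 subjects per arm.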
Helmut ★★★ Vienna, Austria, 2023-09-19 16:04 (307 d 12:03 ago) @ xpresso Posting: # 23721 Views: 2,344

Hi xpresso & welcome to the club! I think that you are overdoing it. If you have multiple PK metrics and all must pass (that’s the regulatory requirement) at level \(\small{\alpha}\) – in general 0.05 – you are protected against inflation of the type I error by the intersection-union principle.^{1} No need for simulations;^{2} simply calculate power for the given T/R-ratio, CV, and sample size. If any of the metrics shows power <50%, the study will fail (see this article).
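The direct calculation Helmut refers to can be sketched with the PowerTOST package; the T/R-ratio and CV below are illustrative assumptions, not values from the thread:

```r
library(PowerTOST)

# Power of TOST for a 1:1 parallel design: assumed true T/R-ratio 0.95,
# total CV 30%, 30 subjects in total (15 per arm)
power.TOST(CV = 0.30, theta0 = 0.95, n = 30, design = "parallel")

# The reverse question -- the sample size for >= 80% power -- needs the
# iterative procedure, wrapped by sampleN.TOST() under the same assumptions
sampleN.TOST(CV = 0.30, theta0 = 0.95, targetpower = 0.80,
             design = "parallel", print = FALSE)
```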
— Diftor heh smusma 🖖🏼 Довге життя Україна! _{} Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
xpresso ☆ United States, 2023-09-19 16:18 (307 d 11:49 ago) @ Helmut Posting: # 23722 Views: 2,328

❝ Hi xpresso & welcome to the club!
❝ I think that you are overdoing it. […]

This is good to know! I had considered the intersection-union principle after reading previous posts in the forum. I was hoping to compute the predictive power for different numbers of subjects for each trial and was using bootstrapping to generate this value. Does this make more sense? Would you happen to have any resources/examples? I am new to this topic and working on this project out of pure interest (forgive me if this is not the place for these questions). Thank you for your help!

Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut]
Helmut ★★★ Vienna, Austria, 2023-09-19 17:46 (307 d 10:21 ago) @ xpresso Posting: # 23723 Views: 2,358

Hi xpresso,

❝ […] I have considered the intersection-union principle after reading previous posts in the forum. I was hoping to compute the predictive power for different numbers of subjects for each trial and was using bootstrapping to generate this value. Does this make more sense? Would you happen to have any resources/examples?

Of course, you could do bootstrapping, but I still think that it’s over the top. For a given T/R-ratio, CV, n, and design, power can be directly calculated (contrary to the sample size, where you need an iterative procedure). If you are interested in some combinations, see these examples for a parallel design and a 2×2×2 crossover. In short, whatever you want: simply set up nested loops and assign the result to a data frame (or a matrix if more than two dimensions). As a single point metric, C_{max} is in general the most variable one. Hence, assessing power for it should cover the other metrics as well. I like the IUT.

❝ I am new to this topic and working on this project out of pure interest (forgive me if this is not the place for these questions).

That’s the best reason. It’s the right place.
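The nested-loop idea could look like the sketch below (using PowerTOST; the grids of CVs, sample sizes, and T/R-ratios are arbitrary assumptions). `expand.grid()` builds all combinations, which is the idiomatic R replacement for explicit nested loops:

```r
library(PowerTOST)

# All combinations of assumed scenarios in one data frame
grid <- expand.grid(CV     = c(0.25, 0.30, 0.35),
                    n      = seq(24, 48, 12),
                    theta0 = c(0.90, 0.95))

# Exact power for each combination in a parallel design
grid$power <- mapply(power.TOST,
                     CV = grid$CV, n = grid$n, theta0 = grid$theta0,
                     MoreArgs = list(design = "parallel"))

print(grid)
```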
BEQool ★ 2024-03-01 15:26 (143 d 11:41 ago) (edited on 2024-03-01 15:36) @ Helmut Posting: # 23887 Views: 1,270

❝ If any of the metrics shows power <50%, the study will fail (see this article).

Hello,

is the statement above always true? For example, in a hypothetical (and very unrealistic) scenario for Cmax with PE = 95%, CV = 110%, and 36 subjects in a 2x2x3 design we get the following 90% CI:

CI.BE(pe = 0.95, n = 36, design = "2x2x3", CV = 1.1)

The widened acceptance limits are (the maximum ones):

scABEL(CV = 1.1)

So we pass the study although the power is <50%:

power.scABEL(theta0 = 0.95, n = 36, design = "2x2x3", CV = 1.1)

Why do we pass the study if the power is below 50%?

BEQool
Helmut ★★★ Vienna, Austria, 2024-03-01 15:44 (143 d 11:23 ago) @ BEQool Posting: # 23888 Views: 1,298

Hi BEQool,

❝ Why do we pass the study if the power is below 50%?

CI.BE() gives us the realized results (i.e., based on the values observed in a particular study), whereas power.scABEL() gives the result of 10^{5} simulations. Both the CV and the PE are skewed to the right and, therefore, I would expect simulated power to be lower than the assessment of whether a particular study passes. However, such a large discrepancy is surprising to me.
BEQool ★ 2024-03-02 00:02 (143 d 03:05 ago) (edited on 2024-03-02 08:47) @ Helmut Posting: # 23889 Views: 1,330

What about the function power.TOST()? I think that it doesn’t give us a result based on simulations – am I right? And this calculated power is also well below 50% (similar to the power calculated above):

power.TOST(theta0 = 0.95, n = 36, design = "2x2x3", CV = 1.1, theta1 = 0.6983678, theta2 = 1.4319102)

I can find more such examples. All are, as expected, close to the limit:

a) CI.BE(pe = 0.95, n = 36, design = "2x2x3", CV = 0.5)
b) CI.BE(pe = 0.95, n = 36, design = "2x2", CV = 0.44)
c) CI.BE(pe = 0.94, n = 18, design = "2x2", CV = 0.28)
...

❝ If any of the metrics shows power <50%, the study will fail (see this article).

Is it possible that the opposite is always true and this isn’t? So that if the study fails, the power will always be <50%; but if the power is <50%, it doesn’t always mean that the study failed (will fail)?
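One way to probe this question empirically is a sketch like the following, which compares – over an arbitrary grid of scenarios – whether the 90% CI lies within 0.80–1.25 against the power calculated for the same PE, CV, and n. It assumes CI.BE() returns a vector with elements `lower` and `upper`. Note that Helmut’s caveat applies: CI.BE() conditions on the observed PE and CV, while power.TOST() treats them as the true values, so the two answers need not agree.

```r
library(PowerTOST)

# For each scenario: does the 90% CI pass the conventional BE limits,
# and what is the calculated power for the same assumed values?
check <- function(pe, CV, n) {
  ci <- CI.BE(pe = pe, CV = CV, n = n, design = "2x2")
  c(pass  = unname(ci[["lower"]] >= 0.80 & ci[["upper"]] <= 1.25),
    power = power.TOST(CV = CV, theta0 = pe, n = n, design = "2x2"))
}

grid <- expand.grid(pe = c(0.90, 0.95), CV = c(0.30, 0.50), n = c(18, 36))
res  <- cbind(grid, t(mapply(check, grid$pe, grid$CV, grid$n)))

# Scenarios that pass despite power < 50%, and the converse
subset(res, pass == 1 & power < 0.5)
subset(res, pass == 0 & power >= 0.5)
```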