Two PK metrics: Inflation of the Type I Error [Two-Stage / GS Designs]
Dear all,
related to this thread about dropouts. To run the R code you need package Power2Stage 0.4.6 or later.

Let’s assume a CV of 25% for Cmax and 15% for AUC, and Potvin ‘Method B’ (αadj 0.0294). We want to play it safe and plan the first stage like a fixed-sample design (T/R 0.95, 80% power). Hence, we start with 28 subjects.
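Where do the 28 subjects come from? A minimal sketch of the fixed-design planning with package PowerTOST (standard function and defaults; not part of the two-stage code below):

library(PowerTOST)
# fixed-design sample size for the assumed Cmax CV of 25%,
# T/R 0.95, target power 80%, 2x2x2 crossover, alpha 0.05
sampleN.TOST(CV = 0.25, theta0 = 0.95, targetpower = 0.80, design = "2x2")
# should return n = 28 (achieved power ~0.81), i.e., the n1 used below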
In the interim the CVs are higher than expected: 30% for Cmax and 20% for AUC. Say Cmax is not BE (the 94.12% CI fails) and its power is <80%. Hence, we should initiate the second stage. Re-estimated sample size:

library(Power2Stage)
print(sampleN2.TOST(CV=0.30, n1=28), row.names=FALSE) # Cmax
# Design  alpha  CV theta0 theta1 theta2 n1 Sample size Achieved power Target power
#    2x2 0.0294 0.3   0.95    0.8   1.25 28          20      0.8177478          0.8
print(sampleN2.TOST(CV=0.20, n1=28), row.names=FALSE) # AUC
# Design  alpha  CV theta0 theta1 theta2 n1 Sample size Achieved power Target power
#    2x2 0.0294 0.2   0.95    0.8   1.25 28           0      0.8922371          0.8
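As an aside, the interim decision to proceed can be reproduced as well. A sketch with PowerTOST’s power.TOST() (in ‘Method B’ the interim power is calculated with θ0 fixed at 0.95, the observed CV, n1, and αadj):

library(PowerTOST)
power.TOST(alpha = 0.0294, CV = 0.30, theta0 = 0.95, n = 28, design = "2x2") # Cmax
# well below the 80% target -> proceed to the second stage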
I think that in the past everybody (including myself) looked only at the PK metric with the higher variability and ignored the other one. Likely not a good idea.
Which options do we have for the PK metric with the lower variability?
- Assess BE with a lower sample size. In the example above, ignore the second stage entirely for AUC. If the interim CV were 25% instead of 20%, assess only the first six subjects of the 20 dosed in the second stage (i.e., a pooled analysis of 28+6=34 instead of 48; see the sketch below this list).
- Use the data of all subjects and adjust α more strongly (i.e., apply a wider CI). How?
- Or?
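For the first option, a sketch of what the re-estimation would look like with the hypothetical interim CV of 25% for AUC (same function and defaults as above):

library(Power2Stage)
print(sampleN2.TOST(CV = 0.25, n1 = 28), row.names = FALSE) # AUC, interim CV 25%
# based on the figures above this should give a second-stage sample size of 6,
# i.e., a pooled analysis of 28 + 6 = 34 subjects instead of 48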
Of course, this issue is not limited to two-stage designs (TSDs) but applies as well to group-sequential designs (GSDs) with blinded or unblinded sample size re-estimation.
—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes