Helmut (Hero) · Vienna, Austria · 2017-11-12 11:57 · Posting: # 17971 · Views: 1,354

Dear all, this is related to this thread about dropouts. To run the R code you need package Power2Stage 0.4.6+. Let's assume a CV of 25% for C_{max} and 15% for AUC, and Potvin 'Method B' (α_{adj} 0.0294). We want to play it safe and plan the first stage like a fixed-sample design (T/R 0.95, 80% power). Hence, we start with 28 subjects. In the interim the CVs are higher than expected: for C_{max} a CV of 30% and for AUC 20%. Say C_{max} is not BE (94.12% CI) and power <80%. Hence, we should initiate the second stage. Re-estimated sample size:
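The R output after the colon is missing here. For readers without the packages at hand, a self-contained base-R sketch reproduces the planning and the interim check; it uses a noncentral-t approximation of TOST power (PowerTOST's sampleN.TOST()/power.TOST() give the exact Owen's-Q values, so treat the numbers as approximate):

```r
# Approximate TOST power in a 2x2 crossover via the noncentral t
# distribution. This is a sketch: PowerTOST's power.TOST() and
# sampleN.TOST() give the exact (Owen's Q) values.
pwr <- function(alpha, CV, n, gmr = 0.95) {
  se <- sqrt(2 / n) * sqrt(log(CV^2 + 1))  # SE of the log-difference
  df <- n - 2
  tc <- qt(1 - alpha, df)
  d1 <- (log(gmr) - log(0.80)) / se
  d2 <- (log(gmr) - log(1.25)) / se
  max(0, pt(-tc, df, ncp = d2) - pt(tc, df, ncp = d1))
}
# planning: smallest n giving >= 80% power at CV 25%, T/R 0.95, alpha 0.05
n.seq <- seq(12, 60, 2)
n1 <- n.seq[which(sapply(n.seq, function(n) pwr(0.05, 0.25, n)) >= 0.8)[1]]
n1                     # 28, the fixed-sample design above
# interim (alpha 0.0294) with the observed CVs and the stage 1 sample size
pwr(0.0294, 0.30, 28)  # Cmax: well below 0.80 -> initiate the second stage
pwr(0.0294, 0.20, 28)  # AUC: above 0.80 -> stop after the first stage
```

The sketch reproduces the dilemma: with the interim CVs the C_{max} power check calls for a second stage while the AUC power check does not.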
I think that in the past everybody (including myself) looked only at the PK metric with the highest variability and ignored the other one. Likely not a good idea. Which options do we have for the PK metric with the lower variability?
Of course, this issue is not limited to TSDs but applies to GSDs with (blinded/unblinded) sample-size re-estimation as well. — Regards, Helmut Schütz. "The quality of responses received is directly proportional to the quality of the question asked." ☼ Science Quotes
d_labes (Hero) · Berlin, Germany · 2017-11-12 17:46 · @ Helmut · Posting: # 17972 · Views: 1,236

Dear Helmut! Very good question. Next question. My gut feeling says: Don't worry, be happy. What we would have to do if we take two or more metrics into consideration is to combine the results of both metrics, i.e., some sort of intersection-union test (IUT). The IUT is known to be conservative up to very conservative. For illustration let's look at the results in a single-stage design using Ben's function power.2TOST(): We don't know rho, the correlation between both PK metrics, so let's look at the extremes.

power.2TOST(CV=c(0.3, 0.2), n=28, theta0=c(1., 1.25), rho=0)

[plot lost: power vs. rho; green = conservative, red = very conservative]

This behavior should protect against an additional alpha inflation due to combining the results of both metrics, provided you control the TIE (alpha) of each. OK, all this is only analogy and gut feeling. We only know exactly what's going on if we simulate. But I doubt that the time and effort of doing this would pay off. — Regards, Detlew
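The conservatism of the IUT can be illustrated without any package. Below is a minimal base-R sketch, assuming known variances (z-intervals instead of the t-based TOST), independent metrics (rho = 0), and both true ratios parked at the 1.25 limit, i.e., both nulls true:

```r
set.seed(42)
nsim <- 1e4
n    <- 28
z    <- qnorm(0.95)                 # 90% CI, i.e. two one-sided 5% tests
se.log <- function(CV) sqrt(2 / n) * sqrt(log(CV^2 + 1))
se1 <- se.log(0.30)                 # 'Cmax'
se2 <- se.log(0.20)                 # 'AUC'
# both metrics simulated under the null (true ratio at the 1.25 limit)
pe1 <- rnorm(nsim, log(1.25), se1)
pe2 <- rnorm(nsim, log(1.25), se2)
passes <- function(pe, s) (pe - z * s > log(0.80)) & (pe + z * s < log(1.25))
p1 <- passes(pe1, se1)
p2 <- passes(pe2, se2)
mean(p1)                            # per-metric TIE, close to 0.05
mean(p1 & p2)                       # IUT: BOTH metrics must pass
```

Since 'both pass' implies 'metric 1 passes', the joint type I error can never exceed the per-metric one; with rho = 0 it collapses towards the product of the two. With rho = 1 it would equal the smaller per-metric TIE.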
ElMaestro (Hero) · Denmark · 2017-11-12 21:43 · @ Helmut · Posting: # 17973 · Views: 1,217

Hi Hötzi, can you run a series of sims where you include "20 more subjects than necessary" in stage 2, where a stage 2 is called for? Just to see how things look from that perspective. — I could be wrong, but… Best regards, ElMaestro. Bootstrapping for dissolution data is a relatively new hobby of mine.
d_labes (Hero) · Berlin, Germany · 2017-11-13 16:02 · @ ElMaestro · Posting: # 17974 · Views: 1,160

Dear ElMaestro, » can you run a series of sims where you include "20 more subjects than necessary" in stage 2 where a stage 2 is called for? That could indeed be done within the (in)famous package Power2Stage, function power.2stage.fC(), using the argument n2.min. At least partially, because n2.min assures a minimum sample size for stage 2 if stage 2 is called for and the estimated sample size is smaller than n2.min. But that's not the problem Helmut has described, at least as I understand it. Here we initiate a stage 2 which, from the perspective of the AUC evaluation, is not necessary to initiate. Is having more than the necessary number of subjects a problem at all? My gut feeling says no. — Regards, Detlew
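The n2.min logic itself is just a floor on the stage-2 sample size. A one-line hypothetical helper (not the package's actual code) makes that explicit:

```r
# Hypothetical helper mimicking the n2.min idea of power.2stage.fC():
# stage 2 gets at least n2.min subjects, otherwise the re-estimated
# total minus the stage 1 sample size. Not the package's actual code.
n2.with.floor <- function(N.reest, n1, n2.min = 0) max(N.reest - n1, n2.min)
n2.with.floor(34, n1 = 28)               # re-estimated total 34 -> n2 = 6
n2.with.floor(34, n1 = 28, n2.min = 20)  # the floor kicks in    -> n2 = 20
```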
Helmut (Hero) · Vienna, Austria · 2017-11-13 16:51 · @ d_labes · Posting: # 17975 · Views: 1,158

Dear Detlew, » that could indeed be done within the (in)famous package Power2Stage, function power.2stage.fC(), using the argument n2.min. … and power.2stage() without the need of specifying a futility criterion. » At least partially, because n2.min assures a minimum sample size for stage 2 if stage 2 is called for and the estimated sample size is smaller than n2.min. Right. » But that's not the problem Helmut has described, at least as I understand it. Here we initiate a stage 2 which, from the perspective of the AUC evaluation, is not necessary to initiate. Right as well. I tried to modify 5+ years old code for subject simulations but lost my patience. » Is having more than the necessary number of subjects a problem at all? My gut feeling says no. From the producer's perspective, of course not. My gut feeling tells me: The adjusted α showed (well, in simulations…) that the TIE is controlled if we follow the frameworks exactly, i.e., initiate the second stage only if necessary based on the interim, and with the re-estimated sample size. If we increase the sample size, the chance of passing BE increases and hence, so does the TIE. Misusing Power2Stage:
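The code after the colon is missing here. As a stand-in, below is a deliberately crude, self-contained base-R sketch of a 'method B'-like TSD simulated at the 1.25 limit; it is hypothetical and not the validated Power2Stage machinery (simplistic pooled stage-2 analysis with a stage term, i.e. df = N - 3, and a noncentral-t approximation of power). The forced additional subjects enter via the extra argument:

```r
set.seed(123456)
# approximate TOST power via the noncentral t (sketch, not Owen's Q)
pwr <- function(alpha, CV, n, gmr = 0.95) {
  se <- sqrt(2 / n) * sqrt(log(CV^2 + 1))
  df <- n - 2
  tc <- qt(1 - alpha, df)
  max(0, pt(-tc, df, ncp = (log(gmr) - log(1.25)) / se) -
         pt( tc, df, ncp = (log(gmr) - log(0.80)) / se))
}
# smallest total n (searching upwards from n1) with >= 80% power
reest <- function(alpha, CV, n1) {
  for (n in seq(n1, 300, 2)) if (pwr(alpha, CV, n) >= 0.8) return(n)
  300
}
tie <- function(nsim, CV, n1, extra = 0, a = 0.0294, mu = log(1.25)) {
  s2 <- log(CV^2 + 1)                        # true log-scale variance
  BE <- log(c(0.80, 1.25))
  pass <- logical(nsim)
  for (i in seq_len(nsim)) {
    df1  <- n1 - 2
    mse1 <- s2 * rchisq(1, df1) / df1        # stage 1 residual variance
    pe1  <- rnorm(1, mu, sqrt(s2 * 2 / n1))  # stage 1 point estimate
    hw   <- qt(1 - a, df1) * sqrt(mse1 * 2 / n1)
    if (pe1 - hw > BE[1] && pe1 + hw < BE[2]) { pass[i] <- TRUE; next }
    CVo <- sqrt(exp(mse1) - 1)               # observed CV at the interim
    if (pwr(a, CVo, n1) >= 0.8) next         # 'enough' power: fail, stop
    n2   <- reest(a, CVo, n1) - n1 + extra   # forced extra subjects here
    df2  <- n2 - 1                           # stage term costs one df
    mse2 <- s2 * rchisq(1, df2) / df2
    pe2  <- rnorm(1, mu, sqrt(s2 * 2 / n2))
    N    <- n1 + n2
    pe   <- (n1 * pe1 + n2 * pe2) / N        # simplistic pooling
    mse  <- (df1 * mse1 + df2 * mse2) / (df1 + df2)
    hw   <- qt(1 - a, df1 + df2) * sqrt(mse * 2 / N)
    pass[i] <- pe - hw > BE[1] && pe + hw < BE[2]
  }
  mean(pass)
}
t.plain <- tie(5000, CV = 0.30, n1 = 28)             # plain 'method B'
t.extra <- tie(5000, CV = 0.30, n1 = 28, extra = 20) # 20 forced extras
c(plain = t.plain, extra20 = t.extra)
```

With extra = 0 this is plain 'method B'; extra = 20 mimics dosing 20 more subjects in stage 2 than re-estimated, so the two empirical TIEs can be compared directly. Take the absolute numbers with a large grain of salt given the simplifications.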
— Regards, Helmut Schütz
d_labes (Hero) · Berlin, Germany · 2017-11-15 10:34 · @ Helmut · Posting: # 17981 · Views: 1,061

Dear Helmut, » My gut feeling tells me: Women are better at gut feelings, NLYW especially. » The adjusted α showed (well, in simulations…) that the TIE is controlled if we follow the frameworks exactly, i.e., initiate the second stage only if necessary based on the interim and with the re-estimated sample size. If we increase the sample size, the chance of passing BE increases and hence, the TIE. Wait, wait… I will be back in 5 min with some quick-shot code to verify your claim. Meanwhile, prepare a decision scheme with two metrics, so that I can see if my implementation is correct. My brain gets crowded if I think about the decision scheme. So many "what ifs". And here you are, a trained scuba diver, as I was told by somebody. — Regards, Detlew
d_labes (Hero) · Berlin, Germany · 2017-11-15 13:15 (edited by d_labes on 2017-11-15 13:47) · @ d_labes · Posting: # 17982 · Views: 1,027

Dear Helmut, dear All, » » The adjusted α showed (well, in simulations…) that the TIE is controlled if we follow the frameworks exactly, i.e., initiate the second stage only if necessary based on the interim and with the re-estimated sample size. If we increase the sample size, the chance of passing BE increases and hence, the TIE. » Wait, wait… I will be back in 5 min with some quick-shot code to verify your claim. The 5 minutes are gone. See Power2Stage on GitHub, function power.tsd.2m(). It's pure Potvin B, abbreviated with implicit power monitoring, i.e., via sample-size estimation, which gives n2=0 in case of 'enough' power. What's missing at the moment is some sort of correlation between the PK metrics, analogous to what is possible in power.2TOST() with the argument rho. Here I don't know at the moment at which place it comes into play. Suggestions are welcome. The function is not public at the moment. Not in the mood to document. Example:

# install from GitHub via
# devtools::install_github("Detlew/Power2Stage")
library(Power2Stage)

Take the results with a grain of salt. I don't know how to validate; suggestions are also welcome here. But I think the results look plausible. Edited the numbers after removing a little bug. — Regards, Detlew
ElMaestro (Hero) · Denmark · 2017-11-14 00:04 · @ d_labes · Posting: # 17976 · Views: 1,174

Hi d_labes, » But that's not the problem Helmut has described, at least as I understand it. Here we initiate a stage 2 from the perspective of the AUC evaluation where it is not necessary to initiate one. Best I can come up with is as follows: The only thing that determines the need for stage 2 is the CV. AUCt generally has a lower CV than Cmax (for the record, I said 'generally', I did not say 'always'). Often the difference is about 5–20 percentage points. So run the sims in the usual fashion, but add 5–20 points to the CV for the power and sample-size estimation. — I could be wrong, but… Best regards, ElMaestro
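ElMaestro's suggestion in numbers: a self-contained base-R sketch using a noncentral-t approximation of TOST power (PowerTOST's sampleN.TOST() gives the exact values) shows what adding 10 percentage points to an AUC CV of 20% does to the sample size:

```r
# approximate TOST power via the noncentral t (PowerTOST gives exact values)
pwr <- function(alpha, CV, n, gmr = 0.95) {
  se <- sqrt(2 / n) * sqrt(log(CV^2 + 1))
  df <- n - 2
  tc <- qt(1 - alpha, df)
  max(0, pt(-tc, df, ncp = (log(gmr) - log(1.25)) / se) -
         pt( tc, df, ncp = (log(gmr) - log(0.80)) / se))
}
n.min <- function(CV) {              # smallest n with >= 80% power
  n <- seq(12, 100, 2)
  n[which(sapply(n, function(x) pwr(0.05, CV, x)) >= 0.8)[1]]
}
n.min(0.20)   # 20 subjects at the 'true' AUC CV of 20%
n.min(0.30)   # after adding 10 points to the CV: considerably more
```

Planning the AUC evaluation with the inflated CV thus drives its sample size towards the one dictated by Cmax, which is exactly the behavior Helmut objects to in the next post.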
Helmut (Hero) · Vienna, Austria · 2017-11-14 00:21 · @ ElMaestro · Posting: # 17977 · Views: 1,184

Hi ElMaestro, » Best I can come up with is as follows: » The only thing that determines the need for stage 2 is the CV. » » So run the sims in the usual fashion, but add 5–20 points to the CV for the power and sample-size estimation. That's not my point. Meditate over my example: we would initiate the second stage with 28 subjects (driven by the CV of C_{max}, 0.30) but should stop after the first stage for AUC (CV 0.20). — Regards, Helmut Schütz
nobody (Senior) · 2017-11-14 08:11 · @ Helmut · Posting: # 17978 · Views: 1,115

IANAL, but I think what SanDiegoman is proposing is to calculate the BE outcome (pass/fail) after stage 1 and compare it to the BE outcome after the (unnecessary) second stage for, let's say, a gazillion studies, and see if there is a meaningful difference. Isn't this a special case of the more general "forced BE" theme in one-stage designs? — Kindest regards, nobody
ElMaestro (Hero) · Denmark · 2017-11-14 12:41 · @ Helmut · Posting: # 17979 · Views: 1,150

Hi Hötzi, » That's not my point. Meditate over my example. We would initiate the second stage with 28 subjects (driven by the CV of C_{max} 0.30) but should stop after the first stage for AUC (CV 0.20). Can you reformulate? I meditated hard and I don't see why my suggestion does not address it. It will treat AUCt as if it had a higher CV for everything except the BE evaluation. That is, if I get your question correctly, exactly what is going on. So perhaps I did not understand you. While I sit in the lotus position, humming gently, do you think you could somehow put other words to the part I don't get? — I could be wrong, but… Best regards, ElMaestro