BEQool
2024-03-04 09:45

 Two-Stage Sequential Potvin Designs [Two-Stage / GS Designs]

Hello all!

Lately I have been reading about Two-Stage Sequential (Potvin method B) Designs and I have some questions.

1. If we don't pass BE in the first stage, why do we have to assume a fixed GMR (i.e. 95% - same as before the 1st stage) when doing sample size estimation for the 2nd stage? Why don't we just use the observed GMR from the 1st stage? Because TIE would increase?
Has anyone ever gone into the 2nd stage with sample size estimation based on observed GMR from the 1st stage and the agency accepted it without any objections?
Additionally, if we use a fixed GMR for both stages, can we also use a GMR of, for example, 92% from the beginning (not exactly GMR = 95%)?

2. I understand the difference between Group Sequential Designs and Adaptive Two-Stage Sequential Designs, but I have trouble understanding what exactly Add-On Designs are. Here Add-On Designs are defined as "Sample sizes of both groups have a lower limit." Could anyone please explain what exactly this means? :-)

3. And the last question: according to this slide there are currently no Two-Stage Sequential methods for replicate designs and designs with 2 formulations.
a) why can't we use a Two-Stage Sequential design for replicate designs (for example 2x2x3)? I assume you have to use the same design in both stages (in the 1st and the 2nd), but other than that, why can't we use it (using an adjusted alpha, of course)?
b) regarding 2 formulations, so there is no way to go into the study with 2 test formulations and then choose the best one to go into the 2nd stage if we don't pass BE in the 1st stage? Of course, alpha should be adjusted both for the 2 test formulations and for the one interim analysis - so I would say a Bonferroni (most conservative) alpha correction with alpha = 1.667% (3 hypotheses) or alpha = 1.25% (4 hypotheses) should be used?

Thank you and best regards
BEQool
Helmut
Vienna, Austria
2024-03-05 14:56
@ BEQool

 TSD, GSD, Add-Ons, etc.

Hi BEQool,

❝ 1. If we don't pass BE in the first stage, why do we have to assume a fixed GMR (i.e. 95% - same as before the 1st stage) when doing sample size estimation for the 2nd stage?

Because the methods (not only Potvin’s) were validated only for a fixed GMR. If you stay within that framework, no simulations of your own are needed.
BTW, I once saw a deficiency letter from a European assessor stating that Potvin’s grid (Method B!) was not narrow enough and an inflated type I error ‘might be observed between the tabulated values’. Bullshit. Excuse my French.

❝ Why don't we just use the observed GMR from the 1st stage? Because TIE would increase?

In this case you have to (a) perform your own simulations covering a wide range of n1 and CV with a reasonably narrow grid in order to find a suitable \(\alpha_\text{adj}\), and (b) convince regulators that this adjustment is justified, i.e., controls the type I error. If you want to give it a try, set usePE = TRUE in the power functions of the package Power2Stage. Don’t forget to assess power and the distribution of expected total sample sizes as well; they might go through the ceiling. You can also consider the alternative *.fC() functions, which allow you to specify a futility criterion on the sample size, the PE, or its CI. It’s hit and miss.
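If you want to see what such an exercise looks like, here is a rough sketch (the n1, CV, and adjusted α below are arbitrary assumptions, not validated values; in a real assessment you would repeat this over a dense grid of n1 and CV):

library(Power2Stage)

# Empirical type I error: simulate with the true GMR at the upper BE limit.
# usePE = TRUE plans stage 2 on the observed point estimate of stage 1.
power.tsd(method = "B", alpha = c(0.0294, 0.0294),  # candidate adjusted alpha (assumption)
          n1 = 24, CV = 0.30, usePE = TRUE,
          theta0 = 1.25, nsims = 1e6)

# Power and the distribution of total sample sizes at an assumed true GMR of 0.95
power.tsd(method = "B", alpha = c(0.0294, 0.0294),
          n1 = 24, CV = 0.30, usePE = TRUE,
          theta0 = 0.95, nsims = 1e5)

The *.fC() variants mentioned above take the futility settings on top of that.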

❝ Has anyone ever gone into the 2nd stage with sample size estimation based on observed GMR from the 1st stage and the agency accepted it without any objections?

Not me.

❝ Additionally, if we use a fixed GMR for both stages, can we also use a GMR of, for example, 92% from the beginning (not exactly GMR = 95%)?

If you have performed your own simulations beforehand (see above), yes.

❝ 2. I understand the difference between Group Sequential Designs and Adaptive Two-Stage Sequential Designs, but I have trouble understanding what exactly Add-On Designs are. Here Add-On Designs are defined as "Sample sizes of both groups have a lower limit." Could anyone please explain what exactly this means? :-)

In a classical GSD you assume a ‘worst case’ CV. Based on that you design the study for N subjects. According to Pocock’s approach you perform one interim analysis at exactly N/2 with an adjusted \(\alpha_\text{adj}\). If you pass BE, you stop. Otherwise, you continue with the second group of N/2 subjects and perform an analysis of the pooled data at the end with the same \(\alpha_\text{adj}\). Drawback: even if you fail only by a small margin in the first group, you have to go full throttle. Ethically and economically questionable, IMHO.
Note that Potvin’s \(\alpha_\text{adj}=0.0294\) in their TSD with sample size re-estimation (not a GSD!) was a lucky punch. That value is for superiority testing in a parallel design with known variances. For equivalence \(\alpha_\text{adj}=0.0304\)… Known variance? Gimme a break.
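Just to illustrate the mechanics with PowerTOST (the ‘worst case’ CV of 0.30, target power of 0.80, and the 2×2×2 design are assumptions; sizing the fixed design at the adjusted level is only a shortcut, not an exact group-sequential calculation):

library(PowerTOST)

alpha.adj <- 0.0294   # Pocock (superiority); ~0.0304 would be the equivalence analogue
CV.worst  <- 0.30     # assumed 'worst case' CV
# Total sample size N if the whole study is run at the adjusted level
N <- sampleN.TOST(alpha = alpha.adj, CV = CV.worst, theta0 = 0.95,
                  targetpower = 0.80, design = "2x2x2",
                  print = FALSE)[["Sample size"]]
n.group <- ceiling(N / 4) * 2   # half of N, rounded up to keep the sequences balanced
# Chance of stopping for success already at the interim analysis
power.TOST(alpha = alpha.adj, CV = CV.worst, theta0 = 0.95,
           n = n.group, design = "2x2x2")
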
Apart from Mexico (?), Add-On designs are history. Japan recommended them for ages, but without adjusting the level of the tests, which led to a massive inflation of the type I error.

❝ 3. And the last question: according to this slide there are currently no Two-Stage Sequential methods for replicate designs and designs with 2 formulations.

Correct. A manuscript was submitted to the AAPS Journal last year but was not accepted. The authors failed to demonstrate control of the type I error. I was not surprised (see below).

❝ a) why can't we use a Two-Stage Sequential design for replicate designs (for example 2x2x3)? I assume you have to use the same design in both stages (in the 1st and the 2nd), but other than that, why can't we use it (using an adjusted alpha, of course)?

I guess you are thinking about scaled average bioequivalence (ABEL or RSABE)? How would you find a suitable \(\alpha_\text{adj}\)? For the type I error you need \(10^6\) simulations. Doable in a 2×2×2 and in a parallel design because there is an exact formula for power. Multiply that by the number of grid points. Takes some time, but OK. In SABE, on the contrary, there is no analytical solution for power, and we need \(10^5\) simulations for the sample size re-estimation at every grid point. See this article. Good luck!
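For a feeling of the numbers, a single grid point of such an assessment in a conventional 2×2×2 TSD (n1 = 24 and CV = 0.30 are arbitrary; even this one call takes a while):

library(Power2Stage)

# One grid point: 1e6 simulated studies with the true GMR at the upper BE limit.
# The sample size re-estimation inside each simulated study uses an analytical
# power calculation - that is what keeps the 2x2x2 case feasible.
power.tsd(method = "B", alpha = c(0.0294, 0.0294),
          n1 = 24, CV = 0.30, GMR = 0.95,
          theta0 = 1.25, nsims = 1e6)

Now imagine having to replace that analytical power step by simulations as well.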

❝ b) regarding 2 formulations, so there is no way to go into the study with 2 test formulations and then choose the best one to go into the 2nd stage if we don't pass BE in the 1st stage? Of course, alpha should be adjusted both for the 2 test formulations and for the one interim analysis - so I would say a Bonferroni (most conservative) alpha correction with alpha = 1.667% (3 hypotheses) or alpha = 1.25% (4 hypotheses) should be used?

If you are adventurous… I did it once, but stopped at the interim for success. It was accepted, but there is no guarantee.

ElMaestro
Denmark
2024-03-05 21:51
@ BEQool

 Two-Stage Sequential Potvin Designs

Hi BEQool,

❝ 1. If we don't pass BE in the first stage, why do we have to assume a fixed GMR (i.e. 95% - same as before the 1st stage) when doing sample size estimation for the 2nd stage? Why don't we just use the observed GMR from the 1st stage? Because TIE would increase?

❝ Has anyone ever gone into the 2nd stage with sample size estimation based on observed GMR from the 1st stage and the agency accepted it without any objections?


My views differ perhaps a bit from Helmut's. I think everybody who played around with TSDs tried it and found out that it does not work.
The reason is very simple:
After the first stage you will very, very often (painfully often!) end up in a scenario where it is impossible to achieve 80% power, or whatever you are aiming for. Of course it depends on the simulated CV and GMR, but this is a very concrete and practical drawback that completely kills any practical use of the observed GMR for planning stage 2.
Remember: power is the same as alpha when you are on the acceptance border. So, at GMR = 0.80 your power can never exceed alpha. Which means that even well within the acceptance borders there are regions of GMR where you cannot achieve 80% power regardless of your sample size. It simply does not work.
I played around with all sorts of ways to circumvent it but didn't come up with anything of practical use. For example, halving the distance from the observed GMR to 1 and plugging that into the sample size calculation: if we get a GMR of 1.28, then we take GMR' = 1.00 + (1.28 - 1.00)/2 = 1.14 and plug it into the sample size calculation. I tried many other variants of that principle. Nothing that I tried works well.
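To see both points in numbers, a quick sketch with PowerTOST (CV = 0.30, the adjusted alpha of 0.0294 and the sample sizes below are arbitrary assumptions for illustration):

library(PowerTOST)

# Power of TOST when the true GMR sits exactly on the acceptance limit:
# it approaches alpha from below but can never exceed it, whatever the n.
sapply(c(24, 48, 96, 192), function(n)
  power.TOST(alpha = 0.0294, CV = 0.30, theta0 = 0.80, n = n, design = "2x2x2"))

# The 'halve the distance to 1' idea for an observed GMR of 1.28:
GMR.obs   <- 1.28
GMR.prime <- 1 + (GMR.obs - 1) / 2   # = 1.14
sampleN.TOST(alpha = 0.0294, CV = 0.30, theta0 = GMR.prime,
             targetpower = 0.80, design = "2x2x2")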

❝ b) regarding 2 formulations, so there is no way to go into the study with 2 test formulations and then choose the best one to go into the 2nd stage if we don't pass BE in the 1st stage? Of course, alpha should be adjusted both for the 2 test formulations and for the one interim analysis - so I would say a Bonferroni (most conservative) alpha correction with alpha = 1.667% (3 hypotheses) or alpha = 1.25% (4 hypotheses) should be used?


There is a paper about running a stage 1 with two test formulations. Then picking the best one and doing stage 2 with it. I think it was some Danish weirdo who published it. That wasn't a replicate design, though.

BEQool
2024-03-10 11:11
@ ElMaestro

 Two-Stage Sequential Potvin Designs

Helmut and ElMaestro,
thank you for your comments and explanations, much is clearer now :ok:

❝ I guess you are thinking about scaled average bioequivalence (ABEL or RSABE)? How would you find a suitable \(\alpha_\text{adj}\)? For the type I error you need \(10^6\) simulations. Doable in a 2×2×2 and in a parallel design because there is an exact formula for power. Multiply that by the number of grid points. Takes some time, but OK. In SABE, on the contrary, there is no analytical solution for power, and we need \(10^5\) simulations for the sample size re-estimation at every grid point. See this article. Good luck!

Yes, I was thinking about ABEL. Thanks for the link to the article; I hadn't come across this postscript, very useful!

❝ There is a paper about running a stage 1 with two test formulations. Then picking the best one and doing stage 2 with it. I think it was some Danish weirdo who published it. That wasn't a replicate design, though.

Thanks, I have found it and read it. Great paper with great explanations!
I just have a question regarding the 1st-stage study (or studies) here. If I am not mistaken, the paper describes two studies in the 1st stage, each with its own test formulation (A or B + reference) ("At the end of the day, even though the two trials at stage 1 are conducted as 2-treatment, 2-sequence, 2-period trials").
Could we also perform one study in the 1st stage with 2 test treatments (so 3-treatment, 6-sequence, 3-period) instead of 2 separate studies? But then probably all of these calculations (alphas, TIE, power...) mentioned in the article wouldn't apply? Or would they?

In the paper you calculated power and alphas for different combinations of CV (0.1-1.0 by 0.1), N (12, 24, 36, 48 and 60) and GMR. So if one goes into the 1st stage with any deviation from these combinations (for example N1 = 20 or CV = 0.25), would one have to perform additional simulations for that combination?

BEQool
ElMaestro
Denmark
2024-03-10 21:18
@ BEQool

 Two-Stage Sequential Potvin Designs

Hi BEQool,

❝ I just have a question regarding the 1st-stage study (or studies) here. If I am not mistaken, the paper describes two studies in the 1st stage, each with its own test formulation (A or B + reference) ("At the end of the day, even though the two trials at stage 1 are conducted as 2-treatment, 2-sequence, 2-period trials").

❝ Could we also perform one study in the 1st stage with 2 test treatments (so 3-treatment, 6-sequence, 3-period) instead of 2 separate studies? But then probably all of these calculations (alphas, TIE, power...) mentioned in the article wouldn't apply? Or would they?


The results in the paper apply to exactly that kind of design (two separate 2×2×2 trials at stage 1). It might not be too far off to assume that a combined design would behave very similarly, but not identically, due to the degrees of freedom. So we'd need new simulations for a design where stage 1 combines two tests and one reference. Certainly this can be done.

❝ In the paper you calculated power and alphas for different combinations of CV (0.1-1.0 by 0.1), N (12, 24, 36, 48 and 60) and GMR. So if one goes into the 1st stage with any deviation from these combinations (for example N1 = 20 or CV = 0.25), would one have to perform additional simulations for that combination?


Potvin's idea, one that was copied in subsequent papers from various authors, is to evaluate a range of CVs and GMRs that covers most naturally occurring combinations. You never know the true GMR or the true CV - you just get some estimates. The grid of combinations (CV, GMR) is then chosen to be dense enough that we are comfortable concluding that, once we have found good alphas, there will be no overall alpha inflation even for a true (CV, GMR) not falling exactly on a grid point. Some like it, some don't.
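As a toy illustration of that grid idea (the grid below is far too coarse and nsims far too low for a real assessment; pBE should be the fraction of simulated studies declared BE, but double-check the component name in the power.tsd man page):

library(Power2Stage)

grid <- expand.grid(n1 = c(12, 24, 36, 48, 60), CV = seq(0.1, 0.5, 0.1))
grid$TIE <- sapply(seq_len(nrow(grid)), function(i)
  power.tsd(method = "B", alpha = c(0.0294, 0.0294),
            n1 = grid$n1[i], CV = grid$CV[i], GMR = 0.95,
            theta0 = 1.25,        # true GMR at the upper limit -> empirical alpha
            nsims = 1e5)$pBE)     # use 1e6 for a real type I error assessment
subset(grid, TIE == max(TIE))     # the worst case over this (coarse) grid
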
Potvin and co-workers later used bounded optimisation to relax the worry anyone might still have. And then those who had worries just found some other reasons to worry, leaving the impression that this is not always about science and the concern for the patient's risk but occasionally about something else entirely.

Helmut
Vienna, Austria
2024-03-10 21:49
@ ElMaestro

 Too coarse grid?

Hi ElMaestro,

❝ Potvin's idea, one that was copied in subsequent papers from various authors, is to evaluate a range of CVs and GMRs that covers most naturally occurring combinations. You never know the true GMR or the true CV - you just get some estimates. The grid of combinations (CV, GMR) is then chosen to be dense enough that we are comfortable concluding that, once we have found good alphas, there will be no overall alpha inflation even for a true (CV, GMR) not falling exactly on a grid point. Some like it, some don't.


The Czech SÚKL didn’t like it last year. I saw a deficiency letter lamenting that “Potvin’s grid was too coarse and it cannot be guaranteed that the type I error is controlled.”
Showed them this one (step size of n1 = 2, step size of CV = 1%). Case closed.

ElMaestro
Denmark
2024-03-10 22:06
@ Helmut

 Too coarse grid?

Hi Hötzi,

❝ Showed them this one (step size of n1 = 2, step size of CV = 1%). Case closed.


Didn't they ask for imbalanced results? :-D

Helmut
Vienna, Austria
2024-03-12 12:40
@ ElMaestro

 Balance…

Hi ElMaestro,

❝ Didn't they ask for imbalanced results? :-D


Oh dear, no! 🤪
So far I have never had to initiate the second stage… I always power the first stage in such a way that I get a reasonably high chance of demonstrating BE there already. Small first stages are stupid.

For the protocol: if one prefers belt plus suspenders, include an R script to keep the pooled data set as balanced as possible.
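Along those lines, a minimal sketch of such a balancing rule (my own toy illustration with made-up numbers, not the actual script):

# Allocate the re-estimated stage-2 subjects so that the pooled per-sequence
# numbers end up as equal as possible, even if stage 1 finished imbalanced.
balance.stage2 <- function(n1.seq, N.est) {   # n1.seq = completers per sequence
  n2 <- max(N.est - sum(n1.seq), 0)           # stage-2 subjects still needed
  n2.seq <- c(0, 0)
  for (i in seq_len(n2)) {                    # hand out one subject at a time,
    k <- which.min(n1.seq + n2.seq)           # always to the smaller sequence
    n2.seq[k] <- n2.seq[k] + 1
  }
  setNames(n2.seq, names(n1.seq))
}
balance.stage2(c(TR = 10, RT = 8), N.est = 40)   # -> TR 10, RT 12 (pooled 20 / 20)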

BEQool
2024-03-11 21:45
@ ElMaestro

 Two-Stage Sequential Potvin Designs

Hello ElMaestro,

thank you for the explanation!

BEQool