Another Two-Stage ‘idea’ (lengthy) [Two-Stage / GS Designs]

posted by Helmut – Vienna, Austria, 2012-12-01 02:50 – Posting: # 9650

Hi simulators!

I stumbled upon another goodie, a presentation at last year’s “AAPS Workshop on Facilitating Oral Product Development and Reducing Regulatory Burden through Novel Approaches to Assess Bioavailability / Bioequivalence”:

Alfredo García-Arieta
Current Issues in BE Evaluation in the European Union
23 October, Washington

“The views expressed in this presentation are the personal views of the author and may not be understood or quoted as being made on behalf or reflecting the position of the Spanish Agency for Medicines and Health Care Products or the European Medicines Agency or one of its committees or working parties.”


[Slide: Two-stage design]

What puzzles me here is the “proposed final sample size”. I don’t understand why performing the second stage in a smaller sample size should violate the consumer’s risk. The contrary was already demonstrated by Potvin et al. for their Method B. Furthermore, how would a sponsor derive the final sample size? This idea smells of Pocock’s original work, which – for one interim analysis – requires the first stage to be ½ of the final size. In my understanding (corrections welcome) the framework might look like this:

[image]
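
In code (a minimal sketch of how I read the slide – the helper names and the max(nest, nprop) rule for stage 2 are my interpretation, not the presenter’s):

import math
from scipy import stats

ALPHA_ADJ = 0.0294                               # Pocock-like level in both stages
THETA1, THETA2 = math.log(0.80), math.log(1.25)  # acceptance range (log scale)

def be_passed(pe, se, df, alpha=ALPHA_ADJ):
    """TOST as the 100(1 - 2 alpha)% CI lying within the acceptance range."""
    t = stats.t.ppf(1 - alpha, df)
    return THETA1 <= pe - t * se and pe + t * se <= THETA2

def stage1_decision(pe1, mse1, n1, n_prop, n_est):
    """pe1, mse1: stage 1 point estimate (log scale) and ANOVA MSE;
    n_prop: the 'proposed final sample size' (2 * n1 a la Pocock);
    n_est: re-estimated total sample size from the observed CV."""
    se1 = math.sqrt(2 * mse1 / n1)               # 2x2 crossover, balanced sequences
    if be_passed(pe1, se1, n1 - 2):
        return "stop: BE shown in stage 1"
    # the twist: top the study up at least to n_prop, even if the
    # re-estimated n_est calls for fewer subjects
    return f"continue: dose {max(n_est, n_prop) - n1} subjects in stage 2"

With n1 12 a proposed final sample size of 24 reproduces Pocock’s requirement that the first stage be ½ of the final size.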

I’ve performed some simulations for CV 20% (see the plots at the end of the post).
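
For anybody who wants to play along, a stripped-down sketch of such a simulation in Python. As usual the point estimate and the MSE are simulated directly rather than subject data; the exact sample-size step (Owen’s Q) and the re-estimation branch are omitted for brevity, so don’t expect to reproduce the table below to the last digit.

import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(20121201)
THETA1, THETA2 = math.log(0.80), math.log(1.25)

def sim_stage(gmr, sigma2, n):
    """PE ~ normal, MSE ~ scaled chi-squared: the standard shortcut."""
    pe  = rng.normal(math.log(gmr), math.sqrt(2 * sigma2 / n))
    mse = sigma2 * rng.chisquare(n - 2) / (n - 2)
    return pe, mse

def passed(pe, mse, n, df, alpha):
    hw = stats.t.ppf(1 - alpha, df) * math.sqrt(2 * mse / n)
    return THETA1 <= pe - hw and pe + hw <= THETA2

def pseudo_pocock(gmr, cv, n1, n_prop, alpha=0.0294):
    """One simulated study; returns (BE shown?, total sample size)."""
    sigma2 = math.log(1 + cv ** 2)
    pe1, mse1 = sim_stage(gmr, sigma2, n1)
    if passed(pe1, mse1, n1, n1 - 2, alpha):
        return True, n1
    n2 = n_prop - n1                 # simplification: always top up to n_prop
    pe2, mse2 = sim_stage(gmr, sigma2, n2)
    # pool the stages; df = ntot - 3 accounts for the stage term in the ANOVA
    pe  = (n1 * pe1 + n2 * pe2) / n_prop
    mse = ((n1 - 2) * mse1 + (n2 - 2) * mse2) / (n_prop - 4)
    return passed(pe, mse, n_prop, n_prop - 3, alpha), n_prop

# empirical patient's risk: simulate at the upper acceptance limit
runs = 100_000                       # 10^6 in the simulations reported below
hits = sum(pseudo_pocock(1.25, 0.20, 12, 24)[0] for _ in range(runs))
print(hits / runs)                   # anything significantly > 0.05 is bad news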

I wonder why so many new “methods” have popped up in recent years (remember this one?) without any published evidence that they maintain the patient’s risk.

Example: αadj 0.0294, CV 20%, GMR 0.95, target power 80%, n1 12 (nprop 24), 10⁶ simulations, exact sample size estimation (Owen’s Q)
                 αemp        1–βemp    E[ntot]  ntot 5% / 50% / 95%  % in stage 2
Method B      :  0.046293    0.841332   20.6     12  /  18  /  40      56.3795
Pseudo Pocock :  0.050903*   0.896622   21.9     12  /  24  /  40      58.6798

* significantly >0.05 (limit 0.05036).
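
The limit itself is nothing mysterious – just the upper bound of the one-sided 95% binomial confidence interval around 0.05 for 10⁶ simulations:

import math
from scipy import stats

# significance limit for the empirical alpha in 10^6 simulations
limit = 0.05 + stats.norm.ppf(0.95) * math.sqrt(0.05 * 0.95 / 1e6)
print(round(limit, 5))               # 0.05036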

Such a method would require simulations for every single study – in order to come up with a proposed final sample size and a suitable αadj. I don’t expect 0.0294 to be generally applicable. Imagine a situation where the proposed sample size is 48 (n1 24) and the expected CV 20% – but actually it turns out to be 25%. Instead of running the second stage in ten subjects (nest – n1 = 34 – 24) we would have to dose another 24 subjects. Of course we could set the proposed size just a little larger than n1 (then we would go with nest most of the time) – but this is not even an “intended Pocock” any more, and we end up with Method B, stripped of the power advantage seen in the simulations. Patient’s risk? No idea, again. Show me simulations, please.
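
If you want to check the arithmetic of this example: a sketch of the sample-size estimation based on the common noncentral-t approximation of TOST power (the simulations above used exact estimation via Owen’s Q; for these inputs both should end up with the same n):

import math
from scipy import stats

def power_tost(alpha, cv, gmr, n):
    """Approximate power of TOST in a 2x2 crossover with n subjects in total."""
    sigma = math.sqrt(math.log(1 + cv ** 2))     # within-subject SD (log scale)
    se    = sigma * math.sqrt(2 / n)
    df    = n - 2
    tcrit = stats.t.ppf(1 - alpha, df)
    ncp1  = (math.log(gmr) - math.log(0.80)) / se
    ncp2  = (math.log(gmr) - math.log(1.25)) / se
    return max(0.0, stats.nct.cdf(-tcrit, df, ncp2) - stats.nct.cdf(tcrit, df, ncp1))

def sample_size(alpha, cv, gmr, target=0.80):
    n = 6
    while power_tost(alpha, cv, gmr, n) < target:
        n += 2                                   # keep the sequences balanced
    return n

print(sample_size(0.0294, 0.25, 0.95))           # 34: nest of the example
print(sample_size(0.0294, 0.20, 0.95))           # 24: the proposed final size
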
Why do we have to waste our time assessing all these “methods”? I would prefer that their inventors do their job. If they have done it already – which I doubt – they should publish it.



[image]
Patient’s risk significantly inflated with n1 12 – although within Potvin’s maximum acceptable inflation (0.052); less conservative than Method B (especially at larger sample sizes in the first stage), where αemp asymptotically approaches the αadj of 0.0294.

[image]
Higher power than Method B because studies proceeding to the second stage with nest < nprop have to be performed in nprop – n1 subjects (instead of in only nest – n1).


Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
