Nonbinding futility rule [Two-Stage / GS Designs]

posted by Ben – 2018-06-15 15:58 – Posting: # 18908

Dear Detlew,

» My first thought was: Set fCpower = 1, which results in not using the power futility criterion. This gives n2=16 for mittyri's example
» interim.tsd.in(GMR1=0.89, CV1=0.2575165, n1=38, fCpower=1).
»
» Your suggestion
» interim.tsd.in(GMR1=0.89, CV1=0.2575165, n1=38, ssr.conditional = "error")
» also gives n2=16. Astonishing or correct?

This is correct. Please note that if fCpower = 1, then (as intended) the futility criterion regarding the power of stage 1 never applies. If you then encounter a scenario where the power of stage 1 is greater than targetpower (this need not happen, but it can), the conditional estimated target power will be negative. We would then have a problem using it as the target power for the sample size recalculation. To prevent this, the function automatically sets the target power for the recalculation to targetpower (which is equivalent to ssr.conditional = "error"). See 'Details' in the man page.
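To see the equivalence directly, here is a minimal sketch (assuming the Power2Stage package is installed; both calls are the ones from your quote):

    library(Power2Stage)
    # fCpower = 1: the power-of-stage-1 futility criterion never applies
    interim.tsd.in(GMR1 = 0.89, CV1 = 0.2575165, n1 = 38, fCpower = 1)
    # ssr.conditional = "error": the recalculation targets targetpower
    # instead of the conditional power
    interim.tsd.in(GMR1 = 0.89, CV1 = 0.2575165, n1 = 38,
                   ssr.conditional = "error")
    # both report n2 = 16 for this interim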

» Avoiding the conditional sample size re-estimation, i.e. using the conventional sample size re-estimation via
» interim.tsd.in(GMR1=0.89, CV1=0.2575165, n1=38, ssr.conditional = "no")
» gives n2=4. Oops? Wow!

I have to think about that :confused:
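In the meantime, a quick sketch to put the three re-estimation modes side by side (mode names as given in the man page; the n2 component of the returned list is assumed per its 'Value' section):

    # compare n2 across the three modes, power futility switched off
    sapply(c("error_power", "error", "no"), function(mode)
      interim.tsd.in(GMR1 = 0.89, CV1 = 0.2575165, n1 = 38,
                     fCpower = 1, ssr.conditional = mode)$n2)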

» IIRC the term "nonbinding" in the context of sequential designs is used for flexibility in stopping or continuing due to external reasons. Do we have something like that here?
For example?

» Binding, nonbinding - does it have an impact on the alpha control? I think not, but I am not totally sure.
Non-binding: The type 1 error is protected even if the futility criterion is ignored.
Binding: The type 1 error is protected only if the futility criterion is adhered to. (Binding futility rules are not common practice; authorities don't want them.)
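The non-binding property can be checked by simulation, e.g. with power.tsd.in() at the BE limit. A sketch (argument names as I recall them from the man page; 10^6 simulations take a while):

    library(Power2Stage)
    # empirical type 1 error with the futility criteria in force (defaults)
    power.tsd.in(CV = 0.2575165, n1 = 38, theta0 = 1.25, nsims = 1e6)
    # with the power futility switched off (fCpower = 1) the empirical
    # alpha should still not exceed 0.05: the futility is non-binding
    power.tsd.in(CV = 0.2575165, n1 = 38, theta0 = 1.25, nsims = 1e6,
                 fCpower = 1)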

Best regards,
Ben.
