Now what? w & w* examples [Two-Stage / GS Designs]

posted by Helmut – Vienna, Austria, 2018-06-11 15:57 – Posting: # 18883

Hi Ben,

❝ I hope there will be applications and more investigations in the future regarding this.


So do I – once we have solved the mystery of finding a “suitable” n1 and specifying “appropriate” weights.
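For the record, this is how the weights are specified in power.tsd.in() of Power2Stage – a single w for the standard inverse-normal combination test, a pair (w, w*) for the maximum combination test. A minimal sketch (CV and n1 are arbitrary placeholders; if no weights are given, the function falls back to its defaults):

library(Power2Stage)
# standard combination test: one weight w for stage 1
power.tsd.in(CV=0.2, n1=12, weight=0.5, max.comb.test=FALSE)
# maximum combination test: weights w and w*
power.tsd.in(CV=0.2, n1=12, weight=c(0.5, 0.25), max.comb.test=TRUE)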

❝ ❝ Some regulatory statisticians told me to prefer a first stage as estimated for a fixed-sample design (i.e., the second stage is solely a ‘safety net’).


❝ Sounds interesting. At first this sounds nice, but I am a bit puzzled about it. "Safety net" sounds like we have a rather good understanding about the CV …


Not necessarily good but a “guesstimate”.

❝ … but in case we observe some unforeseen value we have the possibility to add some extra subjects.

❝ However, in such a case we could just go with a fixed design and adapt the Power.


I’m not sure what you mean here. In a fixed-sample design I would work with the upper CL of the CV or – if that is not available – assume a reasonably higher CV than my original guess, rather than fiddling around with power.
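In code (a quick sketch, reusing the guesstimate and the n = 18 of the example below):

library(PowerTOST)
CVguess <- 0.2                           # guesstimate from a study with n=18
CVupper <- CVCL(CV=CVguess, df=18-2, side="2-sided")[["upper CL"]]
sampleN.TOST(CV=CVguess)                 # fixed design based on the guesstimate
sampleN.TOST(CV=CVupper)                 # fixed design based on its upper CL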

❝ In a TSD setting we typically have no good understanding about the CV... Do I miss something here?


(1) Yep and (2) no.

❝ Based on what assumptions would we select n1 (= fixed design sample size)? We typically have some range of possible values and we don't know where we will be.


I was just quoting a regulatory statistician (don’t want to out him). Others didn’t contradict him. So likely he wasn’t alone with his point of view.

❝ For n1 I would then rather use the lower end of this range. Comments?


Very interesting. I expected that the sample-size penalty (n2) would be higher if we use a low n1. Of course, it all depends on which CV we observe in stage 1.

library(PowerTOST)
library(Power2Stage)
CVguess <- 0.2 # from a fixed design study with n=18
CL      <- CVCL(CV=CVguess, df=18-2, side="2-sided")
# Sample sizes of the fixed design based on the guesstimate CV and its CL
n1.fix  <- sampleN.TOST(CV=CVguess, print=FALSE)[["Sample size"]]
n1.75   <- floor(0.75*n1.fix) + as.integer(0.75*n1.fix) %% 2 # round up to even
n1.lo   <- sampleN.TOST(CV=CL[["lower CL"]], print=FALSE)[["Sample size"]]
n1.hi   <- sampleN.TOST(CV=CL[["upper CL"]], print=FALSE)[["Sample size"]]
# Simulate all variants with the guesstimate as the true CV
x       <- power.tsd.in(CV=CVguess, n1=n1.fix)
ASN.fix <- x$nmean
med.fix <- x$nperc[["50%"]]
p.fix   <- c(x[["pBE_s1"]], x[["pBE"]]) # P(BE) in stage 1 and overall (named, not positional)
pct.fix <- x$pct_s2
x       <- power.tsd.in(CV=CVguess, n1=n1.75)
ASN.75  <- x$nmean
med.75  <- x$nperc[["50%"]]
p.75    <- c(x[["pBE_s1"]], x[["pBE"]])
pct.75  <- x$pct_s2
x       <- power.tsd.in(CV=CVguess, n1=n1.lo)
ASN.lo  <- x$nmean
med.lo  <- x$nperc[["50%"]]
p.lo    <- c(x[["pBE_s1"]], x[["pBE"]])
pct.lo  <- x$pct_s2
x       <- power.tsd.in(CV=CVguess, n1=n1.hi)
ASN.hi  <- x$nmean
med.hi  <- x$nperc[["50%"]]
p.hi    <- c(x[["pBE_s1"]], x[["pBE"]])
pct.hi  <- x$pct_s2
result  <- data.frame(CV=c(rep(CVguess, 2), CL),
                      n1=c(n1.fix, n1.75, n1.lo, n1.hi),
                      CV.obs=rep(CVguess, 4),
                      ASN=c(ASN.fix, ASN.75, ASN.lo, ASN.hi),
                      median=c(med.fix, med.75, med.lo, med.hi),
                      pwr.stg1=c(p.fix[1], p.75[1], p.lo[1], p.hi[1]),
                      pwr=c(p.fix[2], p.75[2], p.lo[2], p.hi[2]),
                      pct.2=c(pct.fix, pct.75, pct.lo, pct.hi))
row.names(result) <- c("like fixed", "75% of fixed", "lower CL", "upper CL")
print(signif(result, 4))

                 CV n1 CV.obs   ASN median pwr.stg1    pwr  pct.2
like fixed   0.2000 20    0.2 21.59     20   0.7325 0.8514 18.840
75% of fixed 0.2000 16    0.2 19.90     16   0.5946 0.8371 33.780
lower CL     0.1483 12    0.2 20.66     16   0.3834 0.8261 55.630
upper CL     0.3084 42    0.2 42.00     42   0.9740 0.9740  0.006

If we base n1 on the lower end and the CV is close to the guesstimate, that’s the winner. On the other hand, there is a ~56% chance of proceeding to the second stage, which is not desirable – and contradicts the concept of a “safety net”. ;-) A compromise would be 75% of the fixed-sample design.
The pessimistic approach (n1 based on the upper CL) would be crazy: we end up with an overpowered study which practically never proceeds to the second stage.
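To check how fragile that ranking is, one could rerun the comparison with an observed CV above the guesstimate (the 0.25 here is an arbitrary choice):

# same n1 candidates as above, simulated with a higher ‘true’ CV
for (n1 in c(n1.fix, n1.75, n1.lo, n1.hi)) {
  x <- power.tsd.in(CV=0.25, n1=n1)
  cat("n1", sprintf("%3i", n1), "ASN", sprintf("%6.2f", x$nmean),
      "pwr", sprintf("%6.4f", x$pBE), "pct.2", sprintf("%6.2f", x$pct_s2), "\n")
}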

❝ More comments on the weights:


Have to chew on that…

❝ Brings us back to: We should plan with a realistic/slightly optimistic scenario.


Seems so.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz
