N sufficiently large‽ [Two-Stage / GS Designs]

posted by Helmut – Vienna, Austria, 2015-12-03 15:56 – Posting: # 15695

Dear Detlew & Ben,

❝ Two-sided or not two-sided, that is the question!


Yessir!

❝ simsalabim, Pocock’s natural constant!


mean(bds2.poc$upper.bounds)
[1] 2.17897

Therefore,

2*(1-pnorm(rep(mean(bds2.poc$upper.bounds), 2)))
[1] 0.02933386 0.02933386
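
For the record: bds2.poc comes from upthread. If I recall it correctly (an assumption – check the earlier posts), it held the Pocock-type spending bounds of ldbounds for a two-sided 0.05:

# Assumed reconstruction of the upthread object bds2.poc:
# Pocock-type alpha-spending, two equally spaced looks, 0.025 per side.
library(ldbounds)
bds2.poc <- bounds(t=c(0.5, 1), iuse=c(2, 2), alpha=rep(0.025, 2))
bds2.poc$upper.bounds        # ~2.157 and ~2.201
mean(bds2.poc$upper.bounds)  # ~2.17897, as above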

Close! Actually:

rep(2*(1-pnorm(2.178)), 2)
[1] 0.02940604 0.02940604

2.178 is from Jennison/Turnbull¹, Table 2.1.

‘Exact’:

library(mvtnorm)
mu    <- c(0, 0)                                 # standard bivariate normal
sigma <- diag(2); sigma[sigma == 0] <- 1/sqrt(2) # corr(Z1, Z2) = sqrt(n1/N) = sqrt(0.5)
C     <- qmvnorm(1-0.05, tail="both.tails", mean=mu,
                 sigma=sigma)$quantile           # equicoordinate two-sided quantile
C
[1] 2.178273
rep(2*(1-pnorm(C)), 2)
[1] 0.0293857 0.0293857
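
As a cross-check (same mu and sigma as above): the overall two-sided error at this C should come back as 0.05 – that is exactly what qmvnorm solved for.

# Sketch: overall two-sided error P(|Z1| >= C or |Z2| >= C)
1 - pmvnorm(lower=rep(-C, 2), upper=rep(C, 2), mean=mu, sigma=sigma)
# ~0.05 (up to the integration error of pmvnorm)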

I think that Kieser/Rauch are correct in their lament about one- vs. two-sided Pocock’s limits. They argue for 0.0304 (which Jones/Kenward² used in chapter 13 as well). Jennison/Turnbull¹ give CP (K = 2, α = 0.10) = 1.875:

rep(1-pnorm(1.875), 2)
[1] 0.03039636 0.03039636

Or

C <- qmvnorm(1-2*0.05, tail="both.tails", mean=mu, # mu, sigma as above
             sigma=sigma)$quantile
C
[1] 1.875424
rep(1-pnorm(C), 2)
[1] 0.03036722 0.03036722
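
One can also solve for the one-sided boundary directly – a sketch; it differs from the two-sided 0.10 detour only by the (negligible) probability of crossing both tails:

# Sketch: find C with P(Z1 < C, Z2 < C) = 1 - 0.05 (one-sided)
C <- qmvnorm(1-0.05, tail="lower.tail", mean=mu, sigma=sigma)$quantile
C                  # ~1.875
rep(1-pnorm(C), 2) # ~0.0304 at each look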

Furthermore:

library(ldbounds)
# Pocock-type spending function, two equally spaced looks, 0.05 per side
C <- mean(bounds(t=c(0.5, 1), iuse=c(2, 2), alpha=rep(0.05, 2))$upper.bounds)
C
[1] 1.875529
rep(1-pnorm(C), 2)
[1] 0.03035998 0.03035998

It’s a mess! (Though the small discrepancies are expected: the Pocock-type α-spending function only approximates the classical Pocock boundary, and the tabulated 1.875 is rounded anyway.)

In chapter 12 Jones/Kenward² (in the context of blinded sample size re-estimation) report an inflation of the TIE (type I error). The degree of inflation depends on the timing of the interim analysis (the earlier, the worse). They state:

“In the presence of Type I error rate inflation, the value of α used in the TOST must be reduced, so that the achieved Type I error rate is no larger than 0.05.”

(my emphasis)
They recommend an iterative algorithm [sic] by Golkowski et al.³ and conclude:

“[…] before using any of the methods […], their operating characteristics should be evaluated for a range of values of n1, CV and true ratio of means that are of interest, in order to decide if the Type I error rate is controlled, the power is adequate and the potential maximum total sample size is not too great.”
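
The basic idea of such an algorithm – shrink the nominal α until the overall TIE hits 0.05 – can be sketched under plain normal theory (two looks at equal information; my toy version, not Golkowski et al.’s actual method):

# Toy sketch (not Golkowski et al.'s algorithm): nominal one-sided alpha
# such that the overall error across two looks is 0.05.
library(mvtnorm)
sigma <- matrix(c(1, sqrt(0.5), sqrt(0.5), 1), 2, 2) # corr(Z1, Z2) = sqrt(0.5)
err   <- function(a.nom) {                           # P(Z1 >= C or Z2 >= C)
  C <- qnorm(1 - a.nom)
  1 - pmvnorm(upper=rep(C, 2), sigma=sigma)
}
uniroot(function(a) err(a) - 0.05, c(0.01, 0.05))$root # ~0.0304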


Given all that, I’m not sure whether the discussion of proofs, exact values, etc. makes sense at all. All this wonderful stuff is based solely on normal theory, and I’m getting bored of reading the phrase “when N is sufficiently large” below a series of fancy formulas. Unless someone comes up with a proof for small samples (many have tried, all have failed so far) I’d rather stick to simulations.
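
To give an idea of what I mean – a quick and dirty sketch only (group-sequential flavor: fixed n2, no sample size re-estimation, stage term ignored; n1 = n2 = 24 and CV 30% are picked purely for illustration):

# Sketch: empirical TIE of a two-stage 2x2 crossover, both stages
# evaluated at Pocock's adjusted alpha = 0.0294, simulated at the
# upper BE limit (true GMR = 1.25).
set.seed(123456)
nsims <- 1e5
alpha <- 0.0294
CV    <- 0.30; sw <- sqrt(log(1 + CV^2)) # within-subject SD (log scale)
theta <- log(1.25)                       # log of the upper BE limit
n1 <- 24; n2 <- 24; n <- n1 + n2
# stage 1: point estimate and variance drawn from their exact distributions
pe1 <- rnorm(nsims, theta, sw*sqrt(2/n1))
v1  <- sw^2*rchisq(nsims, n1-2)/(n1-2)
w1  <- qt(1-alpha, n1-2)*sqrt(2*v1/n1)
pass1 <- (pe1 - w1 > -theta) & (pe1 + w1 < theta)
# stage 2: pool both stages
pe2 <- (n1*pe1 + n2*rnorm(nsims, theta, sw*sqrt(2/n2)))/n
v2  <- (v1*(n1-2) + sw^2*rchisq(nsims, n2-2))/(n-4)
w2  <- qt(1-alpha, n-4)*sqrt(2*v2/n)
pass2 <- (pe2 - w2 > -theta) & (pe2 + w2 < theta)
mean(pass1 | pass2) # empirical TIE; compare with 0.05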


  1. Jennison C, Turnbull BW. Group Sequential Methods with Applications to Clinical Trials. Boca Raton: Chapman & Hall/CRC; 1999.
  2. Jones B, Kenward MG. Design and Analysis of Cross-Over Trials. 3rd ed. Boca Raton: Chapman & Hall/CRC; 2014.
  3. Golkowski D, Friede T, Kieser M. Blinded sample size reestimation in crossover bioequivalence trials. Pharm Stat. 2014;13(3):157–62. doi:10.1002/pst.1617.

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
