Adaptive TSD vs. “classical” GSD [Two-Stage / GS Designs]
❝ […] the TSD from Potvin et al. appears to have astonishing design features. The classical GSD or the adaptive two-stage design according to the inverse-normal method rely on a formal statistical framework: mathematical theorems including proofs are available on why they work, what properties they have, and how they should be applied. This is nice. For the Potvin approach we only have simulations for certain scenarios at hand. Even though it appears to be good, it is not clear whether this is always the case. More information on that topic, with some elaborations, is contained e.g., in the article by Kieser and Rauch (2015).
I agree that the frameworks of Potvin et al. are purely empirical. Showing whether a given α maintains the TIE for a desired range of n1/CV and target power takes 30 minutes in Power2Stage (a quick sketch follows). I’m not sure whether the two lines in Kieser/Rauch fulfill the requirements of a formal proof. IMHO, it smells more of a claim. At least Gernot Wassmer told me that it is not that easy.
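A minimal sketch of such a check, under my own illustrative assumptions (a deliberately coarse grid of n1 and CV; simulating at theta0 = 1.25, i.e., at the upper BE limit, gives the empirical TIE):

library(Power2Stage)
# upper one-sided 95% binomial confidence limit of 0.05 for 1e6 simulations;
# empirical TIEs above this limit signal a significant inflation
sig <- binom.test(x=0.05*1e6, n=1e6, alternative="less")$conf.int[2]
for (n1 in c(12, 24, 36)) {        # coarse grid of stage 1 sample sizes …
  for (CV in c(0.2, 0.4, 0.6)) {   # … and CVs; a real check is much finer
    TIE <- power.2stage(method="B", alpha=rep(0.0294, 2), n1=n1, CV=CV,
                        theta0=1.25, nsims=1e6, details=FALSE)$pBE
    cat(sprintf("n1 %2d, CV %.1f: TIE %.5f %s\n",
                n1, CV, TIE, ifelse(TIE > sig, "(inflated!)", "(ns)")))
  }
}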
❝ ❝ In a TSD one would opt for a stage 1 sample size of ~75% of the fixed sample design.
❝ Reference? Some software packages give an inflation factor that helps in determining the study size… Anyhow, I think such a rule of thumb is too strict and inflexible.
See the discussion in my review, Table 3 in the Supplementary Material, and the R-code at the end. Example output of the first script:
GMR              : 0.95
target power     : 0.8
‘best guess’ CV  : 0.2
Fixed sample size: 20
  power          : 0.835
‘Type 1’ TSD, n1 : 16 (80.0% of N)
  ASN=E[N]       : 20.0
  power (interim): 0.619
  power (final)  : 0.851
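As to the inflation factor you mention above, a sketch of how one could obtain it, assuming the gsDesign package (k = 2 stages, test.type = 1 for a one-sided test, Pocock-type spending, interim at 50% of the information):

library(gsDesign)
# with n.fix = 1 the cumulative sample sizes are reported relative to
# the fixed design; the last element of n.I is the inflation factor
d <- gsDesign(k=2, test.type=1, alpha=0.05, beta=0.2,
              sfu="Pocock", n.fix=1)
d$n.I   # final value (~1.1) multiplies the fixed sample size

Multiplying the fixed sample size by the final factor (and rounding up to a multiple of the number of sequences) gives the total N of such a GSD.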
❝ Consider, for example, two alternative scenarios:
❝ ● Pre-planned n1 = 52 and final N = 78 (i.e., n2 = 26). The average sample number (ASN) is smaller than for the Potvin TSD. Power is higher up to a certain point where the CV gets too high.
Hhm. See the code at the end; I tried to implement your suggestions. The odd-looking CVs in the tables (44.19%, 49.65%) are the ones at which a fixed design with N = n1 + n2 achieves exactly the target power of 80%.
CV%    method    alpha[1]  alpha[2]  ASN=E[N]  power    TIE
30.00  GSD       0.03817   0.02704     55.2    0.9631   0.05009  ns
30.00  Potvin B  0.02940   0.02940     52.4    0.8673   0.03062  ns
40.00  GSD       0.03817   0.02704     61.2    0.8236   0.05026  ns
40.00  Potvin B  0.02940   0.02940     64.5    0.8283   0.04396  ns
44.19  GSD       0.03817   0.02704     64.0    0.7492   0.04995  ns
44.19  Potvin B  0.02940   0.02940     76.2    0.8250   0.04725  ns
50.00  GSD       0.03817   0.02704     67.7    0.6365   0.04980  ns
50.00  Potvin B  0.02940   0.02940     99.1    0.8211   0.04841  ns
60.00  GSD       0.03817   0.02704     73.3    0.4224   0.04432  ns
60.00  Potvin B  0.02940   0.02940    151.4    0.8079   0.04253  ns
Type I Error? Controlled in both approaches: none of the simulated TIEs significantly exceeds 0.05 (see the last column).
❝ ● Pre-planned n1 = 48, n2 = 48. ASN comparable, power similar to the above.
CV%    method    alpha[1]  alpha[2]  ASN=E[N]  power    TIE
30.00  GSD       0.03101   0.02973     56.5    0.9858   0.05004  ns
30.00  Potvin B  0.02940   0.02940     48.9    0.8535   0.03202  ns
40.00  GSD       0.03101   0.02973     69.6    0.8927   0.04996  ns
40.00  Potvin B  0.02940   0.02940     64.0    0.8290   0.04540  ns
49.65  GSD       0.03101   0.02973     82.2    0.7451   0.04841  ns
49.65  Potvin B  0.02940   0.02940    100.4    0.8173   0.04821  ns
50.00  GSD       0.03101   0.02973     82.6    0.7382   0.04840  ns
50.00  Potvin B  0.02940   0.02940    102.0    0.8184   0.04811  ns
60.00  GSD       0.03101   0.02973     91.9    0.5492   0.03998  ns
60.00  Potvin B  0.02940   0.02940    154.6    0.8025   0.03988  ns
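In both tables the last column flags whether a simulated TIE significantly exceeds 0.05; with 10⁶ simulations the limit is the upper confidence limit of the exact binomial test (cf. the R-code at the end):

sig <- binom.test(x=0.05*1e6, n=1e6, alternative="less")$conf.int[2]
sig   # ~0.0504; TIEs at or below are flagged “ns”, larger ones “*”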
![[image]](img/uploaded/image358.png)
Well…
❝ Therefore, I think the GSD has some charm and can be useful in situations with uncertainty.
If (if!) you have some clue about the variability.
❝ Moreover, the advantage is that we do not have to rely only on simulation results from certain parameter settings.
30 minutes.

I will again chew on the e-mail conversation we had last April.
R-codes
1. Find n1 for TSDs based on a ‘best guess’ CV.
library(PowerTOST)
library(Power2Stage)
method      <- "B"                # Potvin ‘Method B’ (a ‘Type 1’ TSD)
alpha       <- rep(0.0294, 2)     # adjusted alpha of both stages
GMR         <- 0.95
targetpower <- 0.8
CV          <- 0.2                # ‘best guess’ CV
methods     <- c("B", "C")
types       <- c("\u2018Type 1\u2019", "\u2018Type 2\u2019")
stg1 <- function(x) {             # ASN = E[N] of the TSD as a function of n1
  power.2stage(method=method, alpha=alpha, n1=x, CV=CV,
               theta0=GMR, targetpower=targetpower)$nmean
}
fix <- sampleN.TOST(CV=CV, targetpower=targetpower,
                    theta0=GMR, details=F, print=F)
N   <- fix[["Sample size"]]
pwr <- fix[["Achieved power"]]
n1  <- round(optimize(stg1, interval=c(12, N),  # n1 minimizing E[N]
                      tol=0.1)$minimum, 0)
n1  <- n1 + n1%%2                 # round an odd n1 up to the next even number
res <- power.2stage(method=method, n1=n1, CV=CV,
                    alpha=alpha, theta0=GMR,
                    targetpower=targetpower, details=F)
cat("\nGMR              :", GMR,
    "\ntarget power     :", targetpower,
    "\n\u2018best guess\u2019 CV  :", CV,
    "\nFixed sample size:", N,
    sprintf("%s %.3f", "\n  power          :", pwr),
    sprintf("%s %d %s%.1f%% %s",
            paste0("\n", types[match(method, methods)], " TSD, n1 :"),
            n1, "(", 100*n1/N, "of N)"),
    sprintf("%s %.1f", "\n  ASN=E[N]       :", res$nmean),
    sprintf("%s %.3f", "\n  power (interim):", res$pBE_s1),
    sprintf("%s %.3f", "\n  power (final)  :", res$pBE), "\n")
2. Comparison of GSD and TSD
library(ldbounds)
library(PowerTOST)
library(Power2Stage)
n1 <- 52 # scenario 1; for scenario 2 set n1 <- 48
n2 <- 26 # scenario 1; for scenario 2 set n2 <- 48
# CV at which a fixed design with N = n1 + n2 gives exactly 80% power
findCV <- function(x) power.TOST(CV=x, n=n1+n2) - 0.8
t     <- c(n1/(n1+n2), 1)                 # information fractions
bnds  <- bounds(t=t, iuse=2, alpha=0.05)  # Pocock-type spending function
alpha <- 1 - pnorm(bnds$upper.bounds)     # nominal alphas of the GSD
CVest <- uniroot(findCV, interval=c(0.01, 3), tol=1e-7)$root
CV    <- sort(c(seq(0.3, 0.6, 0.1), CVest))
# significance limit: upper 95% CL of the exact binomial test (1e6 sims)
sig   <- binom.test(x=0.05*1e6, n=1e6, alternative="less",
                    conf.level=1-0.05)$conf.int[2]
res <- matrix(data=NA, nrow=length(CV)*2, ncol=8, byrow=TRUE,
              dimnames=list(NULL, c("CV%", "method", "alpha[1]",
                                    "alpha[2]", "ASN=E[N]", "power",
                                    "TIE", " ")))
k <- 0
for (j in seq_along(CV)) {
  k <- k + 1
  # TIE simulated at theta0 = 1.25 (upper BE limit), power at theta0 = 0.95
  GSD.TIE <- power.2stage.GS(alpha=alpha, n=c(n1, n2), CV=CV[j],
                             theta0=1.25, nsims=1e6, details=FALSE)
  GSD.pwr <- power.2stage.GS(alpha=alpha, n=c(n1, n2), CV=CV[j],
                             theta0=0.95, nsims=1e5, details=FALSE)
  Pot.TIE <- power.2stage(alpha=rep(0.0294, 2), n1=n1, CV=CV[j],
                          theta0=1.25, nsims=1e6, details=FALSE)
  Pot.pwr <- power.2stage(alpha=rep(0.0294, 2), n1=n1, CV=CV[j],
                          theta0=0.95, nsims=1e5, details=FALSE)
  res[k, 1] <- sprintf("%.2f", CV[j]*100)
  res[k, 2] <- "GSD"
  res[k, 3] <- sprintf("%.5f", alpha[1])
  res[k, 4] <- sprintf("%.5f", alpha[2])
  # E[N] of the GSD: n1 plus n2 weighted by the fraction going to stage 2
  res[k, 5] <- sprintf("%.1f", (1-GSD.pwr$pct_s2/100)*n1 +
                               (GSD.pwr$pct_s2/100)*(n1+n2))
  res[k, 6] <- sprintf("%.4f", GSD.pwr$pBE)
  res[k, 7] <- sprintf("%.5f", GSD.TIE$pBE)
  res[k, 8] <- ifelse(GSD.TIE$pBE <= sig, "ns", "*")
  k <- k + 1
  res[k, 1] <- sprintf("%.2f", CV[j]*100)
  res[k, 2] <- "Potvin B"
  res[k, 3:4] <- rep(sprintf("%.5f", 0.0294), 2)
  res[k, 5] <- sprintf("%.1f", Pot.pwr$nmean)
  res[k, 6] <- sprintf("%.4f", Pot.pwr$pBE)
  res[k, 7] <- sprintf("%.5f", Pot.TIE$pBE)
  res[k, 8] <- ifelse(Pot.TIE$pBE <= sig, "ns", "*")
}
print(as.data.frame(res), row.names=FALSE)