Deviating from assumptions [Power / Sample Size]
Dear all,
In this recent post I claimed that drop-outs have a lower impact on the power of a study than the CV and – especially – the GMR; the order of sensitivity is sample size < CV ≪ GMR. I prepared some R-code (given at the end) which allows a quick comparison.
THX to Detlew for reminding me about R's uniroot().
Two examples showing the influence of actual values deviating from assumptions (I have chosen a minimum power of 70% for the actual study):
- GMR 0.95, CV 20%, target power 90%, 2×2 cross-over design:
We estimate a sample size of 26 subjects and could expect ~92% power if all assumptions were realized in the study.
min. power (%)   parameter   value    rel. change (%)
──────────────────────────────────────────────────────
70               CV          27.29    +36.4
70               GMR          0.9044   -4.80
73.54            n           16       -38.5
Power is least sensitive to the sample size (with 15 subjects we would get 69.93% power, a relative change of -42.3%), followed by the CV. The GMR is the nastiest one. (A short cross-check of these numbers in R follows after the second example.)
- GMR 1.05, CV 34%, target power 80%, parallel design:
We estimate a sample size of 94 subjects (47 per group) and could expect ~80% power if all assumptions were realized in the study.
min. power (%)   parameter   value    rel. change (%)
──────────────────────────────────────────────────────
70               CV          38.47    +13.1
70               GMR          1.076    +2.48
70.12            n           75       -20.2
Again the sensitivity is GMR ≫ CV > sample size.
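As a quick cross-check of both examples, here is a minimal sketch using only sampleN.TOST(), power.TOST(), and power2.TOST() from PowerTOST; the values in the comments are the ones quoted above, so the calls should return roughly these numbers (the 38/37 split mirrors the drop-out workaround of the script below):

library(PowerTOST)
# example 1: GMR 0.95, CV 20%, target power 90%, 2x2 cross-over
sampleN.TOST(CV=0.20, theta0=0.95, targetpower=0.90, design="2x2")      # n = 26, ~92% power
power.TOST(CV=0.2729, theta0=0.95, n=26, design="2x2")                  # CV 27.29%  -> ~0.70
power.TOST(CV=0.20, theta0=0.9044, n=26, design="2x2")                  # GMR 0.9044 -> ~0.70
power.TOST(CV=0.20, theta0=0.95, n=16, design="2x2")                    # n 16       -> ~0.735
# example 2: GMR 1.05, CV 34%, target power 80%, parallel design
sampleN.TOST(CV=0.34, theta0=1.05, targetpower=0.80, design="parallel") # n = 94, ~80% power
power.TOST(CV=0.3847, theta0=1.05, n=94, design="parallel")             # CV 38.47%  -> ~0.70
power.TOST(CV=0.34, theta0=1.076, n=94, design="parallel")              # GMR 1.076  -> ~0.70
power2.TOST(CV=0.34, theta0=1.05, n=c(38, 37), design="parallel")       # n 75       -> ~0.70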
In real studies all potential influences occur simultaneously. In the first example, both GMR = 0.94, CV = 23%, n = 24 and GMR = 0.93, CV = 20%, n = 23 would result in ~80% power.
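These combined scenarios can be checked the same way (again a sketch; 23 subjects are split 12/11 as in the script's drop-out workaround, and both calls should give roughly 0.80 per the figures above):

power.TOST(CV=0.23, theta0=0.94, n=24, design="2x2")          # GMR 0.94, CV 23%, n 24
power2.TOST(CV=0.20, theta0=0.93, n=c(12, 11), design="2x2")  # GMR 0.93, CV 20%, n 23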
I know some CROs whose PIs notoriously try to “convince” volunteers not to leave the study because they are afraid of losing power.
Such a practice is not only unethical but also completely unjustified!
R-code (developed in R 3.1.6 64-bit, PowerTOST 1.1-12):

CV <- 0.20
GMR <- 0.95
pwr.target <- 0.90 # wishful thinking...
pwr.min <- 0.70 # we don't want to drop below that...
des <- "2x2" # call known.designs() to see PowerTOST's designs;
             # only designs with two sequences and "parallel"
             # are currently implemented for drop-outs
####################################################################
# Don't modify below this line unless you know what you are doing. #
####################################################################
designs <- c("2x2", "2x2x2", "2x2x3", "2x2x4", "parallel")
require(PowerTOST)
pwrCV <- function(x, ...) { power.TOST(CV=x, n=n.est, theta0=GMR, design=des)-pwr.min }
pwrGMR <- function(x, ...) { power.TOST(theta0=x, n=n.est, CV=CV, design=des)-pwr.min }
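# pwrCV() and pwrGMR() are the objective functions handed to uniroot() below:
# they return power minus the minimum acceptable power, so their root is the
# CV (or GMR) at which power drops exactly to pwr.min.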
res <- sampleN.TOST(CV=CV, theta0=GMR, targetpower=pwr.target, design=des,
print=F, details=F)
n.est <- res[1, "Sample size" ]
pwr.est <- res[1, "Achieved power"]
seg <- 50; segs <- seq(seg-1) # seg: points per curve; segs: indices of the colour segments
########################################
# max. CV for minimum acceptable power #
########################################
CV.max <- uniroot(pwrCV, c(CV, 10*CV), tol=1e-7, n=n.est, theta0=GMR)$root
CVs <- seq(CV, CV.max, length.out=seg)
pBECV <- NULL
for(j in 1:length(CVs)) {
pBECV <- c(pBECV, 100*power.TOST(CV=CVs[j], n=n.est, theta0=GMR, design=des))
}
######################################
# min. GMR for minimum accept. power #
######################################
if (GMR <= 1) int <- c(0.8, 1) else int <- c(GMR, 1.25)
GMR.min <- uniroot(pwrGMR, int, tol=1e-7, n=n.est, CV=CV, design=des)$root
GMRs <- seq(GMR.min, GMR, length.out=seg)
pBEGMR <- NULL
for(j in 1:length(GMRs)) {
pBEGMR <- c(pBEGMR, 100*power.TOST(CV=CV, n=n.est, theta0=GMRs[j], design=des))
}
####################################
# min. n for minimum accept. power #
# workaround, since uniroot() does #
# not accept two vectors as limits #
####################################
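# Instead of uniroot(), subjects are removed one at a time; each reduced total
# is split into (nearly) balanced sequence/group sizes n1 >= n2 and power is
# recomputed with power2.TOST() until it falls below pwr.min.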
if(des %in% designs) {
Ns <- seq(n.est, 12, by=-1)
j <- 0
pwrN <- pwr.est
n.min <- NULL; pBEn <- NULL
while(pwrN >= pwr.min) {
j <- j+1
n1 <- Ns[j]-floor(Ns[j]/2)
n2 <- Ns[j]-n1
n <- c(n1, n2)
pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
if(pwrN >= pwr.min) {
n.min <- c(n.min, sum(n))
pBEn <- c(pBEn, 100*pwrN)
} else {
break
}
}
}
op <- par(no.readonly=TRUE) # save plotting parameters for restoring them later
par(mar=c(4, 4, 2.5, 0.5)+0.1) # default for B, L, T, R: c(5, 4, 4, 2) + 0.1
clr <- colorRampPalette(c("blue", "red"))(seg)
split.screen(c(2, 2))
screen(1)
plot(100*CVs, pBECV, type="n", main=paste0("Higher variability\n",
"GMR = ", GMR, ", n = ", n.est), lwd=2, xlab="CV%", ylab="% power", las=1, cex.main=1)
abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
segments(100*CVs[segs], pBECV[segs], 100*CVs[segs+1], pBECV[segs+1], lwd=2, col=clr[segs])
points(100*CVs[1], pBECV[1], col=clr[1], pch=16, cex=1.25)
points(100*CVs[seg], pBECV[seg], col=clr[seg], pch=16, cex=1.25)
text(100*CV, 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("CV = ",
signif(100*CV.max, 4), "% (", round(100*pwr.min, 2), "%)"), cex=0.9, pos=4)
screen(2)
clr <- rev(clr)
plot(GMRs, pBEGMR, type="n", main=paste0("Larger deviation from 1\n",
"CV = ", 100*CV, "%, n = ", n.est), lwd=2, xlim=c(GMR, GMR.min),
xlab="GMR", ylab="% power", las=1, cex.main=1)
abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
segments(GMRs[segs], pBEGMR[segs], GMRs[segs+1], pBEGMR[segs+1], lwd=2, col=clr[segs])
points(GMRs[1], pBEGMR[1], col=clr[1], pch=16, cex=1.25)
points(GMRs[seg], pBEGMR[seg], col=clr[seg], pch=16, cex=1.25)
text(GMR, 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("GMR = ",
signif(GMR.min, 4), " (", round(100*pwr.min, 2), "%)"), cex=0.9, pos=4)
screen(3)
if(des %in% designs) {
plot(n.min, pBEn, type="n", main=paste0("Drop-outs\n",
"GMR = ", GMR, ", CV = ", 100*CV, "%"), lwd=2, xlim=c(max(n.min), min(n.min)),
ylim=c(100*pwr.min, 100*pwr.est), xlab="n", ylab="% power", las=1, cex.main=1)
abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
clr <- colorRampPalette(c("blue", "red"))(length(n.min))
points(n.min, pBEn, pch=16, cex=0.8, col=clr)
points(n.min[1], pBEn[1], col=clr[1], pch=16, cex=1.25)
points(n.min[length(n.min)], pBEn[length(n.min)], col=clr[length(n.min)],
pch=16, cex=1.25)
text(max(n.min), 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("n = ",
min(n.min), " (", signif(min(pBEn), 4), "%)"), cex=0.9, pos=4)
} else {
plot(1, type="n", axes=F, xlab="", ylab="")
legend("topleft", legend=c(paste0("Note:\nAnalysis of drop-outs",
"\nnot implemented for\n", des, " design.")), bty="n", cex=1)
}
screen(4)
plot(1, type="n", axes=F, xlab="", ylab="")
legend("topleft", legend=c(paste0("design: ", des), "assumed values:",
paste0(" CV = ", round(100*CV, 2), "%, GMR = ", round(GMR, 4)), "power:",
paste0(" target = ", round(100*pwr.target, 2), "%"),
paste0(" expected = ", signif(100*pwr.est, 4), "% (n = ", n.est, ")"),
paste0(" minimum acceptable = ", round(100*pwr.min, 2), "%")), bty="n", cex=1)
close.screen(all=T)
par(op) # restore the previously saved plotting parameters
cat(paste0("\ndesign: ", des, "\nCV=", 100*CV, "%, GMR=", 100*GMR, "%, n=",
n.est, ", expected power=", signif(100*pwr.est, 4), "% (>target of ",
100*pwr.target, "%)\n\n"))
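To explore another scenario only the assignments at the top of the script need to be changed; e.g., the settings below should reproduce the parallel-design example given above (the rest of the script is left untouched):

CV <- 0.34
GMR <- 1.05
pwr.target <- 0.80
pwr.min <- 0.70
des <- "parallel"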
—
Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes