Helmut (Vienna, Austria) – 2014-08-08 17:44 – Posting # 13353

Deviating from assumptions [Power / Sample Size]

Dear all,

in this recent post I claimed that drop-outs have a lower impact on a study’s power than the CV and – especially – the GMR. The order of impact is sample size < CV ≪ GMR. I prepared some R code (given at the end) which allows a quick comparison.
THX to Detlew for reminding me about R’s uniroot().

Two examples showing the influence of actual values deviating from assumptions (I have chosen a minimum power of 70% for the actual study):
  1. GMR 0.95, CV 20%, target power 90%, 2×2 cross-over design:
    We estimate a sample size of 26 subjects and could expect ~92% power if all assumptions were realized in the study.

    [image]

    min. power (%)  parameter   value     rel. change (%)
    ─────────────────────────────────────────────────────
         70            CV       27.29%        +36.4
         70            GMR       0.9044       -4.80
         73.54          n       16            -38.5


    The sample size is the least sensitive parameter (with 15 subjects we would get 69.93% power, a rel. change of -42.3%), followed by the CV. The GMR is the nastiest one.

  2. GMR 1.05, CV 34%, target power 80%, parallel design:
    We estimate a sample size of 94 subjects (47 per group) and could expect ~80% power if all assumptions were realized in the study.

    [image]

    min. power (%)  parameter   value     rel. change (%)
    ─────────────────────────────────────────────────────
         70            CV       38.47%        +13.1
         70            GMR       1.076        +2.48
         70.12          n       75            -20.2


    Again the sensitivity is GMR ≫ CV > sample size.
This little exercise is not a substitute for the sensitivity analysis which should be performed in study planning according to ICH E9 (see, for instance, this presentation, slides 49–53), but it supports the fact that drop-outs influence the study’s power to the least extent compared to the other parameters.
In real studies all potential influences occur simultaneously. In the first example, either GMR=0.94, CV=23%, n=24 or GMR=0.93, CV=20%, n=23 will result in ~80% power.
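
If you want to double-check these figures yourself, the following calls (everything else kept at the values given above) should reproduce the table of the first example and the ~80% combinations within rounding:

require(PowerTOST)
# single deviations (n = 26 kept from the estimate)
power.TOST(CV=0.2729, theta0=0.95, n=26, design="2x2")        # ~0.70
power.TOST(CV=0.20, theta0=0.9044, n=26, design="2x2")        # ~0.70
power.TOST(CV=0.20, theta0=0.95, n=16, design="2x2")          # ~0.7354
# simultaneous (moderate) deviations as given above
power2.TOST(CV=0.23, theta0=0.94, n=c(12, 12), design="2x2")  # ~0.80
power2.TOST(CV=0.20, theta0=0.93, n=c(12, 11), design="2x2")  # ~0.80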

I know some CROs whose PIs notoriously try to “convince” volunteers not to leave the study, being afraid of losing power.
Such a practice is not only unethical but completely unjustified!


R-code (developed in R 3.1.6 64-bit, PowerTOST 1.1-12):

CV         <- 0.20
GMR        <- 0.95
pwr.target <- 0.90  # wishful thinking...
pwr.min    <- 0.70  # we don't want to drop below that...
des        <- "2x2" # call known.designs() to see PowerTOST's
                    # only designs with two sequences and "parallel"
                    # are implemented yet for drop-outs
####################################################################
# Don't modify below this line unless you know what you are doing. #
####################################################################
designs <- c("2x2", "2x2x2", "2x2x3", "2x2x4", "parallel")
require(PowerTOST)
pwrCV  <- function(x, ...) { power.TOST(CV=x, n=n.est, theta0=GMR, design=des)-pwr.min }
pwrGMR <- function(x, ...) { power.TOST(theta0=x, n=n.est, CV=CV, design=des)-pwr.min }
res    <- sampleN.TOST(CV=CV, theta0=GMR, targetpower=pwr.target, design=des,
  print=F, details=F)
n.est   <- res[1, "Sample size"   ]
pwr.est <- res[1, "Achieved power"]
seg     <- 50; segs <- seq(seg-1)
########################################
# max. CV for minimum acceptable power #
########################################
CV.max  <- uniroot(pwrCV,  c(CV, 10*CV), tol=1e-7, n=n.est, theta0=GMR)$root
CVs <- seq(CV, CV.max, length.out=seg)
pBECV <- NULL
for(j in 1:length(CVs)) {
  pBECV <- c(pBECV, 100*power.TOST(CV=CVs[j], n=n.est, theta0=GMR, design=des))
}
######################################
# min. GMR for minimum accept. power #
######################################
if(GMR <= 1) int <- c(0.8, 1) else int <- c(GMR, 1.25)
GMR.min <- uniroot(pwrGMR, int, tol=1e-7, n=n.est, CV=CV, design=des)$root
GMRs <- seq(GMR.min, GMR, length.out=seg)
pBEGMR <- NULL
for(j in 1:length(GMRs)) {
  pBEGMR <- c(pBEGMR, 100*power.TOST(CV=CV, n=n.est, theta0=GMRs[j], design=des))
}
####################################
# min. n for minimum accept. power #
# workaround, since uniroot() does #
# not accept two vectors as limits #
####################################
if(des %in% designs) {
  Ns <- seq(n.est, 12, by=-1)
  j <- 0
  pwrN <- pwr.est
  n.min <- NULL; pBEn <- NULL
  while(pwrN >= pwr.min) {
    j <- j+1
    n1 <- Ns[j]-floor(Ns[j]/2)
    n2 <- Ns[j]-n1
    n <- c(n1, n2)
    pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
    if(pwrN >= pwr.min) {
      n.min <- c(n.min, sum(n))
      pBEn <- c(pBEn, 100*pwrN)
    } else {
      break
    }
  }
}
op <- par(no.readonly=TRUE) # save current settings for later restoration
par(mar=c(c(4, 4, 2.5, 0.5))+0.1) # default for B, L, T, R: c(5, 4, 4, 2) + 0.1
clr <- colorRampPalette(c("blue", "red"))(seg)
split.screen(c(2, 2))
screen(1)
plot(100*CVs, pBECV, type="n", main=paste0("Higher variability\n",
  "GMR = ", GMR, ", n = ", n.est), lwd=2, xlab="CV%", ylab="% power", las=1, cex.main=1)
abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
segments(100*CVs[segs], pBECV[segs], 100*CVs[segs+1], pBECV[segs+1], lwd=2, col=clr[segs])
points(100*CVs[1], pBECV[1], col=clr[1], pch=16, cex=1.25)
points(100*CVs[seg], pBECV[seg], col=clr[seg], pch=16, cex=1.25)
text(100*CV, 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("CV = ",
  signif(100*CV.max, 4), "% (", round(100*pwr.min, 2), "%)"), cex=0.9, pos=4)
screen(2)
clr <- rev(clr)
plot(GMRs, pBEGMR, type="n", main=paste0("Larger deviation from 1\n",
  "CV = ", 100*CV, "%, n = ", n.est), lwd=2, xlim=c(GMR, GMR.min),
  xlab="GMR", ylab="% power", las=1, cex.main=1)
abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
segments(GMRs[segs], pBEGMR[segs], GMRs[segs+1], pBEGMR[segs+1], lwd=2, col=clr[segs])
points(GMRs[1], pBEGMR[1], col=clr[1], pch=16, cex=1.25)
points(GMRs[seg], pBEGMR[seg], col=clr[seg], pch=16, cex=1.25)
text(GMR, 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("GMR = ",
  signif(GMR.min, 4), " (", round(100*pwr.min, 2), "%)"), cex=0.9, pos=4)
screen(3)
if(des %in% designs) {
  plot(n.min, pBEn, type="n", main=paste0("Drop-outs\n",
    "GMR = ", GMR, ", CV = ", 100*CV, "%"), lwd=2, xlim=c(max(n.min), min(n.min)),
    ylim=c(100*pwr.min, 100*pwr.est), xlab="n", ylab="% power", las=1, cex.main=1)
  abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
  clr <- colorRampPalette(c("blue", "red"))(length(n.min))
  points(n.min, pBEn, pch=16, cex=0.8, col=clr)
  points(n.min[1], pBEn[1], col=clr[1], pch=16, cex=1.25)
  points(n.min[length(n.min)], pBEn[length(n.min)], col=clr[length(n.min)],
    pch=16, cex=1.25)
  text(max(n.min), 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("n = ",
    min(n.min), " (", signif(min(pBEn), 4), "%)"), cex=0.9, pos=4)
} else {
  plot(1, type="n", axes=F, xlab="", ylab="")
  legend("topleft", legend=c(paste0("Note:\nAnalysis of drop-outs",
    "\nnot implemented for\n", des, " design.")), bty="n", cex=1)
}
screen(4)
plot(1, type="n", axes=F, xlab="", ylab="")
legend("topleft", legend=c(paste0("design: ", des), "assumed values:",
  paste0("  CV = ",  round(100*CV, 2), "%, GMR = ", round(GMR, 4)), "power:",
  paste0("  target = ",  round(100*pwr.target, 2), "%"),
  paste0("  expected = ",  signif(100*pwr.est, 4), "% (n = ", n.est, ")"),
  paste0("  minimum acceptable = ", round(100*pwr.min, 2), "%")), bty="n", cex=1)
close.screen(all=T)
par(op) # restore the graphical parameters
cat(paste0("\ndesign: ", des, "\nCV=", 100*CV, "%, GMR=", 100*GMR, "%, n=",
  n.est, ", expected power=", signif(100*pwr.est, 4), "% (>target of ",
  100*pwr.target, "%)\n\n"))


d_labes (Berlin, Germany) – 2014-08-12 11:12 – Posting # 13365 – @ Helmut

Some Nitpicking

Dear Helmut!


R-code (developed in R 3.1.6 64-bit, PowerTOST 1.1-12):


AFAIK R3.1.6 is not out yet. Or are you using a development version?

For the rest of the post: :clap:

BTW: Today's scientific quote (Ronald G. Marks) link does not function.

Regards,

Detlew

Helmut (Vienna, Austria) – 2014-08-12 12:49 – Posting # 13367 – @ d_labes

Some Nitpicking

Dear Detlew!

❝ AFAIK R3.1.6 is not out yet. Or are you using a development version?


  1. Correct.
  2. Not at all. I have two icons on my desktop linking to the 32- and 64-bit versions of R 3.1.1. The second one is called “R 3.1.1 64bit”. Probably I’m suffering from a combination of strabismus and an extended blind spot. Should consult with my ophthalmologist. :cool:

❝ For the rest of the post: :clap:


THX. In the meantime I developed code covering all designs; I will post it later. I also played around with ABEL and RSABE – looks funny.

❝ BTW: Todays scientific quote (Ronald G. Marks) link does not function.


Oops. Corrected.


Helmut (Vienna, Austria) – 2014-08-12 19:21 – Posting # 13370 – @ d_labes

R-code for all ABE designs

Dear Detlew and all,

below is the code for all designs covered in PowerTOST, for conventional (unscaled) ABE. You have to paste the functions only once into the R console and call Sens() with the values defined at the end, or call it directly, e.g.,
Sens(CV=0.24, GMR=1.07, pwr.target=0.8, pwr.min=0.7, des="3x6x3")

[image]

Note that the code tries to keep the degree of imbalance between sequences as small as possible. In real life other combinations may occur. As already noted by Detlew in this post, the more imbalanced a study is, the lower the power.*
Example (3 drop-outs from planned 18 ⇒ 15)
power2.TOST(CV=0.2, theta0=0.95, n=c(3, 3, 3, 2, 2, 2), design="3x6x3")
[1] 0.7089454
power2.TOST(CV=0.2, theta0=0.95, n=c(3, 3, 3, 3, 2, 1), design="3x6x3")
[1] 0.6464662



require(PowerTOST)
##################################################################
# Don't modify the functions unless you know what you are doing! #
# Input area at the end of the code.                             #
##################################################################
Sens <- function(CV, GMR, pwr.target, pwr.min, des) {
  pwrCV  <- function(x, ...) {
              power.TOST(CV=x, n=n.est, theta0=GMR,
              design=des)-pwr.min }
  pwrGMR <- function(x, ...) {
              power.TOST(CV=CV, n=n.est, theta0=x,
              design=des)-pwr.min }
  designs <- c(
    "2x2", "2x2x2", "2x2x3", "2x2x4",
    "3x3", "2x3x3",
    "4x4", "2x4x4", "2x4x2",
    "3x6x3",
    "paired", "parallel")
  res     <- sampleN.TOST(CV=CV, theta0=GMR, targetpower=pwr.target, design=des,
    print=F, details=F)
  n.est   <- res[1, "Sample size"   ]
  pwr.est <- res[1, "Achieved power"]
  seg     <- 50; segs <- seq(seg-1)
  ########################################
  # max. CV for minimum acceptable power #
  ########################################
  CV.max <- uniroot(pwrCV,  c(CV, 10*CV), tol=1e-7, n=n.est, theta0=GMR)$root
  CVs    <- seq(CV, CV.max, length.out=seg)
  pBECV  <- NULL
  for(j in 1:length(CVs)) {
    pBECV <- c(pBECV, 100*power.TOST(CV=CVs[j], n=n.est, theta0=GMR, design=des))
  }
  ######################################
  # min. GMR for minimum accept. power #
  ######################################
  if(GMR <= 1) int <- c(0.8, 1) else int <- c(GMR, 1.25)
  GMR.min <- uniroot(pwrGMR, int, tol=1e-7, n=n.est, CV=CV, design=des)$root
  GMRs    <- seq(GMR.min, GMR, length.out=seg)
  pBEGMR  <- NULL
  for(j in 1:length(GMRs)) {
    pBEGMR <- c(pBEGMR, 100*power.TOST(CV=CV, n=n.est, theta0=GMRs[j], design=des))
  }
  ####################################
  # min. n for minimum accept. power #
  # workaround, since uniroot() does #
  # not accept two vectors as limits #
  ####################################
  Ns <- seq(n.est, 12, by=-1) # don't drop below 12 subjects
  j <- 0
  pwrN <- pwr.est
  n.min <- NULL; pBEn <- NULL
  if(des == "paired") {
    while(pwrN >= pwr.min) {
      j <- j+1
      pwrN <- power.TOST(CV=CV, n=Ns[j], theta0=GMR, design=des)
      if(pwrN >= pwr.min) {
        n.min <- c(n.min, Ns[j])
        pBEn <- c(pBEn, 100*pwrN)
      } else {
        break
      }
    }
  }
  if(des == "3x3" | des == "2x3x3"){ # three sequences
    while(pwrN >= pwr.min) {
      j <- j+1
      n1 <- floor(Ns[j]/3)
      n2 <- floor(Ns[j]*2/3-n1)
      n3 <- Ns[j]-(n1+n2)
      n <- c(n1, n2, n3)
      pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
      if(pwrN >= pwr.min) {
        n.min <- c(n.min, sum(n))
        pBEn <- c(pBEn, 100*pwrN)
      } else {
        break
      }
    }
  }
  if(des == "4x4" | des == "2x4x4" | des == "2x4x2"){ # four seq's
    while(pwrN >= pwr.min) {
      j <- j+1
      n1 <- floor(Ns[j]/4)
      n2 <- floor(Ns[j]*2/4-n1)
      n3 <- floor(Ns[j]*3/4-(n1+n2))
      n4 <- Ns[j]-(n1+n2+n3)
      n <- c(n1, n2, n3, n4)
      pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
      if(pwrN >= pwr.min) {
        n.min <- c(n.min, sum(n))
        pBEn <- c(pBEn, 100*pwrN)
      } else {
        break
      }
    }
  }
  if(des == "3x6x3"){
    while(pwrN >= pwr.min) { # six seq's
      j <- j+1
      n1 <- floor(Ns[j]/6)
      n2 <- floor(Ns[j]*2/6-n1)
      n3 <- floor(Ns[j]*3/6-(n1+n2))
      n4 <- floor(Ns[j]*4/6-(n1+n2+n3))
      n5 <- floor(Ns[j]*5/6-(n1+n2+n3+n4))
      n6 <- Ns[j]-(n1+n2+n3+n4+n5)
      n <- c(n1, n2, n3, n4, n5, n6)
      pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
      if(pwrN >= pwr.min) {
        n.min <- c(n.min, sum(n))
        pBEn <- c(pBEn, 100*pwrN)
      } else {
        break
      }
    }
  } else { # parallel and all two sequence designs
    while(pwrN >= pwr.min) {
      j <- j+1
      n1 <- floor(Ns[j]/2)
      n2 <- Ns[j]-n1
      n <- c(n1, n2)
      pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
      if(pwrN >= pwr.min) {
        n.min <- c(n.min, sum(n))
        pBEn <- c(pBEn, 100*pwrN)
      } else {
        break
      }
    }
  }
  op <- par(no.readonly=TRUE) # save current settings for later restoration
  par(mar=c(c(4, 4, 2.5, 0.5))+0.1) # default for B, L, T, R: c(5, 4, 4, 2) + 0.1
  clr <- colorRampPalette(c("blue", "red"))(seg)
  split.screen(c(2, 2))
  screen(1) ### Sensitivity of CV (GMR and n constant) ###
  plot(100*CVs, pBECV, type="n", main=paste0("Higher variability\n",
    "GMR = ", GMR, ", n = ", n.est), lwd=2, xlab="CV%", ylab="% power", las=1,
    cex.main=1)
  abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
  segments(100*CVs[segs], pBECV[segs], 100*CVs[segs+1], pBECV[segs+1], lwd=2, col=clr[segs])
  points(100*CVs[1], pBECV[1], col=clr[1], pch=16, cex=1.25)
  points(100*CVs[seg], pBECV[seg], col=clr[seg], pch=16, cex=1.25)
  text(100*CV, 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("CV = ",
    signif(100*CV.max, 4), "% (", round(100*pwr.min, 2), "%)"), cex=0.9, pos=4)
  screen(2) ### Sensitivity of GMR (CV and n constant) ###
  clr <- rev(clr)
  plot(GMRs, pBEGMR, type="n", main=paste0("Larger deviation from 1\n",
    "CV = ", 100*CV, "%, n = ", n.est), lwd=2, xlim=c(GMR, GMR.min),
    xlab="GMR", ylab="% power", las=1, cex.main=1)
  abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
  segments(GMRs[segs], pBEGMR[segs], GMRs[segs+1], pBEGMR[segs+1], lwd=2, col=clr[segs])
  points(GMRs[1], pBEGMR[1], col=clr[1], pch=16, cex=1.25)
  points(GMRs[seg], pBEGMR[seg], col=clr[seg], pch=16, cex=1.25)
  text(GMR, 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("GMR = ",
    signif(GMR.min, 4), " (", round(100*pwr.min, 2), "%)"), cex=0.9, pos=4)
  screen(3) ### Sensitivity of n (GMR and CV constant) ###
  plot(n.min, pBEn, type="n", main=paste0("Drop-outs\n",
    "GMR = ", GMR, ", CV = ", 100*CV, "%"), lwd=2, xlim=c(max(n.min), min(n.min)),
    ylim=c(100*pwr.min, 100*pwr.est), xlab="n", ylab="% power", las=1, cex.main=1)
  abline(h=c(100*pwr.target, 80, 100*pwr.min), lty=3, col="grey50")
  clr <- colorRampPalette(c("blue", "red"))(length(n.min))
  points(n.min, pBEn, pch=16, cex=0.8, col=clr)
  points(n.min[1], pBEn[1], col=clr[1], pch=16, cex=1.25)
  points(n.min[length(n.min)], pBEn[length(n.min)], col=clr[length(n.min)],
    pch=16, cex=1.25)
  text(max(n.min), 100*(pwr.min+(pwr.est-pwr.min)*0.1), labels=paste0("n = ",
    min(n.min), " (", signif(min(pBEn), 4), "%)"), cex=0.9, pos=4)
  screen(4) ### Some basic information ###
  plot(1, type="n", axes=F, xlab="", ylab="")
  legend("topleft", legend=c(paste0(des, " design", "; assumed:"),
    sprintf("  %s %1.0f%%%s %5.4f", "CV =", 100*CV, ", GMR =", GMR), "power:",
    sprintf("  %s %2.0f%%", "target =", 100*pwr.target),
    sprintf("  %s %5.2f%% %s %i%s", "expected =", 100*pwr.est, "(n =", n.est, ")"),
    sprintf("  %s %2.0f%%", "minimum acceptable =", 100*pwr.min), "acceptable rel. deviations:",
    sprintf("  %s %+5.1f%%", "CV =", 100*(CV.max-CV)/CV),
    sprintf("  %s %+5.2f%%", "GMR =", 100*(GMR.min-GMR)/GMR),
    sprintf("  %s %+5.1f%%", "n =", 100*(min(n.min)-n.est)/n.est)), bty="n", cex=1)
  close.screen(all=T)
  par(op) # restore the graphical parameters
  cat(sprintf("%s %s%1.0f%%%s%5.4f%s%i%s%5.2f%% %s %2.0f%%%s", des,
    "design:\nCV=", 100*CV, ", GMR=", GMR, ", n=", n.est, ", expected power=",
    100*pwr.est, "(>target of", 100*pwr.target, ")\n\n"))
}
#########################################################
# You need to paste the functions above only once to R. #
# Define values below (as fractions, not percent):      #
#########################################################
CV     <- 0.20  # intra-subject for cross-overs & paired, total for parallel.
GMR    <- 0.95  # [sic]
target <- 0.90  # The power aimed at in designing the study.
min    <- 0.70  # The lowest power which is acceptable for the actual study.
des    <- "2x2" # See on top of the function for the implemented strings.
                # Call known.designs() to get information from PowerTOST
                # (dfs, design constants, ...)
######################################################
# Once you have defined values, call the function... #
######################################################
Sens(CV, GMR, target, min, des)
########################################
# ... or call it directly with values: #
########################################
# Sens(CV=0.24, GMR=1.07, pwr.target=0.8, pwr.min=0.7, des="3x6x3")


@R-Freaks: The construction of imbalanced sequences is both lengthy and clumsy. Any suggestions are welcome. ;-)

Too busy to develop the code for RSABE and ABEL at the moment.


  • If a sequence is completely lost, power2.TOST() – correctly? – gives power=0. See these two examples:
    power2.TOST(CV=0.2, n=c(1, 2, 3, 3, 3, 3), design="3x6x3")
    power2.TOST(CV=0.2, n=c(0, 3, 3, 3, 3, 3), design="3x6x3")


d_labes (Berlin, Germany) – 2014-08-13 11:30 – Posting # 13372 – @ Helmut

Sensitivity analysis for all ABE designs

Dear Helmut!

❝ below is the code for all designs covered in PowerTOST, for conventional (unscaled) ABE. You have to paste the functions only once into the R console and call Sens() with the values defined at the end, or call it directly, e.g.,

Sens(CV=0.24, GMR=1.07, pwr.target=0.8, pwr.min=0.7, des="3x6x3")


THX for that code. Maybe there is a better solution than pasting the function into the R-console: Incorporating it into PowerTOST :cool:. May need some additional effort for checking the input and some effort for documenting.
May need also some thinking about the return value(s) of the function.

BTW: What did you mean by your sentence in your post above "This little exercise is not a substitute for the sensitivity analysis which should be performed in study planning ..."? Sens() not useful in real life?

❝ Note that the code tries to keep the degree of imbalance between sequences as small as possible. In real life other combinations may occur. As already noted by Detlew in this post, the more imbalanced a study is, the lower the power.*


This is surely a minor shortcoming of your code. But if we assume drop-outs occur at random, it's a reasonable attempt, I think, in the planning phase where we don't have 'real life' data. And at least I have no "better" idea.

❝ ...

❝ @R-Freaks: The construction of imbalanced sequences is both lengthy and clumsy. Any suggestions are welcome. ;-)


Will try it if time allows :-D.
Idea: use number of sequences = steps from known.designs() and make a general solution of the n's in sequence groups - looping over sequences.

  • If a sequence is completely lost, power2.TOST() – correctly? – gives power=0. See these two examples:

    power2.TOST(CV=0.2, n=c(1, 2, 3, 3, 3, 3), design="3x6x3")

    ❝ power2.TOST(CV=0.2, n=c(0, 3, 3, 3, 3, 3), design="3x6x3")


Is this truly correct? I don't think so.
Simply the design changes to another one. Maybe also to one not implemented.
In your example it changes to "3x5x3". Not in known.designs()!
This can't be handled properly in PowerTOST at the moment. No df's, no design constant ...
Eventually power2.TOST() should throw an error/warning if it encounters some n's = zero.

Regards,

Detlew

Helmut (Vienna, Austria) – 2014-08-13 16:49 – Posting # 13374 – @ d_labes

Sensitivity analysis for all ABE designs

Dear Detlew!

❝ Maybe there is a better solution than pasting the function into the R-console: Incorporating it into PowerTOST :cool:.


Up to you. My primary aim was an educational one – such questions regularly pop up at my workshops and end up in eyes wide open in disbelief. ;-)

❝ May need some additional effort for checking the input and some effort for documenting.

❝ May need also some thinking about the return value(s) of the function.


Yessir. A tabular output would be nice. The plots are still less than ideal.
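
For instance (just a sketch, re-using the objects already computed within Sens()) the function could return one row per varied parameter, mirroring the tables of the opening post:

  # hypothetical return value of Sens()
  res.tab <- data.frame(parameter  = c("CV", "GMR", "n"),
                        value      = c(CV.max, GMR.min, min(n.min)),
                        rel.change = 100*c((CV.max-CV)/CV, (GMR.min-GMR)/GMR,
                                           (min(n.min)-n.est)/n.est),
                        min.power  = c(100*pwr.min, 100*pwr.min, min(pBEn)))
  return(res.tab)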

❝ BTW: What did you mean by your sentence in your post above "This little exercise is not a substitute for the sensitivity analysis which should be performed in study planning ..."? Sens() not useful in real life?


Only as a “first look”, since one of the parameters is varied in each case and the respective others are still “carved in stone”. My understanding of a sensitivity analysis is to explore their combinations, i.e., playing games like:

“What happens to power if we have x drop-outs, the CV increases to y% and the GMR shifts by z?”

In my protocols I usually come up with a couple of “plausible” ones.
Let’s think about my first example, planning for 90% and aiming at a minimum acceptable power for each parameter of 80% this time. That would translate into CV 20⇒24.25%, GMR 0.95⇒0.9208, and n 26⇒19. Although the worst case is possible, it is not realistic.
power2.TOST(CV=0.2425, theta0=0.9208, n=c(10, 9))
[1] 0.520466


I don’t think that there is an easy (automatic) solution. One could set a lower limit for the power of the individual parameters (say 70%) and with a shotgun simulation explore combinations which prevent the power from dropping below 80%. Such a simulation must also allow the parameters to vary towards “better” values. Tricky.
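
Something along these lines, perhaps – a rough sketch only; the ranges, the uniform sampling, and the way drop-outs are split between the sequences are arbitrary assumptions for illustration:

require(PowerTOST)
set.seed(42)
nsims   <- 1e4
CV.sim  <- runif(nsims, 0.18, 0.25)            # the CV may also improve
GMR.sim <- runif(nsims, 0.92, 1/0.92)          # shifts in either direction
n.sim   <- sample(19:26, nsims, replace=TRUE)  # up to 7 drop-outs from n = 26
pwr     <- mapply(function(cv, gmr, n) {
             n1 <- floor(n/2)
             power2.TOST(CV=cv, theta0=gmr, n=c(n1, n-n1), design="2x2")
           }, CV.sim, GMR.sim, n.sim)
cat(sprintf("%.1f%% of the simulated combinations keep power >= 80%%\n",
  100*mean(pwr >= 0.80)))
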
BTW, I knew that the GMR is such a “Prinzessin auf der Erbse” (princess on the pea), but I was surprised how sensitive (compared to the others) she is. Maybe we could consider throwing the upper CL of the CV into the waste-bin and working with the CI of the GMR (with a high α = the producer’s risk here)?
In my first example we get:

2x2 design:
CV=20%, GMR=0.9500, n=26, expected power=91.76% (>target of 90%)

The GMR could drop from 0.95 to 0.9044 to give 70% power. Quick and dirty:

require(PowerTOST)
alpha <- 0.2
CV    <- 0.2
GMR   <- 0.95
n     <- 26
des   <- "2x2"
CI    <- as.numeric(CI.BE(alpha=alpha, CV=CV, pe=GMR, n=n, design=des))
delta <- abs(CI-1)
lim   <- c("Lower", "Upper")
ifelse(delta[1] > delta[2], j <- 1, j <-2)
cat(sprintf("%s %s%5.4f%s %5.4f%s", lim[j], "CL of GMR=",
  GMR, ":", CI[j], "\n"))

Lower CL of GMR=0.9500: 0.9063


Interesting.

❝ ❝ Note that the code tries to keep the degree of imbalance between sequences as small as possible. In real life :blahblah:


❝ This is surely a minor shortcoming of your code. But if we assume drop-outs occur at random, it's a reasonable attempt, I think, in the planning phase where we don't have 'real life' data. And at least I have no "better" idea.


Agree.

❝ ❝ @R-Freaks: The construction of imbalanced sequences is both lengthy and clumsy. Any suggestions are welcome. ;-)


❝ Will try it if time allows :-D.

❝ Idea: use number of sequences = steps from known.designs() and make a general solution of the n's in sequence groups - looping over sequences.


Yes, I thought along these lines as well. No spare time at the moment… On the other hand, if the code is encapsulated in your famous package, who cares how elegant it is? Bringing the execution time down by a fraction of a second does not really matter.

❝ ❝ * If a sequence is completely lost, power2.TOST() – correctly? – gives power=0. […]

❝ Is this truly correct? I don't think so.


Me neither. ;-)

❝ Simply the design changes to another one […] Not in known.designs()!

❝ Eventually power2.TOST() should throw an error/warning if it encounters some n's = zero.


Would make sense, IMHO.


d_labes (Berlin, Germany) – 2014-08-13 18:32 – Posting # 13375 – @ Helmut

R-code shortening

Dear Helmut,

here comes my programming gem :cool: for the sensitivity with respect to sample size: Replace all the code for the different designs by

  ####################################
  # min. n for minimum accept. power #
  # workaround, since uniroot() does #
  # not accept two vectors as limits #
  ####################################

  Ns <- seq(n.est, 12, by=-1) # don't drop below 12 subjects
  j <- 0
  pwrN <- pwr.est
  n.min <- NULL; pBEn <- NULL
  # get # of sequences f.i. via this clumsy construct
  seqs <- known.designs()[known.designs()$design==des,"steps"]
  n    <- vector("numeric", length=seqs)
  ni   <- 1:seqs
  # may it be that j grows greater than length(Ns)?
  # paranoia

  nNs  <- length(Ns)
  while(pwrN >= pwr.min & j<nNs){
    j <- j+1
    # distribute total Ns to the sequence groups
    # n[-seqs] is n[1:(seqs-1)]

    n[-seqs] <- diff(floor(Ns[j]*ni/seqs))
    n[seqs]  <- Ns[j] -sum(n[-seqs])
    # can use always power2.TOST() if PowerTOST V1.1-13 is used
    # else an if() construct is necessary switching to power.TOST()
    # in case of des=="paired"

    pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
    if(pwrN >= pwr.min) {
      # stepwise growing vectors is R-Inferno Circle 2
      # but here we don't know the dimension in advance. Right?

      n.min <- c(n.min, sum(n))
      pBEn  <- c(pBEn, 100*pwrN)
    } else {
      break
    }
  }
  # plot code follows
  ...

Regards,

Detlew

Helmut (Vienna, Austria) – 2014-08-13 18:42 – Posting # 13376 – @ d_labes

Suggestions / Sneak Preview

Dear Detlew,

that’s really a gem! :thumb up:
I wasn’t aware that 1.1-13 is already on CRAN. See also some remarks I just added to my previous post about a CL of the GMR. ;-)


Edit: Some additions…

      # stepwise growing vectors is R-Inferno Circle 2

      # but here we don't know the dimension in advance. Right?


Unavoidable.

Sometimes people don’t know which version they (or their IT guys) installed. Maybe it’s safer to replace

    # can use always power2.TOST() if PowerTOST V1.1-13 is used
    # else an if() construct is necessary switching to power.TOST()
    # in case of des=="paired"

    pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)


by

    if(packageVersion("PowerTOST") < "1.1.13" & des == "paired") {
      pwrN <- power.TOST(CV=CV, n=Ns[j], theta0=GMR, design=des)
    } else {
      pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
    }


Most input errors are already trapped by PowerTOST. At the beginning of Sens() add:
  if(pwr.min < 0.05) stop("Minimum acceptable power must not be <0.05!")
… otherwise uniroot() throws an error.
Consider adding ... to transfer optional parameters (theta1,…).
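
A minimal (hypothetical, untested) sketch of the '...' idea – the extra arguments are simply forwarded to the PowerTOST calls; note that uniroot() passes its own extra arguments via '...' as well, so nothing may be forwarded twice:

Sens <- function(CV, GMR, pwr.target, pwr.min, des, ...) { # skeleton only
  res   <- sampleN.TOST(CV=CV, theta0=GMR, targetpower=pwr.target,
                        design=des, print=F, details=F, ...)
  n.est <- res[1, "Sample size"]
  # hand the same '...' on to every power.TOST() / power2.TOST() call, e.g.
  # power.TOST(CV=x, n=n.est, theta0=GMR, design=des, ...)
  # but do not forward it again from the helpers called by uniroot()
}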

Tested:
R 3.0.2 (2013-09-25), PowerTOST 1.1-12 (2014-07-02) on 32bit Vista Business SP2
R 3.1.1 (2014-07-10), PowerTOST 1.1-13 (2014-08-12) on 64bit Win7 Pro SP1


Sneak Preview

[image]

[image]

To do: Explore the loss of power caused by lower variability. Very important for scaling. In the plots I have set the lower limit to 50% of the expected CV.
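
Just to sketch that to-do item (my assumptions for illustration only: ABEL in a 2×2×4 replicate design, CVwR 40%, n 24, GMR 0.90, power.scABEL() with its defaults):

require(PowerTOST)
CVwR <- 0.40                                # assumed CVwR
Ntot <- 24                                  # assumed sample size
GMR0 <- 0.90
CVs  <- seq(CVwR, 0.5*CVwR, length.out=25)  # down to 50% of the assumed CV
pwr  <- sapply(CVs, function(x)
          power.scABEL(CV=x, n=Ntot, theta0=GMR0, design="2x2x4"))
plot(100*CVs, 100*pwr, type="b", las=1, xlab="CV%", ylab="% power",
  main="ABEL: CV lower than assumed")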


d_labes (Berlin, Germany) – 2014-08-15 11:01 – Posting # 13382 – @ Helmut

Suggestions / Sneak Preview

Dear Helmut,

❝ Sometimes people don’t know which version they (or their IT guys) installed. Maybe it’s safer to replace

❝ ...

❝ by

    if(packageVersion("PowerTOST") < "1.1.13" & des == "paired") {

❝       pwrN <- power.TOST(CV=CV, n=Ns[j], theta0=GMR, design=des)
❝     } else {
❝       pwrN <- power2.TOST(CV=CV, n=n, theta0=GMR, design=des)
❝     }


This is only necessary for a function definition outside the package. Within the next release of PowerTOST, which will introduce the sensitivity analysis, the correct version of power2.TOST() is of course available.

❝ Most input errors are already trapped by PowerTOST. At the beginning of Sens() add:

  if(pwr.min < 0.05) stop("Minimum acceptable power must not be <0.05!")

❝ … otherwise uniroot() throws an error.


Done. Do you know the real reason behind that error?

❝ Consider adding ... to transfer optional parameters (theta1,…).


Very good idea :thumb up:. But I ran into trouble doing so, with some inexplicable (for me) errors. Seems I have to study how to work with the '...' argument.


Sneak Preview


Funny :cool:.
Especially the deviation for the CV!
It seems we don't have to care about any deviation here. At first look surprising.


Will send you the code of the package version of your function Sens() soon.
Some adaptation of the argument names seems necessary to me to be in accordance with the naming within PowerTOST up to now (f.i. pwr.target=targetpower, pwr.min=minpower, GMR=theta0 ...). Although I must confess that I find your naming more appealing.

In the output you name the achieved power 'expected' power. But this term is already taken by the output of exppower.TOST().

I would suggest the function name to be something like psa.ABE() (power sensitivity analysis for ABE) or psa.TOST() to account for the upcoming things seen in your Sneak Preview.

Regards,

Detlew

Helmut (Vienna, Austria) – 2014-08-15 14:02 – Posting # 13386 – @ d_labes

Suggestions / Sneak Preview

Dear Detlew,

❝ ❝ Maybe it’s safer to replace […] :blahblah:

❝ This is only necessary for a function definition outside package. Within the next release of PowerTOST, which will introduce the sensitivity analysis, the correct version of power2.TOST() is of course available.


Yes, sure.

❝ ❝ Most input errors are already trapped by PowerTOST. […] otherwise uniroot() throws an error.

❝ Done. Do you know the real reason behind that error?


0.05 works and 0.049 doesn’t. Even looking at the output vector and the plots, I have not the slightest idea why.

❝ ❝ Consider adding ... […]

❝ But I ran into trouble doing so, with some inexplicable (for me) errors. Seems I have to study how to work with the '...' argument.


I’ve tried it myself before. I was hoping that you know better than I do. Not important at the moment.

❝ ❝ Sneak Preview


❝ Funny :cool:.

❝ Especially the deviation for the CV!

❝ It seems we don't have to care about any deviation here. At first look surprising.


That’s really nice for showing the idea behind scaling. The CV is “dealt with” and the other parameters behave as in TOST.

❝ Some adaptation of the argument names seems necessary to me to be in accordance with the naming within PowerTOST […]



Sure.

❝ I would suggest the function name to be something like psa.ABE() (power sensitivity analysis for ABE) or psa.TOST() to account for the upcoming things seen in your Sneak Preview.


I would suggest avoiding the term “sensitivity analysis”. Looking at the plots, novices might get the false impression that the effects may occur simultaneously: “CV can go up to x%, the GMR to y, and with z drop-outs I still have 80% power”. We should make that very clear in the man-page. My suggestion is pa.TOST() (power analysis for TOST). In my code for scaling I have included both RSABE and ABEL; therefore, pascABEl() and paRSABE() – which would be in line with PowerTOST’s naming conventions – are not an option. It should be easy to include FDA’s NTID-scaling as well. Maybe something like paScaledBE()?


d_labes (Berlin, Germany) – 2014-08-15 13:27 – Posting # 13383 – @ Helmut

Mehl returned!

Dear Helmut!

Just tried to mehl you. Answer:
"helmut.schuetz[image]bebac.at":
SMTP error from remote server in greeting:
host: mx.sil.at:
rejected because 212.227.15.15 is in a black list at zen.spamhaus.org

Any fall-back?


Forget it! The second try was a success.

Regards,

Detlew

Helmut (Vienna, Austria) – 2014-08-15 13:42 – Posting # 13384 – @ d_labes

Mehl returned!

Dear Detlew!

❝ SMTP error from remote server in greeting:

❝ host: mx.sil.at:

❝ rejected because 212.227.15.15 is in a black list at zen.spamhaus.org


Fantastico! I cannot confirm that. I received one at 11:33 from your gmx account. Did you send the one which bounced back from your company’s address?

❝ Any fall-back?


helmut.schuetz[image]chello.at

I will send you my RSABE/ABEL-code soon. I will post some comments on your previous post as well.
