Function CVpooled (lengthy answer) [🇷 for BE/BA]

posted by Helmut – Vienna, Austria, 2018-01-21 18:21 – Posting: # 18239

Hi Elena,

please RTFM: after attaching the package, type help(CVpooled) or ?CVpooled in the R console or in RStudio. Alternatively, read the man-page online.
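In the calls below I use CVsCmax, the data frame from your earlier post. If anybody else wants to reproduce the examples, a minimal reconstruction (my assumption, based on the values printed further down) would be:

CVsCmax <- data.frame(PKmetric = "Cmax",
                      CV       = c(0.2617, 0.1216, 0.1426, 0.1480, 0.1476, 0.2114),
                      n        = c(23, 24, 24, 27, 27, 18),
                      design   = c("2x2", "2x2", "2x2", "3x3", "3x3", "2x2"),
                      source   = c("study 1", "study 2", "study 3",
                                   "study 4", "study 4a", "study 5"),
                      stringsAsFactors = FALSE)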

CVpooled(CVsCmax, alpha=0.2, logscale=TRUE)


Calculates the CV pooled from your six studies (assuming that the individual CVs were obtained from log-transformed data). Furthermore, the CVs are weighted by the studies’ degrees of freedom. The dfs depend on how the studies were evaluated; here the default robust=FALSE is applied (more about that later). Hence, we get:

0.1677 with 181 degrees of freedom

In order to get the 1–α upper confidence limit of the pooled CV, you would have to use the print method and the additional argument verbose=TRUE:
print(CVpooled(CVsCmax, alpha=0.2), verbose=TRUE)
Pooled CV = 0.1677 with 181 degrees of freedom
Upper 80% confidence limit of CV = 0.1758


print(CVpooled(CVsCmax, alpha=0.2, robust=TRUE), digits=6, verbose=TRUE)

❝ Pooled CV = 0.175054 with 129 degrees of freedom (robust df's)

❝ Upper 80% confidence limit of CV = 0.185309


❝ Why are degrees of freedom so huge?


You are pooling from six studies, which gives you a lot of information. In each of them the degrees of freedom are n–2 for the 2×2 studies and 2n–4 for the 3×3 studies. Type known.designs() to get an idea. We can hack the code of the function to look behind the curtains:
CVs.1 <- CVs.2 <- CVsCmax       # new data frames
CVs.1$obs <- CVs.2$obs <- 0     # column for observations
CVs.1$df  <- CVs.2$df <- 0      # column for the dfs
CVs.1$robust <- FALSE           # column for the model: fixed
CVs.2$robust <- TRUE            #                       mixed
CVs.1$w.var <- CVs.2$w.var <- 0 # column for weighted variances
for (j in seq_along(CVsCmax$n)) {
  dno <- PowerTOST:::.design.no(CVsCmax$design[j])
  if (!is.na(dno)) {
    dprop <- PowerTOST:::.design.props(dno)
    n <- CVsCmax$n[j]
    per <- as.integer(substr(CVsCmax$design[j],
                             (nchar(CVsCmax$design[j])+1)-1,
                             nchar(CVsCmax$design[j])))
    CVs.2$obs[j] <- CVs.1$obs[j] <- per*n
    CVs.1$df[j] <- eval(parse(text=dprop$df, srcfile=NULL))
    CVs.2$df[j] <- eval(parse(text=dprop$df2, srcfile=NULL))
    CVs.1$w.var[j] <- CV2mse(CVs.1$CV[j])*CVs.1$df[j]
    CVs.2$w.var[j] <- CV2mse(CVs.2$CV[j])*CVs.2$df[j]
  }
}
print(CVs.1, row.names=FALSE)
print(CVs.2, row.names=FALSE)

 PKmetric     CV  n design   source obs df robust     w.var
     Cmax 0.2617 23    2x2  study 1  46 21  FALSE 1.3911140
     Cmax 0.1216 24    2x2  study 2  48 22  FALSE 0.3229227
     Cmax 0.1426 24    2x2  study 3  48 22  FALSE 0.4428769
     Cmax 0.1480 27    3x3  study 4  81 50  FALSE 1.0833777
     Cmax 0.1476 27    3x3 study 4a  81 50  FALSE 1.0775921
     Cmax 0.2114 18    2x2  study 5  36 16  FALSE 0.6995224

 PKmetric     CV  n design   source obs df robust     w.var
     Cmax 0.2617 23    2x2  study 1  46 21   TRUE 1.3911140
     Cmax 0.1216 24    2x2  study 2  48 22   TRUE 0.3229227
     Cmax 0.1426 24    2x2  study 3  48 22   TRUE 0.4428769
     Cmax 0.1480 27    3x3  study 4  81 24   TRUE 0.5200213
     Cmax 0.1476 27    3x3 study 4a  81 24   TRUE 0.5172442
     Cmax 0.2114 18    2x2  study 5  36 16   TRUE 0.6995224


Check the dfs, which are used to calculate the confidence limit (based on the χ² distribution):
cat("Sum of dfs (fixed effects model):", sum(CVs.1$df),
    "\nSum of dfs (mixed effects model):", sum(CVs.2$df), "\n")

Sum of dfs (fixed effects model): 181
Sum of dfs (mixed effects model): 129


The hard way: Calculate pooled weighted variances and back-transform to pooled CVs:
cat("Pooled CVs:",
    "\nfixed effects model (robust=FALSE):",
    mse2CV(sum(CVs.1$w.var)/sum(CVs.1$df)),
    "\nmixed effects model (robust=TRUE) :",
    mse2CV(sum(CVs.2$w.var)/sum(CVs.2$df)), "\n")

Pooled CVs:
fixed effects model (robust=FALSE): 0.1676552
mixed effects model (robust=TRUE) : 0.1750539
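As a cross-check (an addition on my side, using the same objects as above), the 80% upper confidence limits reported by the print method can be reproduced from the weighted variances and the χ² quantile:

cat("Upper 80% CL of CV (fixed effects model):",
    signif(se2CV(sqrt(sum(CVs.1$w.var)/qchisq(0.2, sum(CVs.1$df)))), 4),
    "\nUpper 80% CL of CV (mixed effects model):",
    signif(se2CV(sqrt(sum(CVs.2$w.var)/qchisq(0.2, sum(CVs.2$df)))), 4), "\n")

This should give the 0.1758 and 0.1853 shown with verbose=TRUE.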


For the background of the calculation see this presentation (slides 32–35).

❝ And why did I get two different results 0.1677 and 0.1750?


In your second call

❝ print(CVpooled(CVsCmax, alpha=0.2, robust=TRUE), digits=6, verbose=TRUE)


you used the argument robust=TRUE (whereas in the first one the default robust=FALSE was implicitly used). When the CVs of the studies were calculated by ANOVA (or SAS’ Proc GLM), the dfs given in column df of known.designs() are used. Only if the CVs were calculated by a mixed effects model (for the FDA, e.g., by SAS’ Proc Mixed) are the values of column df2 used. Note that for 2×2 designs the dfs are identical, but for higher-order crossovers they differ (e.g., in your 3×3 the robust dfs are n–3). Since in the latter case the total dfs are lower (129 vs. 181), less weight is given to the 3×3 studies with their comparably low CVs (pooled CV 0.1751 vs. 0.1677) and the confidence interval is wider (upper CL 0.1853 vs. 0.1758).
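If you want to see where these dfs come from (a small addition of mine), filter known.designs() for the two designs used here; column df holds the conventional dfs and column df2 the robust ones:

des <- known.designs()
print(des[des$design %in% c("2x2", "3x3"), c("design", "df", "df2")],
      row.names=FALSE)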

❝ Is it acceptable and can I use any result to calculate sample size in PowerTOST?


No (see below).

❝ What result must I choose (0.1677, 0.1750, 0.1853) for calculating sample size?


The pooled ones (0.1677 or 0.1751) assume that the CV is “carved in stone” (i.e., is the “true” value). I don’t recommend that: if the actual CV turns out to be larger than assumed, the study will be underpowered. If it turns out to be smaller, you might have spent more money than necessary, but power increases and so does the chance to demonstrate BE. Which alpha you use is up to you; 0.2 is my suggestion (and the function’s default).1,2,3,4 Explore increasing levels of alpha:
alpha  <- c(0.05, 0.2, 0.25, 0.5)
digits <- 4
res <- data.frame(alpha=alpha, rob.1=FALSE, CV.1=NA, CL.1=NA,
                  rob.2=TRUE, CV.2=NA, CL.2=NA)
for (j in seq_along(alpha)) {
  CVp <- CVpooled(CVsCmax, alpha=alpha[j], robust=FALSE)
  res$CV.1[j] <- signif(CVp$CV, digits)
  res$CL.1[j] <- signif(CVp$CVupper, digits)
  CVp <- CVpooled(CVsCmax, alpha=alpha[j], robust=TRUE)
  res$CV.2[j] <- signif(CVp$CV, digits)
  res$CL.2[j] <- signif(CVp$CVupper, digits)
}
print(res, row.names=FALSE)

 alpha rob.1   CV.1   CL.1 rob.2   CV.2   CL.2
  0.05 FALSE 0.1677 0.1839  TRUE 0.1751 0.1955
  0.20 FALSE 0.1677 0.1758  TRUE 0.1751 0.1853
  0.25 FALSE 0.1677 0.1742  TRUE 0.1751 0.1833
  0.50 FALSE 0.1677 0.1680  TRUE 0.1751 0.1755


You have to know beforehand how the studies were evaluated in order to use the correct setting for the argument robust to get the correct pooled CV and its CL (0.1677 → 0.1758 for studies evaluated by ANOVA and 0.1751 → 0.1853 for ones evaluated by a mixed effects model).
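Once this is settled, I would plug the upper CL (not the point estimate) into the sample size estimation. A sketch, assuming the next study will be a 2×2 crossover with a T/R-ratio of 0.95 and a target power of 80% (these assumptions are mine; adapt as needed):

# studies evaluated by ANOVA: upper CL 0.1758 instead of the point estimate
sampleN.TOST(CV = 0.1758, theta0 = 0.95, targetpower = 0.80, design = "2x2")
# for comparison: the (optimistic) point estimate
sampleN.TOST(CV = 0.1677, theta0 = 0.95, targetpower = 0.80, design = "2x2")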

❝ Is argument "digits" from function CVpooled simply a number of observations for a variable?


No. You get the total number of subjects by …
cat("Sum of n (N):", sum(CVsCmax$n), "\n")
Sum of n (N): 143

… and the total number of observations from one of the new data frames by:
cat("Sum of observations:", sum(CVs.1$obs), "\n") or
cat("Sum of observations:", sum(CVs.2$obs), "\n")
Sum of observations: 340

The argument digits tells the print method which precision to use in the output, where four significant digits are the default:
print(CVpooled(CVsCmax), verbose=TRUE)
Pooled CV = 0.1677 with 181 degrees of freedom
Upper 80% confidence limit of CV = 0.1758


print(CVpooled(CVsCmax), verbose=TRUE, digits=7)
Pooled CV = 0.1676552 with 181 degrees of freedom
Upper 80% confidence limit of CV = 0.1758098


Note that it is a little bit tricky to combine studies evaluated by fixed and mixed effects models. In such a case you have to provide the dfs yourself. Let’s assume that study 4 was evaluated by a fixed effects model (df = 2n–4) and study 4a by a mixed effects model (df = n–3). Provide the dfs in the additional column df (which takes precedence over the combination of n and design; the argument robust is then ignored). I recommend adding a column model as well (it is not used by the function but aids clarity):

CVs <- ("
  CV     | n  |design| source   | model | df
  0.2617 | 23 | 2x2  | study 1  | fixed | 21
  0.1216 | 24 | 2x2  | study 2  | fixed | 22
  0.1426 | 24 | 2x2  | study 3  | fixed | 22
  0.1480 | 27 | 3x3  | study 4  | fixed | 50
  0.1476 | 27 | 3x3  | study 4a | mixed | 24
  0.2114 | 18 | 2x2  | study 5  | fixed | 16 ")
txtcon <- textConnection(CVs)
CVdata <- read.table(txtcon, header=TRUE, sep="|", strip.white=TRUE, as.is=TRUE)
close(txtcon)
print(CVpooled(CVdata), verbose=TRUE)

Pooled CV = 0.1708 with 155 degrees of freedom
Upper 80% confidence limit of CV = 0.1798


Repeating it the hard way for comparison:
alpha        <- 0.2
digits       <- 4
CVdata$w.var <- CV2mse(CVdata$CV)*CVdata$df
dftot        <- sum(CVdata$df)
chi          <- qchisq(alpha, dftot)
pooledse2    <- sum(CVdata$w.var)/dftot
CV           <- mse2CV(pooledse2)
CLCV         <- se2CV(sqrt(pooledse2*dftot/chi))
cat("Pooled CV =", signif(CV, digits), "with", dftot, "degrees of freedom",
    "\nUpper", sprintf("%2i%%", 100*(1-alpha)), "CL of CV =",
    signif(CLCV, digits), "\n")

Pooled CV = 0.1708 with 155 degrees of freedom
Upper 80% CL of CV = 0.1798


Hope that helps.


  1. The idea behind α 0.2 is to have a producer’s risk of 20%.
  2. Gould recommended a slightly less conservative α 0.25:
    Gould AL. Group Sequential Extensions of a Standard Bioequivalence Testing Procedure. J Pharmacokinet Biopharm. 1995;23(1):57–86. PMID 8576845.
  3. Julious’ α 0.05 is very conservative and useful in a sensitivity analysis.
    Julious S. Sample Sizes for Clinical Trials. Boca Raton: Chapman & Hall / CRC Press; 2010. p. 113–114.
  4. Patterson & Jones recommend a rather liberal α 0.5:
    Patterson S, Jones B. Bioequivalence and Statistics in Clinical Pharmacology. Boca Raton: Chapman & Hall / CRC Press; 2nd ed. 2017. p. 134–136.

Helmut Schütz
