Elena777
Belarus, 2018-01-21 09:53
Posting: # 18238

Function CVpooled (package PowerTOST) [🇷 for BE/BA]

Dear all! I am new to R and RStudio. It would be very helpful if you could assist me with the following questions:

1. Here is an example of my data:

CVs <- ("
 PKmetric | CV     | n  |design|source   
    Cmax  | 0.2617 | 23 | 2x2  | study 1
    Cmax  | 0.1216 | 24 | 2x2  | study 2
    Cmax  | 0.1426 | 24 | 2x2  | study 3
    Cmax  | 0.1480 | 27 | 3x3  | study 4
    Cmax  | 0.1476 | 27 | 3x3  | study 4a 
    Cmax  | 0.2114 | 18 | 2x2  | study 5

")
txtcon <- textConnection(CVs)
CVdata <- read.table(txtcon, header=TRUE, sep="|", strip.white=TRUE, as.is=TRUE)
close(txtcon)

CVsCmax <- subset(CVdata, PKmetric=="Cmax")
CVpooled(CVsCmax, alpha=0.2, logscale=TRUE)

print(CVpooled(CVsCmax, alpha=0.2, robust=TRUE), digits=6, verbose=TRUE)


The result I got:

CVpooled(CVsCmax, alpha=0.2, logscale=TRUE)
0.1677 with 181 degrees of freedom
 
print(CVpooled(CVsCmax, alpha=0.2, robust=TRUE), digits=6, verbose=TRUE)
Pooled CV = 0.175054 with 129 degrees of freedom (robust df's)
Upper 80% confidence limit of CV = 0.185309


Why are degrees of freedom so huge? And why did I get two different results 0.1677 and 0.1750? Is it acceptable and can I use any result to calculate sample size in PowerTOST? What result must I choose (0.1677, 0.1750, 0.1853) for calculating sample size?

2. And the second question. Is argument "digits" from function CVpooled simply a number of observations for a variable?

Thank you in advance!
Helmut
Vienna, Austria, 2018-01-21 18:21
@ Elena777
Posting: # 18239

Function CVpooled (lengthy answer)

Hi Elena,

please RTFM: after attaching the package, type help(CVpooled) or ?CVpooled either in the R console or in RStudio. Alternatively, read the man-page online.

CVpooled(CVsCmax, alpha=0.2, logscale=TRUE)


Calculates the CV (pooled from your six studies – assuming that the individual CVs were obtained from log-transformed data). Furthermore, the CVs are weighted by the studies’ degrees of freedom. The dfs depend on how the studies were evaluated; here the default robust=FALSE is applied (more about that later). Hence, we get:

0.1677 with 181 degrees of freedom

In order to get the 1–α upper confidence limit of the pooled CV, you would have to use the print method and the additional argument verbose=TRUE:
print(CVpooled(CVsCmax, alpha=0.2), verbose=TRUE)
Pooled CV = 0.1677 with 181 degrees of freedom
Upper 80% confidence limit of CV = 0.1758
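For the curious: the upper limit follows from the χ² distribution of the pooled variance. A self-contained sketch in base R – CV2mse() and mse2CV() are re-implemented here with the usual log-scale relationships so that the snippet runs without the package; the inputs are the pooled CV 0.1677 and its 181 dfs from above:

```r
# Upper 1-alpha confidence limit of a CV (log-transformed data):
# df * s2 / sigma2 follows a chi-squared distribution with df degrees of freedom.
CV2mse <- function(CV) log(CV^2 + 1)        # CV -> variance on the log-scale
mse2CV <- function(mse) sqrt(exp(mse) - 1)  # variance on the log-scale -> CV
alpha <- 0.2    # default of CVpooled()
df    <- 181    # pooled dfs (fixed effects model)
s2    <- CV2mse(0.1677)                     # pooled variance
upper <- mse2CV(df * s2 / qchisq(alpha, df))
round(upper, 4) # close to the 0.1758 reported above (the input CV is rounded)
```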


print(CVpooled(CVsCmax, alpha=0.2, robust=TRUE), digits=6, verbose=TRUE)

❝ Pooled CV = 0.175054 with 129 degrees of freedom (robust df's)

❝ Upper 80% confidence limit of CV = 0.185309


❝ Why are degrees of freedom so huge?


You are pooling from six studies which gives you a lot of information. In each of them the degrees of freedom are n–2 for the 2×2 studies and 2n–4 for the 3×3 studies. Type known.designs() to get an idea. We can hack the code of the function to look behind the curtains:
CVs.1 <- CVs.2 <- CVsCmax       # new data frames
CVs.1$obs <- CVs.2$obs <- 0     # column for observations
CVs.1$df  <- CVs.2$df <- 0      # column for the dfs
CVs.1$robust <- FALSE           # column for the model: fixed
CVs.2$robust <- TRUE            #                       mixed
CVs.1$w.var <- CVs.2$w.var <- 0 # column for weighted variances
for (j in seq_along(CVsCmax$n)) {
  dno <- PowerTOST:::.design.no(CVsCmax$design[j])
  if (!is.na(dno)) {
    dprop <- PowerTOST:::.design.props(dno)
    n <- CVsCmax$n[j]
    per <- as.integer(substr(CVsCmax$design[j],
                             (nchar(CVsCmax$design[j])+1)-1,
                             nchar(CVsCmax$design[j])))
    CVs.2$obs[j] <- CVs.1$obs[j] <- per*n
    CVs.1$df[j] <- eval(parse(text=dprop$df, srcfile=NULL))
    CVs.2$df[j] <- eval(parse(text=dprop$df2, srcfile=NULL))
    CVs.1$w.var[j] <- CV2mse(CVs.1$CV[j])*CVs.1$df[j]
    CVs.2$w.var[j] <- CV2mse(CVs.2$CV[j])*CVs.2$df[j]
  }
}
print(CVs.1, row.names=FALSE)
print(CVs.2, row.names=FALSE)

 PKmetric     CV  n design   source obs df robust     w.var
     Cmax 0.2617 23    2x2  study 1  46 21  FALSE 1.3911140
     Cmax 0.1216 24    2x2  study 2  48 22  FALSE 0.3229227
     Cmax 0.1426 24    2x2  study 3  48 22  FALSE 0.4428769
     Cmax 0.1480 27    3x3  study 4  81 50  FALSE 1.0833777
     Cmax 0.1476 27    3x3 study 4a  81 50  FALSE 1.0775921
     Cmax 0.2114 18    2x2  study 5  36 16  FALSE 0.6995224

 PKmetric     CV  n design   source obs df robust     w.var
     Cmax 0.2617 23    2x2  study 1  46 21   TRUE 1.3911140
     Cmax 0.1216 24    2x2  study 2  48 22   TRUE 0.3229227
     Cmax 0.1426 24    2x2  study 3  48 22   TRUE 0.4428769
     Cmax 0.1480 27    3x3  study 4  81 24   TRUE 0.5200213
     Cmax 0.1476 27    3x3 study 4a  81 24   TRUE 0.5172442
     Cmax 0.2114 18    2x2  study 5  36 16   TRUE 0.6995224


Check the dfs which are used to calculate the confidence limit (based on the χ² distribution):
cat("Sum of dfs (fixed effects model):", sum(CVs.1$df),
    "\nSum of dfs (mixed effects model):", sum(CVs.2$df), "\n")

Sum of dfs (fixed effects model): 181
Sum of dfs (mixed effects model): 129


The hard way: Calculate pooled weighted variances and back-transform to pooled CVs:
cat("Pooled CVs:",
    "\nfixed effects model (robust=FALSE):",
    mse2CV(sum(CVs.1$w.var)/sum(CVs.1$df)),
    "\nmixed effects model (robust=TRUE) :",
    mse2CV(sum(CVs.2$w.var)/sum(CVs.2$df)), "\n")

Pooled CVs:
fixed effects model (robust=FALSE): 0.1676552
mixed effects model (robust=TRUE) : 0.1750539


For the background of the calculation see this presentation (slides 32–35).

❝ And why did I get two different results 0.1677 and 0.1750?


In your second call

❝ print(CVpooled(CVsCmax, alpha=0.2, robust=TRUE), digits=6, verbose=TRUE)


you used the argument robust=TRUE (whereas the first call implicitly uses the default robust=FALSE). When the CVs in the studies were calculated by ANOVA (or by SAS’ Proc GLM), the dfs given in column df of known.designs() are used. Only if the CVs were calculated by a mixed effects model (for the FDA e.g., by SAS Proc Mixed) are the values of column df2 used. Note that for 2×2 designs the dfs are identical, but for higher-order crossovers they differ (e.g., in your 3×3 the robust dfs are n–3). Since in the latter case the dfs are lower (129 vs. 181), less weight is given to the 3×3 studies with comparably low CVs (pooled CV 0.1751 vs. 0.1677), and the confidence interval is wider (upper CL 0.1853 vs. 0.1758).
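The two df totals can also be verified directly from these formulas, without touching the package internals (a minimal base-R sketch; the formulas are the ones listed in columns df and df2 of known.designs()):

```r
# dfs per study: ANOVA evaluation (column df) vs mixed model (column df2)
#   2x2: n-2 under both conventions; 3x3: 2*n-4 (ANOVA) but n-3 (robust)
df.anova  <- function(n, design) switch(design, "2x2" = n - 2, "3x3" = 2*n - 4)
df.robust <- function(n, design) switch(design, "2x2" = n - 2, "3x3" = n - 3)
n      <- c(23, 24, 24, 27, 27, 18)                    # subjects per study
design <- c("2x2", "2x2", "2x2", "3x3", "3x3", "2x2")  # designs as above
cat("Sum of dfs (ANOVA): ", sum(mapply(df.anova,  n, design)),   # 181
    "\nSum of dfs (robust):", sum(mapply(df.robust, n, design)), # 129
    "\n")
```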

❝ Is it acceptable and can I use any result to calculate sample size in PowerTOST?


No (see below).

❝ What result must I choose (0.1677, 0.1750, 0.1853) for calculating sample size?


The pooled ones (0.1677 or 0.1751) assume that the CV is “carved in stone” (i.e., is the “true” value). I don’t recommend that: if the actual CV is larger than assumed, the study will be underpowered. If it is smaller, you might have lost money, but power – and hence the chance to demonstrate BE – increases. Which alpha you use is up to you; 0.2 is my suggestion (and the function’s default).1,2,3,4 Explore increasing levels of alpha:
alpha  <- c(0.05, 0.2, 0.25, 0.5)
digits <- 4
res <- data.frame(alpha=alpha, rob.1=FALSE, CV.1=NA, CL.1=NA,
                  rob.2=TRUE, CV.2=NA, CL.2=NA)
for (j in seq_along(alpha)) {
  CVp <- CVpooled(CVsCmax, alpha=alpha[j], robust=FALSE)
  res$CV.1[j] <- signif(CVp$CV, digits)
  res$CL.1[j] <- signif(CVp$CVupper, digits)
  CVp <- CVpooled(CVsCmax, alpha=alpha[j], robust=TRUE)
  res$CV.2[j] <- signif(CVp$CV, digits)
  res$CL.2[j] <- signif(CVp$CVupper, digits)
}
print(res, row.names=FALSE)

 alpha rob.1   CV.1   CL.1 rob.2   CV.2   CL.2
  0.05 FALSE 0.1677 0.1839  TRUE 0.1751 0.1955
  0.20 FALSE 0.1677 0.1758  TRUE 0.1751 0.1853
  0.25 FALSE 0.1677 0.1742  TRUE 0.1751 0.1833
  0.50 FALSE 0.1677 0.1680  TRUE 0.1751 0.1755


You have to know beforehand how the studies were evaluated in order to use the correct setting for the argument robust to get the correct pooled CV and its CL (0.1677 → 0.1758 for studies evaluated by ANOVA and 0.1751 → 0.1853 for ones evaluated by a mixed effects model).
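To see what the choice implies for planning, one could feed the upper CLs into sampleN.TOST() (a sketch using the function’s defaults: GMR 0.95 and 80% target power, here for a 2×2×2 design):

```r
library(PowerTOST)
# Sample sizes based on the upper 80% CLs of the pooled CV
# (defaults: theta0 = 0.95, targetpower = 0.80):
sampleN.TOST(CV = 0.1758, design = "2x2x2",  # studies evaluated by ANOVA
             print = FALSE)[["Sample size"]]
sampleN.TOST(CV = 0.1853, design = "2x2x2",  # studies evaluated by a mixed model
             print = FALSE)[["Sample size"]]
```

The more conservative CL of the robust evaluation naturally demands at least as many subjects.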

❝ Is argument "digits" from function CVpooled simply a number of observations for a variable?


No. You get the total number of subjects by …
cat("Sum of n (N):", sum(CVsCmax$n), "\n")
Sum of n (N): 143

… and the total number of observations from one of the new data frames by:
cat("Sum of observations:", sum(CVs.1$obs), "\n") # or equivalently
cat("Sum of observations:", sum(CVs.2$obs), "\n")
Sum of observations: 340

The argument digits tells the print method which precision to use in the output; 4 digits is the default:
print(CVpooled(CVsCmax), verbose=TRUE)
Pooled CV = 0.1677 with 181 degrees of freedom
Upper 80% confidence limit of CV = 0.1758


print(CVpooled(CVsCmax), verbose=TRUE, digits=7)
Pooled CV = 0.1676552 with 181 degrees of freedom
Upper 80% confidence limit of CV = 0.1758098


Note that it is a little bit tricky to combine studies evaluated by fixed and mixed effects models. In such a case you have to provide the dfs yourself. Let’s assume that study 4 was evaluated by a fixed effects model (df = 2n–4) and study 4a by a mixed effects model (df = n–3). Provide the dfs in the additional column df (which takes precedence over the combination of n and design; the argument robust is then ignored). I recommend adding a column model as well (it is not used by the function but aids clarity):

CVs <- ("
  CV     | n  |design| source   | model | df
  0.2617 | 23 | 2x2  | study 1  | fixed | 21
  0.1216 | 24 | 2x2  | study 2  | fixed | 22
  0.1426 | 24 | 2x2  | study 3  | fixed | 22
  0.1480 | 27 | 3x3  | study 4  | fixed | 50
  0.1476 | 27 | 3x3  | study 4a | mixed | 24
  0.2114 | 18 | 2x2  | study 5  | fixed | 16 ")
txtcon <- textConnection(CVs)
CVdata <- read.table(txtcon, header=TRUE, sep="|", strip.white=TRUE, as.is=TRUE)
close(txtcon)
print(CVpooled(CVdata), verbose=TRUE)

Pooled CV = 0.1708 with 155 degrees of freedom
Upper 80% confidence limit of CV = 0.1798


Repeating it the hard way for comparison:
alpha        <- 0.2
digits       <- 4
CVdata$w.var <- CV2mse(CVdata$CV)*CVdata$df
dftot        <- sum(CVdata$df)
chi          <- qchisq(alpha, dftot)
pooledse2    <- sum(CVdata$w.var)/dftot
CV           <- mse2CV(pooledse2)
CLCV         <- se2CV(sqrt(pooledse2*dftot/chi))
cat("Pooled CV =", signif(CV, digits), "with", dftot, "degrees of freedom",
    "\nUpper", sprintf("%2i%%", 100*(1-alpha)), "CL of CV =",
    signif(CLCV, digits), "\n")

Pooled CV = 0.1708 with 155 degrees of freedom
Upper 80% CL of CV = 0.1798


Hope that helps.


  1. The idea behind α 0.2 is to have a producer’s risk of 20%.
  2. Gould recommended a slightly less conservative α 0.25:
    Gould AL. Group Sequential Extensions of a Standard Bioequivalence Testing Procedure. J Pharmacokinet Biopharm. 1995;23(1):57–86. PMID 8576845.
  3. Julious’ α 0.05 is very conservative and useful in a sensitivity analysis.
    Julious S. Sample Sizes for Clinical Trials. Boca Raton: Chapman & Hall / CRC Press; 2010. p. 113–114.
  4. Patterson & Jones recommend a rather liberal α 0.5:
    Patterson S, Jones B. Bioequivalence and Statistics in Clinical Pharmacology. Boca Raton: Chapman & Hall / CRC Press; 2nd ed. 2017. p. 134–136.

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Elena777
Belarus, 2018-01-27 11:11
@ Helmut
Posting: # 18296

Function CVpooled (lengthy answer)

Helmut, I can’t thank you enough for this super-detailed answer! I’ve got it!
ElMaestro
Denmark, 2018-01-27 23:28
@ Elena777
Posting: # 18297

Function CVpooled (package PowerTOST)

Hi all,

can we take one step back and ask which question the pooled CV provides an answer to?

"If the CV across all studies happens to be the same, then which CV is most likely representing all the studies?"
[yes, add alfalfa and limits if you wish]
Now, I hate to say it, but boring things like the blood sampling regimen, analytical setups, the stats model, within-batch variation of both T and R, volunteer constraints postdose, and other factors will directly influence a CV. If you are in doubt, look at the list provided in this thread.

Please give me a good reason to believe all those factors (plus all the ones I forgot) are the same across just any two studies. You pick and show me. Until then I consider pooled CVs a curiosity as useful as head-mounted toilet paper etc.

A cheeseburger is pretty well assumed to be the same regardless of whether you buy it at McDonald's in Pasadena, Paris or Peking. A cheeseburger is a cheeseburger is a cheeseburger. Until we bring Burger King into the picture. Or Hardee's. Or...

This pooled business is not the way forward, gentlemen and gentlewomen. But it is highly theoretically and academically appealing, I grant you all that.

Now if you will excuse me I must end this post here as I have to blow my nose. I am sure I put that roll of toilet paper somewhere, but I just can't seem to find it right now... Bloody hell...

Pass or fail!
ElMaestro
Helmut
Vienna, Austria, 2018-01-28 01:27
@ ElMaestro
Posting: # 18298

Common sense

Hi ElMaestro,

excellent post, especially re. burgers and toilet paper (another [image] odd couple)! :rotfl:

When it comes to pooling of CVs you are absolutely right – and got it terribly wrong.

Do you remember what you wrote a while ago? In an earlier post I wrote:

❝ Doing the math is just the first step. Before you pool CVs I would suggest to inspect whether the confidence intervals of the CVs overlap […]. If not, try to find out why (different CROs, populations, bioanalytical methods, …). Use common sense to decide which CVs are reliable enough to pool.


That’s what statistics is all about – numbers. But never forget the data generating process. Are results obtained by a method of a reputable CRO whose head wrote dozens of papers about trace-MS better than colorimetry done in a back yard lab in Timbuktu? Etc. etc.

I suppose it is tempting, if the only tool you have is a hammer,
to treat everything as if it were a nail.
     Abraham Maslow (The Psychology of Science, 1966)

See also this one. Again: Statistics is just one tool, common sense is another. Use both.

ElMaestro
Denmark, 2018-01-28 09:43
@ Helmut
Posting: # 18299

Common sense

Hi Hötzi,

❝ When it comes to pooling of CVs you are absolutely right – and got it terribly wrong.


❝ Do you remember what you wrote a while ago?


I got it terribly right. :-P

I don't remember every post I have written. Or more correctly and definitely more honestly, I don't remember any post I have written. :lookaround:
Contrasting my post with the one you linked to makes it very clear that I am an example of 'radicalisation' as news agencies would call it. :-D

Helmut
Vienna, Austria, 2018-01-28 12:38
@ ElMaestro
Posting: # 18300

Alzheimer’s

Hi ElMaestro,

❝ I don't remember every post I have written.


Neither do I. The search function helps.

❝ Or more correctly and defintely more honestly, I don't remember any post I have written. :lookaround:


That’s the good thing with Alzheimer’s. Every day new acquaintances.

d_labes
Berlin, Germany, 2018-01-28 14:52
@ ElMaestro
Posting: # 18301

To pool or not to pool

Dear Öberster Größter Meister!

❝ Please give me a good reason to believe all those factors (plus all the ones I forgot) are the same across just any two studies. You pick and show me. Until then I consider pooled CVs a curiosity as useful as head-mounted toilet paper etc.


Please have a look at Guernsey McPearson’s prose, the essay “A Dip in the Pool”:

Quote: "It seems to me that people who object to pooling different studies but would quite happily accept any one of them on its own, if only it were large enough, for the purpose of informing medical decision making, should be given thinking lessons."

More in the whole essay.

Regards,

Detlew
ElMaestro
Denmark, 2018-01-29 01:07
@ Elena777
Posting: # 18302

Function CVpooled (package PowerTOST)

Hi all,

I will take it in good spirit that for some it can be difficult to accept diverging opinions without lashing out in a personal fashion. My ambition with my everyday dialogue is to keep the conversation on the healthy side of the fine line that separates humor from venomous hints.


Heteroscedasticity, or non-homogeneous variance, is a concrete and widespread matter and an everyday concern at most pharmaceutical companies and CROs involved in multicenter trials. There are some compelling reasons for taking it seriously, including scientific, ethical and pecuniary ones. The companies go to great lengths to control it as much as possible, including using the same equipment, having it calibrated at the same centralised metrology departments, use of the same SOPs, and not least having all centres adhere to the same protocol. And much more. It is through such effort that companies can justify using a stats model that implicitly assumes variance homogeneity. The alternative would be pretty nasty and untenable.
There is no particular scientific reason to assume variance homogeneity for studies done at different times, at different locations, under different designs, with different SOPs, using different equipment and protocols, etc. If I am not mistaken, this is what we are doing when we pool CVs as described here.

Note also the distinction between pooling effects, pooling effect ratios (they tend to be homogeneous between sites) and pooling variances. I believe GMcP and other pooling proponents do not really allude to the latter?

Helmut
Vienna, Austria, 2018-01-29 18:53
@ ElMaestro
Posting: # 18303

Pooling is fooling?

Hi ElMaestro and all,

❝ I will take it in good spirit that for some it can be difficult to accept diverging opinions without lashing out in a personal fashion. My ambition with my everyday dialogue is to keep the conversation on the healthy side of the fine line that separates humor from venomous hints.


If I crossed the line, please accept my sincere apologies.

❝ There is no particular scientific reason to assume variance homogeneity for studies done at different times, at different locations, under different designs, with different SOPs, using different equipment and protocols, etc. If I am not mistaken, this is what we are doing when we pool CVs as described here.


I examined data (CVintra of Cmax of MR MPH) from the public domain and some of my studies. It doesn’t matter whether a chiral method is used or not, since the in vivo interconversion is negligible.
library(PowerTOST)
CVs <- ("
CV    |n |design|no|source                |type     |food    |reg|assay   |method
0.1971|33|3x6x3 |1 |Modi et al. 2000      |dose prop|fast    |SD |LC-MS/MS|chiral
0.2180|24|4x4   |2 |Midha et al 2001      |food eff |fast/fed|SD |GC/ECD  |chiral
0.1658|19|2x2x2 |3 |Markowitz et al. 2003 |BE       |fast    |SD |LC-MS/MS|achiral
0.1378|23|3x3   |4 |Rochdi et al. 2005    |dose prop|fast    |SD |LC-MS/MS|achiral
0.0890|12|2x2x2 |5 |Fischer et al. 2006   |sprinkle |fed     |SD |GC/MS   |achiral
0.2028|19|3x3   |6 |Patrick et al. 2007   |alcohol  |fed     |SD |LC-MS/MS|chiral
0.0870|24|3x3   |7 |Tuerck et al. 2007    |line ext |fast    |SD |LC-MS/MS|chiral
0.1415|27|4x4   |8a|Haessler et al. 2008  |BE       |fast    |SD |LC-MS/MS|achiral
0.1741|26|4x4   |8b|Haessler et al. 2008  |BE       |fed     |SD |LC-MS/MS|achiral
0.1965|13|2x2x2 |9 |Schütz et al. 2009    |BE       |fed     |SD |GC/MS   |achiral
0.1398|16|4x4   |10|Wang et al. 2004      |BE       |fast    |SD |LC-MS/MS|achiral
0.1202|12|4x4   |11|6520-9973-03          |food eff |fast/fed|SD |GC/MS   |achiral
0.2381|12|2x2x2 |12|6520-9979-04          |MR/IR    |fed     |SD |GC/MS   |achiral
0.2052|12|2x2x2 |13|EudraCT 2005-004375-38|BE       |fed     |SD |GC/MS   |achiral
0.1049|11|3x6x3 |14|EudraCT 2009-013059-31|pilot    |fed     |SD |GC/MS   |achiral
0.1793|15|2x2x2 |15|EudraCT 2009-015822-12|line ext |fed     |SD |GC/MS   |chiral
0.0854|16|2x2x2 |16|EudraCT 2010-021272-28|line ext |fed     |MD |GC/MS   |chiral
0.1347|18|3x6x3 |17|EudraCT 2011-002358-30|pilot    |fed     |SD |GC/MS   |chiral")
txtcon <- textConnection(CVs)
CVdata <- read.table(txtcon, header=TRUE, sep="|", strip.white=TRUE, as.is=TRUE)
close(txtcon)
alpha   <- 0.05
alphaCL <- 0.2
CVp   <- CVpooled(CVdata, alpha=alphaCL)
for (j in seq_along(row.names(CVdata))) {
  n <- CVdata$n[j]
  CVdata$pwr.GMR1[j] <- suppressMessages(power.TOST(CV=CVdata$CV[j], theta0=1,
                                                    n=n, design=CVdata$design[j]))
  CVdata$df[j] <- eval(parse(text=
                    known.designs()[which(known.designs()[["design"]] ==
                      CVdata$design[j]), "df"], srcfile=NULL))
  CL <- CVCL(CV=CVdata$CV[j], df=CVdata$df[j], side="2-sided", alpha=alpha)
  CVdata$CLlo[j] <- signif(CL[["lower CL"]], 4)
  CVdata$CLhi[j] <- signif(CL[["upper CL"]], 4)
  ifelse (CVdata$CLhi[j] > CVp$CVupper, CVdata$sig[j] <- "*",
                                        CVdata$sig[j] <- "ns")
  CVdata$N.CV[j] <- sampleN.TOST(CV=CVdata$CV[j], design="2x2x2",
                                    print=FALSE)[["Sample size"]]
  CVdata$N.CL[j] <- sampleN.TOST(CV=CVCL(CV=CVdata$CV[j], df=CVdata$df[j],
                                         side="2-sided", alpha=alphaCL)[["upper CL"]],
                                 design="2x2x2", print=FALSE)[["Sample size"]]
}
CVdata$w.var <- CV2mse(CVdata$CV)*CVdata$df
dftot        <- sum(CVdata$df)
CVCL         <- CVCL(CV=CVp$CV, df=dftot, side="2-sided", alpha=alpha)
print(CVp, verbose=TRUE); print(CVdata, row.names=FALSE)
ylim  <- c(1, max(as.numeric(row.names(CVdata)))+1)
xlim  <- range(c(CVdata$CLlo, CVdata$CLhi))
xlab  <- sprintf("CV (%i%% CL)", 100*(1-alpha))
ylab  <- "Study #"
ycorr <- 18*0.025/(diff(ylim))
dev.new(record=TRUE)
op <- par(ask=TRUE)
par(pty="s")
plot(CVdata$CV, row.names(CVdata), type="n", log="x", axes=FALSE,
     frame.plot=TRUE, xlim=xlim, ylim=ylim, xlab=xlab, ylab=ylab)
axis(1, at=pretty(xlim), labels=sprintf("%.0f%%", pretty(100*xlim)))
axis(2, at=1:nrow(CVdata), labels=CVdata$no, tick=FALSE, las=1)
axis(3, at=CVp$CV, labels=sprintf("%.2f%%", 100*CVp$CV))
abline(v=c(CVp$CV, CVCL[["upper CL"]]), lty=c(1, 3), col="blue")
for (j in seq_along(row.names(CVdata))) {
  if (CVdata$CV[j] > CVCL[["upper CL"]])
    points(CVdata$CV[j], j, pch=15, cex=1.1, col="red")
  arrows(x0=CVdata$CLlo[j], y0=j, x1=CVdata$CLhi[j], y1=j,
         length=ycorr*2, angle=90, code=3)
  points(CVdata$CV[j], j, pch=3, cex=1.5)
  mtext(4, text=sprintf("%4.1f%%", 100*CVdata$pwr.GMR1[j]), at=j,
        line=2.6, las=1, cex=0.85, adj=1)
}
loc <- max(as.numeric(row.names(CVdata)))+1
polygon(x=c(CVCL[["lower CL"]], CVp$CV, CVCL[["upper CL"]],
            CVp$CV, CVCL[["lower CL"]]),
        y=c(loc, loc-ycorr*8, loc, loc+ycorr*8, loc),
        border=NA, col="lightblue")
text(x=CVCL[["upper CL"]], y=loc, pos=4,
     labels=paste0("pooled CV (", 100*(1-alphaCL), "% CI)"))
mtext(4, text="power:\nGMR 1", at=loc, line=0.5, las=1, cex=0.85)
CVset1 <- subset(CVdata, no %in% c("5", "9", "13", "14", "15", "17"))
CVp1   <- CVpooled(CVset1, alpha=alpha)
for (j in seq_along(row.names(CVset1))) {
  ifelse (CVset1$CV[j] > CVp1$CVupper, CVset1$sig[j] <- "*",
                                       CVset1$sig[j] <- "ns")
}
CVset1$w.var <- CV2mse(CVset1$CV)*CVset1$df
dftot        <- sum(CVset1$df)
CVCL         <- CVCL(CV=CVp1$CV, df=dftot, side="2-sided", alpha=alpha)
print(CVp1, verbose=TRUE); print(CVset1, row.names=FALSE)
CVset1$study <- seq_along(1:length(CVset1$n))
ylim  <- c(1, nrow(CVset1)+1)
ycorr <- 18*0.025/(diff(ylim))
plot(CVset1$CV, 1:nrow(CVset1), type="n", log="x", axes=FALSE,
     frame.plot=TRUE, xlim=xlim, ylim=ylim, xlab=xlab, ylab=ylab)
axis(1, at=pretty(xlim), labels=sprintf("%.0f%%", pretty(100*xlim)))
axis(2, at=1:nrow(CVset1), labels=CVset1$no, tick=FALSE, las=1)
axis(3, at=CVp1$CV, labels=sprintf("%.2f%%", 100*CVp1$CV))
abline(v=c(CVp1$CV, CVCL[["upper CL"]]), lty=c(1, 3), col="blue")
for (j in seq_along(row.names(CVset1))) {
  if (CVset1$CV[j] > CVCL[["upper CL"]])
    points(CVset1$CV[j], j, pch=15, cex=1.1, col="red")
  arrows(x0=CVset1$CLlo[j], y0=j, x1=CVset1$CLhi[j], y1=j,
         length=ycorr/1.5, angle=90, code=3)
  points(CVset1$CV[j], j, pch=3, cex=1.5)
  mtext(4, text=sprintf("%4.1f%%", 100*CVdata$pwr.GMR1[j]), at=j,
        line=2.6, las=1, cex=0.85, adj=1)
}
loc <- nrow(CVset1)+1
polygon(x=c(CVCL[["lower CL"]], CVp1$CV, CVCL[["upper CL"]],
            CVp1$CV, CVCL[["lower CL"]]),
        y=c(loc, loc-ycorr, loc, loc+ycorr, loc),
        border=NA, col="lightblue")
text(x=CVCL[["upper CL"]], y=loc, pos=4,
     labels=paste0("pooled CV (", 100*(1-alphaCL), "% CI)"))
mtext(4, text="power:\nGMR 1", at=loc, line=0.5, las=1, cex=0.85)
par(op)


and got this (red squares denote CVs > the upper CL of the pooled CV; power for a GMR of 1 in the right margin):

[image]

[image]
Oops! Apples and oranges.
What we also see: Variability was the highest in study #12 (MR vs. IR), whereas MD (#16) showed only half of the variability of SD (#15) although the accumulation was <1%.

For the subset (same product, SD, same analytical method):

[image]

Here using the upper CL of the pooled CV would “work” but using the highest CV would be even more conservative.

nobody
2018-01-29 20:21
@ Helmut
Posting: # 18304

Pooling is fooling?

Just some thoughts:
  • on one end of the pooling debate you can find meta-analyses and their often far-reaching conclusions for "evidence-based medicine".
  • on the other end you find the headline of this post (without the question mark).
For me (considering the purpose of this thread) what is most appealing is:
  • reasonable worst-case scenario. Pick the studies comparable to yours, make a worst-case assessment, and base your numbers on that – numbers considering the scientific (power) and, if necessary, the economic aspects.
Then you have done much more than many before you. I see so so so often: "we choose to include 12 healthy volunteers". End of story. No rationale, no numbers, no power, no nothing.

Study fails? How come? Big surprise..

Kindest regards, nobody
ElMaestro
Denmark, 2018-01-30 19:23
@ nobody
Posting: # 18305

Life is good

Hi all,

well, pooling is good for many things. Not very good for others.

Consider also the issue with scaling in BE. We operate on the basis of a variance of lnR–lnR, so to say. Not a variance of lnR–lnT or lnT–lnT. But why? Bear in mind that the difference between lnT and lnR is accounted for by the fixed effect. It is because the within-subject variance may be heavily formulation-dependent. Keep this in mind when you pool different studies. You are potentially throwing a lot of variance of lnT–lnR into the pot from all sorts of different processes and formulations and batches.

To me, common sense is not pooling variances from different trials, but at least trying to check if you can relate to the published cases and picking the one you think matches your own conditions best. Yes, the amount of detail may not be enormous in a PAR. Annoying, but just a fact of life. The amount of info that you do not have in a set of PARs is scientifically not compensated by pooling of variances.

nobody
2018-01-30 19:31
@ ElMaestro
Posting: # 18306

Life is good

...mildly (un)related to pooling:

Why is the Swedish flag in the forum here so much bigger than the Danish? And Belgium is even smaller than that? Is there no set of average-sized flags out there? :-)

Sorry, but I had to post this one day and this day is today...

Kindest regards, nobody
Helmut
Vienna, Austria, 2018-01-31 02:47
@ nobody
Posting: # 18307

OT: Flags

[image]
Hi nobody,

❝ Why is the Swedish flag ([image]) in the forum here so much bigger than the Danish ([image])? And Belgium ([image]) is even smaller than that? Is there no set of average-sized flags out there? :-)


All flags are scaled to the same height (12 px) in order to match the CSS-properties ul { line-height: 108%; } img.flag { box-shadow: 1px 1px 3px #999; }. The width / height-ratio of flags and their colors are mandated by national laws (not kiddin’ – try this one). The red in the Austrian flag is #EF3340 and the Danish one is #E4002B. :-D
The most common ratio is 3:2, f.i. [image] [image] [image] [image] [image] [image] [image] [image] [image]
followed by ones with 2:1, f.i. [image] [image] [image] [image] [image] [image] [image] [image] [image].
Rare ones: 7:6 [image], 11:8 [image] [image], 4:3 [image], 5:2 [image].
The only square one [image] and the only one which is not rectangular [image]

nobody
2018-01-31 11:17
@ Helmut
Posting: # 18308

OT: Flags

...why not pool all these things and have them all in 3:2? In real life all these pieces of fabric are the same size, I guess, when they have a summit, or? :-)

Kindest regards, nobody
ElMaestro
Denmark, 2018-01-31 12:02
@ nobody
Posting: # 18309

OT: Flags

❝ ...why not pool all these things and have them all in 3:2? In real life all these pieces of fabric are the same size, I guess, when they have a summit, or? :-)



Ah yes, harmonisation is usually never a contentious issue. Oh wait, is it spelled 'hamonization'...?!?

"Dear Elizabeth,

we have had a great idea. Following a discussion on bebac.at - which we are sure you recognize as a leading authority in the field of flags and banners, and not least an incubator for brilliant thoughts - we have decided that as of May 1st 2018, the dimensions of the Union Jack should be 8:5 (width:height). We are sure this little change will not cause any stir or fuss. Please advise your colonies for alignment, change all letterheads throughout the Commonwealth accordingly, and make sure to incinerate all current and unused governmental stocks by the aforementioned date.

Best regards,
Nobody and colleagues

PS: We appreciate your efforts to introduce the metric system but we find the process a bit erratic. Can you please get rid of road signs indicating distances in miles? And while you're at it kindly rework your annoying electric plugs and sockets. Your willingness to collaborate is highly appreciated."

Helmut
Vienna, Austria, 2018-01-31 12:16
@ nobody
Posting: # 18310

OT: Flags

Hi nobody,

❝ ...why not pool all these things and have them all in 3:2?


I don’t like distortions.
[image] [image]
[image] [image]
Reminds me of early 16:9 TV-CRTs which stretched the 4:3 PAL picture, making actors look obese.

❝ In real life all these pieces of fabric are the same size, I guess, when they have a summit, or? :-)


No idea. Politicians don’t give a s**t?

nobody
nothing

2018-01-31 12:22
(2248 d 06:26 ago)

@ Helmut
Posting: # 18311
Views: 12,869
 

 OT: Flags

@Danmark

"PPS: and stop driving on the wrong side of the lane..."

(off-off-off topic: running low on Karen Wolff Honnungsnitter. Can you help me out?)

@Vienna calling:

Keep the Swiss and Bhutan as they are. And transform the rest. Pooling at its best.

Kindest regards, nobody
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2018-01-31 12:38
(2248 d 06:10 ago)

@ nobody
Posting: # 18312
Views: 12,859
 

 OT: Flags

Hi nobody,

❝ "PPS: and stop driving on the wrong side of the lane..."


… and follow the good example of Sweden, which did it on Sep 3, 1967 (Dagen H – the H short for Högertrafikomläggningen, the 'right-hand traffic changeover').

❝ Keep the Swiss and Bhutan as they are.


Don’t mix up Bhutan ([image]) with Nepal ([image]).

❝ And transform the rest. Pooling at its best.


:not really:

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
nobody
nothing

2018-01-31 13:28
(2248 d 05:21 ago)

@ Helmut
Posting: # 18313
Views: 12,775
 

 OT: Flags

❝ ❝ "PPS: and stop driving on the wrong side of the lane..."


❝ … and follow the good example of Sweden, which did it on Sep 3, 1967 (Dagen H – the H short for Högertrafikomläggningen, the 'right-hand traffic changeover').


Nowadays other "Höger" problems...

❝ ❝ Keep the Swiss and Bhutan as they are.


❝ Don’t mix up Bhutan ([image]) with Nepal ([image]).


Me idi*t! I apologize and will climb Annapurna twice, to make up for my fault...

❝ ❝ And transform the rest. Pooling at its best.


:not really:


Which sims in R could I do to convince you? These uneven flags are a no-go! :-p

Kindest regards, nobody
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2018-01-31 18:18
(2248 d 00:30 ago)

@ nobody
Posting: # 18316
Views: 12,866
 

 OT: Flags

❝ ❝ Don’t mix up Bhutan ([image]) with Nepal ([image]).


❝ Me idi*t! I apologize and will climb Annapurna twice, to make up for my fault...


Why not the Sagarmatha/Chomolungma? Much safer than the Annapurna with its death rate of 25%. Climb it twice?

❝ Which sims in R could I do to convince you?


Finding an adjusted α for reference-scaling performed in a two-stage design. Do you have access to a supercomputer or at least a large Linux-cluster?

❝ These uneven flags are a no-go! :-p


Sorry. You have to learn to live with them. Optionally: Switch off graphics in your brause.

nobody
nothing

2018-01-31 18:32
(2248 d 00:17 ago)

@ Helmut
Posting: # 18317
Views: 12,692
 

 OT: Flags

❝ ❝ ❝ Don’t mix up Bhutan ([image]) with Nepal ([image]).

❝ ❝

❝ ❝ Me idi*t! I apologize and will climb Annapurna twice, to make up for my fault...


❝ Why not the Sagarmatha/Chomolungma? Much safer than the Annapurna with its death rate of 25%. Climb it twice?


Just 25%? Only about 160 people made it. Entleibung (self-destruction), the hard way...

❝ ❝ Which sims in R could I do to convince you?


❝ Finding an adjusted α for reference-scaling performed in a two-stage design. Do you have access to a supercomputer or at least a large Linux-cluster?


Son is studying computer stuffffs, maybe in some years ;-) Still no distributed R to do something like Mersenne prime mining?

❝ ❝ These uneven flags are a no-go! :-p


❝ Sorry. You have to learn to live with them. Optionally: Switch off graphics in your brause.


You are mean!

Kindest regards, nobody
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2018-01-31 20:00
(2247 d 22:49 ago)

@ nobody
Posting: # 18318
Views: 12,834
 

 OT: Flags

Hi nobody,

❝ ❝ ❝ Which sims in R could I do to convince you?

❝ ❝

❝ ❝ Finding an adjusted α for reference-scaling performed in a two-stage design. Do you have access to a supercomputer or at least a large Linux-cluster?


❝ Son is studying computer stuffffs, maybe in some years ;-) Still no distributed R to do something like Mersenne prime mining?


CRAN Task View: High-Performance and Parallel Computing with R

❝ ❝ Sorry. You have to learn to live with them. Optionally: Switch off graphics in your brause.


❝ You are mean!


I’m afraid, yes.

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz