Dear all,
I stumbled across this statement in the MR guideline:
5.1.2. Variability
The inter-individual variability of the pharmacokinetic parameters of interest should be determined in the single dose or multiple dose studies […] and should be compared between the modified and immediate release formulation. The variability for the modified release formulation should preferably not exceed that for the immediate release formulation unless it is adequately justified in terms of potential clinical consequences.
(my emphases)
IMHO, that calls for a one-sided test (non-superiority). Between-subject variability is nasty. Even worse, if the drug is subject to polymorphic metabolism, it can be much larger than the within-subject variability.
BTW, it’s funny to ask for it because one doesn’t obtain the between-subject variability from the EMA’s “all effects fixed” model.
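Getting it would call for a random subject effect. A minimal sketch with nlme (not the EMA’s model; the data frame data and its columns subject, sequence, period, treatment, logPK are hypothetical):

library(nlme)
# random intercept for subject gives the between-subject variance,
# which an all-effects-fixed model never estimates
m    <- lme(logPK ~ sequence + period + treatment,
            random = ~ 1 | subject, data = data)
varb <- as.numeric(VarCorr(m)["(Intercept)", "Variance"]) # between
varw <- m$sigma^2                                         # within (residual)
CVb  <- sqrt(exp(varb) - 1)
CVw  <- sqrt(exp(varw) - 1)

Note that this still gives only a common between-subject variance, not separate ones for T and R – which leads to the next point.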
Another ambiguity: it’s possible to extract CVb and CVw from a crossover study, but not separately for the treatments. Only if we assume no period effects can we get the total (pooled) CV of T and R. Is that what is meant, and the EMA just used the all too common sloppy terminology?
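If so, the total CVs are simple to obtain (a sketch; same hypothetical data frame as above, one observation per subject and treatment):

library(PowerTOST)                             # for mse2CV()
# total (pooled) variances per treatment across subjects;
# only meaningful under the assumption of no period effects
var.T <- var(data$logPK[data$treatment == "T"])
var.R <- var(data$logPK[data$treatment == "R"])
CVp.T <- mse2CV(var.T)                         # sqrt(exp(var) - 1)
CVp.R <- mse2CV(var.R)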
Let’s explore an example of an HVD(P) where reference-scaling is acceptable for Cmax and Cτ. The within-subject CVs are 30, 50, and 60% for AUC, Cmax, and Cτ, respectively. The between-subject variabilities are twice the within-subject ones. The assumed T/R-ratios are 0.95 for AUC and 0.90 for the concentrations. The non-superiority margin is 1.25; four-period full replicate design, ≥80% power.
R scripts at the end.
I got the following sample sizes:
design metric variability  CV theta0 margin          method   n
 2x2x4    AUC      within 0.3 0.9500     NA             ABE  20
 2x2x4    AUC     between 0.6 1.0526   1.25 Non-superiority  84
 2x2x4   Cmax      within 0.5 0.9000     NA            ABEL  28
 2x2x4   Cmax     between 1.0 1.1111   1.25 Non-superiority 394
 2x2x4   Ctau      within 0.6 0.9000     NA            ABEL  32
 2x2x4   Ctau     between 1.2 1.1111   1.25 Non-superiority 506
I beg your pardon. If one takes this seriously, it’s a show stopper.
OK, seems that I’m on the wrong track. Here I’m testing non-superiority of the PK metrics; I’m not comparing their variabilities. How should we do that? The conventional F-test won’t do because it is for independent data. What about Pitman-Morgan? An example from a last year’s hybrid for Health Canada (T = MR 20 mg o.a.d. [n = 40], R = IR 10 mg b.i.d. [n = 39]; Cmax and Cτ data, R script at the end):
Paired Pitman-Morgan test
data: Cmax.T and Cmax.R
t = -0.10458, df = 37, p-value = 0.5414
alternative hypothesis: true ratio of variances is greater than 1
95 percent confidence interval:
0.6826989 Inf
sample estimates:
variance of x variance of y
   0.09814356    0.10036611
metric treatment   variance       CVp
  Cmax         T 0.09814356 0.3211248
  Cmax         R 0.10036611 0.3249240
Paired Pitman-Morgan test
data: Ctau.T and Ctau.R
t = 0.96519, df = 37, p-value = 0.1704
alternative hypothesis: true ratio of variances is greater than 1
95 percent confidence interval:
0.852144 Inf
sample estimates:
variance of x variance of y
    0.3531756     0.2845929
metric treatment  variance       CVp
  Ctau         T 0.3531756 0.6508311
  Ctau         R 0.2845929 0.5737777
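For the record, the Pitman-Morgan test is nothing but a zero-correlation test of the pairwise sums and differences, since cov(x + y, x − y) = var(x) − var(y). A sketch which should reproduce the Cmax result above (data frame df of the script at the end):

# Pitman-Morgan via the correlation of sums and differences
PM <- function(x, y, alternative = "greater") {
  ok <- complete.cases(x, y) # drop incomplete pairs (here: subject 6)
  cor.test(x[ok] + y[ok], x[ok] - y[ok], alternative = alternative)
}
PM(df$Cmax.T, df$Cmax.R)     # should give t = -0.10458, df = 37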
In my hybrid applications I have never been asked for such a test so far (I just reported the CVs). What are your experiences?
NB, comparing pooled variances assumes no period effects. What about replicate designs? Work with the subjects’ geometric means per treatment? For the EMA’s imbalanced and incomplete data set I (rds01 of replicateBE):
Paired Pitman-Morgan test
data: logDATA.T.means and logDATA.R.means
t = -1.1499, df = 75, p-value = 0.8731
alternative hypothesis: true ratio of variances is greater than 1
95 percent confidence interval:
0.7506468 Inf
sample estimates:
variance of x variance of y
    0.7383189     0.8301172
 metric treatment    CVp     CVw
logDATA         T 1.0452 0.35157
logDATA         R 1.1374 0.46964
logDATA    pooled 1.0912 0.41362
As expected, pooled CVs > within-subject CVs.

A little bit of cheating because for subjects with only one observation I took it as it is. Not an issue in this example, but in the partial replicate it is possible to have one or two observations of R and none of T. Maybe this stuff helps.*
If you know a reference for calculating the power of the Pitman-Morgan test, please let me know. Chow/Liu propose simulations in their Chapter 7.5.
* Derrick B, Ruck A, Toher D, White P. Tests for Equality of Variances between Two Samples which Contain Both Paired Observations and Independent Observations. J App Quant Meth. 2018; 13(2): 36–47. Open access.
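In the meantime, power could be simulated brute force (a sketch; bivariate normal log-data assumed; the variance ratio F = var(T)/var(R) and the correlation rho are hypothetical inputs):

library(mvtnorm)
power.PM <- function(n, F, rho = 0.75, alpha = 0.05, nsims = 1e4) {
  # covariance matrix of (log T, log R): var(T) = F, var(R) = 1
  sigma <- matrix(c(F, rho * sqrt(F), rho * sqrt(F), 1), nrow = 2)
  crit  <- qt(1 - alpha, df = n - 2)                # one-sided
  t     <- replicate(nsims, {
    xy <- rmvnorm(n, sigma = sigma)
    r  <- cor(xy[, 1] + xy[, 2], xy[, 1] - xy[, 2]) # Pitman-Morgan
    r * sqrt(n - 2) / sqrt(1 - r^2)
  })
  mean(t >= crit)                                   # empirical power
}
set.seed(123456)
power.PM(n = 40, F = 1.5) # chance to detect a 50% higher variance of T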
################
# Sample sizes #
################
library(PowerTOST)
metric  <- c("AUC", "Cmax", "Ctau")
CVw     <- c(0.3, 0.5, 0.6)    # within-subject (AUC, Cmax, Ctau)
CVb     <- c(0.6, 1.0, 1.2)    # between-subject (AUC, Cmax, Ctau)
theta0e <- c(0.95, 0.90, 0.90) # assumed T/R-ratio for equivalence
theta0s <- 1/theta0e           # assumed T/R-ratio for non-superiority
margin  <- 1.25                # non-superiority margin
design  <- "2x2x4"             # TRTR|RTRT
res <- data.frame(design = design, metric = rep(metric, each = 2),
                  variability = rep(c("within", "between"), 3),
                  CV = c(CVw[1], CVb[1], CVw[2], CVb[2], CVw[3], CVb[3]),
                  theta0 = c(theta0e[1], theta0s[1],
                             theta0e[2], theta0s[2],
                             theta0e[3], theta0s[3]),
                  margin = rep(c(NA, margin), 3),
                  method = c("ABE", "Non-superiority",
                             rep(c("ABEL", "Non-superiority"), 2)), n = NA,
                  stringsAsFactors = FALSE)
for (j in 1:nrow(res)) {
  if (j %% 2 == 0) { # non-superiority
    res$n[j] <- sampleN.noninf(CV = res$CV[j], design = design,
                               theta0 = res$theta0[j],
                               margin = res$margin[j], details = FALSE,
                               print = FALSE)[["Sample size"]]
  } else {
    if (j == 1) {    # ABE
      res$n[j] <- sampleN.TOST(CV = res$CV[j], design = design,
                               theta0 = res$theta0[j], details = FALSE,
                               print = FALSE)[["Sample size"]]
    } else {         # ABEL
      res$n[j] <- sampleN.scABEL(CV = res$CV[j], design = design,
                                 theta0 = res$theta0[j], details = FALSE,
                                 print = FALSE)[["Sample size"]]
    }
  }
}
res$theta0 <- signif(res$theta0, 5)
print(res, row.names = FALSE)
##############################
# Hybrid application example #
##############################
library(PowerTOST)
library(PairedData)
# requires normally distributed data; hence, log-transform
df <- data.frame(subject = 1:40,
                 Cmax.T = log(c( 4.588, 4.056, 4.068, 5.222, 4.890, 8.051, 7.453,
                                 7.236, 6.057, 4.009, 5.658, 6.374, 6.062, 5.362,
                                11.468, 7.409, 6.548, 6.983, 7.708, 3.913, 6.971,
                                 4.867, 5.751, 3.766, 10.015, 4.134, 3.809, 4.956,
                                 3.380, 6.506, 7.700, 3.709, 4.148, 3.363, 6.491,
                                 4.869, 5.172, 4.532, 2.999, 3.494)),
                 Cmax.R = log(c( 5.787, 4.947, 4.113, 5.599, 5.857,    NA, 6.329,
                                 7.099, 4.114, 4.824, 4.330, 7.070, 5.950, 4.270,
                                13.264, 9.765, 6.709, 5.769, 6.277, 4.676, 6.662,
                                 5.295, 5.517, 4.425, 8.692, 3.794, 4.226, 5.009,
                                 2.816, 7.168, 4.386, 3.612, 5.539, 4.407, 4.615,
                                 8.683, 4.612, 3.537, 3.413, 3.457)),
                 Ctau.T = log(c(0.39, 0.13, 0.15, 0.56, 0.60, 0.24, 0.21, 0.14,
                                0.17, 0.31, 0.51, 0.19, 0.46, 0.37, 0.78, 0.37,
                                0.12, 0.21, 0.31, 0.51, 0.24, 0.22, 0.54, 0.34,
                                1.36, 0.47, 0.49, 0.26, 0.19, 0.13, 0.20, 0.14,
                                0.22, 0.42, 0.37, 0.11, 0.45, 0.21, 0.19, 0.12)),
                 Ctau.R = log(c(0.33, 0.16, 0.19, 0.12, 0.46,   NA, 0.19, 0.17,
                                0.19, 0.30, 0.60, 0.17, 0.45, 0.29, 0.81, 0.53,
                                0.15, 0.26, 0.37, 0.46, 0.20, 0.17, 0.59, 0.43,
                                0.93, 0.29, 0.37, 0.28, 0.19, 0.15, 0.26, 0.16,
                                0.33, 0.25, 0.22, 0.46, 0.38, 0.22, 0.14, 0.10)))
# create objects of class paired
paired.Cmax <- with(df, paired(Cmax.T, Cmax.R))
paired.Ctau <- with(df, paired(Ctau.T, Ctau.R))
# Pitman-Morgan tests
# only complete pairs are used, i.e., subject 6 (missing R) is dropped
PM.Cmax <- Var.test(paired.Cmax, alternative = "greater")
PM.Ctau <- Var.test(paired.Ctau, alternative = "greater")
# pooled variances and CVs
Cmax <- data.frame(metric = "Cmax", treatment = c("T", "R"),
                   variance = c(PM.Cmax$estimate[["variance of x"]],
                                PM.Cmax$estimate[["variance of y"]]),
                   CVp = c(mse2CV(PM.Cmax$estimate[["variance of x"]]),
                           mse2CV(PM.Cmax$estimate[["variance of y"]])))
Ctau <- data.frame(metric = "Ctau", treatment = c("T", "R"),
                   variance = c(PM.Ctau$estimate[["variance of x"]],
                                PM.Ctau$estimate[["variance of y"]]),
                   CVp = c(mse2CV(PM.Ctau$estimate[["variance of x"]]),
                           mse2CV(PM.Ctau$estimate[["variance of y"]])))
PM.Cmax; print(Cmax, row.names = FALSE); PM.Ctau; print(Ctau, row.names = FALSE)
##################
# EMA Data set I #
##################
library(replicateBE)
library(PowerTOST)
library(PairedData)
var.pool <- function(var, n) {
  # pool the two treatments' variances, weighted by degrees of freedom
  if (length(var) != length(n)) stop("var and n must have equal lengths")
  return(sum(var * (n - 1)) / (sum(n) - 2))
}
EMA1 <- rds01[, c(1, 4, 6)] # subject, treatment, log-response
names(EMA1)[3] <- "logDATA"
# subjects' means of the log-data (i.e., geometric means) per treatment
T.means <- aggregate(. ~ subject,
                     data = EMA1[EMA1$treatment == "T", ], mean)[, c(1, 3)]
R.means <- aggregate(. ~ subject,
                     data = EMA1[EMA1$treatment == "R", ], mean)[, c(1, 3)]
n <- c(nrow(T.means), nrow(R.means))
names(T.means)[2] <- "logDATA.T.means"
names(R.means)[2] <- "logDATA.R.means"
EMA1.means <- merge(T.means, R.means, by = "subject")
paired.logDATA <- with(EMA1.means,
                       paired(logDATA.T.means, logDATA.R.means))
PM.logDATA <- Var.test(paired.logDATA, alternative = "greater")
x <- as.numeric(method.A(data = rds01, print = FALSE,
                         details = TRUE)[c(11:12)]) # CVwT (%), CVwR (%)
options(digits = 7)
var.p <- c(PM.logDATA$estimate[["variance of x"]],
           PM.logDATA$estimate[["variance of y"]]) # pooled (total) T and R
var.w <- c(CV2mse(x[1]/100), CV2mse(x[2]/100))     # within T and R
res <- data.frame(metric = "logDATA", treatment = c("T", "R", "pooled"),
                  CVp = c(mse2CV(var.p), mse2CV(var.pool(var.p, n))),
                  CVw = c(mse2CV(var.w), mse2CV(var.pool(var.w, n))))
res[, 3:4] <- signif(res[, 3:4], 5)
PM.logDATA; print(res, row.names = FALSE)