LBAs: Anchor points and model fitting [Regulatives / Guidelines]
Adding to Ohlbe’s post: anchor points may – or may not! – stabilize the nonlinear model fit in LBAs. It is up to you to decide – based on the accuracy and precision of back-calculated concentrations (of calibrators and QCs) – which model performs better and should be used in method validation and in the analysis of study samples.
An example of a 4-parameter logistic model: $$y=D+\frac{A-D}{1+\left(\frac{x}{C}\right)^B}$$
Initial guesses, parameter bounds (constraints):
type A B C D
start 15 0.5 50 90
lower 0 0.1 5 50
upper 50 1.0 80 100
Model with anchor points: AIC = 84.48761
Formula: obs ~ logistic(A, B, C, D, x)
Parameters:
Estimate Std. Error t value Pr(>|t|)
A 0.03825 0.95628 0.040 0.968493
B 0.50353 0.03443 14.623 3.84e-12 ***
C 48.93165 10.79076 4.535 0.000202 ***
D 99.98011 5.34034 18.722 3.80e-14 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.251 on 20 degrees of freedom
(In)accuracy (%) = A
x xhat A
0.1000 0.09949 -0.5076
0.1000 0.10650 6.4670
0.1000 0.09218 -7.8210
0.3981 0.38360 -3.6350
0.3981 0.36040 -9.4730
0.3981 0.39200 -1.5320
1.5850 1.80900 14.1400
1.5850 1.82400 15.0800
1.5850 1.60100 1.0010
6.3100 6.19500 -1.8170
6.3100 5.45600 -13.5300
6.3100 6.25800 -0.8301
25.1200 23.44000 -6.7040
25.1200 23.15000 -7.8620
25.1200 29.92000 19.1200
100.0000 123.10000 23.0600
100.0000 94.70000 -5.3050
100.0000 88.64000 -11.3600
(Im)precision (%) = P
x mean (xhat) P
0.1000 0.09938 -0.6205
0.3981 0.37870 -4.8800
1.5850 1.74500 10.0700
6.3100 5.97000 -5.3920
25.1200 25.50000 1.5190
100.0000 102.10000 2.1350
Model without anchor points: AIC = 62.06841
Formula: obs0 ~ logistic(A, B, C, D, x0)
Parameters:
Estimate Std. Error t value Pr(>|t|)
A 0.16548 1.82282 0.091 0.929
B 0.50623 0.07225 7.006 6.19e-06 ***
C 49.06767 29.33499 1.673 0.117
D 100.00000 15.92206 6.281 2.02e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.165 on 14 degrees of freedom
(In)accuracy (%) = A
x xhat A
0.1000 0.09704 -2.9620
0.1000 0.10400 4.0080
0.1000 0.08974 -10.2600
0.3981 0.38230 -3.9640
0.3981 0.35890 -9.8370
0.3981 0.39070 -1.8490
1.5850 1.81600 14.6000
1.5850 1.83200 15.5500
1.5850 1.60700 1.3970
6.3100 6.21800 -1.4610
6.3100 5.47700 -13.2000
6.3100 6.28000 -0.4722
25.1200 23.43000 -6.7150
25.1200 23.14000 -7.8690
25.1200 29.89000 19.0000
100.0000 122.20000 22.1700
100.0000 94.13000 -5.8720
100.0000 88.14000 -11.8600
(Im)precision (%) = P
x mean (xhat) P
0.1000 0.09693 -3.0730
0.3981 0.37730 -5.2160
1.5850 1.75200 10.5200
6.3100 5.99200 -5.0440
25.1200 25.49000 1.4710
100.0000 101.50000 1.4800
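As a compact numeric summary of this comparison one could look at AIC together with the mean absolute (in)accuracy – a minimal sketch, assuming the objects m1, m0, res1, and res0 created by the simulation script below (the side-by-side data frame is mine, not part of that script):
# Hypothetical summary: AIC and mean absolute (in)accuracy of back-calculated concentrations
data.frame(model = c("with anchors", "without anchors"),
           AIC = c(AIC(m1), AIC(m0)),
           mean.abs.A = c(mean(abs(res1$A)), mean(abs(res0$A))))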
Homework: Given the small estimate of the parameter A and its large p-value in both models, one could remove it and try the simplified three-parameter model: $$y=D-\frac{D}{1+\left(\frac{x}{C}\right)^B}$$
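A minimal sketch of that homework, assuming the x and obs vectors generated by the simulation script below (the three-parameter fit m2 and its starting values / bounds are mine, not part of the original script):
# Hypothetical three-parameter fit (A fixed at zero); reuses x and obs from the script below
m2 <- nls(obs ~ D - D / (1 + (x / C)^B), algorithm = "port",
          start = list(B = 0.5, C = 50, D = 90),
          lower = c(B = 0.1, C = 5, D = 50),
          upper = c(B = 1.0, C = 80, D = 100),
          control = list(tol = 1e-08))
AIC(m2) # Compare with AIC(m1) of the 4-parameter model
summary(m2)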
simulation script
LLOQ <- 0.1
ULOQ <- 100
CV <- 0.05 # 5% multiplicative error
reps <- 3 # Replicates at each level
calibs <- 6 # Levels of calibrators
# Parameters of the 4-parameter logistic function for simulations
A <- 0 # Zero concentration response
B <- 0.5 # Slope
C <- 50 # Inflection point
D <- 100 # Infinite concentration response
# Calibrators equally spaced in log-scale
calib <- sort(rep(signif(exp(seq(log(LLOQ), log(ULOQ), length.out = calibs)), 4), reps))
first <- unique(head(calib, 2 * reps))
last <- unique(tail(calib, 2 * reps))
anchor <- signif(c(LLOQ / (first[2] / first[1]), ULOQ * (last[2] / last[1])), 4)
x <- sort(c(rep(anchor, reps), calib)) # Add anchor points
anchs <- which(x %in% anchor)
# Mandatory parameter guesses (start), lower and upper bounds
guess <- data.frame(type = c("start", "lower", "upper"),
                    A = c(15, 0, 50), B = c(0.5, 0.1, 1),
                    C = c(50, 5, 80), D = c(90, 50, 100))
logistic <- function(A, B, C, D, x) { # Response
  D + (A - D) / (1 + (x / C)^B)
}
back <- function(x, A, B, C, D, obs) { # For uniroot()
  logistic(A, B, C, D, x) - obs
}
y <- logistic(A, B, C, D, x) # Theoretical responses
# Add error
obs <- rlnorm(n = length(y), meanlog = log(y) - 0.5 * log(CV^2 + 1),
              sdlog = sqrt(log(CV^2 + 1)))
# Nonlinear least-squares model with anchor points
# Note: Bounds (parameter constraints) can only be specified with algorithm = "port"
m1 <- nls(obs ~ logistic(A, B, C, D, x), algorithm = "port",
          start = as.list(guess[guess$type == "start", 2:5]),
          lower = guess[guess$type == "lower", 2:5],
          upper = guess[guess$type == "upper", 2:5],
          control = list(tol = 1e-08))
x0 <- x[-anchs] # Remove anchor points
obs0 <- obs[-anchs] # and their observations
# Nonlinear least-squares model without anchor points
m0 <- nls(obs0 ~ logistic(A, B, C, D, x0), algorithm = "port",
          start = as.list(guess[guess$type == "start", 2:5]),
          lower = guess[guess$type == "lower", 2:5],
          upper = guess[guess$type == "upper", 2:5],
          control = list(tol = 1e-08))
res1 <- res0 <- data.frame(x = x0, y = obs0, xhat = NA_real_, A = NA_real_)
for (j in seq_along(x0)) { # Back-calculate x and (in)accuracy
  # Invert the fitted curve numerically with uniroot()
  res1$xhat[j] <- uniroot(back, interval = range(x), extendInt = "yes",
                          tol = 1e-8, A = coef(m1)[["A"]], B = coef(m1)[["B"]],
                          C = coef(m1)[["C"]], D = coef(m1)[["D"]],
                          obs = res1$y[j])$root
  res1$A[j] <- 100 * (res1$xhat[j] - res1$x[j]) / res1$x[j]
  res0$xhat[j] <- uniroot(back, interval = range(x), extendInt = "yes",
                          tol = 1e-8, A = coef(m0)[["A"]], B = coef(m0)[["B"]],
                          C = coef(m0)[["C"]], D = coef(m0)[["D"]],
                          obs = res0$y[j])$root
  res0$A[j] <- 100 * (res0$xhat[j] - res0$x[j]) / res0$x[j]
}
P1 <- P0 <- data.frame(x = unique(x0), xhat = NA_real_, P = NA_real_)
for (j in 1:nrow(P1)) { # (Im)precision
  P1$xhat[j] <- mean(res1$xhat[res1$x == P1$x[j]])
  P1$P[j] <- mean(res1$A[res1$x == P1$x[j]])
  P0$xhat[j] <- mean(res0$xhat[res0$x == P0$x[j]])
  P0$P[j] <- mean(res0$A[res0$x == P0$x[j]])
}
res1$y <- res0$y <- NULL # No longer needed
names(P1)[2] <- names(P0)[2] <- "mean (xhat)"
plot(x, obs, ylim = c(0, max(obs)), type = "n", log = "x", axes = FALSE,
     xlab = expression(bolditalic(x)), ylab = expression(bolditalic(y)))
x.ax <- axTicks(1, log = TRUE)
grid(nx = NA, ny = NULL)
abline(v = x.ax, lty = 3, col = "lightgrey")
abline(v = c(LLOQ, ULOQ), lty = 2)
axis(1, at = x.ax)
axis(2, las = 1)
axis(3, at = c(LLOQ, ULOQ), labels = c("LLOQ", "ULOQ"))
box(lwd = 2)
points(x, obs, pch = 21, cex = 1.25, col = "blue", bg = "#0000FF80")
points(x0, obs0, pch = 21, cex = 1.25, col = "red", bg = "#FF000080")
new.x <- exp(seq(log(min(x)), log(max(x)), length.out = 501))
new.x0 <- exp(seq(log(min(x0)), log(max(x0)), length.out = 501))
lines(new.x, predict(m1, list(x = new.x)), lwd = 3, col = "blue")
lines(new.x0, predict(m0, list(x0 = new.x0)), lwd = 3, col = "red")
cat("Initial guesses, parameter bounds (constraints):\n"); print(guess, row.names = FALSE)
cat("Model with anchor points: AIC =", AIC(m1)); summary(m1)
cat("(In)accuracy (%) = A\n"); print(signif(res1, 4), row.names = FALSE)
cat("(Im)precision (%) = P\n"); print(signif(P1, 4), row.names = FALSE)
cat("Model without anchor points: AIC =", AIC(m0)); summary(m0)
cat("(In)accuracy (%) = A\n"); print(signif(res0, 4), row.names = FALSE)
cat("(Im)precision (%) = P\n"); print(signif(P0, 4), row.names = FALSE)