moblak
☆

2010-11-17 12:46

Posting: # 6154

 Integration - smoothing [Bioanalytics]

Hi everyone,

our general approach for integration of peaks in chromatograms is one generic method per batch, no smoothing and "manual" integrations.

However (as every analyst except regulators knows :angry:), from time to time it is not so easy to integrate all chromatograms within the batch consistently with one generic method.

This time I will not focus on "manually" (though scientifically sound) integrated chromatograms, which the regulatory agencies are so afraid of :confused: and which were already thoroughly discussed elsewhere, but rather on the "type" of integration method: "raw" data and/or smoothing.

Despite the fact that some agencies prefer the same integration method to be used for validation and study :ponder:, our approach remains one method per batch.
If we are lucky, the method is indeed the same for validation and study, but in reality we have to adjust certain parameters such as noise level, RT, peak width etc., though we never apply smoothing (again, a historic regulatory taboo).

If we took a rather conservative/regulated approach, which tests should/could be performed to justify that smoothing (and how much?) may be applied?

I was thinking of reintegrating 3×A/P (the data used for between-run A/P and the establishment of the regression model) and then comparing the results (raw vs. smoothed data). Since I am far from being a statistics expert: how should the comparison be evaluated (t-test, …)? IMHO, the "classical bioanalytical" %nominal and %CV alone wouldn't be enough.

Anyway, would such a procedure be sufficient for regulatory agencies to allow smoothing or no-smoothing within the same validation or study?
Does anyone have any real experience on this topic?

Regards
Marko
Helmut
★★★
Vienna, Austria
2010-11-17 15:32

@ moblak
Posting: # 6155

 Integration - smoothing

Dear Marko!

❝ our general approach for integration of peaks in chromatograms is one generic method per batch, no smoothing and "manual" integrations.

❝ However (as every analyst except regulators knows :angry:), from time to time it is not so easy to integrate all chromatograms within the batch consistently with one generic method.


Well, it’s a cumbersome task to find integration parameters suitable for an entire batch when you have problems in the lower range. I don’t see why such a procedure (trial & error!) should be “better” than manual reintegration.

❝ This time I will […] focus on the "type" of integration method: "raw" data and/or smoothing.

❝ Despite the fact that some agencies prefer the same integration method to be used for validation and study, our approach remains one method per batch.

❝ If we are lucky, the method is indeed the same for validation and study, but in reality we have to adjust certain parameters such as noise level, RT, peak width etc., though we never apply smoothing (again, a historic regulatory taboo).


You are aware that you never use the raw signal of the detector? Without bunching, the chromatogram (especially in LC-MS/MS) would not only look awful, it would be impossible to integrate at all. Either there is an ion in the detector or not… See also this spring’s discussion at David’s PK-List.

❝ If we took a rather conservative/regulated approach, which tests should/could be performed to justify that smoothing (and how much?) may be applied?

❝ I was thinking of reintegrating 3×A/P (the data used for between-run A/P and the establishment of the regression model) and then comparing the results (raw vs. smoothed data). Since I am far from being a statistics expert: how should the comparison be evaluated (t-test, …)? IMHO, the "classical bioanalytical" %nominal and %CV alone wouldn't be enough.


Stupid question: A/P? One possibility would be an approach similar to the one used in cross-validation, namely orthogonal regression. You can test the slope against 1 and the intercept against 0. It’s important not to use simple linear regression, because one of its main assumptions is error-free regressors (x-values).

❝ Anyway, would such a procedure be sufficient for regulatory agencies to allow smoothing or no-smoothing within the same validation or study?


No idea.

❝ Does anyone have any real experience on this topic?


Not me. :cool:

moblak
☆

2010-11-17 16:26

@ Helmut
Posting: # 6156

 Integration - smoothing

Dear Helmut,

❝ You are aware that you never use the raw signal of the detector?


That's why I wrote "raw" data.

❝ See also this spring’s discussion at David’s PK-List.


A very nice discussion (thanks), but no closure for me :crying:

❝ Stupid question: A/P?


Accuracy/Precision

❝ orthogonal regression.


By clicking on the link I just got formulaphobia :-D

❝ You can test the slope against 1 and the intercept against 0.


I am not sure what you mean. Could you please explain in more detail?

Btw., do you use smoothing when processing your LC-MS/MS chromatograms?

Regards
Marko
Helmut
★★★
Vienna, Austria
2010-11-17 17:31

@ moblak
Posting: # 6158

 Integration - smoothing

Dear Marko!

❝ ❝ You are aware that you never use the raw signal of the detector?

❝ That's why I wrote "raw" data.


Again – what do you mean by “raw data”?

❝ ❝ Stupid question: A/P?

❝ Accuracy/Precision


I see. :cool:

❝ ❝ orthogonal regression.

❝ By clicking on the link I just got formulaphobia :-D


Nice term, but not sooo tough.*
If you have R, it boils down to something like this (on back-calculated concentrations, not A/P):
x      <- c(1.00,1.10,0.85, 13.0,14.2,15.9, 210,215,190)
y      <- c(0.81,0.95,1.15, 15.0,16.0,12.8, 200,205,180)
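# Q.x, Q.y: sums of squares about the means; Q.xy: cross-products.
# b is the slope of the orthogonal (Deming, error-variance ratio 1) fit.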
Q.x    <- sum((x-mean(x))^2)
Q.y    <- sum((y-mean(y))^2)
Q.xy   <- sum((x-mean(x))*(y-mean(y)))
b      <- (-(Q.x-Q.y)+sqrt((Q.x-Q.y)^2+4*Q.xy^2))/(2*Q.xy)
a      <- mean(y)-b*mean(x)
a;b

gives
[1] 0.4675463
[1] 0.9492506

compared to linear regression
linear <- lm(y~x)
summary(linear)

gives
Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-2.7673 -0.6152 -0.1328  0.4600  2.1852

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.476075   0.684809   0.695    0.509
x           0.949134   0.005764 164.675 8.04e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.615 on 7 degrees of freedom
Multiple R-squared: 0.9997,     Adjusted R-squared: 0.9997
F-statistic: 2.712e+04 on 1 and 7 DF,  p-value: 8.036e-14


❝ ❝ You can test the slope against 1 and the intercept against 0.

❝ I am not sure what do you mean. Could you please explain in more detail?


In the linear model above you get not only the estimated slope and intercept but also their standard errors. You can test the estimates against 0 and 1 by means of these SEs. The (1−α) confidence interval is given by {a, b} ∓ t(n−2; 1−α/2) × SE{a, b}. Now look whether the CI of a includes 0 and the CI of b includes 1.
If you find a significant intercept (CI of a excludes 0): constant, i.e., additive bias (independent of concentration).
A significant slope (CI of b excludes 1): proportional bias (dependent on concentration).
If neither is significant, the methods perform equally well.
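
In R this boils down to a few lines on the linear fit from above (my sketch; confint() computes the t-based intervals for you):

# 95% CIs from the linear fit above; check whether the CI of the
# intercept covers 0 and the CI of the slope covers 1
confint(linear, level=0.95)
# the same by hand: estimate -/+ t(n-2, 0.975) * SE
est   <- coef(summary(linear))
tcrit <- qt(1-0.05/2, df=df.residual(linear))
cbind(lower=est[, "Estimate"] - tcrit*est[, "Std. Error"],
      upper=est[, "Estimate"] + tcrit*est[, "Std. Error"])

With the toy data above this gives roughly (−1.143, 2.095) for the intercept and (0.9355, 0.9628) for the slope – the latter would even flag a proportional bias.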

I’m a little bit short of time to come up with code for SEs in orthogonal regression. There’s an R package ‘MethComp’ (download), which is not available from CRAN right now. It contains the method ‘Deming’ – it should be possible to extract the standard errors or do some jackknifing. The simple call gives the same results as my code above:
Deming(x,y)
Intercept     Slope   sigma.x   sigma.y
0.4675463 0.9492506 1.1712265 1.1712265


❝ Btw. Do you use smoothing for data processing of your LC-MS/MS chromatograms.


Well, I left the lab some good years ago… But many CROs I know do so – as long as the resolution of peaks is not negatively affected.


  • Linnet K. Evaluation of regression procedures for method comparison studies. Clin Chem. 1993;39(3):424–32. Free resource.
If you have access to SAS, look here.


Edit: After reading some stuff, one thing is clear: never use a t-test in method comparisons! You will only detect a constant bias, but not a proportional one.
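
A quick toy illustration (my own numbers):

# purely proportional bias (y = 1.05 x, no constant shift)
x <- c(2, 6, 30, 80)
y <- 1.05*x
t.test(x, y, paired=TRUE)  # p ~0.2: the 5% bias goes undetected
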
Package ‘MethComp’ is really nice. It even contains Bland-Altman plots. The SEs of the Deming regression are estimated by bootstrapping. Example:
require(MethComp)
x          <- c(0.88,1.19,0.85, 13.0,13.5,16.2, 212,225,190)
y          <- c(0.81,0.96,1.20, 15.0,16.2,12.8, 180,220,190)
orthogonal <- Deming(x,y,
                     boot=5000, keep.boot=TRUE, alpha=0.05)

will give
           Estimate S.e.(boot)       50%       2.5%    97.5%
Intercept 0.5395809 1.03057354 0.5294279 -1.2314002 2.476764
Slope     0.9397784 0.05580518 0.9424645  0.8422070 1.002284

0 is well within the 95% confidence interval = no constant bias.
1 is within the CI, but only borderline (you may repeat the bootstrap or request a higher number of samples; if you set boot=TRUE, the default of 1000 samples is used). Now let’s get at a plot:
plot(x,y, xlim=c(0,max(x,y)), ylim=c(0,max(x,y)),
     xlab="method 1", ylab="method 2", col="red", cex=2, cex.lab=1.25)
abline(0,1, col="black", lwd=1)
bothlines(x,y, Dem=TRUE, sdr=1,
          col=c("red","transparent","blue"), lwd=2)


[image]

The black line is the identity (y=x), the red line ordinary (linear) regression, and the blue one Deming (orthogonal) regression. There seems to be no big difference. Now for the lower range:
plot(x,y, xlim=c(0,16.2), ylim=c(0,16.2),
     xlab="method 1", ylab="method 2", col="red", cex=2, cex.lab=1.25)
abline(0,1, col="black", lwd=1)
bothlines(x,y, Dem=TRUE, sdr=1,
          col=c("red","transparent","blue"), lwd=2)


[image]

Now it’s clearer. Hope that helps.

If you don’t have R installed – or it will take ages until your IT department does it for you – you can post a dataset. I would suggest including back-calculated calibrators and QCs from your ‘raw’ integration and the same dataset ‘smoothed’…

moblak
☆

2010-11-18 15:39

@ Helmut
Posting: # 6167

 Integration - smoothing

Hi Helmut,

thanks for the explanation… things are getting clearer (I must admit it’s not 100% clear yet… I’ll get there in time… eventually).

❝ If you don’t have R installed – or it will take ages until your IT department does it for you


I’d rather not comment on this topic :vomit:

❝ you can post a dataset.


I really appreciate your help :love: :clap:

Below please find the dataset:
Nominal   "raw"  "smooth"
CSs
100   104.14   102.97
90     90.03    88.58
75     73.98    74.37
50     49.91    49.79
20     20.47    20.13
10      9.39     9.58
4       4.02     4.21
2       2.02     1.96
      
QCs   
2       2.19     2.37
2       2.12     2.34
2       2.40     2.19
2       2.39     2.22
2       2.15     2.29
2       2.39     2.26
6       5.83     5.78
6       5.93     5.80
6       6.00     6.01
6       6.00     6.07
6       5.94     5.89
6       6.17     6.07
30     29.61    29.75
30     31.49    31.37
30     30.07    30.24
30     29.60    29.68
30     29.31    29.44
30     29.07    28.84
80     82.38    82.34
80     82.33    82.10
80     81.01    81.43
80     83.16    83.77
80     82.61    82.81
80     85.04    84.95


Thanks again for your help

Regards
Marko
Helmut
★★★
Vienna, Austria
2010-11-18 16:33

@ moblak
Posting: # 6168

 Deming regression: example

Dear Marko!

❝ ❝ If […] it will take ages until your IT department does it for you

❝ I’d rather not comment on this topic :vomit:


Well, as the forum’s admin I know your e-mail address, so I can make an educated guess about your company’s IT department. Accept my deep sympathy. :-D

From a regulatory perspective I would assume that the decision on the method should be based on the calibrators only. I asked for the entire dataset because I’m insatiable… I increased the number of bootstrap samples to 50,000.
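
For the record, this is the call behind the numbers below (calibrators only; the vector names are mine):

require(MethComp)
raw    <- c(104.14, 90.03, 73.98, 49.91, 20.47, 9.39, 4.02, 2.02)
smooth <- c(102.97, 88.58, 74.37, 49.79, 20.13, 9.58, 4.21, 1.96)
Deming(raw, smooth, boot=50000, keep.boot=TRUE, alpha=0.05)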

Calibrators:
           Estimate  S.e.(boot)       50%       2.5%     97.5%
Intercept 0.1857501 0.183498823 0.1754573 -0.1034715 0.5112881
Slope     0.9891061 0.005718894 0.9886411  0.9818633 1.0044841

No constant (0 within 95% CI) and no proportional (1 within CI) bias; methods perform equally well.

[image]

QCs:
             Estimate  S.e.(boot)         50%        2.5%      97.5%
Intercept -0.02736395 0.043195345 -0.02774578 -0.11252838 0.05762585
Slope      1.00203634 0.001710295  1.00200216  0.99879615 1.00551642

No constant (0 within CI) and no proportional (1 within CI) bias; methods perform equally well.

[image]

Calibrators + QCs:
            Estimate  S.e.(boot)        50%        2.5%     97.5%
Intercept 0.04514501 0.056534217 0.04177928 -0.06922545 0.1520692
Slope     0.99722500 0.003040459 0.99739786  0.99157351 1.0034422

No constant (0 within CI) and no proportional (1 within CI) bias; methods perform equally well.

[image]

Helmut
★★★
Vienna, Austria
2010-11-18 19:35

@ moblak
Posting: # 6170

 Jackknife

Hi Marko!

Another reply; this will be lengthy…

Let’s dive in. Bootstrapping is nice, but as a random resampling procedure it will not give you the same results when repeated (I would guess regulators would hate that). Below are five runs (calibrators) of 50,000 samples each:

           Estimate S.e.(boot)       50%       2.5%    97.5%
Intercept 0.1857501 0.183498823 0.1754573 -0.1034715  0.5112881
Intercept 0.1857501 0.177899951 0.1757267 -0.1050423  0.5080163
Intercept 0.1857501 0.181154692 0.1758359 -0.09813111 0.5226411
Intercept 0.1857501 0.174302130 0.1757267 -0.1005911  0.5080163
Intercept 0.1857501 0.171805180 0.1757267 -0.1050744  0.5083477

Slope     0.9891061 0.005718894 0.9886411  0.9818633  1.0044841
Slope     0.9891061 0.005606322 0.9886513  0.9819443  1.0044762
Slope     0.9891061 0.005654949 0.9885763  0.98194865 1.0043886
Slope     0.9891061 0.005613795 0.9886046  0.9819592  1.0045131
Slope     0.9891061 0.005652954 0.9886552  0.9819592  1.0044587


You see that the standard errors – and thus the CIs – are similar, but not identical (although the decision would always be the same).
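
If you need reproducible numbers, one workaround (not used above) is to fix R’s random number generator before the call:

set.seed(42)  # any fixed seed; repeated runs now give identical CIs
Deming(x, y, boot=50000, keep.boot=TRUE, alpha=0.05)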

An alternative would be the jackknife, because you will always get the same results. Unfortunately there is no package readily available, but I found some code by Terry Therneau in the R-help archives.

# Generalized Deming regression, based on Ripley, Analyst, 1987:377-383.
#
deming <- function(x, y, xstd, ystd, jackknife=TRUE, dfbeta=FALSE,
                   scale=TRUE) {
    Call <- match.call()
    n <- length(x)
    if (length(y) !=n) stop("x and y must be the same length")
    if (length(xstd) != length(ystd))
        stop("xstd and ystd must be the same length")

    # Do missing value processing
    nafun <- get(options()$na.action)
    if (length(xstd)==n) {
        tdata <- nafun(data.frame(x=x, y=y, xstd=xstd, ystd=ystd))
        x <- tdata$x
        y <- tdata$y
        xstd <- tdata$xstd
        ystd <- tdata$ystd
        }
    else {
        tdata <- nafun(data.frame(x=x, y=y))
        x <- tdata$x
        y <- tdata$y
        if (length(xstd) !=2) stop("Wrong length for std specification")
        xstd <- xstd[1] + xstd[2]*x
        ystd <- ystd[1] + ystd[2]*y
        }

    if (any(xstd <=0) || any(ystd <=0)) stop("Std must be positive")

    minfun <- function(beta, x, y, xv, yv) {
        w <- 1/(yv + beta^2*xv)
        alphahat <- sum(w * (y - beta*x))/ sum(w)
        sum(w*(y-(alphahat + beta*x))^2)
        }

    minfun0 <- function(beta, x, y, xv, yv) {
        w <- 1/(yv + beta^2*xv)
        alphahat <- 0  #constrain to zero
        sum(w*(y-(alphahat + beta*x))^2)
        }

    afun <-function(beta, x, y, xv, yv) {
        w <- 1/(yv + beta^2*xv)
        sum(w * (y - beta*x))/ sum(w)
        }

    fit <- optimize(minfun, c(.1, 10), x=x, y=y, xv=xstd^2, yv=ystd^2)
    coef <- c(intercept=afun(fit$minimum, x, y, xstd^2, ystd^2),
               slope=fit$minimum)
    fit0 <- optimize(minfun0, coef[2]*c(.5, 1.5), x=x, y=y,
                     xv=xstd^2, yv=ystd^2)

    w <- 1/(ystd^2 + (coef[2]*xstd)^2) #weights
    u <- w*(ystd^2*x + xstd^2*coef[2]*(y-coef[1])) #imputed "true" value
    if (is.logical(scale) && scale) {
        err1 <- (x-u)/ xstd
        err2 <- (y - (coef[1] + coef[2]*u))/ystd
        sigma <- sum(err1^2 + err2^2)/(n-2)
        # Ripley's paper has err = [y - (a + b*x)] * sqrt(w); gives the same SS
        }
    else {
        sigma <- scale^2
        err1 <- err2 <- NULL  # not computed when a fixed scale is given
        }
   
    test1 <- (coef[2] -1)*sqrt(sum(w *(x-u)^2)/sigma) #test for beta=1
    test2 <- coef[1]*sqrt(sum(w*x^2)/sum(w*(x-u)^2) /sigma) #test for a=0
                     
    rlist <- list(coefficient=coef, test1=test1, test0=test2, scale=sigma,
                  err1=err1, err2=err2, u=u)

    if (jackknife) {
        delta <- matrix(0., nrow=n, ncol=2)
        for (i in 1:n) {
            fit <- optimize(minfun, c(.5, 1.5)*coef[2],
                            x=x[-i], y=y[-i], xv=xstd[-i]^2, yv=ystd[-i]^2)
            ahat <- afun(fit$minimum, x[-i], y[-i], xstd[-i]^2, ystd[-i]^2)
            delta[i,] <- coef - c(ahat, fit$minimum)
            }
        rlist$variance <- t(delta) %*% delta
        if (dfbeta) rlist$dfbeta <- delta
        }

    rlist$call <- Call
    class(rlist) <- 'deming'
    rlist
    }

print.deming <- function(x, ...) {
    cat("\nCall:\n", deparse(x$call), "\n\n", sep = "")
    if (is.null(x$variance)) {
        table <- matrix(0., nrow=2, ncol=3)
        table[,1] <- x$coefficient
        table[,2] <- c(x$test0, x$test1)
        table[,3] <- pnorm(-2*abs(table[,2]))
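        # NB: the usual two-sided p for a z-statistic is 2*pnorm(-abs(z));
        #     pnorm(-2*abs(z)) is what this code (as posted) computes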
        dimnames(table) <- list(c("Intercept", "Slope"),
                                c("Coef", "z", "p"))
        }
    else {
        table <- matrix(0., nrow=2, ncol=4)
        table[,1] <- x$coefficient
        table[,2] <- sqrt(diag(x$variance))
        table[,3] <- c(x$test0, x$test1)
        table[,4] <- pnorm(-2*abs(table[,3]))
        dimnames(table) <- list(c("Intercept", "Slope"),
                                c("Coef", "se(coef)", "z", "p"))
        }
    print(table, ...)
    cat("\n   Scale=", format(x$scale, ...), "\n")
    invisible(x)
    }


For your calibration data:

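# x = back-calculated calibrators from the "raw" integration,
# y = the same samples "smoothed" (Marko's dataset above)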
fit <- deming(x,y, xstd=c(1,0), ystd=c(1,0), jackknife=TRUE)
print(fit)

I got:
Call:
deming(x = x, y = y, xstd = c(1, 0), ystd = c(1, 0), jackknife = TRUE)

               Coef   se(coef)            z         p
Intercept 0.1865192 0.14052612 124.48408471 0.0000000
Slope     0.9890887 0.00546163  -0.01336272 0.4893394


Hmm, strange. The intercept and slope are not the same as with the other procedures?! Ah, some weighting is used…

Let’s calculate the 95% confidence interval based on the t-distribution anyhow:
df    <- length(x)-2
t     <- qt(1-0.05/2, df=df)
CL1.a <- fit$coefficient[[1]] - t*sqrt(fit$variance[1, 1])
CL2.a <- fit$coefficient[[1]] + t*sqrt(fit$variance[1, 1])
CL1.b <- fit$coefficient[[2]] - t*sqrt(fit$variance[2, 2])
CL2.b <- fit$coefficient[[2]] + t*sqrt(fit$variance[2, 2])
cat(paste("Intercept (95% CI):",
  signif(min(CL1.a, CL2.a), 7), signif(max(CL1.a, CL2.a), 7),
  "\nSlope     (95% CI):",
  signif(min(CL1.b, CL2.b), 7), signif(max(CL1.b, CL2.b), 7), "\n"))


I got:
Intercept (95% CI): -0.1573358 0.5303742
Slope     (95% CI):  0.9757246 1.002453

Again no constant and/or proportional bias.

If I have time (haha), I will ask at R-help about the weighting. Even if I set all weights to 1, I still get different results…

moblak
☆

2010-11-19 10:25

@ Helmut
Posting: # 6173

 Torture

Hi Helmut,

Now I see what a scientific torturer might look like.

But seriously, thanks for taking the time to help me with the statistical calculations.
Though I am not sure I will be able to apply this routinely by myself… I might just either smooth or not smooth :-D … I have to reconsider it.

Regards
Marko
Helmut
★★★
Vienna, Austria
2010-11-19 14:47

@ moblak
Posting: # 6175

 Fun!

Dear Marko!

❝ Now I see what a scientific torturer might look like.


My dear, science is a cruel mistress.  Paul Shenar (as Dr. Laurence)


❝ But seriously, thanks for taking the time to help me with the statistical calculations.


Welcome!

❝ Though I am not sure I will be able to apply this routinely by myself…


Maybe your IT gurus would be happier with a commercial solution. Two products offer 30-day trial downloads: Analyse-it (an Excel :not really: add-in) and MedCalc (stand-alone software).

❝ I might just either smooth or not smooth :-D ...I have to reconsider it.


In your example there was no difference. If you smooth, how much? As Ohlbe said, resolution must not be affected. Contrary to bunching, smoothing should not change the shape of peaks that much.
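
Just to give an idea of what “how much” means, here is a toy moving-average smoother (my sketch; real CDS software typically uses e.g. Savitzky-Golay filtering):

# width is the "how much": wider windows suppress noise, but at some
# point start to flatten (and broaden) the peaks themselves
smooth.ma <- function(signal, width=5)
  stats::filter(signal, rep(1/width, width), sides=2)
noisy <- exp(-seq(-5, 5, 0.02)^2) + rnorm(501, sd=0.05)  # noisy toy peak
plot(noisy, type="l"); lines(smooth.ma(noisy, 9), col="red", lwd=2)
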
Last but not least: It seems that this is just an issue with ANVISA!

d_labes
★★★
Berlin, Germany
2010-11-19 17:00

@ Helmut
Posting: # 6176

 Sharpen the Jackknife

Hi Helmut!

❝ Hmm, strange. The intercept and slope are not the same as with the other procedures?! Ah, some weighting is used…


You set the weights to 1 with your function call!

❝ If I have time (haha), I will ask at R-help about the weighting. Even if I set all weights to 1, I still get different results…


Save your time for nicer things, f.i. NLYW :-D.

Set the call that minimizes the weighted sum of squares in the function to:
fit <- optimize(minfun, c(0.1, 10), x=x, y=y, xv=xstd^2, yv=ystd^2, tol=1.e-8)
and look what happens :cool:.

Have a nice weekend.

Regards,

Detlew
Helmut
★★★
Vienna, Austria
2010-11-19 17:19

@ d_labes
Posting: # 6177

 Sharpen the Jackknife

Dear D. Labes!

❝ Save your time for nicer things, f.i. NLYW :-D.


Oh, dear! I’m not sure whether my girlfriend would like that – especially the Y in NLYW. :pirate:

❝ Set the call that minimizes the weighted sum of squares in the function to:
❝ fit <- optimize(minfun, c(0.1, 10), x=x, y=y, xv=xstd^2, yv=ystd^2, tol=1.e-8)
❝ and look what happens :cool:.


Great! For the archive:
Call:
deming(x = x, y = y, xstd = c(1, 0), ystd = c(1, 0), jackknife = TRUE)

               Coef    se(coef)            z         p
Intercept 0.1857505 0.140423178 123.97018229 0.0000000
Slope     0.9891061 0.005457953  -0.01334144 0.4893563

Intercept (95% CI): -0.1586471 0.5301481
Slope     (95% CI):  0.9758092 1.002403


Code for the plot with the fit and its CI:

df  <- length(x)-2
t   <- qt(1-0.05/2, df=df)
x1  <- seq(min(x), max(x), length.out=250)
CI  <- matrix(nrow=250, ncol=2, byrow=TRUE,
        dimnames=list(NULL, c("CL.lo", "CL.hi")))
for (j in 1:length(x1)) {
  CI[j, 1] <- (fit$coefficient[[1]] - t*sqrt(fit$variance[1, 1])) +
              (fit$coefficient[[2]] - t*sqrt(fit$variance[2, 2]))*x1[j]
  CI[j, 2] <- (fit$coefficient[[1]] + t*sqrt(fit$variance[1, 1])) +
              (fit$coefficient[[2]] + t*sqrt(fit$variance[2, 2]))*x1[j]
}
plot(x, y, main="method comparison", sub="calibrators", xlab="raw",
  ylab="smooth", cex=2, col="red", cex.sub=0.9)
lines(x=range(x), y=fit$coefficient[[1]]+fit$coefficient[[2]]*range(x),
  col="blue", lwd=2)
lines(x=x1, y=CI[, 1], col="blue")
lines(x=x1, y=CI[, 2], col="blue")


[image]

You could also ask whether the smoothed results (y) are within the 95% CI of the fit:
CI <- CI[1:length(x), ]
for (j in 1:length(x)) {
  CI[j, 1] <- (fit$coefficient[[1]]-t*sqrt(fit$variance[1,1])) +
              (fit$coefficient[[2]]-t*sqrt(fit$variance[2,2]))*x[j]
  CI[j, 2] <- (fit$coefficient[[1]]+t*sqrt(fit$variance[1,1])) +
              (fit$coefficient[[2]]+t*sqrt(fit$variance[2,2]))*x[j]
}
CI <- cbind(CI, y)
CI <- as.data.frame(round(CI, 2))
CI <- cbind(CI, CI[, 3] >= CI[, 1] & CI[, 3] <= CI[, 2])
names(CI)[4]<- "within CI?"
print(CI, row.names=FALSE)

  CL.lo  CL.hi      y within CI?
 101.46 104.92 102.97       TRUE
  87.69  90.78  88.58       TRUE
  72.03  74.69  74.37       TRUE
  48.54  50.56  49.79       TRUE
  19.82  21.05  20.13       TRUE
   9.00   9.94   9.58       TRUE
   3.76   4.56   4.21       TRUE
   1.81   2.56   1.96       TRUE

Ohlbe
★★★
France
2010-11-18 02:39

@ moblak
Posting: # 6160

 Integration - smoothing

Dear Marko,

❝ However (as every analyst except regulators knows :angry:), from time to time it is not so easy to integrate all chromatograms within the batch consistently with one generic method.


Agreed, particularly with low concentrations and noisy baselines (which you will get without smoothing).

❝ we (…) never apply smoothing (again, a historic regulatory taboo).


:confused: I have never heard of regulatory authorities objecting to the use of smoothing. Are you thinking of any specific agency?

I agree with Helmut: it can be challenging or even impossible to integrate chromatograms properly without some level of smoothing. Smoothing should remain reasonable (the idea is not to mask chromatographic problems and shoulders which could be due to interferences!), but it is needed.

One question could be whether to authorise changes in smoothing within a study if one run is noisier than another. Within a run, I would consider that the smoothing should remain the same for all samples (it affects the peak area and therefore the concentration). Between runs, I would probably try all other parameters first…

Regards
Ohlbe
moblak
☆

2010-11-18 11:57

@ Ohlbe
Posting: # 6164

 Integration - smoothing

Dear Ohlbe,

❝ Ohlbe, France, 2010-11-18 01:39

Working late? ;-)

❝ Are you thinking of any specific agency?


ANVISA... Now I have to go through Helmut's response... it will take some time.

Regards
Marko
Ohlbe
★★★
France
2010-11-18 14:58

@ moblak
Posting: # 6165

 ANVISA

Dear Marko,

❝ Working late? ;-)


I was in the US, it was not that late there...

❝ ANVISA...


Gosh… they really are special!
There was a session on the global harmonisation of bioanalytical guidelines and expectations at PSWC2010 (the FIP/AAPS annual congress) in New Orleans on Monday. Eric Woolf from Merck gave a presentation with the industry’s position. It was not really about US vs. EMA (the current hot topic, with the draft EMA guideline and the FDA revising its own guidance) but rather about problems with other countries. He did not name any, but from the examples he gave it was quite obvious that he was pointing at Brazil…

By the way, if I understood the presentation by Ariadna Barra (ANVISA) at the same session correctly, ANVISA will also update their bioanalytical guidance and some aspects will get closer to US/EMA.

Regards
Ohlbe