Integration - smoothing [Bioanalytics]

posted by Helmut Homepage – Vienna, Austria, 2010-11-17 17:31 (5308 d 13:53 ago) – Posting: # 6158
Views: 10,982

Dear Marko!

❝ ❝ You are aware that you never use the raw signal of the detector?

❝ That's why I wrote "raw" data.


Again – what do you mean by “raw data”?

❝ ❝ Stupid question: A/P?

❝ Accuracy/Precision


I see. :cool:

❝ ❝ orthogonal regression.

❝ By clicking on the link I just got formulaphobia :-D


Nice term, but not sooo tough.*
If you have R, it boils down to something like this (on back-calculated concentrations, not A/P):
x      <- c(1.00,1.10,0.85, 13.0,14.2,15.9, 210,215,190)
y      <- c(0.81,0.95,1.15, 15.0,16.0,12.8, 200,205,180)
Q.x    <- sum((x-mean(x))^2)
Q.y    <- sum((y-mean(y))^2)
Q.xy   <- sum((x-mean(x))*(y-mean(y)))
b      <- (-(Q.x-Q.y)+sqrt((Q.x-Q.y)^2+4*Q.xy^2))/(2*Q.xy)
a      <- mean(y)-b*mean(x)
a;b

gives
[1] 0.4675463
[1] 0.9492506

compared to linear regression
linear <- lm(y~x)
summary(linear)

gives
Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-2.7673 -0.6152 -0.1328  0.4600  2.1852

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.476075   0.684809   0.695    0.509
x           0.949134   0.005764 164.675 8.04e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.615 on 7 degrees of freedom
Multiple R-squared: 0.9997,     Adjusted R-squared: 0.9997
F-statistic: 2.712e+04 on 1 and 7 DF,  p-value: 8.036e-14


❝ ❝ You can test the slope for # 1 and the intercept for # 0.

❝ I am not sure what do you mean. Could you please explain in more detail?


In the linear model above you get not only the estimated slope and intercept but also their standard errors. You can test the estimates against 0 and 1 by means of these SEs: the (1-α) confidence interval is given by [a,b] ± t(n-2, 1-α/2) × SE[a,b]. Now look whether the CI of a includes 0 and the CI of b includes 1.
A significant intercept indicates a constant (additive) bias, independent of concentration.
A significant slope (different from 1) indicates a proportional bias, dependent on concentration.
If neither is significant, the methods perform equally well.
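As a sketch, this check can be done directly with confint() on the linear model (same data as above):

```r
# Sketch (same data as above): test the OLS estimates against
# intercept = 0 and slope = 1 via their confidence intervals.
x      <- c(1.00,1.10,0.85, 13.0,14.2,15.9, 210,215,190)
y      <- c(0.81,0.95,1.15, 15.0,16.0,12.8, 200,205,180)
linear <- lm(y ~ x)
ci     <- confint(linear, level=0.95)  # estimate ± t(n-2, 0.975) × SE
ci
# constant (additive) bias if the intercept's CI excludes 0,
# proportional bias if the slope's CI excludes 1
```

With these example data the intercept's CI includes 0, whereas the very small SE of the slope makes its CI exclude 1 – note that a huge t value against 0 (as in the summary above) says nothing about the relevant test against 1.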

I’m a little bit short of time to come up with code for SEs in orthogonal regression. There’s an R package ‘MethComp’ (download), which is not available from CRAN right now. It contains the function ‘Deming’ – it should be possible to extract the standard errors or do some jackknife. The simple call gives the same results as my code above:
Deming(x,y)
Intercept     Slope   sigma.x   sigma.y
0.4675463 0.9492506 1.1712265 1.1712265
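The jackknife mentioned above can be sketched in a few lines; ortho() and jack.se() are ad-hoc helpers of mine (not part of ‘MethComp’), wrapping the closed-form solution from my code above:

```r
# Sketch: jackknife SEs for orthogonal (Deming, lambda = 1) regression.
# ortho() and jack.se() are ad-hoc helpers, not from 'MethComp'.
ortho <- function(x, y) {
  Q.x  <- sum((x - mean(x))^2)
  Q.y  <- sum((y - mean(y))^2)
  Q.xy <- sum((x - mean(x)) * (y - mean(y)))
  b    <- (-(Q.x - Q.y) + sqrt((Q.x - Q.y)^2 + 4*Q.xy^2)) / (2*Q.xy)
  c(a = mean(y) - b*mean(x), b = b)
}
jack.se <- function(x, y) {
  n   <- length(x)
  # refit n times, leaving out one pair each time
  est <- t(sapply(seq_len(n), function(i) ortho(x[-i], y[-i])))
  # jackknife SE: sqrt((n-1)/n * sum of squared deviations)
  apply(est, 2, function(v) sqrt((n - 1)/n * sum((v - mean(v))^2)))
}
x <- c(1.00,1.10,0.85, 13.0,14.2,15.9, 210,215,190)
y <- c(0.81,0.95,1.15, 15.0,16.0,12.8, 200,205,180)
ortho(x, y)    # same a and b as above
jack.se(x, y)  # jackknife standard errors of a and b
```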


❝ Btw. Do you use smoothing for data processing of your LC-MS/MS chromatograms.


Well, I left the lab some good years ago… But many CROs I know do so – as long as the resolution of peaks is not negatively affected.


If you have access to SAS, look here.


Edit: After reading some stuff, one thing is clear: never use a t-test alone in method comparisons! It will only detect a constant bias, not a proportional one.
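To illustrate with constructed data (biases chosen so that they cancel at the mean):

```r
# Sketch: constructed data where additive and proportional biases cancel
# at the mean -- a paired t-test sees nothing, regression sees both.
set.seed(42)
x <- seq(1, 100, length.out = 20)
y <- 0.5*x + 0.5*mean(x) + rnorm(20, sd = 0.5)  # slope 0.5, intercept ~25
t.test(x, y, paired = TRUE)$p.value  # large p: the means agree
coef(lm(y ~ x))                      # slope ~0.5, intercept ~25: clear bias
```

The paired t-test only looks at the mean difference, which is (b-1)·mean(x)+a and can be zero despite massive bias.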
Package ‘MethComp’ is really nice; it even contains Bland–Altman plots. The SEs of the Deming regression are estimated by bootstrapping. Example:
require(MethComp)
x          <- c(0.88,1.19,0.85, 13.0,13.5,16.2, 212,225,190)
y          <- c(0.81,0.96,1.20, 15.0,16.2,12.8, 180,220,190)
orthogonal <- Deming(x,y,
                     boot=5000, keep.boot=TRUE, alpha=0.05)

will give
           Estimate S.e.(boot)       50%       2.5%    97.5%
Intercept 0.5395809 1.03057354 0.5294279 -1.2314002 2.476764
Slope     0.9397784 0.05580518 0.9424645  0.8422070 1.002284

0 is well within the 95% confidence interval, i.e., no constant bias.
1 is within the CI, but only borderline (you may repeat the bootstrap or request a larger number of samples; with boot=TRUE the default of 1,000 samples is used). Now let’s look at a plot:
plot(x,y, xlim=c(0,max(x,y)), ylim=c(0,max(x,y)),
     xlab="method 1", ylab="method 2", col="red", cex=2, cex.lab=1.25)
abline(0,1, col="black", lwd=1)
bothlines(x,y, Dem=TRUE, sdr=1,
          col=c("red","transparent","blue"), lwd=2)


[image]

The black line is identity (y=x), the red line ordinary (linear) regression, and the blue one Deming (orthogonal) regression. There seems to be no big difference. Now for the lower range:
plot(x,y, xlim=c(0,16.2), ylim=c(0,16.2),
     xlab="method 1", ylab="method 2", col="red", cex=2, cex.lab=1.25)
abline(0,1, col="black", lwd=1)
bothlines(x,y, Dem=TRUE, sdr=1,
          col=c("red","transparent","blue"), lwd=2)


[image]

Now it’s clearer. Hope that helps.

If you don’t have R installed – or it will take ages until your IT department installs it for you – you can post a dataset. I would suggest including back-calculated calibrators and QCs from your ‘raw’ integration and the same dataset ‘smoothed’…

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz
