Dilution? [General Statistics]
Hi ElMaestro,
❝ I can see that diluting X times would lead to the error being multiplied by X,…
You are on the right track. By serial dilutions I mean the way we go from the primary stock solution to the various spiking solutions. Since in every step we transfer a small volume into a larger one, the relative errors of the steps multiply. Don’t force me to go back to my chemistry textbooks, which I haven’t touched for 35+ years.
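A minimal R sketch of this point (all numbers invented): three 1:10 dilution steps, each transfer carrying a small relative error as a lognormal factor. The factors multiply, so the relative error of the final concentration grows roughly with the square root of the number of steps.

```r
# Sketch (invented numbers): multiplicative error across serial dilutions
set.seed(42)
n     <- 1e5      # simulated replicates
steps <- 3        # number of 1:10 dilution steps
cv    <- 0.005    # assumed 0.5 % relative error per transfer
c0    <- 1000     # nominal stock concentration (ng/mL)
conc  <- rep(c0, n)
for (i in seq_len(steps)) {
  conc <- conc * 0.1 * rlnorm(n, sdlog = cv)  # each step: dilute 1:10 with error
}
c(nominal = c0 * 0.1^steps,
  mean    = mean(conc),
  CV.pct  = 100 * sd(conc) / mean(conc))      # CV grows ~ sqrt(steps) * cv
```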

❝ …but is this really a proper argument for using the multiplicative model in BE in any way? I can't readily see how it fits in. For simplicity we can consider just Cmax: Since dilution (as in dilution integrity) affects a subset of the samples, any impact on the error will be on just those samples.
The problems in calibration (aka inverse regression) are manifold. The common model assumes that the regressor (here the concentration) is free of any error – all error resides in the regressand (the measurement). Is this true? Obviously not. Lower concentrations generally are spiked with solutions which were diluted more often. In other words, the higher variability / imprecision we observe at lower concentrations is only partly due to the inherent error of the measurement (fluctuations in flow are more influential on small peaks, lower signal-to-noise ratio of the electronic components, …) but also due to the higher error in the concentrations themselves. Are there any ways out?
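First, a minimal sketch of the problem itself (all numbers invented), before the possible ways out: calibrators at the low end pass through more dilution steps, so their relative scatter around the nominal value comes out larger than at the high end.

```r
# Sketch (invented numbers): more dilution steps at the low end inflate
# the *relative* spiking error of the low calibrators.
set.seed(1)
x.nom  <- rep(c(1, 2, 5, 10, 50, 100), each = 500)    # nominal (ng/mL)
steps  <- ifelse(x.nom <= 5, 3, 1)                    # low calibrators: more steps
cv.x   <- 0.02 * sqrt(steps)                          # 2 % per transfer, combined
x.true <- x.nom * rlnorm(length(x.nom), sdlog = cv.x) # actually spiked concentration
y      <- 2 * x.true * rlnorm(length(x.nom), sdlog = 0.02)  # response, 2 % assay error
round(100 * tapply(y / (2 * x.nom) - 1, x.nom, sd), 2)      # relative scatter (%) per level
```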

- Instead of OLS, perform a variant of orthogonal regression (aka Deming regression). Whereas OLS minimizes the SSE in the ordinate, orthogonal regression minimizes the SSE perpendicular to the regression line. Unfortunately, this only makes sense if we compare two analytical methods of essentially equal error (Deming regression is one method for such comparisons, Bland–Altman plots another). There are very sophisticated weighting methods available which force the “error lines” to any desired angle (say, you know that 90% of the error is in the response and 10% in the regressor – the error lines will be almost vertical); see the sketch after this list.
The devil is in the details. How to select and justify the weighting function? Based on propagation of uncertainty it is possible to predict accuracy/precision at any concentration. Note: It is better to use large volumetric flasks and keep the number of dilution steps to a minimum. Error (%) for Class A volumetric flasks:
Volume (mL)  Accuracy (%)  Precision (%)
────────────────────────────────────────
         10      0.20          0.050
         25      0.12          0.020
         50      0.10          0.014
        100      0.08          0.011
        250      0.05          0.0068
        500      0.04          0.0042
       1000      0.03          0.0042
────────────────────────────────────────
- Ignore this whole bunch of crap (leave it to your PhD students) and keep in mind that lower concentrations are more erroneous than higher ones.
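As promised above, a base-R sketch of Deming regression via its closed-form solution; `deming.fit` is a made-up helper name (tested implementations exist, e.g., in the deming and mcr packages). The `lambda` argument is the assumed ratio of error variances (response over regressor) and controls the angle of the error lines: `lambda = 1` gives perpendicular residuals, and as `lambda` grows the fit approaches OLS.

```r
# Hypothetical helper: closed-form Deming regression.
# lambda = ratio of error variances (response / regressor).
deming.fit <- function(x, y, lambda = 1) {
  sxx <- var(x); syy <- var(y); sxy <- cov(x, y)
  b1  <- (syy - lambda * sxx +
          sqrt((syy - lambda * sxx)^2 + 4 * lambda * sxy^2)) / (2 * sxy)
  c(intercept = mean(y) - b1 * mean(x), slope = b1)
}
# toy data with error in both axes (invented numbers)
set.seed(7)
x.nom <- c(1, 2, 5, 10, 20, 50)
x <- x.nom * rlnorm(6, sdlog = 0.05)       # error in the regressor
y <- 2 * x.nom * rlnorm(6, sdlog = 0.05)   # error in the response
deming.fit(x, y, lambda = 9)  # e.g., 90 % of the error variance in y
coef(lm(y ~ x))               # compare: OLS (lambda -> Inf)
```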
❝ This would just speak against any parametric method, unless someone can make a parametric bi-modal model, right?
Oh, boy!
❝ […] Let me please know your thoughts. Especially if there is a potential for simulating something, haha
I’ll give you some insights into one of my projects, which has been sitting at the bottom of my to-do pile for years. More simulations than one can take. Steps:
- Based on the PK of the drug/formulation, simulate profiles. Take into account intra-/inter-subject variability, analytical error, whatsoever. Simulate different sampling schedules (a sketch follows after these steps).
Level: Easy to intermediate. Software: R.
- Model the PK. Select the schedule which will give estimates of Cmax and AUC with the lowest variability.
Level: Intermediate. Hint: Variance Inflation Factor. Software: Phoenix/WinNonlin and/or R (nlme/lme4).
- Based on the known properties of the analytical method – and the expected location of concentrations! – find the “optimal” number and location of calibrators and QCs.
Level: Difficult. Software: R.
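A minimal R sketch of the first step, with all PK parameters and variabilities invented: a one-compartment model with first-order oral absorption, lognormal between-subject variability on clearance and the absorption rate constant, and multiplicative analytical error on the observed concentrations.

```r
# Sketch of step 1 (invented parameters): simulate oral one-compartment
# profiles with between-subject variability and analytical error.
set.seed(123)
n.subj  <- 24
t.sched <- c(0.25, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12, 24)  # sampling times (h)
dose <- 100                                  # dose (mg), complete absorption assumed
V    <- 50                                   # volume of distribution (L), fixed here
CL   <- 5   * rlnorm(n.subj, sdlog = 0.25)   # clearance (L/h), ~25 % CV
ka   <- 1.5 * rlnorm(n.subj, sdlog = 0.30)   # absorption rate constant (1/h)
ke   <- CL / V                               # elimination rate constant (1/h)
conc <- sapply(seq_len(n.subj), function(i) {
  C <- dose / V * ka[i] / (ka[i] - ke[i]) *
       (exp(-ke[i] * t.sched) - exp(-ka[i] * t.sched))
  C * rlnorm(length(t.sched), sdlog = 0.05)  # 5 % multiplicative analytical error
})
Cmax <- apply(conc, 2, max)                  # observed Cmax per subject
summary(Cmax)
```

Repeating this for different `t.sched` vectors shows how the sampling schedule alone shifts the distribution of observed Cmax.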
—
Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes