Dilution? [General Statistics]

posted by Helmut – Vienna, Austria, 2014-06-24 18:19 – Posting: # 13134

Hi ElMaestro,

❝ I can see that diluting X times would lead to the error being multiplied by X,…


You are on the right track. By serial dilutions I mean the way we go from the primary stock solution to various spiking solutions. Since in every step we transfer a small volume into a large one, the error is multiplicative. Don’t force me to go back to my chemistry textbooks, which I haven’t touched for 35+ years. ;-)
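To make the multiplicative-error point concrete, here is a toy Monte-Carlo sketch (all numbers – 1:10 steps, ~1 % relative pipetting error – are assumptions for illustration, not properties of any real method). Since each transfer multiplies the concentration by an uncertain factor, the errors on the log scale add, and the relative error of the final spiking solution grows roughly with √(number of steps):

```python
import random

def serial_dilution(c0, steps, rel_err=0.01, n=20000, seed=1):
    """Monte-Carlo: propagate a relative pipetting error through
    successive 1:10 dilution steps; return the relative SD (CV)
    of the final concentration."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n):
        c = c0
        for _ in range(steps):
            # each transfer multiplies the concentration by a factor
            # carrying ~rel_err relative error (uncertain volume ratio)
            c *= 0.1 * (1 + rng.gauss(0, rel_err))
        finals.append(c)
    mean = sum(finals) / n
    sd = (sum((x - mean) ** 2 for x in finals) / (n - 1)) ** 0.5
    return sd / mean

# CV grows with each dilution step: errors multiply,
# i.e., log-concentration errors add
for k in (1, 2, 4):
    print(k, round(serial_dilution(100.0, k), 4))
```

So a calibrator spiked after four dilution steps carries about twice the concentration error of one spiked after a single step – before a single sample has even been injected.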

❝ …but is this really a proper argument for using the multiplicative model in BE in any way? I can't readily see how it fits in. For simplicity we can consider just Cmax: Since dilution (as in dilution integrity) affects a subset of the samples, any impact on the error will be on just those samples.


The problems in calibration (aka inverse regression) are manifold. The common model assumes that the regressor (here: concentration) is free of any error – all error is in the regressand (the measured response). Is this true? Obviously not. Lower concentrations are generally spiked from solutions which passed through more dilution steps. In other words, the higher variability / imprecision we observe at low concentrations is only partly due to the inherent error of the measurement (fluctuations in flow are more influential on small peaks, lower signal-to-noise ratio of the electronic components, :blahblah:) – it is also due to the larger error in the concentrations themselves. Are there any ways out?
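One partial way out, common in bioanalytics, is weighted regression (e.g., 1/x² weighting), which at least keeps the *relative* residuals comparable across the range instead of letting the huge absolute residuals at the top of the curve dominate the fit. A minimal sketch, with made-up calibrator levels and a purely proportional error model (both are assumptions, not recommendations):

```python
import random

def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x."""
    W = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / W
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / W
    b = (sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
    a = ybar - b * xbar
    return a, b

rng = random.Random(7)
conc = [1, 2, 5, 10, 50, 100, 500, 1000]           # hypothetical calibrators
# straight-line response with proportional (relative) noise
resp = [2.0 * c * (1 + rng.gauss(0, 0.03)) for c in conc]

# 1/x^2 weighting: minimise *relative* residuals so the lower end of
# the curve is not swamped by the absolute residuals at the top
a, b = wls_line(conc, resp, [1 / c**2 for c in conc])

# inverse regression: back-calculate concentrations from responses
back = [(r - a) / b for r in resp]
```

Note this only repairs the heteroscedasticity of the *response*; the error in the spiked concentrations themselves (the regressor) is still ignored – that would call for an errors-in-variables model.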

❝ This would just speak against any parametric method, unless someone can make a parametric bi-modal model, right?


Oh, boy!

❝ […] Let me please know your thoughts. Especially if there is a potential for simulating something, haha :pirate:


I’ll give you some insights into one of my projects, which has been sitting at the bottom of my to-do pile for years. More simulations than one can take. Steps:
  1. Based on the PK of the drug/formulation, simulate profiles. Take into account intra-/inter-subject variability, analytical error, whatsoever. Simulate different sampling schedules.
    Level: Easy to intermediate. Software: R.
  2. Model the PK. Select the schedule which will give estimates of Cmax and AUC with lowest variability.
    Level: Intermediate. Hint: Variance Inflation Factor. Software: Phoenix/WinNonlin and/or R/lme/lme4.
  3. Based on the known properties of the analytical method – and the expected location of concentrations! – find the “optimal” number and location of calibrators and QCs.
    Level: Difficult. Software: R.
#1 is rarely done (Paulo Paixão et al., the two Lászlós,…); I and a few other people I know sometimes use #2. #3 is a cosmic mindfucker.
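Step #1 can be sketched in a few lines. Below a toy version: a one-compartment model with first-order absorption, lognormal between-subject variability on ka and ke, and a proportional analytical error on each sample. Every numeric value (dose, typical ka/ke/V, the CVs, the schedule) is an arbitrary assumption for illustration:

```python
import math, random

def one_comp_oral(t, dose, ka, ke, V):
    """Concentration at time t, 1-compartment model, first-order absorption."""
    if abs(ka - ke) < 1e-9:
        ka += 1e-6  # avoid the ka == ke singularity
    return dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def simulate_subject(times, rng, dose=100.0, cv_ka=0.3, cv_ke=0.2, cv_assay=0.05):
    # between-subject variability: lognormal around assumed typical values
    ka = 1.5 * math.exp(rng.gauss(0, cv_ka))
    ke = 0.2 * math.exp(rng.gauss(0, cv_ke))
    V = 50.0
    # proportional analytical error on each measured sample
    return [one_comp_oral(t, dose, ka, ke, V) * (1 + rng.gauss(0, cv_assay))
            for t in times]

rng = random.Random(2014)
schedule = [0.25, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12, 24]   # assumed sampling times
profiles = [simulate_subject(schedule, rng) for _ in range(24)]
cmax = [max(p) for p in profiles]                        # observed Cmax per subject
```

From there one can re-run the same subjects under different schedules and compare how well the observed Cmax recovers the true one – which is exactly where step #2 takes over.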

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
