KG
☆    

India,
2014-06-23 16:47

Posting: # 13127

 Geometric Mean [General Statistics]

Dear Sir,

Why is the geometric mean taken into consideration in BE studies? Why not the arithmetic mean?

I would like to understand the concept behind that.

Thanks

KG
Helmut
★★★
Vienna, Austria,
2014-06-23 17:20

@ KG
Posting: # 13129

 Lognormal transformation / multiplicative model

Hi KG,

❝ Dear Sir,

      ↑↑↑↑ Not interested in the opinions of female members of the Forum?


❝ Why Geometric mean is taken in to consideration in BE studies. Why not arithmetic mean??


There has been a consensus for decades that AUC and Cmax (like many biological variables) follow a lognormal (rather than a normal) distribution. Whereas the arithmetic mean is the best unbiased estimator of location for the normal distribution, the best estimator for the lognormal distribution is the geometric mean.
Justification for the lognormal distribution in BE:
  • Negative concentrations are not possible.
  • Empiric distributions of AUC and Cmax are skewed to the right.
  • Serial dilutions in bioanalytics lead to multiplicative errors.
See this example for the distribution of AUC-values (437 subjects from pooled studies).
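
A quick way to see the difference is to simulate lognormal data; a minimal R sketch (made-up numbers, not the pooled data set mentioned above):

set.seed(42)
AUC <- rlnorm(n = 1e5, meanlog = log(100), sdlog = 0.4)  # “true” location 100

mean(AUC)            # arithmetic mean ~108: dragged upwards by the right tail
exp(mean(log(AUC)))  # geometric mean ~100: recovers exp(meanlog)
median(AUC)          # ~100 as well; for a lognormal, median = geometric mean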

<nitpicking>

The distribution of data is not relevant, only the intra-subject residuals from the model. However, using the geometric means – or adjusted means in the case of imbalanced sequences (SAS lingo: least squares means) – is in line with the log-transformation / multiplicative model.

</nitpicking>
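
For completeness, a hedged sketch of what the multiplicative model looks like in R for a conventional 2×2×2 crossover. The data frame d and its columns (subject, sequence, period, treatment with levels R/T, Cmax) are hypothetical placeholders, not from any study above:

d$subject   <- factor(d$subject)
d$sequence  <- factor(d$sequence)
d$period    <- factor(d$period)
d$treatment <- factor(d$treatment, levels = c("R", "T"))

# all effects fixed; with uniquely coded subjects the sequence effect is
# aliased with subjects (R reports NA for one coefficient) -- the
# treatment estimate is unaffected
m  <- lm(log(Cmax) ~ sequence + subject + period + treatment, data = d)

pe <- coef(m)["treatmentT"]                   # difference of adjusted log-means
ci <- confint(m, "treatmentT", level = 0.90)  # 90% CI on the log scale

exp(pe)  # back-transformed: geometric mean ratio T/R
exp(ci)  # conventional 90% confidence interval on the ratio scale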

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
ElMaestro
★★★

Denmark,
2014-06-23 23:42

@ Helmut
Posting: # 13130

 Dilution?

Hi KG, sorry for partially hijacking your thread:

Hi Helmut,

❝ Justification for the lognormal distribution in BE:

❝     (...)

❝     ● Serial dilutions in bioanalytics lead to multiplicative errors.


I can see that diluting X times would lead to the error being multiplied by X, but is this really a proper argument for using the multiplicative model in BE in any way? I can't readily see how it fits in. For simplicity we can consider just Cmax: Since dilution (as in dilution integrity) affects a subset of the samples, any impact on the error will be on just those samples. This would just speak against any parametric method, unless someone can make a parametric bi-modal model, right?

Or perhaps you referred to the plain dilution any sample undergoes during the assay? I still can't see how this fits into the argumentation. Please let me know your thoughts. Especially if there is a potential for simulating something, haha :pirate:

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2014-06-24 18:19

@ ElMaestro
Posting: # 13134

 Dilution?

Hi ElMaestro,

❝ I can see that diluting X times would lead to the error being multiplied by X,…


You are on the right track. By serial dilutions I mean the way we go from the primary stock solution to various spiking solutions. Since in every step we transfer a small volume to a large one, the error is multiplicative. Don’t force me to go back to my chemistry textbooks, which I haven’t touched for 35+ years. ;-)
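
To make the multiplicative nature explicit, a small R sketch (the 1% relative error per transfer is an assumption for illustration): log-errors of independent steps add up, so the accumulated error factor is approximately lognormal.

set.seed(1)
n.steps <- 5     # dilution steps from stock to spiking solution
cv.step <- 0.01  # assumed 1% relative error per volumetric transfer
n.sim   <- 1e5

# each simulated solution accumulates the product of per-step error factors
err <- replicate(n.sim, prod(rnorm(n.steps, mean = 1, sd = cv.step)))

hist(log(err), breaks = 100)  # ~normal on the log scale
sd(log(err))                  # ~sqrt(n.steps) * cv.step = ~0.022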

❝ …but is this really a proper argument for using the multiplicative model in BE in any way? I can't readily see how it fits in. For simplicity we can consider just Cmax: Since dilution (as in dilution integrity) affects a subset of the samples, any impact on the error will be on just those samples.


The problems in calibration (aka inverse regression) are manifold. The common model assumes that the regressor (here the concentration) is free of any error – all error is in the regressand (the measurement). Is this true? Obviously not. Lower concentrations generally are spiked with solutions which were diluted more often. In other words, the higher variability / imprecision we observe at lower concentrations is only partly due to the inherent error of the measurement (fluctuations in flow are more influential on small peaks, lower signal/noise ratio of the electronic components, :blahblah:) but also due to the higher error in the concentrations themselves. Are there any ways out?
  • Instead of OLS perform a variant of orthogonal regression (aka Deming’s regression); a minimal sketch follows after this list. Whereas OLS minimizes the SSE in the ordinate, orthogonal regression minimizes the SSE perpendicular to the regression line. Unfortunately this makes sense only if we compare two analytical methods of essentially equal error (Deming’s regression is one method, Bland-Altman plots another). There are very sophisticated weighting methods available which force the “error lines” to any wanted angle (let’s say you know that 90% of the error is in the response and 10% in the regressor – the error lines will be almost vertical).
    The devil is in the details. How to select and justify the weighting function? Based on propagation of uncertainty it is possible to predict accuracy/precision at any concentration (see the second sketch after this list). Note: it is better to use large volumetric flasks and keep the dilution steps to a minimum. Error (%) for Class A flasks:
      mL Accuracy Precision
    ───────────────────────
      10   0.20    0.050   
      25   0.12    0.020   
      50   0.10    0.014   
     100   0.08    0.011   
     250   0.05    0.0068  
     500   0.04    0.0042  
    1000   0.03    0.0042  
    ───────────────────────

  • Ignore this whole bunch of crap (leave it to your PhD-students) and keep in mind that lower concentrations are more erroneous than higher ones.
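
Since Deming’s regression came up in the first bullet, a minimal hand-rolled sketch; the closed-form estimator is short enough to write out (CRAN packages such as mcr or deming provide full-featured implementations). Here delta is the assumed ratio of the error variance in y to that in x; delta = 1 gives orthogonal regression.

deming_fit <- function(x, y, delta = 1) {
  sxx <- var(x); syy <- var(y); sxy <- cov(x, y)
  b1  <- (syy - delta * sxx +
          sqrt((syy - delta * sxx)^2 + 4 * delta * sxy^2)) / (2 * sxy)
  b0  <- mean(y) - b1 * mean(x)
  c(intercept = b0, slope = b1)
}

# toy data: two methods measuring the same samples with equal error
set.seed(7)
true <- runif(50, 1, 100)
x    <- true + rnorm(50, sd = 10)  # method A
y    <- true + rnorm(50, sd = 10)  # method B
deming_fit(x, y)  # slope ~1
coef(lm(y ~ x))   # OLS slope attenuated towards 0 by the error in x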
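
And a sketch of the propagation-of-uncertainty argument behind the flask table: for independent steps the relative errors add in quadrature, so a single step in a large flask beats a cascade of small ones (pipetting errors are ignored here for brevity).

prec <- c(`10` = 0.050, `25` = 0.020, `50` = 0.014,       # precision (%) of
          `100` = 0.011, `250` = 0.0068, `1000` = 0.0042) # Class A flasks

combined_cv <- function(steps) sqrt(sum(steps^2))

combined_cv(rep(prec["10"], 3))  # three steps in 10 mL flasks: ~0.087%
combined_cv(prec["1000"])        # one step in a 1000 mL flask:  0.0042%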

❝ This would just speak against any parametric method, unless someone can make a parametric bi-modal model, right?


Oh, boy!

❝ […] Let me please know your thoughts. Especially if there is a potential for simulating something, haha :pirate:


I’ll give you some insights into one of my projects, which has been sitting for years at the bottom of my to-do pile. More simulations than one can take. Steps:
  1. Based on the PK of the drug/formulation, simulate profiles. Take into account intra-/inter-subject variability, analytical error, whatsoever. Simulate different sampling schedules.
    Level: Easy to intermediate. Software: R.
  2. Model the PK. Select the schedule which will give estimates of Cmax and AUC with the lowest variability.
    Level: Intermediate. Hint: Variance Inflation Factor. Software: Phoenix/WinNonlin and/or R/lme/lme4.
  3. Based on the known properties of the analytical method – and the expected location of concentrations! – find the “optimal” number and location of calibrators and QCs.
    Level: Difficult. Software: R.
#1 is rarely done (Paulo Paixão et al., the two Lászlós,…); I and a few other people I know sometimes use #2. #3 is a cosmic mindfucker.
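
To give a flavour of #1, a hedged R sketch: one-compartment profiles after an oral dose with lognormal between-subject variability and a proportional analytical error; all parameter values are invented for illustration.

set.seed(123)
n.sub <- 24
t.smp <- c(0.25, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12, 16, 24)  # sampling schedule (h)

F.D <- 100                             # dose x bioavailability (arbitrary units)
ka  <- rlnorm(n.sub, log(1.4),  0.3)   # absorption rate constants (1/h)
ke  <- rlnorm(n.sub, log(0.17), 0.2)   # elimination rate constants (1/h)
V   <- rlnorm(n.sub, log(30),   0.25)  # volumes of distribution (L)

profiles <- sapply(seq_len(n.sub), function(i) {
  C <- F.D / V[i] * ka[i] / (ka[i] - ke[i]) *
       (exp(-ke[i] * t.smp) - exp(-ka[i] * t.smp))
  C * rlnorm(length(t.smp), 0, 0.05)   # 5% proportional assay error
})

matplot(t.smp, profiles, type = "l", log = "y",
        xlab = "time (h)", ylab = "concentration")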

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
KG
☆    

India,
2014-06-24 12:51

@ Helmut
Posting: # 13133

 Lognormal transformation / multiplicative model

Thanks, sir, for clarifying my doubt.


Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Helmut]