d_labes

Berlin, Germany,
2008-02-01 15:58

Posting: # 1572

 Methods for calculation of half lives [NCA / SHAM]

Dear All,

I would like to hear your opinion about the calculation of the terminal half life, especially about choosing the linear part.

Do you do it via an 'informed' look at the concentration–time curve (semilogarithmic, of course), or do you use an automatic method?

As far as I know, WinNonlin has a built-in method using the adjusted R2.
What are your experiences with that?
Does it lead to reasonable choices?

Are there any other methods (automatic or semi-automatic) in use?

I think that, in view of the standardization of the pharmacokinetic evaluation within the framework of bioequivalence studies, an automated method would be desirable.
This would remove the subjectivity in estimating the AUC values, especially the extrapolated part, if possible.

Regards,

Detlew
Helmut

Vienna, Austria,
2008-02-01 17:59

@ d_labes
Posting: # 1573

 Methods for calculation of half lives

Dear DLabes!

❝ I would like to hear your opinion about the calculation of the terminal half life, especially about choosing the linear part.


❝ Do you do it via an 'informed' look at the concentration–time curve (semilogarithmic, of course), or do you use an automatic method?


Personally I’m sticking to ‘eyeball PK’ in this respect. The standard textbook on regression emphasizes the importance of visual inspection of the fit.[1] Other rather old, but still valid, references are given below.[2,3]

❝ As far as I know, WinNonlin has a built-in method using the adjusted R2.


Yes, quoting WinNonlin’s (v5.2, 2007) online-help:


During the analysis, WinNonlin repeats regressions using the last three points with non-zero concentrations, then the last four points, last five, etc. Points prior to Cmax or prior to the end of infusion are not used unless the user spe­cifically requests that time range. Points with a value of zero for the depen­dent variable are excluded. For each regression, an adjusted \(\small{R^2}\) is computed:
\(\text{Adjusted}\,R^2=1-\frac{(1-R^2)\,\times\,(n-1)}{(n-2)}\)
where \(\small{n}\) is the number of data points in the regression and \(\small{R^2}\) is the square of the correlation coefficient (aka the coefficient of determination).
WinNonlin estimates \(\small{\lambda_\text{z}}\) using the regression with the largest adjusted \(\small{R^2}\) and:

  • If the adjusted \(\small{R^2}\) does not improve, but is within 0.0001 of the largest adjusted \(\small{R^2}\) value, the regression with the larger number of points is used.
  • \(\small{\lambda_\text{z}}\) must be positive, and calculated from at least three data points.

[…] Using this methodology, WinNonlin will almost always compute an estimate for \(\small{\lambda_\text{z}}\). It is the user’s responsibility to evaluate the appropriateness of the esti­mated value.



My emphasis:-D
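The quoted rule is easy to reproduce outside WinNonlin. A minimal Python sketch of my reading of the quoted help text (not Pharsight’s code, just the algorithm as described):

```python
import math

def best_fit_lambda_z(t, c):
    """Sketch of the quoted 'best fit' rule: regress ln(C) vs. t over the
    last 3, 4, ... points (t and c are assumed to hold only non-zero
    concentrations after tmax), compute the adjusted R2 of each fit, and
    keep the fit with the largest value; a fit with more points wins if
    it comes within 0.0001 of the largest adjusted R2 seen so far."""
    best, max_r2a = None, -1.0
    for n in range(3, len(t) + 1):
        ts = t[-n:]
        ys = [math.log(x) for x in c[-n:]]
        mt, my = sum(ts) / n, sum(ys) / n
        sxx = sum((x - mt) ** 2 for x in ts)
        sxy = sum((x - mt) * (y - my) for x, y in zip(ts, ys))
        syy = sum((y - my) ** 2 for y in ys)
        slope = sxy / sxx
        if slope >= 0:                       # lambda_z = -slope must be > 0
            continue
        r2 = sxy ** 2 / (sxx * syy)
        r2_adj = 1 - (1 - r2) * (n - 1) / (n - 2)
        if best is None or r2_adj >= max_r2a - 0.0001:
            best = (n, -slope, r2_adj)       # more points within tolerance win
        max_r2a = max(max_r2a, r2_adj)
    return best                              # (points used, lambda_z, adj. R2)
```

Applied to the example data further down in this post it selects five points (λz ≈ 0.1713) – the same choice WinNonlin makes.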

❝ What are your experiences with that?

❝ Does it lead to reasonable choices?


In my experience this method shows a tendency to include too many points – it regularly includes even Cmax/tmax.
\(\small{R^2}\) is a terribly bad parameter for assessing the ‘quality of fit’; we had a rather lengthy discussion at David Bourne's PKPD-List in 2002. To quote myself: […] we can see the dependency of \(\small{R^2}\) on \(\small{n}\); e.g., \(\small{R^2=0.996}\) for \(\small{n=4}\) reflects the same ‘quality of fit’ as \(\small{R^2=0.766}\) for \(\small{n=9}\)!
The adjusted \(\small{R^2}\) doesn't help – since \(\small{R^2_\text{adj}=0.994}\) (\(\small{n=4}\)) and \(\small{R^2_\text{adj}=0.733}\) (\(\small{n=9}\)).
Another discussion, in the context of calibration, almost became a flame war; it lasted two weeks and ended just yesterday.

Interestingly enough, the automated method is not mentioned with a single word in Section ‘2.8.4 Strategies for estimation of lambdaz’.[4] Dan Weiner is Pharsight’s Chief Technology Officer…

❝ Are there any other methods (automatic or half automatic) used?


IMHO no.

❝ I think that, in view of the standardization of the pharmacokinetic evaluation within the framework of bioequivalence studies, an automated method would be desirable.

❝ This would remove the subjectivity in estimating the AUC values, especially the extrapolated part, if possible.


Yes, but on the other hand, if the procedure is laid down in an SOP / the protocol, no problems are to be expected (at least I haven’t had a single request from regulators in the last 27 years). You may also find this thread interesting.
My personal procedure:
  • Inclusion of at least three points
  • Fits are accepted if \(\small{p_R\leq0.05}\), where \(\small{\widehat{t}=\left | R \right |\times\sqrt{\frac{n-2}{1-R^2}}}\) is tested one-sided against \(\small{t_{0.05,n-2}}\).
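This criterion fits in a few lines; a minimal, self-contained Python sketch (the one-sided 5% critical t-values for small degrees of freedom are tabulated inline to avoid dependencies; a full p-value would need a t-distribution CDF, e.g., R’s pt()):

```python
import math

# one-sided 5% critical values of Student's t for df = 1...8
T_CRIT_05 = {1: 6.314, 2: 2.920, 3: 2.353, 4: 2.132,
             5: 2.015, 6: 1.943, 7: 1.895, 8: 1.860}

def fit_accepted(r2, n):
    """Accept the fit if t-hat = |R|*sqrt((n-2)/(1-R^2)) exceeds
    t(0.05, n-2); this is equivalent to p_R <= 0.05 (one-sided)."""
    df = n - 2
    t_hat = math.sqrt(r2) * math.sqrt(df / (1 - r2))
    return t_hat, t_hat > T_CRIT_05[df]
```

For the n = 5 fit of the example below (R² = 0.999584) this gives t̂ ≈ 84.9, reproducing the t^ row of the table.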
Although Pharsight gives this nice sentence about the responsibility of the user, all automated procedures may lead to ‘push-the-button PK’ – or ‘Mickey Mouse pharmacokinetics’… © Nick Holford :lol2:
IMHO they are absolving themselves of responsibility (…we have told you that…) and misleading users into applying the automated procedure. In the NCA wizard > Lambda z Ranges > Lambda z Calculation Method, ⦿ Best Fit is checked by default. I’m afraid I observe a tendency that unwary users simply love to click themselves through all windows as fast as possible…

An example (real data from a study with very little variability; only data following tmax):
+------+--------+
| time |  conc. |
+------+--------+
|  2.5 | 5.075  |
|  3   | 4.89   |
|  3.5 | 5.025  |
|  4   | 4.93   |
|  6   | 4.12   |
|  8   | 2.975  |
| 10   | 2.055  |
| 12   | 1.405  |
| 24   | 0.1895 |
+------+--------+


Comparison of fits:
+----------+-------------+-------------+------------+-------------+
| n        |      3      |      4      |      5     |      6      |
+----------+-------------+-------------+------------+-------------+
| lambda-z |  0.169106   |  0.170921   |  0.171285  |  0.167192   |
| SE       |  0.002800   |  0.002754   |  0.002018  |  0.004257   |
| CV%      |  1.655627   |  1.611108   |  1.177875  |  2.54598    |
| T1/2     |  4.098903   |  4.055373   |  4.046743  |  4.145812   |
| R^2      |  0.999726   |  0.999481   |  0.999584  |  0.997414   |
| R^2 adj  |  0.999452   |  0.999222   |  0.999445  |  0.996767   |
| t^       | 60.4000909  | 62.0691045  | 84.8986909 | 39.2775774  |
| p(R)     |  0.00526954 |  0.00012973 |  1.801E-06 |  1.2551E-06 |
+----------+-------------+-------------+------------+-------------+


WinNonlin chooses 5 data points, but why?
\(\small{R^2_\text{adj}}\) for 3 data points (0.999452) > \(\small{R^2_\text{adj}}\) for 5 data points (0.999445). Obviously the rule ‘if the difference is less than 0.0001, use the larger n’ was applied. But since we are interested in estimating the terminal half life, and not in getting a high R2, IMHO we should apply Occam’s razor!
On the other hand, I would have chosen 5 data points as well. Hans Proost’s suggestion of using the minimum SE of lambdaz would also lead to n=5.

Final remarks:
  • Inclusion of Cmax/tmax should be avoided,
  • selection of data points should be done blinded for treatment, and
  • consistently across subjects - therefore some kind of EDA before fitting is strongly recommended.
References:
  1. NR Draper and H Smith
    Applied Regression Analysis
    John Wiley, New York, 3rd ed 1998
  2. FJ Anscombe and JW Tukey
    The Examination and Analysis of Residuals
    Technometrics 5/2, 141-160 (1963)
  3. Boxenbaum HG, Riegelman S and RM Elashoff
    Statistical Estimations in Pharmacokinetics
    J Pharmacokin Biopharm 2/2, 123-148 (1974)
  4. J Gabrielsson and D Weiner
    Pharmacokinetic and Pharmacodynamic Data Analysis: Concepts and Applications
    Swedish Pharmaceutical Press, Stockholm, pp 167–169 (4th edition 2006)

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Ohlbe

France,
2008-02-03 00:00

@ Helmut
Posting: # 1574

 Methods for calculation of half lives

Dear HS,

❝ Personally I'm sticking to 'eyeball PK' in this respect. The standard textbook on regression emphasizes the importance of visual inspection of the fit.[1] Other rather old, but still valid, references are given below.[2,3]


Yes, I agree! And I'm also not convinced by R2.

What about Akaike's criterion? I think Kinetica uses it, right?

Regards
Ohlbe
Helmut

Vienna, Austria,
2008-02-03 14:49

@ Ohlbe
Posting: # 1575

 Minimum AIC?!

Dear Ohlbe!

❝ What about Akaike's criterion?


Minimum AIC is only useful in selecting between rival models (e.g., in PK one- vs. two-compartment, in PD simple Emax vs. sigmoidal Emax).*

For the example we get…
+-----+---------+---------+---------+---------+
|  n  |    3    |    4    |    5    |    6    |
+-----+---------+---------+---------+---------+
| AIC |  7.7894 | 12.3142 | 18.3473 | 20.9395 |
+-----+---------+---------+---------+---------+


… which would suggest n=3 as the ‘best’!
Looking at the definition of AIC
   AIC = n × ln (SSQ) + 2p
it’s clear that, since in estimating lambdaz the number of estimated parameters p is fixed at 2 (intercept, slope), AIC essentially reduces to a transformation of the residual sum of squares times the number of data points.
AIC is quite convenient in compartmental modeling (as compared to the alternative F-test, because no calculations are needed), but in the given problem it does not help.

❝ I think Kinetica uses it, right?


At least not in the estimation of lambdaz; strangely enough there’s a parameter ‘G’ in the output which is neither described in the online-help nor in the manual (v4.1.1, 2007).
I will try to find out in the next days what this parameter might be… :-D


  • Yamaoka K, Nakagawa T and T Uno
    Application of Akaike's Information Criterion (AIC) in the Evaluation of Linear Pharmacokinetic Equations
    J Pharmacokin Biopharm 6/2, 165–75 (1978)

Helmut

Vienna, Austria,
2008-02-04 02:12

@ Helmut
Posting: # 1576

 G (Kinetica) = R²adj (WinNonlin)

Dear Ohlbe!

❝ … strangely enough there’s a parameter ‘G’ in the output which is neither described in the online-help nor in the manual (v4.1.1, 2007).

❝ I will try to find out in the next days what this parameter might be…


OK, I ran the example in Kinetica; the parameter ‘G’ in the output is numerically identical to R2adj.

Just another quote supporting “eye-ball PK”:*

‘The selection of the most suitable time interval cannot be left to a programmed algorithm based on mathematical criteria, but necessitates scientific judgment by both the clinical pharmacokineticist and the person who determined the concentrations and knows about their reliability.’



  • D Hauschke, VW Steinijans and I Pigeot
    Bioequivalence Studies in Drug Development: Methods and Applications
    Wiley, New York, pp 20-23 (2007)

d_labes

Berlin, Germany,
2008-02-04 17:43

@ Helmut
Posting: # 1577

 Methods for calculation of half lives

Dear HS

thank you for your concise and elaborate answer and opinion.
As I knew the thread in the PKPD list from 2002, I had expected this to a certain extent ;-).

I do not agree with you in all respects. First of all, I think that within the context of BE studies we are not interested in the half life but rather in estimating the AUC part from tlast to infinity.
Half life is only a vehicle to that end.

Second: I have played around with adj. R2 (SAS code from Matos-Pita and Lillo (2005)) and cannot confirm your statement that in general too many points are included by this method. On the contrary, using the data from Sauter et al. I found many cases in which the method stops too early (starting with 3 points) if these points lie very well on the linear part.
This is especially the case for studies with low variability and 'good-natured' concentration–time courses.

Third: I do not share your opinion regarding AIC. Akaike's information criterion is not only used for model discrimination (with different numbers of model parameters), although this is the commonly known usage.

Below you can find two references on its usage as an outlier test.

Genshiro Kitagawa
On the Use of AIC for the Detection of Outliers
Technometrics, Vol. 21, No. 2 (May, 1979), pp. 193-199

Pynnönen, S. (1992).
Detection of outliers in regression analysis by information criteria.
Proceedings of the University of Vaasa. Discussion Papers 146, 8 p.
http://lipas.uwasa.fi/~sjp/Abstracts/dp146.pdf

Google "outlier AIC" to find many more references.

Based on this, one can imagine a method for choosing the linear part stepwise, based on a test of whether the next included point is an 'outlier' to the linear model:
Begin with a maximum number of points and leave the point with the lowest time out. If AIC decreases, leave one more point out, and so on.
This should fulfill your demand for parsimony.
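A minimal Python sketch of this stepwise idea (one possible reading, not validated code; note that the full AIC formula given further down in this post must be used, because n changes from step to step):

```python
import math

def stepwise_lambda_z(t, c):
    """Stepwise sketch: start with all points after tmax, drop the
    earliest point as long as the AIC of the log-linear fit decreases,
    stop otherwise (or when only 3 points are left). Uses the full
    formula AIC = n*(ln(2*pi)+1) + n*ln(RSS/n) + 2*p with p = 2."""
    def aic(ts, ys):
        n = len(ts)
        mt, my = sum(ts) / n, sum(ys) / n
        sxx = sum((x - mt) ** 2 for x in ts)
        sxy = sum((x - mt) * (y - my) for x, y in zip(ts, ys))
        syy = sum((y - my) ** 2 for y in ys)
        rss = syy - sxy ** 2 / sxx              # residual sum of squares
        return n * (math.log(2 * math.pi) + 1) + n * math.log(rss / n) + 2 * 2
    y = [math.log(x) for x in c]
    k = 0                                       # index of first point used
    while len(t) - k > 3 and aic(t[k + 1:], y[k + 1:]) < aic(t[k:], y[k:]):
        k += 1                                  # dropping the point improved AIC
    return len(t) - k                           # number of points retained
```

Applied to the example data from earlier in the thread it retains five points.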

Your formula for AIC in the reply to Ohlbe is only correct for comparing AICs with the same number of data points n.
From the original definition
  AIC = -2*log-likelihood + 2*p
we derive
  AIC = n*(ln(2*pi*RSS/n)+1) + 2*p    (2)
(http://en.wikipedia.org/wiki/Akaike_information_criterion)
      = n*(ln(2*pi)+1) + n*ln(RSS/n) + 2*p
where RSS = residual sum of squares of the errors.

The first term is only constant if models with the same n are compared.
But most references on regression use AIC = n*ln(RSS/n) + 2*p.

By the way: I cannot numerically verify your results. Mine are (SAS Proc Reg, ln C versus time):
+----------+------------+-------------+------------+-------------+
| n        |      3     |      4      |      5     |      6      |
+----------+------------+-------------+------------+-------------+
| RMSE     | 0.0299805  | 0.03428349  | 0.0285321  | 0.06775109  |
| AIC 1    | -20.339    | -25.757     |  -34.121   |  -30.736    |
| AIC 2    | -11.825    | -11.633     |  -14.439   |  - 5.391    |
+----------+------------+-------------+------------+-------------+
RSS = RMSE^2*(n-2)
AIC 1: n*ln(RSS/n)+4 (SAS AIC); AIC 2: full formula (2)

Again 5 points will be chosen.
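The AIC 1 column can be recomputed directly from the example data; a small Python sketch (log-linear fits through the last n points, SAS-style AIC = n*ln(RSS/n) + 2*p, which differs from the full formula (2) only by the n-dependent constant term):

```python
import math

def aic_table(t, c, p=2):
    """Recompute AIC = n*ln(RSS/n) + 2*p for log-linear fits through
    the last 3...6 points of the profile (t, c = post-tmax data)."""
    y = [math.log(x) for x in c]
    out = {}
    for n in (3, 4, 5, 6):
        ts, ys = t[-n:], y[-n:]
        mt, my = sum(ts) / n, sum(ys) / n
        sxx = sum((x - mt) ** 2 for x in ts)
        sxy = sum((x - mt) * (yi - my) for x, yi in zip(ts, ys))
        syy = sum((yi - my) ** 2 for yi in ys)
        rss = syy - sxy ** 2 / sxx              # residual sum of squares
        out[n] = n * math.log(rss / n) + 2 * p
    return out
```

This reproduces the AIC 1 row (-20.339, -25.757, -34.121, -30.736), with the minimum at n = 5.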

Fourth: Regarding your method of choosing the points, I wonder why you chose 5 points. Your criteria
- at least 3 points
- not including tmax, Cmax
- fit with p(R) < 0.05
are fulfilled with 3, 4, 5 and 6 points. The rest is your opinion ('informed' view).
This is the subjectivity factor I meant.
On the other hand, I am convinced that man is the best pattern recognizer (at least in 3D ;-)), if appropriately trained and of 'good will'. I have received a number of questions from people doing PK analyses, asking for aid from a statistical point of view, especially in cases of not so 'well-behaved' concentration–time curves.

For the presented example I think there is no substantial influence at all, as we can see when regarding the lambdaz. (By the way, I think your lambdaz column is t1/2. :ponder:)
But for other curves it can make a difference.

Fifth and last comment:
Your emphasis "It is the user's responsibility to evaluate the appropriateness of the estimated value" is totally correct, but applies also to

❝ ‘eyeball-PK’ :-P



Edit: References linked. [HS]

Regards,

Detlew
Helmut

Vienna, Austria,
2008-02-04 18:49

@ d_labes
Posting: # 1579

 MAIC and beyond...

Dear D. Labes,

thank you for your fruitful comments!
Unfortunately I’ll be out of office until 29 Feb, so just a few comments - I will jump into the details later.
I hope that other members of the forum will make some contributions in the meantime…

❝ […] within the context of BE studies we are not interested in the half life but rather in estimating the AUC part from tlast to infinity.

❝ Half life is only a vehicle to that end.


Definitely.

❝ Second: I have played around with adj. R2 (SAS code from Matos-Pita and Lillo (2005)) and cannot confirm your statement that in general too many points are included by this method. On the contrary, using the data from Sauter et al. I found many cases in which the method stops too early (starting with 3 points) if these points lie very well on the linear part.

❝ This is especially the case for studies with low variability and 'good-natured' concentration–time courses.


Yes, but on the other hand, if we observe low variability (which occurs rarely enough close to the LLOQ), why should we include more points than necessary to describe lambdaz? OK, the term ‘necessary’ is quite spongy…

❝ Third: …


I will come up to this point in March. Thanks for the references!

❝ By the way: I cannot numerically verify your results.


I must confess, this was just a quick shot in R. Obviously I made a mistake in my coding (not using the correct extractAIC(lm)).

❝ Mine are (SAS Proc Reg, ln C versus time):

❝ +----------+------------+-------------+------------+-------------+
❝ | n        |      3     |      4      |      5     |      6      |
❝ +----------+------------+-------------+------------+-------------+
❝ | RMSE     | 0.0299805  | 0.03428349  | 0.0285321  | 0.06775109  |
❝ | AIC 1    | -20.339    | -25.757     |  -34.121   |  -30.736    |
❝ | AIC 2    | -11.825    | -11.633     |  -14.439   |  - 5.391    |
❝ +----------+------------+-------------+------------+-------------+

❝ RSS = RMSE^2*(n-2)

❝ AIC 1: n*ln(RSS/n)+4 (SAS AIC); AIC 2: full formula (2)


❝ Again 5 points will be chosen.


R (1) and WinNonlin (2) come up with:
+----------+------------+-------------+------------+-------------+
| n        |      3     |      4      |      5     |      6      |
+----------+------------+-------------+------------+-------------+
| AIC (1)  | -20.33909  | -25.75732   | -34.12138  |  -30.73577  |
| AIC (2)  | -17.04325  | -20.21214   | -26.07419  |  -19.98521  |
+----------+------------+-------------+------------+-------------+

… which again would pick 5 points (the results from R coincide with SAS’s).

❝ … I wonder why you chose 5 points. Your criteria […] are fulfilled with 3, 4, 5 and 6 points. The rest is your opinion ('informed' view).

❝ This is the subjectivity factor i meant.


Absolutely.

❝ On the other hand, I am convinced that man is the best pattern recognizer (at least in 3D ;-)), if appropriately trained and of 'good will'. I have received a number of questions from people doing PK analyses, asking for aid from a statistical point of view, especially in cases of not so 'well-behaved' concentration–time curves.


Fully agree. I’m with you in the desire to establish some kind of statistically sound procedure which would at least support the ‘untrained eye’.

❝ For the presented example I think there is no substantial influence at all, as we can see when regarding the lambdaz. (By the way, I think your lambdaz column is t1/2. :ponder:)


Oops; I just corrected the original table. :-(

❝ But for other curves it can make the difference.


Yes, and these are the nasty ones we regularly have to deal with.

❝ Your emphasis "It is the user's responsibility to evaluate the appropriateness of the estimated value" is totally correct but applies also to

❝ ❝ ‘eyeball-PK’ :-P

  1. This emphasis was a quote from Pharsight.
  2. Sure.
It just starts to become interesting; your suggestion of a stepwise procedure guided by AIC looks really promising! I think we should start with a couple of simulations:
  • different ‘noise’ (type, degree)
  • 2-compartment models
  • &c &c ad libitum

Helmut

Vienna, Austria,
2008-05-10 15:56

@ d_labes
Posting: # 1844

 TTT method

Dear Detlew,

just discovered a recent paper:

Scheerans C, Derendorf H, Kloft C. Proposal for a Standardised Identification of the Mono-Exponential Terminal Phase for Orally Administered Drugs. Biopharm Drug Dispos. 2008; 29(3): 145–57. doi:10.1002/bdd.596


The authors recommend the so-called TTT (two times tmax) method for identifying the mono-exponential terminal phase in the case of oral drug administration. The rules for selecting sample points in the estimation of lambdaz are quite simple:
First point: 2× tmax – or, if no sampling point is available there, the subsequent one in the profile
Last point: last measured concentration (C ≥ LLOQ)

A large Monte Carlo study was performed to compare the TTT method to the maximum adjusted R² algorithm (ARS). TTT was found to be superior to ARS, both in terms of bias and precision.

The method is not intended as an automated procedure – nor for models with more than one compartment, which show up as a multiphasic decline in a lin/log plot.
Quote:

“It should be emphasised that the TTT method has been introduced in this paper to provide a reasonable tool to support visual curve inspection for reliably identifying the mono-exponential terminal phase. Moreover, the TTT method should not be utilised without visual inspection of the respective concentration-time course. Thus, before using this new approach the monophasic shape post the peak of the curve has to be checked visually by means of a semilogarithmic diagram.”


P.S.: For my example data set five points would be chosen again.
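The selection rule itself fits in a few lines of Python. Note that tmax for the example data set is not given in this thread, so tmax = 2.5 below is merely an assumption that reproduces the five-point choice:

```python
def ttt_selection(t, c, tmax):
    """TTT rule: first point at 2*tmax (or, if no sample exists exactly
    there, the next later sampling time); last point = tlast."""
    return [(ti, ci) for ti, ci in zip(t, c) if ti >= 2 * tmax]
```

With the example profile and the assumed tmax = 2.5, the rule keeps t = 6, 8, 10, 12 and 24, i.e., five points.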

d_labes

Berlin, Germany,
2008-05-16 11:12

@ Helmut
Posting: # 1851

 TTT method

Dear HS,

Thanks. I will have a look at the paper soon. Perhaps we will then have things to discuss.

Regards,

Detlew
Helmut

Vienna, Austria,
2008-05-16 19:21

@ d_labes
Posting: # 1852

 TTT method

Dear DLabes!

First impressions from my side:
The authors compared results from the TTT method to the ARS algorithm and claimed ‘better performance’ in terms of bias (relative error RE% = 100×[estimate−true]/true).
M$ Excel’s pseudo-random number generator NORMINV(RAND(), mu, sigma) is known to be suboptimal. Generating 50 sets of 10000 random samples with different seeds each (i.e., 500000 samples), I got a maximum absolute bias of 2.3% (contrary to the 1% claimed by the authors). Therefore a significance limit of a >2% difference between methods IMHO is too low (5% is more realistic).

However, in repeating their simulation ‘Study A’ (not only 10000 ‘profiles’, but 20 sets of 10000 each), I could confirm their results in terms of bias and precision (without setting foot in the deep puddle of statistically comparing data sets):

lambdaz
         TTT-method
     mean    min     max
RE  -1.06%  -0.80%  -1.45%
CV  12.12%  11.97%  12.43%
       ARS-algorithm
     mean    min     max
RE  +5.08%  +4.58%  +5.59%
CV  20.89%  20.53%  21.28%


n (number of data points chosen)
         TTT-method
median   2.5% quant.  97.5% quant.
  8          5           10
       ARS-algorithm
median   2.5% quant.  97.5% quant.
  8          3           10


The scary histograms of the number of sampling points selected by the two methods clearly favor TTT.
For instance, in my first data set ARS chose the last three data points in 23.73% of cases, whereas TTT suggested the same number in only 0.01%… In the upper range, ARS selected ≥10 data points in 40.40% of cases, whereas TTT came up with the same n in just 2.78% (≥11: ARS 19% – TTT 0%!).
Given the bias and precision seen in the simulations, my personal reluctance against the automated ARS seems to be justified. I always suspected that the algorithm selects too many data points, and was surprised to see that my prejudice was not only justified, but that the contrary (too few data points) also holds.

Lessons learned:
  • Two times tmax is a good starting point, but
  • visual inspection is still suggested (not an automated procedure only).
  • ARS leads to ‘bad’ results, both in terms of bias and precision.
  • TTT is not suitable for profiles with two distinctive phases (the intersection is suggested as the first value).
I’m considering setting up new simulations with a log-normal error model, and also using either a better pseudo-random generator or true random numbers obtained from www.random.org.
Another option I’m considering for testing is the method ‘lee’ in the package PK for R by Martin J. Wolfsegger and Thomas Jaki.*


  • Wolfsegger MJ. The R Package PK for Basic Pharmacokinetics. Biometrie und Medizin 2006; 5: 61–8. online resource

d_labes

Berlin, Germany,
2008-05-23 17:43

@ Helmut
Posting: # 1869

 TTT method

Dear HS,

thanks for sharing your first impression.
Here are my first thoughts about the paper.

1. It seems that the ARS was not restricted to the time points later than tmax – at least not for creating figure 2.
No one would implement the algorithm in that way!

2. Besides your concern about Excel's random number generator, I argue:
The simulation does not reflect what we see in the real world for different subjects.
It is the simulation of thousands of c-t curves of one subject, with the 'true' parameters given for the concentration–time curve (one-compartment or two-compartment).
Thus it answers the question: how good is the estimation of one lambdaz and/or one AUC(0-inf)?
I doubt whether we can draw any conclusions from that.

3. It is true that, scientifically, we should choose the method with the best properties in terms of bias and/or variability.
But I cannot see any practical implications of the reported differences, especially in the context of bioequivalence studies.
A bias of 5% for the residual area amounts to at most 1% absolute if you have planned your sampling times properly, so that AUC(0-tlast) covers at least 80%. It makes a difference only if you are borderline.

I argue that the bioequivalence test as the last end is not at all affected in any way.

4. The statement that both methods deliver statistically significant results is a fundamental misunderstanding of what statistics is for. It is easy to obtain significance for tens of thousands of values with low variability.
But significance is not relevance.
I think no one would argue that AUC values of, for instance, 296.59 and 297.22 (AUCinf from study A) are different.

5. I cannot verify the claim that TTT should not be applied if the concentration–time curve is biphasic after Cmax, at least not from the reported results.
The bias in lambdaz for models B and C is -1.15% or 1.39% (according to table 1). This is not different from 'Study A', for which TTT is recommended.
What's the point?

6. The suggestion that a low N=3, which occurs relatively often in the ARS algorithm, is associated with a higher variability is a misinterpretation of the extra simulations. There, all calculations were stopped at N=3 and so on. Thus it can only be regarded as a criticism of choosing a low predefined fixed number of points.

In my experience (with only a limited number of tries), the ARS stops only if the first 3-point fit is superior.
Whether this leads to a higher variability remains open.

7. Looking forward to your simulations.
I suggest that the TTT method and a fit-statistics algorithm (ARS or AICc or whatever) should be combined to get the best of both.
TTT could be used to restrict the number of points used by the fit-statistics algorithm.
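One way such a combination could look in Python (a sketch only: TTT pre-selects the candidate window, then a strict maximum-adjusted-R² search runs inside it; the tmax input is hypothetical, and tmax = 2.5 for the example data is an assumption):

```python
import math

def ttt_then_ars(t, c, tmax):
    """Sketch of the suggested combination: TTT restricts the candidate
    window to t >= 2*tmax, then a strict maximum-adjusted-R2 search (ARS)
    picks the number of points for the log-linear fit inside that window."""
    win = [(ti, ci) for ti, ci in zip(t, c) if ti >= 2 * tmax]
    tw = [w[0] for w in win]
    yw = [math.log(w[1]) for w in win]
    best = None                                 # (adjusted R2, points used)
    for n in range(3, len(tw) + 1):
        ts, ys = tw[-n:], yw[-n:]
        mt, my = sum(ts) / n, sum(ys) / n
        sxx = sum((x - mt) ** 2 for x in ts)
        sxy = sum((x - mt) * (y - my) for x, y in zip(ts, ys))
        syy = sum((y - my) ** 2 for y in ys)
        r2 = sxy ** 2 / (sxx * syy)
        r2_adj = 1 - (1 - r2) * (n - 1) / (n - 2)
        if best is None or r2_adj > best[0]:
            best = (r2_adj, n)
    return best[1]                              # number of points for lambda_z
```

On the example data (with tmax = 2.5 assumed) this returns 3 points: the strict maximum stops early, the 'stops too early' behaviour noted earlier in the thread; a tie-break in favour of more points (as in WinNonlin's 0.0001 rule) would pick 5 instead.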

A side question on your simulations:
You report a positive bias for the ARS method.
What is the source of that?
The simulations where ARS does not go beyond N=3?

Regards,

Detlew
hiren379

India,
2012-07-11 13:11

@ d_labes
Posting: # 8925

 TTT method

Thank you all for such nice information.
I would like to post further on this…

Consider the case of an IV infusion at a very slow rate. Once the infusion is over, pharmacologically the only phase running in the human body is elimination of the drug, as absorption is complete (end of infusion).

Shouldn't we then consider all the time points after the end of the infusion, as in this case we are very sure after which particular time only elimination rules?

If we go by regression it would be biased, as all the points after the infusion is over should be considered: you are very sure where the lone elimination phase begins, in contrast to extravascular dosage forms, where it is better to start your Kel calculation from the bottom, as you are not sure when absorption stops.
:confused::confused::confused:
Ohlbe

France,
2012-07-11 13:37

@ hiren379
Posting: # 8927

 TTT method

Dear Hiren,

❝ Once the infusion is over pharmacologically the only phase running in human body is elimination of drug as absorption is completed (end of infusion). Then shouldn't we consider all the time points after the end of infusion time as here in this case we are sure that after what particular time only elimination happens.


No: you may have a distribution phase first. Infusion does not mean monocompartmental PK. You still need to only consider the terminal elimination phase.

Regards
Ohlbe
hiren379

India,
2012-07-11 13:44

@ Ohlbe
Posting: # 8928

 TTT method

❝ No: you may have a distribution phase first. Infusion does not mean monocompartmental PK. You still need to only consider the terminal elimination phase.


Thanks, but will the same work for a non-compartmental model also? And if you are so worried about including the time of distribution, then the postulate of instantaneous distribution is violated when using a non-compartmental model.
:confused::confused:

And if u are assuming instantaneous distribution then one must take all time points after completion of infusion to avoid bias created by statistics of R2

And I knew that distribution would come into play, hence I mentioned a slow infusion.
Ohlbe

France,
2012-07-11 14:16

@ hiren379
Posting: # 8929

 TTT method

Dear Hiren,

❝ And if u are assuming...


Please avoid SMS spelling in your messages on this forum.

❝ Thanks but will the same work for Non compartmental model also?


Let me replace multicompartmental with multiphasic, then?

❝ And if you are so worried about including the time of distribution then the postulate of instantaneous distribution is violated


Why make such a postulate? Look at the PK profiles first.

❝ And if u are assuming instantaneous distribution then one must take all time points after completion of infusion to avoid bias created by statistics of R2


1. F*** R2
2. I don't assume instantaneous distribution or monophasic elimination without looking at the PK profiles for each subject.
(Also look at Helmut's first message in this same thread).

Regards
Ohlbe
hiren379

India,
2012-07-11 14:53

@ Ohlbe
Posting: # 8930

 TTT method

❝ Please avoid SMS spelling in your messages on this forum.


Sorry, I am new to this forum.

❝ Let me replace multicompartmental with multiphasic, then ?


Then how can you assume monophasic elimination when calculating Kel for extravascular drug administration? Maybe when you are calculating Kel for a tablet, distribution is still going on?

❝ 1. F*** R2


Strongly with you for this

❝ 2. I don't assume instantaneous distribution or monophasic elimination without looking at the PK profiles for each subject.

❝ (Also look at Helmut's first message in this same thread).


I also believe in this

So I think the conclusion is that manual selection coupled with a statistical method is best, irrespective of the dosage form…
Is it OK?

Now if I am using this, and I know that my drug distributes instantaneously, should I still go for manually finding the linear part? Will that not be a bias???

:confused::confused:
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2012-07-11 16:37
(4278 d 19:05 ago)

@ hiren379
Posting: # 8931
Views: 22,042
 

 Eyeball-PK

Dear Hiren!

❝ ❝ ❝ ❝ ❝ ❝ […] Once the infusion is over pharmacologically the only phase running in human body is elimination of drug as absorption is completed (end of infusion).


There is no absorption if the drug is administered directly to the central compartment. In rare cases you may notice an increase of concentrations after the end of infusion. This may occur if the drug precipitates in the vein. Then you have a true absorption process (actually dissolution). Dissolution may take place either in the vein itself or (downstream) in the lungs.

❝ ❝ Please avoid SMS spelling in your messages on this forum.

❝ ❝ Sorry, as I am new to this forum


… still a newbie after >2½ years? ;-)

❝ Then how can you assume monophasic kinetics when calculating Kel for extravascular drug administration? Maybe when you are calculating Kel for a tablet, distribution is still ongoing??


For monophasic PK see footnote 1 of this post. Note that the TTT-method was developed for monophasic PK only – justified by irrelevant absorption after the inflection point of the profile. For multiphasic profiles the authors suggest as the starting point the intersection of the last phase with the preceding one. This point might be difficult to find, especially if the distribution phase is not substantially faster than elimination. Any algorithm might fail here; visual inspection of the fit is mandatory (see the quotes from the TTT-paper above and from Hauschke et al. above).
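
For the monophasic case the TTT rule is simple enough to automate. A minimal sketch in Python (my own illustration of the published rule, not the authors’ code; NumPy assumed available):

```python
import numpy as np

def lambda_z_ttt(t, c):
    """TTT ('two times tmax') rule for monophasic profiles:
    log-linear regression over all quantifiable points from 2*tmax
    to the last sampling time."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    tmax = t[np.argmax(c)]
    mask = (t >= 2 * tmax) & (c > 0)
    slope, intercept = np.polyfit(t[mask], np.log(c[mask]), 1)
    lz = -slope                        # lambda_z (1/time unit)
    return lz, np.log(2) / lz          # lambda_z and terminal half-life

# monophasic one-compartment oral profile: ka = 1/h, ke = 0.1/h
t = np.array([0.5, 1, 2, 3, 4, 6, 8, 12, 16, 24])
c = 10 * (np.exp(-0.1 * t) - np.exp(-1.0 * t))
lz, t_half = lambda_z_ttt(t, c)        # lz ~ 0.1/h, t_half ~ 6.9 h
```

For multiphasic profiles this sketch would have to start at the intersection of the last phase with the preceding one instead of 2×tmax – exactly the point where any algorithm may fail and visual inspection takes over.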

❝ So I think the conclusion is that manual selection coupled with a statistical method is best, irrespective of the dosage form…

❝ Is it OK?


I would do it the other way ’round. Start with an automatic method and adjust the selected time points if deemed necessary.

❝ Now if I am using this, and I know that my drug distributes instantaneously, should I still go for manually finding the linear part? Will that not be a bias???


Concentrations are not measured without error. It might be that – especially with long half-lives and values close to the LLOQ – any automatic method fails. To avoid bias I suggest performing the estimation of λz blinded to treatment and importing the randomization afterwards. At least this is what I do.
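
As a starting point for such a semi-automatic workflow, the stepwise adjusted-R² search quoted from WinNonlin’s online help at the top of this thread can be sketched as follows (a simplified re-implementation for illustration, not Pharsight’s code – the selected window must still be inspected visually):

```python
import numpy as np

def lambda_z_adj_r2(t, c, min_points=3, tol=1e-4):
    """Stepwise search in the spirit of WinNonlin (simplified sketch):
    regress ln(C) vs. t for the last 3, 4, 5, ... points after Cmax and
    keep the fit with the largest adjusted R2; among fits within `tol`
    of the maximum, prefer the one using more points."""
    t, c = np.asarray(t, float), np.asarray(c, float)
    t, c = t[c > 0], c[c > 0]                  # zero concentrations are excluded
    i_max = int(np.argmax(c))
    fits = []                                  # (n, adj_r2, lambda_z, t_start)
    for start in range(len(t) - min_points, i_max, -1):
        ts, ys = t[start:], np.log(c[start:])
        n = len(ts)
        slope, intercept = np.polyfit(ts, ys, 1)
        ss_res = float(np.sum((ys - (slope * ts + intercept)) ** 2))
        ss_tot = float(np.sum((ys - ys.mean()) ** 2))
        r2 = 1.0 - ss_res / ss_tot
        adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - 2)
        fits.append((n, adj_r2, -slope, ts[0]))
    best = max(f[1] for f in fits)
    n, adj_r2, lz, t_start = max(f for f in fits if f[1] >= best - tol)
    return lz, t_start, n

# toy biphasic profile: fast distribution (1.0/h) over slow elimination (0.1/h)
t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 16, 24])
c = 10 * np.exp(-0.1 * t) + 50 * np.exp(-1.0 * t)
lz, t_start, n = lambda_z_adj_r2(t, c)         # lz close to 0.1/h
```

On noise-free data like this the search correctly stays clear of the distribution phase; with real data near the LLOQ it can pick nonsense windows, which is why the eyeball check remains mandatory.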

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
hiren379
★    

India,
2012-07-11 17:28
(4278 d 18:14 ago)

@ Helmut
Posting: # 8932
Views: 22,021
 

 Eyeball-PK

Dear HS,
Thanks for your inputs

❝ There is no absorption if the drug is administered directly to the central compartment. In rare cases you may notice an increase of concentrations after the end of infusion. This may occur if the drug precipitates in the vein. Then you have a true absorption process (actually dissolution). Dissolution may take place either in the vein itself or (downstream) in the lungs.


Ok, I need a little bit of further clarification, and for that please consider these properties of a drug (BE study, NCA used):
  1. The drug is a pure solution; no precipitation or anything like that. I mean a pure solution form.
  2. Instantaneous distribution happens in your central compartment.
  3. No compartment-specific distribution, and no movement in and out of any compartment.
So if 1, 2 and 3 are correct, can't we consider that the elimination phase is the only working phase as soon as the infusion is over?

And if yes, since you know from what time the elimination phase has started, would it not be a bias if you select just 3 or 4 points from the bottom of the curve to find Kel, irrespective of whether you go for R2 or manual selection?

One has to select all the time points when characterizing the elimination phase.

For extravascular administration, since we are unable to tell where elimination starts, we start from the bottom of the curve and go up.

❝ ❝ ❝ Please avoid SMS spelling in your messages on this forum.

❝ ❝ Sorry, as I am new to this forum


❝ … still a newbie after >2½ years? ;-)


I am not yet addicted to this forum, but I am sure I will be 100%.
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2012-07-11 17:47
(4278 d 17:56 ago)

@ hiren379
Posting: # 8933
Views: 22,056
 

 Eyeball-PK

Dear Hiren!

❝ Ok, I need a little bit of further clarification, and for that please consider these properties of a drug (BE study, NCA used)


❝ 1. The drug is a pure solution; no precipitation or anything like that. I mean a pure solution form.


The fact that your drug is completely in solution doesn’t prevent precipitation in the body. Blood ≠ Ringer’s solution. Apart from a small increase after the end of infusion due to analytical variability, sometimes you see an increase which may last for more than an hour – that’s an indication of precipitation. You have to inspect the individual profiles and should not assume “this is an infusion, therefore Cmax/tmax = end of infusion”.

❝ 2. Instantaneous distribution happens in your central compartment.

❝ 3. No compartment-specific distribution, and no movement in and out of any compartment.


OK

❝ So if 1, 2 and 3 are correct, can't we consider that the elimination phase is the only working phase as soon as the infusion is over?


Still you have to check #1. So my answer is only a “conditional yes”. ;-)

❝ And if yes, since you know from what time the elimination phase has started, would it not be a bias if you select just 3 or 4 points from the bottom of the curve to find Kel, irrespective of whether you go for R2 or manual selection?


Do we really know that? Check the profiles.

❝ One has to select all the time points when characterizing the elimination phase.


Well – at least for i.v. as many as possible, since the analytical error is inversely proportional to the concentration.
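
Just to put a number on that – a toy simulation of my own (assumed values: 10 % analytical CV, hourly sampling after the end of infusion, first-order elimination), showing how much the empirical scatter of the fitted log-linear slope shrinks when more points enter the regression:

```python
import numpy as np

rng = np.random.default_rng(42)
ke, c0, cv = 0.1, 100.0, 0.10      # elimination rate (1/h), conc. at end of infusion, 10 % CV
t = np.arange(1.0, 25.0)           # hourly samples after the end of infusion

def slope_se(n_pts, n_sim=2000):
    """Empirical SD of the log-linear slope fitted to the last n_pts samples."""
    slopes = []
    for _ in range(n_sim):
        # multiplicative (constant-CV) analytical error
        c = c0 * np.exp(-ke * t) * rng.lognormal(0.0, cv, t.size)
        slopes.append(np.polyfit(t[-n_pts:], np.log(c[-n_pts:]), 1)[0])
    return float(np.std(slopes))

se_3, se_all = slope_se(3), slope_se(t.size)   # se_all is much smaller than se_3
```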

❝ For extravascular administration, since we are unable to tell where elimination starts, we start from the bottom of the curve and go up.


Yep.

hiren379
★    

India,
2012-07-11 18:24
(4278 d 17:18 ago)

@ Helmut
Posting: # 8935
Views: 21,978
 

 Eyeball-PK

Dear HS!

❝ The fact that your drug is completely in solution doesn’t prevent precipitation in the body. Blood ≠ Ringer’s solution.


Yes, thanks for drawing my attention to the mighty pH difference between nature's creation and man's effort to become God.

❝ Apart from a small increase after the end of infusion due to analytical variability, sometimes you see an increase which may last for more than an hour – that’s an indication of precipitation.


Thanks for sharing your experience.

❝ You have to inspect the individual profiles and should not assume “this is an infusion, therefore Cmax/tmax = end of infusion”.


Ok

❝ Still you have to check #1. So my answer is only a “conditional yes”. ;-)


OK HS, now if you accept that the elimination phase prevails (OK, conditionally) after completion of the infusion, the drug now behaves like an i.v. bolus. So do you accept that in this hypothetical situation one must include the maximum number of points for Kel estimation, as per what you said about i.v.?

❝ Well – at least for i.v. as many as possible, since the analytical error is inversely proportional to the concentration.


The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz