d_labes ★★★ Berlin, Germany, 20080201 13:58 Posting: # 1572 Views: 24,970 

Dear All, I would like to hear your opinion about the calculation of the terminal half-life, especially about choosing the linear part. Do you do it via an 'informed' look at the concentration–time curve (semi-logarithmic, of course), or do you use an automatic method? As far as I know, WinNonlin has a built-in method using the adjusted R^{2}. What are your experiences with it? Does it lead to reasonable choices? Are there any other methods (automatic or semi-automatic) in use? In view of the standardisation of the pharmacokinetic evaluation within the framework of bioequivalence studies, I think an automated method would be desirable. If possible, this would remove the subjectivity in estimating the AUC values, especially the extrapolated part. — Regards, Detlew 
Helmut ★★★ Vienna, Austria, 20080201 15:59 @ d_labes Posting: # 1573 Views: 24,376 

Dear DLabes! » i would like to hear your opinion about the calculation of terminal half life, especially of choosing the linear part. » » Is it done by you via 'informed' look at the concentration time curve (half logarithmic of course) or do you use any automatic method? Personally I’m sticking to ‘eyeball PK’ in this respect. The standard textbook specialized in regression emphasizes the importance of visual inspection of the fit.^{[1]} Other rather old, but still valid references are given below.^{[2,3]} » As i know, WINNONLIN has built in a method using the adjusted R^{2}. Yes, quoting WinNonlin’s (v5.2, 2007) online help:
[…] Using this methodology, WinNonlin will almost always compute an estimate for lambda_{z}. It is the user’s responsibility to evaluate the appropriateness of the estimated value. My emphases in red… » How are your experiences with that? » Does it lead to reasonable choices? In my experience this method shows a tendency to include too many points – it regularly includes even C_{max}/t_{max}… R² is a terribly bad parameter for assessing the ‘quality of fit’; we had a rather lengthy discussion at David Bourne's PKPD-List in 2002. To quote myself: […] we can see the dependency of R² on n, e.g., R²=0.996 for n=4 reflects the same ‘quality of fit’ as does R²=0.766 for n=9! The adjusted R² doesn't help, since R²_{adj}=0.994 (n=4) and R²_{adj}=0.733 (n=9). Another discussion in the context of calibration almost became a flame war for two weeks and ended just yesterday. Interestingly enough, the automated method is not mentioned with a single word in Section ‘2.8.4 Strategies for estimation of lambda_{z}’^{[4]} – and Dan Weiner is Pharsight’s Chief Technology Officer… » Are there any other methods (automatic or half automatic) used? IMHO no. » I think in view of the standardization of the pharmacokinetic evaluation within the framework of bioequivalence studies an automated method would be desirable. » This would remove the subjectivity in estimating the AUC values, especially the extrapolated part, if possible. Yes, but on the other hand, if the procedure is laid down in an SOP/the protocol, no problems are to be expected (at least I haven’t had a single request from regulators in the last 27 years). You may also find this thread interesting. My personal procedure:
IMHO they are absolving themselves of responsibility (…we have told you that…) and misleading users into applying the automated procedure. In the NCA wizard, Lambda z Ranges > Lambda z Calculation Method > ⦿ Best Fit is checked by default. I’m afraid I observe a tendency for unwary users to simply click themselves through all windows as fast as possible… An example (real data from a study with very little variability; only data following t_{max}):
Comparison of fits: [table not preserved in this copy]
WinNonlin chooses 5 data points – but why? R²_{adj} for 3 data points (0.999452) > R²_{adj} for 5 data points (0.999445). Obviously the rule ‘less than 0.0001 difference → use the larger n’ was applied. But since we are interested in estimating the terminal half-life and not in getting a high R^{2}, IMHO we should apply Occam’s razor! On the other hand, I would have chosen 5 data points as well. Hans Proost’s suggestion of using the minimum SE of lambda_{z} would also lead to n=5. Final remarks:
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
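The ‘Best Fit’ rule quoted from the online help can be sketched in a few lines. This is a minimal illustration only (Python, hypothetical concentrations; `ars_select` and the 1×10⁻⁴ tie rule follow my reading of the quoted help text, not WinNonlin’s actual code):

```python
import numpy as np

def adj_r2(t, lnc):
    """Adjusted R^2 and slope of the ln(C)-vs-t regression."""
    n = len(t)
    slope, intercept = np.polyfit(t, lnc, 1)
    ss_res = np.sum((lnc - (slope * t + intercept)) ** 2)
    ss_tot = np.sum((lnc - lnc.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - 2), slope

def ars_select(t, c, tol=1e-4):
    """Maximum adjusted-R^2 rule: regress over the last 3, 4, ... points;
    among fits within `tol` of the best adjusted R^2, take the largest n."""
    t, lnc = np.asarray(t, float), np.log(np.asarray(c, float))
    fits = []
    for n in range(3, len(t) + 1):
        r2a, slope = adj_r2(t[-n:], lnc[-n:])
        if slope < 0:                       # lambda_z must be positive
            fits.append((n, r2a, slope))
    best = max(r2a for _, r2a, _ in fits)
    n, _, slope = max((f for f in fits if best - f[1] < tol), key=lambda f: f[0])
    return n, -slope                        # points used, lambda_z estimate

# hypothetical mono-exponential decline (10*exp(-0.2*t), slightly rounded)
t = [2, 3, 4, 6, 8, 12, 16, 24]
c = [6.70, 5.49, 4.49, 3.01, 2.02, 0.907, 0.408, 0.0823]
n, lz = ars_select(t, c)
```

With near-noiseless data like these, the tie rule pushes the selection towards many points; with real, noisy terminal concentrations the choice can flip between neighbouring n – which is exactly the behaviour discussed above.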
Ohlbe ★★★ France, 20080202 22:00 @ Helmut Posting: # 1574 Views: 21,610 

Dear HS, » Personally I'm sticking to 'eyeballPK' in this respect. The standard textbook specialized in regression emphasizes the importance of visual inspection of the fit.^{[1]} Other rather old, but still valid references are given below.^{[2,3]} Yes, I agree! And I'm also not convinced by R^{2}. What about Akaike's criterion? I think Kinetica uses it, right? Regards Ohlbe 
Helmut ★★★ Vienna, Austria, 20080203 12:49 @ Ohlbe Posting: # 1575 Views: 21,672 

Dear Ohlbe! » What about Akaike's criterion? Minimum AIC is only useful for selecting between rival models (e.g., in PK 1 ↔ 2 compartments, in PD simple E_{max} ↔ sigmoidal E_{max}). ^{*} For the example we get…
[table not preserved] … which would suggest n=3 as the ‘best’! Looking at the definition of AIC, AIC = n × ln(SSQ) + 2p, it’s clear that – since in estimating lambda_{z} the number of estimated parameters p is fixed at 2 (intercept, slope) – AIC essentially reduces to a transformation of the residual sum of squares times the number of data points. AIC is quite convenient in compartmental modeling (compared to the alternative F-test, because no calculations are needed), but for the given problem it does not help. » I think Kinetica uses it, right? At least not in the estimation of lambda_{z}; strangely enough there's a parameter ‘G’ in the output, which is neither described in the online help nor the manual (v4.1.1, 2007). I will try to find out in the next days what this parameter might be…
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
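What minimum AIC *is* good for – discrimination between rival models – can be sketched as follows (Python with NumPy/SciPy; the two-compartment i.v. data and all parameters are made up). A mono- and a bi-exponential are fitted to a biphasic profile and AIC = n·ln(RSS/n) + 2p compared:

```python
import numpy as np
from scipy.optimize import curve_fit

def mono(t, A, a):                      # one-compartment (mono-exponential)
    return A * np.exp(-a * t)

def bi(t, A, a, B, b):                  # two-compartment (bi-exponential)
    return A * np.exp(-a * t) + B * np.exp(-b * t)

def aic(y, yhat, p):
    """AIC = n*ln(RSS/n) + 2p (fine here: both fits use the same n)."""
    n, rss = len(y), float(np.sum((y - yhat) ** 2))
    return n * np.log(rss / n) + 2 * p

t = np.array([0.25, 0.5, 1, 2, 3, 4, 6, 8, 12, 16, 24], float)
rng = np.random.default_rng(1)
# hypothetical biphasic profile: fast distribution, slow elimination, 2 % noise
c = (8 * np.exp(-1.2 * t) + 2 * np.exp(-0.1 * t)) * (1 + 0.02 * rng.standard_normal(t.size))

pm, _ = curve_fit(mono, t, c, p0=[10, 0.5])
pb, _ = curve_fit(bi, t, c, p0=[5, 1.0, 2, 0.1])
aic_mono = aic(c, mono(t, *pm), p=2)
aic_bi = aic(c, bi(t, *pb), p=4)        # lower AIC -> bi-exponential wins
```

The extra two parameters of the bi-exponential are rewarded because the residual sum of squares collapses; within the lambda_{z} problem, where p never changes, no such trade-off exists.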
Helmut ★★★ Vienna, Austria, 20080204 00:12 @ Helmut Posting: # 1576 Views: 21,986 

Dear Ohlbe! » … strangely enough there’s a parameter ‘G’ in the output, which is neither described in the online help nor the manual (v4.1.1, 2007). » I will try to find out in the next days what this parameter might be… OK, I ran the example in Kinetica; the parameter ‘G’ in the output is numerically identical to R^{2}_{adj}. Just another quote supporting “eyeball PK”:^{*} ‘The selection of the most suitable time interval cannot be left to a programmed algorithm based on mathematical criteria, but necessitates scientific judgment by both the clinical pharmacokineticist and the person who determined the concentrations and knows about their reliability.’
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
d_labes ★★★ Berlin, Germany, 20080204 15:43 @ Helmut Posting: # 1577 Views: 22,041 

Dear HS, thank you for your concise and elaborate answer and opinion. Since I knew the thread in the PKPD list from 2002, I had expected this to a certain extent. I do not agree with you in all respects. First of all, I think that within the context of BE studies we are not interested in the half-life itself but rather in estimating the AUC part from tlast to infinity. Half-life is only a vehicle to that end. Second: I have played around with the adjusted R² (SAS code from Matos-Pita and Lillo (2005)) and cannot confirm your statement that too many points are included by this method in general. On the contrary, using the data from Sauter et al. I found many cases in which the method stops too early (starting with 3 points) if these points lie very well on the linear part – which is especially the case for studies with low variability and 'good-natured' concentration–time courses. Third: I do not share your opinion regarding AIC. Akaike's information criterion is not only used for model discrimination (with different numbers of model parameters), although this is the commonly known usage. Below you can find two references on its usage as an outlier test: Kitagawa G. On the Use of AIC for the Detection of Outliers. Technometrics. 1979; 21(2): 193–199. Pynnönen S. Detection of outliers in regression analysis by information criteria. Proceedings of the University of Vaasa, Discussion Papers 146; 1992. http://lipas.uwasa.fi/~sjp/Abstracts/dp146.pdf Google "outlier AIC" to find many more references. Based on this, one can imagine a method for choosing the linear part stepwise, based on a test of whether the next included point is an 'outlier' to the linear model: begin with a maximum number of points and leave the point with the lowest time out; if AIC decreases, leave one more out, and so on. This should fulfil your demand for parsimony. Your formula for AIC in the reply to Ohlbe is only correct for comparing AICs with the same number of data points n. 
From the original definition AIC = −2×log-likelihood + 2p we derive (see http://en.wikipedia.org/wiki/Akaike_information_criterion): AIC = n×(ln(2π×RSS/n)+1) + 2p = n×(ln(2π)+1) + n×ln(RSS/n) + 2p, where RSS = residual sum of squares of the errors. The first term is only constant if models with the same n are compared. But most references on regression use AIC = n×ln(RSS/n) + 2p. By the way: I cannot verify your results numerically. Mine are (SAS Proc Reg, ln C versus time):
n      3          4           5          6
RMSE   0.0299805  0.03428349  0.0285321  0.06775109
AIC 1  −20.339    −25.757     −34.121    −30.736
AIC 2  −11.825    −11.633     −14.439    −5.391
RSS=(RMSE*(n−2))²; AIC 1: n*ln(RSS/n)+4 (SAS AIC); AIC 2: full formula. Again 5 points will be chosen. Fourth: Regarding your method of choosing the points, I wonder why you chose 5 points. Your criteria – at least 3 points, not including tmax/Cmax, fit with p(r)<0.05 – are fulfilled with 3, 4, 5 and 6 points. The rest is your opinion ('informed' view). This is the subjectivity factor I meant. On the other hand, I am convinced that man is the best pattern recogniser (at least in 3D), if trained appropriately and of 'good will'. I have received a number of questions from people doing PK analyses asking for aid from a statistical point of view, especially in cases of not so 'well-behaved' concentration–time curves. For the presented example I think there is no substantial influence at all, as we can see regarding lambda_z. (By the way, I think your lambda_z is t1/2.) But for other curves it can make a difference. Fifth and last comment: Your emphasis "It is the user's responsibility to evaluate the appropriateness of the estimated value" is totally correct, but applies also to » ‘eyeballPK’ Edit: References linked. [HS] — Regards, Detlew 
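Detlew’s stepwise ‘outlier’ idea can be sketched as follows (my interpretation in Python; the biphasic example data are hypothetical, not the thread’s data set). Because n changes between steps, the full formula n·(ln(2π·RSS/n)+1)+2p is used:

```python
import math

def full_aic(t, lnc, p=2):
    """Least-squares line through (t, ln C) plus the full-formula AIC,
    AIC = n*(ln(2*pi*RSS/n) + 1) + 2p, comparable across different n."""
    n = len(t)
    mt, my = sum(t) / n, sum(lnc) / n
    sxx = sum((x - mt) ** 2 for x in t)
    slope = sum((x - mt) * (y - my) for x, y in zip(t, lnc)) / sxx
    icpt = my - slope * mt
    rss = sum((y - (icpt + slope * x)) ** 2 for x, y in zip(t, lnc))
    return n * (math.log(2 * math.pi * rss / n) + 1) + 2 * p, slope

def stepwise_drop(t, c, min_points=3):
    """Start with all points; drop the earliest one as long as that
    decreases the AIC (Detlew's suggested procedure, as I read it)."""
    lnc = [math.log(x) for x in c]
    i = 0
    aic_cur, slope = full_aic(t, lnc)
    while len(t) - (i + 1) >= min_points:
        aic_next, slope_next = full_aic(t[i + 1:], lnc[i + 1:])
        if aic_next < aic_cur:
            i, aic_cur, slope = i + 1, aic_next, slope_next
        else:
            break
    return len(t) - i, -slope

# hypothetical biphasic decline: 5*exp(-0.8*t) + 2*exp(-0.1*t), rounded
t = [1, 2, 4, 8, 12, 16, 24]
c = [4.057, 2.646, 1.545, 0.907, 0.603, 0.404, 0.181]
n, lz = stepwise_drop(t, c)
```

On this clean profile the procedure strips the distribution phase completely and runs down to the 3-point minimum, recovering the terminal slope; with noisy data it would stop as soon as dropping a point no longer pays off.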
Helmut ★★★ Vienna, Austria, 20080204 16:49 @ d_labes Posting: # 1579 Views: 21,650 

Dear D. Labes, thank you for your fruitful comments! Unfortunately I’ll be out of office until 29 Feb, so just a few comments – I will jump into the details later. I hope that other members of the forum will make some contributions in the meantime… » […] within the context of BE studies we are not interested in the half life but rather in estimating the AUC part from tlast to infinity. » Half life is only a vehicle to that end. Definitely. » Second: I have played around with adj. R2 (SAS code from Matos-Pita and Lillo (2005)) and cannot confirm your statement that too many points are included by this method in general. I on the contrary found many cases using the data from Sauter et al. that the method stops too early (starting with 3 points) if these points lie very well on the linear part. » Which is especially the case for studies with low variability and 'good-natured' concentration time courses. Yes, but on the other hand, if we observe low variability (which occurs rarely enough close to the LLOQ), why should we include more points than necessary to describe lambda_{z}? OK, the term 'necessary' is quite spongy… » Third: … I will come back to this point in March. Thanks for the references! » By the way: I cannot verify your results numerically. I must confess, this was just a quick shot in R. Obviously I made a mistake in coding (not using the correct extractAIC(lm)). » Mine are (SAS Proc Reg, ln C versus time): » n: 3, 4, 5, 6 » RMSE: 0.0299805, 0.03428349, 0.0285321, 0.06775109 » AIC 1: −20.339, −25.757, −34.121, −30.736 » AIC 2: −11.825, −11.633, −14.439, −5.391 » RSS=(RMSE*(n−2))² » AIC 1: n*ln(RSS/n)+4 (SAS AIC); AIC 2: full formula » Again 5 points will be chosen. R (1) and WinNonlin (2) come up with:
[table not preserved] … which again would pick 5 points (results from R coincide with SAS’s). » … i wonder why you chose 5 points. Your criteria […] are fulfilled with 3, 4, 5 and 6 points. The rest is your opinion ('informed' view). » This is the subjectivity factor i meant. Absolutely. » On the other hand i am convinced that man is the best pattern recognizer (at least in 3D). If trained appropriately and of 'good will'. I have received a number of questions from people doing PK analyses asking for aid from a statistical point of view, especially in cases of not so 'well-behaved' concentration–time curves. Fully agree. I'm with you in the desire to establish some kind of statistically sound procedure which at least would support the ‘untrained eye’. » For the presented example i think there is no substantial influence at all as we can see regarding lambda_z. (By the way i think your lambda_z is t1/2.) Oops; I just corrected the original table. » But for other curves it can make the difference. Yes, and these are the nasty ones we regularly have to deal with. » Your emphasis "It is the user's responsibility to evaluate the appropriateness of the estimated value" is totally correct but applies also to » » ‘eyeballPK’
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
Helmut ★★★ Vienna, Austria, 20080510 11:56 @ d_labes Posting: # 1844 Views: 21,831 

Dear Detlew, just discovered a recent paper: Scheerans C, Derendorf H, Kloft C. Proposal for a Standardised Identification of the Mono-Exponential Terminal Phase for Orally Administered Drugs. Biopharm Drug Dispos. 2008; 29(3): 145–57. doi:10.1002/bdd.596. The authors recommend the so-called TTT (two times t_{max}) method for identifying the mono-exponential terminal phase in the case of oral drug administration. The rules for selecting sample points in the estimation of lambda_{z} are quite simple: First point: 2× t_{max} – or, if no sampling point is available at that time, the subsequent one in the profile. Last point: last measured (C ≥ LLOQ). A large Monte Carlo study was performed to compare the TTT method with the maximum adjusted R² algorithm (ARS). TTT was found to be superior to ARS, both in terms of bias and precision. The method is not intended as an automated procedure – and not for any >1-compartment model, which shows up as a multi-linear decline in a lin/log plot. Quote: “It should be emphasised that the TTT method has been introduced in this paper to provide a reasonable tool to support visual curve inspection for reliably identifying the monoexponential terminal phase. Moreover, the TTT method should not be utilised without visual inspection of the respective concentration-time course. Thus, before using this new approach the monophasic shape post the peak of the curve has to be checked visually by means of a semilogarithmic diagram.” P.S.: For my example data set five points would be chosen again. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
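The two TTT selection rules can be sketched in a few lines (Python; a hypothetical one-compartment oral profile – this is of course not the authors’ code):

```python
import math

def ttt_select(t, c, lloq):
    """TTT rule: regress ln(C) on t over all samples from the first time
    point at or after 2*tmax up to the last concentration >= LLOQ."""
    tmax = t[c.index(max(c))]
    pts = [(x, y) for x, y in zip(t, c) if x >= 2 * tmax and y >= lloq]
    ts = [x for x, _ in pts]
    lnc = [math.log(y) for _, y in pts]
    n = len(ts)
    mt, my = sum(ts) / n, sum(lnc) / n
    sxx = sum((x - mt) ** 2 for x in ts)
    slope = sum((x - mt) * (y - my) for x, y in zip(ts, lnc)) / sxx
    return n, -slope  # number of points used and the lambda_z estimate

# hypothetical profile 5.56*(exp(-0.1*t) - exp(-1.0*t)); observed tmax = 3
t = [0.5, 1, 2, 3, 4, 6, 8, 12, 16, 24]
c = [1.92, 2.99, 3.80, 3.84, 3.63, 3.04, 2.50, 1.67, 1.12, 0.504]
n, lz = ttt_select(t, c, lloq=0.1)
```

Here the first point at or after 2×t_{max}=6 starts the regression, so the five samples from 6 to 24 h are used and lambda_{z} ≈ 0.1 is recovered – but, as the quote stresses, the monophasic shape after the peak still has to be checked visually first.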
d_labes ★★★ Berlin, Germany, 20080516 07:12 @ Helmut Posting: # 1851 Views: 21,252 

Dear HS, thanx. I will have a look at the paper soon. Maybe then we will have things to discuss. — Regards, Detlew 
Helmut ★★★ Vienna, Austria, 20080516 15:21 @ d_labes Posting: # 1852 Views: 21,524 

Dear DLabes! First impressions from my side: The authors compared results from the TTT method with the ARS algorithm and claimed ‘better performance’ in terms of bias (relative error RE% = 100×[estimate−true]/true). M$ Excel’s pseudo-random number generator NORMINV(RAND(), mu, sigma) is known to be suboptimal. Generating 50 sets of 10,000 random samples with a different seed each (i.e., 500,000 samples), I got a maximum absolute bias of 2.3% (contrary to the 1% claimed by the authors). Therefore a significance limit of >2% difference between methods IMHO is too low (5% is more realistic). However, in repeating their simulation ‘Study A’ (not only 10,000 ‘profiles’, but 20 sets of 10,000 each), I could confirm their results in terms of bias and precision (without setting foot in the deep puddle of statistically comparing data sets). [Tables comparing lambda_{z} and n (the number of data points chosen) between the TTT method and the ARS are not preserved in this copy.]
Scary: the histograms of the number of sampling points selected by both methods clearly favour TTT. For instance, in my first data set ARS chose the last three data points in 23.73% of cases, whereas TTT suggested the same number in only 0.01%… In the upper range ARS selected ≥10 data points in 40.40% of cases, whereas TTT came up with the same n in just 2.78% (≥11: ARS 19% – TTT 0%!). According to the bias and precision seen in the simulations, my personal reluctance against the automated ARS seems to be justified. I always suspected that the algorithm selects too many data points and was surprised to see that my prejudice was not only justified, but that the contrary (too few data points) also holds. Lessons learned:
Another option I’m considering for testing is the method ‘lee’ in the package PK for R by Martin J. Wolfsegger and Thomas Jaki.^{*}
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
d_labes ★★★ Berlin, Germany, 20080523 13:43 @ Helmut Posting: # 1869 Views: 21,263 

Dear HS, thanks for sharing your first impressions. Here are my first thoughts about the paper. 1. It seems that the ARS was not restricted to the time points later than tmax, at least not for creating figure 2. No one would implement the algorithm in that way! 2. Besides your concern about Excel's random number generator, I argue: the simulation is not what we see in the real world for different subjects. It is the simulation of thousands of concentration–time curves of one subject, with the 'true' parameters given for the concentration–time curve (one-compartment or two-compartment). Thus it answers the question: how good is the estimation of one lambda_z and/or one AUC(0–inf)? I doubt that we can draw any conclusion from that. 3. It is true that scientifically we should choose the method with the best properties regarding bias and/or variability. But I cannot see any practical implications of the reported differences, especially in the context of bioequivalence studies. A bias of 5% in the residual area amounts to at most 1% absolute if you have planned your sampling times properly, so that AUC(0–tlast) is at least 80% of the total. It only makes a difference if you are borderline. I argue that the bioequivalence test in the end is not affected at all. 4. The statement that both methods deliver statistically significant results is a fundamental misunderstanding of what statistics is for. It is easy to obtain significance for tens of thousands of values with low variability. But significance is not relevance. I think no one would argue that AUC values of, for instance, 296.59 and 297.22 (AUCinf from study A) are different. 5. I cannot verify the claim that TTT should not be applied if the concentration–time curve is biphasic after Cmax, at least not from the reported results. The bias in lambda_z for models B and C is 1.15% or 1.39% (according to table 1). This is not different from 'Study A', for which TTT is recommended. What's the point? 6. 
The suggestion that a low N=3, which occurs relatively often in the ARS algorithm, is associated with a higher variability is a misinterpretation of the extra simulations. There, all calculations were stopped at N=3, and so on; thus it can only be regarded as a criticism of choosing a low predefined fixed number of points. In my experience (with only a limited number of tries), the ARS stops only if the first 3-point fit is superior. Whether this leads to a higher variability remains open. 7. Looking forward to your simulations. I suggest that the TTT method and a fit-statistics algorithm (ARS or AICc or whatever) should be combined to get the best of both: TTT could be used to restrict the number of points used by the fit-statistics algorithm. A side question to your simulations: you report a positive bias for the ARS method. What is the source of that? The simulations where ARS does not go beyond N=3? — Regards, Detlew 
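Point 3 above can be put in numbers with a hypothetical example (Python; all figures are made up, chosen only to match the ‘80% of total’ condition):

```python
# If the extrapolated area is at most 20 % of AUC(0-inf), a 5 % relative
# bias in lambda_z shifts AUC(0-inf) by less than 1 % overall.
auc_tlast = 80.0                 # AUC(0-tlast): 80 % of the total
clast, lambda_z = 2.0, 0.1
tail = clast / lambda_z          # extrapolated area Clast/lambda_z = 20
auc_inf = auc_tlast + tail       # = 100

lambda_biased = lambda_z * 1.05  # 5 % bias in lambda_z
auc_inf_biased = auc_tlast + clast / lambda_biased
rel_change = abs(auc_inf_biased - auc_inf) / auc_inf * 100  # in %
```

With these numbers the biased tail is 2/0.105 ≈ 19.05 instead of 20, i.e., the total AUC moves by roughly 0.95% – below the 1% bound argued above, and hardly relevant for the bioequivalence decision unless the study is borderline.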
hiren379 ★ India, 20120711 09:11 (edited by hiren379 on 20120711 09:38) @ d_labes Posting: # 8925 Views: 17,478 

Thank you all for such nice information. I would like to post further on this. Consider a case of IV infusion at a very slow rate. Once the infusion is over, pharmacologically the only phase running in the human body is the elimination of the drug, as absorption is completed (end of infusion). Shouldn't we then consider all the time points after the end of the infusion, since in this case we are very sure after what particular time only elimination rules? And if we go by regression it would be a bias, as all the points after the end of the infusion must be considered – you are very sure where the lone elimination phase begins – as against extravascular dosage forms, where it is better to start your Kel calculation from the bottom, as you are not sure when absorption stops. 
Ohlbe ★★★ France, 20120711 09:37 @ hiren379 Posting: # 8927 Views: 17,463 

Dear Hiren, » Once the infusion is over pharmacologically the only phase running in human body is elimination of drug as absorption is completed (end of infusion). Then shouldn't we consider all the time points after the end of infusion time as here in this case we are sure that after what particular time only elimination happens. No: you may have a distribution phase first. Infusion does not mean monocompartmental PK. You still need to consider only the terminal elimination phase. — Regards, Ohlbe 
hiren379 ★ India, 20120711 09:44 @ Ohlbe Posting: # 8928 Views: 17,400 

» No: you may have a distribution phase first. Infusion does not mean monocompartmental PK. You still need to only consider the terminal elimination phase. Thanks but will the same work for Non compartmental model also? And if you are so worried about including the time of distribution then the postulate of instantaneous distribution is violated for using Non compartment model. And if u are assuming instantaneous distribution then one must take all time points after completion of infusion to avoid bias created by statistics of R2 And I knew that distribution will come into play and hence I have mentioned slow infusion. 
Ohlbe ★★★ France, 20120711 10:16 @ hiren379 Posting: # 8929 Views: 17,411 

Dear Hiren, » And if u are assuming... Please avoid SMS spelling in your messages on this forum. » Thanks but will the same work for Non compartmental model also? Let me replace multicompartmental with multiphasic, then? » And if you are so worried about including the time of distribution then the postulate of instantaneous distribution is violated Why make such a postulate? Look at the PK profiles first. » And if u are assuming instantaneous distribution then one must take all time points after completion of infusion to avoid bias created by statistics of R2 1. F*** R2. 2. I don't assume instantaneous distribution or monophasic elimination without looking at the PK profiles for each subject. (Also look at Helmut's first message in this same thread.) — Regards, Ohlbe 
hiren379 ★ India, 20120711 10:53 @ Ohlbe Posting: # 8930 Views: 17,493 

» Please avoid SMS spelling in your messages on this forum. Sorry, as I am new to this forum. » Let me replace multicompartmental with multiphasic, then ? Then how can you assume monophasic when calculating Kel for extravascular drug administration? Maybe when you are calculating Kel for a tablet, distribution is still going on?? » 1. F*** R2 Strongly with you on this. » 2. I don't assume instantaneous distribution or monophasic elimination without looking at the PK profiles for each subject. » (Also look at Helmut's first message in this same thread). I also believe in this. So I think the conclusion is that manual selection coupled with a statistical method is best, irrespective of dosage form… Is it OK? Now if I am using this + I know that my drug has instantaneous distribution, should I still go for manual linearity finding? Will it not be biased??? 
Helmut ★★★ Vienna, Austria, 20120711 12:37 @ hiren379 Posting: # 8931 Views: 17,465 

Dear Hiren! » » » » » » […] Once the infusion is over pharmacologically the only phase running in human body is elimination of drug as absorption is completed (end of infusion). There is no absorption if the drug is administered directly to the central compartment. In rare cases you may notice an increase of concentrations after the end of infusion. This may occur if the drug precipitates in the vein. Then you have a true absorption process (actually dissolution). Dissolution may take place either in the vein itself or (downstream) in the lungs. » » Please avoid SMS spelling in your messages on this forum. » sorry as I am new to this forum … still a newbie after >2½ years? » Then how can you assume monophasic when calculating Kel for extra vascular drug administration. May be when you are calculating Kel for a tablet distribution is still on?? For monophasic PK see footnote 1 of this post. Note that the TTT method was developed for monophasic PK only – justified by irrelevant absorption after the inflection point of the profile. For multiphasic profiles the authors suggest as the starting point the intersection of the last phase with the preceding one. This point might be difficult to find, especially if the distribution phase is not substantially faster than elimination. Any algorithm might fail here; visual inspection of the fit is mandatory (see the quotes from the TTT paper above and from Hauschke et al. above). » So I think conclusion is that manual selection coupled with statistical method is best irrespective of dosage form.... » Is it OK? I would do it the other way ’round: start with an automatic method and adjust the selected time points if deemed necessary. » Now if I am using this + I know that my drug is having instantaneous distribution then still I should go for manual linearity finding. Will it not be bias??? Concentrations are not measured without error. 
It might be that – especially with long half-lives and values close to the LLOQ – any automatic method fails. To avoid bias I suggest performing the estimation of λ_{z} blinded for treatment and importing the randomization afterwards. At least this is what I do. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
hiren379 ★ India, 20120711 13:28 @ Helmut Posting: # 8932 Views: 17,456 

Dear HS, thanks for your inputs. » There is no absorption if the drug is administered directly to the central compartment. In rare cases you may notice an increase of concentrations after the end of infusion. This may occur if the drug precipitates in the vein. Then you have a true absorption process (actually dissolution). Dissolution may take place either in the vein itself or (downstream) in the lungs. Ok, I need a little further clarification, and for that please consider these properties of a drug (BE study...NCA used): 1. Drug is a pure solution and nothing like precipitation at all; I mean pure solution form. 2. Instantaneous distribution happens in your central compartment. 3. No compartment-specific distribution or in and out from any compartment. So if 1, 2 and 3 are correct, can't we consider that the elimination phase is the only working phase as soon as the infusion is over?
And if yes: since you know from what time the elimination phase started, it would be a bias to select 3 or 4 points from the bottom of the curve to find Kel, irrespective of whether you go by R2 or manually. One has to select all the time points when characterising the elimination phase. For extravascular administration, since we are unable to differentiate where elimination starts, we start from the bottom of the curve and go up. » » » Please avoid SMS spelling in your messages on this forum. » » sorry as I am new to this forum » » … still a newbie after >2½ years? I am not yet addicted to this forum. But I am sure I will be, 100% 
Helmut ★★★ Vienna, Austria, 20120711 13:47 @ hiren379 Posting: # 8933 Views: 17,487 

Dear Hiren! » Ok, I need little bit further clarification and for that please concider these properties of a drug (BE study...NCA used) » » 1. Drug is a pure solution and nothing like precipitation and all. I mean pure solution form The fact that your drug is completely in solution doesn’t prevent precipitation in the body. Blood ≠ Ringer’s solution. Apart from a small increase after end of infusion due to analytical variability, sometimes you see an increase which may last for more than an hour – that’s an indication of precipitation. You have to inspect the individual profiles and should not assume “this is an infusion, therefore C_{max}/t_{max} = end of infusion”. » 2. Instantaneous distribution happens in your central compartment. » 3. No compartmental specific distribution or in and out from any compartment OK » So if 1, 2 and 3 are correct. Cant we concider that elimination phase is only working phase as soon as infusion is over? Still you have to check #1. So my answer is only a “conditional yes”. » And if yes since you are knowing from what time the elimination phase has started it would be a bias if you are selecting 3 or 4 points from bottom of the curve to find Kel irrespective you are going for R2 or mannual. Do we really know that? Check the profiles. » One has to select all the time point in the elimination phase characterization. Well – at least for i.v. as many as possible, since the analytical error is inversely proportional to concentration. » In extravascular since we are unable to differentiate from where the elimination starts we start from bottom of curve and go up. Yep. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
hiren379 ★ India, 20120711 14:24 @ Helmut Posting: # 8935 Views: 17,410 

Dear HS! » The fact that your drug is completely in solution doesn’t prevent precipitation in the body. Blood ≠ Ringer’s solution. Ya, thanks for drawing my attention to the mighty pH difference between nature's creation and man's effort to become God. » Apart from a small increase after end of infusion due to analytical variability sometimes you see an increase which may last for more than an hour – that’s an indication of precipitation. Thanks for sharing your experience. » You have to inspect the individual profiles and should not assume “this is an infusion, therefore C_{max}/t_{max} = end of infusion”. Ok. » Still you have to check #1. So my answer is only a “conditional yes”. Ok HS, now if you accept that the elimination phase prevails (OK, conditionally) after completion of the infusion, the thing now acts like an IV bolus. So do you accept that in this hypothetical situation one must include the maximum number of points for Kel estimation, as per your remark on IV? » Well – at least for i.v. as many as possible since analytical error is inverse proportional to concentration. 