BNR ☆ US, 2016-04-01 00:46 (3211 d 13:33 ago) Posting: # 16159 Views: 19,284 |
|
I have a question about AUC0-tau calculation when using actual sampling times. For example, the dosing interval is 24 h and the 24-h sample was drawn at 23.8 h. If λz cannot be estimated (e.g., the concentration–time profile is flat during the dosing interval), Phoenix 6.4 will not calculate the AUC0-tau value, as extrapolation to 24 h is not possible. What is the best way to handle this case: reporting a missing value for AUC0-tau, or replacing 23.8 h with 24 h for the calculation? Thanks for your help in advance! |
jag009 ★★★ NJ, 2016-04-01 01:05 (3211 d 13:14 ago) @ BNR Posting: # 16160 Views: 17,543 |
|
Am I missing something? Why would you evaluate lambda, since AUC0-tau (dosing interval 24 h) is a steady-state area-under-the-curve parameter which doesn't require lambda? Lambda is needed only if you want to calculate AUC0-inf. John |
BNR ☆ US, 2016-04-01 01:43 (3211 d 12:36 ago) @ jag009 Posting: # 16161 Views: 17,631 |
|
Hi John, Thank you for your reply. I thought AUC0-tau was not reported by Phoenix because lambda can't be calculated. I tested a simulated dataset for which lambda can be calculated. In this case, AUC0-tau is reported by Phoenix even though the actual sampling time of 23.8 h is used. Please see the two example datasets below. AUC0-tau (tau = 24 h) is not reported by Phoenix for dataset 1, but it is reported for dataset 2. For dataset 1, AUC0-tau will be reported by Phoenix if the nominal time of 24 h is used. Would you recommend using the actual sampling time of 23.8 h with a missing AUC0-tau value, or using the nominal time of 24 h (which is not much different from the actual time in this case) and having a calculable AUC0-tau? Thanks. Data set 1 Data set 2 Edit: Data BBcoded & full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Helmut] |
Helmut ★★★ Vienna, Austria, 2016-04-01 02:59 (3211 d 11:20 ago) @ BNR Posting: # 16162 Views: 17,855 |
|
Hi BNR, ❝ I thought AUC0-tau was not reported by Phoenix because lambda can't be calculated. Exactly. Works as designed. See also the “Phoenix WinNonlin 6.4 User’s Guide” ► Noncompartmental Analysis ► Partial areas. I suggest checking ☑ Intermediate Output in the Setup/Options tab. Since Phoenix 6.0 (and different to ‘classical’ WNL up to 5.3), in the automatic setup (BestFit) Cmax/tmax is not included in the fitting procedure. Hence, in dataset 1 you have just two data points left and don’t get an estimate of λz from PHX. In the Core output you find for data set 1:
However, for dataset 2:
PHX’s default is not a law (and the automatic method needs inspection and adjustments anyhow – search the forum for the ‘TTT method’ and the references given in the posts). You could manually select the last three (or maybe even better, only the last two) data points in such a case. See also this presentation about Cmin vs. Cτ, where related problems exist. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
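To make the λz mechanics concrete, here is a minimal R sketch of a BestFit-style log-linear fit with extrapolation to τ. The concentrations are invented (the original datasets are not reproduced above); only the logic of excluding Cmax/tmax mirrors what PHX does:

    # Hypothetical profile: last sample drawn at 23.8 h instead of tau = 24 h
    d <- data.frame(t = c(0, 1, 2, 4, 8, 12, 18, 23.8),
                    C = c(0, 40, 60, 55, 35, 20, 10, 6))
    tmax <- d$t[which.max(d$C)]
    sel  <- d$t > tmax & d$C > 0            # BestFit-style: Cmax/tmax excluded
    fit  <- lm(log(C) ~ t, data = d[sel, ])
    lambda.z <- -coef(fit)[[2]]             # terminal rate constant
    C.tau    <- exp(predict(fit, newdata = data.frame(t = 24)))
    c(lambda.z = lambda.z, C.tau = unname(C.tau))

With only two post-tmax points left, as in dataset 1, PHX’s automatic method gives up and AUC0-tau goes missing; manually selecting points, as suggested above, is the workaround.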
BNR ☆ US, 2016-04-01 04:05 (3211 d 10:15 ago) @ Helmut Posting: # 16163 Views: 17,402 |
|
Hi Helmut, Thanks for your reply. Now I understand why the AUC0-tau value is not calculated for dataset 1. I am not familiar with the "TTT method" and will read the posts and literature about that. BNR Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Helmut] |
Astea ★★ Russia, 2019-02-10 17:38 (2165 d 19:41 ago) @ Helmut Posting: # 19896 Views: 13,483 |
|
Dear Smart People! I'd like to understand the correct method to calculate AUC0-τ in the case of time deviations in the first sample point (actually, I think that we always have deviations in the first point, because it is impossible to “eat” the drug and “give” blood at the same time). Suppose we have to calculate NCA parameters for a multiple-dose study during the n-th dosing period of 24 hours. We have these concentrations, with time deviations of −1 min in the first sample and 0.5 h in the last: Data set 3 Aiming to calculate the concentration at t = 0, Phoenix uses the minimum observed during the dose interval (from dose time to dose time + tau) for extravascular and infusion data (while for IV bolus data it performs a log-linear regression). That is, for extravascular or infusion data in the listed dataset the first point (time = 0) would be replaced by 20, so that AUC0-τ equals 1325, while if we do not include the 1-minute time deviation it would be 1316. Maybe this example is not so illustrative, but I was slightly surprised that a difference of one minute should totally change the input data: in fact, we throw the pre-dose concentration into the bin. Are there other methods for handling AUC0-τ in such cases (linear extrapolation, for example)? There are also some more questions about AUC0-τ: The interval of dosing (τ, 24 hours) is always a constant for all subjects, not depending on the actual dose period, isn't it? What is the best way to handle BLQs at the end of the dosing period at steady state? — "Being in minority, even a minority of one, did not make you mad" |
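A toy illustration of that substitution rule in R. All numbers are invented, since Data set 3 itself is not shown; only the pattern of a predose sample drawn 1 min early matches the description:

    # Hypothetical steady-state profile: predose 1 min early, last sample 0.5 h late
    tau <- 24
    t <- c(-1/60, 1, 2, 4, 8, 12, 18, 24.5)
    C <- c(2, 50, 60, 55, 40, 28, 20, 2)
    inside <- t >= 0 & t <= tau
    C0.phx <- min(C[inside])                      # PHX rule: minimum within the interval
    C0.lin <- approx(t[1:2], C[1:2], xout = 0)$y  # alternative: linear interpolation
    c(phoenix = C0.phx, interpolated = C0.lin)

The predose value of about 2 is replaced by 20, exactly the effect complained about above.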
Helmut ★★★ Vienna, Austria, 2019-02-10 20:32 (2165 d 16:47 ago) @ Astea Posting: # 19897 Views: 13,732 |
|
Hi Nastia, ❝ I'd like to understand the correct method to calculate AUC0-τ […] Duno what is correct. Just my opinion. ❝ Aiming to calculate the concentration at t = 0, Phoenix uses the minimum observed during the dose interval (from dose time to dose time + tau) for extravascular and infusion data (while for IV bolus data it performs a log-linear regression). Ha-ha, you’ve read the manual.
❝ That is, for extravascular or infusion data in the listed dataset the first point (time = 0) would be replaced by 20, … Yep, cause it is the minimum within {0, τ}. The concentration 2 at 24.5 is ignored. What a strange idea! ❝ … so that AUC0-τ equals 1325 By the linear trapezoidal method (dammit!)… With lin-up/log-down I get 1298. ❝ […] I was slightly surprised that a difference of one minute should totally change the input data: in fact, we throw the pre-dose concentration into the bin. Are there other methods for handling AUC0-τ in such cases (linear extrapolation, for example)? In PHX not without massive tweaks. A linear interpolation of t0|C0 and t1|C1 would give 1.986. Much better than 20. You could ask for the concentration at τ and get 2.388 – higher than the 2 at 24.5, but the fit is not that good. Note that this value confirms what PHX reports for Ctau. ✔ Interesting: if you enter 0 in the field you get 20 again. As designed but, IMHO, stupid. What I would do:
❝ There are also some more questions about AUC0-τ: ❝ The interval of dosing (τ, 24 hours) is always a constant for all subjects, not depending on the actual dose period, isn't it? Yes. Otherwise you would open Pandora’s box. ❝ What is the best way to handle BLQs at the end of the dosing period at steady state? Lin-up/log-down as usual. Don’t you have any accumulation or is the method lousy? — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
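For reference, a self-contained R helper contrasting the two trapezoidal rules discussed in this subthread. The demo numbers are invented (the 1325 vs. 1298 figures refer to Data set 3, which is not reproduced):

    # Linear vs. lin-up/log-down trapezoidal AUC
    auc.trap <- function(t, C, method = c("linear", "linlog")) {
      method <- match.arg(method)
      dt <- diff(t); C1 <- head(C, -1); C2 <- tail(C, -1)
      seg <- dt * (C1 + C2) / 2                      # linear trapezoid everywhere
      if (method == "linlog") {
        dn <- C2 < C1 & C2 > 0                       # descending segments: log rule
        seg[dn] <- dt[dn] * (C1[dn] - C2[dn]) / log(C1[dn] / C2[dn])
      }
      sum(seg)
    }
    t <- c(0, 1, 2, 4, 8, 12, 18, 24)
    C <- c(2, 50, 60, 55, 40, 28, 20, 3)
    c(linear = auc.trap(t, C), linlog = auc.trap(t, C, "linlog"))

The log-down variant gives the smaller area because it assumes exponential decline between descending samples.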
Astea ★★ Russia, 2019-02-10 21:50 (2165 d 15:30 ago) @ Helmut Posting: # 19898 Views: 13,434 |
|
Dear Helmut! I'm grateful for your quick reply! ❝ By the linear trapezoidal method (dammit!)… With lin-up/log-down I get 1298. For some reason (maybe for its simplicity) Russians like the linear method and include it in their protocols. I recalculated via lin-up/log-down and now I'm puzzled by a new question: why does Cτ differ depending on the method of calculating the AUC? For linear I've got 3.38... (by the way, what are (1) and (2) in your presentation, slide 18?) ❝ Let’s call the first two datapoints t0|C0... Is it correct to use t0 and t1 if the dosing time happened between these two points? I mean, for the time before the dose (t0) the curve should be decreasing exponentially, while at t1 it should be rising roughly linearly? ❝ ❝ What is the best way to handle BLQs at the end of the dosing period at steady state? ❝ Lin-up/log-down as usual. Don’t you have any accumulation or is the method lousy? I was lucky enough not to get it till now, but shit happens, you know, and a danger foreseen is half avoided. Is it a bad idea to omit that BLQ in the steady-state elimination part? What about regulatory acceptance of any of these calculation modifications? I may use profound methods of numerical integration, but would they be accepted? — "Being in minority, even a minority of one, did not make you mad" |
Helmut ★★★ Vienna, Austria, 2019-02-11 02:45 (2165 d 10:34 ago) @ Astea Posting: # 19899 Views: 14,073 |
|
Hi Nastia, ❝ ❝ By the linear trapezoidal method (dammit!)… ❝ For some reason (maybe for its simplicity) Russians like the linear method and include it in their protocols. Simplicity or simple minds? SCNR. Following textbooks of the 70s of the last century? No pocket calculators? Even in f**ing Excel one can implement the lin-up/log-down. ❝ I recalculated via lin-up/log-down and now I'm puzzled by a new question: why does Cτ differ depending on the method of calculating the AUC? For linear I've got 3.38... Gotcha! By specifying the linear trapezoidal you tell PHX that you believe that the drug between time points is eliminated in zero (!) order, like vodka. You get what you want for t = 24, namely by linear interpolation (24–18)(2–20)/(24.5–18)+20 = 3.385. ∎ ❝ (by the way, what are (1) and (2) in your presentation, slide 18?) Values obtained by formulas (1) and (2) of slide 16. Note that in interpolation PHX doesn’t use the estimated λz (6–24.5: 0.2103) but the local slope (18–24.5: 0.3542). ❝ Is it correct to use t0 and t1 if the dosing time happened between these two points? I mean, for the time before the dose (t0) the curve should be decreasing exponentially, while at t1 it should be rising roughly linearly? Good point. Unfortunately we don’t know the elimination of the previous dose. ❝ ❝ […] Don’t you have any accumulation or is the method lousy? ❝ I was lucky enough not to get it till now, but shit happens, you know, and a danger foreseen is half avoided. Is it a bad idea to omit that BLQ in the steady-state elimination part? Maybe, maybe not. Depends on your sampling schedule and how much the previous concentration is >LLOQ. ❝ What about regulatory acceptance of any of these calculation modifications? I may use profound methods of numerical integration, but would they be accepted? Duno. I always describe exactly what I plan to do in the protocol. Never got any questions afterwards. Believe me, I would make the life of an assessor miserable if at the end he/she doesn’t buy sumfink which was approved by his/her own agency and the ethics committee before. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
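The two interpolations quoted above transcribe directly into R; the only inputs are the 18 h and 24.5 h samples (concentrations 20 and 2):

    # Reproducing the interpolations at t = 24: segment from 20 (18 h) to 2 (24.5 h)
    t1 <- 18; t2 <- 24.5; C1 <- 20; C2 <- 2; tau <- 24
    C.lin <- C1 + (tau - t1) * (C2 - C1) / (t2 - t1)   # linear rule:   3.385
    k     <- log(C1 / C2) / (t2 - t1)                  # local slope:   0.3542
    C.log <- C1 * exp(-k * (tau - t1))                 # log-down rule: 2.388
    round(c(linear = C.lin, logdown = C.log, slope = k), 4)

Both results match the values in the thread; the only difference is whether the segment is treated as a straight line or as an exponential.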
Astea ★★ Russia, 2019-02-11 20:46 (2164 d 16:34 ago) @ Helmut Posting: # 19900 Views: 13,228 |
|
Dear Helmut! Thanks a lot for the clarification! The more answers I get, the more questions I have... Why do the algorithms for inter- and extrapolation differ? Let's say that the last point is NA. Then PHX will calculate it via the estimated λz, suspecting log-linear elimination, not depending on the actual AUC calculation method. ❝ Gotcha! By specifying the linear trapezoidal you tell PHX that you believe that the drug between time points is eliminated in zero (!) order Then why does it use λz and log-linear elimination for unknown samples, but the local slope and linear elimination for known samples? What's wrong with calculation by λz in both cases? Of course I can try to think about it as a compromise (see this post), but then how do I define my own rule and be sure that it is correct? ❝ Duno. I always describe exactly what I plan to do in the protocol. That's a perfect position. But sometimes we have to work with what someone has planned before. I was always wondering how far one can go in stating one's favourite methods in SOPs and SAPs? Can I write that after the calculation a statistician should have a cup of tea? — "Being in minority, even a minority of one, did not make you mad" |
Helmut ★★★ Vienna, Austria, 2019-02-12 03:20 (2164 d 10:00 ago) @ Astea Posting: # 19901 Views: 13,276 |
|
Hi Nastia, ❝ The more answers I get, the more questions I have... That’s what science is all about, right? We learn more by looking for the answer to a question and not finding it than we do from learning the answer itself. Lloyd Alexander ❝ Why do the algorithms for inter- and extrapolation differ? Sorry, ask Certara/Pharsight. ❝ Let's say that the last point is NA. Then PHX will calculate it via the estimated λz, suspecting log-linear elimination, not depending on the actual AUC calculation method. I wouldn’t say suspecting but rather rightly assuming. ❝ ❝ By specifying the linear trapezoidal you tell PHX that you believe that the drug between time points is eliminated in zero (!) order ❝ ❝ Then why does it use λz and log-linear elimination for unknown samples, but the local slope and linear elimination for known samples? What's wrong with calculation by λz in both cases? Good question. I think that λz in any case would not be a bad idea because it is more reliable (i.e., it takes into account samples with higher concentrations, which are more accurately measured). On the same account I never had problems in using $$AUC_{0-\infty}=AUC_{0-\text{t}}+\frac{\widehat{C}_\text{t}}{\widehat{\lambda}_\text{z}}\tag{1}$$ instead of the simple $$AUC_{0-\infty}=AUC_{0-\text{t}}+\frac{C_\text{t}}{\widehat{\lambda}_\text{z}}\tag{2}$$ In Phoenix WinNonlin \(\small{(1)}\) is termed AUCINF_pred and \(\small{(2)}\) AUCINF_obs . See also this article. ❝ Of course I can try to think about it as a compromise (see this post), but then how do I define my own rule and be sure that it is correct? Yep (you are an expert in using the forum’s search function and digging out old posts). High time for us all to switch to AUCtlast (Common).* ❝ ❝ I always describe exactly what I plan to do in the protocol. ❝ That's a perfect position. But sometimes we have to work with what someone has planned before. I know, I know… ❝ I was always wondering how far one can go in stating one's favourite methods in SOPs and SAPs? As I said above, very far. Maybe I was lucky? IMHO, in the SAP it’s easier because it is more specific to the study. In an SOP you likely end up with a crazy flowchart trying to cover all potential cases, only to discover later that you have forgotten one. NB, I don’t have a single SOP for NCA. ❝ Can I write that after the calculation a statistician should have a cup of tea? Or even? Borderline.
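A quick numerical illustration of (1) vs. (2) in R. The terminal-phase data and the AUC0-t value are invented; only the two formulas are taken from above:

    # Terminal-phase samples (hypothetical)
    t <- c(8, 12, 18, 24); C <- c(40, 28, 20, 11)
    fit <- lm(log(C) ~ t)
    lambda.z   <- -coef(fit)[[2]]
    Clast.obs  <- C[length(C)]                               # measured Clast
    Clast.pred <- exp(coef(fit)[[1]] + coef(fit)[[2]] * 24)  # fitted Clast
    AUClast <- 500                                           # pretend AUC0-t
    c(AUCINF_obs  = AUClast + Clast.obs  / lambda.z,         # eq. (2)
      AUCINF_pred = AUClast + Clast.pred / lambda.z)         # eq. (1)

Whenever the last measured concentration sits off the regression line, the two extrapolations, and hence the two AUC0-inf flavours, drift apart.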
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
mittyri ★★ Russia, 2019-02-14 13:55 (2161 d 23:24 ago) @ Astea Posting: # 19917 Views: 13,235 |
|
Hi Astea, ❝ Why do the algorithms for inter- and extrapolation differ? Well.. I don't think there's a simple enough solution which fits all situations. Say we have Data set Something is wrong at the 18 h point. It could be a good idea to exclude that point from the λz estimation, right? Now imagine we want to get AUC0-18. Since that point exists, we cannot ignore it. The lin-up/log-down method claims AUC0-18 = 1320.139. So far so good. Our next goal is AUC0-17.9, but this time taking into account not a robust interpolation between the neighbours but the intelligent log-linear regression fit given by λz: exp(-lambda*17.9+interc), so AUC0-17.9 = 1224.5557. Do you feel some PK inconsistency here? Why does AUC0-18 differ so much from the value 0.1 h earlier? I think this is (only one of many, I think!) the reason why the methods differ. Regarding Cmin as a first point: there could be other situations where using Cmin from the overall dataset (not the dosing period) is not a good idea. Ctau,pred? OK, but what if λz is not estimable for some reason? I don't think the default WNL NCA rules will change, since that could break a lot of templates and projects. Call it unhealthy and lazy conservatism. PS: you can ask Certara to add some new optional NCA rule. — Kind regards, Mittyri |
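The jump can be reproduced mechanically in R. All numbers are invented, since the data set above is not shown; the point is only that an endpoint falling between samples is imputed from the λz regression, which deliberately ignores the suspect 18 h value:

    t <- c(0, 2, 6, 12, 18, 24)
    C <- c(0, 55, 40, 28, 60, 10)                 # the 60 at 18 h looks wrong
    sel <- t >= 6 & t != 18                       # 18 h excluded from the lambda_z fit
    fit <- lm(log(C[sel]) ~ t[sel])
    C.17.9 <- exp(coef(fit)[[1]] + coef(fit)[[2]] * 17.9)  # imputed from the regression
    C.18   <- C[t == 18]                                   # observed value must be used
    round(c(imputed_17.9 = C.17.9, observed_18 = C.18), 2)

The profile is cut at roughly 16 for the 17.9 h endpoint but at 60 for the 18 h endpoint, so the partial areas jump accordingly.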
Astea ★★ Russia, 2019-02-16 09:33 (2160 d 03:46 ago) @ mittyri Posting: # 19926 Views: 13,075 |
|
Dear Mittyri! Thank you for your remarkable example! I've got it. But there could be some philosophical thoughts: maybe 60 is a mistake (an error, or samples were mixed up); then what would be better: to calculate the area under the 30–60–2 triangle carefully, or to calculate the area under the curve assuming log-linear elimination? If we have a large sample size in the study and nobody else shows the same pharmacokinetic features, then it's a reason to think about what it could be. ❝ I don't think the default WNL NCA rules will change, since that could break a lot of templates and projects. Call it unhealthy and lazy conservatism. I don't believe that, because I have observed a lot of changes and improvements in PHX through the years Another point is that I have used PHX NCA like a standard candle (maybe regulators do it as well?). But it turns out that in some rare extreme cases it is worth thinking about alternative approaches. Would they be accepted? How can one be sure that the method stated in the SAP works better than the PHX algorithm? Is it possible to explain in the report any disagreement with the PHX output? — "Being in minority, even a minority of one, did not make you mad" |
ElMaestro ★★★ Denmark, 2019-02-16 14:33 (2159 d 22:47 ago) @ Astea Posting: # 19927 Views: 13,064 |
|
Hi all, also remember that when we do BE we treat T and R in the same fashion. So regardless of how terribly we bias some estimate, we do so equally for the two treatments we are comparing. This will work as long as the phenomena that we discuss (funky data points, imputation, deletion, addition, intelligent fixes) occur with equal probability across groups. We may bias some estimate (like lambda) rather terribly in either the positive or negative direction, but that does not automatically imply that the expected point estimate changes, and therefore it may be entirely safe in terms of patient risk. "It depends", as they say. For example, there was a post recently (Mittyri? Hötzi? Someone else?) saying that when we do log-down in BE we assume first-order elimination. That is very much a personal interpretation, I think. When I do log- or linear-down in BE, I am personally only saying I am willing to make the same error for T and R regardless of how the drug is eliminated, full stop; that is why I am not necessarily strongly favouring one method over the other. Even if the SPC or FOI info indicates first-order elimination I am perfectly fine with lin. down, especially if this is what the CRO usually does well. Wouldn't want them to get out of their comfort zone. From the top of my head, I believe I have never been in a situation where I needed to defend an AUC value per se; I was only in situations where regulators questioned the proof of BE. For dose-finding or superiority it may be a somewhat different situation. ❝ Another point is that I have used PHX NCA like a standard candle (maybe regulators do it as well?). But it turns out that in some rare extreme cases it is worth thinking about alternative approaches. Would they be accepted? How can one be sure that the method stated in the SAP works better than the PHX algorithm? Is it possible to explain in the report any disagreement with the PHX output? Regulators are using PHX now???? — Pass or fail! ElMaestro |
Helmut ★★★ Vienna, Austria, 2019-02-16 15:26 (2159 d 21:54 ago) @ ElMaestro Posting: # 19929 Views: 13,096 |
|
Hi ElMaestro ❝ […] remember that when we do BE we treat T and R in the same fashion. So regardless of how terribly we bias some estimate, we do so equally for the two treatments we are comparing. Reads nice on paper, but people might be tempted to fiddle around if they are already unblinded before performing NCA. Hence, my sequence is:
❝ […] there was a post recently (Mittyri? Hötzi? Someone else?) … Myself for sure. I’m notorious for that. ❝ … saying that when we do log-down in BE we assume first-order elimination. Yep. ❝ That is very much a personal interpretation, I think. When I do log- or linear-down in BE, I am personally only saying I am willing to make the same error for T and R regardless of how the drug is eliminated, full stop Agree, if (!) there are no missings. OK, one exception: the same time point missing in the same subject. ❝ Even if the SPC or FOI info indicates first-order elimination I am perfectly fine with lin. down, especially if this is what the CRO usually does well. Wouldn't want them to get out of their comfort zone. If the linear trapezoidal defines the comfort zone of a CRO, it disqualifies itself (linear-up / log-down has been implemented in software for ages). If someone isn’t able to RTFM and push yet another button, well… ❝ From the top of my head, I believe I have never been in a situation where I needed to defend an AUC value per se, … Not AUC, but I was once asked whether my software for NCA was validated (see there). Funny enough, the agency asked for cross-validation [sic] against “commercial software like WinNonlin” as if commercial software were the gold standard. ❝ Regulators are using PHX now???? The FDA does (modeling). The Saudi FDA does (BE, though in addition to SAS). — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
Helmut ★★★ Vienna, Austria, 2019-02-16 14:59 (2159 d 22:21 ago) @ Astea Posting: # 19928 Views: 13,013 |
|
Hi Nastia, ❝ […] it turns out that in some rare extreme cases it is worth thinking about alternative approaches. Would they be accepted? As I said numerous times – if described in the protocol – yes. ❝ How can one be sure that the method stated in the SAP works better than the PHX algorithm? Is it possible to explain in the report any disagreement with the PHX output? Good questions. No software is validated when it comes to NCA. Assessors/inspectors, are you aware of that? See my attempt there. For years the NCA Consortium has been struggling to come up with sumfink. You have to apply for membership; interestingly enough, the majority of members come from originators or academia. The task turned out to be rather difficult… — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
mittyri ★★ Russia, 2019-02-20 22:22 (2155 d 14:58 ago) @ Helmut Posting: # 19949 Views: 13,049 |
|
Hi Helmut, from your lecture: ‘Black box’ validation
So what is the problem with validating WNL NCA using datasets (OK, not only from the literature, since there are so many troubles as you showed, but also simulated ones)? All rules are written in the User’s Guide, so it could be cross-checked against handmade R code. Or should it be validated against other validated software (which does not exist)? — Kind regards, Mittyri |
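Such a cross-check can be as small as this R sketch; the reference value is a stand-in for whatever the WNL results grid would show for the same input:

    # Recompute AUClast in R and compare to the value reported by WNL
    t <- c(0, 1, 2, 4, 8, 12); C <- c(0, 40, 60, 45, 25, 12)
    r.AUClast <- sum(diff(t) * (head(C, -1) + tail(C, -1)) / 2)  # linear trapezoid
    wnl.AUClast <- 389                     # pretend: value copied from the WNL grid
    stopifnot(isTRUE(all.equal(r.AUClast, wnl.AUClast, tolerance = 1e-4)))

Scripted over a battery of simulated profiles (flat, BLQ-ended, deviating times), this is already most of the ‘black box’ validation asked about above.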
mittyri ★★ Russia, 2019-02-20 22:40 (2155 d 14:40 ago) @ Astea Posting: # 19950 Views: 12,907 |
|
Dear Astea, ❝ But there could be some philosophical thoughts: maybe 60 is a mistake (an error, or samples were mixed up); then what would be better: to calculate the area under the 30–60–2 triangle carefully, or to calculate the area under the curve assuming log-linear elimination? ❝ I don't believe that, because I have observed a lot of changes and improvements in PHX through the years — Kind regards, Mittyri |