BNR (US)
2016-04-01 00:46
Posting: # 16159

AUC0-tau at steady state [NCA / SHAM]

I have a question about the AUC0-tau calculation when actual sampling times are used. For example, the dosing interval is 24 hr and the 24-hr sample was drawn at 23.8 hr. If lambda cannot be estimated (e.g., because the concentration-time profile is flat during the dosing interval), Phoenix 6.4 will not calculate AUC0-tau, as extrapolation is not possible. What is the best way to handle this case: reporting a missing value for AUC0-tau, or replacing 23.8 hr with 24 hr for the calculation? Thanks for your help in advance!
jag009 (NJ)
2016-04-01 01:05
@ BNR
Posting: # 16160

AUC0-tau at steady state

Am I missing something? Why would you evaluate lambda, since AUC0-tau (dosing interval 24 hr) is a steady-state area-under-the-curve parameter which doesn't require lambda? Lambda is needed if you want to calculate AUC0-inf.

John
BNR (US)
2016-04-01 01:43
@ jag009
Posting: # 16161

AUC0-tau at steady state

Hi John,

Thank you for your reply. I thought AUC0-tau is not reported by Phoenix because Lambda can't be calculated. I tested a simulated dataset for which Lambda can be calculated. In that case, AUC0-tau is reported by Phoenix even though the actual sampling time of 23.8 hr is used.
Please see the two example datasets below. AUC0-tau (tau = 24 hr) is not reported by Phoenix for dataset 1, but it is reported for dataset 2. For dataset 1, AUC0-tau will be reported by Phoenix if the nominal time of 24 hr is used. Would you recommend using the actual sampling time of 23.8 hr with a missing AUC0-tau value, or using the nominal time of 24 hr (which is not much different from the actual time in this case) and having a calculable AUC0-tau?

Thanks.

Data set 1   Data set 2
Time  Conc   Time  Conc
 0    1000    0    1000
 4    1500    4    1500
 8     900    8    1750
12    1750   12    1750
16    1000   16    1000
23.8   800   23.8   800



Helmut (Vienna, Austria)
2016-04-01 02:59
@ BNR
Posting: # 16162

RTFM

Hi BNR,

❝ I thought AUC0-tau is not reported by Phoenix because Lambda can't be calculated.


Exactly. Works as designed. See also the “Phoenix WinNonlin 6.4 User’s Guide” ► Noncompartmental Analysis ► Partial areas.
I suggest ticking ☑ Intermediate Output in the Options tab of the Setup.
Since Phoenix 6.0 (and different from ‘classical’ WNL up to 5.3) the automatic setup (BestFit) does not include Cmax/tmax in the fitting procedure. Hence, in dataset 1 you have just two data points left and don’t get an estimate of λz from PHX. In the Core output you find for dataset 1:

Calculation method:  Linear Trapezoidal Rule for Increasing Values,
                     Log Trapezoidal Rule for Decreasing Values
Weighting for lambda_z calculations:  Uniform weighting
Lambda_z method:  Find best fit for lambda_z,  Log regression

Intermediate Output: Trying different time intervals to get best fit for LambdaZ
--------------------------------------------------------------------------------
  Lower_time Upper_time PtsUsed  LambdaZ      Rsq    Adj_Rsq
  ----------------------------------------------------------
  Unable to determine Lambda_z

Intermediate Output: AUC_TAU
----------------------------
Computing partial area from 0.000000 to 24.000000:

*** Warning 14530: Lambda_z could not be estimated.
No parameters could be extrapolated to infinity.

*** Warning 14531: AUC_TAU could not be estimated.
Steady state PK parameters could not be estimated.


However, for dataset 2:

Intermediate Output: Trying different time intervals to get best fit for LambdaZ
--------------------------------------------------------------------------------
  Lower_time Upper_time PtsUsed  LambdaZ      Rsq    Adj_Rsq
  ----------------------------------------------------------
     12.00      23.80      3      0.0612    0.8284    0.6568
  Best value for Lambda_z:      0.0612, and intercept:     8.0759

Intermediate Output: AUC_TAU
----------------------------
Computing partial area from 0.000000 to 24.000000:
piece from (0.000000, 1000.000000) to (4.000000, 1500.000000) = 5000.000000
piece from (4.000000, 1500.000000) to (8.000000, 1750.000000) = 6500.000000
piece from (8.000000, 1750.000000) to (12.000000, 1750.000000) = 7000.000000
piece from (12.000000, 1750.000000) to (16.000000, 1000.000000) = 5360.820879
piece from (16.000000, 1000.000000) to (23.800000, 800.000000) = 6991.015384
piece from (23.800000, 800.000000) to (24.000000, 741.073056) = 154.032169
Partial area from 0.000000 to 24.000000 = 31005.868431


PHX’s default is not a law (and the automatic method needs inspection and adjustments anyhow – search the forum for ‘TTT method’ and the references given in the posts). You could manually select the last three (or maybe even better only the last two) data points in such a case.
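For the record, the AUC_TAU of dataset 2 can be reproduced in a few lines of R – a sketch, hard-coding the three-point λz range which BestFit selected above:

t    <- c(0, 4, 8, 12, 16, 23.8)
C    <- c(1000, 1500, 1750, 1750, 1000, 800)
tau  <- 24
fit  <- lm(log(tail(C, 3)) ~ tail(t, 3))   # log-linear fit of the last three points
lz   <- -coef(fit)[[2]]                    # 0.0612
Ctau <- exp(coef(fit)[[1]] - lz * tau)     # 741.07, the value extrapolated to tau
t    <- c(t, tau); C <- c(C, Ctau)
auc  <- 0                                  # linear-up / log-down trapezoids
for (i in 2:length(t)) {
  dt  <- t[i] - t[i - 1]
  auc <- auc + ifelse(C[i] >= C[i - 1],
                      dt * (C[i] + C[i - 1]) / 2,
                      dt * (C[i - 1] - C[i]) / log(C[i - 1] / C[i]))
}
auc                                        # 31005.87, matching the Core output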

See also this presentation about Cmin vs. Cτ where related problems exist.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
BNR (US)
2016-04-01 04:05
@ Helmut
Posting: # 16163

RTFM

Hi Helmut,

Thanks for your reply. Now I understand why AUC0-tau is not calculated for dataset 1. I am not familiar with the "TTT method" and will read the posts and the literature about it.

BNR


Astea (Russia)
2019-02-10 17:38
@ Helmut
Posting: # 19896

AUC0-τ estimation with time deviations

Dear Smart People!

I'd like to understand the correct method to calculate AUC0-τ in the case of time deviations in the first sample point (actually, I think we always have deviations in the first point because it is impossible to "take" the drug and "give" blood at the same time).

Suppose we have to calculate NCA parameters for multiple dosing during the n-th dose interval of 24 hours.
We have these concentrations, with time deviations of −1 min in the first sample and +0.5 hours in the last:
Data set 3
 Time  Conc
-0.017     1
1          60
2          80
3          100
4          130
5          160
6          130
7          100
8          90
10         70
12         40
14         30
18         20
24.5       2

Aiming to calculate the concentration at t=0, Phoenix uses the minimum observed during the dose interval (from dose time to dose time + tau) for extravascular and infusion data (while for IV bolus data it performs a log-linear regression).

That is, for extravascular or infusion data in the listed dataset the first point (time = 0) would be replaced by 20, so that AUC0-τ equals 1325, while if we ignore the 1-minute time deviation it would be 1316. Maybe this example is not very demonstrative, but I was slightly surprised that a difference of one minute should totally change the input data: in fact we throw the pre-dose concentration into the bin. Are there other methods for handling AUC0-τ in such cases (linear extrapolation, for example)?
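Both numbers can be checked with the linear trapezoidal rule in R – a sketch, with the concentration at τ = 24 h obtained by linear interpolation between the 18 h and 24.5 h samples:

auc.lin <- function(t, C) sum(diff(t) * (head(C, -1) + tail(C, -1)) / 2)
t   <- c(1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 18)
C   <- c(60, 80, 100, 130, 160, 130, 100, 90, 70, 40, 30, 20)
C24 <- 20 + (24 - 18) * (2 - 20) / (24.5 - 18)  # linear interpolation to tau: 3.385
auc.lin(c(0, t, 24), c(20, C, C24))             # C(0) = 20 (minimum rule): ~1325
auc.lin(c(0, t, 24), c( 1, C, C24))             # pre-dose sample treated as t = 0: ~1316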

There are also some more questions about AUC0-τ:
The dosing interval (τ, 24 hours) is always a constant for all subjects, regardless of the actual dose period, isn't it?
What is the best way to handle a BLQ at the end of the dosing interval at steady state?

"Being in minority, even a minority of one, did not make you mad"
Helmut (Vienna, Austria)
2019-02-10 20:32
@ Astea
Posting: # 19897

AUC0-τ estimation with time deviations

Hi Nastia,

❝ I'd like to understand the correct method to calculate AUC0-τ […]



Duno what is correct. Just my opinion.

❝ Aiming to calculate the concentration at t=0, Phoenix uses the minimum observed during the dose interval (from dose time to dose time + tau) for extravascular and infusion data (while for IV bolus data it performs a log-linear regression).


Ha-ha, you’ve read the manual. :-D
  • Inserting initial time points: If a PK profile does not contain an observation at dose time, Phoenix inserts a value using the following rules. (Note that, if there is no dosing time specified, the dosing time is assumed to be zero.)
    • Extravascular and Infusion data: For single dose data, a concentration of zero is used; for steady-state, the minimum observed during the dose interval (from dose time to dose time +tau) is used.
    • IV Bolus data: Phoenix performs a log-linear regression of the first two data points to back-extrapolate C0. If the regression yields a slope ≥0, or at least one of the first two y-values is zero, or if one or both of the points is viewed as an outlier and excluded from the Lambda Z regression, then the first observed y-value is used as an estimate for C0. If a weighting option is selected, it is used in the regression.
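A sketch of that steady-state rule for extravascular/infusion data in R (a hypothetical helper, not PHX code):

c0.ss <- function(t, C, tau, tdose = 0) {
  if (any(t == tdose)) return(C[t == tdose]) # an observation at dose time already exists
  min(C[t >= tdose & t <= tdose + tau])      # else: minimum within {tdose, tdose + tau}
}
t3 <- c(-0.017, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 18, 24.5)
C3 <- c(1, 60, 80, 100, 130, 160, 130, 100, 90, 70, 40, 30, 20, 2)
c0.ss(t3, C3, tau = 24)   # 20: the pre-dose sample at -0.017 lies outside the interval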

❝ That is, for extravascular or infusion data in the listed dataset the first point (time = 0) would be replaced by 20, …


Yep, cause it is the minimum within {0, τ}. The concentration 2 at 24.5 is ignored. What a strange idea!

❝ … so that AUC0-τ equals 1325


By the linear trapezoidal method (dammit!)… With lin-up/log-down I get 1298.

❝ […] I was slightly surprised that a difference of one minute should totally change the input data: in fact we throw the pre-dose concentration into the bin. Are there other methods for handling AUC0-τ in such cases (linear extrapolation, for example)?


In PHX not without massive tweaks.
A linear interpolation of t0|C0 and t1|C1 would give 1.986. Much better than 20. You could ask for the concentration at τ and get 2.388 – higher than the 2 at 24.5 but the fit is not that good. Note that this value confirms what PHX reports for Ctau. 

Interesting: if you enter 0 in the field, you get 20 again. As designed but, IMHO, stupid.

What I would do:
  • Option 1 (substitute the first concentration by Ctau):
    Perform an NCA where you ask only for Ctau. Play around with the Data Wizard to get a table with one row and three columns: id (1), Time (0), Conc (2.3875533). Rank your original dataset by Time and specify a new column id. Join with the results of the DW. Rank: sort by id, source Time, Conc. DW: sort by id, source Conc. You get a new table id 1–14, Time –0.017–24.5, Conc_1 your original ones, and Conc_2 2.3875533 in the first row (all others empty). Another DW with a custom transformation, new column Conc, Formula: if(IsNull(Conc_2), Conc_1, Conc_2). Then a filter, which replaces all Times <0 with 0. This stuff to NCA. AUCtau 1289, Tmin 0 (!), Cmin 2.388 (!).
  • Option 2: Linear interpolation (likely better; see the sketch after this list):
    Let’s call the first two datapoints t0|C0 and t1|C1. Then the interpolated concentration at t=0 is given by: –t0(C1–C0)/(t1–t0)+C0 or 1.986234. Too stupid to come up with a workflow for it. Maybe Mittyri can help.
  • What most people do:
    Ignore any negative time of the predose samples and replace it with 0.
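A one-liner for option 2 in R (a sketch; t0|C0 is the pre-dose sample, t1|C1 the first post-dose sample):

c0.interp <- function(t0, C0, t1, C1) C0 - t0 * (C1 - C0) / (t1 - t0)
c0.interp(-0.017, 1, 1, 60)   # 1.986, instead of 20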

❝ There are also some more questions about AUC0-τ:

❝ The dosing interval (τ, 24 hours) is always a constant for all subjects, regardless of the actual dose period, isn't it?


Yes. Otherwise you would open Pandora’s box.

❝ What is the best way to handle a BLQ at the end of the dosing interval at steady state?


Lin-up/log-down as usual. Don’t you have any accumulation or is the method lousy? ;-)

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Astea (Russia)
2019-02-10 21:50
@ Helmut
Posting: # 19898

Cτ for lin and lin-up/log-down

Dear Helmut!

I'm grateful for your quick reply!

❝ By the linear trapezoidal method (dammit!)… With lin-up/log-down I get 1298.


For some reason (maybe because of its simplicity) Russians like the linear method and include it in their protocols.

I recalculated via lin-up/log-down and now I'm puzzled by a new question: why does Cτ differ depending on the AUC calculation method? For linear I got 3.38… (by the way, what are (1) and (2) in your presentation, slide 18?)

❝ Let’s call the first two datapoints t0|C0...


Is it correct to use t0 and t1 if the dosing time falls between these two points? I mean, before the dose (t0) the curve should be decreasing exponentially, while at t1 it should be rising roughly linearly?

❝ ❝ What is the best way to handle a BLQ at the end of the dosing interval at steady state?

❝ Lin-up/log-down as usual. Don’t you have any accumulation or is the method lousy? ;-)


I was lucky enough not to face it till now, but shit happens, you know, and a danger foreseen is half avoided. Is it a bad idea to omit that BLQ in the steady-state elimination phase?

What about regulatory acceptance of any of these calculation modifications? I may use sophisticated methods of numerical integration, but would they be accepted?

"Being in minority, even a minority of one, did not make you mad"
Helmut (Vienna, Austria)
2019-02-11 02:45
@ Astea
Posting: # 19899

Cτ by lin-/lin, lin-up/log-down, and λz

Hi Nastia,

❝ ❝ By the linear trapezoidal method (dammit!)…

❝ For some reason (maybe because of its simplicity) Russians like the linear method and include it in their protocols.


Simplicity or simple minds? SCNR. Following textbooks of the 70s of the last century? No pocket calculators? Even in f**ing Excel one can implement the lin-up/log-down.

❝ I recalculated via lin-up/log-down and now I'm puzzled by a new question: why does Cτ differ depending on the AUC calculation method? For linear I got 3.38…


Gotcha! By specifying the linear trapezoidal you tell PHX that you believe that the drug between time points is eliminated in zero (!) order like водка. You get what you want for t=24, namely by linear interpolation (24–18)(2–20)/(24.5–18)+20=3.385. ∎

❝ (by the way, what are (1) and (2) in your presentation, slide 18?)


Values obtained by formulas (1) and (2) of slide 16.
Note that in interpolation PHX doesn’t use the estimated λz (6–24.5: 0.2103) but the local slope (18–24.5: 0.3542).
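As a sketch in R (interpolating Cτ from the bracketing samples 18 h | 20 and 24.5 h | 2):

t1 <- 18; C1 <- 20; t2 <- 24.5; C2 <- 2; tau <- 24
C1 + (tau - t1) * (C2 - C1) / (t2 - t1)  # linear trapezoidal -> linear interpolation: 3.385
k  <- log(C1 / C2) / (t2 - t1)           # local slope 0.3542 (not lambda_z!)
C1 * exp(-k * (tau - t1))                # lin-up/log-down -> log interpolation: 2.388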



❝ Is it correct to use t0 and t1 if the dosing time falls between these two points? I mean, before the dose (t0) the curve should be decreasing exponentially, while at t1 it should be rising roughly linearly?


Good point. Unfortunately we don’t know the elimination of the previous dose.

❝ ❝ […] Don’t you have any accumulation or is the method lousy? ;-)

❝ I was lucky enough not to face it till now, but shit happens, you know, and a danger foreseen is half avoided. Is it a bad idea to omit that BLQ in the steady-state elimination phase?


Maybe, maybe not. Depends on your sampling schedule and how much the previous concentration is >LLOQ.

❝ What about regulatory acceptance of any of these calculation modifications? I may use sophisticated methods of numerical integration, but would they be accepted?


Duno. I always describe exactly what I plan to do in the protocol. Never got any questions afterwards. Believe me, I would make the life of an assessor miserable if at the end he/she doesn’t buy sumfink which was approved by his/her own agency and the ethics committee before. :-D

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Astea (Russia)
2019-02-11 20:46
@ Helmut
Posting: # 19900

inter- vs extra-

Dear Helmut!

Thanks a lot for the clarification! The more answers I get, the more questions I have... Why do algorithms for inter- and extrapolation differ?

Let's say that the last point is NA. Then PHX will calculate it via the estimated λz, suspecting log elimination, regardless of the actual AUC calculation method.

❝ Gotcha! By specifying the linear trapezoidal you tell PHX that you believe that the drug between time points is eliminated in zero (!) order


Then why does it use lambda and log elimination for unknown samples but the local slope and linear elimination for known samples? What's wrong with calculating by λz in both cases?

Of course I can try to think about it as a compromise (see this post), but then how do I define my own rule and be sure that it is correct?

❝ Duno. I always describe exactly what I plan to do in the protocol.

That's a perfect position. But sometimes we have to work with what someone has planned before.

I was always wondering how far one can go in stating his favourite methods in SOPs and SAPs? Can I write that after the calculation a statistician should have a cup of tea? :-)

"Being in minority, even a minority of one, did not make you mad"
Helmut (Vienna, Austria)
2019-02-12 03:20
@ Astea
Posting: # 19901

inter- vs extra-

Hi Nastia,

❝ The more answers I get, the more questions I have...


That’s what science is all about, right?

We learn more by looking for the answer to a question and not finding it
than we do from learning the answer itself.
  Lloyd Alexander


❝ Why do algorithms for inter- and extrapolation differ?


Sorry, ask Certara/Pharsight. ;-)

❝ Let's say that the last point is NA. Then PHX will calculate it via the estimated λz, suspecting log elimination, regardless of the actual AUC calculation method.


I wouldn’t say suspecting but rather rightly assuming.

❝ ❝ By specifying the linear trapezoidal you tell PHX that you believe that the drug between time points is eliminated in zero (!) order


❝ Then why does it use lambda and log elimination for unknown samples but the local slope and linear elimination for known samples? What's wrong with calculating by λz in both cases?


Good question. I think that using λz in either case would not be a bad idea because it is more reliable (i.e., it takes into account information from samples with higher concentrations, which are more accurate). On the same account I never had problems in using $$AUC_{0-\infty}=AUC_{0-\text{t}}+\frac{\widehat{C}_\text{t}}{\widehat{\lambda}_\text{z}}\tag{1}$$ instead of the simple $$AUC_{0-\infty}=AUC_{0-\text{t}}+\frac{C_\text{t}}{\widehat{\lambda}_\text{z}}\tag{2}$$ In Phoenix WinNonlin \(\small{(1)}\) is termed AUCINF_pred and \(\small{(2)}\) AUCINF_obs. See also this article.
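As a sketch in R (lz stands for the estimated λz and b0 for the intercept of the terminal log-linear fit – the names are mine, not PHX output labels):

auc.inf <- function(auc.t, Clast, tlast, lz, b0) {
  c(AUCINF_obs  = auc.t + Clast / lz,                 # eq. (2): observed Clast
    AUCINF_pred = auc.t + exp(b0 - lz * tlast) / lz)  # eq. (1): predicted Clast
}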

❝ Of course I can try to think about it as a compromise (see this post), but then how do I define my own rule and be sure that it is correct?


Yep (you are an expert in using the forum’s search function and digging out old posts).
High time for us all to switch to AUCtlast (Common).*

❝ ❝ I always describe exactly what I plan to do in the protocol.

❝ That's a perfect position. But sometimes we have to work with what someone has planned before.


I know, I know…

❝ I was always wondering how far one can go in stating his favourite methods in SOPs and SAPs?


As I said above, very far. Maybe I was lucky? IMHO, in the SAP it’s easier because it is more specific to the study. In an SOP you likely end up with a crazy flowchart trying to cover all potential cases only to discover later that you have forgotten one.
NB, I don’t have a single SOP for NCA.

❝ Can I write that after the calculation a statistician should have a cup of tea? :-)


Or even? :party: Borderline.


  • Fisher D, Kramer W, Burmeister Getz E. Evaluation of a Scenario in Which Estimates of Bioequivalence Are Biased and a Proposed Solution: tlast (Common). J Clin Pharm. 2016;56(7):794–800. doi:10.1002/jcph.663. Open access.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
mittyri (Russia)
2019-02-14 13:55
@ Astea
Posting: # 19917

No rule fits all

Hi Astea,

❝ Why do algorithms for inter- and extrapolation differ?


Well… I don't think there is a simple solution that fits all situations.
Say we have
Data set
 Time  Conc
0          1
1          60
2          80
3          100
4          130
5          160
6          130
7          100
8          90
10         70
12         40
14         30
18         60
24         2

Something is wrong at the 18 h point. It could be a good idea to exclude that point from the λz estimation, right?
Now imagine we want to get AUC0-18. Since that point exists, we cannot ignore it.
The lin-up/log-down method gives AUC0-18 = 1320.139
So far so good.
The next goal is AUC0-17.9, but using not a simple interpolation between the neighbours but the ‘intelligent’ log-linear regression fit given by λz:
exp(-lambda*17.9+interc)
[1] 15.0076

AUC0_17.9 = 1224.5557 :confused:

Do you feel the PK inconsistency here? Why does AUC0-18 differ so much from the value 0.1 h earlier?
I think this is one reason (only one of many, I think!) why the methods differ.
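The 1320.139 can be verified with a small lin-up/log-down helper in R – a sketch; the λz-based endpoint is taken from the fit quoted above, so only the approach (not the regression itself) is shown:

auc.linlog <- function(t, C) {             # linear-up / log-down trapezoids
  up <- diff(C) >= 0
  sum(ifelse(up, diff(t) * (head(C, -1) + tail(C, -1)) / 2,
                 diff(t) * -diff(C) / log(head(C, -1) / tail(C, -1))))
}
t <- c(0, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 18)
C <- c(1, 60, 80, 100, 130, 160, 130, 100, 90, 70, 40, 30, 60)
auc.linlog(t, C)                           # AUC0-18 = 1320.139
C17.9 <- 15.0076                           # exp(-lambda*17.9 + interc) from above
auc.linlog(c(t[t <= 14], 17.9), c(C[t <= 14], C17.9))  # ~1224.6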

Regarding Cmin as the first point: there could be other situations where using Cmin from the overall dataset (not the dosing interval) is not a good idea. Ctau,pred? OK, but what if λz is not estimable for some reason?
I don't think the default WNL NCA rules will change, since that could break a lot of templates and projects. Call it unhealthy and lazy conservatism ;-)

PS: you can ask Certara to add some new optional NCA rule.

Kind regards,
Mittyri
Astea (Russia)
2019-02-16 09:33
@ mittyri
Posting: # 19926

one size fits all vs goal posts

Dear Mittyri!

Thank you for your remarkable example! I've got it. But there could be some philosophical thoughts: maybe 60 is a mistake (an error, or samples were mixed up); then what would be better – to carefully calculate the area under the 30-60-2 triangle or to calculate the area under the curve assuming log-linear elimination? If we have a large sample size in the study and nobody else shows the same pharmacokinetic features, that is a reason to think about what it could be.

❝ I don't think the default WNL NCA rules will change, since that could break a lot of templates and projects. Call it unhealthy and lazy conservatism ;-)


I don't believe that, because I have observed a lot of changes and improvements while using PHX through the years :-)

Another point is that I have used PHX NCA like a standard candle (maybe regulators do as well?). But it turns out that in some rare extreme cases it is worth thinking about alternative approaches. Would they be accepted? How can I be sure that the method stated in the SAP works better than the PHX algorithm? Is it possible to explain in the report any disagreement with the PHX output?

"Being in minority, even a minority of one, did not make you mad"
ElMaestro (Denmark)
2019-02-16 14:33
@ Astea
Posting: # 19927

one size fits all vs goal posts

Hi all,

also remember that when we do BE we treat T and R in the same fashion. So regardless of how terribly we bias some estimate, we may do that equally for the two treatments we are comparing. This will work as long as the phenomena that we discuss (funky data points, imputation, deletion, addition, intelligent fixes) occur with equal probability across groups.
We may bias some estimate (like lambda) rather terribly in either the positive or the negative direction, but that does not automatically imply that the expected point estimate changes, and therefore it may be entirely safe in terms of the patient's risk. "It depends", as they say.

For example, there was a poster recently (Mittyri? Hötzi? Someone else?) who wrote that when we do log down in BE we assume a first order elimination. That is very much a personal interpretation, I think. When I do log or linear down in BE, I am personally only saying I am willing to make the same error for T and R regardless of how the drug is eliminated, full stop :-) and that is why I am not necessarily strongly favouring one method over the other. Even if the SPC or FOI info indicates first order elimination I am perfectly fine with lin. down especially if this is what the CRO usually does well. Wouldn't want them to get out of their comfort zone.
Off the top of my head, I believe I have never been in a situation where I needed to defend an AUC value per se; I was only in situations where regulators questioned the proof of BE.

For dose-finding or superiority it may be a somewhat different situation.

❝ Another point is that I have used PHX NCA like a standard candle (maybe regulators do as well?). But it turns out that in some rare extreme cases it is worth thinking about alternative approaches. Would they be accepted? How can I be sure that the method stated in the SAP works better than the PHX algorithm? Is it possible to explain in the report any disagreement with the PHX output?


Regulators are using PHX now????

Pass or fail!
ElMaestro
Helmut (Vienna, Austria)
2019-02-16 15:26
@ ElMaestro
Posting: # 19929

Bias etc.
 Bias etc.

Hi ElMaestro

❝ […] remember that when we do BE we treat T and R in the same fashion. So regardless of how terribly we bias some estimate, we may do that equally for the two treatments we are comparing.


Reads nice on paper but people might be tempted to fiddle around if they are already unblinded before performing NCA. Hence, my sequence is:
  1. Import necessary data (doses, dosing times, actual sampling time points, measured concentrations).
  2. Perform NCA in a blinded manner (in PHX-lingo: Sort the dataset by subject and period).
  3. Lock the results.
  4. Import the randomization and join the respective tables.
Actually, I don’t even have the randomization beforehand. I ask the sponsor or CRO for it after #3. Hence, nobody can blame me that I didn’t “treat T and R in the same fashion”.

❝ […] there was a poster recently (Mittyri? Hötzi? Someone else?) …


Myself for sure. I’m notorious for that.

❝ … who wrote that when we do log down in BE we assume a first order elimination.


Yep.

❝ That is very much a personal interpretation, I think. When I do log or linear down in BE, I am personally only saying I am willing to make the same error for T and R regardless of how the drug is eliminated, full stop :-)


Agree, if (!) there are no missings. OK, one exception: Same time point missing in the same subject.

❝ Even if the SPC or FOI info indicates first order elimination I am perfectly fine with lin. down especially if this is what the CRO usually does well. Wouldn't want them to get out of their comfort zone.


If the linear trapezoidal defines the comfort zone of a CRO it disqualifies itself (linear-up / log-down has been implemented in software for ages). If someone isn’t able to RTFM and push yet another button, well…

❝ Off the top of my head, I believe I have never been in a situation where I needed to defend an AUC value per se; …


Not an AUC, but I was once asked whether my software for NCA was validated (see there). Funnily enough, the agency asked for cross-validation [sic] against “commercial software like WinNonlin” as if commercial software were the gold standard.

❝ Regulators are using PHX now????


The FDA does (modeling). The Saudi FDA does (BE, though in addition to SAS).

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut (Vienna, Austria)
2019-02-16 14:59
@ Astea
Posting: # 19928

software: NCA not validated

Hi Nastia,

❝ […] it turns out that in some rare extreme cases it is worth thinking about alternative approaches. Would they be accepted?


As I said numerous times – if described in the protocol – yes.

❝ How can I be sure that the method stated in the SAP works better than the PHX algorithm? Is it possible to explain in the report any disagreement with the PHX output?


Good questions. No software is validated when it comes to NCA. Assessors/inspectors, are you aware of that? See my attempt there.
For years the NCA Consortium has been struggling to come up with sumfink. You have to apply for membership; interestingly enough, the majority of members come from originators or academia. The task turned out to be rather difficult…

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
mittyri (Russia)
2019-02-20 22:22
@ Helmut
Posting: # 19949

no way out for NCA validation?

Hi Helmut,

from your lecture:
‘Black box’ validation
  • Run datasets with certified results (e.g., from NIST’s Statistical Reference Datasets Project).
    – FDA (2002)
    • Testing with usual inputs is necessary.
    • However, testing a software product only with expected, valid inputs does not thoroughly test that software product.
    • By itself, normal case testing cannot provide sufficient confidence in the dependability of the software product.

  • Create ‘worst-case’ datasets (extreme range of input, enter floating point numbers to integer fields, enter characters to numeric fields…).

So what is the problem with validating WNL NCA using such datasets (OK, not only from the literature, since there are so many troubles as you showed, but also simulated ones)?
All rules are written in the User's Guide, so it could be cross-checked against hand-made R code. Or should it be validated against other validated software (which does not exist)?

Kind regards,
Mittyri
mittyri (Russia)
2019-02-20 22:40
@ Astea
Posting: # 19950

Default rules

Dear Astea,

❝ But there could be some philosophical thoughts: maybe 60 is a mistake (an error, or samples were mixed up); then what would be better – to carefully calculate the area under the 30-60-2 triangle or to calculate the area under the curve assuming log-linear elimination?

BE guidelines are strict enough (recall the cases with substituted samples shown by Helmut?) – you cannot exclude a concentration point from the PK analysis (even if you have a strong suspicion about it). Thus, if you exclude that point from the λz estimation and then use λz for interpolation, you are ignoring that point. I would not recommend it (at least for BE).

❝ I don't believe that, because I have observed a lot of changes and improvements while using PHX through the years :-)

The only DEFAULT thing that was changed (please correct me) is the way therapeutic areas are calculated. All additional features like other PK parameters, concentrations, and λz rules do not affect the default computations. I mean, if you run NCA on Time vs. Conc data in WNL 6.3 and then rerun it in WNL 8.1, the values will be the same.

Kind regards,
Mittyri