jag009
★★★

NJ,
2013-07-17 17:58
(4303 d 06:31 ago)

Posting: # 10998
Views: 23,559
 

 1 compartment model with lag time [PK / PD]

Hi everyone,

Need some help. Could someone solve the k01 term from the 1 compartment w lag time model?

C(t) = DK01/V(K01-K10) * e(-K10*t-tlag) - e(-k01*t-tlag)

Thanks
John
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-17 18:08
(4303 d 06:21 ago)

@ jag009
Posting: # 10999
Views: 22,241
 

 solve for k01?

Hi John,

❝ Could someone solve the k01 term from the 1 compartment w lag time model?


❝ C(t) = DK01/V(K01-K10) * (e(-K10*(t-tlag)) - e(-k01*(t-tlag)))


Some () were missing in your formula (I added them above)… What do you mean by ‘solve’? Can you elaborate?

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
jag009
★★★

NJ,
2013-07-17 18:19
(4303 d 06:10 ago)

@ Helmut
Posting: # 11001
Views: 22,382
 

 solve for k01?

Thanks Helmut,

❝ I was missing some () in your formula… What do you mean by ‘solve’? Can you elaborate?


I meant to solve for K01 if given C, V, K10, t, tlag:
K01=....

John
ElMaestro
★★★

Denmark,
2013-07-17 19:09
(4303 d 05:20 ago)

@ jag009
Posting: # 11004
Views: 22,235
 

 solve for k01?

Hi John,

❝ K01=....


Can't imagine why you need to do it, but I think you will have to do that numerically. Wouldn't be wrong anyway.

Pass or fail!
ElMaestro
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-18 01:01
(4302 d 23:28 ago)

@ jag009
Posting: # 11010
Views: 22,149
 

 no closed form

Hi John,

after unsuccessfully fiddling with paper, pencil, and a little brain I decided to fire up my symbolic algebra system Maxima.*

Seems that this equation cannot be solved in closed form. So you have to go with one of the numerical algos suggested by Yung-jin and ElMaestro.
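In base R, the numerical route is a one-liner with uniroot(). A minimal sketch (not anyone's production code; the known values are the ones Yung-jin assumes in his post below, and the bracketing interval is deliberately placed above K10 to avoid both the singularity at K01 = K10 and the second, ‘flip-flop’ root):

```r
# Minimal sketch: solve C(t) = 3.551 for K01 numerically with uniroot().
# Assumed knowns (from Yung-jin's example below): D = 100, V = 10,
# K10 = 0.213, t = 6, tlag = 0.5.
conc <- function(K01, D = 100, V = 10, K10 = 0.213, t = 6, tlag = 0.5) {
  D * K01 / (V * (K01 - K10)) * (exp(-K10 * (t - tlag)) - exp(-K01 * (t - tlag)))
}
# Bracket above K10 = 0.213 to skip the singularity and the smaller
# ('flip-flop') solution of the equation:
root <- uniroot(function(K01) conc(K01) - 3.551,
                interval = c(0.3, 10), tol = 1e-9)$root
root   # close to the K01 = 1.67 used to generate the value
```

Note that the biexponential admits two K01 values giving the same concentration, so the choice of bracket decides which solution you get.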


  • If you want to give it a try yourself: Get Maxima and install wxMaxima as GUI. I don’t recommend the native GUI xMaxima.

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-17 22:57
(4303 d 01:32 ago)

(edited on 2013-07-18 13:00)
@ jag009
Posting: # 11005
Views: 22,316
 

 1 compartment model with lag time

Hi John,
Let me try to solve this puzzle. I used Helmut's equation, since his is the correct one. What I am using is JAGS with R; two R packages will be required (rjags and R2jags). Here is a brief tutorial; many more can be found with Google. Since you didn't give any values for this question, I used the following parameters to simulate a dataset first.

Parameter values I used:
D = 100
V = 10
K01 = 1.67 <-- we want to calculate this param.
K10 = 0.213
t_lag = 0.5

So I got C(t) as follows:

t (hr)  C(t)
0       0
1       5.331
2       7.391
3       6.553
4       5.405
6       3.551 <- we will use this C(t).
8       2.320
12      0.990
16      0.422
18      0.276
24      0.0768


Here is the JAGS model. You can copy & paste the model and save it as "jag009.txt" in your R's working directory (or folder).

model
    {
     C~dnorm(mu, 1.0E+6)     # C: drug concentration
     mu<-D*K01/(V*(K01-K10))*(exp(-K10*(t-t_lag))-exp(-K01*(t-t_lag)))
     K01~dnorm(theta1, 10)   # define prior of K01 with mean theta1 here
     theta1<-0.8             # prior of K01; as close to the real number
                             # as possible; or just guess one if don't know it.
    }


Then run the following R script.

library(rjags)
library(R2jags)
dataList= list(
C=3.551,
D=100,
V=10,
K10=0.213,
t=6,
t_lag=0.5
)
params = c("K01")              # The parameter(s) to be monitored.
initsList = list(K01=0.8)      # initial value; I tried not to give a value too close to the true K01
adaptSteps = 500               # Number of steps to "tune" the samplers.
burnInSteps = 6000             # Number of steps to "burn-in" the samplers.
nChains = 3                    # Number of chains to run.
numSavedSteps=100000           # Total number of steps in chains to save.
thinSteps=1                    # Number of steps to "thin" (1=keep every step).
nIter = ceiling(( numSavedSteps * thinSteps ) / nChains )   # Steps per chain.
# Create, initialize, and adapt the model:
jagsModel = jags.model("jag009.txt", data=dataList ,   
n.chains=nChains , n.adapt=adaptSteps, inits=initsList)
# Burn-in:
cat("Burning in the MCMC chain...\n")
update(jagsModel, n.iter=burnInSteps)
# The saved MCMC chain:
cat("Sampling final MCMC chain...\n")
codaSamples <- coda.samples(jagsModel, params, n.iter=nIter)
show(summary(codaSamples))
dev.new()
plot(codaSamples)
dev.new()
gelman.plot(codaSamples)
cat("\n")
K01.mat <- as.matrix(codaSamples[[1]])
posterior.estimates <- rbind(K01.mat)
vars <- t(as.matrix(posterior.estimates))
params.mean <- apply(vars, 1, mean)
K01.mean <- as.matrix(params.mean[[1]])
C<-3.551; D<-100; V<-10; K10<-0.213; t<-6; t_lag<-0.5  ### all known parameter values here
C_calc<-D*K01.mean/(V*(K01.mean-K10))*(exp(-K10*(t-t_lag))-exp(-K01.mean*(t-t_lag)))
out<-data.frame(Parameters=c("sim. C(t)","calc. C(t)"),Values=c(C, C_calc))
cat("\n");show(out);cat("\n\n")

Finally, I got
...
Iterations = 6501:39834
Thinning interval = 1
Number of chains = 3
Sample size per chain = 33334

1. Empirical mean and standard deviation for each variable,
   plus standard error of the mean:

          Mean             SD       Naive SE Time-series SE
     1.670e+00     3.281e-03      1.038e-05      1.319e-05

2. Quantiles for each variable:

 2.5%   25%   50%   75% 97.5%
1.663 1.667 1.670 1.672 1.676

  Parameters   Values
1  sim. C(t) 3.551000
2 calc. C(t) 3.551006

Judging from the two diagnostic plots (not attached with this post), it converges successfully even with initial K01 = 0.8. Try it yourself. Good luck.

All the best,
-- Yung-jin Lee
bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
ElMaestro
★★★

Denmark,
2013-07-17 23:53
(4303 d 00:36 ago)

@ jag009
Posting: # 11006
Views: 22,574
 

 Package-free solution

Hi John,

Here's an offer of a quick and dirty package-free approach to yung-jin's dataset:

Conc=function(D, K01, K10, V, t, tlag)
{
 C = ((D*K01)/(V*(K01-K10))) * (exp(-K10*(t-tlag)) - exp(-K01*(t-tlag)))
 return(C)
}

Cx=Conc(100, 1.67, 0.213, 10, 6, 0.5)
Cx ## this is the known!
##Let's assume we know D, K01, V, tlag
##We want to find a K10 that gives this result Cx at the given t.

K01_fubar=function(K01start, K01end, Gridsize)
{
 tmpK01=K01start
 hotK=K01start
 minDiff=Inf   # start with an 'infinitely' bad fit
 d=(K01end-K01start)/Gridsize
 for (i in 1:Gridsize)
   {
   if (abs(Cx-Conc(100, tmpK01, 0.213, 10, 6, 0.5))<minDiff)
    {
      minDiff=abs(Cx-Conc(100, tmpK01, 0.213, 10, 6, 0.5))
      hotK=tmpK01
    }
   tmpK01=tmpK01+d
   }
 return(hotK)
}

Solution=K01_fubar(0.001, 10.0, 10000)
Solution
##want more decimals?? the interval above had a grid density of (10.0-0.001)/10000 so...
d=(10.0-0.001)/10000
Solution=K01_fubar(Solution-d, Solution+d, 10000)
Solution

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-18 00:11
(4303 d 00:18 ago)

@ ElMaestro
Posting: # 11007
Views: 22,227
 

 Package-free solution

Dear ElMaestro,

:thumb up: Very smart numerical approach. Works perfectly. Simple and clean. Can this approach be used to solve more than one parameter, such as K10 and K01 simultaneously? Hmm...

All the best,
-- Yung-jin Lee
ElMaestro
★★★

Denmark,
2013-07-18 00:27
(4303 d 00:02 ago)

@ yjlee168
Posting: # 11008
Views: 22,193
 

 Package-free solution

Hi yung-jin,

❝ :thumb up: Very smart numerical approach. Works perfectly. Simple and clean. Can this approach be used to solve more than one parameter, such as K10 and K01 simultaneously? Hmm...


Yes it can. This time it was a one-dimensional problem, but it can be extended to any number of dimensions. It is a typical grid search. The trouble is that for each new dimension the computation time is multiplied by a factor of Gridsize. In this case I used 10000 steps as this is just one dimension. But if we try two dimensions and use 10000 steps per dimension, well, it is going to be (10^4)^2 = 10^8 loops then and you might be late for supper. Better to decrease the grid density and iterate the optimiser more times instead. Of course one can work with X steps in one dimension and Y steps in the other dimension if a speed gain is envisaged. Everything is possible.
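To make the scaling argument concrete, here is a hypothetical two-dimensional version of the grid search (coarser grids, and two observed points from Yung-jin's simulated profile, since a single point cannot identify two parameters):

```r
# Hypothetical 2-D grid search for K01 and K10 together, minimising the
# sum of squared errors over two observed (t, C) pairs of the simulated
# profile (D = 100, V = 10, tlag = 0.5; C(6) = 3.551, C(12) = 0.990).
conc <- function(D, K01, K10, V, t, tlag)
  (D * K01 / (V * (K01 - K10))) * (exp(-K10 * (t - tlag)) - exp(-K01 * (t - tlag)))
obs_t <- c(6, 12); obs_C <- c(3.551, 0.990)
grid  <- expand.grid(K01 = seq(0.5, 3.0, length.out = 200),
                     K10 = seq(0.05, 0.5, length.out = 200))
sse   <- apply(grid, 1, function(p)
  sum((obs_C - conc(100, p["K01"], p["K10"], 10, obs_t, 0.5))^2))
best  <- grid[which.min(sse), ]
best   # close to K01 = 1.67, K10 = 0.213
```

Even these coarse grids already cost 200 × 200 = 40 000 evaluations; doubling the resolution quadruples the work, which is exactly the curse described above.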

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-18 00:42
(4302 d 23:47 ago)

@ ElMaestro
Posting: # 11009
Views: 22,174
 

 Package-free solution

Dear ElMaestro,

I see. It's amazing to see the power of grid search here. I feel like my approach was using a cannon to shoot a bird for John's question. Thank you so much. :ok:

❝ Yes it can. This time it was a one-dimensional problem, but it can be extended to any number of dimensions. It is a typical grid search. [..] Everything is possible.


All the best,
-- Yung-jin Lee
ElMaestro
★★★

Denmark,
2013-07-18 01:21
(4302 d 23:09 ago)

@ ElMaestro
Posting: # 11012
Views: 22,315
 

 Package-free solution

And here's for those who prefer a minimum of typing:

Conc=function(D, K01, K10, V, t, tlag)
{
 C = ((D*K01)/(V*(K01-K10))) * (exp(-K10*(t-tlag)) - exp(-K01*(t-tlag)))
 return(C)
}

Cx=Conc(100, 1.67, 0.213, 10, 6, 0.5)

Objective.Func=function(K01)
{
  return(abs(Cx-Conc(100, K01, 0.213, 10, 6, 0.5)))
}

optim(c(0.1), Objective.Func, method="Brent", lower=0.001, upper=100.0)

d_labes
★★★

Berlin, Germany,
2013-07-18 10:45
(4302 d 13:44 ago)

@ ElMaestro
Posting: # 11014
Views: 22,279
 

 A new R-King was born

Mei liaba Kokoschinsky!
(Bavarian, roughly: “My dear Kokoschinsky!”)

:thumb up: Respect!

Can you enlighten me why you chose method="Brent"?

The range 0.001-100 is surely nearly exhaustive for k01 (elimination constant), but if elimination half life is >693.1472 hours ... :-D

What about the simple solution: "Log-linear regression of points after two times tmax (known as TTT rule), negative slope =k01"?
Of course restricted to the one-compartment model.
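Applied to Yung-jin's simulated profile above (tmax = 2 h, so all points from t ≥ 4 h), that regression could be sketched as follows. One caveat (editor's note, not Detlew's): with those parameter values K01 > K10, so the terminal slope recovers K10 = 0.213; it equals k01 only under flip-flop kinetics (k01 < k10).

```r
# Sketch of the log-linear-regression suggestion on the simulated data:
# regress log(C) on t for all points after two times tmax (the TTT rule).
t <- c(4, 6, 8, 12, 16, 18, 24)   # points at/after 2*tmax = 4 h
C <- c(5.405, 3.551, 2.320, 0.990, 0.422, 0.276, 0.0768)
fit     <- lm(log(C) ~ t)
lambdaz <- -coef(fit)[["t"]]
lambdaz   # terminal rate constant, close to K10 = 0.213 here
```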

@John:
A general criticism: I don't understand the problem. If we have concentration-time points and fit a one-compartment model, we get all the parameters necessary for that model, including k01. I can't imagine situations where we have V, k10, t, tlag only but no k01.

Regards,

Detlew
ElMaestro
★★★

Denmark,
2013-07-18 11:07
(4302 d 13:22 ago)

@ d_labes
Posting: # 11015
Views: 22,280
 

 Brent

Hi d_labes,

❝ Can you enlighten me what the reason was to choose method="Brent"?


For the case of a one-dimensional optimisation problem, the Nelder-Mead is as unreliable as a used-car salesman. In such cases Brent is a good choice. If we want to do two-dimensional (or higher) optimisation then I'd probably go with the default Nelder-Mead.

Along these lines: Yung-jin, for a two-dimensional problem as discussed above the Nelder-Mead etc. may perform much more efficiently than a grid search.
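A hypothetical sketch of that two-dimensional fit with optim()'s default Nelder-Mead (again reusing two points of Yung-jin's simulated profile; the guard against K01 ≤ K10 is an addition to keep the objective finite):

```r
# Two-parameter fit (K01, K10) with the default Nelder-Mead simplex.
conc <- function(D, K01, K10, V, t, tlag)
  (D * K01 / (V * (K01 - K10))) * (exp(-K10 * (t - tlag)) - exp(-K01 * (t - tlag)))
obj <- function(p) {                            # p = c(K01, K10)
  if (p[2] <= 0 || p[1] <= p[2]) return(1e10)   # keep K01 > K10 > 0
  sum((c(3.551, 0.990) - conc(100, p[1], p[2], 10, c(6, 12), 0.5))^2)
}
fit <- optim(c(1, 0.1), obj)    # method defaults to Nelder-Mead
fit$par                         # converges near c(1.67, 0.213)
```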

Re. the TTT rule: I have no experience with it in practice. It sounds like something that for all practical purposes should be good enough.

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-18 12:21
(4302 d 12:08 ago)

@ d_labes
Posting: # 11016
Views: 22,456
 

 A new R-King was born

Dear Detlew,

❝ [...]

❝ What about the simple solution: "Log-linear regression of points after two times tmax (known as TTT rule), negative slope =k01"?

❝ Of course restricted to one-compartment model.


For John's question, it just happened that there was only one data point (t, C(t)) available. Log-linear regression with the TTT rule seems not possible in this case. Of course, if John had serial conc. vs. time data points, he could fit a one-compartment PK model directly to get all parameters simultaneously, or use log-linear regression with the TTT rule as you suggest.

All the best,
-- Yung-jin Lee
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-18 22:28
(4302 d 02:01 ago)

@ yjlee168
Posting: # 11023
Views: 22,177
 

 OT: TTT in bear

Dear Yung-jin,

❝ […] using log-linear regression with TTT rule as you suggest.


Incidentally I met one of the authors of the TTT-paper (Christian Scheerans) last month. He knew that you have implemented the method in bear after our discussions in the forum. He was very happy about that. :-D He introduced R in his company (which was a Sisyphean task because his company is a very conservative German innovator). The only reason why he performed his simulations in Excel’s VBA was that he didn’t know R at that time. We (Martin, Jack, and I) performed a comparison of different algos and confirmed that there are indeed differences in terms of bias and precision – but only when it comes to λz itself. The differences diminished when we compared AUCs. So we lost interest and never got it published.
After some discussion with Christian we are considering extending the method to multicompartment profiles. Hopefully it will not go to the to-do-list and stay there forever…

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-19 00:17
(4302 d 00:12 ago)

@ Helmut
Posting: # 11024
Views: 22,388
 

 OT: TTT in bear

Dear Helmut,

Yes, as far as I remember, after we announced bear v1.0.0 in 2008 at this Forum, we took most of the advice from colleagues of this Forum to improve bear. Adding different algos (such as ARS, TTT, AIC, and combo) to estimate λz was one of the major tasks at that time. When finishing this part, we did some research trying to figure out whether the different algos really made a difference. Basically, we got similar results as you did: no differences could be found with AUC0-∞, but λz and AUC0-t (AUC from time zero to the last measurable conc.) could be significantly different. We did not publish the results either. I think it is mostly because the extrapolated AUC (AUCt-∞) is only 10-20% of AUC0-∞, or even less; thus it may not have a significant impact on AUC0-∞. Some countries do not even include AUC0-∞ among the pivotal BE parameters. So I very much agree with what you got from comparing different algos for λz estimation. However, we did learn a lot from adding these algos to bear.

❝ Incidentally I met one of the authors of the TTT-paper (Christian Scheerans) last month. He knew that you have implemented the method in bear after our discussions in the forum. He was very happy about that. :-D [...]



Thank you and Christian.

❝ After some discussion with Christian we are considering to extend the method for multicompartment profiles. Hopefully it will not go to the to-do-list and stay there forever…


In this thread, I noticed that Detlew mentioned "TTT rule... only restricted to one-compartment model..." And now you say "... to extend the method for multicompartment profiles..." I cannot quite remember whether the TTT rule is restricted to the one-compartment model or not. Looks like it should be. The thing is that we all use NCA to analyze BE data (such as Cmax & AUCs). This raises some questions:
  1. Does it imply that if we want to choose TTT rule to estimate λz we need to prove that the study drugs exhibit (or best fit with) the one-compartment model in the body first?
  2. Is it something related to "the inflection point of the curve" (i.e., the second point; the first is Cmax) in one-compartment model?
  3. Should we put "warning" in bear for this restriction?
  4. Also, if TTT rule is a model-dependent algo for λz estimation, is it appropriate to add the algo to the part of NCA?

All the best,
-- Yung-jin Lee
d_labes
★★★

Berlin, Germany,
2013-07-19 11:47
(4301 d 12:42 ago)

@ yjlee168
Posting: # 11025
Views: 21,987
 

 OT: TTT subtleties

Dear Yung-jin,

❝ we did some research trying to figure out if different algo really made differences. Basically, we got similar results as you did: no differences could be found with AUC0-∞, but λz and AUC0-t (AUC from time zero to the last measurable conc.) could be significantly different.


:confused: How could this be? How is AUC0-t influenced by the estimate of lambdaZ? IMHO in no way!

❝ In this thread, I noticed that Detlew mentioned "TTT rule... only restricted to one-compartment model..."


This is a misunderstanding. We talked in this thread about models, especially obtaining k01. And in a model other than a one-compartment model you can't get k01 by log-linear regression.

❝ It raises some questions here.

  1. Does it imply that if we want to choose TTT rule to estimate λz we need to prove that the study drugs exhibit (or best fit with) the one-compartment model in the body first?
  2. Is it something related to "the inflection point of the curve" (i.e., the second point; the first is Cmax) in one-compartment model?
  3. Should we put "warning" in bear for this restriction?
  4. Also, if TTT rule is a model-dependent algo for λz estimation, is it appropriate to add the algo to the part of NCA?
IMHO the TTT rule in its strict sense - use all points after two-times-tmax for log-linear regression - is indeed only appropriate for concentration-time curve shapes which resemble those from a one-compartment model. And yes, it has to do with the second inflection point of the curve, after which the linear part of log(C) versus time begins in a one-compartment model.

But as a tool to restrict the upper number of points considered in the terminal phase it may also be used for other shapes. Hopefully the fit criterion you use (adjR2, AIC, or whatever you prefer) will pick the linear part. In that sense I have used the TTT rule and found it gives reasonable results.
In detail, my method is:
  1. Start with the last 3-4 points in the terminal part
  2. Go back adding additional points until you reach the time point after/at TTT.
  3. Choose the points which gave the best fit for a log-linear regression to obtain lambdaZ.
The TTT rule uses ideas derived from models, but is itself not a model-dependent algo because it doesn't fit any model. Besides the assumption of a log-linear part of the concentration-time curve, of course, which is in itself a model conception.
Thus don't worry too much :-D.
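Detlew's three steps could be sketched with a small hypothetical helper (data: Yung-jin's simulated one-compartment profile from above, tmax = 2 h, so TTT = 4 h):

```r
# For each candidate start (from the first point at/after TTT up to the
# last-3-points fit), do a log-linear regression and keep the subset with
# the best adjusted R-squared.
best.lambdaz <- function(t, C, ttt) {
  cand <- which(t >= ttt)
  res  <- sapply(cand[1:(length(cand) - 2)], function(i) {
    idx <- i:length(t)
    f   <- lm(log(C[idx]) ~ t[idx])
    c(start = t[i], lambdaz = -coef(f)[[2]],
      adjR2 = summary(f)$adj.r.squared)
  })
  res[, which.max(res["adjR2", ])]   # subset with the best fit
}
t   <- c(1, 2, 3, 4, 6, 8, 12, 16, 18, 24)
C   <- c(5.331, 7.391, 6.553, 5.405, 3.551, 2.320, 0.990, 0.422, 0.276, 0.0768)
est <- best.lambdaz(t, C, ttt = 4)
est   # lambdaz close to 0.213 (= K10 of the simulation)
```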

Regards,

Detlew
yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-20 21:44
(4300 d 02:45 ago)

@ d_labes
Posting: # 11027
Views: 21,948
 

 OT: TTT subtleties

Dear Detlew,

❝ :confused: How could this be? How is AUC0-t influenced by the estimate of lambdaZ? IMHO in no way!


Oops! Sorry, my mistake. It should be "...but λz and AUCt-∞ (the extrapolated AUC) could be significantly different."

❝ This is a misunderstanding. We talked in this thread about models, especially obtaining k01. And in a model other than a one-compartment model you can't get k01 by log-linear regression


I see. Sorry about my misunderstanding.

❝ IMHO the TTT rule in it's strict sense - use all points after two-times-tmax for log-linear regression - is indeed only appropriate for concentration-time curve shapes which resemble those from a one-compartment model. And yes it has to do with the second inflection point of the curve after which the linear part of log(C) versus time begins in an one-compartment model.


Very happy to know these.

❝ But as a tool to restrict the upper number of points to consider in the terminal phase it may be also used for other shapes. Hopefully the fit criterion you use (adjR2, AIC or whatever you prefer) will pick the linear part. In that sense I have used the TTT rule and found it giving reasonable results.


Perfectly crystal clear. :thumb up: :clap: Yes, it should pick the linear part only, since when it approaches the "nonlinear" part, adjR2 will start decreasing or AIC will increase (theoretically?). However, could you please give us an example to explain how to tell that we get "reasonable results" from TTT when comparing it to the others? If possible, I wish to add this into bear.

❝ To be detailed for my method:

❝ [...]


OK.

❝ The TTT is using ideas derived from models, but itself is not a model-dependent algo because it doesn't use any model. Beside the assumption of a log-linear part of the concentration-time curves of course, which is in itself a model conception.

❝ Thus don't worry to much :-D.


Sounds contradictory but it really makes sense to me. Thanks for your time.

All the best,
-- Yung-jin Lee
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-21 02:08
(4299 d 22:21 ago)

@ yjlee168
Posting: # 11028
Views: 22,347
 

 OT: beyond TTT?

Dear Yung-jin & Detlew,

❝ ❝ But as a tool to restrict the upper number of points to consider in the terminal phase it may be also used for other shapes. Hopefully the fit criterion you use (adjR2, AIC or whatever you prefer) will pick the linear part. In that sense I have used the TTT rule and found it giving reasonable results.


❝ […] Yes, it should be picking linear part only. Since when it approaches "nonlinear" part, adjR2 will start decreasing or AIC will increase in that case (theoretically?).


Maybe. Maybe not.
Scheerans et al. recommend as a starting point the intersection of the last two log/linear lines (“[…] the crossing of the imaginary first and second disposition phase lines in the (semi-logarithmic) plasma profile post Cmax is generally used as a ‘visual marker’ for the beginning of the monoexponential terminal phase.”)¹ Unfortunately this point might be difficult to spot if (1) there is more than moderate noise and (2) distribution/elimination don’t differ too much.

I’m currently exploring this workflow:
  1. Find TTT by the usual algo (see here).
  2. Generate a subset of data in the interval [TTT, tz].
  3. Perform a 2-part linear stick-regression or MARS with a common intercept on log(C), where the initial estimate of the breakpoint δ = 2nd time point after TTT (3 in my example below).
    Y = β0 + β1t + ifelse(t≥δ, β2(t-δ), 0)
  4. Set the starting point of the last phase similar to TTT: Use the first time point which is ≥ the estimated breakpoint δ.
  5. Visual inspection still mandatory.
I used a simple two-compartment model without lagtime, parameterized in macro-constants C(t) = A·e^(–α·t) + B·e^(–β·t) + C·e^(–ka·t), where C = –(A+B). No clear story yet…

If you want to play:
A 75, B 25, α 0.5, β 0.1, ka 2, t {0, 0.25, 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 9, 12, 16, 24}
After adding some noise / rounding I got:
  0.25  29.63
  0.5   46.07
  1     53.68
  1.5   52.75
  2     45.99
  2.5   40.24
  3     34.87
  4     27.19
  6     17.59
  9     11.83
 12      7.39
 16      4.08
 24      2.75

tmax 1, TTT 2, time-interval for stick-regression {2, 24}, δ 7.73 → starting-point for λz 9.
   t { }    n   λz     %RE   R²adj
  1.5 – 24 10 0.1384 +38.36 0.9391 ← max. R²adj, min. AIC
  2   – 24  9 0.1342 +34.19 0.9382 ← TTT/max. R²adj, TTT/min. AIC
  2.5 – 24  8 0.1295 +29.47 0.9362
  3   – 24  7 0.1236 +23.65 0.9333
  4   – 24  6 0.1157 +15.68 0.9290
  6   – 24  5 0.1051 + 5.07 0.9206
  9   – 24  4 0.0950 – 5.04 0.8800 ← ‘breakpoint method’
 12   – 24  3 0.0777 –22.35 0.8183

The common algos would pick nine to ten values – resulting in an extremely biased λz.² This demonstrates how important visual inspection of the fit is (aka don’t naïvely trust automatic methods). The algo based on the breakpoint would pick four values, which is also the ‘best’ in terms of bias (–5.04%). By eye-balling I would likely choose the last three (bias –22.35%). :-(
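For those who want to reproduce the breakpoint: a base-R sketch (a profiling approach, not necessarily how the nls fit above was run; only the stick-regression, not MARS) on the noisy data above, subset [TTT, tz] = [2, 24]:

```r
# Two-segment ('stick') regression on log(C) with a continuous join:
# profile the residual sum of squares over the breakpoint delta.
t <- c(2, 2.5, 3, 4, 6, 9, 12, 16, 24)
C <- c(45.99, 40.24, 34.87, 27.19, 17.59, 11.83, 7.39, 4.08, 2.75)
rss   <- function(d) sum(resid(lm(log(C) ~ t + pmax(t - d, 0)))^2)
ds    <- seq(2.5, 16, by = 0.01)         # profile grid for the breakpoint
d.hat <- ds[which.min(sapply(ds, rss))]
d.hat                    # breakpoint estimate, in the vicinity of 7.73
start <- min(t[t >= d.hat])
start                    # first sampling time at/after the breakpoint: 9
```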

[image]


  1. Scheerans C, Derendorf H, Kloft C. Proposal for a Standardised Identification of the Mono-Exponential Terminal Phase for Orally Administered Drugs. Biopharm Drug Dispos. 2008;29(3):145–57. doi 10.1002/bdd.596
  2. Phoenix by default and bear restrict the first value to the one following tmax. Classical WinNonlin’s algo is even more greedy and would include even tmax (!): R²adj 0.9431, λz 0.1408, RE +40.80%… See also this post.

P.S.: I would not use function lee() in package PK – it already smells too much of modeling to me (sorry, Martin; see here). All methods detect two phases if the starting point is set to TTT, but…
lee(conc[6:14], time[6:14], method="ols"): last 3, λz 0.0777 (–22.35%)
lee(conc[6:14], time[6:14], method="lad"): last 4, λz 0.0824 (–17.62%)
lee(conc[6:14], time[6:14], method="hub"): last 3, λz 0.0777 (–22.35%)
lee(conc[6:14], time[6:14], method="npr"): last 4, λz 0.0973 (–2.73%)
Maybe a way forward would be to extract only the breakpoint?

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-22 15:45
(4298 d 08:44 ago)

@ Helmut
Posting: # 11030
Views: 21,602
 

 OT: beyond TTT?

Dear Helmut,

Thanks for your great demonstrations. So I misunderstood Detlew's messages but not yours. I just wondered why you did not respond at that moment. As illustrated, the TTT rule may cause extreme bias in λz estimation when the study drug exhibits more than a one-compartment PK model, i.e., when it has more than one exponential term mathematically. Should I consider removing TTT from bear for now, since it is a model-dependent approach? We can add it back again once we have a clearer picture of it.

All the best,
-- Yung-jin Lee
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-22 16:21
(4298 d 08:09 ago)

@ yjlee168
Posting: # 11031
Views: 21,737
 

 OT: keep TTT!

Dear Yung-jin,

❝ […] I just wondered why you did not respond at that moment.


I was giving a workshop and had limited spare time…

❝ As illustrated, the TTT rule may cause extreme bias in λz estimation when the study drug exhibits more than a one-compartment PK model, i.e., when it has more than one exponential term mathematically.


Yes. But still slightly better than R²adj alone – the only one available in commercial software. ;-)

❝ Should I consider to remove TTT from bear first since it is a model-dependent approach?


Please no! Contrary to R²adj, TTT is based on PK grounds. I would not call it model-dependent. OK, it is based on a one-compartment model, but the estimation of λz is within the boundaries of NCA, IMHO. For multiple distribution/elimination phases it is the responsibility of the user to select the last log-linear phase visually.

❝ And then we can add it back again when we have clear picture about it.


I think that you should keep it. If we find a reasonable automatic method for >1 compartment you can add it.

Anyhow, I’m not overly optimistic yet – especially if at least one of the following conditions applies:
  • Noisy data.
  • Limited samples, especially in the area of the ‘breakpoint’.
  • No pronounced difference in rate constants (rule of thumb: ratios ≤5 are difficult to detect).
  • Large differences in volumes of distribution. Might be only of theoretical interest because one of the phases will not be apparent unless the analytical method is very sensitive.

I started a new thread over there.

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-23 01:42
(4297 d 22:47 ago)

@ Helmut
Posting: # 11035
Views: 21,712
 

 OT: keep TTT!

Dear Helmut,

❝ [...]

❝ Please no! Contrary to R²adj TTT is based on PK-grounds. I would not call it model-dependent. OK, it is based on a one-compartment model but the estimation of λz is within the boundaries of NCA, IMHO. For multiple distribution / elimination phases it is the responsibility of the user to select the last loglinear phase visually.


OK. I plotted your simulated data on a semi-log scale (log(C) vs. time) to see whether the 'breakpoint' (occurring at 9) is easy to spot visually. It is not. I don't know whether you added too much noise. If we can spot the 'breakpoint' visually, we could probably turn the current TTT algo into a semi-automatic one: pick the last 3-4 data points automatically, then show plots and let the user add more points until the 'breakpoint' is reached. I guess this will only work if the user knows where the 'breakpoint' is. One more question: what if the 'breakpoint' is just not one of the scheduled sampling times?

❝ Anyhow, I’m not overly optimistic yet – especially if at least one of the following conditions applies: Noisy data. [...]



Agree.

All the best,
-- Yung-jin Lee
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2013-07-23 03:06
(4297 d 21:23 ago)

@ yjlee168
Posting: # 11036
Views: 21,641
 

 OT: EOD (here)

Dear Yung-jin,

❝ OK. I plotted your simulated data on a semi-log scale (log(C) vs. time) to see whether the 'breakpoint' (occurring at 9) is easy to spot visually. It is not.


No? Profiles like that one are not uncommon. I think it should not be too difficult to ‘eyeball’ the breakpoint somewhere between 6 and 12.

❝ I don't know whether you added too much noise.


That was just a little one…

❝ If we can spot the 'breakpoint' visually, we could probably turn the current TTT algo into a semi-automatic one: pick the last 3-4 data points automatically, then show plots and let the user add more points until the 'breakpoint' is reached.


You shouldn’t call it TTT any more.

❝ I guess this will only work if the user knows where the 'breakpoint' is.


In the real world this will rarely (if ever) happen.

❝ One more question: what if the 'breakpoint' is just not one of scheduled sampling times?


Same procedure as in TTT: Select the next one in the sampling schedule.* In my example the BP is 7.73 → start estimation at 9.

❝ ❝ Anyhow, I’m not overly optimistic yet – especially if at least one of the following conditions applies: Noisy data. [...]


❝ Agree.


I’m not sure whether noise will be our biggest problem. I would suggest stopping here (we are hijacking John’s thread) and continuing at the new one.


  • From the paper: “If no sampling time point is available for the twofold of the observed tmax value due to the sampling design, the next later time point […] should be used.”

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2013-07-18 12:27
(4302 d 12:02 ago)

@ ElMaestro
Posting: # 11017
Views: 22,254
 

 Package-free solution

Dear ElMaestro,

Brilliant! :clap: It works great, too. I really like it.

❝ And here's for those who prefer a minimum of typing:

❝ [...]


All the best,
-- Yung-jin Lee
jag009
★★★

NJ,
2013-07-18 18:37
(4302 d 05:52 ago)

@ yjlee168
Posting: # 11021
Views: 22,056
 

 Thank you guys!

Thank you guys!

I will give it a try.

Now the stupid question... Is it possible to derive a solution by hand? You know, pen and paper? Don't shoot me, just curious...

John
ElMaestro
★★★

Denmark,
2013-07-18 20:57
(4302 d 03:32 ago)

@ jag009
Posting: # 11022
Views: 21,997
 

 Thank you guys!

Hi John,

Pen and paper?? Come on, if you are interested in archaeology and savage ancient cultures, go visit a museum.
As Helmut indicated above, I don't think there is an analytical solution to be derived for K01. The expression for C in t is an evil effer.

jag009
★★★

NJ,
2013-07-19 22:40
(4301 d 01:49 ago)

@ ElMaestro
Posting: # 11026
Views: 21,914
 

 Thank you guys!

Hi ElMaestro!

❝ Pen and paper?? Come on, if you are interested in archaeology and savage ancient cultures, go visit a museum.


Just a challenge for myself :-D I wanna see how skillful (or unskillful) I am at solving mathematical equations... No, I don't get paid for this!

John