martin
Austria,
2012-01-11 22:02
Posting: # 7904

 endogenous compounds [PK / PD]

Dear all!

Having thought for a long time about handling endogenous compounds, I would like to share my approach to modelling such data, assuming a two-compartment model following an IV bolus. I think the model below has an advantage over the frequently used turnover model based on differential equations (e.g. Gabrielsson and Weiner, 2000), as it also takes the measured pre-dose concentration(s) into account in a straightforward way.

I would be grateful if you could have a look at the code and share your opinion on this PK model.

best regards

martin

Gabrielsson J and Weiner D. Pharmacokinetic and Pharmacodynamic Data Analysis: Concepts and Applications. 4th ed. Stockholm: Swedish Pharmaceutical Press; 2000.

#### modified data from Gabrielsson and Weiner (2000, page 743)
#### endogenous concentration is assumed to be constant over time 
dose <- 36630 
time <- c(-1, 0.167E-01, 0.1167, 0.1670, 0.25, 0.583, 0.8330, 1.083, 1.583, 2.083, 4.083, 8.083, 12, 23.5, 24.25, 26.75, 32)
conc <- c(20.34, 3683, 884.7, 481.1, 215.6, 114, 95.8, 87.89, 60.19, 60.17, 34.89, 20.99, 20.54, 19.28, 18.18, 19.39, 22.72)
data <- data.frame(conc,time)

## indicator variables enabling inclusion of the pre-dose concentration in the fit
data$i1 <- ifelse(data$time < 0, 1, 0)
data$i2 <- ifelse(data$time < 0, 0, 1)

## assuming constant absolute error: ordinary least squares
mod.ols <- nls(conc ~ i1*base + i2*(base + a1*exp(-k1*time) + a2*exp(-k2*time)),
               start=c(base=20.34, a1=4500, k1=15, a2=125, k2=0.6), data=data, trace=TRUE)
 
## assuming constant relative error: weighted least squares 
mod.wls <- nls(conc ~ i1*base + i2*(base + a1*exp(-k1*time) + a2*exp(-k2*time)),
               start=c(base=20.34, a1=4500, k1=15, a2=125, k2=0.6), data=data,
               weights=1/predict(mod.ols)^2, trace=TRUE)

## assuming constant relative error: iteratively re-weighted least squares
mod.irwls <- mod.wls
for(i in 1:10){
   mod.irwls <- nls(conc ~ i1*base + i2*(base + a1*exp(-k1*time) + a2*exp(-k2*time)),
                     start=c(base=20.34, a1=4500, k1=15, a2=125, k2=0.6),
                     data=data, weights=1/predict(mod.irwls)^2)
   print(as.vector(coef(mod.irwls)))
}
 
summary(mod.ols)
summary(mod.wls)
summary(mod.irwls)

newdata <- data.frame(time=seq(0,32,0.01))
newdata$i1 <- ifelse(newdata$time < 0, 1, 0)
newdata$i2 <- ifelse(newdata$time < 0, 0, 1)
plot(conc ~ time, data=data, ylim=c(10,1E4), log='y', yaxt='n',
     xlab='Time (hours)', ylab='Log of concentration (pmol/L)')
axis(side=2, at=c(10, 100, 1000, 10000), las=1)
axis(side=2, at=seq(1E1,1E2,1E1), tcl=-0.25, labels=FALSE)
axis(side=2, at=seq(1E2,1E3,1E2), tcl=-0.25, labels=FALSE)
axis(side=2, at=seq(1E3,1E4,1E3), tcl=-0.25, labels=FALSE)
points(x=newdata$time, y=predict(mod.irwls, newdata), type='l')
Helmut
Vienna, Austria,
2012-01-13 03:06
@ martin
Posting: # 7920

 To diff or not to diff, that’s the question

Dear Martin!

❝ Having thought for a long time about handling endogenous compounds, I would like to share my approach to modelling such data, assuming a two-compartment model following an IV bolus. I think the model below has an advantage over the frequently used turnover model based on differential equations (e.g. Gabrielsson and Weiner, 2000)…


Since you don’t need to approximate ∂SSQ/∂Pi by ΔSSQ/ΔPi. Good luck in R. ;-)

❝ […] as it also takes the measured pre-dose concentration(s) into account in a straightforward way.


Right.

❝ I would be grateful if you could have a look at the code and share your opinion on this PK model.


The model is OK and very intuitive. I love IRWLS. How long did it take you to get working initial estimates? But: AIC(mod.ols, mod.wls, mod.irwls) :lookaround:

… and for the ambitious eyeball-modeler:
Rmax <- max(abs(c(resid(mod.ols), resid(mod.wls), resid(mod.irwls))))
Rmin <- -Rmax
opar <- par(mfrow = c(2, 2), oma = c(1, 0, 0.65, 0))
  plot(time, resid(mod.ols), ylim=c(Rmin, Rmax), cex=1.5, main="OLS")
    abline(h=0, lty=3)
  plot(time, resid(mod.wls), ylim=c(Rmin, Rmax), cex=1.5, main="WLS")
    abline(h=0, lty=3)
  plot(time, resid(mod.irwls), ylim=c(Rmin, Rmax), cex=1.5, main="IRWLS")
    abline(h=0, lty=3)
  title("model residuals", outer = TRUE)
par(opar)


Surprise! Compare the shapes to Fig. 31.2! It’s nice to get the basal level directly from the model, but the coefficients are f**g complicated hybrids of volumes of distribution and rate constants. Personally I don’t care whether a model is purely empiric or has a more ‘physiological touch’ like Johan’s. Be prepared to run into arguments with clinical pharmacologists – you remember the record-setting 3-months-thread about the ‘correct’ parameterization at David’s PKPD-list…

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
martin
Austria,
2012-01-13 10:19
@ Helmut
Posting: # 7923

 To diff or not to diff, that’s the question

Dear HS!

Thank you very much for your feedback. Yes, fitting a system of differential equations is a burden in R (and in some other software as well :-(). I used the very nice function genoud from the R package rgenoud to obtain initial estimates, which of course took some time.
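A minimal sketch of such a genoud call (the objective function and the parameter bounds below are illustrative assumptions, not the settings I actually used):

```r
## sketch: global search for nls starting values with rgenoud::genoud()
## (objective and Domains are illustrative assumptions)
library(rgenoud)
ssq <- function(p) {                  # ordinary least-squares objective
  pred <- p[1] + data$i2*(p[2]*exp(-p[3]*data$time) + p[4]*exp(-p[5]*data$time))
  sum((data$conc - pred)^2)
}
gen <- genoud(ssq, nvars=5, print.level=0,
              Domains=cbind(rep(0, 5), c(100, 1e4, 50, 1e3, 5)),
              boundary.enforcement=2) # keep candidates strictly inside the bounds
start <- setNames(gen$par, c("base", "a1", "k1", "a2", "k2"))
```

The result can then be passed directly as the start argument of nls().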

Regarding the AIC: according to Gabrielsson and Weiner (page 460), the AIC or SC (Schwarz criterion) cannot be used to compare the models, as different weighting schemes are used and the objective functions are therefore not on a common scale.

OMG; yes, I remember the thread about the correct parameterization, but I think an estimate of the basal level obtained by the above model is easier for non-experts to understand than the turnover rate estimated via Johan's model (personal opinion/experience).

best regards & thank you very much

martin
Helmut
Vienna, Austria,
2012-01-13 16:20
@ martin
Posting: # 7926


 To diff or not to diff, that’s the question

Dear Martin!

❝ Regarding the AIC: according to Gabrielsson and Weiner (page 460) the AIC or SC (Schwarz criteria) cannot be used to compare the models as different weighting schemes are used.


Shame on me. Pointed out by John Wagner already in the 1970s… Actually I started with the residual plots and sneaked a look at the AICs later. Will never do it again. Promise.

❝ […] I think an estimate of the basal level obtained by the above model is easier for non-experts to understand than the turnover rate estimated via Johan's model (personal opinion/experience).


Agree about the basal levels. I just wanted to warn you about the nasty questions that arise concerning volumes of distribution and clearances, like “Whaaat did you say a1 and a2 are?” If you are not comfortable with Laplace transforms, you can go with a symbolic maths package (my favourite: Maxima; moderate learning curve). If you obtain an explicit solution of the system of differential equations, you can fit the hybrid constants and back-transform them later on.
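The back-transformation itself needs no symbolic maths: for the two-compartment IV-bolus model the textbook relations between the hybrid constants of C(t) = a1·exp(−k1·t) + a2·exp(−k2·t) (here: the concentration above the constant baseline) and the micro-constants can be sketched as follows (the numerical values are the estimates from this thread):

```r
## back-transform hybrid constants into two-compartment micro-constants
## (standard textbook relations for an IV bolus, exogenous part only)
dose <- 36630
a1 <- 4606.53; k1 <- 15.505; a2 <- 118.688; k2 <- 0.54126
vc  <- dose/(a1 + a2)                # central volume of distribution
k21 <- (a1*k2 + a2*k1)/(a1 + a2)     # peripheral -> central rate constant
k10 <- k1*k2/k21                     # elimination from the central compartment
k12 <- k1 + k2 - k21 - k10           # central -> peripheral rate constant
print(c(vc=vc, k21=k21, k10=k10, k12=k12))
```

Standard errors of the micro-constants would still require the delta method or a refit in that parameterization.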

martin
Austria,
2012-01-13 16:55
@ Helmut
Posting: # 7927


 nasty questions

Dear HS!

Thank you very much for your valuable advice and for bringing Maxima to my attention. I will definitely have a close look at it. What do you think about the following quick workaround to get estimates (unfortunately without SEs) for clearance and the volumes of distribution?

dose <- 36630
parm <- c(a1=4606.5294570, k1=15.5050053, a2=118.6882298, k2=0.5412553)
fn1 <- function(x){parm[1]*exp(-parm[2]*x) + parm[3]*exp(-parm[4]*x)}
fn2 <- function(x){x*(parm[1]*exp(-parm[2]*x) + parm[3]*exp(-parm[4]*x))}
aucinf  <- integrate(fn1, lower=0, upper=Inf)$value
aumcinf <- integrate(fn2, lower=0, upper=Inf)$value
cls <- dose / aucinf
mrt <- aumcinf / aucinf
vss <- mrt * cls
vc  <- as.numeric(dose/(parm[1] + parm[3]))  # as.numeric() strips the names
vz  <- as.numeric(cls / parm[4])
print(c(aucinf=aucinf, cls=cls, mrt=mrt, vss=vss, vc=vc, vz=vz))


best regards

martin
Helmut
Vienna, Austria,
2012-01-13 17:09
@ martin
Posting: # 7929


 quick answers

Dear Martin!

❝ What do you think about the following fast workaround to get estimates (unfortunately without SEs) for clearance and volumes of distributions.


Sure!

Quick and dirty,
or clean and never.
(coders’ proverb)



P.S.: Still in love with IRWLS? Should try it with NLYW!


Dear Tech Support:

Last year I upgraded from Girlfriend 7.0 to Wife 1.0. I soon noticed that the new program began unexpected child processing that took up a lot of space and valuable resources. No mention of this was included with the product information.

In addition, Wife 1.0 installed itself into all other programs and now launches during system initialization, where it monitors all other system activity. Applications such as Poker Night 10.3, Football 5.0, Fishing 7.5, and Motorcycling 5.0 no longer run, crashing the system whenever selected.

I can’t seem to keep Wife 1.0 in the background while attempting to run my favorite applications. I’m thinking about going back to Girlfriend 7.0, but the uninstall doesn’t work on Wife 1.0.

Please help!!!!!!

THE REPLY:

Dear Troubled User:
This is due to a very common problem that generates many complaints.

It is due to a primary misconception generally by male users. Many people upgrade from Girlfriend 7.0 to Wife 1.0 thinking that it is merely a Utilities and Entertainment program. Wife 1.0 is an OPERATING SYSTEM and is designed by its creator to run everything!

It is also impossible to delete Wife 1.0 and return to Girlfriend 7.0. Hidden operating systems files cause Girlfriend 7.0 to emulate Wife 1.0, so nothing is gained. It is impossible to uninstall, delete, or purge the program files from the system once installed.

You cannot go back to Girlfriend 7.0 because Wife 1.0 is designed to disallow this. Some have tried Girlfriend 8.0 or Wife 2.0 but end up with more problems than in the original system. Look in your Wife 1.0 manual under Warnings - Alimony/Child Support. I recommend that you keep Wife 1.0 and work on improving the situation. I suggest installing the background application C:\YES DEAR to alleviate software augmentation.

Having installed Wife 1.0 myself, I also suggest that you read the entire section regarding General Partnership Faults (GPFs). You must assume all responsibility for any faults and problems that occur, regardless of their cause. You will also find that GPFs are cyclical. The best course of action is to enter the command C:\APOLOGIZE.

Avoid excessive use of C:\YES DEAR because ultimately you will have to give the APOLOGIZE command before the system will return to normal. Anyway, Wife 1.0 is a great program, but it tends to be very high maintenance.

Wife 1.0 comes with several support programs, such as Clean and Sweep 3.0, Cook It 1.5 (which replaces Burn It 1.0), Trash 4.0, and Do Bills 4.2. You must, however, be very careful how you use these programs. Improper use will cause the system to launch the program NagNag 9.5. Once this happens, the only way to improve the performance of Wife 1.0 is to purchase additional software. I recommend Flowers 2.1 and Diamonds 5.0 should this happen.

WARNING!!!!! DO NOT, under any circumstances, install SecretaryShortSkirt 3.3. This application is not supported by Wife 1.0 and will cause irreversible damage to the operating system.

Tech Support

