yjlee168 (Kaohsiung, Taiwan), 2008-09-17 15:33
(edited by yjlee168 on 2008-09-17 19:32)
Posting: # 2372

update: bear v1.1.4 for R (a free tool for ABE) [🇷 for BE/BA]

Dear all,

It's been a while since we released bear v1.0.0 on this forum last July. Since then we have received lots of comments from bear's users, including some very kind experts on this forum. Though I tried to post updates to bear's change log since v1.0.0, I had trouble logging in to this forum from time to time. We also went to Dortmund, Germany, for the annual R users' meeting (useR! 2008) from August 9 to 18; it was a very nice meeting. The following are the change log and to-do list since v1.0.0.

Change Log (Sept. 17, 2008)
-----
v1.1.4
  • fixed compatibility with iMac and Linux (thanks to Koji Shimamoto, Tokyo, Japan, also for testing bear on an iMac machine)
v1.1.3-1.1.0
  • removed the function "Sample size estimation (raw data)"; also improved the function "Sample size estimation (Log transform)."
  • fixed the rounding error in the display of "Sample size estimation (Log transform)" (thanks to Helmut)
  • fixed some formats (comma, semicolon, etc.) of the imported data file (thanks to Helmut); users can now choose their preferred format.
  • display the PATH from which bear will import data and to which it will write all results.
  • display only one graphics device (use PageDown & PageUp to change plots) (thanks to Helmut)
  • display semilog (not linear) plots when choosing data points for the linear regression of lambda_z (kel) in NCA (thanks to Helmut)
  • calculate CV_intra & CV_inter now (thanks to Helmut)
  • output the NCA results in both .csv and .RData formats; users can choose either one for ANOVA.
  • use "Tests of SUBJECT(SEQUENCE) as an error term" in the ANOVA output (thanks to Helmut)
  • changed ANOVA (GLM) to ANOVA (lm) in the menu title (thanks to ElMaestro)
-----

Todo Lists (Sept. 17, 2008)
-----
  1. calculate intra-subject residuals (a little bit tough...) (suggested by Helmut)
  2. implement a better algorithm for lambda_z calculation as additional options (suggested by martin and Ace; especially Ace, who provided us his R code for this)
  3. implement "lme" for ANOVA as an additional method to the current "lm" (suggested by ElMaestro)
  4. implement a "Save" function for the data-point selection of each subject before doing NCA (using data.frame()?)
  5. add Type III SS (this should be able to be done quickly...)
  6. add methods for replicate studies... (this may take longer).
-----

All the best,
-- Yung-jin Lee
bear v2.9.1: created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
Helmut (Vienna, Austria), 2008-09-17 18:59
@ yjlee168
Posting: # 2374

update: bear v1.1.4 for R (a free tool for ABE)

Dear Yung-jin,

thanks again for sharing your package with the R community!

A suggestion for the next version: in the sample size demo I would suggest an expected ratio of 95% rather than 105% due to the asymmetry of the power curves (in actual calculations it's always better to use an expected difference of -5% rather than +5% in order to obtain a conservative sample size for +5%).
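The asymmetry is easy to see in a few lines of base R (assuming the conventional 0.80–1.25 acceptance range): on the log scale 0.95 lies farther from unity, and closer to the 0.80 limit, than 1.05 lies to 1.25, so power is lower at 0.95 and the resulting sample size is the conservative one.

```r
# on the multiplicative (log) scale 0.95 is the worse case:
abs(log(0.95))        # 0.0513: 0.95 is farther from unity ...
abs(log(1.05))        # 0.0488: ... than 1.05 is
# distance to the nearer acceptance limit of 0.80-1.25:
log(0.95 / 0.80)      # 0.1719 -> less room, less power, larger n
log(1.25 / 1.05)      # 0.1744
```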

Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
yjlee168 (Kaohsiung, Taiwan), 2008-09-17 23:03
(edited by yjlee168 on 2008-09-18 06:07)
@ Helmut
Posting: # 2375

update: bear v1.1.4 for R (a free tool for ABE)

Dear Helmut,

Thank you for your suggestion. I recall that the reason we used 105% in the demo of sample size estimation was based on Dr. David Dubins' FARTSSIE (.xls); please see the attached screenshot taken from it.
We will do as you suggest in the next version.

❝ A suggestion for the next version: in the sample size demo I would suggest an expected ratio of 95% rather than 105% due to the asymmetry of the power curves (in actual calculations it's always better to use an expected difference of -5% rather than +5% in order to obtain a conservative sample size for +5%).


[image: screenshot from FARTSSIE]

All the best,
-- Yung-jin Lee
martin (Austria), 2008-09-19 15:13
@ yjlee168
Posting: # 2377

update: bear v1.1.4 for R (a free tool for ABE)

dear Yung-jin Lee !

thank you very much for your acknowledgment regarding estimation of lambda_z !

you may think about using the function lee in the R package PK for the estimation of lambda_z. you may adapt the lee approach to estimate lambda_z for extravascularly administered drugs. I think that calling the function lee with a subset of the data (e.g. the period from Tmax to Tlast) should provide estimates for lambda_z.

what do you think?

best regards

martin

PS.: note that we used log10-transformed values for the estimation of the initial and terminal half-life after IV bolus administration. we used the log10 transformation in accordance with the original paper of Lee et al. (1990). in case you are interested in lambda_z for the estimation of AUC from 0 to infinity (extrapolated area) you have to use the ln transformation, otherwise your AUC from 0 to infinity is biased.
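The point about the two logarithms can be checked in a few lines (illustrative numbers, not from any real data set): the log10 and ln regression slopes differ by a factor of log(10) ≈ 2.303, so lambda_z (and hence the extrapolated area Clast/lambda_z) must come from the ln slope, while the half-life is the same under either transformation when computed consistently.

```r
t    <- c(4, 6, 8, 12)                # illustrative sampling times [h]
conc <- 10 * exp(-0.2 * t)            # noise-free mono-exponential decline
b10  <- coef(lm(log10(conc) ~ t))[2]  # slope on the log10 scale
bln  <- coef(lm(log(conc) ~ t))[2]    # slope on the ln scale (= -lambda_z)
unname(bln / b10)                     # log(10), about 2.3026
log10(2) / -b10                       # half-life from the log10 fit ...
log(2) / -bln                         # ... equals the one from the ln fit
```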
yjlee168 (Kaohsiung, Taiwan), 2008-09-22 20:19
@ martin
Posting: # 2383

update: bear v1.1.4 for R (a free tool for ABE)

Dear martin,

Thank you; we will see if we can implement the lee function in bear to calculate lambda_z. It looks very promising. And I will remember the difference between log10 and ln. Thanks again.

All the best,
-- Yung-jin Lee
Helmut (Vienna, Austria), 2008-09-23 02:46
@ martin
Posting: # 2385

update: bear v1.1.4 for R (a free tool for ABE)

Dear Martin!

❝ [...] I think by calling the function lee with a subset of the data (e.g. the period from Tmax to Tlast) should provide estimates for lambda_z.

❝ what do you think?

I don't think that Tmax is the right starting point - not even for a 1-compartment model (see this post and this thread as well).

Helmut Schütz
martin (Austria), 2008-09-27 11:37
(edited by martin on 2008-09-27 13:28)
@ Helmut
Posting: # 2426

update: bear v1.1.4 for R (a free tool for ABE)

dear HS !

on the assumption of a one-compartment open model (i.e. one-compartment absorption and one-compartment elimination) I think that the lee method should be able to distinguish between the "mixture phase" (i.e. absorption plus elimination) after Tmax and the pure elimination phase. you are right that Tmax is not a good starting point in the case of higher-order models after extravascular administration.

I have to admit that I never performed simulations for the lee method in combination with extravascular compartmental models (I studied models after intravenous bolus administration). however, this discussion motivated me to set up a few simulation scenarios and check the behaviour of the lee method for estimating the terminal elimination rate in such cases.

best regards

martin

PS.: I would be very happy if you could provide me with some real-life examples of one-compartment open models (just parameter estimates, no source data) so I can simulate log-normally distributed concentrations based on these models and check the performance of the lee method in selecting the elimination phase.
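Such a simulation might look like the following minimal sketch (all parameter values here are hypothetical placeholders): a one-compartment extravascular model with first-order absorption and multiplicative, i.e. log-normal, error.

```r
set.seed(42)
ka <- 1.5; ke <- 0.2; V <- 30; Dose <- 100     # hypothetical estimates
t  <- c(0.5, 1, 2, 4, 6, 8, 12, 24)            # sampling times [h]
# one-compartment extravascular model with first-order absorption
mu   <- Dose * ka / (V * (ka - ke)) * (exp(-ke * t) - exp(-ka * t))
conc <- mu * exp(rnorm(length(t), sd = 0.1))   # log-normal concentrations
round(conc, 3)
```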
Helmut (Vienna, Austria), 2008-09-27 16:05
@ martin
Posting: # 2427

Package lee for R

Dear Martin!

Answering your P.S. only (I'm leaving for a couple of conferences tomorrow): I would be glad to share data sets (actually I have hundreds of them). I will give you a call after the 9th of October.

Helmut Schütz
yjlee168 (Kaohsiung, Taiwan), 2008-09-23 12:07
@ martin
Posting: # 2386

update: bear v1.1.4 for R (a free tool for ABE)

Dear martin,

The lee function from package PK (for R) does not seem to work correctly. We tried several examples here and found the results weird. The code we used and the result we obtained (one of the tested examples) are shown below.
> ##1 example for preparation 1 from Lee et al. (1990)
> time <- c(0.5, 1.0, 4.0, 8.0, 12.0, 24.0)  # truncated dp before Tmax
> conc <- c(75, 72, 61, 54, 36, 6)           # truncated dp before Tmax
> result1 <- lee(conc=conc, time=time, method='ols', points=2, lt=TRUE)
> print(result1$parms)
              initial    terminal
halflife   6.59747708  6.59747708
slope     -0.04562805 -0.04562805
intercept  1.97385991  1.97385991

And its plot of the regression line is like this:
[image: plot of the regression line]
Interestingly, the lee function seems to take all data points to estimate lambda_z even with the subset data. It looks like the lee function cannot pick appropriate data points to estimate lambda_z (or the half-lives). Apparently, the lee function doesn't meet our needs.

All the best,
-- Yung-jin Lee
martin (Austria), 2008-09-27 11:25
(edited by martin on 2008-09-27 17:57)
@ yjlee168
Posting: # 2425

update: bear v1.1.4 for R (a free tool for ABE)

dear Yung-jin Lee !

thank you for considering the function lee. this method was originally developed for IV bolus administration models. please note that the lee method is able to distinguish between a distributive and an elimination phase (after IV bolus administration) based on an optimisation criterion when there are in fact two transport processes. for the example of Lee et al. (1990) and using the ols criterion, the best fit to the data is simply a straight line, indicating that the transport process over time can be characterized by an elimination process only.

when there are a distributive and an elimination process, the lee method is able to select the phases appropriately. for testing the function I suggest also using data that have a distributive and an elimination process, which is not the case for the example provided. I suppose that your test data cover only one-compartment models; in such cases the lee method returns, of course, only a monophasic decline.

using this dataset for an IV bolus administration (R package datasets) illustrates the lee method in more detail:
Indo.1 <- Indometh[Indometh$Subject==1,]
print(Indo.1)

   Subject time conc
1        1 0.25 1.50
2        1 0.50 0.94
3        1 0.75 0.78
4        1 1.00 0.48
5        1 1.25 0.37
6        1 2.00 0.19
7        1 3.00 0.12
8        1 4.00 0.11
9        1 5.00 0.08
10       1 6.00 0.07
11       1 8.00 0.05

res <- lee(conc=Indo.1$conc, time=Indo.1$time, method='lad', points=2, lt=TRUE)
print(res)

$parms
             initial    terminal
halflife   0.4952051  3.06712387
slope     -0.6078895 -0.09814732
intercept  0.3280636 -0.56601802

$chgpt
[1] 1.753988

these results indicate that the decline over time can be described by a distributive and an elimination phase, where the "best" time points for calculating lambda_z are those greater than 1.754 hours up to the last sampling time point. for technical details you may find this paper of interest: Wolfsegger M. J. (2006). The R Package PK for Basic Pharmacokinetics. Biometrie und Medizin, 5:61-68.

summarizing, the lee method works correctly; you just have to interpret the results carefully.

best regards

martin

PS.: I found a dataset for a one-compartment open model (R package datasets). please find below my idea for selecting appropriate time points for the calculation of lambda_z when a drug is given by an extravascular route of administration:
> data <- Theoph[Theoph$Subject==5,]
> print(data)
   Subject   Wt Dose  Time  conc
45       5 54.6 5.86  0.00  0.00
46       5 54.6 5.86  0.30  2.02
47       5 54.6 5.86  0.52  5.63
48       5 54.6 5.86  1.00 11.40
49       5 54.6 5.86  2.02  9.33
50       5 54.6 5.86  3.50  8.74
51       5 54.6 5.86  5.02  7.56
52       5 54.6 5.86  7.02  7.09
53       5 54.6 5.86  9.10  5.90
54       5 54.6 5.86 12.00  4.37
55       5 54.6 5.86 24.35  1.57
> Tmax <- max(subset(data, conc==max(conc), select='Time'))
> data <- subset(data, Time>Tmax)
> print(data)
   Subject   Wt Dose  Time conc
49       5 54.6 5.86  2.02 9.33
50       5 54.6 5.86  3.50 8.74
51       5 54.6 5.86  5.02 7.56
52       5 54.6 5.86  7.02 7.09
53       5 54.6 5.86  9.10 5.90
54       5 54.6 5.86 12.00 4.37
55       5 54.6 5.86 24.35 1.57
> res <- lee(conc=data$conc, time=data$Time, method='ols', points=2, lt=FALSE)
> print(res)
$parms
              initial    terminal
halflife  15.70395059  8.24935257
slope     -0.01916906 -0.03649135
intercept  1.00860315  1.08685775

$chgpt
[1] 4.517567

these results indicate that the "best" time points for calculating lambda_z are those greater than 4.518 hours up to the last sampling time point. note that you have to specify the option lt=FALSE, as after extravascular administration both absorption and elimination processes are present after Tmax. you get the corresponding plot with plot(res, log='y'). you also have the choice of more robust estimation methods via the option method.
yjlee168 (Kaohsiung, Taiwan), 2008-09-29 14:03
@ martin
Posting: # 2430

update: bear v1.1.4 for R (a free tool for ABE)

Dear martin,

We cannot access the article you cited below from Taiwan (it is not in PubMed). Is the article in English? If not, in what language was it published?

Thanks.

❝ Wolfsegger M. J. (2006). The R Package PK for Basic Pharmacokinetics. Biometrie und Medizin, 5:61-68.


All the best,
-- Yung-jin Lee
Helmut (Vienna, Austria), 2008-09-29 14:10
@ yjlee168
Posting: # 2431

update: bear v1.1.4 for R (a free tool for ABE)

Dear Yung-jin,

here's a direct link to Martin's article (PDF in English).

Helmut Schütz
yjlee168 (Kaohsiung, Taiwan), 2008-09-29 15:07
@ Helmut
Posting: # 2435

update: bear v1.1.4 for R (a free tool for ABE)

Dear Helmut,

Thank you so much. I just could not find it in PubMed.

All the best,
-- Yung-jin Lee
martin (Austria), 2008-10-11 17:30
@ yjlee168
Posting: # 2514

modified function for selection of time points

dear yjlee168 !

please find enclosed the wonderful code from Aceto81 :ok: modified to use the lee method for extravascular administration. I added an option to switch between fitting algorithms (lee.method; default 'ols') and an option to choose whether Cmax is included (default FALSE) in the selection of appropriate time points. you may find it useful.

best regards

martin

f <- function(dat, lee.method='ols', lee.cmax=FALSE){
   dat <- dat[order(dat$time),]
   m <- which.max(dat$conc)
   f2 <-  function(m) return(cbind((nrow(dat)-m+1),abs(extractAIC(lm(log(conc)~time,dat[m:nrow(dat),])))[2],summary(lm(log(conc)~time,dat[m:nrow(dat),]))$adj.r.squared))
   overview <- as.data.frame(do.call(rbind,lapply((m+1):(nrow(dat)-2),f2)))
   names(overview) <- c("n","AIC","adjR2")
   n_ARS=0
   r.adj=0
   for (i in (nrow(dat)-2):(which.max(dat$conc)+1)) {
      if (r.adj - summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared <(0.0001)) {
         n_ARS = nrow(dat)-i+1
         r.adj = summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared
         }
   }   

   n_TTT_ARS=0
   r.adj2=0
   for (i in (nrow(dat)-2):(min(seq_along(dat$time)[dat$time>=dat$time[which.max(dat$conc)]*2]))) {
      if (r.adj2 - summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared <(0.0001)) {
         n_TTT_ARS = nrow(dat)-i+1
         r.adj2 = summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared
         }
   }   
   
   # start modification
   require(PK)
   start <- m+1 # default lee.cmax=FALSE: not including cmax for selection
   if(lee.cmax==TRUE){start <- m}  # including cmax for selection
   leedat <- dat[c(start:nrow(dat)),] # select data
   l <- lee(conc=leedat$conc, time=leedat$time, method=lee.method, lt=FALSE)
   n_lee <- sum(dat$time> l$chgpt)
   if(is.na(n_lee)){n_lee <- nrow(leedat)}
   # end modification

   n_TTT <- sum(dat$time> (dat$time[which.max(dat$conc)]*2))
   n_AIC <- overview$n[which.min(abs(overview$AIC))]
   plot(l,log="y")
   print(overview)
   cat("\n")
   return(data.frame(TTT=n_TTT, AIC=n_AIC, ARS=n_ARS,TTT_ARS=n_TTT_ARS,lee=n_lee))
}
> b<-c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24)
> c<-c(0,36.1,125,567,963,1343,1739,1604,1460,797,383,72)
> dat <- data.frame(time=b,conc=c)

> f(dat, lee.cmax=FALSE)
  n      AIC     adjR2
1 5 25.53606 0.9972111
2 4 18.82088 0.9960696
3 3 12.91243 0.9929628
  TTT AIC ARS TTT_ARS lee
1   3   3   5       4   5

> f(dat, lee.cmax=TRUE)
  n      AIC     adjR2
1 5 25.53606 0.9972111
2 4 18.82088 0.9960696
3 3 12.91243 0.9929628
  TTT AIC ARS TTT_ARS lee
1   3   3   5       4   4


yjlee168 (Kaohsiung, Taiwan), 2008-10-11 21:45
@ martin
Posting: # 2517

modified function for selection of time points

Dear martin,

Thank you for your code. Yes, we're working very hard on lambda_z estimation right now. We plan to release the first three methods (manual selection of the exact 3 points, ARS, and TTT) in the next release; TTT-ARS and TTT-AIC are still under development and coding. All these methods exclude the Cmax data point. Recently we also restructured our code to make it easier to integrate in the future. We will try your code/methods asap and will try to implement them. The final results for all these methods will be posted in this forum. We really learn a lot from this forum.

All the best,
-- Yung-jin Lee
martin (Austria), 2008-10-12 02:04
@ yjlee168
Posting: # 2519

last 3 time points

dear Yung-jin Lee !

good to hear that your code will be much more sophisticated than WinNonlin! I appreciate your efforts. you may find it of interest that, in the case of manual selection of the last 3 time points, the middle time point is "useless" for calculating lambda_z with the OLS fitting criterion when the measurement time points are equidistant. in this case the middle data tuple only affects the intercept, not the slope (= lambda_z). you may think about implementing a warning message for this case.

the theoretical background for this unexpected behavior is rather complicated, but I can provide you with some information upon request.

some examples:
t <- c(10, 12, 14) # equidistant time points
c1 <- c(5, 4, 1)
c2 <- rnorm(3)     # random responses work just as well
c3 <- runif(3)

# in each pair below, the slope is identical with and without
# the middle point; only the intercept changes
lm(c1~t)
lm(c1[-2]~t[-2])

lm(c2~t)
lm(c2[-2]~t[-2])

lm(c3~t)
lm(c3[-2]~t[-2])
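A numeric check of the claim: with equidistant time points, the three-point OLS slope equals the two-point slope exactly; only the intercept differs.

```r
t  <- c(10, 12, 14)                        # equidistant time points
c1 <- c(5, 4, 1)
s3 <- unname(coef(lm(c1 ~ t))[2])          # slope from all three points
s2 <- unname(coef(lm(c1[-2] ~ t[-2]))[2])  # slope without the middle point
isTRUE(all.equal(s3, s2))                  # TRUE
```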

yjlee168 (Kaohsiung, Taiwan), 2008-10-16 13:20
@ martin
Posting: # 2541

modified function for selection of time points

dear martin,

Do you think we need to take the absolute value of the AIC (extractAIC) here? :confused: AIC can be negative, and the smallest value is the best, as far as I can remember. I know that this code was written by Ace previously. What do you think?

Thanks.

return(cbind((nrow(dat)-m+1),abs(extractAIC(lm(log(conc)~time,dat[m:nrow(dat),])))[2],summary(lm(log(conc)~time,dat[m:nrow(dat),]))$adj.r.squared))
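A quick check (illustrative data, nothing from the thread) confirms the concern: extractAIC() can return clearly negative values when the residual variance is small, and since smaller AIC is better, ranking by abs(AIC) can invert the ordering.

```r
set.seed(1)
x <- 1:10
y <- exp(-0.3 * x + rnorm(10, sd = 0.01))  # nearly noise-free decline
fit <- lm(log(y) ~ x)
extractAIC(fit)[2]                         # negative: small residual variance
```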


All the best,
-- Yung-jin Lee
Aceto81 (Belgium), 2008-10-16 14:09
@ yjlee168
Posting: # 2542

modified function for selection of time points

❝ Do you think that we need to take the absolute value for AIC (extractAIC) here? :confused: AICs can be negative and the smallest is the best, as far as I can remember. I know that these codes were written by Ace previously. What do you think about this?

return(cbind((nrow(dat)-m+1),abs(extractAIC(lm(log(conc)~time,dat[m:nrow(dat),])))[2],summary(lm(log(conc)~time,dat[m:nrow(dat),]))$adj.r.squared))


Maybe a bit of clarification for the abs() value:
I took the example from a thread mentioned before and got the same AIC values, except that they were negative. That's the reason why I used abs().

Ace
martin (Austria), 2008-10-16 14:29
@ Aceto81
Posting: # 2543

modified code

dear ace and yung-jin lee !

I suggest not using the absolute value. please find below a new modification of Ace's code, including some lines to prevent errors which I encountered in a simulation study (for strange PK profiles).

best regards

# function for selection of time points
tp <- function(dat){

   dat <- dat[order(dat$time),] # modification

   m <- which.max(dat$conc)
   f2 <-  function(m) return(cbind((nrow(dat)-m+1),abs(extractAIC(lm(log(conc)~time,dat[m:nrow(dat),])))[2],summary(lm(log(conc)~time,dat[m:nrow(dat),]))$adj.r.squared))
   overview <- as.data.frame(do.call(rbind,lapply((m+1):(nrow(dat)-2),f2)))
   names(overview) <- c("n","AIC","adjR2")
   n_ARS=0
   r.adj=0
   for (i in (nrow(dat)-2):(which.max(dat$conc)+1)) {
      check <- summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared # modification
      if (!is.na(check) && (r.adj - check <(0.0001))) {
         n_ARS = nrow(dat)-i+1
         r.adj = summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared
      }
   }   

   n_TTT_ARS=0
   r.adj2=0
   for (i in (nrow(dat)-2):(min(seq_along(dat$time)[dat$time>=dat$time[which.max(dat$conc)]*2]))) {
      check <- summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared # modification
      if (!is.na(check) && (r.adj2 - check < (0.0001))) {
         n_TTT_ARS = nrow(dat)-i+1
         r.adj2 = summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared
      }
   }   
   
   # start modification
   require(PK)
   leedat <- dat[c((m+1):nrow(dat)),]
   n_lee <- nrow(leedat)
   if(nrow(leedat) >= 5){
       l <- lee(conc=leedat$conc, time=leedat$time, method='ols', lt=FALSE, points=3)
       n_lee <- sum(dat$time> l$chgpt)
       if(is.na(n_lee)){n_lee <- nrow(leedat)}
   }
   # end modification

   n_TTT <- sum(dat$time> (dat$time[which.max(dat$conc)]*2))
   n_AIC <- overview$n[which.min(overview$AIC)] # modification

   #plot(l,log="y")
   print(overview)
   cat("\n")
   return(c(TTT=n_TTT, last3=3, AIC=n_AIC, ARS=n_ARS,TTT_ARS=n_TTT_ARS,lee=n_lee))
}
Aceto81 (Belgium), 2008-10-16 16:11
@ martin
Posting: # 2545

modified code

Martin,

nice to see that the code is being adapted until everyone is happy :-D.
Correct me if I'm wrong, but don't you still use the abs() function for the AIC values?

What's the reason you get those NA values for the adjusted R² value? Could you provide me with an example?

Ace
martin (Austria), 2008-10-16 17:03
(edited by martin on 2008-10-16 20:20)
@ Aceto81
Posting: # 2546

abs(AIC)

dear ace !

Oops! you are right; thank you for pointing this out: code line 3 has to be adjusted accordingly. I encountered this NA behavior in simulations studying very strange PK profiles; I have to fish for an example. I will keep you informed.

best regards

martin
martin (Austria), 2008-10-16 22:02
@ Aceto81
Posting: # 2547

example

dear ace !

here is a rather strange (artificial) example generated in a simulation study (R version 2.6.2 for Windows XP)

best regards

martin

time <- c(0.08263463, 0.52824754, 1.17358567, 1.39767757, 2.47663194,  3.19031555, 3.97925368, 6.95466074, 7.42732263, 12.08734425, 12.35692940, 13.63305248, 21.64587886, 29.33632297)
conc <- c(1.175213, 8.748377, 12.650698, 12.741200, 12.965722, 13.648985, 13.633996, 11.110552, 16.232875, 6.747145, 7.156791, 7.776978, 3.241740, 1.528261)

dat <- data.frame(time=time, conc=conc)

# without fishing for NA's
n_TTT_ARS=0
r.adj2=0
for (i in (nrow(dat)-2):(min(seq_along(dat$time)[dat$time>=dat$time[which.max(dat$conc)]*2]))) {
   check <- summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared #modification
   if ((r.adj2 - check <(0.0001))) {
      n_TTT_ARS = nrow(dat)-i+1
      r.adj2 = summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared
   }
}

# fishing for NA's
n_TTT_ARS=0
r.adj2=0
for (i in (nrow(dat)-2):(min(seq_along(dat$time)[dat$time>=dat$time[which.max(dat$conc)]*2]))) {
   check <- summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared #modification
   if (!is.na(check) && (r.adj2 - check < (0.0001))) {
      n_TTT_ARS = nrow(dat)-i+1
      r.adj2 = summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared
   }
}
Aceto81 (Belgium), 2008-10-17 14:30
@ martin
Posting: # 2548

example

Dear Martin,

Thanks for the example!
The reason for the NA is the fact that the program was selecting only 2 samples instead of the minimum of 3.
This was caused by the fact that there were only 2 samples after 2×Tmax.
So in fact the method should return an NA in this case, I think (correct me if I'm wrong).

Here is an implementation:

time <- c(0.08263463, 0.52824754, 1.17358567, 1.39767757, 2.47663194,  3.19031555, 3.97925368, 6.95466074, 7.42732263, 12.08734425, 12.35692940, 13.63305248, 21.64587886, 29.33632297)
conc <- c(1.175213, 8.748377, 12.650698, 12.741200, 12.965722, 13.648985, 13.633996, 11.110552, 16.232875, 6.747145, 7.156791, 7.776978, 3.241740, 1.528261)

dat <- data.frame(time=time, conc=conc)

n_TTT_ARS=0
r.adj2=0
tp <- min(seq_along(dat$time)[dat$time>=dat$time[which.max(dat$conc)]*2])
if ((nrow(dat) - tp + 1) >=3) { #modification
for (i in (nrow(dat)-2):tp) {
  check <- summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared
 
    if ((r.adj2 - check <(0.0001))) {
      n_TTT_ARS = nrow(dat)-i+1
      r.adj2 = summary(lm(log(conc)~time,dat[i:nrow(dat),]))$adj.r.squared
    }
  }
} else {n_TTT_ARS = NA} #modification


Ace