Bioequivalence and Bioavailability Forum

Helmut
Vienna, Austria,
2012-03-29 18:52

Posting: # 8347
 

PK simulation [R for BE/BA]

Dear all,

I tried to implement PK model I (one-compartment open, lag-time):*

parameter                  distribution  mean    CV(%)  truncation
Volume of distribution     log-normal    1        10
elimination rate constant  log-normal    0.17375  20
absorption rate constant   log-normal    1.39     20
bioavailability            uniform       0.5      —     0.4–0.6
lag time                   normal        1        50    0–2

LLOQ 1% of theoretical Cmax, analytical error: multiplicative component (log-normal, assay error 10%), additive component (log-normal, LLOQ × assay error).

My clumsy code (requires package truncnorm for the truncated normal distribution):
# set.seed(29081957)  # uncomment this line to compare results only
require(truncnorm)
n      <- 16       # no of sims
t      <- c(0,0.25,0.5,0.75,1,1.25,1.5,2,2.5,3,3.5,4,5,6,9,12,16,24)
D      <- 500
# Index ".d" denotes default values, ".c" CV
F.d    <- 0.5      # fraction absorbed (BA)
F.c    <- 0.1      # ±0.1, uniform
V.d    <- 1        # volume of distribution
V.c    <- 0.1      # CV 10%, lognormal
k01.d  <- 1.39     # absorption rate constant (Wagner: ka)
k01.c  <- 0.2      # CV 20%, lognormal
k10.d  <- 0.17375  # elimination rate constant (Wagner: K)
k10.c  <- 0.2      # CV 20%, lognormal
tlag.d <- 1
tlag.c <- 0.5      # CV 50%, truncated normal
tlag.l <- 0        # lower truncation 0
tlag.u <- 2        # upper truncation 2
AErr   <- 0.1      # analytical error CV 10%, lognormal
curve(F.d*D/V.d*k01.d/(k01.d-k10.d)*(exp(-k10.d*(x-tlag.d))
  -exp(-k01.d*(x-tlag.d))), from=tlag.d, to=24, n=(24-tlag.d)*5+1,
  type="l", lwd=2, col="red", xlim=c(0,24), ylim=c(0,350),
  xlab="time", ylab="concentration", add=FALSE)
tmax.th<- log(k01.d/k10.d)/(k01.d-k10.d)+tlag.d
Cmax.th<- F.d*D*k01.d/(V.d*(k01.d-k10.d))*(exp(-k10.d*(tmax.th-tlag.d))
  -exp(-k01.d*(tmax.th-tlag.d)))
LLOQ   <- 0.01*Cmax.th # 1% of theoretical Cmax
abline(v=0)
abline(h=0)
abline(h=LLOQ, lty=3)
for (i in 1:n){
  F      <- runif(n=1, min=F.d-F.c, max=F.d+F.c)
  V      <- rlnorm(n=1, meanlog=log(V.d)-0.5*log(V.c^2+1),
              sdlog=sqrt(log(V.c^2+1)))
  k01    <- rlnorm(n=1, meanlog=log(k01.d)-0.5*log(k01.c^2+1),
              sdlog=sqrt(log(k01.c^2+1)))
  k10    <- rlnorm(n=1, meanlog=log(k10.d)-0.5*log(k10.c^2+1),
              sdlog=sqrt(log(k10.c^2+1)))
  tlag   <- rtruncnorm(n=1, a=tlag.l, b=tlag.u, mean=tlag.d, sd=tlag.c)
  # C    <- F*D/V*k01/(k01-k10)*(exp(-k10*(t-tlag))-exp(-k01*(t-tlag)))
  # explicit (compute macro-constants, Wagner’s notation)
  # John G. Wagner, Pharmacokinetics for the Pharmaceutical Scientist, pp 4–5, 1993)
  ka     <- k01
  K      <- k10
  C0     <- F*D/V  # fictitious i.v. concentration at t=0
  B1     <- C0*ka/(ka-K)*exp(K*tlag)
  B2     <- C0*ka/(ka-K)*exp(ka*tlag)
  C      <- B1*exp(-K*t)-B2*exp(-ka*t)
  for (j in 1:length(t)){ # analytical error (multiplicative 10% C)
    if (j==1)
      AErr1 <- rlnorm(n=1, meanlog=0, sdlog=sqrt(log(C[j]*AErr^2+1)))
    else
      AErr1 <- c(AErr1, rlnorm(n=1, meanlog=0, sdlog=sqrt(log(C[j]*AErr^2+1))))
  }
  AErr2  <- rep(AErr*LLOQ, length(t)) # analytical error (constant 10% LLOQ)
  C      <- C+AErr1+AErr2 # add analytical error
  C[C<LLOQ] <- NA  # assign NAs to Cs below LLOQ
  if (i==1) P <- log(C) else P <- P+log(C) # Prod. for geom. means
  if (i==1) S <- C else S <- S+C           # Sum  for arithm. means
  lines(t, C, type="l") # spaghetti plots
  }
GM       <- exp(P/n)  # geometric means
AM       <- S/n       # arithmetic means
points(t, GM, pch=16, col="blue")
lines(t, GM, type="l", col="blue", lwd=2)
points(t, AM, pch=16, col="#12DD00")
lines(t, AM, type="l", col="#12DD00", lwd=2)
legend("topright",
  c("theoretical", "geometric means", "arithmetic means"),
  pch=c(3,16,16),
  col=c("red","blue","#12DD00"))


A run with the random seed set:

[image]

Is this correct? I think that I screwed up the analytical error. Original text:

Analytical assay errors were generated from log-normal distributions with no bias, a CV of 10%, plus a constant term equal to the product of the assay CV and the limit of quantification, LQ.


Shouldn’t I rather use a normal distribution instead (AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr)))? …which would give:

[image]
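For what it's worth, a self-contained sketch contrasting the two candidate error models at a single fixed concentration (my own illustration, not the paper's method; the value C = 100 is arbitrary):

```r
# Illustration: both error models at one true concentration.
C    <- 100    # arbitrary true concentration
AErr <- 0.1    # assay CV 10%
set.seed(123)
# (a) multiplicative log-normal, mean-corrected to be unbiased
sdlog <- sqrt(log(AErr^2 + 1))
Ca <- C * rlnorm(1e4, meanlog = -0.5*sdlog^2, sdlog = sdlog)
# (b) additive normal with SD proportional to C
Cb <- C + rnorm(1e4, mean = 0, sd = AErr*C)
# both reproduce a CV of about 10% around the true value
c(CV.lnorm = sd(Ca)/mean(Ca), CV.norm = sd(Cb)/mean(Cb))
```

At a single concentration the two are hardly distinguishable; they differ in skewness and in how the error scales along the profile.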


  • Csizmadia F and L Endrényi
    Model-Independent Estimation of Lag Times with First-Order Absorption and Disposition
    J Pharm Sci 87(5), 608–12 (1998)

All the best, Helmut Schütz.
The quality of responses received is directly proportional to the quality of the question asked. ☼
d_labes

Berlin, Germany,
2012-03-30 15:38

@ Helmut
Posting: # 8356
 

PK simulation

Dear Helmut,

» Is this correct? I think that I screwed up the analytical error. Original text:

Analytical assay errors were generated from log-normal distributions with no bias, a CV of 10%, plus a constant term equal to the product of the assay CV and the limit of quantification, LQ.


» Shouldn’t I rather use a normal distribution instead (AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr)))? …

I'm not quite sure if I really understand what you are attempting here.

But your implementation of the analytical error via the log-normal distribution seems correct to me.

What I absolutely don't understand is the "… constant term …". What is it good for? :confused: If I understand correctly, this is only a shift in the concentration levels, constant over the whole curve and the same for all simulated profiles; nothing like a random term, as errors usually are.

BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data? :smoke:

Regards,

Detlew
Helmut
Vienna, Austria,
2012-03-30 15:56

@ d_labes
Posting: # 8357
 

PK simulation

Dear Detlew!

» » Is this correct? I think that I screwed up the analytical error. Original text:

Analytical assay errors were generated from log-normal distributions with no bias, a CV of 10%, plus a constant term equal to the product of the assay CV and the limit of quantification, LQ.


» » Shouldn’t I rather use a normal distribution instead (AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr)))? …

» I'm not quite sure if I really understand what you are attempting here.

Well, to reproduce the simulations of the paper…

» But your implementation of the analytical error via the log-normal distribution seems correct to me.

Really? I'm not sure about meanlog=0, since dlnorm(1)==dnorm(0); shouldn't I rather use meanlog=1?*

» What I absolutely don't understand is the "… constant term …". What is it good for? :confused: If I understand correctly, this is only a shift in the concentration levels, constant over the whole curve and the same for all simulated profiles; nothing like a random term, as errors usually are.

Me neither; I don't get the idea either. Maybe it's time to ask László.

» BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data? :smoke:

Exactly. Also I don't get why I should go with a log-normal here (noise is not necessarily positive). Analytical error is normal, IMHO.


  • No big difference:

    [image]

All the best, Helmut Schütz.
d_labes

Berlin, Germany,
2012-03-31 14:40

@ Helmut
Posting: # 8363
 

PK simulation - log-normal

Dear Helmut!

» » But your implementation of the analytical error via the log-normal distribution seems correct to me.
»
» Really? I'm not sure about meanlog=0, since dlnorm(1)==dnorm(0); shouldn't I rather use meanlog=1?*

When it comes to the log-normal I always prefer to work in the log domain, i.e., simulate the errors via the normal distribution, add them to the log-transformed theoretical value, and then transform back. The syntax of the *lnorm() functions (which values to use as meanlog, sdlog) is too complicated for my mind :cool:. Maybe this helps to figure it out.
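To make this concrete, a sketch (arbitrary parameter values): in current R, rlnorm() is implemented as exp() of a normal deviate, so the log-domain recipe gives identical draws from the same RNG stream.

```r
# Log-domain simulation vs. rlnorm() directly.
CV      <- 0.2
mu      <- 1.39                        # desired arithmetic mean
sdlog   <- sqrt(log(CV^2 + 1))
meanlog <- log(mu) - 0.5*sdlog^2
set.seed(42)
x1 <- rlnorm(5, meanlog = meanlog, sdlog = sdlog)
set.seed(42)
x2 <- exp(meanlog + rnorm(5, mean = 0, sd = sdlog))  # log-domain recipe
all.equal(x1, x2)  # → TRUE
```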

» » What I absolutely don't understand is the "… constant term …". What is it good for? :confused: If I understand correctly, this is only a shift in the concentration levels, constant over the whole curve and the same for all simulated profiles; nothing like a random term, as errors usually are.
»
» Me neither; I don't get the idea either. Maybe it's time to ask László.

That seems a very good idea. Whichever László :-D.

» » BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data? :smoke:
»
» Exactly. Also I don't get why I should go with a log-normal here (noise is not necessarily positive). Analytical error is normal, IMHO.

(Emphasis by me.) This is a very good question, one to battle over on long evenings at beer, with smoking heads :smoke:.

Regarding positive noise via the log-normal, you mix up somefink here, I think. The log-normal gives positive or negative errors in the log domain, which back-transformed yield values greater or lower than the theoretical one in a multiplicative fashion.

The log-normal also leads to variances (in the original domain) which are proportional to the values themselves (higher variability at higher values). Maybe this is not the correct behaviour, since in bioanalysis the errors are often biggest at the lowest concentrations. But here you are the expert :-D.
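A quick numerical check of that proportionality (a sketch; the concentrations 1, 10, 100 are arbitrary):

```r
# SD grows with the concentration while the CV stays at ~10%.
sdlog <- sqrt(log(0.1^2 + 1))          # CV 10% in the original domain
set.seed(1)
for (C in c(1, 10, 100)) {
  x <- C * rlnorm(1e5, meanlog = -0.5*sdlog^2, sdlog = sdlog)
  cat(sprintf("C = %3g   SD = %7.3f   CV = %.3f\n", C, sd(x), sd(x)/mean(x)))
}
```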

Regards,

Detlew
Helmut
Vienna, Austria,
2012-03-30 18:46

@ d_labes
Posting: # 8358
 

Why all this fuzz?

Dear Detlew!

» I'm not quite sure if I really understand what you are attempting here.

In more detail now. The authors explored different methods for estimating tlag. Clearly the one used in standard PK software (the last time point before the first measured concentration) performed worst, especially if ka is high and few time points are available. We discussed that already (#1872, #4850). Csizmadia and Endrényi studied the methods in terms of RMSE and bias; I'm also interested in setting up simulations of BE studies (formulations differing in F, ka, tlag, or combinations thereof). In other words: if we use a 'bad' method, is the BE outcome substantially affected? Some ideas:
  • Reproduce the paper’s results for comparability.
  • Explore tlag as x of the last time point where C≤LLOQ and the first >LLOQ.
  • Think about DR formulations. Is a shift in tlag reflected without bias in tmax? That would be nice, because the (already) frequent sampling around Cmax would make additional frequent sampling around the expected tlag unnecessary.
In my experience one- and two-compartment models with a lag time cover the vast majority of IR and DR formulations. Even if concentrations were measured in all subjects at the first sampling time point, PK models with a lag time generally result in better fits (especially of the absorption phase and of Cmax). Once I have a suitable setup I could try to answer questions only a few people dare to ask, like:
  • AUCt if tlast,R ≠ tlast,T (the wonderful missing value story).
  • AUC72 or AUCτ: Inter-/extrapolate to nominal time? Extrapolate a missing value?
  • Missing values generally…
  • Clast or estimated Cmin at τ?[1]
  • Explore different algorithms for selecting the terminal phase in multicompartmental models.
BTW, for their models' parameters Csizmadia and Endrényi refer to an earlier (and much-quoted) publication by Bois et al. (1994).[2] Interesting that normally distributed data were assumed there (why?) and truncation at ±3 SD was used (though with the same CVs). Does anybody know an R package supporting the truncated log-normal?
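In the meantime a workaround without a dedicated package: truncate the underlying normal in the log domain with rtruncnorm() and exponentiate (a sketch; the function name rtrunclnorm and the parameter values are my own):

```r
require(truncnorm)
# Truncated log-normal: truncate the underlying normal on the log scale.
rtrunclnorm <- function(n, meanlog, sdlog, lower, upper) {
  exp(rtruncnorm(n, a = log(lower), b = log(upper),
                 mean = meanlog, sd = sdlog))
}
set.seed(29081957)
x <- rtrunclnorm(1e4, meanlog = 0, sdlog = 1, lower = 0.5, upper = 2)
range(x)  # all draws fall within [0.5, 2]
```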

The analytical error was already defined in a strange way in Bois’ paper:

Analytical assay errors were generated from truncated normal distributions with no bias (mean zero), a CV of 10%, truncation at ±3 CV, plus a fixed term equal to the product of the assay CV and the limit of quantification, LQ.

Also interesting that they showed a large positive bias and a terrible CV of AUC∞, especially for two-compartment models. Of the extrapolation methods, the one using the estimated Clast instead of the measured Clast performed slightly better. AUCt performed best by far. The study was sponsored by the FDA – still requiring AUC∞ :not really:


  1. Paixão P, Gouveia LF, and JGA Morais
    An alternative single dose parameter to avoid the need for steady-state studies on oral extended-release drug products
    Eur J Pharm Biopharm 80(2), 410–7 (2012)
    DOI: 10.1016/j.ejpb.2011.11.001
  2. Bois FY, Tozer TN, Hauck WW, Chen M-L, Patnaik R, and RL Williams
    Bioequivalence: Performance of Several Measures of Extent of Absorption
    Pharm Res 11(5), 715–22 (1994)
    DOI: 10.1023/A:1018932430733

All the best, Helmut Schütz.
jag009

NJ,
2012-03-30 20:34
(edited by jag009 on 2012-03-30 21:13)

@ Helmut
Posting: # 8359
 

PK simulation

Hi Helmut,

»   V      <- rlnorm(n=1, meanlog=log(V.d)-0.5*log(V.c^2+1),
»               sdlog=sqrt(log(V.c^2+1)))
»   k01    <- rlnorm(n=1, meanlog=log(k01.d)-0.5*log(k01.c^2+1),
»               sdlog=sqrt(log(k01.c^2+1)))

Sorry for the interruption and excuse my novice question… How did you arrive at the meanlog equation? I looked up the rlnorm syntax in the R manual and the mean equation is different.

Thanks

John


Hi Helmut,
Never mind… It's Friday here. Wasn't thinking straight… :sleeping:


Edit: Copypasted from follow-up post. You can edit your posts for 24 hours. [Helmut]
P.S.: See Martin’s post.
jag009

NJ,
2012-04-04 17:03
(edited by jag009 on 2012-04-04 22:05)

@ Helmut
Posting: # 8383
 

PK simulation

Hi Helmut,

I have a question on R, excuse me for being a novice.. I just started using it :)

» I tried to implement PK model I (one-compartment open, lag-time):
» parameter                  distribution  mean      CV(%)  truncation
» Volume of distribution     log-normal    1         10
...
» V.d    <- 1        # volume of distribution
» V.c    <- 0.1      # CV 10%, lognormal
...
»   V      <- rlnorm(n=1, meanlog=log(V.d)-0.5*log(V.c^2+1),

In your program, is the mean of V from a log-normal distribution? If so, can't you just specify the rlnorm statement as rlnorm(n=1, meanlog=V.d, ...) instead of rlnorm(n=1, meanlog=log(V.d)-0.5...)?

The reason I ask is the following, which I saw in an article, "PK Modeling and Simulation of Mixed Pellets" by Watanalumlerd et al. The author listed a table:

Parameter                Distribution        Mean+/-SD
...
lag time of emptying     Log-Normal          0.75+/-0.22


He wrote, "A lognormal distribution was chosen … because time cannot be negative".

In R, does this mean the rlnorm statement will be:

rlnorm(n=1, mean=0.75, sdlog=0.22)?


Thanks

John
d_labes

Berlin, Germany,
2012-04-05 09:15

@ jag009
Posting: # 8384
 

R function rlnorm()

Dear John,

try ?rlnorm or help("rlnorm") in the R console. Or (as Helmut has already pointed out) see this post by Martin.
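For completeness, a minimal sketch of the conversion discussed there: from the arithmetic mean and CV (original domain) of a log-normal to the meanlog/sdlog arguments of rlnorm(). The helper names are my own:

```r
# Convert arithmetic mean m and CV to rlnorm()'s meanlog/sdlog.
mean2meanlog <- function(m, cv) log(m) - 0.5*log(cv^2 + 1)
cv2sdlog     <- function(cv)    sqrt(log(cv^2 + 1))
# Round trip: since E[X] = exp(meanlog + sdlog^2/2), the arithmetic
# mean is recovered exactly.
m  <- 1
cv <- 0.1
ml <- mean2meanlog(m, cv)
sl <- cv2sdlog(cv)
exp(ml + 0.5*sl^2)  # → 1
```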

Regards,

Detlew
Helmut
Vienna, Austria,
2012-04-05 14:27

@ jag009
Posting: # 8385
 

Lognormal vs. truncated normal

Hi John!

» […] I saw in an article, "PK Modeling and Simulation of Mixed Pellets" by Watanalumlerd et al. The author listed a table:
» Parameter                Distribution        Mean+/-SD
» lag time of emptying     Log-Normal          0.75+/-0.22
» He wrote "A lognormal distribution was chosen ... because time cannot be negative"

The complete quote:*

Gastric emptying time, lag time of emptying, and their variability (standard deviation) were obtained from the literature […]. A lognormal distribution was chosen for all time parameters because time cannot be negative.

I don't have the referenced literature, but I have strong doubts that the sample sizes in these scintigraphy studies were large enough to distinguish between normal and lognormal distributions. I suspect that reported arithmetic means ± SD were (erroneously) used by Watanalumlerd et al. BTW, they employed a nice piece of software named Crystal Ball. ;-)

Time parameters in PK are a battleground. I wouldn't opt for the lognormal in simulations "because time cannot be negative": unphysiologically high values are still possible. I would rather go with a truncated normal (as in the paper by Csizmadia and Endrényi). The reasoning for the truncated AUC72 is likewise based on the fact that a GI transit time longer than 72 hours was never observed in any study.
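A sketch of the difference, re-using the mean 0.75 and SD 0.22 from the table above (purely illustrative):

```r
require(truncnorm)
set.seed(7)
cv   <- 0.22/0.75                     # CV implied by mean and SD
t.ln <- rlnorm(1e4, meanlog = log(0.75) - 0.5*log(cv^2 + 1),
               sdlog = sqrt(log(cv^2 + 1)))
t.tn <- rtruncnorm(1e4, a = 0, b = Inf, mean = 0.75, sd = 0.22)
# Both stay positive, but the log-normal's right tail typically reaches
# much higher (unphysiologically long) lag times.
c(max.lognormal = max(t.ln), max.truncnorm = max(t.tn))
```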


  • Watanalumlerd P, Christensen JM, and JW Ayres
    Pharmacokinetic Modeling and Simulation of Gastrointestinal Transit Effects on Plasma Concentrations of Drugs from Mixed Immediate-Release and Enteric-Coated Pellet Formulations
    Pharmaceutical Development and Technology 12, 193–202 (2007)
    DOI: 10.1080/10837450701212750

All the best, Helmut Schütz.
jag009

NJ,
2012-04-05 17:52

@ Helmut
Posting: # 8389
 

Lognormal vs. truncated normal

Thanks Helmut and D_labes :-)