Helmut Vienna, Austria, 2012-03-29 18:52 Posting: # 8347 |
## PK simulation [R for BE/BA] |

Dear all, I tried to implement PK model I (one-compartment open, lag-time):^{*} PK parameters drawn from (truncated) log-normal distributions; analytical error with a multiplicative component (log-normal, assay error 10 %) and an additive component (log-normal, LLOQ × assay error). My clumsy code requires package `truncnorm` for the truncated normal distribution; `# set.seed(29081957)` – uncomment this line only to compare results. A run with the random seed set is shown below. Is this correct? I think that I screwed up the analytical error. Shouldn’t I rather use a normal distribution instead (`AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr))`)? …which would give:
^{*} **Csizmadia F and L Endrényi**
*Model-Independent Estimation of Lag Times with First-Order Absorption and Disposition* J Pharm Sci 87(5), 608–12 (1998)
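For readers who want to tinker: a minimal base-R sketch of the two error models under discussion. The names `AErr`, `LLOQ`, and `C` follow the post; the concrete values and the structure of the original (not shown) code are my assumptions, not Helmut’s script.

```r
# Add analytical error to one theoretical concentration C.
# AErr: 10 % assay error; LLOQ: lower limit of quantification
# (assumption: additive component scaled as LLOQ * AErr, as in the post).
set.seed(29081957)          # seed from the post, for reproducibility
AErr <- 0.10
LLOQ <- 0.05
C    <- 2.5                 # theoretical concentration (arbitrary example)

# Variant 1: multiplicative log-normal error (median 1) + additive term
C.ln <- C * rlnorm(n = 1, meanlog = 0, sdlog = AErr) +
        rnorm(n = 1, mean = 0, sd = LLOQ * AErr)

# Variant 2: normal error proportional to C, as suggested above
C.n  <- C + rnorm(n = 1, mean = 0, sd = abs(C * AErr)) +
        rnorm(n = 1, mean = 0, sd = LLOQ * AErr)
```

For a 10 % CV the two variants are practically indistinguishable; the normal variant can in principle yield negative concentrations near the LLOQ.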
— All the best, Helmut Schütz. “The quality of responses received is directly proportional to the quality of the question asked.” |

d_labes Berlin, Germany, 2012-03-30 15:38 @ Helmut Posting: # 8356 |
## PK simulation |

Dear Helmut, » Is this correct? I think that I screwed up the analytical error. » Shouldn’t I rather use a normal distribution instead (`AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr))`)? I’m not quite sure if I really understand what you attempt here, but your implementation of the analytical error via the log-normal distribution seems correct to me. What I absolutely don’t understand is the “… constant term …”. What is it good for? If I understand correctly, it is only a shift in the concentration levels, constant over the whole curve and the same for all simulated profiles – nothing like the random term that errors usually are. BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data? — Regards, Detlew |

Helmut Vienna, Austria, 2012-03-30 15:56 @ d_labes Posting: # 8357 |
## PK simulation |

Dear Detlew! » » Is this correct? I think that I screwed up the analytical error. » » Shouldn’t I rather use a normal distribution instead (`AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr))`)? » I’m not quite sure if I really understand what you attempt here. Well, to reproduce the sims of the paper… » But your implementation of the analytical error via log-normal distribution seems correct to me. Really? I’m not sure about `meanlog=0` since `dlnorm(1)==dnorm(0)`; shouldn’t I rather use `meanlog=1`?^{*} » What I absolutely don’t understand is the “… constant term …”. What is it good for? Same here – I don’t get the idea either. Maybe it’s time to ask László. » BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data? Exactly. Also I don’t get the point why I should go with a log-normal here (noise is not necessarily positive). Analytical error is normal, IMHO. No big difference:
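The `meanlog=0` vs `meanlog=1` question can be settled numerically: the log-normal density at x equals dnorm(log(x))/x, so dlnorm(1) == dnorm(0), and with meanlog = 0 the *median* of the distribution is exp(0) = 1, i.e. a multiplicative error that leaves the median concentration unchanged. A quick sanity check in base R:

```r
# dlnorm(x, meanlog, sdlog) = dnorm(log(x), meanlog, sdlog) / x;
# at x = 1 this reduces to dnorm(0), the identity quoted above.
dlnorm(1, meanlog = 0, sdlog = 1) == dnorm(0)  # TRUE

# meanlog = 0: median of the multiplicative error is exp(0) = 1
qlnorm(0.5, meanlog = 0, sdlog = 0.1)          # 1
# meanlog = 1 would centre the error at exp(1) instead:
qlnorm(0.5, meanlog = 1, sdlog = 0.1)          # exp(1), about 2.718
```

So for a multiplicative error centred (in the median sense) at the theoretical concentration, `meanlog=0` is the natural choice.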
— All the best, Helmut Schütz. |

d_labes Berlin, Germany, 2012-03-31 14:40 @ Helmut Posting: # 8363 |
## PK simulation - log-normal |

Dear Helmut! » » But your implementation of the analytical error via log-normal distribution seems correct to me. » Really? I’m not sure about `meanlog=0` since `dlnorm(1)==dnorm(0)`; shouldn’t I rather use `meanlog=1`?^{*} If it comes to the log-normal I always prefer to work in the log domain, i.e. simulate the errors via a normal distribution, add them to the log-transformed theoretical value, and then transform back. The syntax of the `*lnorm()` functions (which values to use as meanlog, sdlog) is too complicated for my mind. Eventually this helps to figure it out. » » What I absolutely don’t understand is the “… constant term …”. » Me too… Maybe it’s time to ask László. That seems a very good idea. Whichever László. » » BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data? » Exactly. Also I don’t get the point why I should go with a log-normal here (noise is not necessarily positive). Analytical error is normal, IMHO. (Emphasis by me.) This is a very good question, one to battle over on long evenings at beer, with smoking heads. Regarding positive noise via log-normal you mix up somefink here, I think. The log-normal leads to positive or negative errors in the log domain, which back-transformed give values greater or lower than the theoretical one in a multiplicative fashion. The log-normal also leads to variances (in the original domain) which are proportional to the values themselves (higher variability at higher values). Maybe this is not the correct behaviour, since in bioanalysis the errors are often biggest at the lowest concentrations. But here you are the expert. — Regards, Detlew |
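Detlew’s log-domain recipe above (simulate normal errors on the log scale, add them to the log of the theoretical value, back-transform) can be sketched as follows; the concentration vector and the 10 % CV are arbitrary example values:

```r
# Log-domain error simulation: a normal error on the log scale
# becomes a multiplicative log-normal error after back-transformation.
set.seed(42)
C.theo <- c(0.5, 1.2, 2.5, 1.8, 0.9)   # theoretical concentrations
CV     <- 0.10                         # assay error 10 %
sdlog  <- sqrt(log(CV^2 + 1))          # SD on the log scale

err   <- rnorm(n = length(C.theo), mean = 0, sd = sdlog)
C.sim <- exp(log(C.theo) + err)        # same as C.theo * rlnorm(...)
```

The resulting scatter is proportional to the concentration, matching the remark that the log-normal gives higher variability at higher values.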

Helmut Vienna, Austria, 2012-03-30 18:46 @ d_labes Posting: # 8358 |
## Why all this fuzz? |

Dear Detlew! » I'm not quite sure if I really understand what you attempt here. In more detail now. The authors explored different methods for estimating t_{lag}. Clearly the one used in standard PK software (the last timepoint before the first measured concentration) performed worst, especially if k_{a} is high and few timepoints are available. We discussed that already (#1872, #4850). Csizmadia and Endrényi studied the methods in terms of RMSE and bias; I’m also interested in setting up simulations of BE studies (formulations differing in F, k_{a}, t_{lag}, or combinations). In other words: if we use a ‘bad’ method, is the BE outcome substantially affected? Some ideas:

- Reproduce the paper’s results for comparability.
- Explore t_{lag} as x of the last time point where C ≤ LLOQ and the first > LLOQ.
- Think about DR formulations. Is a shift in t_{lag} unbiasedly reflected in t_{max}? Would be nice, because (already) frequent sampling around C_{max} would make *additional* frequent sampling around the expected t_{lag} unnecessary.
PK models with a lag-time cover the vast majority of IR and DR formulations. Even if concentrations were measured at the first sampling time point in all subjects, PK models with a lag-time generally result in better fits (especially of the absorption phase and of C_{max}). Once I have a suitable setup I could try to answer questions only a few people dare to ask, like:

- AUC_{t} if t_{last,R} ≠ t_{last,T} (the wonderful missing-value story).
- AUC_{72} or AUC_{τ}: inter-/extrapolate to nominal time? Extrapolate a missing value?
- Missing values generally…
- C_{last} or *estimated* C_{min} at *τ*?^{[1]}
- Explore different algorithms for selecting the terminal phase in multicompartmental models.

The parameter distributions were taken from Bois et al. (1994).^{[2]} Interesting that there normally distributed data were assumed (why?) and a truncation of ±3 SD was used (though with the same CVs). Does anybody know an R package supporting the truncated log-normal? The analytical error was already defined in a strange way in Bois’ paper.
^{[1]} Paixão et al. compared extrapolation methods for AUC_{∞}, especially for two-compartment models. Of the extrapolation methods, using the estimated C_{last} instead of the measured C_{last} performed slightly better. AUC_{t} performed best by far. The study was sponsored by the FDA – still requiring AUC_{∞}…
**Paixão P, Gouveia LF, and JGA Morais**
*An alternative single dose parameter to avoid the need for steady-state studies on oral extended-release drug products* Eur J Pharm Biopharm 80(2), 410–7 (2012) DOI: 10.1016/j.ejpb.2011.11.001
^{[2]} **Bois FY, Tozer TN, Hauck WW, Chen M-L, Patnaik R, and RL Williams**
*Bioequivalence: Performance of Several Measures of Extent of Absorption* Pharm Res 11(5), 715–22 (1994) DOI: 10.1023/A:1018932430733
— All the best, Helmut Schütz. |

jag009 NJ, 2012-03-30 20:34 (edited by jag009 on 2012-03-30 21:13) @ Helmut Posting: # 8359 |
## PK simulation |

Hi Helmut, » `V <- rlnorm(n=1, meanlog=log(V.d)-0.5*log(V.c^2+1),` » `sdlog=sqrt(log(V.c^2+1)))` » `k01 <- rlnorm(n=1, meanlog=log(k01.d)-0.5*log(k01.c^2+1),` » `sdlog=sqrt(log(k01.c^2+1)))` Sorry for the interruption and excuse me for my novice question… How did you arrive at the meanlog equation? I looked up the rlnorm syntax in the R manual and the mean equation is different. Thanks, John. Never mind… It’s Friday here. Wasn’t thinking straight… Edit: Copy-pasted from follow-up post. You can edit your posts for 24 hours. [Helmut] P.S.: See Martin’s post. |
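The correction term in the quoted code can be verified by brute force: `rlnorm()` is parameterised by the mean and SD of log(X), so to hit an *arithmetic* mean of `V.d` with CV `V.c` on the original scale, meanlog must be log(V.d) − 0.5·log(V.c² + 1). A quick simulation check (sample size chosen only to make the estimates stable):

```r
# Verify that the quoted parameterisation recovers the target
# arithmetic mean (V.d) and CV (V.c) on the original scale.
set.seed(1234)
V.d <- 1      # target arithmetic mean
V.c <- 0.1    # target CV (10 %)

V <- rlnorm(n = 1e6,
            meanlog = log(V.d) - 0.5 * log(V.c^2 + 1),
            sdlog   = sqrt(log(V.c^2 + 1)))

mean(V)         # close to 1
sd(V)/mean(V)   # close to 0.1
```

Plugging `V.d` straight into `meanlog` would instead give a *median* of exp(V.d), not a mean of V.d.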

jag009 NJ, 2012-04-04 17:03 (edited by jag009 on 2012-04-04 22:05) @ Helmut Posting: # 8383 |
## PK simulation |

Hi Helmut, I have a question on R; excuse me for being a novice… I just started using it :) » I tried to implement PK model I (one-compartment open, lag-time): » `parameter distribution mean CV(%) truncation` » `Volume of distribution log-normal 1 10` … » `V.d <- 1 # volume of distribution` » `V.c <- 0.1 # CV 10%, lognormal` … » `V <- rlnorm(n=1, meanlog=log(V.d)-0.5*log(V.c^2+1),` In your program, is the mean of V from a log-normal distribution? If so, can’t you just specify the rlnorm statement as `rlnorm(n=1, meanlog=V.d, …)` instead of `rlnorm(n=1, meanlog=log(V.d)-0.5…)`? The reason I ask is the following, which I saw in an article on “PK Modeling and Simulation of Mixed Pellets” by Watanalumlerd et al. The author listed a table: `Parameter Distribution Mean+/-SD` and wrote “A lognormal distribution was chosen … because time cannot be negative”. In R, does this mean the rlnorm statement will be `rlnorm(n=1, mean=0.75, sdlog=0.22)`? Thanks, John |

d_labes Berlin, Germany, 2012-04-05 09:15 @ jag009 Posting: # 8384 |
## R function rlnorm() |

Dear John, try `?rlnorm` or `help("rlnorm")` in the R console. Or (as Helmut has already pointed out) see this post from Martin. — Regards, Detlew |

Helmut Vienna, Austria, 2012-04-05 14:27 @ jag009 Posting: # 8385 |
## Lognormal vs. truncated normal |

Hi John! » […] I saw in an article on “PK Modeling and Simulation of Mixed Pellets” by Watanalumlerd et al. The author listed a table: » `Parameter              Distribution  Mean+/-SD` » `lag time of emptying   Log-Normal    0.75+/-0.22` » He wrote “A lognormal distribution was chosen … because time cannot be negative”. The complete quote is in the paper.^{*} BTW, they employed a nice software named Crystal Ball. Time parameters in PK are a battleground. I wouldn’t opt for a lognormal in simulations “because time cannot be negative” – unphysiologically high values are still possible. I would rather go with a truncated normal (as in the paper of Csizmadia and Endrényi). The reasoning for truncated AUC_{72} is also based on the fact that a GI transit time longer than 72 hours was never observed in any study. ^{*} **Watanalumlerd P, Christensen JM, and JW Ayres**
*Pharmacokinetic Modeling and Simulation of Gastrointestinal Transit Effects on Plasma Concentrations of Drugs from Mixed Immediate-Release and Enteric-Coated Pellet Formulations* Pharmaceutical Development and Technology 12, 193–202 (2007) DOI: 10.1080/10837450701212750
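A truncated normal for a time parameter (as advocated above) can be drawn with the `truncnorm` package mentioned in the first post, or in base R via the inverse-CDF method. The sketch below uses a hypothetical helper `rtnorm()` with the Watanalumlerd lag-time values (0.75 ± 0.22 h) quoted earlier:

```r
# Truncated-normal sampling by inverse CDF (base R only).
# Truncation at zero guarantees non-negative lag times without the
# heavy upper tail of a log-normal.
rtnorm <- function(n, mean, sd, lower = 0, upper = Inf) {
  p.lo <- pnorm(lower, mean, sd)   # CDF at the truncation bounds
  p.hi <- pnorm(upper, mean, sd)
  qnorm(runif(n, p.lo, p.hi), mean, sd)
}

set.seed(2012)
tlag <- rtnorm(n = 1000, mean = 0.75, sd = 0.22, lower = 0)
range(tlag)   # all values > 0
```

This is equivalent to `truncnorm::rtruncnorm(n, a = 0, b = Inf, mean = 0.75, sd = 0.22)`; a ±3 SD truncation as in Csizmadia and Endrényi would use `lower = mean - 3*sd, upper = mean + 3*sd` instead.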
— All the best, Helmut Schütz. |

jag009 NJ, 2012-04-05 17:52 @ Helmut Posting: # 8389 |
## Lognormal vs. truncated normal |

Thanks, Helmut and d_labes! |