Helmut Hero Vienna, Austria, 2012-03-29 18:52 Posting: # 8347 Views: 6,116

Dear all, I tried to implement PK model I (one-compartment open, lag-time):^{*}
My clumsy code (requires package truncnorm for the truncated normal distribution):

#set.seed(29081957) # uncomment this line only to compare results of a run with the set random seed

Is this correct? I think that I screwed up the analytical error. Original text: Analytical assay errors were generated from lognormal distributions with no bias, a CV of 10%, plus a constant term equal to the product of the assay CV and the limit of quantification, LQ. Shouldn’t I rather use a normal distribution instead (AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr)))? …which would give:
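For illustration, the normal-error alternative raised above could be sketched like this. This is a hypothetical minimal example, not the original simulation code: AErr, LQ, and the concentration vector C are assumed values, and the paper’s constant term is added as assay CV × LQ.

```r
# Hypothetical sketch of an additive analytical error: a normal error with
# sd proportional to the concentration (CV = 10%) plus the paper's constant
# term (assay CV times LQ). All values are illustrative assumptions.
set.seed(29081957)
AErr <- 0.10                                    # assay CV (10%)
LQ   <- 0.01                                    # limit of quantification (assumed)
C    <- c(0.05, 0.2, 0.5, 0.8, 0.6, 0.3, 0.1)   # theoretical concentrations (assumed)

C.obs <- sapply(C, function(Cj) {
  AErr1 <- rnorm(n = 1, mean = 0, sd = abs(Cj * AErr)) + AErr * LQ
  Cj + AErr1
})
round(C.obs, 4)
```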
— All the best, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
d_labes Hero Berlin, Germany, 2012-03-30 15:38 @ Helmut Posting: # 8356 Views: 5,121

Dear Helmut,

» Is this correct? I think that I screwed up the analytical error. Original text: Analytical assay errors were generated from lognormal distributions with no bias, a CV of 10%, plus a constant term equal to the product of the assay CV and the limit of quantification, LQ.
» Shouldn’t I rather use a normal distribution instead (AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr)))? …

I'm not quite sure if I really understand what you attempt here, but your implementation of the analytical error via the lognormal distribution seems correct to me. What I absolutely don't understand is the "… constant term …". What is it good for? If I understand correctly, it is only a shift in the concentration levels, constant over the whole curve and the same for all simulated profiles: nothing like a random term, as errors usually are meant to be. BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data?

— Regards, Detlew
Helmut Hero Vienna, Austria, 2012-03-30 15:56 @ d_labes Posting: # 8357 Views: 5,131

Dear Detlew!

» » Is this correct? I think that I screwed up the analytical error. Original text: Analytical assay errors were generated from lognormal distributions with no bias, a CV of 10%, plus a constant term equal to the product of the assay CV and the limit of quantification, LQ.
» » Shouldn’t I rather use a normal distribution instead (AErr1 <- rnorm(n=1, mean=0, sd=abs(C[j]*AErr)))? …
» I'm not quite sure if I really understand what you attempt here.

Well, reproduce the sims of the paper…

» But your implementation of the analytical error via the lognormal distribution seems correct to me.

Really? I’m not sure about meanlog=0 since dlnorm(1)==dnorm(0); shouldn’t I rather use meanlog=1?^{*}

» What I absolutely don't understand is the "… constant term …". What is it good for? If I understand correctly, it is only a shift in the concentration levels, constant over the whole curve and the same for all simulated profiles: nothing like a random term, as errors usually are meant to be.

I don't get the idea either. Maybe it’s time to ask László.

» BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data?

Exactly. Also I don’t get the point why I should go with a lognormal here (noise is not necessarily positive). Analytical error is normal, IMHO.
d_labes Hero Berlin, Germany, 2012-03-31 14:40 @ Helmut Posting: # 8363 Views: 5,089

Dear Helmut!

» » But your implementation of the analytical error via the lognormal distribution seems correct to me.
» Really? I’m not sure about meanlog=0 since dlnorm(1)==dnorm(0); shouldn’t I rather use meanlog=1?^{*}

If it comes to the lognormal I always prefer to work in the log domain, i.e. simulate the errors via a normal distribution, add them to the log-transformed theoretical value, and then transform back. The syntax of the *lnorm() functions (which values to use as meanlog, sdlog) is too complicated for my mind. Eventually this helps to figure it out.

» » What I absolutely don't understand is the "… constant term …". What is it good for? If I understand correctly, it is only a shift in the concentration levels, constant over the whole curve and the same for all simulated profiles: nothing like a random term, as errors usually are meant to be.
» I don't get the idea either. Maybe it’s time to ask László.

That seems a very good idea. Whichever László.

» » BTW: Why do you think you have screwed up something? Because the scatter in the simulated data is too smooth compared to real data?
» Exactly. Also I don’t get the point why I should go with a lognormal here (noise is not necessarily positive). Analytical error is normal, IMHO.

(Emphasis by me.) This is a very good question, one to battle over on long evenings at a beer, with smoking heads. Regarding positive noise via the lognormal you are mixing up somefink here, I think: the lognormal leads to positive or negative errors in the log domain, which back-transformed give values greater or lower than the theoretical one in a multiplicative fashion. The lognormal also leads to variances (in the original domain) which are proportional to the values themselves (higher variability at higher values). Maybe this is not the correct behaviour, since in bioanalysis it is often the case that the errors are biggest at the lowest concentrations. But here you are the expert.

— Regards, Detlew
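The log-domain recipe described above can be sketched in a few lines (C.theor and CV are assumed illustrative values). It also shows that adding a normal error on the log scale and back-transforming is the same as multiplying by a lognormal error with meanlog = 0:

```r
# Sketch of the log-domain approach (illustrative, not from the thread):
# simulate a normal error on log(C) and back-transform; this is equivalent
# to multiplying C by a lognormal error with meanlog = 0.
C.theor <- 0.5                    # theoretical concentration (assumed)
CV      <- 0.10                   # desired CV of the multiplicative error
sdlog   <- sqrt(log(CV^2 + 1))

# log domain: error is normal on the log scale, then back-transform
set.seed(123)
C.obs1 <- exp(log(C.theor) + rnorm(1, mean = 0, sd = sdlog))

# equivalent direct call with the same seed
set.seed(123)
C.obs2 <- C.theor * rlnorm(1, meanlog = 0, sdlog = sdlog)

all.equal(C.obs1, C.obs2)         # TRUE
```

Note that with meanlog = 0 the multiplicative error has expectation exp(sdlog²/2) > 1 in the original domain; to make it unbiased there, use meanlog = -0.5*sdlog^2, which is exactly the -0.5*log(CV^2+1) term in Helmut’s code.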
Helmut Hero Vienna, Austria, 2012-03-30 18:46 @ d_labes Posting: # 8358 Views: 5,282

Dear Detlew!

» I'm not quite sure if I really understand what you attempt here.

In more detail now. The authors explored different methods for estimating t_{lag}. Clearly the one used in standard PK software (the last timepoint before the first measured concentration) performed worst, especially if k_{a} is high and few timepoints are available. We discussed that already (#1872, #4850). Csizmadia and Endrényi studied the methods in terms of RMSE and bias; I’m also interested in setting up simulations of BE studies (formulations differing in F, k_{a}, t_{lag}, or combinations). In other words: if we use a ‘bad’ method, is the BE outcome substantially affected? Some ideas:
The analytical error was already defined in a strange way in Bois’ paper: Analytical assay errors were generated from truncated normal distributions with no bias (mean zero), a CV of 10%, truncation at ±3 CV, plus a fixed term equal to the product of the assay CV and the limit of quantification, LQ. Also interesting that they showed a large positive bias and a terrible CV of AUC_{∞}, especially for two-compartment models. Of the extrapolation methods, the one using the estimated C_{last} instead of the measured C_{last} performed slightly better. AUC_{t} performed best by far. The study was sponsored by the FDA, which still requires AUC_{∞}…
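A minimal sketch of the error model as the paper describes it (truncated normal, mean zero, CV 10%, truncation at ±3 CV, plus the fixed term CV × LQ). Base-R rejection sampling is used here to keep the example self-contained; the thread’s code uses rtruncnorm() from package truncnorm for the same purpose. LQ and C are assumed illustrative values.

```r
# Sketch of the paper's assay error: normal with mean zero and sd = CV * C,
# truncated at +/- 3 sd, plus the fixed term CV * LQ. Values are assumptions.
set.seed(29081957)
AErr <- 0.10                 # assay CV (10%)
LQ   <- 0.01                 # limit of quantification (assumed)
C    <- 0.5                  # theoretical concentration (assumed)

sdC <- abs(C * AErr)         # sd proportional to the concentration
repeat {                     # simple rejection sampling: truncate at +/- 3 sd
  err <- rnorm(n = 1, mean = 0, sd = sdC)
  if (abs(err) <= 3 * sdC) break
}
C.obs <- C + err + AErr * LQ # add the fixed term CV * LQ
```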
jag009 Hero NJ, 2012-03-30 20:34 (edited by jag009 on 2012-03-30 21:13) @ Helmut Posting: # 8359 Views: 5,087

Hi Helmut,

» V <- rlnorm(n=1, meanlog=log(V.d)-0.5*log(V.c^2+1),
»            sdlog=sqrt(log(V.c^2+1)))
» k01 <- rlnorm(n=1, meanlog=log(k01.d)-0.5*log(k01.c^2+1),
»              sdlog=sqrt(log(k01.c^2+1)))

Sorry for the interruption and excuse me for my novice question… How did you arrive at the meanlog equation? I looked up the rlnorm syntax in the R manual and the mean equation is different. Thanks, John

Hi Helmut, never mind… It's Friday here. Wasn't thinking straight…

Edit: Copy-pasted from follow-up post. You can edit your posts for 24 hours. [Helmut] P.S.: See Martin’s post.
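For what it’s worth, the meanlog formula follows from the moments of the lognormal: E[X] = exp(meanlog + sdlog²/2) and CV² = exp(sdlog²) − 1. Solving for a desired arithmetic mean m and CV c gives sdlog = √log(1 + c²) and meanlog = log(m) − 0.5·log(1 + c²). A quick numeric check (illustrative values):

```r
# Verify that meanlog = log(m) - 0.5*log(cv^2 + 1) and
# sdlog = sqrt(log(cv^2 + 1)) reproduce the desired arithmetic mean and CV.
set.seed(42)
m  <- 1       # desired arithmetic mean
cv <- 0.1     # desired CV (10%)
sdlog   <- sqrt(log(cv^2 + 1))
meanlog <- log(m) - 0.5 * log(cv^2 + 1)
x <- rlnorm(n = 1e6, meanlog = meanlog, sdlog = sdlog)
round(mean(x), 3)         # approx. 1.000
round(sd(x) / mean(x), 3) # approx. 0.100
```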
jag009 Hero NJ, 2012-04-04 17:03 (edited by jag009 on 2012-04-04 22:05) @ Helmut Posting: # 8383 Views: 4,977

Hi Helmut, I have a question on R, excuse me for being a novice… I just started using it :)

» I tried to implement PK model I (one-compartment open, lag-time):
» parameter                 distribution  mean  CV(%)  truncation
» Volume of distribution    lognormal     1     10     ...
» V.d <- 1   # volume of distribution
» V.c <- 0.1 # CV 10%, lognormal
» ...
» V <- rlnorm(n=1, meanlog=log(V.d)-0.5*log(V.c^2+1),

In your program, is the mean of V from a lognormal distribution? If so, can't you just specify the rlnorm statement as rlnorm(n=1, meanlog=V.d, ...) instead of rlnorm(n=1, meanlog=log(V.d)-0.5...)? The reason I ask is because of the following I saw in an article on "PK Modeling and Simulation of Mixed Pellets" by Watanalumlerd et al. The author listed a table:

Parameter  Distribution  Mean ± SD

He wrote "A lognormal distribution was chosen ... because time cannot be negative". In R, does this mean the rlnorm statement will be rlnorm(n=1, mean=0.75, sdlog=0.22)?

Thanks, John
d_labes Hero Berlin, Germany, 2012-04-05 09:15 @ jag009 Posting: # 8384 Views: 4,991

Dear John, try ?rlnorm or help("rlnorm") in the R console. Or (as Helmut has already pointed out) see this post from Martin.

— Regards, Detlew
Helmut Hero Vienna, Austria, 2012-04-05 14:27 @ jag009 Posting: # 8385 Views: 5,038

Hi John!

» […] I saw in an article on "PK Modeling and Simulation of Mixed Pellets" by Watanalumlerd et al. The author listed a table:
» Parameter             Distribution  Mean ± SD
» lag time of emptying  LogNormal     0.75 ± 0.22
» He wrote "A lognormal distribution was chosen ... because time cannot be negative"

The complete quote:^{*} Gastric emptying time, lag time of emptying, and their variability (standard deviation) were obtained from the literature […]. A lognormal distribution was chosen for all time parameters because time cannot be negative.

I don’t have the referenced literature, but I have strong doubts that the sample sizes in these scintigraphy studies were large enough to distinguish between normal and lognormal distributions. I suspect that the reported arithmetic means ± SD were (erroneously) used by Watanalumlerd et al. BTW, they employed a nice piece of software named Crystal Ball.

Time parameters in PK are a battleground. I wouldn’t opt for a lognormal in simulations “because time cannot be negative”: unphysiologically high values are still possible. I would rather go with a truncated normal (like in the paper of Csizmadia and Endrényi). The reasoning for the truncated AUC_{72} is also based on the fact that a GI transit time larger than 72 hours was never observed in any study.
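The truncated-normal alternative for time parameters could be sketched like this. The mean and SD (0.75 ± 0.22 h) are the values from the Watanalumlerd et al. table; the lower bound of 0 and the upper bound of 3 h are illustrative assumptions. Base-R rejection sampling is used here; package truncnorm offers rtruncnorm() for the same purpose.

```r
# Hypothetical sketch: draw a time parameter (e.g. lag time of emptying)
# from a normal distribution truncated at 0 and an assumed upper bound,
# instead of a lognormal. Bounds are illustrative assumptions.
rtnorm <- function(n, mean, sd, a, b) {
  out <- numeric(0)
  while (length(out) < n) {           # simple rejection sampling
    x <- rnorm(n, mean, sd)
    out <- c(out, x[x >= a & x <= b])
  }
  out[seq_len(n)]
}
set.seed(42)
tlag <- rtnorm(n = 1000, mean = 0.75, sd = 0.22, a = 0, b = 3)
range(tlag)                           # all draws lie within [0, 3]
```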
jag009 Hero NJ, 2012-04-05 17:52 @ Helmut Posting: # 8389 Views: 4,917

Thanks Helmut and D_labes 