eduardojmorales ● 2006-10-05 23:11 (6409 d 20:56 ago) Posting: # 280 Views: 12,444 |
|
Hello, I'm a beginner in BE and I can't seem to find sources of information on how to determine the sampling interval for each drug. I'd appreciate any input. Thank you |
eduardojmorales ● 2006-10-06 01:16 (6409 d 18:51 ago) @ eduardojmorales Posting: # 281 Views: 11,171 |
|
❝ Hello, I'm a beginner in BE and I can't seem to find sources of information on how to determine the sampling interval for each drug. Even though a Chilean Guideline says you need to take enough samples to draw an accurate curve, it doesn't say how to determine them. The common calculation is 15 samples, 5 in each phase of the curve (ascending, peak, descending). For practical purposes this would be enough; but do we have a parameter to calculate this? How do we know the minimum number to get an accurate curve? |
H_Rotter ★ Germany, 2006-10-06 15:27 (6409 d 04:40 ago) @ eduardojmorales Posting: # 286 Views: 11,179 |
|
Hi Eduardo, are you talking to yourself? Hermann |
eduardojmorales ● 2006-10-10 20:09 (6404 d 23:58 ago) @ H_Rotter Posting: # 304 Views: 11,197 |
|
❝ are you talking to yourself? |
Helmut ★★★ Vienna, Austria, 2006-10-07 23:48 (6407 d 20:19 ago) @ eduardojmorales Posting: # 294 Views: 12,467 |
|
Dear Eduardo, the ‘classical’ algorithm uses a geometric progression in order to calculate sampling points:
t_i = t_(i−1) × (t_n / t_1)^(1/(n−1))
where
 i … index of the respective time point (2, 3, …, n),
 n … number of time points,
 t_i … calculated time point at index i,
 t_(i−1) … previous time point,
 t_1 … first time point,
 t_n … last time point.
I prepared a little example based on the following PK data:
Now you have to adapt these time points for practicability (e.g., sampling at 0.25 h instead of at 0.2417 h). In the example you see another problem which may arise from mean curves (very often the only ones published). The sampling scheme works fine for the ‘average’ subjects, and also for the ‘fastest’ ones, but the elimination phase may be insufficiently described for ‘slow’ subjects. Therefore, whenever possible, try to get some ‘Spaghetti-Plots’ (overlays of all subjects in a previous study) in the planning phase. Another problem is enteric-coated formulations, which show a large variability in lag times due to variability in gastric emptying. The example to the right used PK parameters from the ‘fast’ subject with additional lag times of 0.5 and 1.5 h. The algorithm will simply fail if you apply data from a mean curve only. In such a case you will have to ‘pack’ as many time points as possible within the first 3 hours or so, in order to define the absorption phase and the peak properly; you should use the algorithm only for the elimination phase (with t1 = 3 and tn = 12). Also note that the example is just illustrative; with ‘real world’ enteric-coated formulations the tmax may be at 12 hours – or even later… Conclusion: a simple algorithm may only serve you as an entry point; thorough knowledge of the drug’s (and formulation’s) PK is a prerequisite in study planning. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
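As a minimal sketch of the progression above (in Python rather than a spreadsheet; the function and variable names are illustrative, not from the original post):

```python
def geometric_schedule(t1, tn, n):
    """Return n sampling times from t1 to tn (hours), each point a
    constant multiple of the previous one:
    t_i = t_(i-1) * (tn / t1) ** (1 / (n - 1))"""
    ratio = (tn / t1) ** (1.0 / (n - 1))
    times = [t1]
    for _ in range(n - 1):
        times.append(times[-1] * ratio)
    return times

# 15 samples from 5 min (0.0833 h) to 12 h:
schedule = geometric_schedule(0.0833, 12.0, 15)
# adapt the raw values to practicable clock times by hand,
# e.g. 0.2417 h -> 0.25 h
```

Note that the ratio between consecutive time points is constant, so the points are evenly spaced on a logarithmic time axis.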
eduardojmorales ● 2006-10-10 20:23 (6404 d 23:44 ago) @ Helmut Posting: # 305 Views: 11,066 |
|
Thanks Helmut!!! Very informative. Can you please tell me – besides your ample experience – what's your source? That way I won't be asking so many beginner's questions! Thanks again, Eduardo |
Helmut ★★★ Vienna, Austria, 2006-10-10 21:05 (6404 d 23:02 ago) @ eduardojmorales Posting: # 306 Views: 11,302 |
|
Dear Eduardo! ❝ Can you please tell me […] what's your source? Oh my goodness! I dug the example out of an unpublished study I did in 1987 (a Monte Carlo simulation on the influence of different sampling schedules on BE). In this study I was referring to a contribution to a book* which has been out of print for many years. The algorithm is given there without any further source – although a geometric progression is an obvious solution, at least for a one-compartment model. The book actually is a collection of presentations from a conference held in 1978 (!) Just to quote the article again: Sampling strategies Only unspecific rules based upon general properties of compartmental models can be given if model-type discrimination is the purpose of the study. They may be summarized as follows:
Many articles have been published on optimal sampling for PK parameter estimation, but only a little (or even nothing?) for BE. Up to now, I think choosing the ‘right’ schedule is still mainly a decision coming from the ‘guts’ of an experienced person (with some ethical and financial constraints)… Just remember the main points:
❝ That way I won't be asking so many beginner's questions! No, just go on, that's what the forum is for – if nobody has time to spare, you simply receive no answers…
— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
intuitivepharma ☆ India, 2013-02-28 09:10 (4072 d 09:57 ago) @ Helmut Posting: # 10127 Views: 9,211 |
|
Dear Helmut, I sincerely apologise for reopening an old discussion thread. However, I found it very tempting and interesting to come across an algorithm to generate sampling time points as per a geometric progression. I tried to emulate your algorithm in Excel. Unfortunately I am unable to generate the time points. Can you please help me with the Excel code for the same? The formula I used was Ti = (B4-1)*((B2/B1)^(1/(B3-1))), with the first time point in [B1] = 0.0833. I understand that if I had coded the algorithm correctly, then when I key in 15 as the index of the time point, I should get 12 h as the time point. I got 19.966 instead. When I key in 1 as the index of the time point, I should get 0.0833 h as the time point. I got 0 instead. As indicated by you, the above formula will suffice for spacing time points as per a geometric progression, and the output needs to be fine-tuned towards practicability and based on individual formulation characteristics. On a more scientific basis, can I derive any input from the “partial derivatives plot” generated by WinNonlin for sampling-point determination? The WinNonlin write-up on the same is not very comprehensive. Kindly throw some light on the same. For example, kindly consider the following plasma concentration–time profile [time/concentration table not reproduced]. Its kinetic profile fits a first-order single-compartment system. The Observed Y and Predicted Y vs X plot and the partial derivatives plot for the same are shown below [images not reproduced]. Based on the partial derivatives plot [with information on Ka, Kel and VD], can you please guide how to optimise the sampling time points? The idea is to generate the final time points based on a geometric progression and fine-tune them with inputs from the partial derivatives plot. Thanks IP — Thanks & Regards, IP. |
Helmut ★★★ Vienna, Austria, 2013-02-28 14:56 (4072 d 04:11 ago) @ intuitivepharma Posting: # 10131 Views: 9,254 |
|
Hi intuitivepharma! ❝ I sincerely apologise for reopening an old discussion thread. No problem. ❝ Formula I used was, Ti = (B4-1)*((B2/B1)^(1/(B3-1))) Let’s see. First sampling at 5 min, last at 12 h, 15/16/17 sampling points. Enter the formulas in cells A4:A5 and drag to the others; in each of the three columns (A, B, C) the first value should be 0.0833 [full spreadsheet table not reproduced]. ❝ I understand if I had coded the algorithm correctly, then when I key in 15 at the index of time point, I should get 12 hrs as the time point. I got 19.966 instead. When I key in 1 at the index of time point, I should get 0.0833 hrs as the time point. I got 0 instead. No idea about indexing in M$ Excel. My coding is simple. ❝ As indicated by you the above formula will suffice the purpose of spacing time points as per a geometric progression and the output needs to be fine tuned towards practicability and based on individual formulation characteristic. Yes. Also keep the variability in tmax across subjects in mind. I would use evenly spaced sampling until the expected tmax and the geometric progression only afterwards. Therefore, t1 = tmax. ❝ On a more scientific basis, can I derive any input from the “partial derivatives plot” generated by WinNonlin for sampling point determination? The WinNonlin write-up on the same is not very comprehensive. Kindly throw some light on the same. The extremes (min/max) of the partial derivatives show the ‘most influential’ time points – if you are interested in PK modeling. For BE you can only use K10 (NCA-speak: λz). In your example 36 hours is most important and the interval 16–72 h ‘interesting’. But that’s trivial, isn’t it? ❝ Based on the partial derivatives plot [with information on Ka, Kel and VD] can you please guide how to optimise the sampling time points? ❝ The idea is to generate the final time points based on geometric progression and fine tuning it with the inputs from partial derivatives plot.
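The bug in the quoted formula: it multiplies the index (B4−1) by the per-step ratio instead of raising the ratio to the power (i−1)/(n−1) – with i = 15 that gives 14 × 1.426 ≈ 19.97, exactly the 19.966 reported. The closed form, sketched here in Python (my own illustration, not the spreadsheet itself; the defaults match the 5-min/12-h/15-point example):

```python
def t_at(i, t1=0.0833, tn=12.0, n=15):
    """Closed form of the geometric progression, indexed i = 1..n:
    t_i = t1 * (tn / t1) ** ((i - 1) / (n - 1))"""
    return t1 * (tn / t1) ** ((i - 1) / (n - 1))

# t_at(1) ~ 0.0833 (first sample), t_at(15) ~ 12.0 (last sample)
```

The equivalent spreadsheet formula would anchor t1, tn, and n with absolute references and take only the index from the current row.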
I’m running a large PopPK-model in Phoenix right now – maybe I will post some examples later on. However, I don’t think that it will be really useful. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
Helmut ★★★ Vienna, Austria, 2013-02-28 17:50 (4072 d 01:17 ago) @ intuitivepharma Posting: # 10134 Views: 9,095 |
|
Hi intuitivepharma, as promised, some thoughts. Fitting your data (assuming a dose of 100 units) I got: V/F 1.0417028 [remaining parameter estimates not reproduced]. To keep it simple I would set the sampling interval to multiples of 15 minutes. We should have at least three sampling time points in the absorption phase and around Cmax (FDA). So I suggest to aim for evenly spaced samples in the interval [0, 4.75] ⇒ [0, 0.95, 1.9, 2.85, 3.8, 4.75] ⇒ mround(t, 0.25) ⇒ [0, 1, 2, 2.75, 3.75, 4.75]. The remaining samples geometrically spaced and rounded ⇒ [6.5, 8.75, 11.75, 16, 21.5, 29, 39.25, 53.25, 72]. Let’s see: smaller partial derivatives for K10 would be nice. [comparison of schedule 1 / schedule 2 not reproduced] OK, we can expect to obtain more reliable fits with schedule 2. Since Cmax is more variable than AUC, it would be a good idea to sample more often in the earlier part of the profile. FDA’s 3/3 recommendation gives too few samples in most cases. Play around with the schedules, but forget modeling – it is not helpful. Concentrate on clinical practicability. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
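The recipe above (even spacing up to ~tmax, geometric spacing afterwards, everything snapped to a 15-minute grid) could be sketched like this in Python; the function name, defaults, and round-to-nearest snapping are my own assumptions, chosen so the sketch reproduces the numbers quoted in the post:

```python
def be_schedule(tmax=4.75, tend=72.0, grid=0.25):
    """Even spacing up to ~tmax, geometric spacing afterwards,
    snapped to a practicable grid (all times in hours)."""
    snap = lambda t: round(t / grid) * grid
    early = [k * tmax / 5 for k in range(6)]      # 0 .. tmax in 5 steps
    ratio = (tend / tmax) ** (1.0 / 9)            # 9 geometric samples after tmax
    late = [tmax * ratio ** k for k in range(1, 10)]
    return [snap(t) for t in early + late]

# -> [0.0, 1.0, 2.0, 2.75, 3.75, 4.75, 6.5, 8.75, 11.75,
#     16.0, 21.5, 29.0, 39.25, 53.25, 72.0]
```

With a 15-sample budget this puts six samples at or before the expected tmax and stretches the rest out to 72 h.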
intuitivepharma ☆ India, 2013-03-01 08:42 (4071 d 10:25 ago) (edited by intuitivepharma on 2013-03-01 08:52) @ Helmut Posting: # 10138 Views: 8,977 |
|
Dear Helmut, many thanks for the Excel workout and the valuable info on the “partial derivatives plot”. On second thought, can I tweak the algorithm to generate a geometric progression such that the sampling is denser towards the last time point? Consider that I have 18 sampling time points up to 36 hours and the Cmax for my formulation is at around 4 hours. I can split them into two sets.
For the pre-Tmax sampling time points I will use the algorithm to generate time points in a geometric progression that are denser towards the last time point [i.e. actually the Tmax]. After this exercise, if the 18 sampling time points thus generated are put together, we will have a sampling schedule which is crowded around the Tmax [ideal] and spaced out in a geometric progression on either side. ❝ To keep it simple I would set the sampling interval to multiples of 15 minutes. We should have at least three sampling time points in the absorption phase and around Cmax (FDA). So I suggest to aim for five evenly spaced samples in the interval [0, 4.75] ⇒ [0, 0.95, 1.9, 2.85, 3.8, 4.75] ⇒ I totally agree with you, but I just wanted to play around a bit with the algorithm. It may come in handy for ER products with Tmax around 12 hours having a zero-order release/absorption profile. I tried my bit at tweaking the Excel algorithm but only ended up with failure. Thanks, IP. — Thanks & Regards, IP. |
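The ‘mirrored’ pre-Tmax progression described above might be sketched as follows (my own illustration; `first_gap`, the distance of the closest sample below Tmax, is an assumed parameter not given in the thread):

```python
def crowd_toward_tmax(t1, tmax, n, first_gap=0.25):
    """Return n pre-tmax sampling times in [t1, tmax) whose spacing
    shrinks geometrically towards tmax ('mirrored' progression).
    first_gap: distance (h) of the closest point below tmax."""
    span = tmax - t1
    ratio = (span / first_gap) ** (1.0 / (n - 1))
    deltas = [first_gap * ratio ** k for k in range(n)]  # gaps back from tmax
    return sorted(tmax - d for d in deltas)

# e.g. 6 points from 0.5 h up to (but excluding) a 4 h tmax
pre = crowd_toward_tmax(0.5, 4.0, 6)
```

Concatenating this with an ordinary (forward) geometric progression from Tmax to the last sample gives the schedule that is dense around Tmax and spaced out on either side.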
SDavis ★★ UK, 2013-03-07 11:40 (4065 d 07:27 ago) @ intuitivepharma Posting: # 10169 Views: 8,894 |
|
Dear IP, if you have WinNonlin and you're willing to do a bit of modelling & simulation, you can use the VIF (Variance Inflation Factor) from your simulation to choose between different study designs. The VIF is the multiplier of the residual mean square used in deriving the asymptotic variance of the parameter estimates – useful in simulations to find an optimal experimental design. [Like the AIC] smaller values are better. So an example of usage would be to:
PS: I also like to plot the Summary Table and the Predicted Table from my simulations to see visually the difference between my simulation and what I will actually observe with that sampling schedule [plots not reproduced]. Here are the corresponding VIFs for Cmax [table not reproduced]; note that 1x was 11 samples whereas the other two designs were just 8 samples. Roughly speaking, the ratio of VIFs is analogous to the ratio in which you would expect the CV% to change in the future experiment. So you can see the second "cheap" design is markedly better for Cmax. — Simon Senior Scientific Trainer, Certara™ https://www.youtube.com/watch?v=xX-yCO5Rzag https://www.certarauniversity.com/dashboard https://support.certara.com/forums/ |
Helmut ★★★ Vienna, Austria, 2013-03-07 16:50 (4065 d 02:17 ago) @ SDavis Posting: # 10170 Views: 8,849 |
|
Hi Simon! Let’s see which VIFs we get for the same number of samples but with different schedules (IP’s example). Theoretical tmax at 4.837 h.
In #1 we have three samples within ±25% of tmax (4, 5, 6 h) and in #2 only two (3.75, 4.75 h). The closest sampling time point to the theoretical tmax in #1 is 5 h (+3.37%) and in #2 4.75 h (−1.80%). If I look at the VIFs for Cmax and tmax, I would favor #1. On the other hand, AUC and K10 speak for #2. BTW, your first plot is a nice example of why the linear trapezoidal is such a lousy method. P.S.: If you upload screenshots, crop / scale down to the maximum size (640 px longest side) and save them as 8-bit PNGs in IrfanView. They will look much better than downscaled JPEGs. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
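The percentage bookkeeping above is easy to verify (a throwaway sketch; the theoretical tmax of 4.837 h is taken from the post, the candidate times from the two schedules):

```python
# theoretical tmax (h) from the fit quoted in the post
tmax = 4.837
candidates = [4, 5, 6, 3.75, 4.75]  # samples near tmax in schedules #1 and #2
dev = {t: 100 * (t - tmax) / tmax for t in candidates}
for t, d in dev.items():
    print(f"{t:>5} h: {d:+.2f}% from theoretical tmax")
# 5 h -> +3.37 %, 4.75 h -> -1.80 %; 4, 5 and 6 h all fall within +/-25 %
```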
intuitivepharma ☆ India, 2013-03-08 13:39 (4064 d 05:28 ago) @ SDavis Posting: # 10173 Views: 8,767 |
|
Dear Simon, thanks for your valuable inputs on sampling time point optimization. I have WinNonlin and always work with modelling & simulation before designing sampling time points for BE studies. I would like to know if there is any limit for the VIF. I understand that lower is better [0.25 or 0.24 is better than 0.26], but how much upward deviation from the ideal value [0.26 in this example] is acceptable? In the example posted above, do you mean to say that a VIF of 0.29 [with 8 sampling time points] is acceptable [with respect to Cmax] as against a nominal VIF of 0.26 [which would be costly, with 11 sampling time points]? If I modify the time points further [with the same 8, or even fewer, say 7, sampling time points] and get a VIF of 0.30, will it be acceptable? What does Sqrt[VIF]_P_% mean? What is its significance? Thanks, IP. — Thanks & Regards, IP. |