gracehung ☆ Taiwan, 2013-05-20 07:35 Posting: # 10603 Views: 15,115 

I have a question about the handling of missing data and need your advice. In a BE study, one sample's actual sampling time was mis-recorded. Nevertheless, in accordance with the protocol, this sample was analyzed by our analytical laboratory.

Helmut ★★★ Vienna, Austria, 2013-05-20 14:22 @ gracehung Posting: # 10604 Views: 13,453 

Hi Gracehung, please follow the Forum’s Policy in the future. THX. » In a BE study, one sample’s actual sampling time was mis-recorded. Nevertheless, in accordance with the protocol, this sample was analyzed by our analytical laboratory. Perform an audit. Why was an existing sample recorded as missing? » 1. It was not specified in the protocol… Don’t worry; it’s too late now. » … how would you handle this missed time point in the statistical analysis? In the future, describe how you would handle missing samples in an SOP or – much better IMHO – in the protocol. In the latter case the procedure is more transparent to an assessor (i.e., approved by the IEC and the local authority). » 2. Will this data be included in the statistical analysis? If yes, what is the handling procedure? Depends on the intellectual horsepower you are willing/able to invest. Missings are generally not problematic if unambiguously located in the absorption^{1} or distribution/elimination^{2} phase. If the missing might be C_{max},^{3} data imputation is problematic. Regulators don’t like PK modeling, so I have tried regression splines – with not very convincing results (references there). » 3. If the data is excluded, which method would you use to calculate the AUC? » a. Would you adopt an interpolation method? » b. If yes, would you adopt logarithmic or linear interpolation? I always suggest the lin-up/log-down trapezoidal method. I think that Pharsight will even make it the default in the next version of Phoenix/WinNonlin. No need to impute data. Fine – unless the missing might be C_{max}… See also this presentation about different integration algorithms (especially slides 16–17 on what happens with missing values). » 4. Is there any guideline or reference document we could follow? Not that I recall. Let’s denote the missing value C_{i}.
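[Editor's note] The rule Helmut recommends is easy to state precisely: linear trapezoids on ascending or flat segments, log trapezoids on descending ones. A minimal sketch (the thread works in R; this is an equivalent Python illustration, with the function name `auc_linup_logdown` invented for the example):

```python
import math

def auc_linup_logdown(t, c):
    """Lin-up/log-down trapezoidal AUC: linear trapezoid where the profile
    rises (or is flat, or touches zero), log trapezoid where it falls."""
    auc = 0.0
    for i in range(len(t) - 1):
        dt, c1, c2 = t[i + 1] - t[i], c[i], c[i + 1]
        if 0 < c2 < c1:                      # descending segment -> log trapezoid
            auc += dt * (c2 - c1) / math.log(c2 / c1)
        else:                                # ascending/flat segment -> linear
            auc += dt * 0.5 * (c1 + c2)
    return auc

# the interval from the worked example further down: C(8 h)=33.17, C(16 h)=7.86
print(round(auc_linup_logdown([8, 16], [33.17, 7.86]), 1))  # -> 140.6
```

Note the fallback to the linear trapezoid whenever concentrations rise again, which is what makes the rule safe for multiple-peak profiles.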
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
yjlee168 ★★ Kaohsiung, Taiwan, 2013-05-22 08:25 @ Helmut Posting: # 10614 Views: 13,096 

Dear Helmut, great! However, is there any special consideration when using the lin-up/log-down trapezoidal rule to calculate AUCs if there is more than one absorption peak? Thanks in advance. » ... » I always suggest the lin-up/log-down trapezoidal method. I think that Pharsight will even make it the default in the next version of Phoenix/WinNonlin. No need to impute data. Fine – unless the missing might be C_{max}… See also this presentation about different integration algorithms (especially slides 16–17 on what happens with missing values). — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
Helmut ★★★ Vienna, Austria, 2013-05-22 20:25 @ yjlee168 Posting: # 10620 Views: 13,291 

Hi Yungjin! » However, is there any special consideration when using the lin-up/log-down trapezoidal rule to calculate AUCs if there is more than one absorption peak? Not that I am aware of. If values increase again, the algorithm falls back to the linear trapezoidal. Try this clumsy code (a bimodal release formulation with two lag-times, two absorption rate constants, and a common elimination): #set.seed(29081957) # keep this line only to compare results No big deal, but the lin-up/log-down should perform better if there are missings.
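[Editor's note] Only the `set.seed` comment of Helmut's R code survived conversion. In its spirit, here is a hedged Python sketch with entirely invented parameters: a bimodal-release profile built from two lagged first-order absorptions sharing one elimination rate constant, sampled on the time grid Helmut quotes later, with the AUC computed by both rules.

```python
import math

def bateman(t, tlag, ka, ke, a):
    """Lagged one-compartment absorption term; 'a' lumps dose, F and V (made up)."""
    if t <= tlag:
        return 0.0
    return a * ka / (ka - ke) * (math.exp(-ke*(t - tlag)) - math.exp(-ka*(t - tlag)))

def conc(t):
    # bimodal release: two lag-times, two absorption rate constants, common ke
    return bateman(t, 0.5, 1.4, 0.12, 60) + bateman(t, 4.0, 0.9, 0.12, 40)

def auc(t, c, logdown=False):
    total = 0.0
    for i in range(len(t) - 1):
        dt, c1, c2 = t[i+1] - t[i], c[i], c[i+1]
        if logdown and 0 < c2 < c1:
            total += dt * (c2 - c1) / math.log(c2 / c1)   # log trapezoid
        else:
            total += dt * 0.5 * (c1 + c2)                 # linear trapezoid
    return total

times = [0, 0.5, 0.75, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 16, 24]
c = [conc(x) for x in times]
print(round(auc(times, c), 2), round(auc(times, c, logdown=True), 2))
```

Even when values rise again after the second lag-time, the rule simply falls back to the linear trapezoid on those segments, exactly as Helmut describes.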
yjlee168 ★★ Kaohsiung, Taiwan, 2013-05-22 21:57 (edited by yjlee168 on 2013-05-22 22:28) @ Helmut Posting: # 10622 Views: 13,091 

Dear Helmut, thank you for your messages and code. I tried to run your code without luck, but I will try again later. I will also change the way bear calculates AUCs. I remember reading something about method comparisons for the trapezoidal rule in a textbook, but I just cannot remember which one. » Not that I am aware of. If values increase again, the algorithm falls back to the linear trapezoidal. Try this clumsy code (a bimodal release formulation with two lag-times, two absorption rate constants and a common elimination): » » ... » B1.1 <- C0.1*k01.1/(k01.1-K)*exp(k10*tlag1) » B2.1 <- C0.1*k01.1/(k01.1-K)*exp(k01.1*tlag1) Here I got Error: object 'K' not found and some error messages after this. One more question: in your presentation above (slide #17 [edited]; missing data in R at 12 h), it was a lin-up/log-down plot. You said there is no need for data imputation when using lin-up/log-down. Do you do any line smoothing with your data first? Otherwise the line should not be connected in that way – it should be connected as the line the arrow points to. Do I miss anything here? 
Helmut ★★★ Vienna, Austria, 2013-05-22 23:25 @ yjlee168 Posting: # 10623 Views: 13,238 

Hi Yungjin, » I will also change the way bear calculates AUCs. Relax. It would only make sense if the user has a choice. Some people still go crazy if one tries anything other than the linear trapezoidal. Don’t use my code without improving it! In its current form it will not work with NAs and/or isolated BQLs within the profile. » I remember reading something about method comparisons for the trapezoidal rule in a textbook, but I just cannot remember which one. Yes. I think I have even posted references somewhere. Too lazy to search now. » » ... » » B1.1 <- C0.1*k01.1/(k01.1-K)*exp(k10*tlag1) » » B2.1 <- C0.1*k01.1/(k01.1-K)*exp(k01.1*tlag1) » Here I got Error: object 'K' not found and some error messages after this. Sorry, I made a copy/paste error. Should work now (use k10 instead of K). » One more question: in your presentation above (slide #17 [edited]; missing data in R at 12 h), it was a lin-up/log-down plot. Nope. 16/17 are linear plots. In slide 16 I connected all points by straight lines (what the linear trapezoidal would do), and in slide 17 by straight lines if ascending and by an exponential if descending. » You said there is no need for data imputation when using lin-up/log-down. Do you do any line smoothing with your data first? No. » Otherwise the line should not be connected in that way – it should be connected as the line the arrow points to. Why? » Do I miss anything here? Yes, you do. If the 12 h sample is missing, the linear trapezoidal works as if we had imputed a value by linear interpolation; the lin-up/log-down works as if we had imputed a value by logarithmic interpolation. But the trick is that with the lin-up/log-down nothing has to be imputed at all! 
(time/concentration table lost in conversion; the relevant values are C_{8} = 33.17 and C_{16} = 7.86, with the 12 h sample missing)
Interpolated values for imputation:
linear: C_{12}^{*} = 33.17+(12−8)/(16−8)·(7.86−33.17) = 20.52
lin/log: C_{12}^{*} = exp(log(33.17)+(12−8)/(16−8)·log(7.86/33.17)) = 16.15
Calculation based on imputed values:
linear/lin imp.: pAUC_{8–16} = 0.5(12−8)(33.17+20.52) + 0.5(16−12)(20.52+7.86) = 164.1
linear/log imp.: pAUC_{8–16} = 0.5(12−8)(33.17+16.15) + 0.5(16−12)(16.15+7.86) = 146.7
lin/log, log imp.: pAUC_{8–16} = (12−8)(16.15−33.17)/log(16.15/33.17) + (16−12)(7.86−16.15)/log(7.86/16.15) = 140.6
Direct calculation without imputation:
linear: pAUC_{8–16} = 0.5(16−8)(33.17+7.86) = 164.1
lin/log: pAUC_{8–16} = (16−8)(7.86−33.17)/log(7.86/33.17) = 140.6
BTW, the theoretical pAUC_{8–16} based on the model is 140.62… To make a long story short: only if one has to use the linear trapezoidal (for whatever wacky reasons) would I impute an estimate based on log-linear interpolation (the second variant above). Otherwise the bias is substantial. Q.E.D.
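[Editor's note] The arithmetic in this post can be replayed line by line. A small Python check (an editor's sketch; variable names invented), using the two measured concentrations and the variants Helmut lists:

```python
import math

C8, C16 = 33.17, 7.86                      # measured; the 12 h sample is missing

# what imputation would give at 12 h
c12_lin = C8 + (12-8)/(16-8) * (C16 - C8)                          # ~20.52
c12_log = math.exp(math.log(C8) + (12-8)/(16-8)*math.log(C16/C8))  # ~16.15

lin  = lambda dt, a, b: 0.5 * dt * (a + b)               # linear trapezoid
logd = lambda dt, a, b: dt * (b - a) / math.log(b / a)   # log trapezoid

print(lin(8, C8, C16))                             # direct linear, ~164.1
print(lin(4, C8, c12_lin) + lin(4, c12_lin, C16))  # linear + lin. imputation, ~164.1
print(lin(4, C8, c12_log) + lin(4, c12_log, C16))  # linear + log. imputation, ~146.6
print(logd(8, C8, C16))                            # direct lin-up/log-down, ~140.6
```

The post's 146.7 comes from rounding the imputed value to 16.15 before summing; either way, only the log-down variants approach the model value of 140.62.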
yjlee168 ★★ Kaohsiung, Taiwan, 2013-05-23 11:43 @ Helmut Posting: # 10626 Views: 12,917 

Dear Helmut, thanks for your messages – the code works well now. » ... » Relax. It would only make sense if the user has a choice. Some people still go crazy if one tries anything other than the linear trapezoidal. Yes, that's my concern too. I should give users options for this step in the NCA; it should be easy to implement in bear. » Don’t use my code without improving it! In its current form it will not work with NAs and/or isolated BQLs within the profile. Sure, and thanks for the reminder. » ... » » Otherwise the line should not be connected in that way – it should be connected as the line the arrow points to. » » Why? Sorry, I thought slide #17 was plotted using the lin-up/log-down algorithm. 
Helmut ★★★ Vienna, Austria, 2013-05-23 15:15 @ yjlee168 Posting: # 10630 Views: 13,055 

Dear Yungjin, » » » Otherwise the line should not be connected in that way – it should be connected as the line the arrow points to. » » Why? » Sorry, I thought slide #17 was plotted using the lin-up/log-down algorithm. We are all guilty of not representing, in spaghetti plots, the method we use for calculating the AUC. For simplicity we connect points with straight lines, regardless of whether we use a linear or semilogarithmic scale (all left panels below). If we use the linear trapezoidal, the semilog plot is wrong; what we are actually calculating would be reflected in the lower right. Weird, isn’t it? But then hopefully everybody would realize that the algorithm overestimates AUCs in the distribution/elimination phase. On the other hand, if we use the lin-up/log-down trapezoidal, the linear plot is wrong; we should use the upper right one instead. Have you ever seen something like this before? I haven’t. It would be a tough job to come up with such a setup in commercial software. PS: Very rarely do you find publications where the points in mean plots are not connected by lines at all. PPS: What do you think about adding rug(t, side=3) to the plots in bear?
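[Editor's note] The "upper right" panel Helmut describes can be generated mechanically: on descending stretches, replace the straight chord by the exponential the log-down rule implies. A hedged sketch (Python; the helper name is invented) that returns plottable points along such a segment:

```python
import math

def logdown_segment(t1, c1, t2, c2, n=4):
    """Points on the exponential the log-down rule implicitly draws
    between two descending concentrations (for a linear-scale plot)."""
    pts = []
    for i in range(n + 1):
        t = t1 + (t2 - t1) * i / n
        f = (t - t1) / (t2 - t1)
        pts.append((t, math.exp(math.log(c1) + f * math.log(c2 / c1))))
    return pts

# between C(8)=33.17 and C(16)=7.86 the implied curve sags below the chord:
for t, c in logdown_segment(8, 33.17, 16, 7.86):
    print(f"t={t:4.1f}  C={c:6.2f}")
```

At t = 12 h the segment passes through ≈16.15, below the straight chord's 20.52 – which is precisely the gap between the two trapezoidal rules.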
yjlee168 ★★ Kaohsiung, Taiwan, 2013-05-23 20:47 @ Helmut Posting: # 10632 Views: 12,925 

Dear Helmut, wow! That really surprises me. I know that the errors all result from the trapezoidal approximation (the assumed trapezoid area between two points) when calculating the AUC. Yes, right now we just connect the points with lines to plot the linear/semilog graphs. From your plots, it looks like the lin-up/log-down method is closer to the actual AUC – no matter whether there is missing data or not. Lin-up/log-down is more accurate than linear, isn't it? » If we use the linear trapezoidal, the semilog plot is wrong; what we are actually calculating would be reflected in the lower right. » » » » Weird, isn’t it? But then hopefully everybody would realize that the algorithm overestimates AUCs in the distribution/elimination phase. Weird indeed, but very convincing. Great demo. How do you plot the lower-right graph? Is it possible to overlay the lower-left and the lower-right graphs in one plot? That would make the differences clearer. » Have you ever seen something like this before? I haven’t. It would be a tough job to come up with such a setup in commercial software. Me neither. Why do you think it would be a tough job for commercial software? » PPS: What do you think about adding rug(t, side=3) to the plots in bear? Should be doable in bear. But I don't quite understand why you want to add a rug() plot – we don't have many data points per subject (usually fewer than 50 per period) in a BE/BA study. 
Helmut ★★★ Vienna, Austria, 2013-05-23 21:29 @ yjlee168 Posting: # 10633 Views: 12,974 

Dear Yungjin! » […] From your plots, it looks like the lin-up/log-down method is closer to the actual AUC. In this case it doesn't matter whether there is missing data or not. Lin-up/log-down is more accurate than linear, isn't it? I think so. Whatever PK a drug follows, distribution/elimination is likely first order (well, leaving Michaelis–Menten kinetics aside). » Weird indeed, but very convincing. […] How do you plot the lower-right graph? Is it possible to overlay the lower-left and the lower-right graphs in one plot? Sure. Just interpolate linearly within each decreasing segment. Alternatively to lines 12–17 you could set up a function and use curve(). I was too lazy. t <- c(0,0.5,0.75,1,1.5,2,2.5,3,4,6,8,16,24) If you want to use the code in a linear plot, interpolate logarithmically, i.e., change line 15 to Ck <- c(Ck, exp(log(C[j])+abs((k-t[j])/(t[j+1]-t[j]))*(log(C[j+1])-log(C[j])))) » Me neither. Why do you think it would be a tough job for commercial software? I have no experience with SAS, but in Phoenix/WinNonlin it is tricky indeed. » […] But I don't quite understand why you want to add a rug() plot – we don't have many data points per subject (usually fewer than 50 per period) in a BE/BA study. It was just an idea to display the sampling schedule. Forget it. 
yjlee168 ★★ Kaohsiung, Taiwan, 2013-05-23 22:46 @ Helmut Posting: # 10634 Views: 12,927 

Dear Helmut, thanks again. Now I think I have fully understood the differences between the linear and the lin-up/log-down methods for AUC calculation. Actually, the lin-up/log-down method is still a kind of data-smoothing approach, applied to calculate a more accurate AUC than the linear one. People should not go crazy over the lin-up/log-down method if they can see the differences shown here. Is it just because the line looks too smooth to be true? That may scare people a little; that's why I asked in a previous message whether you used some kind of data smoothing. The key point is that the calculation still uses the raw data to calculate the interpolated data. As we can see, the errors between linear and lin-up/log-down become clearly apparent, especially as the sampling interval between two data points gets larger. Therefore, when there is missing data, the lin-up/log-down method is more accurate than the linear one. » [...] » I have no experience with SAS, but in Phoenix/WinNonlin it is tricky indeed. SAS should be no problem at all – they are professionals, after all. » [...] » It was just an idea to display the sampling schedule. Forget it. I see. However, people could misinterpret the rug as a log-scaled x axis. 
Helmut ★★★ Vienna, Austria, 2013-05-24 01:24 @ yjlee168 Posting: # 10635 Views: 12,872 

Dear Yungjin! » Now I think I have fully understood the differences between the linear and the lin-up/log-down methods for AUC calculation. Hhm. » Actually, the lin-up/log-down method is still a kind of data-smoothing approach. No, it isn’t. » The key point is that the calculation still uses the raw data to calculate the interpolated data. There is no interpolation done at all. Have a look at my example in the previous post. I gave the interpolation/imputation formulas only as a hint of what might happen if we need an intermediate data point. If we want to calculate the area in the interval [t_{1}, t_{2}], no interpolation, smoothing, or whatsoever is done. Both methods are straightforward: linear: AUC_{t1→t2}=0.5(t_{2}–t_{1})(C_{2}+C_{1}) lin/log: AUC_{t1→t2}=(t_{2}–t_{1})(C_{2}–C_{1})/log(C_{2}/C_{1}) » As we can see, the errors between linear and lin-up/log-down become clearly apparent, especially as the sampling interval between two data points gets larger. Right. » Therefore, when there is missing data, the lin-up/log-down method is more accurate than the linear one. Xactly. See also Nathan’s PK/PD Blog. 
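[Editor's note] Why no imputation is needed becomes obvious once you notice that the lin-log interval formula is exactly the integral of the mono-exponential passing through the two measured points. A small check (Python sketch; symbols as in the post):

```python
import math

t1, t2, C1, C2 = 8.0, 16.0, 33.17, 7.86

# lin-log trapezoid over a descending segment
auc_linlog = (t2 - t1) * (C2 - C1) / math.log(C2 / C1)

# exact integral of the mono-exponential through the two points:
# C(t) = C1*exp(-k*(t - t1)) with k = ln(C1/C2)/(t2 - t1)
k = math.log(C1 / C2) / (t2 - t1)
exact = (C1 - C2) / k

print(auc_linlog, exact)   # identical: the rule integrates the exponential exactly
```

So the rule never needs an intermediate value – the exponential through the measured endpoints already defines the area.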
yjlee168 ★★ Kaohsiung, Taiwan, 2013-05-24 09:17 @ Helmut Posting: # 10638 Views: 12,849 

Dear Helmut, » » Actually, the lin-up/log-down method is still a kind of data-smoothing approach. » » No, it isn’t. You are right – it isn't. Sorry about that. » [...] I gave the interpolation/imputation formulas only as a hint of what might happen if we need an intermediate data point. If we want to calculate the area in the interval [t_{1}, t_{2}], no interpolation, smoothing, or whatsoever is done. Both methods are straightforward: I see. Thank you for your detailed explanation once again. » linear: AUC_{t1→t2}=0.5(t_{2}–t_{1})(C_{2}+C_{1}) » lin/log: AUC_{t1→t2}=(t_{2}–t_{1})(C_{2}–C_{1})/log(C_{2}/C_{1}) 
Ken Peh ★ Malaysia, 2013-05-30 19:55 @ Helmut Posting: # 10688 Views: 12,690 

Dear Helmut, » Interpolated values for imputation: » linear: C_{12}^{*}=33.17+(12−8)/(16−8)·(7.86−33.17)=20.52 OK. » lin/log: C_{12}^{*}=exp(log(33.17)+(12−8)/(16−8)·log(7.86/33.17))=16.15 How did you get 16.15? I tried to follow your equation but could not get the answer. » Calculation based on imputed values: » linear/lin imp.: pAUC_{8–16}=0.5(12−8)(33.17+20.52)+0.5(16−12)(20.52+7.86)=164.1 » linear/log imp.: pAUC_{8–16}=0.5(12−8)(33.17+16.15)+0.5(16−12)(16.15+7.86)=146.7 linear/lin imp. and linear/log imp. use the same data set. Why is the output different? » lin/log, log imp.: pAUC_{8–16}=(12−8)(16.15−33.17)/log(16.15/33.17)+(16−12)(7.86−16.15)/log(7.86/16.15)=140.6 » Direct calculation without imputation: » linear: pAUC_{8–16}=0.5(16−8)(33.17+7.86)=164.1 OK. » lin/log: pAUC_{8–16}=(16−8)(7.86−33.17)/log(7.86/33.17)=140.6 Similarly, I could not get the answer of 140.6. How did you calculate it? Please kindly elaborate. Thank you. Ken 
Helmut ★★★ Vienna, Austria, 2013-05-30 22:38 @ Ken Peh Posting: # 10690 Views: 12,881 

Hi Ken! » » lin/log: C_{12}^{*}=exp(log(33.17)+(12−8)/(16−8)·log(7.86/33.17))=16.15 » How did you get 16.15? I tried to follow your equation but could not get the answer. R: round(exp(log(33.17)+abs((12-8)/(16-8))*log(7.86/33.17)), 2) SAS: x=round(exp(log(33.17)+abs((12-8)/(16-8))*log(7.86/33.17)), 2); Excel / OO Calc / Gnumeric: =ROUND(EXP(LN(33.17)+ABS((12-8)/(16-8))*LN(7.86/33.17)), 2) » linear/lin imp. and linear/log imp. use the same data set. Why is the output different? Because I’m using different algorithms. Once more: forget linear interpolation in the descending part of the profile. Have you ever asked yourself why the arithmetic and geometric means of the same data are different? Compare (7.86+33.17)*0.5 with (7.86*33.17)^0.5 – are these values vaguely reminiscent of something you have seen before? » » lin/log: pAUC_{8–16}=(16−8)(7.86−33.17)/log(7.86/33.17)=140.6 » Similarly, I could not get the answer of 140.6. round((16-8)*(7.86-33.17)/log(7.86/33.17), 1) =ROUND((16-8)*(7.86-33.17)/LN(7.86/33.17), 1) » How did you calculate it? See above. And you? 
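[Editor's note] Helmut's hint spelled out: at the exact midpoint of the interval, linear interpolation gives the arithmetic mean of the two concentrations and logarithmic interpolation gives their geometric mean. A quick check (Python sketch):

```python
import math

C1, C2 = 33.17, 7.86

arith = (C1 + C2) * 0.5          # ~20.52: linear interpolation at the midpoint
geom  = (C1 * C2) ** 0.5         # ~16.15: logarithmic interpolation at the midpoint

# the same values via the interpolation formulas with fraction f = 0.5
lin_mid = C1 + 0.5 * (C2 - C1)
log_mid = math.exp(math.log(C1) + 0.5 * math.log(C2 / C1))
print(arith, lin_mid, geom, log_mid)
```

That is exactly why the two imputation variants differ on the same measured data: they are different means of the same pair of numbers.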
Ken Peh ★ Malaysia, 2013-06-03 17:52 @ Helmut Posting: # 10717 Views: 12,510 

Dear Helmut, thank you very much. » » How did you calculate it? I know where my mistake was – the positive and negative signs. My brain jammed up; I need more practice. May I know which software in R you are referring to that can be used to calculate the data? Regards, Ken 
Helmut ★★★ Vienna, Austria, 2013-06-03 18:11 @ Ken Peh Posting: # 10718 Views: 12,667 

Hi Ken! » May I know which software in R you are referring to that can be used to calculate the data? … to calculate what? For imputation use my code above (simply give the time points in the t vector and the concentrations in C). For the lin-up/log-down trapezoidal rule in the calculation of the AUC see the end of my code in this post. I guess it’s easier to wait for the next version of bear.*
Helmut ★★★ Vienna, Austria, 2013-05-25 15:09 @ yjlee168 Posting: # 10656 Views: 12,957 

Dear Yungjin, » […] is there any special consideration when using the lin-up/log-down trapezoidal rule to calculate AUCs if there is more than one absorption peak? I checked one study of bimodal release formulations (imbalanced, n = 15). I evaluated AUC_{t} based on the linear trapezoidal and the lin-up/log-down method. Then I introduced a failure of the centrifuge, resulting in a complete loss of the 12 h samples in the second period. Let’s see:
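[Editor's note] Helmut's results did not survive conversion, but the experiment is easy to mimic. A hedged Python sketch with a synthetic one-compartment profile (all parameters invented): drop the 12 h sample and recompute the AUC by both rules.

```python
import math

def auc(t, c, logdown=False):
    total = 0.0
    for i in range(len(t) - 1):
        dt, c1, c2 = t[i+1] - t[i], c[i], c[i+1]
        if logdown and 0 < c2 < c1:
            total += dt * (c2 - c1) / math.log(c2 / c1)   # log trapezoid
        else:
            total += dt * 0.5 * (c1 + c2)                 # linear trapezoid
    return total

t_full = [0, 0.5, 1, 1.5, 2, 3, 4, 6, 8, 12, 16, 24]
c_full = [100 * (math.exp(-0.15*x) - math.exp(-1.2*x)) for x in t_full]

for label, lost in (("full data", None), ("12 h sample lost", 12)):
    t = [x for x in t_full if x != lost]
    c = [y for x, y in zip(t_full, c_full) if x != lost]
    print(f"{label:16s} linear={auc(t, c):7.2f}  lin-up/log-down={auc(t, c, True):7.2f}")
```

The linear AUC jumps when the 12 h sample disappears, while the lin-up/log-down AUC barely moves – the pattern the thread predicts for missings in the elimination phase.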
yjlee168 ★★ Kaohsiung, Taiwan, 2013-05-25 19:45 @ Helmut Posting: # 10658 Views: 12,830 

Dear Helmut, I think the differences in the elimination phase between linear and log-down may result from the degree of concavity (C1−C2) within a known time interval (t2−t1). I also did a simple test with two different scenarios: ** if t2−t1 = 12 h, C1 = 4.1, C2 = 1.1 (the curve is more concave between t1 and t2) Therefore, you're right about the log-down method: log-down can safely, accurately, and reliably be set as the default method for AUC calculation. 
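[Editor's note] Yungjin's scenario is quick to verify. A small sketch (Python) for the wide, strongly declining interval (t2−t1 = 12 h, C1 = 4.1, C2 = 1.1):

```python
import math

dt, C1, C2 = 12.0, 4.1, 1.1

linear  = 0.5 * dt * (C1 + C2)                 # chord lies above the curve
logdown = dt * (C2 - C1) / math.log(C2 / C1)   # exact for a first-order decline

print(linear, logdown, linear - logdown)
```

The gap between the two rules (here roughly 31.2 vs 27.4) grows with both the interval width and the depth of the drop, which is exactly the concavity argument.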
Ohlbe ★★★ France, 2013-05-20 21:45 @ gracehung Posting: # 10609 Views: 13,352 

Dear Gracehung, » In a BE study, one sample’s actual sampling time was mis-recorded. Nevertheless, in accordance with the protocol, this sample was analyzed by our analytical laboratory. I understand your question differently from Helmut. My understanding is that the sample was collected, but the phlebotomist failed to record the actual blood-sampling time. In such a case I would just file a protocol deviation and retrain the staff, full stop. Even if there were a sampling-time deviation, the influence on the AUC 90 % CI is so limited that it would only change the outcome of the trial in a very borderline situation. — Regards, Ohlbe 
Helmut ★★★ Vienna, Austria, 2013-05-21 13:53 @ Ohlbe Posting: # 10610 Views: 13,096 

Hi Ohlbe, » I understand your question differently from Helmut. Though I understood it correctly (see the start of my post), I am notorious for answering unasked questions as well. 
gracehung ☆ Taiwan, 2013-05-23 01:06 @ Helmut Posting: # 10624 Views: 12,982 

Dear all: thanks for your replies! The dilemma is that we have the concentration data but not the actual sampling time. If we treat it as missing data and don't use it in the bioequivalence evaluation, would the assessor think we are hiding data? Should we include the data in the PK analysis and use the scheduled time as the "best guess"? One paper suggests using log-linear interpolation (in the elimination phase) to minimize the bias (as in your reply), but someone said the assessor will think we are manipulating data. Which would be the best choice? Grace 
Helmut ★★★ Vienna, Austria, 2013-05-23 01:35 @ gracehung Posting: # 10625 Views: 12,932 

Hi Grace! » If we treat it as missing data and don't use it in the bioequivalence evaluation, would the assessor think we are hiding data? Dunno. But if you treat the value as missing – depending on the algorithm – you might get a substantially biased AUC (see my conversation with Yungjin above). » Should we include the data in the PK analysis and use the scheduled time as the "best guess"? Definitely (as Ohlbe suggested). I once had to assess deficiencies where in two studies the scheduled time points (instead of the actual ones) were used. The impact on the AUC comparison was in the second decimal place – and you have only one (!) value in the entire study… BTW, scheduled time points were my standard method in hundreds (!) of studies. So what? » One paper suggests using log-linear interpolation (in the elimination phase) to minimize the bias. Yes, but only if the value is missing. You know the value – only its time point is uncertain. » But someone said the assessor will think we are manipulating data. I would not use an estimate. Use the measured value at the scheduled time point, report the deviation from the protocol, don’t panic, and sleep well. Another idea – only if you really have nothing better to do: most protocols require a justification of a time deviation to be recorded in the CRF only if a certain interval is exceeded (e.g., ±2 min in the very early phase, ±5 min later on, and ±15 min for t ≥ 24 h). Let’s assume only the actual sampling time was not recorded, but the sample was still drawn within the allowed interval (otherwise we have two errors instead of one). Set up two data sets: one assuming the sample was taken at the maximum allowed negative deviation (too early) and the other with the maximum positive deviation (too late). Run the BE evaluation on both and assess the impact on the outcome. 
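[Editor's note] Helmut's two-data-set sensitivity check can be sketched quickly. A hedged Python illustration with an entirely invented profile: shift the unrecorded sample's time to both edges of an assumed ±15 min window and watch the AUC.

```python
def auc_linear(t, c):
    """Plain linear trapezoidal AUC (sufficient for a sensitivity check)."""
    return sum(0.5 * (t[i+1] - t[i]) * (c[i] + c[i+1]) for i in range(len(t) - 1))

t_sched = [0, 1, 2, 4, 8, 12, 16, 24]          # scheduled times (h), invented
conc    = [0, 20, 35, 30, 18, 11, 6.5, 2.4]    # invented concentrations

base = auc_linear(t_sched, conc)
for dev in (-0.25, 0.25):                      # assumed max allowed deviation: 15 min
    t = [x + dev if x == 12 else x for x in t_sched]
    a = auc_linear(t, conc)
    print(f"12 h sample at {12 + dev:g} h: AUC {a:.2f} ({100*(a - base)/base:+.2f} %)")
```

With this made-up profile the AUC moves by well under one percent in either direction, which is the point of the exercise: a single uncertain time stamp can hardly flip a BE conclusion except in a borderline study.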