yicaoting ★ NanKing, China, 2011-11-06 14:03 (4916 d 10:28 ago) Posting: # 7616 Views: 12,850 |
|
Dear all,

When validating WNL's BE (v 5.1.1) results against my manual calculation, I find that WNL's 90% CI of the ratio lacks precision. My dataset is Chow and Liu's famous data: Design and Analysis of Bioavailability and Bioequivalence Studies, Third Edition, page 71, with Ln()-transformation selected in WNL's BE wizard. For comparison I list some critical results.

First finding

WNL 5.1.1: WNL's 90% CI of LSMean(R-T): -0.066980983221424, 0.124285167688542

Here I find that the resulting 90% CI of the GMR lacks precision. According to the Phoenix WinNonlin 6.0 guide, page 338 (CI_Lower = 100 · exp(lower)), my manual result is:

Manual 90% CI of GeoMean Ratio(%) (T/R), based on WNL's 90% CI of LSMean(R-T): 88.3127965410502, 106.9275143812610

The precision is only at the level of 0.01 (%). Incredible? Interesting? Strange? Can anyone give me the result of PHX WNL 6.0 or a later version? Dear HS or ElMaestro or other warm-hearted members?

Second finding

Moreover, I find that WNL's 90% CI of LSMean(R-T), -0.066980983221424 to 0.124285167688542, also lacks precision.

WNL 5.1.1: WNL's Diff (T-R) = -0.028652092233559
Manual: Diff (T-R) = -0.028652092233559

It can be seen that this small bias is caused by the difference between WNL's Tinv(0.1,22) and my manual Tinv(0.1,22). I am not sure that my manual Tinv(0.1,22) = 1.71714437438148 has enough precision, so I tried other software to get more Tinv(0.1,22) values. I know Excel 2003's result is a little poor, but I don't know which is more reliable, so I need dear HS's help to give the result of R's Tinv(0.1,22).

Conclusion

Even if we accept WNL's Tinv(0.1,22), and thus WNL's 90% CI of Diff (R-T) (that is, even if we accept the low precision of my second finding), I really cannot accept the poor precision of my first finding. Or is my manual calculation the one with poor precision? I need more of you to validate these results; my best thanks to you. |
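The back-transformation described above can be checked independently in a few lines. The sketch below is an editor's illustration (Python instead of the thread's R/Excel, variable names are mine); note that the reported CI is for LSMean(R-T) on the ln scale, so the signs must be flipped to obtain the T/R ratio.

```python
import math

# 90% CI of LSMean(R-T) on the ln scale, as reported by WNL 5.1.1
lower_rt = -0.066980983221424
upper_rt = 0.124285167688542

# T/R = exp(-(R-T)): flipping the signs swaps the bounds,
# then CI_Lower = 100 * exp(lower) as in the WinNonlin 6.0 guide
ci_lower = 100.0 * math.exp(-upper_rt)
ci_upper = 100.0 * math.exp(-lower_rt)

print(ci_lower, ci_upper)  # ≈ 88.3127965 and 106.9275144 (%)
```

This reproduces the manual result quoted above, which is exactly the point: the conversion itself is trivial, yet WNL's own GMR output differs in the fifth significant figure.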
ElMaestro ★★★ Denmark, 2011-11-06 14:46 (4916 d 09:46 ago) @ yicaoting Posting: # 7617 Views: 11,489 |
|
Dear yicaoting, ❝ The precision is only at the level of 0.01 (%). Incredible? Interesting? Strange? Credible. Somewhat dull. Perhaps 0.001% strange. The fitting engines must stop at some point when the objective function cannot be improved much further. The software internally keeps track of the improvement at each iteration and sets some criteria for stopping. It can for example be when the Log likelihood (or SS) cannot be improved by more than 0.0001 arbitrary units or something like that. How to set that limit is actually a little tricky, because if you set it too low you may screw up some other important aspects of the iterations (for example: small numbers and matrices are sometimes a toxic cocktail for numerical reasons) and if you set it too high you just get an apparent lack of precision. It all depends on how the math is actually implemented, and we know nuffin about the source code really. There are many different ways to optimise; some prefer the brutally efficient Nelder-Mead algo, others the softcore Newtonian-based meffuds like Marquardt-Levenberg. Entire books are written about this. Of course any difference in precision can in theory lead to erroneous conclusions, but in your case it would be a very rare event. Before jumping to any conclusions perhaps you should compare the precision in terms of the objective function (SS or logLik). At least that might give an indicator of which program is actually doing the best job. But be sure to compare like for like (SS against SS, logLik against logLik, and bear in mind that WNL by default fits by REML while SAS etc. does it through SS). ❝ Can anyone give me the result of PHX WNL 6.0 or a later version? Dear HS or ElMaestro or other warm-hearted members? I don't have that software, sorry. But you're right, I am warm-hearted, except when I tie my men to the mast and flog them. By the way, with R 2.10.1 I get: format(qt(0.95,22), digits=20) — Pass or fail! ElMaestro |
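For cross-checking qt(0.95, 22) (i.e. the two-sided Tinv(0.1, 22)) without R at hand, the quantile can be reproduced from first principles. The sketch below is an editor's validation aid, not production code (Python, all function names are mine): it integrates the Student-t density with Simpson's rule and inverts the CDF by bisection.

```python
import math

def t_pdf(x, df):
    """Student's t density with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1.0 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, n=4000):
    """CDF for x >= 0: Simpson's rule on [0, x]; 0.5 is the mass below 0."""
    h = x / n
    s = t_pdf(0.0, df) + t_pdf(x, df)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * t_pdf(i * h, df)
    return 0.5 + s * h / 3.0

def t_quantile(p, df):
    """Invert the CDF by bisection; valid for p in (0.5, 1)."""
    lo, hi = 0.0, 50.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(t_quantile(0.95, 22))  # ≈ 1.7171443744
```

With 4000 Simpson panels the integration error is far below the digits in dispute, so this is a usable independent referee between Excel's TINV and WNL's internal value.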
yicaoting ★ NanKing, China, 2011-11-06 15:20 (4916 d 09:11 ago) @ ElMaestro Posting: # 7619 Views: 11,137 |
|
Dear warm-hearted ElMaestro, Thank you for your quick response. ❝ The fitting engines must stop at some point when the objective function cannot be improved much further. The software internally keeps track of the improvement at each iteration and sets some criteria for stopping. It can for example be when the Log likelihood (or SS) cannot be improved by more than 0.0001 arbitrary units or something like that. ❝ ... There are many different ways to optimise; some prefer the brutally efficient Nelder-Mead algo, others the softcore Newtonian-based meffuds like Marquardt-Levenberg. Entire books are written about this. But as everyone knows (as WNL's user guide points out, Phoenix WinNonlin 6.0 guide page 338), to convert the 90% CI of LSMean(R-T) to the 90% CI of GeoMean Ratio(%) (T/R) we only need a simple calculation like CI_Lower = 100 · exp(lower), CI_Upper = 100 · exp(upper). No iteration is needed, and no optimization algorithm is needed. Or we can conclude that WNL uses a strange (and of course low-precision) way to calculate the 90% CI of GeoMean Ratio(%) (T/R), rather than the way described in the user guide. ❝ By the way, with R 2.10.1 I get: ❝ ❝ Thank you for your kind help. My so-called second finding is clear now: the key point is WNL's low-precision Tinv(); of course I have to accept it. |
ElMaestro ★★★ Denmark, 2011-11-06 15:32 (4916 d 08:59 ago) @ yicaoting Posting: # 7620 Views: 11,124 |
|
Dear yicaoting, ❝ But as everyone knows except me ❝ (as WNL's user guide points out, Phoenix WinNonlin 6.0 guide page 338), to convert ❝ the 90% CI of LSMean(R-T) to the 90% CI of GeoMean Ratio(%) (T/R), we only need a simple calculation like ❝ CI_Lower = 100 · exp(lower) ❝ CI_Upper = 100 · exp(upper) ❝ No iteration is needed, and no optimization algorithm is needed. All software that I know of, with the exception of man-made Excel sheets, gets the LSMeans from fitting. They are the model effects extracted from the b vector after optimisation. — Pass or fail! ElMaestro |
yicaoting ★ NanKing, China, 2011-11-06 15:59 (4916 d 08:33 ago) @ ElMaestro Posting: # 7622 Views: 11,106 |
|
Dear ElMaestro, ❝ All software that I know of, with the exception of man-made Excel sheets, gets the LSMeans from fitting. They are the model effects extracted from the b vector after optimisation. Yes, you are right on this point. But, as mentioned in my first post, I used WNL's own 90% CI of LSMean(R-T), -0.066980983221424 to 0.124285167688542 (not a 90% CI of LSMean(R-T) calculated by other software), to calculate the 90% CI of GeoMean Ratio(%) (T/R), and the result is different from WNL's. The LSMean is obtained from fitting or iteration, so some bias is acceptable. But WNL does a simple anti-log transform with a bias at the level of 0.01 (%); that is unacceptable, unless WNL uses another "strange method" for the anti-log transformation and we have to accept that "strange method". |
Helmut ★★★ Vienna, Austria, 2011-11-06 16:39 (4916 d 07:53 ago) @ yicaoting Posting: # 7625 Views: 11,088 |
|
Dear yicaoting! ❝ The LSMean is obtained from fitting or iteration, so some bias is acceptable. ❝ But WNL does a simple anti-log transform with a bias at the level of 0.01 (%); that is unacceptable, unless WNL uses another "strange method" for the anti-log transformation and we have to accept that "strange method". Please read my previous post! The imprecision in BE comes mainly (!) from transferring LME's results. If you are not happy with the workaround (remaining imprecision in the 7th+ decimal due to the t-value used), you can calculate the CI based on the residual variance (a series of transformations in PHX, or directly after detaching the worksheet and inserting a formula in WNL). But: I wouldn't recommend that – it requires manual entry of a t-value from somewhere else (Gnumeric, R). Anyhow we would get: CI -0.124285169137953 0.0669809846708375 — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
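As a numeric illustration of that workaround (an editor's sketch using only values quoted in this thread): the CI is diff ± t·SE, so the standard error can be recovered from the reported half-width and the interval rebuilt with a t-value of any desired precision.

```python
# 90% CI of LSMean(R-T) and the manually computed Tinv(0.1, 22), as quoted above
ci_lower, ci_upper = -0.066980983221424, 0.124285167688542
t_val = 1.71714437438148

diff = (ci_lower + ci_upper) / 2          # centre of the CI = LSMean(R-T)
se = (ci_upper - ci_lower) / (2 * t_val)  # half-width divided by the t-value

# Rebuilding diff -/+ t*SE reproduces the interval; negating gives Diff (T-R)
print(diff, se)  # diff ≈ 0.0286520922 for (R-T), i.e. -0.0286520922 for (T-R)
print(diff - t_val * se, diff + t_val * se)
```

The centre matches the Diff (T-R) of -0.028652092233559 quoted in the first post (with the sign flipped), which confirms that the imprecision sits in the bounds, not in the point estimate.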
Helmut ★★★ Vienna, Austria, 2011-11-06 15:46 (4916 d 08:45 ago) @ yicaoting Posting: # 7621 Views: 11,165 |
|
Dear yicaoting! ❝ But as everyone knows (as WNL's user guide points out, Phoenix WinNonlin 6.0 guide page 338), to convert ❝ the 90% CI of LSMean(R-T) to the 90% CI of GeoMean Ratio(%) (T/R), we only need a simple calculation like ❝ CI_Lower = 100 · exp(lower) ❝ CI_Upper = 100 · exp(upper) ❝ No iteration is needed, and no optimization algorithm is needed. See my other post. No optimization here, but lower and upper are obtained by REML (as ElMaestro pointed out). Default settings in PHX/WNL are maximum 50 iterations, estimability tolerance 10⁻⁵, singularity tolerance 10⁻¹⁰, convergence criterion 10⁻¹⁰. BTW, with our dataset convergence is reached in the first iteration [-2×REML log(likelihood) 4.8289659]. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
yicaoting ★ NanKing, China, 2011-11-06 16:25 (4916 d 08:07 ago) @ Helmut Posting: # 7623 Views: 11,557 |
|
Dear HS and ElMaestro! ❝ No optimization here, but Sorry for my ignorance of WNL's calculation of lower and upper using REML. How hard WNL works just to do such a simple anti-log calculation! Let's take a rest: in Chinese there is a proverb, "Why use an ox cleaver to kill a chicken?" It means that a butcher's knife used for killing cows is not necessary to kill a chicken; it has become a metaphor expressing that a small matter doesn't require a big effort. I believe there is a counterpart in English. Thank you, HS and ElMaestro, for your patient explanations. |
Helmut ★★★ Vienna, Austria, 2011-11-06 16:47 (4916 d 07:45 ago) @ yicaoting Posting: # 7626 Views: 11,278 |
|
Dear yicaoting! ❝ In Chinese, we have a proverb, "Why use an ox cleaver to kill a chicken?" ❝ I believe that there is a counterpart in English. In German it's "Mit Kanonen auf Spatzen schießen" (my private translation: to shoot sparrows with cannons). Some English versions: "To take a sledgehammer to crack a nut", "To break a butterfly on a wheel". Sparrows… Mah Jongg (Mandarin 麻將) is my favorite game. Wasn't it named after the sparrow (in Cantonese 麻雀)? — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
Helmut ★★★ Vienna, Austria, 2011-11-06 15:14 (4916 d 09:18 ago) @ yicaoting Posting: # 7618 Views: 11,753 |
|
Dear yicaoting! ❝ Can anyone give me the result of PHX WNL 6.0 or a later version? Phoenix/WinNonlin 6.2.1.51 (Bioequivalence object) Average Bioequivalence:
If we change signs (lexical order in LSM!) and antilog we get:
88.3127964130425 – 106.927514536250), agreeing with your manual result to 8–9 significant figures, and with your antilogged LSM-CI exactly.*) Only a minor part of the discrepancy is caused by the lacking precision of the t-value. According to Pharsight, the (correct) output of the LME engine is taken over to the BE wizard. Pharsight suspects that the results are not used in full precision but are internally rounded (!!)… So either use the workaround I suggested in the whitepaper or abandon the BE wizard entirely and set up the analysis in LME, backtransforming afterwards. ❝ So I need dear HS's help to give the result of R's Tinv(0.1,22). Here you are (R 2.14.0):
*) Proving that the bug is consistent between WNL 5.1.1 and PHX 6.2.1.51… — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
yicaoting ★ NanKing, China, 2011-11-06 16:38 (4916 d 07:53 ago) @ Helmut Posting: # 7624 Views: 11,086 |
|
❝ So either use the workaround I suggested in the whitepaper or abandon the BE wizard entirely and set up the analysis in LME, backtransforming afterwards. Great "white paper"! Thanks for your paper; I will be careful with WNL's BE wizard. Your suggestion is definitely valuable if one wants high-precision results (at least for the 90% CI of the ratio). |
Helmut ★★★ Vienna, Austria, 2011-11-06 17:53 (4916 d 06:38 ago) @ yicaoting Posting: # 7627 Views: 11,115 |
|
Dear yicaoting! ❝ I will be careful with WNL's BE wizard. Consider upgrading to Phoenix. It's free of charge, and support for 'classical' WinNonlin will end soon. In PHX I have set up a workflow running both the BE wizard and LME. Results are compared, and if the difference in both CLs is ≤0.01 % (the magic rounding requirement according to FDA and EMA: 80.00 % ≤ CI ≤ 125.00 %) I use BE's results. Although fancy stuff is included in WNL (M$-Word export) and PHX (+HTML tables), I never use them. It takes ages to set them up – much easier to import the ASCII text into a statistical report. ❝ Your suggestion is definitely valuable if one wants high-precision results (at least for the 90% CI of the ratio). If you really bother (and after this post I'm sure you don't) and have a lot of money to spare, you can obtain a Phoenix Connect license. Then it would be possible to get the t-value from an R script. There's another point on my wishlist: Have you ever tried to reproduce WNL's reported power in any other piece of software? — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
yicaoting ★ NanKing, China, 2011-11-06 18:12 (4916 d 06:19 ago) @ Helmut Posting: # 7628 Views: 11,210 |
|
❝ There’s another point on my wishlist: Have you ever tried to reproduce WNL’s reported power in any other piece of software? Yes, I have tried. My results are the same as WNL's at the level of 0.0001 for power, both for untransformed and Ln()-transformed data. It's now midnight in China and tomorrow is Monday, so I must sleep now. I will continue the power story of WNL with you tomorrow or later. Do you have any suggestions, dear HS? |
Helmut ★★★ Vienna, Austria, 2011-11-06 19:29 (4916 d 05:03 ago) @ yicaoting Posting: # 7629 Views: 11,625 |
|
Dear yicaoting! ❝ It's now midnight in China and tomorrow is Monday, so I must sleep now. ❝ My results are the same as WNL's at the level of 0.0001 for power, both for untransformed and Ln()-transformed data. Hhm. Going with Chow's/Liu's data set (PHX 6.2, ln-transformed):
How did Pharsight derive this value from the given…
Using the famous package PowerTOST 0.8-7 for R (α 0.05, acceptance range 0.80–1.25, CV 0.1947357354907, n 24, 2×2 design, estimation exact or approximate) I got:
Since power even at PE=1 is lower than PHX’s results, the additional lines in PowerTOST are just for fun. Are they using a larger α of 0.10‽
Ah – this goes in the right (= wrong!!) direction. I had quite a lengthy discussion with Pharsight 1½ years ago, and it is clear that PHX/WNL doesn't contain an algo for the non-central t-distribution. So let's try PowerTOST's hidden function for the shifted central t-distribution instead (still α 0.10! – .approx2.power.TOST(alpha = 0.1, ltheta1 = log(0.8), ltheta2 = log(1.25), diffm = log(PE), se = CV2se(0.194735735490703), n = 24, df = 22, bk = 2)):
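For readers without PowerTOST at hand, the shape of the calculation can be sketched with a plain normal approximation (an editor's sketch, Python stdlib only, all function names are mine). This is neither PHX's shifted central t nor PowerTOST's exact non-central t, so it will not reproduce either value to many decimals; it only shows the structure of post-hoc power for TOST.

```python
import math
from statistics import NormalDist

def cv_to_se(cv):
    """ln-scale SD from a CV (the same mapping as PowerTOST's CV2se)."""
    return math.sqrt(math.log(cv * cv + 1.0))

def power_tost_normal(alpha, pe, cv, n):
    """Normal-approximation power of TOST for a 2x2 crossover,
    acceptance range 0.80-1.25, n subjects in total."""
    z = NormalDist().inv_cdf(1.0 - alpha)
    se_diff = cv_to_se(cv) * math.sqrt(2.0 / n)  # SE of the ln-difference
    d = math.log(pe)
    power = (NormalDist().cdf((math.log(1.25) - d) / se_diff - z)
             + NormalDist().cdf((d - math.log(0.8)) / se_diff - z) - 1.0)
    return max(0.0, power)

# Values quoted in this thread: CV 0.194735735490703, n 24, PE = exp(Diff(T-R))
pe = math.exp(-0.028652092233559)
print(power_tost_normal(0.05, pe, 0.194735735490703, 24))
```

Because the normal approximation ignores the uncertainty of the variance estimate, it overstates power relative to the exact non-central t method; replacing the standard normal with a t(22) distribution is exactly where the shifted-central-t approximation above comes in.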
❝ I will continue the power story of WNL with you tomorrow or later. I'm leaving tomorrow for Berlin; eager to meet D. Labes and the co-author of the forum's scripts, Auge. Not sure whether I'll have free time to work on it. ❝ Do you have any suggestions? If you understand PHX/WNL's manual, please let me know. Enlighten me how to derive 0.9839865. For interested readers: the respective section of WinNonlin's 4.1 manual is here (it hasn't changed in later versions). P.S.: Of course I'm not interested in post-hoc power at all. But it would be nice to set up the decision tree of a two-stage design within PHX/WNL. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |