yicaoting
NanKing, China,
2011-11-06 14:03
Posting: # 7616

 Does WinNonlin lack precision? [Software]

Dear all,

While validating WNL's BE (v 5.1.1) results against my own manual calculations, I found that the 90% CI of the ratio reported by WNL's BE module lacks precision.

My dataset is Chow and Liu's famous data:
Design and Analysis of Bioavailability and Bioequivalence Studies, Third Edition, Page 71.

Select Ln()-transformation in WNL's BE wizard.

For comparison, I list some critical results:

First finding

WNL 5.1.1:
WNL's 90% CI of LSMean(R-T)   -0.066980983221424   0.124285167688542
WNL's DiffSE                   0.055693090418743    
WNL's 90% CI of GeoMean Ratio(%) (T/R)  88.3108418878651   106.9298810918720


Here I find that the above 90% CI of the GMR lacks precision.
According to the Phoenix WinNonlin 6.0 guide, page 338:
CI_Lower = 100 · exp(lower)
CI_Upper = 100 · exp(upper)


my manual result is:
Manual 90% CI of GeoMean Ratio(%) (T/R), based on WNL's 90% CI of LSMean(R-T):
88.3127965410502 106.9275143812610
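For reference, this back-transformation is trivial to check in any environment. A sketch in Python (variable names are mine; the numbers are the WNL output quoted above):

```python
import math

# 90% CI of LSMean(R-T) as reported by WNL 5.1.1 (values from this thread)
lower_RT = -0.066980983221424
upper_RT =  0.124285167688542

# T/R runs opposite to R-T, so change signs before the antilog:
# CI_Lower = 100 * exp(lower), CI_Upper = 100 * exp(upper)
ci_lower = 100 * math.exp(-upper_RT)
ci_upper = 100 * math.exp(-lower_RT)
print(ci_lower, ci_upper)   # ≈ 88.3127965410502 and 106.927514381261
```

Both limits agree with the manual values above, not with the BE wizard's 88.3108… / 106.9299….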

Precision is only at the level of 0.01 (%). Incredible? Interesting? Strange?

Can anyone give me the result of PHX/WNL 6.0 or a later version? Dear HS, ElMaestro, or other warm-hearted members?

Second finding

Moreover, I find that WNL's 90% CI of LSMean(R-T), -0.066980983221424 to 0.124285167688542, also lacks precision.

WNL 5.1.1
WNL's Diff (T-R)              -0.028652092233559    
WNL's DiffSE                   0.055693090418743    
WNL's Tinv(0.1,22)             1.71714434835526
   
WNL's 90%CI of Diff (R-T)     -0.066980983221424     0.124285167688542


my manual result is:
Manual Diff(T-R)              -0.028652092233559    
Manual DiffSE                  0.055693090418743    
Manual Tinv(0.1,22)            1.71714437438148    
Manual 90%CI of Diff (T-R)    -0.124285169138022     0.066980984670905


It can be seen that this small bias is caused by the difference between WNL's Tinv(0.1,22) and my manual Tinv(0.1,22). Here I am not sure that my manual Tinv(0.1,22) = 1.71714437438148 is of sufficient precision, so I tried other software to get more Tinv values:

   Software                Tinv(0.1,22)
WNL 5.1.1               1.71714434835526
Excel 2003 and 2007     1.71714433543983
Open Office 3.3.0       1.71714437438025
Gnumeric 1.10.16        1.71714437438148

I know Excel 2003's result is a little poor, but I don't know which one is more reliable, so I need dear HS's help to provide the result of R's Tinv(0.1,22).
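Such a quantile can also be cross-checked from first principles, independent of any vendor. A naive sketch (mine, not any of the above programs' actual algorithm): integrate the Student-t density with Simpson's rule and invert the CDF by bisection.

```python
import math

def t_quantile(p, df, n=4000, tol=1e-13):
    """Student-t quantile (for p > 0.5) via Simpson integration of the
    density and bisection on the CDF -- a textbook cross-check only."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))

    def pdf(x):
        return c * (1 + x * x / df) ** (-(df + 1) / 2)

    def cdf(x):                 # for x >= 0; Simpson's rule on [0, x]
        if x == 0.0:
            return 0.5
        h = x / n
        s = pdf(0.0) + pdf(x)
        for i in range(1, n):
            s += pdf(i * h) * (4 if i % 2 else 2)
        return 0.5 + s * h / 3

    lo, hi = 0.0, 10.0          # the 95th percentile certainly lies in here
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

q = t_quantile(0.95, 22)        # Tinv(0.1, 22) in the two-sided notation above
print(q)                        # agrees with Gnumeric: 1.7171443743802...
```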

Conclusion

Even if we accept WNL's Tinv(0.1,22), and thus WNL's 90% CI of Diff (R-T) (that is, if we accept the low precision of my second finding),
I really cannot accept the poor precision of my first finding.

Or is my manual calculation the one with poor precision?
I would appreciate it if others could validate these results; my best thanks to you.
ElMaestro
Denmark,
2011-11-06 14:46
@ yicaoting
Posting: # 7617

 Does WinNonlin lack precision?

Dear yicaoting,

❝ Precision is only at the level of 0.01 (%). Incredible? Interesting? Strange?


Credible. Somewhat dull. Perhaps 0.001% strange.

The fitting engines must stop at some point when the objective function cannot be improved much further. The software internally keeps track of the improvement at each iteration and sets some criteria for stopping. It can for example be when the Log likelihood (or SS) cannot be improved by more than 0.0001 arbitrary units or something like that.
How to set that limit is actually a little tricky, because if you set it too low you may screw up some other important aspects of the iterations (for example: small numbers and matrices are sometimes a toxic cocktail for numerical reasons) and if you set it too high you just get an apparent lack of precision.
It all depends on how the math is actually implemented, and we know nuffin about the source code really. There are many different ways to optimise; some prefer the brutally efficient Nelder-Mead algo, others the softcore Newtonian-based meffuds like Marquardt-Levenberg. Entire books have been written about this.
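The effect of such a stopping rule is easy to demonstrate with a toy fit (entirely my own construction, nothing to do with WNL's actual engine): the same iteration stopped by a loose and by a tight objective-function tolerance reports a different number of correct digits.

```python
def fit(tol):
    """Minimize f(x) = (x - 2)**2 + 1 by fixed-step gradient descent,
    stopping when the objective improves by less than `tol`."""
    x = 0.0
    f = (x - 2) ** 2 + 1
    while True:
        x -= 0.4 * 2 * (x - 2)          # gradient step
        f_new = (x - 2) ** 2 + 1
        if f - f_new < tol:
            return x
        f = f_new

loose = fit(1e-4)    # stops early: x correct to only ~3 decimals
tight = fit(1e-12)   # runs longer: x correct to ~7 decimals
print(loose, tight)  # both approach the true minimizer x = 2
```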

Of course any difference in precision can in theory lead to erroneous conclusions, but in your case that would be a very rare event. Before jumping to any conclusions perhaps you should compare the precision in terms of the objective function (SS or logLik). At least that might give an indicator of which program is actually doing the best job. But be sure to compare like for like (SS against SS, logLik against logLik; bear in mind that WNL by default fits by REML while SAS etc. does it through SS).

❝ Can anyone give me the result of PHX/WNL 6.0 or a later version? Dear HS, ElMaestro, or other warm-hearted members?


I don't have that software, sorry. But you're right I am warm-hearted, except when I tie my men to the mast and flog them.

By the way, with R 2.10.1 I get:
format(qt(0.95,22), digits=20)
[1] "1.717144374380243"

Pass or fail!
ElMaestro
yicaoting
NanKing, China,
2011-11-06 15:20
@ ElMaestro
Posting: # 7619

 Does WinNonlin lack precision?

Dear warm-hearted ElMaestro,
Thank you for your quick response.

❝ The fitting engines must stop at some point when the objective function cannot be improved much further. The software internally keeps track of the improvement at each iteration and sets some criteria for stopping. It can for example be when the Log likelihood (or SS) cannot be improved by more than 0.0001 arbitrary units or something like that.

❝ ... There are many different ways to optimise; some prefer the brutally efficient Nelder-Mead algo, others the softcore Newtonian-based meffuds like Marquardt-Levenberg. Entire books are written about this.


But, as you and everyone know (as WNL's user guide points out: Phoenix WinNonlin 6.0 guide, page 338), to convert the
90% CI of LSMean(R-T) to the 90% CI of the GeoMean Ratio(%) (T/R), we only need a simple calculation like
CI_Lower = 100 · exp(lower)
CI_Upper = 100 · exp(upper)

No iteration is needed, and no optimization algorithm is needed.

Or we can conclude that WNL uses a strange (and of course low-precision) way to calculate the 90% CI of the GeoMean Ratio(%) (T/R), and not the way described in the User Guide.

❝ By the way, with R 2.10.1 I get:

format(qt(0.95,22), digits=20)

[1] "1.717144374380243"


Thank you for your kind help. My so-called second finding is clear now: the key point is the low precision of WNL's Tinv(); of course I have to accept it.
ElMaestro
Denmark,
2011-11-06 15:32
@ yicaoting
Posting: # 7620

 Does WinNonlin lack precision?

Dear yicaoting,

❝ But, as you and everyone know


except me :-D

❝ (as WNL's user guide points out: Phoenix WinNonlin 6.0 guide, page 338), to convert the

90% CI of LSMean(R-T) to 90% CI of GeoMean Ratio(%) (T/R), we only need a simple calculation like

CI_Lower = 100 · exp(lower)

CI_Upper = 100 · exp(upper)

❝ No iteration is needed, and no optimization algorithm is needed.


All software that I know of, with the exception of man-made Excel sheets, gets the LSMeans from fitting. They are the model effects extracted from the b vector after optimisation.

yicaoting
NanKing, China,
2011-11-06 15:59
@ ElMaestro
Posting: # 7622

 Does WinNonlin lack precision?

Dear ElMaestro,

❝ All software that I know of, with the exception of man-made Excel sheets, gets the LSMeans from fitting. They are the model effects extracted from the b vector after optimisation.


Yes, you are right on this point.

But, as mentioned in my first post, I used
WNL's own 90% CI of LSMean(R-T), -0.066980983221424 to 0.124285167688542, not a 90% CI of LSMean(R-T) calculated by other software, to compute the 90% CI of the GeoMean Ratio(%) (T/R), and the result differs from WNL's.

LSMean is obtained from fitting or iteration, so some bias is acceptable.
But WNL does a simple anti-log transform with a bias at the level of 0.01 (%); that is unacceptable, unless WNL uses another "strange method" for the anti-log transformation and we have to accept that "strange method".
Helmut
Vienna, Austria,
2011-11-06 16:39
@ yicaoting
Posting: # 7625

 EOD?

Dear yicaoting!

❝ LSMean is obtained from fitting or iteration, so some bias is acceptable.

❝ But WNL does a simple anti-log transform with a bias at the level of 0.01 (%); that is unacceptable, unless WNL uses another "strange method" for the anti-log transformation and we have to accept that "strange method".


Please read my previous post! The imprecision in BE comes mainly (!) from transferring LME’s results. If you are not happy with the workaround (remaining imprecision in the 7th+ decimal due to the t-value used), you can calculate the CI based on the residual variance (a series of transformations in PHX or directly after detaching the worksheet and inserting a formula in WNL). But: I wouldn’t recommend that – requires the manual entry of a t-value from somewhere else (Gnumeric, R). Anyhow we would get:
CI       -0.124285169137953   0.0669809846708375
antilog  88.3127964130486   106.927514536243
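These numbers can be reproduced with plain double-precision arithmetic. A sketch (variable names are mine; the t-value is the R/Gnumeric one quoted elsewhere in this thread):

```python
import math

# Values from this thread: WNL's point estimate and SE on the ln scale,
# plus the high-precision t quantile qt(0.95, 22) from R/Gnumeric
diff_TR = -0.028652092233559     # LSMean difference T - R
se      =  0.055693090418743
t_crit  =  1.717144374380243

half = t_crit * se
lower, upper = diff_TR - half, diff_TR + half
print(lower, upper)              # ≈ -0.124285169137953, 0.0669809846708375
print(100 * math.exp(lower), 100 * math.exp(upper))
                                 # ≈ 88.3127964130486, 106.927514536243
```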

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
Vienna, Austria,
2011-11-06 15:46
@ yicaoting
Posting: # 7621

 Does WinNonlin lack precision?

Dear yicaoting!

❝ But, as you and everyone know (as WNL's user guide points out: Phoenix WinNonlin 6.0 guide, page 338), to convert the

90% CI of LSMean(R-T) to the 90% CI of the GeoMean Ratio(%) (T/R), we only need a simple calculation like

CI_Lower = 100 · exp(lower)

CI_Upper = 100 · exp(upper)

❝ No iteration is needed, and no optimization algorithm is needed.


See my other post. No optimization here, but lower and upper are obtained by REML (as ElMaestro pointed out). Default settings in PHX/WNL are maximum 50 iterations, estimability tolerance 10⁻⁵, singularity tolerance 10⁻¹⁰, convergence criterion 10⁻¹⁰. BTW, with our dataset convergence is reached in the first iteration [−2×REML log(likelihood) = 4.8289659].

yicaoting
NanKing, China,
2011-11-06 16:25
@ Helmut
Posting: # 7623

 use an ox cleaver to kill a chicken

Dear HS and ElMaestro!

❝ No optimization here, but lower and upper are obtained by REML (as ElMaestro pointed out). Default settings in PHX/WNL are maximum 50 iterations, estimability tolerance 10⁻⁵, singularity tolerance 10⁻¹⁰, convergence criterion 10⁻¹⁰. BTW, with our dataset convergence is reached in the first iteration [−2×REML log(likelihood) = 4.8289659].


Sorry for not knowing that WNL calculates lower and upper using REML.

To do such a simple anti-log calculation, how hard WNL has to work!

Let's take a rest:
In Chinese, there is a proverb: "Why use an ox cleaver to kill a chicken?" It means that a butcher's knife used for slaughtering oxen is not needed to kill a chicken. It has become a metaphor for a small matter that doesn't require a big effort.
I believe there is a counterpart in English.

Thank HS and ElMaestro for your patient explanation.
Helmut
Vienna, Austria,
2011-11-06 16:47
@ yicaoting
Posting: # 7626

 use an ox cleaver to kill a chicken

Dear yicaoting!

❝ In Chinese, there is a proverb: "Why use an ox cleaver to kill a chicken?"

❝ I believe that there is a counterpart in English.


In German it’s “Mit Kanonen auf Spatzen schießen” (my private translation: to shoot sparrows with cannons). Some English versions: “To take a sledgehammer to crack a nut”, “To break a butterfly on a wheel”.


Sparrows… Mah Jongg (Mandarin 麻將) is my favorite game. Wasn’t it named after the sparrow (in Cantonese 麻雀)?

Helmut
Vienna, Austria,
2011-11-06 15:14
@ yicaoting
Posting: # 7618

 WinNonlin’s BE wizard is known to be buggy

Dear yicaoting!

I’m not surprised; WinNonlin (all versions up to the current release of Phoenix/WinNonlin) contains a bug (filed as incident 00133350 on 22 Mar 2011, QC 10856). See my white paper.

❝ Can anyone give me the result of PHX/WNL 6.0 or a later version?


Phoenix/WinNonlin 6.2.1.51 (Bioequivalence object)
Average Bioequivalence:

Difference:   -0.028652092233559
Diff_SE:       0.055693090418743
Ratio_%Ref:   97.1754486595894
CI_90_Lower:  88.3108418878651
CI_90_Upper: 106.929881091872

LSM Differences:

Reference - Test:  0.028652092233559
StdError:          0.055693090418743
T_critical:        1.71714434835526
Lower_CI:         -0.066980983221424
Upper_CI:          0.124285167688542


If we change signs (lexical order in LSM!) and antilog we get:

90%lo:  88.3127965410502
90%hi: 106.9275143812607

… matching your manual calculation (antiloged: 88.3127964130425 – 106.927514536250) to 8–9 significant figures, and your antiloged LSM-CI exactly.*) The discrepancy is caused only to a minor part by the limited precision of the t-value. According to Pharsight, the (correct) output of the LME engine is taken over by the BE wizard. Pharsight suspects that the results are not used in full precision but are internally rounded (!!)…
So either use the workaround I suggested in the whitepaper or abandon the BE wizard entirely and set up the analysis in LME, backtransforming afterwards.

❝ So I need dear HS's help to provide the result of R's Tinv(0.1,22).


Here you are (R 2.14.0):

options(digits=16)
qt(1-0.05,22)
1.717144374380243

As expected, this matches Gnumeric (according to Gnumeric’s documentation, the same algo is used).


*) Proving that the bug is consistent between WNL 5.1.1 and PHX 6.2.1.51…

yicaoting
NanKing, China,
2011-11-06 16:38
@ Helmut
Posting: # 7624

 WinNonlin’s BE wizard is known to be buggy

❝ So either use the workaround I suggested in the whitepaper or abandon the BE wizard entirely and set up the analysis in LME, backtransforming afterwards.


Great "white paper"! :ok:
Thanks for your paper; I will be careful with WNL's BE wizard.

Your suggestion is definitely valuable if one wants to get high-precision results (at least for 90% CI of the ratio).
Helmut
Vienna, Austria,
2011-11-06 17:53
@ yicaoting
Posting: # 7627

 WinNonlin’s BE wizard is known to be buggy

Dear yicaoting!

❝ I will be careful with WNL's BE wizard.


Consider upgrading to Phoenix. It’s free of charge and support for ‘classical’ WinNonlin will end soon. In PHX I have set up a workflow running both the BE wiz and LME. Results are compared and if the difference in both CLs is ≤0.01 % (the magic rounding requirement according to FDA and EMA: 80.00 % ≤ CI ≤ 125.00 %) I use BE’s results. Although fancy stuff is included in WNL (M$-Word export) and PHX (+HTML tables) I never use them. Takes ages to set them up – much easier to import the ASCII-text into a statistical report. ;-)
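The comparison rule in that workflow is easy to script. A sketch with this thread's numbers (the 0.01 tolerance is the rounding margin mentioned above):

```python
# CIs in percent: BE wizard vs. LME + manual antilog (values from this thread)
be_ci  = (88.3108418878651, 106.929881091872)
lme_ci = (88.3127964130486, 106.927514536243)

# Accept the BE wizard's output only if both limits agree within 0.01 (%)
ok = all(abs(a - b) <= 0.01 for a, b in zip(be_ci, lme_ci))
print(ok)   # True: the limits differ by ~0.002, inside the margin
```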

❝ Your suggestion is definitely valuable if one wants to get high-precision results (at least for 90% CI of the ratio).


If you really bother (and after this post I’m sure you don’t) and have a lot of money to spare, you can obtain a Phoenix Connect license. Then it would be possible to get the t-value from an R-script.

There’s another point on my wishlist: Have you ever tried to reproduce WNL’s reported power in any other piece of software?

yicaoting
NanKing, China,
2011-11-06 18:12
@ Helmut
Posting: # 7628

 WinNonlin’s BE wizard is known to be buggy

❝ There’s another point on my wishlist: Have you ever tried to reproduce WNL’s reported power in any other piece of software?


Yes, I have tried.
My results match WNL's to 0.0001 for power, both for untransformed and ln-transformed data.
It's now midnight in China; tomorrow is Monday, so I must sleep now.
I will continue with the power story of WNL tomorrow or later. Do you have any suggestions, dear HS?
Helmut
Vienna, Austria,
2011-11-06 19:29
@ yicaoting
Posting: # 7629

 WinNonlin’s power?

Dear yicaoting!

❝ It's now midnight in China; tomorrow is Monday, so I must sleep now.

Good night! :sleeping:

❝ My results match WNL's to 0.0001 for power, both for untransformed and ln-transformed data.


Hhm. Going with Chow’s/Liu’s data set (PHX 6.2, ln-transformed):

Power at 20% = 0.9839865


How did Pharsight derive this value from the given…

Var(Residual)    0.037220643844683
Intrasubject CV  0.194735735490703 (almost correct: CVintra = √(exp(MSE) − 1))
Ratio %Ref      97.1754486595894


Using the famous package PowerTOST 0.8-7 for R (α 0.05, acceptance range 0.80–1.25, CV 0.1947357354907, n 24, 2×2 design, estimation exact or approximate) I got:

PE                power (exact = TRUE or FALSE)
1                 0.9745468083
0.971754486595894 0.9556573312
0.95              0.9096019837


Since power even at PE = 1 is lower than PHX’s result, the additional lines in the PowerTOST output are just for fun. Are they using a larger α of 0.10?

PE                power (exact = TRUE or FALSE)
1                 0.9918976973
0.971754486595894 0.9831552162
0.95              0.9593213240


Ah – this goes in the right (= wrong!!) direction. I had a quite lengthy discussion with Pharsight 1½ years ago, and it is clear that PHX/WNL doesn’t contain an algo for the non-central t-distribution. So let’s try PowerTOST’s hidden function for the shifted central t-distribution instead (still α 0.10! – .approx2.power.TOST(alpha = 0.1, ltheta1 = log(0.8), ltheta2 = log(1.25), diffm = log(PE), se = CV2se(0.194735735490703), n = 24, df = 22, bk = 2)):

PE                power
1                 0.9864870592
0.971754486595894 0.9774342541
0.95              0.9534457559
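The middle row can be reproduced outside R with nothing but textbook numerics. Below is my reading of the shifted-central-t approximation (function and variable names are mine; the t CDF/quantile are naive Simpson-plus-bisection cross-checks, not PowerTOST's code):

```python
import math

def t_pdf(x, df):
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def t_cdf(x, df, n=4000):
    a = abs(x)
    if a == 0.0:
        return 0.5
    h = a / n                             # Simpson's rule on [0, |x|]
    s = t_pdf(0.0, df) + t_pdf(a, df)
    for i in range(1, n):
        s += t_pdf(i * h, df) * (4 if i % 2 else 2)
    half = s * h / 3
    return 0.5 + half if x > 0 else 0.5 - half

def t_quantile(p, df):
    lo, hi = 0.0, 10.0                    # bisect the CDF
    while hi - lo > 1e-12:
        mid = (lo + hi) / 2
        if t_cdf(mid, df) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Shifted-central-t approximation of TOST power, alpha = 0.10 as in the post:
# shift each one-sided test statistic by its noncentrality and use central t.
alpha, df, n, bk = 0.10, 22, 24, 2
se    = math.sqrt(0.037220643844683)      # sqrt(MSE) from the thread
sem   = se * math.sqrt(bk / n)            # standard error of the difference
diffm = math.log(0.971754486595894)       # ln(PE)
tval  = t_quantile(1 - alpha, df)
d1 = (diffm - math.log(0.80)) / sem
d2 = (diffm - math.log(1.25)) / sem
power = t_cdf(-tval - d2, df) - t_cdf(tval - d1, df)
print(sem, power)   # sem ≈ 0.05569309 (= DiffSE); power close to 0.9774342541
```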

:confused:

❝ I will continue with the power story of WNL tomorrow or later.


I’m leaving tomorrow for Berlin; eager to meet D. Labes and the co-author of the forum’s scripts Auge. Not sure whether I’ll have free time to work on it.

❝ Do you have any suggestions?


If you understand PHX/WNL’s manual, please enlighten me: how does one derive 0.9839865?
For interested readers: The respective section of WinNonlin’s 4.1 manual here (hasn’t changed in later versions).


P.S.: Of course I’m not interested in post-hoc power at all. But it would be nice to set up the decision tree of a two-stage design within PHX/WNL.

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz