Is WinNonlin lacking precision? [Software]

posted by yicaoting – NanKing, China, 2011-11-06 15:20 – Posting: # 7619

Dear warm-hearted ElMaestro,
Thank you for your quick response.

❝ The fitting engines must stop at some point when the objective function cannot be improved much further. The software internally keeps track of the improvement at each iteration and sets some criteria for stopping. It can for example be when the Log likelihood (or SS) cannot be improved by more than 0.0001 arbitrary units or something like that.

❝ ... There are many different ways to optimise; some prefer the brutally efficient Nelder-Mead algo, others the softcore Newtonian-based meffuds like Marquardt-Levenberg. Entire books are written about this.


But as you and everyone know (and as WNL's user guide points out; Phoenix WinNonlin 6.0 guide, page 338), to convert the 90% CI of the LSMean difference (R-T) on the ln scale to the 90% CI of the GeoMean Ratio (%) (T/R), we only need a simple calculation:
CI_Lower = 100 · exp(lower)
CI_Upper = 100 · exp(upper)

No iteration is needed, and no optimization algorithm is required.
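To make the point concrete, here is a minimal sketch of that back-transformation. The ln-scale CI bounds below are made-up illustrative numbers, not output from any WinNonlin run:

```python
import math

# Hypothetical 90% CI of the LSMean difference on the ln scale
# (illustrative values only, not from WinNonlin output):
lower, upper = -0.0512, 0.0837

# Back-transform to the 90% CI of the GeoMean Ratio in percent,
# per the simple formula quoted from the user guide:
ci_lower = 100 * math.exp(lower)
ci_upper = 100 * math.exp(upper)

print(f"{ci_lower:.2f}% - {ci_upper:.2f}%")
```

Two calls to exp() and two multiplications; nothing here requires an iterative fitting engine or a convergence criterion.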

Otherwise we would have to conclude that WNL uses some strange (and evidently lower-precision) method to calculate the 90% CI of the GeoMean Ratio (%) (T/R), rather than the one described in its own User Guide.

❝ By the way, with R 2.10.1 I get:

format(qt(0.95,22), digits=20)

[1] "1.717144374380243"


Thank you for your kind help. My so-called "second finding" is clear now: the key point is the low precision of WNL's Tinv(), which of course I have to accept.
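For what it's worth, one can sketch how much a truncated t quantile actually moves the CI. The point estimate and SE below are hypothetical illustrative numbers; only the full-precision quantile 1.717144374380243 (qt(0.95, 22) from R, quoted above) comes from this thread, and the truncated value is an assumed example of limited precision:

```python
import math

# Assumed illustrative inputs (not from the original analysis):
diff, se = 0.0163, 0.0500       # ln-scale point estimate and its SE
t_full  = 1.717144374380243     # qt(0.95, 22), as quoted from R
t_trunc = 1.7171                # hypothetically truncated quantile

results = {}
for name, t in (("full", t_full), ("truncated", t_trunc)):
    # 90% CI of the GeoMean Ratio (%) via the exp back-transformation:
    lo = 100 * math.exp(diff - t * se)
    hi = 100 * math.exp(diff + t * se)
    results[name] = (lo, hi)
    print(f"{name:9s} t: {lo:.4f}% - {hi:.4f}%")
```

For inputs of this magnitude the two CIs differ only around the fourth decimal place of the percentage, so a low-precision Tinv() is annoying rather than decision-changing, though it does explain small discrepancies against hand calculations.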

