Phoenix WinNonlin: Variance estimate → CV, Welch-test [Software]

posted by Helmut – Vienna, Austria, 2024-05-30 11:21 – Posting: # 24010

Hi Jietian,

mystery solved. Phoenix WinNonlin reports in Output Data > Final Variance Parameters the estimated variance, not the CV, which for \(\small{\log_{e}\textsf{-}}\)transformed data is:$$\small{CV=100\sqrt{\exp (\widehat{s}^{2})-1}}$$You have to set up a Custom Transformation in the Data Wizard:

[image]
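As a quick check of the back-transformation, a minimal sketch in R (the variance 0.15 is an illustrative value, not one from your output):

```r
# Convert an estimated variance of log-transformed data to a CV in percent:
# CV = 100 * sqrt(exp(s^2) - 1)
s2 <- 0.15                    # illustrative variance, not from the project
CV <- 100 * sqrt(exp(s2) - 1)
round(CV, 2)                  # 40.23
```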


❝ I used average bioequivalence with parallel design and formulation R as the reference. The only fixed effect is the Treatment (TRT) with Ln(X) transformation. No variance structure (left all empty).

I used the linear-up/log-down trapezoidal method for AUC0–t (see this article for why); download the Phoenix project to see the setup. It was saved in v8.4.0 but will run in your v8.1 as well (ignore the initial warning that it was saved in a later version).
However, you should not assume equal variances in a parallel design. This was not stated in the User’s Guide of WinNonlin 8.1; see the online User’s Guide of v8.3 for the setup. Forget the Levene pre-test (which inflates the type I error) and always use this setup in the future. Then, instead of the t-test, the Welch–Satterthwaite test with approximate degrees of freedom is applied. For your complete data I got:

[image]
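The Welch–Satterthwaite approximation itself is simple enough to reproduce by hand. A sketch in R with the group sizes of your study (10 and 9) and purely illustrative variances:

```r
# Welch–Satterthwaite approximate degrees of freedom for two groups with
# variances s1sq, s2sq and sizes n1, n2 (the variances are illustrative,
# not taken from the project file)
welch_df <- function(s1sq, n1, s2sq, n2) {
  (s1sq / n1 + s2sq / n2)^2 /
    ((s1sq / n1)^2 / (n1 - 1) + (s2sq / n2)^2 / (n2 - 1))
}
welch_df(s1sq = 0.10, n1 = 10, s2sq = 0.40, n2 = 9)  # ~11.49, < 17 = n1 + n2 - 2
```

With unequal variances the approximate degrees of freedom are smaller than the n1 + n2 − 2 of the conventional t-test, which widens the confidence interval.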

While for Cmax the CVs of T and R are quite similar, for AUC the CV of R is more than twice that of T. That’s because subject K015 had no concentrations after 48 h (the 48 h concentration was also the Cmax) and the AUC was much smaller than those of the other subjects.
This has consequences for the BE calculation since the degrees of freedom differ as well. Therefore, the point estimates of both models are the same but the confidence intervals are not:

[image]

It demonstrates why the t-test in the case of unequal variances – and, to a lesser extent, unequal group sizes – is liberal (the CI is too narrow). Hence, in [image] and SAS the Welch-test is the default.
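The effect can be demonstrated from summary statistics alone. A sketch in R assuming illustrative variances, with the larger variance in the smaller group as in your data:

```r
# 95% CI half-widths of a difference of two means: pooled t vs Welch.
# Group sizes as in the study; the variances are illustrative assumptions.
n1 <- 10; s1sq <- 0.09   # T: larger group, smaller variance
n2 <-  9; s2sq <- 0.81   # R: smaller group, larger variance
# pooled t: common variance, n1 + n2 - 2 degrees of freedom
sp2      <- ((n1 - 1) * s1sq + (n2 - 1) * s2sq) / (n1 + n2 - 2)
hw_pool  <- qt(0.975, n1 + n2 - 2) * sqrt(sp2 * (1 / n1 + 1 / n2))
# Welch: separate variances, Satterthwaite degrees of freedom
se2      <- s1sq / n1 + s2sq / n2
df_w     <- se2^2 / ((s1sq / n1)^2 / (n1 - 1) + (s2sq / n2)^2 / (n2 - 1))
hw_welch <- qt(0.975, df_w) * sqrt(se2)
hw_pool < hw_welch   # TRUE: the pooled CI is narrower, i.e. the test is liberal
```

Both the larger standard error (the big variance sits over the small n) and the smaller degrees of freedom widen the Welch interval; the pooled interval understates the uncertainty.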

After excluding subject K015 I could confirm your results:

[image]
[image]

Now the CVs of T and R are reasonably close. Check with PowerTOST:

library(PowerTOST)
# back-calculate the total CV from the 90% confidence limits of T/R
metric <- c("Cmax", "AUC")
lower  <- c( 74.75,  75.21) # lower 90% confidence limits (%)
upper  <- c(136.56, 141.15) # upper 90% confidence limits (%)
df     <- data.frame(metric = metric, CV = NA)
for (j in seq_along(metric)) {
  df$CV[j] <- sprintf(" ~%.2f%%", 100 * CI2CV(lower = lower[j], upper = upper[j],
                                              n = c(10, 9), design = "parallel"))
}
names(df) <- c("PK metric", "CV (total)")
print(df, row.names = FALSE, right = FALSE)

 PK metric CV (total)
 Cmax       ~39.08%   
 AUC        ~40.96%

See the Note in the man page of CI2CV():

The calculations are further based on a common variance of Test and Reference treatments in replicate crossover studies or parallel group study, respectively.
(my emphasis)


Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
