## Phoenix WinNonlin: Variance estimate → CV, Welch-test [Software]

Mystery solved. Phoenix WinNonlin gives in

**Output Data** → `Final Variance Parameters`

the estimated variance and *not* the *CV*, which for \(\small{\log_{e}\textsf{-}}\)transformed data is $$\small{CV=100\sqrt{\exp(\widehat{s}^{2})-1}}$$

You have to set up a `Custom Transformation` in the `Data Wizard`:

❝ I used average bioequivalence with parallel design and formulation R as the reference. The only fixed effect is the Treatment (TRT) with Ln(X) transformation. No variance structure (left all empty).

For *AUC*<sub>0–t</sub> (see this article why): download the Phoenix project to see the setup. It's in v8.4.0 and will run in your v8.1 as well (ignore the initial warning that it was saved in a later version).
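The back-transformation from the reported variance to the *CV* can be retraced in a few lines; a minimal sketch in Python, where the variance value `0.0484` is hypothetical, standing in for the one shown under `Final Variance Parameters`:

```python
import math

def cv_from_var(s2: float) -> float:
    """CV (%) from the variance of log-transformed data: 100 * sqrt(exp(s2) - 1)."""
    return 100 * math.sqrt(math.exp(s2) - 1)

# hypothetical variance estimate taken from 'Final Variance Parameters'
s2 = 0.0484
print(f"CV = {cv_from_var(s2):.2f}%")  # a variance of 0.0484 gives a CV of ~22.27%
```

Note that for small variances the *CV* is close to \(100\,\widehat{s}\), but the exact back-transformation above should always be used.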

However, you should not assume equal variances in a parallel design. This was not stated in the User's Guide of WinNonlin 8.1; see the online User's Guide of v8.3 for the setup. Forget the Levene pre-test (which inflates the type I error) and always use this setup in the future. Then, instead of the *t*-test, the Welch-Satterthwaite test with approximate degrees of freedom is applied. For your complete data I got:

While for *C*<sub>max</sub> the *CV*s of T and R are quite similar, for *AUC* the one of R is more than twice the one of T. That's because subject `K015` had no concentrations after 48 h (which was also the *C*<sub>max</sub>), and its *AUC* was way smaller than those of the other subjects.
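How strongly unequal variances pull down the degrees of freedom can be sketched with the Welch-Satterthwaite formula; a minimal Python example with the study's group sizes (10 and 9) and made-up variances:

```python
def welch_df(var1: float, n1: int, var2: float, n2: int) -> float:
    """Welch-Satterthwaite approximate degrees of freedom for two independent groups."""
    se1, se2 = var1 / n1, var2 / n2
    return (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))

# with equal variances the df are close to the pooled n1 + n2 - 2 = 17
print(round(welch_df(1.0, 10, 1.0, 9), 2))  # ~16.79
# with R's variance four times that of T the df shrink markedly
print(round(welch_df(1.0, 10, 4.0, 9), 2))  # ~11.49
```

Fewer degrees of freedom mean a larger *t*-quantile and hence a wider, more conservative confidence interval.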

This has consequences for the BE calculation, since the degrees of freedom will differ as well: the point estimates of both models are the same, but the confidence intervals are not:

It demonstrates why the *t*-test is liberal (too narrow CI) in the case of unequal variances, and to a minor extent with unequal group sizes. Hence, in base **R** and **SAS** the Welch-test is the default. After excluding subject `K015` I could confirm your results:

The *CV*s of T and R are reasonably close. Check with `PowerTOST`:

```r
library(PowerTOST)
metric <- c("Cmax", "AUC")
lower  <- c( 74.75,  75.21)
upper  <- c(136.56, 141.15)
df     <- data.frame(metric = metric, CV = NA)
for (j in seq_along(metric)) {
  df$CV[j] <- sprintf(" ~%.2f%%", 100 * CI2CV(lower = lower[j], upper = upper[j],
                                              n = c(10, 9), design = "parallel"))
}
names(df) <- c("PK metric", "CV (total)")
print(df, row.names = FALSE, right = FALSE)
```

```
PK metric CV (total)
Cmax       ~39.08%
AUC        ~40.96%
```
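What `CI2CV()` does can be retraced by hand. A sketch in Python of the back-calculation for a parallel design (my own re-implementation, not `PowerTOST`'s code): the CI limits are entered as ratios, the group sizes are the ones above, and the *t*-quantile `qt(0.95, df = 17)` of the conventional (pooled) model is hardcoded to avoid a SciPy dependency:

```python
import math

def ci2cv_parallel(lower: float, upper: float, n1: int, n2: int, t_quant: float) -> float:
    """Back-calculate the total CV (%) from a CI of the T/R ratio (parallel design)."""
    se_diff = math.log(upper / lower) / (2 * t_quant)  # SE of the log-difference
    mse = se_diff ** 2 / (1 / n1 + 1 / n2)             # residual variance (log scale)
    return 100 * math.sqrt(math.exp(mse) - 1)

t17 = 1.739607                                              # qt(0.95, df = 17)
print(round(ci2cv_parallel(0.7475, 1.3656, 10, 9, t17), 2)) # ~39.08 (Cmax)
print(round(ci2cv_parallel(0.7521, 1.4115, 10, 9, t17), 2)) # ~40.96 (AUC)
```

This reproduces the values above and makes the pooled-variance assumption explicit: a single `mse` is recovered from the CI, which is exactly the caveat in the man page quoted below.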

Note the man page of `CI2CV()`:

❝ The calculations are further based on a common variance of Test and Reference treatments in replicate crossover studies or parallel group study, respectively.

(my emphasis)

*Dif-tor heh smusma* 🖖🏼 Long live Ukraine!


Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮

Science Quotes

### Complete thread:

- CV% and confidence interval Darborn 2024-05-29 08:26 [Software]
  - Phoenix WinNonlin vs PowerTOST Helmut 2024-05-29 09:39
    - Phoenix WinNonlin vs PowerTOST Darborn 2024-05-30 01:35
      - Phoenix WinNonlin: Variance estimate → CV, Welch-test Helmut 2024-05-30 09:21
        - Base 🇷 for comparison Helmut 2024-05-30 11:08
          - Base 🇷 for comparison Darborn 2024-05-31 02:07
