Darborn

Taiwan,
2024-05-29 10:26

Posting: # 24007

 CV% and confidence interval [Software]

Hi,

I have a problem related to the CV% (from a parallel BE study) and the confidence interval. The CV% from WinNonlin is ~14%, but the confidence interval is 74%–136%. Although the sample size of this trial is rather small (19 subjects in the analysis), the CV% back-calculated from the confidence interval in PowerTOST is quite large (~39%). I'm not sure why the results are so far apart.

Anyone have ideas?

Sincerely,
Jietian
Helmut
Vienna, Austria,
2024-05-29 11:39

@ Darborn
Posting: # 24008

 Phoenix WinNonlin vs PowerTOST

Hi Jietian,

❝ I have a problem related to the CV% (from a parallel BE study) and the confidence interval. The CV% from WinNonlin is ~14%, but the confidence interval is 74%–136%. Although the sample size of this trial is rather small (19 subjects in the analysis), the CV% back-calculated from the confidence interval in PowerTOST is quite large (~39%). I'm not sure why the results are so far apart.

Confirmed the result from PowerTOST:

library(PowerTOST)
cat(paste0("CV (total) ~",
    signif(100 * CI2CV(lower = 0.74, upper = 1.36,
                        n = 19, design = "parallel"), 2), "%\n"))

Unbalanced parallel design. n(i)= 10/9 assumed.
CV (total) ~39%
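
In case it helps, a minimal sketch of the back-calculation CI2CV() performs under the same assumptions (pooled common variance, group sizes 10 and 9):

n1 <- 10; n2 <- 9
lo <- 0.74; up <- 1.36
hw <- (log(up) - log(lo)) / 2             # half-width of the 90% CI (log-scale)
se <- hw / qt(1 - 0.05, df = n1 + n2 - 2) # standard error of the difference
s2 <- se^2 / (1 / n1 + 1 / n2)            # pooled variance (log-scale)
cat(sprintf("CV (total) ~%.0f%%\n", 100 * sqrt(exp(s2) - 1))) # ~39%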


❝ Anyone have ideas?

Can you please post your data, the version of Phoenix, and the model specifications?

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Darborn

Taiwan,
2024-05-30 03:35

@ Helmut
Posting: # 24009

 Phoenix WinNonlin vs PowerTOST

Hi Helmut,

I can't upload files here, so feel free to download it from my OneDrive share link.
The version of WinNonlin is 8.1. I used average bioequivalence with parallel design and formulation R as the reference. The only fixed effect is the Treatment (TRT) with Ln(X) transformation. No variance structure (left all empty). Everything else was left at the defaults (I think).

Thanks!
Helmut
Vienna, Austria,
2024-05-30 11:21

@ Darborn
Posting: # 24010

 Phoenix WinNonlin: Variance estimate → CV, Welch-test

Hi Jietian,

mystery solved. In the Output Data (Final Variance Parameters) Phoenix WinNonlin gives the estimated variance and not the CV, which for \(\small{\log_{e}\textsf{-}}\)transformed data is$$\small{CV=100\sqrt{\exp (\widehat{s}^{2})-1}}$$You have to set up a Custom Transformation in the Data Wizard:

[screenshot: Custom Transformation in the Data Wizard]
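
In R, a minimal sketch of that conversion (assuming the ~14% you read from the Final Variance Parameters was in fact the estimated residual variance, i.e. about 0.14):

var.resid <- 0.14                # assumption: the value read as ~14%
CV        <- 100 * sqrt(exp(var.resid) - 1)
cat(sprintf("CV ~%.0f%%\n", CV)) # ~39%, in line with CI2CV() above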


❝ I used average bioequivalence with parallel design and formulation R as the reference. The only fixed effect is the Treatment (TRT) with Ln(X) transformation. No variance structure (left all empty).

I used the linear-up/log-down trapezoidal method for AUC0–t (see this article for why); download the Phoenix-Project to see the setup. It’s in v8.4.0 and will run in your v8.1 as well (ignore the initial warning that it was saved in a later version).
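Not the Phoenix implementation, just a minimal sketch of that rule (hypothetical helper auc.linlog()): linear trapezoids while concentrations rise or stay constant, log trapezoids while they fall.

auc.linlog <- function(time, conc) {
  auc <- 0
  for (i in seq_along(time)[-1]) {
    dt <- time[i] - time[i - 1]
    if (conc[i] >= conc[i - 1] || conc[i] == 0) { # linear up (or concentration zero)
      auc <- auc + dt * (conc[i] + conc[i - 1]) / 2
    } else {                                      # log down
      auc <- auc + dt * (conc[i - 1] - conc[i]) / log(conc[i - 1] / conc[i])
    }
  }
  auc
}
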
However, you should not assume equal variances in a parallel design. This was not stated in the User’s Guide of WinNonlin 8.1; see the online User’s Guide of v8.3 for the setup. Forget the Levene pretest (which inflates the type I error) – always use this setup in the future. Then, instead of the t-test, the Welch–Satterthwaite test with approximate degrees of freedom is applied. For your complete data I got:

[screenshot: CVs of T and R for Cmax and AUC (complete data)]

While for Cmax the CVs of T and R are quite similar, for AUC the CV of R is more than twice that of T. That’s because subject K015 had no concentrations after 48 h (which was also the Cmax) and hence an AUC far smaller than those of the other subjects.
This has consequences for the BE calculation, since the degrees of freedom differ as well. The point estimates of both models are the same, but the confidence intervals are not:

[screenshot: 90% confidence intervals, pooled t-test vs Welch-test (complete data)]
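
For reference (textbook formula, not taken from the Phoenix output): the Welch-test uses the Satterthwaite approximation of the degrees of freedom$$\small{\nu\approx\frac{\left(s_{1}^{2}/n_{1}+s_{2}^{2}/n_{2}\right)^{2}}{\frac{\left(s_{1}^{2}/n_{1}\right)^{2}}{n_{1}-1}+\frac{\left(s_{2}^{2}/n_{2}\right)^{2}}{n_{2}-1}}}$$where \(\small{s_{1}^{2},s_{2}^{2}}\) are the variances of the log-transformed data and \(\small{n_{1},n_{2}}\) the group sizes; hence the fractional degrees of freedom instead of \(\small{n_{1}+n_{2}-2}\).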

This demonstrates why the t-test is liberal (too narrow CI) in case of unequal variances – and, to a minor extent, with unequal group sizes. Hence, in R and SAS the Welch-test is the default.
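
A quick simulation sketch of that point (purely illustrative assumptions: true ratio 1, CVs of 25% for T and 60% for R, group sizes 10 and 9). The 90% CI of the pooled t-test covers the true value less often than nominal, whereas the Welch-test stays close to 90%:

set.seed(123456)                   # for reproducibility
nsim <- 1e4
n1   <- 10;   n2  <- 9             # group sizes (T / R)
CV1  <- 0.25; CV2 <- 0.60          # assumed CVs, purely illustrative
sd1  <- sqrt(log(CV1^2 + 1)); sd2 <- sqrt(log(CV2^2 + 1))
covered <- matrix(FALSE, nrow = nsim, ncol = 2,
                  dimnames = list(NULL, c("pooled t", "Welch")))
for (i in seq_len(nsim)) {         # log-scale data, true difference = 0
  x    <- rnorm(n1, mean = 0, sd = sd1)
  y    <- rnorm(n2, mean = 0, sd = sd2)
  ci.t <- t.test(x, y, var.equal = TRUE,  conf.level = 0.90)$conf.int
  ci.w <- t.test(x, y, var.equal = FALSE, conf.level = 0.90)$conf.int
  covered[i, ] <- c(ci.t[1] <= 0 & 0 <= ci.t[2],
                    ci.w[1] <= 0 & 0 <= ci.w[2])
}
round(100 * colMeans(covered), 2)  # empirical coverage of the nominal 90% CIs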

After excluding subject K015 I could confirm your results:

[screenshots: results after excluding subject K015]

Now the CVs of T and R are reasonably close. Check with PowerTOST:

library(PowerTOST)
metric <- c("Cmax", "AUC")
lower  <- c( 74.75,  75.21)  # lower 90% CLs (%)
upper  <- c(136.56, 141.15)  # upper 90% CLs (%)
df     <- data.frame(metric = metric, CV = NA)
for (j in seq_along(metric)) {
  df$CV[j] <- sprintf(" ~%.2f%%", 100 * CI2CV(lower = lower[j], upper = upper[j],
                                              n = c(10, 9), design = "parallel"))
}
names(df) <- c("PK metric", "CV (total)")
print(df, row.names = FALSE, right = FALSE)

 PK metric CV (total)
 Cmax       ~39.08%   
 AUC        ~40.96%

See the Note of the man-page of CI2CV():

The calculations are further based on a common variance of Test and Reference treatments in replicate crossover studies or parallel group study, respectively.
(my emphasis)


Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Helmut
Vienna, Austria,
2024-05-30 13:08

@ Helmut
Posting: # 24011

 Base 🇷 for comparison

Hi Jietian,

reproducing your complete AUC-result in base R:

T   <- c(11029.0530, 18369.9140,  8917.8605,  6406.6051,  7019.6639,
          7646.7725,  7030.7416, 16904.9660, 11311.8470, 11325.7340)
R   <- c(10206.9030, 11803.9700, 18095.8210,  7356.8581,  5089.1634,
          6358.7897,  7418.6390,  1301.4833, 14322.1320, 13184.4070) # 1301.4833: subject K015
wt  <- t.test(x = log(T), y = log(R),
              conf.level = 0.90) # Welch-test by default (implicit var.equal = FALSE)
tt  <- t.test(x = log(T), y = log(R), var.equal = TRUE,
              conf.level = 0.90) # force to conventional t-test
AUC <- data.frame(df = rep(NA_real_, 2), PE = NA_real_,
                  lower = NA_real_, upper = NA_real_)
AUC[1, ]   <- c(tt$parameter, 100 * exp(c(diff(rev(tt$estimate)), tt$conf.int)))
AUC[2, ]   <- c(wt$parameter, 100 * exp(c(diff(rev(wt$estimate)), wt$conf.int)))
AUC[, 1]   <- round(AUC[, 1], 4)
AUC[, 2:4] <- round(AUC[, 2:4], 2) # acc. to guidelines
WNL <- data.frame(df = c(18, 13.1164), PE = rep(125.89, 2),
                  lower = c(79.71, 78.97), upper = c(198.81, 200.69))
res <- paste("Are results of R and Phoenix WinNonlin equal?",
             all.equal(AUC, WNL), "\n") # check
AUC <- cbind(Method = c(tt$method, wt$method), AUC)
names(AUC)[4:5] <- c("lower CL", "upper CL")
print(AUC, row.names = FALSE); cat(res)

                  Method      df     PE lower CL upper CL
       Two Sample t-test 18.0000 125.89    79.71   198.81
 Welch Two Sample t-test 13.1164 125.89    78.97   200.69
Are results of R and Phoenix WinNonlin equal? TRUE
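
As an aside, the per-treatment CVs mentioned in the previous post can be obtained from the same vectors (a sketch; T and R as defined above):

CV.pct <- function(x) 100 * sqrt(exp(var(log(x))) - 1) # total CV from the log-scale variance
round(c(CV.T = CV.pct(T), CV.R = CV.pct(R)), 2)         # the CV of R is inflated by subject K015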


Phoenix WinNonlin:

[screenshot: Phoenix WinNonlin output]


Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Darborn

Taiwan,
2024-05-31 04:07

@ Helmut
Posting: # 24012

 Base 🇷 for comparison

Hi Helmut,

I must express my sincere appreciation for your reply; the level of detail was beyond my expectations.
Thank you again for the explanation!

Sincerely,
Jietian


Welcome! [Helmut]