Darborn

Japan,
2024-05-29 10:26

Posting: # 24007

 CV% and confidence interval [Software]

Hi,

I have a problem relating the CV% of a parallel BE study to its confidence interval. The CV% from WinNonlin is ~14%, but the confidence interval is 74–136%. Although the sample size of this trial is rather small (19 subjects in the analysis), the CV% back-calculated from the confidence interval with PowerTOST is quite large (~39%). I'm not sure why the results are so different.

Anyone have ideas?

Sincerely,
Jietian
Helmut
Vienna, Austria,
2024-05-29 11:39

@ Darborn
Posting: # 24008

 Phoenix WinNonlin vs PowerTOST

Hi Jietian,

❝ I have a problem relating the CV% of a parallel BE study to its confidence interval. The CV% from WinNonlin is ~14%, but the confidence interval is 74–136%. Although the sample size of this trial is rather small (19 subjects in the analysis), the CV% back-calculated from the confidence interval with PowerTOST is quite large (~39%). I'm not sure why the results are so different.

Confirmed the result from PowerTOST:

library(PowerTOST)
cat(paste0("CV (total) ~",
    signif(100 * CI2CV(lower = 0.74, upper = 1.36,
                        n = 19, design = "parallel"), 2), "%\n"))

Unbalanced parallel design. n(i)= 10/9 assumed.
CV (total) ~39%


❝ Anyone have ideas?

Can you please post your data, the version of Phoenix, and the model specifications?

Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Darborn

Japan,
2024-05-30 03:35

@ Helmut
Posting: # 24009

 Phoenix WinNonlin vs PowerTOST

Hi Helmut,


I can't upload files here, so feel free to download them from my OneDrive share link.
The version of WinNonlin is 8.1. I used average bioequivalence with the parallel design and formulation R as the reference. The only fixed effect is Treatment (TRT) with Ln(X) transformation. No variance structure (left all empty). Everything else was left at the defaults (I think).

Thanks!
Helmut
Vienna, Austria,
2024-05-30 11:21

@ Darborn
Posting: # 24010

 Phoenix WinNonlin: Variance estimate → CV, Welch-test

Hi Jietian,

mystery solved. In its Output Data → Final Variance Parameters, Phoenix WinNonlin gives the estimated variance, not the CV. For \(\small{\log_{e}\textsf{-}}\)trans­formed data:$$\small{CV=100\sqrt{\exp (\widehat{s}^{2})-1}}$$To obtain the CV you have to set up a Custom Transformation in the Data Wizard:

[image]
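This would also explain the numbers in your first post. If the "~14%" reported by WinNonlin was in fact the estimated variance (s² ≈ 0.14 – an assumption on my part, not an output I have seen), back-transforming gives almost exactly the ~39% which CI2CV() returned:

```r
# Back-transform an estimated variance of log-transformed data to a CV (%)
s2 <- 0.14                       # assumption: the reported "~14%" was s-squared
CV <- 100 * sqrt(exp(s2) - 1)
cat(sprintf("CV ~%.1f%%\n", CV)) # ~38.8%
```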


❝ I used average bioequivalence with the parallel design and formulation R as the reference. The only fixed effect is Treatment (TRT) with Ln(X) transformation. No variance structure (left all empty).

I used the linear-up/log-down trapezoidal rule for AUC0–t (see this article why); download the Phoenix project to see the setup. It's in v8.4.0 but will run in your v8.1 as well (ignore the initial warning that it was saved in a later version).
However, you should not assume equal variances in a parallel design. This was not stated in the User's Guide of WinNonlin 8.1; see the online User's Guide of v8.3 for the setup. Forget the Levene pre-test (which inflates the type I error) and always use this setup in the future. Then, instead of the t-test, the Welch–Satterthwaite test with approximate degrees of freedom is applied. For your complete data I got:

[image]

While for Cmax the CVs of T and R are quite similar, for AUC the CV of R is more than twice that of T. That's because subject K015 had no concentrations after 48 h (the 48 h concentration was also the Cmax) and the AUC was far smaller than those of the other subjects.
This has consequences for the BE calculation, since the degrees of freedom differ as well. Therefore, the point estimates of both models are the same but the confidence intervals are not:

[image]

This demonstrates why the t-test in the case of unequal variances – and to a minor extent with unequal group sizes – is liberal (too narrow CI). Hence, in R and SAS the Welch-test is the default.
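For completeness, the approximate degrees of freedom of the Welch-test are given by the Satterthwaite formula:$$\small{\nu\approx\frac{\left(s_{1}^{2}/n_{1}+s_{2}^{2}/n_{2}\right)^{2}}{\frac{\left(s_{1}^{2}/n_{1}\right)^{2}}{n_{1}-1}+\frac{\left(s_{2}^{2}/n_{2}\right)^{2}}{n_{2}-1}}}$$With equal variances and equal group sizes it reduces to the usual \(\small{n_{1}+n_{2}-2}\); otherwise it is smaller, widening the CI.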

After excluding subject K015 I could confirm your results:

[image]
[image]

Now the CVs of T and R are reasonably close. Check with PowerTOST:

library(PowerTOST)
metric <- c("Cmax", "AUC")
lower  <- c( 74.75,  75.21)
upper  <- c(136.56, 141.15)
df     <- data.frame(metric = metric, CV = NA)
for (j in seq_along(metric)) {
  df$CV[j] <- sprintf(" ~%.2f%%", 100 * CI2CV(lower = lower[j], upper = upper[j],
                                              n = c(10, 9), design = "parallel"))
}
names(df) <- c("PK metric", "CV (total)")
print(df, row.names = FALSE, right = FALSE)

 PK metric CV (total)
 Cmax       ~39.08%   
 AUC        ~40.96%

See the Note of the man page of CI2CV():

The calculations are further based on a common variance of Test and Reference treatments in replicate crossover studies or parallel group study, respectively.
(my emphasis)
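Since CI2CV() can therefore only return a pooled CV, it is worth checking the per-group CVs directly. A quick sketch with the complete AUC data of the base-R post below, using the same back-transformation as above:

```r
# Per-group total CVs from the variances of the log-transformed AUCs
T  <- c(11029.0530, 18369.9140,  8917.8605,  6406.6051,  7019.6639,
         7646.7725,  7030.7416, 16904.9660, 11311.8470, 11325.7340)
R  <- c(10206.9030, 11803.9700, 18095.8210,  7356.8581,  5089.1634,
         6358.7897,  7418.6390,  1301.4833, 14322.1320, 13184.4070)
CV <- function(x) 100 * sqrt(exp(var(log(x))) - 1)
cat(sprintf("CV(T) ~%.0f%%, CV(R) ~%.0f%%\n", CV(T), CV(R)))
```

The CV of R is inflated by subject K015's small AUC, which is why the pooled CV alone can be misleading here.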


Helmut
Vienna, Austria,
2024-05-30 13:08

@ Helmut
Posting: # 24011

 Base 🇷 for comparison

Hi Jietian,

reproducing your complete AUC result in base R:

T   <- c(11029.0530, 18369.9140,  8917.8605,  6406.6051,  7019.6639,
          7646.7725,  7030.7416, 16904.9660, 11311.8470, 11325.7340)
R   <- c(10206.9030, 11803.9700, 18095.8210,  7356.8581,  5089.1634,
          6358.7897,  7418.6390,  1301.4833, 14322.1320, 13184.4070) # 1301.4833: subject K015
tt  <- t.test(x = log(T), y = log(R), var.equal = TRUE,
              conf.level = 0.90)      # force the conventional t-test
wt  <- t.test(x = log(T), y = log(R),
              conf.level = 0.90)      # Welch-test (default in R)
AUC <- data.frame(Method = NA_character_, df = NA_real_, PE = NA_real_,
                  lower = NA_real_, upper = NA_real_)
AUC[1, ] <- c(tt$method, sprintf("%.4f", tt$parameter),
              round(100 * exp(c(diff(rev(tt$estimate)), tt$conf.int)), 2))
AUC[2, ] <- c(wt$method, sprintf("%.4f", wt$parameter),
              round(100 * exp(c(diff(rev(wt$estimate)), wt$conf.int)), 2))
names(AUC)[4:5] <- c("lower CL", "upper CL")
print(AUC, row.names = FALSE)

                  Method      df     PE lower CL upper CL
       Two Sample t-test 18.0000 125.89    79.71   198.81
 Welch Two Sample t-test 13.1164 125.89    78.97   200.69


Phoenix WinNonlin:

[image]


Dif-tor heh smusma 🖖🏼 Довге життя Україна! [image]
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
Darborn

Japan,
2024-05-31 04:07

@ Helmut
Posting: # 24012

 Base 🇷 for comparison

Hi Helmut,

I really appreciate your detailed reply; it was beyond my expectations.
Thank you again for the explanation!

Sincerely,
Jietian


Welcome! [Helmut]


The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz