Brus
Spain, 2023-06-14 18:34
Posting: # 23593

 Calculation of PE and 90% CI [General Statistics]

Dear all,

In a partial replicate design (3 periods, 3 sequences) in which only the reference is replicated, with a sample size of 45, the CRO sent me the following ANOVA (PROC GLM) for the R/R comparison:

Source   / DF /       SS      /     MS       / Pr > F
Seq      /  2 /  0.20523638   / 0.10261819   / 0.7686
Sbj(seq) / 40 / 15.497377771  / 0.3874344443 / <0.0001
Per      /  2 /  0.0922514826 / 0.0461257413 / 0.3597
Treat    /  1 /  0.0231184863 / 0.0231184863 / 0.4725
Error    / 37 /  1.6238230813 / 0.0438871103 / -

Least Squares Means:
R1: 2.522
R2: 2.419


They also sent me the following 90% CI for the R1/R2 comparison:

Ratio = 110.79
90% Confidence Interval = 87.31 - 140.57
Intra-subject CV% = 21.18


The number of observations used in the calculation was 40 due to dropouts.

But I have several doubts:
- Such a wide 90% CI surprises me for a relatively low (at least not high) ISCV.
- If I back-calculate the CV from this 90% CI with CVfromCI() of PowerTOST, I obtain a CV of around 70%. Why?
- In addition, I have tried to calculate the ratio and 90% CI from the MSE and LSMs according to Helmut's example in the 2018 Pamplona lecture “Basic Statistic on BE”, slides 16 and 17:

SE = √(MSE / nps), where nps = (n1 + n2) / 2
∆ = LSM(T) − LSM(R)
PE (GMR) = e^∆
90% CI = e^(∆ ± t(α = 0.05, df) × SE)

I used the Error MS from the first table (≈0.0439) as the MSE and applied the formula from Helmut's presentation to convert it into an SE, and then calculated the ratio and 90% CI.

I obtained a ratio of 110.8 but a 90% CI of 102.44 – 119.94: the same ratio as the CRO but a different 90% CI. This is strange, although it is in line with the ISCV reported by the CRO.
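
For transparency, here is a minimal R sketch of that hand calculation (MSE, df, and LSMs are taken from the tables above; nps = 20 is an assumption which approximately reproduces my numbers):

MSE   <- 0.0438871103      # Error MS from the first table
df    <- 37                # Error degrees of freedom
nps   <- 20                # assumed (n1 + n2) / 2
SE    <- sqrt(MSE / nps)
delta <- 2.522 - 2.419     # LSM(R1) - LSM(R2), assumed to be on the ln scale
PE    <- 100 * exp(delta)
CI    <- 100 * exp(delta + c(-1, 1) * qt(1 - 0.05, df) * SE)
round(c(PE = PE, lower = CI[1], upper = CI[2]), 2)
# roughly 110.8 and 102.4 - 120.0 with these rounded inputs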

Furthermore, doing various tests, I have noticed that if you add the Sbj(Seq) mean square from the first table to the MSE, convert this sum into an SE with the formula from Helmut's presentation, and then calculate the ratio and 90% CI, you obtain approximately the same 90% CI as the CRO (see the sketch below). But if Sbj(Seq) is significant, should it be calculated in this way, and why? Should it be taken into account and widen the 90% CI?
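
A minimal numeric sketch of that observation (nps = 20 and df = 37 are guesses based on the figures above; I am not claiming this is how the CRO actually calculated it):

MS.subj <- 0.3874344443    # Sbj(Seq) MS from the first table
MSE     <- 0.0438871103    # Error MS from the first table
SE      <- sqrt((MSE + MS.subj) / 20)
delta   <- 2.522 - 2.419
round(100 * exp(delta + c(-1, 1) * qt(1 - 0.05, 37) * SE), 2)
# roughly 86.5 - 142.0, i.e., in the ballpark of the CRO's 87.31 - 140.57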

So, what is the correct 90% CI?

Thank you so much

Best regards,
Helmut
Vienna, Austria, 2023-06-15 00:26
@ Brus
Posting: # 23594

 No ‘R1’ and ‘R2’ in replicate designs

Hi Brus,

❝ In a partial replicate design (3 periods, 3 sequences) in which only the reference is replicated, with a sample size of 45, the CRO sent me the following ANOVA (PROC GLM) for the R/R comparison:

❝ …
❝ Least Squares Means:
❝ R1: 2.522
❝ R2: 2.419

❝ They also sent me the following 90% CI for the R1/R2 comparison:
❝ Ratio = 110.79
❝ 90% Confidence Interval = 87.31 - 140.57
❝ Intra-subject CV% = 21.18

I guess your problems started right at the beginning.

There can be only one.


There is no ‘R1’ and ‘R2’, only one R repeatedly administered in the sequences in different periods. Say, the sequences are TRR | RTR | RRT. Did the CRO call the first administration in each sequence R1 and the second R2 (while dropping T)?
After recoding this gives:   R1R2 | R1R2 | R1R2

❝ But I have several doubts:

❝ - Such a wide 90% CI surprises me for a relatively low (at least not high) ISCV.

As said above, such a ‘comparison’ will not work.

❝ - If I back-calculate the CV from this 90% CI with CVfromCI() of PowerTOST, I obtain a CV of around 70%. Why?

You tried CI2CV(lower=0.8731, upper=1.4057, design="2x2x2", n=40), right? That’s not what you have. After recoding (wild guess) you have an Incomplete Block Design. Other degrees of freedom, :blahblah:
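
For illustration only (a minimal sketch; the 2×2×2 design and n = 40 are merely what such a back-calculation assumes, not what you actually have):

library(PowerTOST)
# Back-calculate the CV from the reported 90% CI as if it came from a
# conventional 2x2x2 crossover with 40 subjects - not the actual design!
CVfromCI(lower = 0.8731, upper = 1.4057, n = 40, design = "2x2x2")
# roughly 0.70, i.e., the ~70% you obtained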

❝ - In addition, I have tried to calculate the ratio and 90% CI from the MSE and LSMs according to Helmut's example in the 2018 Pamplona lecture “Basic Statistic on BE”, slides 16 and 17:

❝ …

❝ I obtained a ratio of 110.8 but a 90% CI of 102.44 – 119.94: the same ratio as the CRO but a different 90% CI. This is strange, although it is in line with the ISCV reported by the CRO.

These formulas are for a 2×2×2 crossover and are also implemented in CI2CV(). But again, there is only one R; a ‘PE’ and a ‘CI’ don’t make sense.

❝ Furthermore, doing various tests, … why?

No idea.

❝ So, what is the correct 90% CI?

There is none. You can only calculate the CVwR according to the EMA’s or the FDA’s models. The results will be similar, though quite often the FDA’s is a bit smaller.

If you want, send me the raw data off-list.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Brus
Spain, 2023-06-15 16:07
@ Helmut
Posting: # 23598

 No ‘R1’ and ‘R2’ in replicate designs

Hi Helmut,

Sorry, I am a layman and I can't understand everything correctly.

❝ There is no ‘R1’ and ‘R2’, only one R repeatedly administered in the sequences in different periods. Say, the sequences are TRR | RTR | RRT. Did the CRO call the first administration in each sequence R1 and the second R2 (while dropping T)?

Yes. But although this is not the usual approach, if you code the R treatment as R1 and R2, you can make an R1/R2 comparison as if they were different treatments, right? And in that case, is the 90% CI calculated by the CRO correct? It seems a very wide interval for a CV of 20%.

❝ As said above, such a ‘comparison’ will not work.

Why? Can you explain?

❝ You tried CI2CV(lower=0.8731, upper=1.4057, design="2x2x2", n=40), right? That’s not what you have. After recoding (wild guess) you have an Incomplete Block Design. Other degrees of freedom, :blahblah:

Yes, I know, but in any case the difference should be a matter of decimals, not a jump from 20% to 70%, right? I can't understand how they get a CV of 20% and I get 70%. What's wrong? Besides, with the CI being so wide, it seems to correspond to much more than a 20% CV.

❝ These formulas are for a 2×2×2 crossover and are also implemented in CI2CV(). But again, there is only one R. A ‘PE’ and ‘CI’ doesn’t make sense.

❝ You can only calculate the CVwR according to the EMA’s or the FDA’s models. The results will be similar, though quite often the one of the FDA is a bit smaller.

But although this is not the usual approach, if you code the R treatment as R1 and R2, you can make an R1/R2 comparison just out of curiosity, considering them as different treatments; it would be feasible, right? And in that case, is the 90% CI calculated by the CRO correct? It seems a very wide interval for a CV of 20%. Or is mine correct?


Edit: Standard quotes restored; see also this post #8. [Helmut]
Helmut
Vienna, Austria, 2023-06-15 16:49
@ Brus
Posting: # 23599

 Rose is a rose is a rose is a rose

Hi Brus,

❝ Sorry, I am a layman and I can't understand everything correctly.

:-D

❝ ❝ There is no ‘R1’ and ‘R2’, only one R repeatedly administered in the sequences in different periods. Say, the sequences are TRR | RTR | RRT. Did the CRO call the first administration in each sequence R1 and the second R2 (while dropping T)?

❝ Yes. But although this is not the usual approach, if you code the R treatment as R1 and R2, you can make an R1/R2 comparison as if they were different treatments, right?

Nope. Though you call them different treatments, they aren’t. You have one and only one treatment – administered in different sequences / periods.

❝ And in that case, is the 90% CI calculated by the CRO correct? It seems a very wide interval for a CV of 20%.

There is no right or wrong. Calculating a PE and CI doesn’t make sense.

❝ ❝ As said above, such a ‘comparison’ will not work.

❝ Why? Can you explain?

I’ll try… In a 2×2×2 crossover you have two different but similar things, say apples and oranges, and compare their weights (© Stephen Senn).
In your partial replicate you ignored the apple and tried to compare oranges with oranges. What? Heck, they are the same; only the conditions (sequence and period) differed.

❝ ❝ You can only calculate the CVwR according to the EMA’s or the FDA’s models. The results will be similar, though quite often the one of the FDA is a bit smaller.

❝ But although this is not the usual approach, if you code the R treatment as R1 and R2, you can make an R1/R2 comparison just out of curiosity, considering them as different treatments; it would be feasible, right? And in that case, is the 90% CI calculated by the CRO correct?

No, that’s nonsense. I think we are going in circles. What do you want to achieve?

Rose is a rose is a rose is a rose.

Let’s explore the EMA’s reference data set II, both with the approach given in the Q&A and with yours. See the R script at the end.

Type III AOV (CVwR: EMA approach)
Response: log(PK)
                 Df   Sum Sq  Mean Sq  F value     Pr(>F)
sequence          2 0.023936  0.01197  0.08523    0.91862
period            2 0.039640  0.01982  1.43328    0.24898
sequence:subject 21 2.948968  0.14043 10.15485 4.6897e-11
Residuals        46 0.636114  0.01383                   
CVwR (%)                     
11.80027

Type III AOV (pseudo 'R2' vs 'R1')
Response: log(PK)
                 Df   Sum Sq  Mean Sq F value     Pr(>F)
sequence          2 0.049616  0.02481 0.28973    0.75141
period            2 0.018554  0.00928 0.71416    0.50112
treatment         1 0.000044  0.00004 0.00336    0.95435
sequence:subject 21 1.798142  0.08563 6.59175 3.0663e-05
Residuals        21 0.272787  0.01299                   
CVw (%)                     
11.43441                   

'PE':  99.43%
'CI':  83.90%, 117.84%


Not only the SEs are different but also the degrees of freedom and, therefore, the MSEs / CVs.
CI2CV() from the ‘pseudo R comparison’ does not work with any design argument.
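
To illustrate (a minimal sketch; treating the pseudo-comparison’s CI as if it came from a conventional 2×2×2 crossover with 24 subjects, which it did not):

library(PowerTOST)
# Feed the pseudo 'R2' vs 'R1' CI to CI2CV() as if it came from a
# conventional 2x2x2 crossover with 24 subjects - wrong design, wrong df
CI2CV(lower = 0.8390, upper = 1.1784, n = 24, design = "2x2x2")
# roughly 0.35, nowhere near the ~11.4% CVw of the analysis above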


library(PowerTOST)
# EMA Dataset II (partial replicate, balanced, complete, 24 subjects)
EMA.II    <- read.csv("https://bebac.at/downloads/ds02.csv",
                      colClasses = c(rep("factor", 4), "numeric"))
options(digits = 12) # more digits for anova
# The EMA’s approach for CVwR
mod.CVwR  <- lm(log(PK) ~ sequence + subject %in% sequence + period,
                          data = EMA.II)
typeIII.a <- anova(mod.CVwR)
attr(typeIII.a, "heading")[1] <- "Type III AOV (CVwR: EMA approach)"
MSdenom   <- typeIII.a["sequence:subject", "Mean Sq"]
df2       <- typeIII.a["sequence:subject", "Df"]
fvalue    <- typeIII.a["sequence", "Mean Sq"] / MSdenom
df1       <- typeIII.a["sequence", "Df"]
typeIII.a["sequence", 4] <- fvalue
typeIII.a["sequence", 5] <- pf(fvalue, df1, df2, lower.tail = FALSE)
CVwR      <- 100 * mse2CV(typeIII.a["Residuals", "Mean Sq"])
typeIII.a <- rbind(typeIII.a, CVwR = c(NA, NA, CVwR, NA, NA))
row.names(typeIII.a)[5] <- "CVwR (%)"

# Only R; clumsy but transparent
# Keep the codes of sequence and period

EMA.II.R  <- EMA.II[EMA.II$treatment == "R", ]
EMA.II.Rs <- cbind(EMA.II.R, dummy = NA_character_)
EMA.II.Rs$dummy[EMA.II.Rs$sequence == "TRR" & EMA.II.Rs$period == 2] <- "R1"
EMA.II.Rs$dummy[EMA.II.Rs$sequence == "TRR" & EMA.II.Rs$period == 3] <- "R2"
EMA.II.Rs$dummy[EMA.II.Rs$sequence == "RTR" & EMA.II.Rs$period == 1] <- "R1"
EMA.II.Rs$dummy[EMA.II.Rs$sequence == "RTR" & EMA.II.Rs$period == 3] <- "R2"
EMA.II.Rs$dummy[EMA.II.Rs$sequence == "RRT" & EMA.II.Rs$period == 1] <- "R1"
EMA.II.Rs$dummy[EMA.II.Rs$sequence == "RRT" & EMA.II.Rs$period == 2] <- "R2"
EMA.II.Rs$dummy <- as.factor(EMA.II.Rs$dummy)
EMA.II.Rs$treatment <- EMA.II.Rs$dummy
# Compare second administration of R with the first
mod.ABE <- lm(log(PK) ~ sequence + subject %in% sequence + period + treatment,
                        data = EMA.II.Rs)
typeIII.b <- anova(mod.ABE)
attr(typeIII.b, "heading")[1] <- "Type III AOV (pseudo \'R2\' vs \'R1\')"
MSdenom   <- typeIII.b["sequence:subject", "Mean Sq"]
df2       <- typeIII.b["sequence:subject", "Df"]
fvalue    <- typeIII.b["sequence", "Mean Sq"] / MSdenom
df1       <- typeIII.b["sequence", "Df"]
typeIII.b["sequence", 4] <- fvalue
typeIII.b["sequence", 5] <- pf(fvalue, df1, df2, lower.tail = FALSE)
CVw       <- 100 * mse2CV(typeIII.b["Residuals", "Mean Sq"])
typeIII.b <- rbind(typeIII.b, CVw = c(NA, NA, CVw, NA, NA))
row.names(typeIII.b)[6] <- "CVw (%)"
PE        <- 100 * exp(coef(mod.ABE)[["treatmentR2"]])
CI        <- setNames(100 * as.numeric(exp(confint(mod.ABE, "treatmentR2",
                                                   level= 1 - 2 * 0.05))),
                      c("lower", "upper"))
txt       <- paste("\n\'PE\':", sprintf("%6.2f%%", PE),
                   "\n\'CI\':", sprintf("%6.2f%%, %6.2f%%\n",
                                        CI["lower"], CI["upper"]))
print(typeIII.a, digits = 6, signif.stars = FALSE)
print(typeIII.b, digits = 6, signif.stars = FALSE); cat(txt)


Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Achievwin
US, 2023-06-19 21:42
@ Brus
Posting: # 23604

 No ‘R1’ and ‘R2’ in replicate designs

❝ Rose is a rose is a rose is a rose

Not always… :confused::-D:-D:confused: … there are some ‘blessing from hell’ formulations… :confused::-D:-D:confused: