roman_max
☆

Russia,
2018-12-10 18:53

Posting: # 19674

## PE vs GMR [🇷 for BE/BA]

Dear all,

some time ago almost the same question was posted on the forum, but the problem still persists for me. My question is: do PE and GMR represent different metrics, numerically and/or statistically?

I posted it under the R for BE/BA category because I ran into this while performing NCA/statistical analysis with the bear package.

On the one hand, the GMR is by definition the ratio LSM(T)/LSM(R), or on the log scale Mean(lnCmax,T) minus Mean(lnCmax,R).
On the other hand, the PE is sqrt(LL * UL), i.e. the geometric mean of the confidence limits.

In my example for Cmax below, ln(GMR) = 4.097 - 3.971 = 0.126, which after exponentiation gives GMR = 1.1343.
PE = sqrt(0.98809 * 1.30785) = 1.13678, which corresponds to the table data.

So the results for GMR and PE are slightly different: 1.1343 for GMR and 1.13678 for PE. But why? Is my logic somewhere wrong?
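Both computations can be reproduced from the rounded values in the output below; a minimal Python check (plain arithmetic, no study data beyond the table):

```python
import math

# ln-scale LSMeans and 90% CI limits taken from the bear output below
lsm_T, lsm_R = 4.097, 3.971
lo, up = 0.98809, 1.30785

gmr = math.exp(lsm_T - lsm_R)   # ratio of back-transformed means
pe = math.sqrt(lo * up)         # geometric mean of the CI limits
print(round(gmr, 4), round(pe, 5))  # 1.1343 vs 1.13678
```

The two numbers indeed differ slightly, which is exactly the question at hand.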

The example output data:
Stat. Summaries for Pivotal Parameters of Bioequivalence (N = 45)
----------------------------------------------------------------

Parameters    Test_Mean  Test_SD  Ref_Mean  Ref_SD
Cmax             67.390   31.872    60.398  31.526
AUC0-t          105.235   40.376   106.518  39.090
AUC0-inf        107.544   40.624   109.815  40.131
ln(Cmax)          4.097    0.494     3.971   0.527
ln(AUC0-t)        4.581    0.406     4.597   0.394
ln(AUC0-inf)      4.605    0.397     4.629   0.389

Statistical Summaries for Pivotal Parameters of Bioequivalence (N = 45)
(cont'd)
----------------------------------------------------------------

Parameters    F values  P values   PE (%)  Lo 90%CI  Up 90%CI
Cmax             1.492     0.229        -         -         -
AUC0-t           0.099     0.755        -         -         -
AUC0-inf         9.612     0.000        -         -         -
ln(Cmax)         2.410     0.002  113.678    98.809   130.785
ln(AUC0-t)      10.306     0.000   98.499    92.772   104.580
ln(AUC0-inf)    10.564     0.000   97.757    92.232   103.614

-------------------------------------------
Both F values and P values were obtained
from ANOVA respectively.
90%CI: 90% confidence interval
PE(%): point estimate; = square root of
(lower 90%CI * upper 90%CI)
Ohlbe
★★★

France,
2018-12-11 01:28

@ roman_max
Posting: # 19676

## PE vs GMR

Dear roman_max,

❝ My question is: do PE and GMR represent different metrics, numerically and/or statistically?

Yes.

❝ On the one hand GMR by definition is a ratio of LSM T/LSM R

No. This is PE. The GMR is the ratio of geometric means, not of LSM (LSMEAN in SAS-speak). For more details on how to calculate LSM and the difference between LSM and geometric mean, see this oldie.

PE and GMR will be identical if you have a balanced study (the same number of subjects in sequence RT and sequence TR). If the study is unbalanced (e.g. if you have a dropout), you'll get different results. The difference is usually minimal, but can be more pronounced if you have an outlier.
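The balanced/unbalanced distinction can be illustrated with a toy example (hypothetical log-scale values, not from any study): the pooled mean of the logs, which back-transforms to the geometric mean, and an LSMean-style average of per-sequence means agree when the sequences are balanced and diverge when they are not.

```python
# Hypothetical ln(Cmax) values for treatment T, split by sequence
seq_TR = [4.1, 4.2, 4.3]   # T observations from sequence TR
seq_RT = [3.9, 4.0, 4.4]   # T observations from sequence RT

def pooled_mean(a, b):
    # plain mean of all logs -> back-transforms to the geometric mean
    return sum(a + b) / len(a + b)

def ls_mean(a, b):
    # LSMean-style: average the per-sequence means
    return (sum(a) / len(a) + sum(b) / len(b)) / 2

# balanced: both give the same value (~4.15)
print(pooled_mean(seq_TR, seq_RT), ls_mean(seq_TR, seq_RT))

# drop one subject from sequence RT -> unbalanced, the two diverge
seq_RT2 = seq_RT[:-1]
print(pooled_mean(seq_TR, seq_RT2), ls_mean(seq_TR, seq_RT2))  # ~4.1 vs ~4.075
```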

Regards
Ohlbe
roman_max
☆

Russia,
2018-12-12 17:31

@ Ohlbe
Posting: # 19683

## PE vs GMR

Dear Ohlbe,

thanks a lot for the clarification and the post provided. Since I'm using the R bear package, the terminology for means seems similar to SAS (LSMEAN); I'm not familiar with the latter software.

I often see that in BE study reports submitted to EMA/FDA by different companies the GMR is the reported metric (along with its CIs), not the PE. This is why I'm interested in what we really "calculate" and why: is it specific to particular software, a matter of different terminology, or something else?

In a BE study report there is a summary table for presenting BE results, where geometric means (or LSMEANs?) on the natural scale, the GMR (%) (or PE (%)?) and the CI (%) should be reported.

In my example with R/bear for Cmax and log(Cmax), the output file contains the following data (designations "as is" in the output file):
Dependent Variable: log(Cmax)
------------------------------------------
n1(R -> T) = 23
n2(T -> R) = 22
N(n1+n2) = 45
Lower criteria = 80.000 %
Upper criteria = 125.000 %
MEAN-ref = 3.970504
MEAN-test = 4.097524
MSE = 0.1563787
SE = 0.08338823
Estimate(test-ref) = 0.1282001

* Classical (Shortest) 90% C.I. for log(Cmax) *

Point Estimate CI90 lower CI90 upper
113.678 98.809 130.785

As Yung-jin explained to me, the PE in R/bear is not a direct arithmetic difference; it is estimated within a specific linear model in the ANOVA.

MEAN-ref and MEAN-test correspond to the respective LSMEANs in the PK Parameter Summaries, so that is OK.

<< PK Parameter Summaries >>

ln(Cmax)
-----------------------------------
summary Test Ref Ratio
LSMEAN 4.097 3.971 1.043
This is for logarithms

So, exp(LSMEAN[T]) = 60.159 and exp(LSMEAN[R]) = 53.038, and these are geometric means, right?
So the ratio of geometric means (GMR) is 60.159/53.038 = 1.13428, i.e. 113.428%.
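A quick numeric check of the back-transformation (Python, using the rounded LSMeans from the output above):

```python
import math

lsm_T, lsm_R = 4.097, 3.971           # ln-scale LSMeans from the bear output
geo_T = math.exp(lsm_T)               # back-transformed mean, ~60.16
geo_R = math.exp(lsm_R)               # back-transformed mean, ~53.04
print(round(100 * geo_T / geo_R, 2))  # ratio in percent, ~113.43
```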

Is my understanding correct now?

But what should be reported to the authorities, the PE or the GMR? For a balanced study they are equal, but for an unbalanced one they are slightly different.
And what about CIs? Should I construct them for the GMR as well?

Thanks in advance for further clarifications.
mittyri
★★

Russia,
2018-12-12 18:40

@ roman_max
Posting: # 19685

## PE vs GMR in Bear

Dear roman_max,

❝ I often see that in BE study reports submitted to EMA/FDA by different companies the GMR is the reported metric (along with its CIs), not the PE. This is why I'm interested in what we really "calculate" and why: is it specific to particular software, a matter of different terminology, or something else?

Yes, the terminology is confusing.
Even if you look into EMA guideline:
The assessment of bioequivalence is based upon 90% confidence intervals for the ratio of the population geometric means (test/reference) for the parameters under consideration.
<...>
The geometric mean ratio (GMR) should lie within the conventional acceptance range 80.00-125.00%.

The guideline uses the terms 'geometric means' and 'geometric mean ratio', but adjusted means (a.k.a. LSMeans) should actually be used. As an example, please see the code and conclusions in the EMA PKWP Q&A

❝ In a BE study report there is a summary table for presenting BE results, where geometric means (or LSMEANs?) in natural scale, GMR (%) (or PE(%)?) and CI(%) should be reported.

Common practice is to present GeoLSMeans (backtransformed LSMeans) and their ratio (PE).

❝ MEAN-ref = 3.970504

❝ MEAN-test = 4.097524

❝ MSE = 0.1563787

❝ SE = 0.08338823

❝ Estimate(test-ref) = 0.1282001

4.097524 - 3.970504 = 0.12702

❝ Point Estimate CI90 lower CI90 upper

❝ 113.678 98.809 130.785

log(113.678/100) = 0.1281997 = Estimate(test-ref), which is not the same as the plain difference of the means (0.12702).
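The two numbers can be compared directly; a small Python check with the values copied from the bear output above:

```python
import math

mean_diff = 4.097524 - 3.970504      # plain difference of the log means
estimate  = math.log(113.678 / 100)  # model estimate backed out of the PE%
# 0.12702 vs ~0.1282: the model estimate is not the raw mean difference
print(round(mean_diff, 5), round(estimate, 4))
```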

❝ As Yung-jin explained me that PE in R/bear is not a direct arithmetic difference, and it is estimated in a specific linear model in ANOVA.

Sounds strange to me. Do we have the ratio of LSMeans, the ratio of GeoMeans and something else?

❝ << PK Parameter Summaries >>

❝ summary Test Ref Ratio

❝ LSMEAN 4.097 3.971 1.043

See above:
MEAN-test = 4.097524, which should be rounded to 4.098, not 4.097

❝ So, Exp(LSMEAN[T])=60.159 and Exp(LSMEAN[R])=53.038, and these are geometric means, right?

These are GeoLSMeans

❝ Does my understanding correct now?

I think we need to wait for Yung-jin. I'm too lazy to dig into BEAR

❝ But what to report to authorities? PE or GMR? For balanced study they are equal, but for imbalanced they are slightly different.

Reporting GeoLSMeans and their ratio is common practice.
You can add a footnote somewhere describing why the geometric means may differ from the LSMeans.

❝ And what about CIs? Should I construct them for GMR as well?

See above. Do not try to complicate your life

Kind regards,
Mittyri
yjlee168
★★★

Kaohsiung, Taiwan,
2018-12-12 19:58

@ mittyri
Posting: # 19686

## estimate(test-ref) in bear

Dear mittyri & all others,

❝ ❝ As Yung-jin explained me that PE in R/bear is not a direct arithmetic difference, and it is estimated in a specific linear model in ANOVA.

❝ Sounds strange to me. Do we have the ratio of LSMeans, the ratio of GeoMeans and something else?

❝ Let's wait for Yung-jin's comments

Estimate(test-ref) in bear is not obtained directly from the difference of mean values or LSMeans. It is obtained from one of the coefficients of the lm() fit. I cut the output from a test run and show it as follows:
---
Summary Pivotal Parameters of BE Study
------------------------------------------
Dependent Variable: log(Cmax)
------------------------------------------
n1(R -> T) = 7
n2(T -> R) = 7
N(n1+n2) = 14
Lower criteria = 80.000 %
Upper criteria = 125.000 %
MEAN-ref = 7.32951
MEAN-test = 7.40146
MSE = 0.01780339
SE = 0.05043155
Estimate(test-ref) = 0.07195028

######  testing  ######
Call:
lm(formula = log(Cmax) ~ seq + subj:seq + prd + drug, data = TotalData,
na.action = na.exclude)

Residuals:
Min       1Q   Median       3Q      Max
-0.13876 -0.08473  0.00000  0.08473  0.13876

Coefficients: (14 not defined because of singularities)
Estimate Std. Error t value Pr(>|t|)
(Intercept)  7.48268    0.10086  74.187   <2e-16 ***
seq2        -0.06357    0.13343  -0.476   0.6423
prd2        -0.05095    0.05043  -1.010   0.3323
drug2        0.07195    0.05043   1.427   0.1792
...
---

Please browse "Interpreting regression coefficient in R" for more details (the explanations for the coefficient x32). Does anyone know the name of this coefficient in statistics? Thanks.
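For what it's worth, back-transforming that drug2 coefficient gives the PE on the percent scale; a quick Python check with the value from the output above:

```python
import math

est = 0.07195028              # drug2 coefficient = Estimate(test-ref)
pe_pct = 100 * math.exp(est)  # point estimate in percent
print(round(pe_pct, 2))       # ~107.46
```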

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
mittyri
★★

Russia,
2018-12-12 23:57

@ yjlee168
Posting: # 19688

## estimate(test-ref) and lsmeans in bear

Dear Yung-jin,

Thank you for the explanation.

❝ Estimate(test-ref) in bear is not obtained directly from the difference of mean values or lsmean. It is obtained from one of coefficients of lm() function.

In your example the dataset is balanced.
The question is how do you obtain LSMeans or what do you output as
MEAN-ref = 7.32951
MEAN-test = 7.40146

❝ please browse "Interpreting regression coefficient in R" for more details (the explanations for the coefficient x32). Does anyone know the name of this coefficient in statistics? Thanks.

That is called a contrast.

Here's a little code snippet where I show that the LSMeans difference is the point estimate irrespective of the sequence balance.

library(data.table)
library(curl)
library(emmeans)
alpha <- 0.05
# PKdf: a data.table with columns Subj, Per, Seq, Trt and the PK variable Var,
# assumed to be loaded at this point
setorder(PKdf, Subj)

factorcols <- colnames(PKdf)[-5]
PKdf[, (factorcols) := lapply(.SD, as.factor), .SDcols = factorcols]
ow <- options()
options(contrasts = c("contr.treatment", "contr.poly"))

# get rid of 3 subjects in one sequence
PKdf_1 <- PKdf[which(Subj != 1L & Subj != 2L & Subj != 4L), ]
logmeans <- data.frame(PKdf_1[, lapply(.SD, function(x) mean(log(x))), by = Trt, .SDcols = "Var"])
colnames(logmeans) <- c("Trt", "logmean")

muddle <- lm(log(Var) ~ Trt + Per + Seq + Subj, data = PKdf_1)

print(table(PKdf_1[, Seq]))
# RT TR
# 12 18

# with the standard contrasts option "contr.treatment"
# the coefficient TrtT is the difference T-R since TrtR is set to zero
lnPE <- coef(muddle)["TrtT"]
cat("The contrast from lm() is ", lnPE)
# The contrast from lm() is  -0.04975903

print(logmeans)
# Trt  logmean
# 1   T 4.916300
# 2   R 4.980382

# compare with LSMeans
LSMobj <- lsmeans(muddle, "Trt")
print(cbind.data.frame(Trt = summary(LSMobj)$Trt, LSMean = summary(LSMobj)$lsmean))
#   Trt   LSMean
# 1   R 4.988564
# 2   T 4.938805

cat("LSMeans difference is ", summary(LSMobj)$lsmean[2] - summary(LSMobj)$lsmean[1])
# LSMeans difference is  -0.04975903

LSMpair <- pairs(LSMobj, reverse = TRUE)
cat("The contrast from LSMeans difference is ", summary(LSMpair)$estimate)
# The contrast from LSMeans difference is  -0.04975903

lnCI <- confint(muddle, c("TrtT"), level = 1 - 2 * alpha)
print(lnCI)
# 5 %        95 %
#   TrtT -0.1013179 0.001799863

print(confint(LSMpair, level = 0.9))
# contrast    estimate         SE df   lower.CL  upper.CL
# T - R    -0.04975903 0.02911397 13 -0.1013179 0.001799863
#
# Results are averaged over the levels of: Per, Subj, Seq
# Results are given on the log (not the response) scale.
# Confidence level used: 0.9

# reset options
options(ow)

Kind regards,
Mittyri
yjlee168
★★★

Kaohsiung, Taiwan,
2018-12-13 13:20

(edited by yjlee168 on 2018-12-13 14:19)
@ mittyri
Posting: # 19689

## estimate(test-ref) and lsmeans in bear

Dear mittyri,

Very sorry about the delay in replying to this message.

❝ In your example the dataset is balanced.

That's correct.

❝ The question is how do you obtain LSMeans or what do you output as

❝ MEAN-ref = 7.32951

❝ MEAN-test = 7.40146

These two values were simply obtained from mean(log(Cmax)); after exponentiation they are the geometric means as defined. Since only balanced crossover studies are analyzed in bear, the LSMeans are the same as the arithmetic means of the logs.
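That identity can be illustrated with a toy vector (hypothetical values): exponentiating the mean of the logs gives exactly the classical geometric mean.

```python
import math

x = [40.0, 55.0, 70.0, 90.0]                     # hypothetical Cmax values
log_mean = sum(math.log(v) for v in x) / len(x)  # mean(log(Cmax))
geo_mean = math.prod(x) ** (1 / len(x))          # classical geometric mean
print(math.exp(log_mean), geo_mean)              # the two coincide
```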

❝ ❝ please browse "Interpreting regression coefficient in R" for more details (the explanations for the coefficient x32). Does anyone know the name of this coefficient in statistics? Thanks.

❝ That is called as a contrast.

OK, thanks a lot. I will also try your code later with my test dataset. Your code shows and clarifies the difference in the calculation of LSMeans and geometric means for an unbalanced crossover study, as well as that the contrast is equal to ln(PE) (I did not know that before!). Please correct me if I have misinterpreted your code. Thanks again.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
mittyri
★★

Russia,
2018-12-11 17:30

@ roman_max
Posting: # 19680

## PE vs GMR

Dear roman_max,

just some additional questions continuing Ohlbe's post above:
Are you dealing with a replicate or a simple crossover?
Are there any dropouts? I see you included 45 subjects in the analysis. Did they all finish the study as planned?

Kind regards,
Mittyri
roman_max
☆

Russia,
2018-12-12 17:41

@ mittyri
Posting: # 19684

## PE vs GMR

Dear mittyri,

thank you for the questions. This study was a classic BE study with a 2x2x2 design. There was one dropout, who was excluded from the PK and statistical analysis, so I had an imbalance between the sequences.

I've replied to Ohlbe and asked several additional questions. If you have an opinion, please comment as well; I have to resolve these uncertainties.
ElMaestro
★★★

Denmark,
2018-12-12 22:39
(1458 d 02:09 ago)

@ roman_max
Posting: # 19687

## PE vs GMR

Hi roman_max,

I use the terms PE and GMR interchangeably.
I am not necessarily happy to do so, but there you are.

On the log scale we have differences of model effects (what Yung-jin mentioned, though with that coding it becomes difficult to relate to real life), differences of geometric means, differences of LSMeans, and God knows what else.

All this boils down to the same thing with balanced designs, and to confusion more generally. Probably someone is too proud to fix it. Semantics is only important when regulators think so.

Pass or fail!
ElMaestro
roman_max
☆

Russia,
2018-12-14 00:11

@ ElMaestro
Posting: # 19690

## PE vs GMR

Hi ElMaestro,
Thanks for the opinion. Indeed, there is no problem when there are no dropouts from the study. And you definitely don't know how a particular piece of software applies its sophisticated logic in calculations, coefficients, modelling, etc. Sometimes I also use SPSS and its syntax, but it cannot account for imbalance the way the R lm() function does. So it depends...
So, as a solution, I will add footnotes to the tables to explain to "readers" in RA what the particular metrics mean and how they were calculated/estimated, along with the SAP :)
Hope that will be sufficient for our regulator not to knock us down.
mittyri
★★

Russia,
2018-12-14 16:35

@ roman_max
Posting: # 19691

## SPSS vs lm

Hi roman_max,

❝ Sometimes I also use SPSS and its syntax, but it cannot account for imbalance the way the R lm() function does.

that is strange.
We compared SPSS vs PHX WNL vs SAS vs lm() using PK datasets with replicated designs (including some sophisticated ones with dropouts) and didn't find any significant deviations. So I think it can.

Kind regards,
Mittyri
roman_max
☆

Russia,
2018-12-15 14:37

@ mittyri
Posting: # 19693

## SPSS vs lm

Dear mittyri,
Could you please share the SPSS syntax you use for modelling the factors in a BE study? :) I use GLM UNIANOVA.
mittyri
★★

Russia,
2018-12-15 23:07

@ roman_max
Posting: # 19694

## SPSS BE code

Dear roman_max,

As far as I can see, you asked for that code already.

Didn't that code fit well?
The code we used for comparison is similar to that one.

Kind regards,
Mittyri
ElMaestro
★★★

Denmark,
2018-12-14 17:11

@ roman_max
Posting: # 19692

## PE vs GMR

Hello roman_max,

❝ And you definitely don't know how a particular piece of software applies its sophisticated logic in calculations, coefficients, modelling, etc. Sometimes I also use SPSS and its syntax, but it cannot account for imbalance the way the R lm() function does.

First of all, there is no hidden secret in the way the software applies its logic. All of this is generally completely documented and clear, albeit not always easy to understand (the part I am currently struggling with is the actual derivation of an LS Mean in general, for example also in models that are more sophisticated than those used for BE, such as when covariates are in play. I can't say the SAS documentation is very understandable. But that's just me).
The issue, as I understood your question, related more to what we call the output and whether there is a discrepancy in the nomenclature.

Next, SPSS will definitely be able to account for imbalance, I am sure of it. If it doesn't appear so when you run the model, then this may be a case of garbage in, garbage out.

Pass or fail!
ElMaestro