d_labes
★★★

Berlin, Germany,
2009-04-01 19:00

Posting: # 3442

 Bear to bear interval with 90% confidence [🇷 for BE/BA]

Dear Bears, dear Rusers, dear all,

Recently I (The power to know is with me) had the opportunity to take a deeper look at R and into the source code of bear.

Since I am a novice in R I could not fully understand all of the code. But I gathered that the 90% confidence interval is calculated in BANOVA.r (the part of bear which is the counterpart of SAS GLM), albeit in a different manner.

My interpretation of the code is: The MSE is taken from the lm() call, but the point estimate is calculated from the ordinary means mT and mR.
Anybody out there: am I right?

If I am right: This is ok, as long as the design is balanced between sequences. But for unbalanced data there is a difference.

In SAS this is usually done via LSMeans or an ESTIMATE statement.

Astonishingly enough there seems to be no counterpart to LSMeans in R. After reading tons of web pages it seems this is another minefield, like Type III sums of squares.

But nevertheless there remains the task of estimating the treatment effect with a 90% confidence interval (from the parameters of the fitted model), i.e. from a counterpart of the ESTIMATE statement.

Dear Bears, have a look at this page. Perhaps this helps you. I don't understand it, because I am a novice ;-).

Regards,

Detlew
ElMaestro
★★★

Denmark,
2009-04-01 21:01

@ d_labes
Posting: # 3445

 Bear to bear interval with 90% confidence

Dear DLabes,

I could be wrong, but.........

❝ My interpretation of the code is: The MSE is taken from the lm() call, but the point estimate is calculated from the ordinary means mT and mR.
❝ Anybody out there: am I right?


Better let the authors answer here, but if you are right then bear has a potential problem in the case of imbalance. But you could also ask yourself: do I want to test whether the means differ, or whether the LSMeans differ?
I once heard that some statisticians think that a difference in LSMeans should be tested with type III SS. After all, LSMeans are means adjusted for all other effects, while the type III SS for a factor X corresponds to the variation caused by factor X given the presence of all other factors (thus 'single term deletions'), which may be the same as saying that a type III SS is the variation for factor X adjusted for the variation caused by everything else. So I may digress and ask: if type III SS provide tests for LSMeans, then which SS are appropriate for ordinary means? And why would the former be more relevant than the latter from a practical standpoint?

❝ If I am right: This is ok, as long as the design is balanced between sequences. But for unbalanced data there is a difference.


Certainly.

❝ In SAS this is usually done via LSMeans or an ESTIMATE statement.


❝ Astonishingly enough there seems to be no counterpart to LSMeans in R. After reading tons of web pages it seems this is another minefield, like Type III sums of squares.


It is quite easy to find websites with renowned statisticians who do not really believe that LSMeans are more relevant than ordinary, normal, old-fashioned, boring means.
The people who write R do it for a reason. For example, some of them do not like SAS. Therefore they don't like LSMeans and type III SS. Therefore these things are not really implemented in R. It's just like getting tired of one type of random number generator and then coming up with a new one which has better attributes ('better' in this regard being completely in the eye of the beholder).
So can we extract LSMeans from an lm fit? No!
Can we instead extract the difference in LSMeans (test minus R) from an lm fit? Yes! And this is exactly what we need for the BE evaluation, because log(LSMean(T)/LSMean(R)) = log(LSMean(T)) - log(LSMean(R)).
As far as I know the reason has to do with differences in the way model matrices are specified in R and SAS. I was at some point thinking about asking a question about this here, because I noticed that some model matrices in R are not full rank. Anyone with insight, please comment!

❝ But nevertheless there remains the task of estimating the treatment effect with a 90% confidence interval (from the parameters of the fitted model), i.e. from a counterpart of the ESTIMATE statement.


Extract the difference in LSMeans, and you have your estimated T/R ratio (after back-transformation). Extract the MSE, and you get your sigma. Calculate the 90% CI, leave work early and enjoy the good weather.
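
Something along these lines should do (a sketch only; the data frame bedata and the coefficient label "TrtT" are made-up names, so check names(coef(fit)) against your own factor coding):

# Minimal sketch; 'bedata' and the coefficient label are hypothetical.
fit <- lm(lnAUC ~ Subj + Seq + Per + Trt, data = bedata)

# The Trt coefficient is the difference in LSMeans; the sign depends on
# the factor coding -- check names(coef(fit)) for the actual label.
est <- coef(summary(fit))["TrtT", "Estimate"]
se  <- coef(summary(fit))["TrtT", "Std. Error"]

# Classical (shortest) 90% CI, back-transformed to the T/R ratio in percent
ci <- est + c(-1, 1) * qt(0.95, df.residual(fit)) * se
100 * exp(c(lower = ci[1], PE = est, upper = ci[2]))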


EM.
yjlee168
★★★
Kaohsiung, Taiwan,
2009-04-01 22:38

@ ElMaestro
Posting: # 3447

 Bear to bear interval with 90% confidence

Dear ElMaestro,

❝ The people who write R do it for a reason. For example some of them do not like SAS. Therefore they don't like LSMeans and type III SS. [...]


You heard that too. We were quite frustrated when coding bear and trying to validate the results obtained from bear against their SAS counterparts. Do you remember that when we announced bear v1.0, you suggested that we should have Type III SS output in bear? That was when we started gathering more information about the differences between SAS and R.

❝ So can we extract LSMeans from an lm fit? No!
❝ Can we instead extract the difference in LSMeans (test minus R) from an lm fit? Yes! And this is exactly what we need for the BE evaluation, because log(LSMean(T)/LSMean(R)) = log(LSMean(T)) - log(LSMean(R)) [...]


Interesting points. Do you mean for unbalanced data or for all kinds here? I didn't know this before.

❝ As far as I know the reason has to do with differences in the way model matrices are specified in R and SAS. I was at some point thinking about asking a question about this here, because I noticed that some model matrices in R are not full rank. Anyone with insight, please comment!


Maybe we need to go to some UseR! forums to find out.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
ElMaestro
★★★

Denmark,
2009-04-01 23:28

(edited by ElMaestro on 2009-04-02 07:56)
@ yjlee168
Posting: # 3448

 Bear to bear interval with 90% confidence

Hi again,

❝ ❝ And this is exactly what we need for the BE evaluation, because log(LSMean(T)/LSMean(R)) = log(LSMean(T)) - log(LSMean(R)) [...]

❝ Interesting points. Do you mean for unbalanced data or for all kinds here? I didn't know this before.


Yes, this is actually the easy part.
If you use something like this:

TotallyFantasticFit <- lm(lnAUC ~ Subj + Seq + Per + Trt)   # Subj, Seq, Per, Trt as factors

Then you can get the difference in LSMeans via:
summary(TotallyFantasticFit)

Note that the sign of the treatment effect will depend on the way the data have been garbled initially: it will be equal to either (LSMean(T)-LSMean(R)) or (LSMean(R)-LSMean(T)), so be aware of the sign.

The alternative way to do it is to simply calculate the LSMeans by averaging over the sequences; see Chow & Liu's book.
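
For a 2,2,2-study the averaging could be sketched like this (bedata and the column names are placeholders; each treatment occurs once per sequence, so the unweighted average of the two sequence cell means is the LSMean, immune to imbalance between sequences):

# Cell means by treatment and sequence; 'bedata' is a hypothetical name.
cellmeans <- tapply(bedata$lnAUC, list(bedata$Trt, bedata$Seq), mean)
lsmeans   <- rowMeans(cellmeans)      # LSMean(T) and LSMean(R)
lsmeans["T"] - lsmeans["R"]           # should match the lm() treatment effect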

It is solely a matter of taste.
Your end result (T/R ratio) will be the same regardless. And it will work for balanced as well as unbalanced datasets.

EM.

Edit 02.04.09: Error in my explanation of LSMean calculation.
yjlee168
★★★
Kaohsiung, Taiwan,
2009-04-01 22:15

@ d_labes
Posting: # 3446

 Bear to bear interval with 90% confidence

Dear D. Labes,

Please allow me to respond to this message quickly and briefly.

❝ My interpretation of the code is: The MSE is taken from the lm() call, but the point estimate is calculated from the ordinary means mT and mR.
❝ Anybody out there: am I right?


Exactly.

❝ If I am right: This is ok, as long as the design is balanced between sequences. But for unbalanced data there is a difference.


We discussed this back in previous versions. Indeed, we should use lme() (the counterpart to SAS PROC MIXED) to analyze unbalanced data, not lm(). We forgot to take care of this part so far. Many thanks for reminding us. We need to add a conditional statement to the 2x2x2 crossover design so that unbalanced data are analyzed with lme().

❝ Astonishingly enough there seems to be no counterpart to LSMeans in R. After reading tons of web pages it seems this is another minefield, like Type III sums of squares.


Exactly, just as ElMaestro replied. In R there is no direct counterpart to LSMeans.

❝ But nevertheless there remains the task of estimating the treatment effect with a 90% confidence interval (from the parameters of the fitted model), i.e. from a counterpart of the ESTIMATE statement.


In R, I think lme() should be able to solve the problem easily and nicely. We previously validated the results obtained with lme() for unbalanced data against SAS PROC MIXED, and they were the same. Unfortunately, we simply forgot to implement this in bear at that time.
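
A bare-bones sketch of what we have in mind (using the nlme package; the data frame and column names are placeholders only):

# lme() with subject as a random effect, roughly the counterpart of
# SAS PROC MIXED for a 2x2x2 crossover; 'bedata' is a hypothetical name.
library(nlme)
fit.lme <- lme(lnAUC ~ Seq + Per + Trt, random = ~ 1 | Subj,
               data = bedata, na.action = na.omit)
intervals(fit.lme, level = 0.90, which = "fixed")   # 90% CIs incl. Trt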


❝ Dear Bears, have a look at this page. Perhaps this helps you. I don't understand it, because I am a novice ;-).


Thanks for your pointer.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
ElMaestro
★★★

Denmark,
2009-04-01 23:47

@ yjlee168
Posting: # 3450

 Bear to bear interval with 90% confidence

Hi again again,

❝ We discussed this back in previous versions. Indeed, we should use lme() (the counterpart to SAS PROC MIXED) to analyze unbalanced data, not lm(). We forgot to take care of this part so far. Many thanks for reminding us. We need to add a conditional statement to the 2x2x2 crossover design so that unbalanced data are analyzed with lme().


I have come to realise* that one should be safe enough with lm for unbalanced data as long as we are talking about 2,2,2-BE studies, but you might want to make sure to use LSMeans as indicated in the previous post rather than plain means (at least if your intention is to get SAS-like results).
As soon as we talk about more complex designs and/or the inclusion of subjects with missing period values, a switch to a mixed muddle/REML is mandatory.
To cut it all short: lme will always work for you, even with 2,2,2-studies, but from a computational viewpoint lme for 2,2,2-BE is somewhat overkill. On a modern computer it takes a split second anyway, and you will never see the difference in terms of computational effort.
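
If in doubt, a quick check is easy; something like this sketch (names are placeholders):

# For a complete 2,2,2 dataset lm and lme should agree on the treatment
# effect ('bedata' and the coefficient label "TrtT" are hypothetical).
library(nlme)
fit.lm  <- lm(lnAUC ~ Subj + Seq + Per + Trt, data = bedata)
fit.lme <- lme(lnAUC ~ Seq + Per + Trt, random = ~ 1 | Subj, data = bedata)
coef(fit.lm)[["TrtT"]]     # check names(coef(fit.lm)) for the real label
fixef(fit.lme)[["TrtT"]]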

EM.

*: Didn't dlabes explain this some time last year? I remember I tried really hard to challenge lm with an unbalanced dataset, to see if I could get it "wrong" with lm due to imbalance in a 2,2,2-study, but I failed to do that.
yjlee168
★★★
Kaohsiung, Taiwan,
2009-04-05 01:11

@ ElMaestro
Posting: # 3467

 Bear to bear interval with 90% confidence

Dear ElMaestro,

We tried your method with lm() and it worked perfectly: the results were exactly the same as the SAS output for both balanced and unbalanced data. We also tried the method (this link) pointed out by D. Labes, and it didn't work; we're still trying to find out why. It could be because the dataset we used for testing included a missing value. I will post the final result here later if necessary. For now, here are the results obtained with your suggested method for the balanced and unbalanced datasets.

Balanced dataset with bear
      Statistical analysis (ANOVA(lm))
[...]
**************** Classical (Shortest) 90% C.I. for lnCmax *****************

  CI90_lower Point_estimated CI90_upper
1     97.550         106.184    115.582
[...]

**************** Classical (Shortest) 90% C.I. for lnAUC0t ****************

  CI90_lower Point_estimated CI90_upper
1     94.459         106.640    120.391
[...]

**************** Classical (Shortest) 90% C.I. for lnAUC0INF **************

  CI90_lower Point_estimated CI90_upper
1     94.478         106.335    119.681
[...]

Balanced dataset with SAS (only showing Cmax here)
[...]
Least Squares Means

                          lncmax        H0:LSMean1=LSMean2
          drug            LSMEAN        Pr > |t|

          1               7.33071429    0.2313
          2               7.39071429


                          lncmax
          drug            LSMEAN        90% Confidence Limits

          1               7.330714      7.270747    7.390681
          2               7.390714      7.330747    7.450681


          Least Squares Means for Effect drug

                          Difference
                          Between       90% Confidence Limits for
          i    j          Means         LSMean(i)-LSMean(j)

          1    2          -0.060000     -0.144806    0.024806


          drug coding: 1-Ref. & 2-Test
[...]

Summary:
lnCmax:
point->106.184
lowerCI->97.55
upperCI->115.582

lnAUC0t:
point->106.640
lowerCI->94.459
upperCI->120.391

lnAUC0inf:
point->106.335
lowerCI->94.478
upperCI->119.681


Unbalanced dataset using bear
[...]
**************** Classical (Shortest) 90% C.I. for lnCmax ****************

  CI90_lower Point_estimated CI90_upper
1     96.260         105.553    115.745
[...]

**************** Classical (Shortest) 90% C.I. for lnAUC0t ****************

  CI90_lower Point_estimated CI90_upper
1     92.360         105.202    119.830
[...]

**************** Classical (Shortest) 90% C.I. for lnAUC0INF **************

  CI90_lower Point_estimated CI90_upper
1     92.419         104.915    119.100
[...]


Unbalanced dataset with SAS (only showing Cmax here)
[...]
Least Squares Means

                          lncmax        H0:LSMean1=LSMean2
          drug            LSMEAN        Pr > |t|

          1               7.32916667    0.3149
          2               7.38321429


                          lncmax
          drug            LSMEAN        90% Confidence Limits

          1               7.329167      7.263994    7.394339
          2               7.383214      7.318042    7.448387


          Least Squares Means for Effect drug

                          Difference
                          Between       90% Confidence Limits for
          i    j          Means         LSMean(i)-LSMean(j)

          1    2          -0.054048     -0.146215    0.038120

          drug coding: 1-Ref. & 2-Test
[...]

Summary as follows:
lnCmax:
point->105.55
lowerCI->96.260
upperCI->115.745

lnAUC0t:
point->105.202
lowerCI->92.36
upperCI->119.83

lnAUC0inf:
point->104.915
lowerCI->92.419
upperCI->119.100


❝ *: Didn't dlabes explain this some time last year? I remember I tried really hard to challenge lm with an unbalanced dataset, to see if I could get it "wrong" with lm due to imbalance in a 2,2,2-study, but I failed to do that.


Yes, you're right. Sorry about my poor memory. Thank you for your suggestions.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
ElMaestro
★★★

Denmark,
2009-04-05 15:23

@ yjlee168
Posting: # 3472

 Bear to bear interval with 90% confidence

Hi,

lm and lme should give you the same result for balanced and unbalanced 2,2,2-ANOVA and the 90% CI.
If you get dissimilar results then try to force lme to exclude subjects with a missing period. I think it is via setting the na.action to "na.omit" or "na.exclude" or something like that. Not sure, not able to test it right now.
Alternatively, to be completely safe you could also simply clean up the data frame or dataset and remove the subject(s) in question. Depending on how you implement your derivation of the LSMeans this might actually be a good option, perhaps avoiding some confusion.
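
The cleanup itself can be as simple as this sketch (the data frame and column name subj are placeholders; two observations per subject means both periods are present):

# Keep only subjects observed in both periods of the 2,2,2-study.
complete <- names(which(table(bedata$subj) == 2))
bedata2  <- subset(bedata, subj %in% complete)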

Best regards
EM.
yjlee168
★★★
Kaohsiung, Taiwan,
2009-04-06 09:50

@ ElMaestro
Posting: # 3479

 Bear to bear interval with 90% confidence

Dear EM,

❝ lm and lme should give you the same result for balanced and unbalanced 2,2,2-ANOVA and the 90% CI.


Exactly.

❝ If you get dissimilar results then try to force lme to exclude subjects with a missing period. I think it is via setting the na.action to "na.omit" or "na.exclude" or something like that. Not sure, not able to test it right now. [...]


We will try that later. Thank you so much.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
d_labes
★★★

Berlin, Germany,
2009-04-07 10:46

@ yjlee168
Posting: # 3489

 Data set

Dear Yung-jin,

can you give details of the dataset?

Regards,

Detlew
yjlee168
★★★
Kaohsiung, Taiwan,
2009-04-07 15:44

@ d_labes
Posting: # 3495

 Data set

Dear D. Labes,

Sure.

The dataset is real and was used as the balanced dataset:
subj,trt,seq,prd,Cmax,AUC0t,AUC0INF,LnCmax,LnAUC0t,LnAUC0INF
1,1,2,2,1739,14445,14933,7.46,9.58,9.61
1,2,2,1,1633,12294,12972,7.40,9.42,9.47
2,1,1,1,1481,12516,13185,7.30,9.43,9.49
2,2,1,2,1837,15299,16209,7.52,9.64,9.69
3,1,2,2,1780,15371,16032,7.48,9.64,9.68
3,2,2,1,2073,15184,15691,7.64,9.63,9.66
4,1,1,1,1374,11063,11668,7.23,9.31,9.36
4,2,1,2,1629,13982,14650,7.40,9.55,9.59
5,1,2,2,1555,13971,14557,7.35,9.54,9.59
5,2,2,1,1385,11852,12550,7.23,9.38,9.44
6,1,1,1,1756,15376,15964,7.47,9.64,9.68
6,2,1,2,1522,13838,14343,7.33,9.54,9.57
7,1,2,2,1566,13442,14068,7.36,9.51,9.55
7,2,2,1,1643,12361,12979,7.40,9.42,9.47
8,1,1,1,1939,13422,14001,7.57,9.50,9.55
8,2,1,2,1615,14347,14681,7.39,9.57,9.59
9,1,2,2,1475,12410,12915,7.30,9.43,9.47
9,2,2,1,1759,15804,16565,7.47,9.67,9.72
10,1,1,1,1388,13310,13985,7.24,9.50,9.55
10,2,1,2,1483,11711,12544,7.30,9.37,9.44
11,1,2,2,1127,9353,9750,7.03,9.14,9.19
11,2,2,1,1682,15371,16029,7.43,9.64,9.68
12,1,1,1,1542,15015,15757,7.34,9.62,9.67
12,2,1,2,1247,10609,11093,7.13,9.27,9.31
13,1,2,2,1235,9723,10375,7.12,9.18,9.25
13,2,2,1,1605,15428,16308,7.38,9.64,9.70
14,1,1,1,1598,14977,15916,7.38,9.61,9.68
14,2,1,2,1718,17803,18870,7.45,9.79,9.85


Since we don't have any unbalanced dataset on hand, we simply deleted Subject #14 from the balanced dataset above to obtain the following unbalanced dataset:
subj,trt,seq,prd,Cmax,AUC0t,AUC0INF,LnCmax,LnAUC0t,LnAUC0INF
1,1,2,2,1739,14445,14933,7.46,9.58,9.61
1,2,2,1,1633,12294,12972,7.4,9.42,9.47
2,1,1,1,1481,12516,13185,7.3,9.43,9.49
2,2,1,2,1837,15299,16209,7.52,9.64,9.69
3,1,2,2,1780,15371,16032,7.48,9.64,9.68
3,2,2,1,2073,15184,15691,7.64,9.63,9.66
4,1,1,1,1374,11063,11668,7.23,9.31,9.36
4,2,1,2,1629,13982,14650,7.4,9.55,9.59
5,1,2,2,1555,13971,14557,7.35,9.54,9.59
5,2,2,1,1385,11852,12550,7.23,9.38,9.44
6,1,1,1,1756,15376,15964,7.47,9.64,9.68
6,2,1,2,1522,13838,14343,7.33,9.54,9.57
7,1,2,2,1566,13442,14068,7.36,9.51,9.55
7,2,2,1,1643,12361,12979,7.4,9.42,9.47
8,1,1,1,1939,13422,14001,7.57,9.5,9.55
8,2,1,2,1615,14347,14681,7.39,9.57,9.59
9,1,2,2,1475,12410,12915,7.3,9.43,9.47
9,2,2,1,1759,15804,16565,7.47,9.67,9.72
10,1,1,1,1388,13310,13985,7.24,9.5,9.55
10,2,1,2,1483,11711,12544,7.3,9.37,9.44
11,1,2,2,1127,9353,9750,7.03,9.14,9.19
11,2,2,1,1682,15371,16029,7.43,9.64,9.68
12,1,1,1,1542,15015,15757,7.34,9.62,9.67
12,2,1,2,1247,10609,11093,7.13,9.27,9.31
13,1,2,2,1235,9723,10375,7.12,9.18,9.25
13,2,2,1,1605,15428,16308,7.38,9.64,9.7

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
d_labes
★★★

Berlin, Germany,
2009-04-06 12:49

@ yjlee168
Posting: # 3481

 Models (not necessarily nice-looking young women)

Dear Yung-jin, dear ElMaestro,

sorry, but last week I was a little short of time to join the discussion, which I naively kicked off.

Without delving into the details of your arguments/discussion contributions:
The key point is that we are fitting a linear model to our data because we are interested in estimating the difference between our treatments/formulations with respect to some target parameters (AUC, Cmax or whatever, log-transformed or not).

And this from our fitted model, with any existing cofactors included to account for their influence.

As ElMaestro has pointed out, this can be done via the coefficients of the model associated with the treatment (including a 90% confidence interval for the equivalence test).

But this is only true if you have only two treatments/formulations of interest, because lm() or lme() constrain the first coefficient of a factor to zero (treatment contrasts).
If the order of treatments is such that reference comes first, then T-R is the value of the coefficient associated with test.

If you consider a broader class of cross-over designs with more treatments/formulations, f.i. a 3x3 (three treatments, three periods) or a 4x4 cross-over, you have to deal with the estimation of more than one contrast, which you cannot obtain from the confidence intervals of the coefficients themselves.
You have to estimate the differences between the coefficients of interest, together with their variances and associated 90% confidence intervals.

Whether you deal with that via LSMeans or not is a matter of taste, I think, if I understand the SAS ESTIMATE statement and the R counterpart described on the web site I pointed to above ;-).
All the LSMeans machinery (coefficients averaged over all other effects present in the model) vanishes if you go to the LSMeans differences. The only task left is to estimate differences in the treatment-associated model coefficients.

Then the only remaining task is to identify which piece of R will do it for you. Or you must/can :-D do it yourself, using the coefficients, the residual variance, and the associated df of the fitted model.
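
Done by hand it could be sketched like this, as a crude counterpart of the ESTIMATE statement (the coefficient labels are made up and depend on your factor coding; check names(coef(fit))):

# Contrast of two treatment coefficients from an lm() fit, f.i. T1 - T2
# in a 3x3 cross-over; 'fit' is the fitted model from above.
b <- coef(fit)
k <- setNames(rep(0, length(b)), names(b))
k["TrtT1"] <-  1
k["TrtT2"] <- -1
est <- sum(k * b)
se  <- drop(sqrt(t(k) %*% vcov(fit) %*% k))
est + c(-1, 1) * qt(0.95, df.residual(fit)) * se   # 90% CI for the contrast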

BTW @ElMaestro:
It seems part of the R community sees SAS as "The dark side of the power (to know)".
It seems everyone needs some M$ :-P (in German "Kleinweich") to claim one can do better.
(This is not meant with respect to you personally, but as a BTW to your answer to me above concerning R users and R developers.)

Regards,

Detlew
yjlee168
★★★
Kaohsiung, Taiwan,
2009-04-07 16:05

@ d_labes
Posting: # 3496

 Models (not necessarily nice-looking young women)

Dear D. Labes,

❝ If you consider a broader class of cross-over designs with more treatments/formulations, f.i. a 3x3 (three treatments, three periods) or a 4x4 [...]


Indeed.


❝ Whether you deal with that via LSMeans or not is a matter of taste, I think, if I understand the SAS ESTIMATE statement and the R counterpart described on the web site I pointed to above ;-).


We've tried that without success; some error messages occurred with the test dataset posted previously. We're still working on it.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here