d_labes
★★★

Berlin, Germany,
2009-04-17 12:06

Posting: # 3554
Views: 61,401

## Freedom for replicate bears [R for BE/BA]

Replicate Berlin Buddy Bears

Dear all, especially the Rusers among you,

(OK, the category could also have been Regulatives/Guidelines.)
Continuing my deeper look into R, exemplified by the source code of bear, I noticed the following:

The key statement in bear for evaluating replicate BE studies (astonishingly enough, claimed to allow only 2-sequence designs?) is
  model <- lme(y~seq+prd+drug, random=~1|subj, data=blabla, method="REML")
with y the PK parameter under evaluation (raw or log-transformed).

This is the equivalent of SAS Proc MIXED code:
  Proc MIXED data=BlaBla;
    class subj drug prd seq;
    model y=drug prd seq /ddfm=BETWITHIN;
    random subj;
    estimate 'T-R' tmt -1 1 /cl alpha=0.1;
  run;

The model used assumes equal within/between variabilities for Test and Reference and also no subject-by-treatment interaction, if I'm right.
This is very stringent and consistent, if we talk about ABE.

But in the real world there are regulators out there!
And some of them do not like consistent scientific assumptions / models.
Thus it happens that the FDA model for evaluating ABE within replicate designs (discussed occasionally here in the forum, SEARCH!!!) must have different within/between variabilities for Test and Reference and also a subject-by-treatment interaction included in the model.

After reading again tons of websites (it seems nobody evaluates cross-over studies with R) I came up with the following attempt in R:
  model2 <- lme(y ~ drug + prd + seq,
                # this random statement fits a symmetric positive definite
                # covariance matrix, identical to UN in SAS???
                random = ~ tmt-1|subj,
                # different within variabilities
                weights = varIdent(form = ~ 1 | drug),
                data = BlaBla, method = "REML")
For the dataset built into bear, this code gives the same covariance parameters as the SAS code from the FDA statistical guidance, except that SAS Proc MIXED reports variances whereas the lme() code reports standard deviations.
--- snip ----
SAS:
                        The Mixed Procedure

                       Estimated G Matrix
  Row  Effect  tmt  subject        Col1        Col2
    1  tmt     R    1           0.02220    0.002729
    2  tmt     T    1          0.002729     0.01492

              Covariance Parameter Estimates
                                          Standard        Z
  Cov Parm  Subject  Group    Estimate       Error    Value      Pr Z
  UN(1,1)   subject            0.02220    0.009077     2.45    0.0072
  UN(2,1)   subject           0.002729    0.005318     0.51    0.6079
  UN(2,2)   subject            0.01492    0.006095     2.45    0.0072
  Residual  subject  tmt R    0.000058    0.000023     2.50    0.0061
  Residual  subject  tmt T    0.000011    4.762E-6     2.37    0.0090
R (I have changed drug to tmt, prd to period, subj to subject):
cat("G matrix\n")
getVarCov(model2)
G matrix
Random effects variance covariance matrix
          tmtR      tmtT
tmtR 0.0222050 0.0027286
tmtT 0.0027286 0.0149240
  Standard Deviations: 0.14901 0.12216

print(model2)
Linear mixed-effects model fit by REML
  Data: PKparms
  Log-restricted-likelihood: 98.1282
  Fixed: y ~ tmt + period + sequence
 (Intercept)         tmtT      period2      period3      period4 sequenceTRRT
  7.35627371   0.06373952  -0.06170010  -0.05964129   0.00348174   0.00266416
Random effects:
 Formula: ~tmt - 1 | subject
 Structure: General positive-definite, Log-Cholesky parametrization
         StdDev     Corr
tmtR     0.14901302 tmtR
tmtT     0.12216340 0.15   (anybody out there who knows how to get more decimals here?)
Residual 0.00335726
Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | tmt
 Parameter estimates:
      T       R
1.00000 2.27674
Number of Observations: 56
Number of Groups: 14
--- end snip ---
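The numerical agreement can be checked by hand. A small sketch in plain R arithmetic, using only the rounded numbers printed above: squaring lme()'s standard deviations (and, for the residual, applying the varIdent multiplier first) should recover the Proc MIXED variances.

```r
## lme() reports SDs plus a residual-SD multiplier per stratum (varIdent);
## SAS reports variances. Squaring should recover the SAS estimates.
sd.R    <- 0.14901302   # StdDev for tmtR (between-subject)
sd.T    <- 0.12216340   # StdDev for tmtT
sd.res  <- 0.00335726   # residual StdDev (reference stratum, tmt T)
ratio.R <- 2.27674      # varIdent multiplier for stratum tmt R

sd.R^2                  # matches UN(1,1) = 0.02220
sd.T^2                  # matches UN(2,2) = 0.01492
sd.res^2                # matches Residual, tmt T = 0.000011
(sd.res * ratio.R)^2    # matches Residual, tmt R = 0.000058
```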
I was soooo happy: an old SAS dog, having learned nothing else, has become a Ruser!

But then it came to the freedom to some degree:
--- snip ---
SAS: now with the DDFM=SATTERTHWAITE (DDFM = denominator degree of freedom method)
          Type 3 Tests of Fixed Effects
                Num     Den
  Effect         DF      DF    F Value    Pr > F
  Drug            1      12       1.79    0.2053
  prd             3    14.3       2.59    0.0930
  seq             1      12       0.00    0.9623
R:
anova(model2, type="marginal")[2:4,]
         numDF denDF  F-value p-value
tmt          1    38 1.793900  0.1884
period       3    38 2.593551  0.0667
sequence     1    12 0.002331  0.9623
--- end snip ---

OK, I have learned (by reading again tons of websites) that R has no package with Satterthwaite denominator degrees of freedom in mixed models.
But none of the other DDFM methods available in Proc MIXED gave denDF=38 for the treatment effect. One always gets lower denDF in SAS and, as a consequence, wider 90% confidence intervals!
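For what it's worth, the denDF=38 seems to follow nlme's simple level-counting rule (as described by Pinheiro/Bates). A sketch of that arithmetic for the dataset above, on the assumption that this is indeed the rule lme() applies here:

```r
## Assumed nlme denDF rule: observations at a level, minus groups,
## minus fixed-effect df estimated at that level.
n.obs  <- 56          # Number of Observations in the output above
n.subj <- 14          # Number of Groups (subjects)
p.within <- 1 + 3     # tmt (1 df) + period (3 df) vary within subject
denDF.within  <- n.obs - n.subj - p.within   # denDF for tmt and period
denDF.between <- n.subj - 1 - 1              # denDF for sequence
denDF.within    # 38, as in the anova() output
denDF.between   # 12
```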

Thus my choice of software would be lme() and R, because with that it is easier to make my sponsors happy.
But the question is: are regulators equally happy with that?

What's going on here? Am I missing anything in the R approach?
Or is the denominator DF question also like type III SumOfSquares? (I have read some Websites indicating that).

BTW: parallel bears (crossing at infinity according to the geometric axioms) also have the same variance. But this is another story.

Regards,

Detlew
ElMaestro
★★★

Belgium?,
2009-04-17 14:50

@ d_labes
Posting: # 3556
Views: 56,401

## Freedom for replicate bears

Hi dlabes,

» Or is the denominator DF question also like type III SumOfSquares? (I have read some Websites indicating that).

As far as I recall, the R default -with no option to change it- is to use a denDF which is similar to SAS' "containment" meffud. If you insist on having Satterthwaite's denDF or whatever, then R will not be your package of choice. One little problem is that the FDA indirectly endorsed SAS some time ago, so many regulators (incl. the EU) expect an output which can be replicated with SAS using FDA scripts. Then clearly R will fall short of the goal. I guess you can safely conclude this is another "type III SS" or LSMean phenomenon. Big bro dictates the standard and regulators have no clue. Do it the SAS way or do it wrong. OK, that was a provocation, sorry.

One can go on and on; discussion of the proof that REML gives unbiased coefficients in unbalanced datasets comes next, I guess?

Best regards
EM.
d_labes
★★★

Berlin, Germany,
2009-04-17 15:07

@ ElMaestro
Posting: # 3557
Views: 56,385

## Contain does not contain

Hi ElMaestro,

» As far as I recall, the R default -with no option to change it- is to use a denDF which is similar to SAS' "containment" meffud.

Here are the SAS results with DDFM=CONTAIN
          Type 3 Tests of Fixed Effects
                Num     Den
  Effect         DF      DF    F Value    Pr > F
  tmt             1      24       1.79    0.1930
  period          3      26       2.59    0.0741
  sequence        1      26       0.00    0.9619

Seems not to contain the R results, as already mentioned in my previous post.

Regards,

Detlew
ElMaestro
★★★

Belgium?,
2009-04-18 03:45

@ d_labes
Posting: # 3560
Views: 56,513

## Freedom for replicate bears

Dear dlabes

» I was soooo happy: An old SAS dog, nothing learned else, has become a Ruser!

Amazing Grace, how sweet the sound,
That saved a wretch like me.
I once was lost but now am found,
Was blind, but now I see.

(John Newton, 1725-1807)

You did the right thing by parking the power in the corner. It must have been a tough decision. Now go bash it with a baseball bat, bury it in your back yard and you will have my deepest respect.

Sincerely,
EM.
d_labes
★★★

Berlin, Germany,
2009-04-20 08:42

@ ElMaestro
Posting: # 3569
Views: 56,250

## Freedom incompatible with baseball bats

Dear ElMaestro

» [...] Now go bash it with a baseball bat, bury it in your back yard and you will have my deepest respect.

What a politically incorrect suggestion!
It is the "Power to know"!

Regards,

Detlew
yjlee168
★★

Kaohsiung, Taiwan,
2009-04-19 21:38

@ d_labes
Posting: # 3567
Views: 56,419

## Freedom for replicate bears

Dear D. Labes,

»
» Replicate berlin buddy bears

very cute. I like that!

» The key statement in bear for evaluating replicate BE studies (astonishing enough claimed as only 2-sequence designs allowed?) is

I think we can easily program bear to handle ABE data from more than 2 sequences and/or more than 2 treatments in the future.

» [...]
» tmtR     0.14901302 tmtR
» tmtT     0.12216340 0.15 any body out there who knows how to get more decimals here?

You can try "options(digits=n)" in the R console to increase the decimals, where n is the number of digits you would like to display. If that does not work, it may be a built-in option of the package.
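For illustration only, a generic R sketch of that suggestion (not specific to bear):

```r
## default printing shows 7 significant digits
print(pi)                # 3.141593
## raise the option to get more
options(digits = 12)
print(pi)                # 3.14159265359
## caveat: some print methods format with their own internal digits
## argument and may ignore this global option
```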

» [...] But all other available DDFM in Proc MIXED gave not a denDF=38 for the treatment effect. One gets always lower denDF in SAS and as a consequence of that wider 90% confidence intervals!

But as far as I can remember, we validated the 90% CIs obtained from bear against those from SAS for a replicate study using lme(). I will post the results here later. Basically we got 90% CIs equivalent to SAS's. Indeed, we know the denDFs obtained from bear differ from those in SAS; however, the 90% CIs are the same.

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
d_labes
★★★

Berlin, Germany,
2009-04-20 08:30

@ yjlee168
Posting: # 3568
Views: 56,240

## Freedom to some degree

Dear bears, dear all

» You can try "options (digits=n)" in R console to increase decimals, where n the number decimal you like to display. If not work, it can be the built-in option in the package.

Of course I had tried this.
Unfortunately all this doesn't help here.

» But as I can remember, we had validated 90%CI obtained from bears with those from SAS with replicate study using lme(). I will post the results here later. Basically we got the equivalent 90%CIs with SAS. Indeed, we know DenDFs obtained from bears are different from those using SAS. However, 90% CIs are the same.

Don't worry about this, because SAS and bear gave the same result (also with DDFM=SATTERTHWAITE or KR) as long as
  lme(y~tmt+period+sequence, random=~1|subject, data=blabla)
and the Proc MIXED equivalent of that is used. The difference only comes into effect if one tries to implement the FDA model (which is mandatory, I think, at least for studies aimed at the FDA-regulated market).

My questions remain:
Do I do something wrong here? If yes, what?
If not, what are the chances that regulators accept the bear / R handling of replicate studies?

Regards,

Detlew
yjlee168
★★

Kaohsiung, Taiwan,
2009-04-21 14:03

@ d_labes
Posting: # 3575
Views: 56,275

## Freedom to some degree

Dear D. Labes,

» My questions remains:
» Do I do something wrong here? If yes, what?

I don't know why you used TYPE=UN in your SAS code. In the FDA 2001 Guidance (Appendix E, p.37), it says: "...In the Random statement, TYPE=FA0(2) could possibly be replaced by TYPE=CSH. This guidance recommends that TYPE=UN not be used, as it could result in an invalid (i.e., not nonnegative definite) estimated covariance matrix..."

» If not, what are the chances that regulators accept the bear / R handling of replicate studies?

As long as bear complies with the FDA Guidance (2001) and is statistically correct, it should be acceptable, I guess.

Edit: Link corrected for FDA's new site. [Helmut]

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
d_labes
★★★

Berlin, Germany,
2009-04-21 16:11

@ yjlee168
Posting: # 3578
Views: 56,234

## FDA loves replicate bears?

Dear Yung-jin,

» I don't know why you used TYPE=UN in your SAS code. [...]

This was because I could not identify the syntax to implement such things (FA0(2) or CSH) within lme(), and I guessed that the lme() output
"Random effects:  Formula: ~tmt - 1 | subject  Structure: General positive-definite, Log-Cholesky parametrization"
corresponds to the SAS UN parameterization. BTW, up to now I could not find any hint on what this really means: General positive-definite, Log-Cholesky parametrization. Questions about it are always answered with: buy the book by Pinheiro/Bates, "Mixed-Effects Models in S and S-PLUS", Springer (2000), it is described there. But recently I am a little short of money. We have a financial crisis.

» [...] As long as bear complies with the FDA Guidance (2001) and is statistically correct, it should be acceptable, I guess.

Sure! As long as ...
But the FDA statistical guidance explicitly states the model with different inter/intra-subject variabilities (Proc MIXED code in appendix G, I think) in evaluating ABE in replicate studies.

And this is not the model implemented at present in bear for the evaluation of replicate studies.
I myself am convinced that the assumptions of no subject-by-treatment interaction and equal within-subject variabilities are reasonable if we talk about ABE. Although with these assumptions an advantage of replicate designs vanishes, namely that we are able to study different variabilities.

But unfortunately, what is statistically correct or reasonable is not always correct or reasonable in the regulators' view.
Not so astonishing, because statisticians themselves are of such different opinions, if I think about Type III Sum of Squares or now the question of denominator degrees of freedom.

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-04-22 00:46

@ d_labes
Posting: # 3580
Views: 56,685

## Cholesky factor ?!

Dear D. Labes,

» [...] BTW could not find any hint up to now what this really means: General positive-definite, Log-Cholesky parametrization. Questions for that are always answered: Buy the book of Pinheiro/Bates "Mixed-Effects Models in S and S-PLUS", Springer (2000), there it is described.

I have it on my shelf, but
1. too stupid to understand >90% of its contents
2. too short of time to dig into it
3. but maybe this is helpful to initiates? On page 59 you find the only mention of 'Cholesky' in the entire book…
4. Another source

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2009-04-22 11:20

@ Helmut
Posting: # 3581
Views: 56,192

## Cholesky factor ?!

Dear Helmut,

thanks very much for the links.

» [ I.] too stupid to understand >90% of it's contents
» [II.] too short of time to dig into it […]

Me too. But my percentage for [I.] is higher. Much too much "Matratzen" (German for mattresses, punning on matrices) in it for an initiate.

Regards,

Detlew
yjlee168
★★

Kaohsiung, Taiwan,
2009-04-23 08:08

@ d_labes
Posting: # 3587
Views: 56,288

## FDA loves replicate bears?

Dear D. Labes,

Thank you for your message. I find that the model bear currently uses for replicate ABE data is equivalent to the following SAS Proc MIXED code:
  PROC MIXED;
    CLASSES SEQ SUBJ PER TRT;
    MODEL Y = SEQ PER TRT/ DDFM=SATTERTH;
    RANDOM SUB=SUBJ G;
    REPEATED/GRP=TRT SUB=SUBJ;
    ESTIMATE 'T vs. R' TRT 1 -1/CL ALPHA=0.1;
(Ref.: Jones B and Kenward MG, Design and Analysis of Cross-Over Trials, 2nd ed., Chapman & Hall/CRC, 2003.) If we use a TYPE setting, either FA0(2) or CSH as recommended by the FDA, we do not get exactly the same 90% CIs from running bear with lme(). I guess we need to find out how to comply here.

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
d_labes
★★★

Berlin, Germany,
2009-04-23 09:48

@ yjlee168
Posting: # 3588
Views: 56,427

## FDA loves SAS code?

Dear Yung-jin,

where does the code come from?
I cannot find it in Jones / Kenward.
And sorry, it does not work for me
(syntax errors in the random statement! Maybe something is missing, an intercept or sub(seq)?).

And if you have working code, please guide me to the different intra-subject variabilities (which always result from the SAS REPEATED statement) in the results of bear's lme() statement.
Up to now I have not found any equivalent in bear's output.

And what about the DDFM=blabla? I have learned so far (reading tons of websites) that there is no such thing as different denominator-degrees-of-freedom methods in lme(). I have been informed further that the successor of lme(), lmer(), by design does not give any denDF or p-values for the effects in the model (fixed or random), because the inventor and developer D. Bates does not believe that the test statistics follow any of the usually applied distributions (F distribution). See here, for instance. A consistent but strange stance.
Since SAS Proc MIXED uses these DDFM also in the calculation of CIs for the treatment effect, there must be some difference.

BTW: Kenward/Jones in their Proc MIXED code always use DDFM=KR, i.e. the Kenward-Roger method. Not so astonishing, I think.

BTW2: Let me state explicitly that these discussions are not meant to prove you / bear wrong.
I am seriously on the way to changing from "The Power to Know" to R, but only if I can fulfill the statistical needs of bioequivalence studies (regulator bullet-proof), my everyday bread-and-butter work.
And currently I suspect that this is not feasible for replicate studies.
Or is anybody out there to prove to me that black is white?
Anybody out there who has evaluated a replicate study with R (not necessarily bear) that was approved by a regulatory agency?

Regards,

Detlew
yjlee168
★★

Kaohsiung, Taiwan,
2009-04-23 10:46
(edited by yjlee168 on 2009-04-23 12:27)

@ d_labes
Posting: # 3590
Views: 56,321

## FDA loves SAS code? definitely.

Dear D. Labes,

Sorry about the code. Here is the correct one:
  proc mixed data=blabla;
    class seq subj prd tmt;
    model lnAUC=seq prd tmt/ ddfm=kenwardroger;
    random subj(seq);
    lsmeans tmt/pdiff cl alpha=0.1;
    estimate 'ABE for lnAUC' tmt -1 1;
  run;
I've changed the parameter names to fit our discussion.

» where does the code come from?
» I cannot find it in Jones / Kenward.

Ch. 7.

» And what about the DDFM=blabla? I have learned so far (in reading tons of Web sites) that there is no such thing as different denominator degrees of freedom methods in lme()?

Nope, we cannot find an equivalent one, either.

» BTW: Kenward/Jones in their Proc MIXED code always use DDFM=KR, i.e. the Kenward/Roger method. Not so astonishing I think .

Yes, you're right.

» BTW2: Let me state explicitly that these discussions are not for proving you / bear wrong. I am seriously on the way to change "The power to know" to R, but only if I can fulfill the needs in statistics for bioequivalence studies (regulators bullet proof), my everyday bread and butter work.

Wow. You can choose both, can't you? If you can't, better stay with the Power to Know for now.

» And currently I suspect that this is not feasible for replicate studies. Or is anybody out there to prove me that black is white?

No luck so far.

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
d_labes
★★★

Berlin, Germany,
2009-04-23 15:07

@ yjlee168
Posting: # 3594
Views: 56,191

## SAS? or R? or what?

Dear Yung-jin,

» » where does the code come from?
» » I cannot find it in Jones / Kenward.
»
» Ch. 7.

Ok, this is the code for a classical 2x2x2 cross-over, as I had stated above in my opening of this thread.

» Wow. You can choose both, can't you? If you can't, better stay with Power to know for now.

Of course I could use both. But why?
Why should an old, lazy SAS dog take the extra effort of learning R (or any other statistical system) if he has to stay with SAS for part of his work?
And if he already has everything he needs in his everyday work coded in that infamous "Power to know"?

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-04-23 15:25

@ d_labes
Posting: # 3595
Views: 56,250

## SAS? or R? or what?

Dear D. Labes and all bears!

» Why should an old lazy SAS dog take the extra effort in learning R (or any other Statistical system) if he had to stay with SAS for part of his work ?

Select at least one:
• Masochism
• Out of curiosity
• Following Karl R. Popper: 'Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve.'
• Following Konrad Lorenz: 'The best method for a scientist to stay young is to abolish a favourite hypothesis every morning before breakfast.' (Of course he didn't follow this wise suggestion in his own work.)
• Other (which one?)
@all: What do you think of adding a new category Bear to the forum? I can easily move all previous threads to that category, which would also assist the search function.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
★★★

Berlin, Germany,
2009-04-23 16:06

@ Helmut
Posting: # 3596
Views: 56,119

## SAS? or R? or what?

Dear Helmut!

» Select at least one:

My favorite one is the Lorenzian.

Unfortunately I had not heard of it earlier. Now I argue it is too late for me to stay young. Additionally, unfortunately, it was my habit not to have breakfast.

My second one is: Masochism

Regards,

Detlew
Helmut
★★★

Vienna, Austria,
2009-04-23 16:30

@ d_labes
Posting: # 3597
Views: 56,305

## SAS? or R? or what?

Dear D. Labes!

» My favorite one is Lorenzian.

Yeah, I like it also very much.

» Unfortunately I have not heard about earlier.

I was quoting, according to my vanishing memory, from an interview by Franz Kreuzer aired by Austrian (ORF) and German TV (ZDF) in May 1980. It was repeated on 01 March this year on 3SAT – unfortunately I missed it…

The entire text was published in 1981 (Konrad Lorenz, Franz Kreuzer: Leben ist Lernen, Piper, ISBN 3-492-10223-9). I found a PDF on the net (don't ask me where, it's illegal) and could not locate the quote I gave… So I've heard the quote somewhere else…

» Now I argue it is to late for me to stay young .

Come on! Another quote from the mentioned book:
K: How do you cope with aging? How did you notice that you were beginning to age, what do you suffer from, what resistance do you offer? Is there any gain that makes the burden of aging lighter?
L: Aging is a progressive disease with an absolutely infamous prognosis. Nobody has survived it. The disadvantages of age that I feel (and everyone can answer this only for himself) are more physical than mental in nature. I suffer from my arthroses, I suffer from walking with a cane, and I suffer from fatigability; I can no longer work as much as I always could. I see an advantage in the fact that one forgets the unimportant more easily than the important, so that the sea of forgetting isolates mountain peaks and leaves them standing, which makes the overview easier. I believe I have a better overview. These, in a few words, are the advantages and disadvantages of aging.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★

Kaohsiung, Taiwan,
2009-04-23 19:56

@ d_labes
Posting: # 3598
Views: 56,163

## SAS? YES; R? why not? as long as they are useful...

Dear D. Labes,

» Ok, this is the code for a classical 2x2x2 cross-over, as I had stated above in my opening of this thread.

Yes, but just to let you know: we used that SAS code to validate bear
for replicate data analysis. So… is it not appropriate to do so?

» Of course I could use both. But why?

As "the Power to know ", everything should be easy for him to get to know it. We all can see and appreciate your sincere and long-term contributions to this Forum. So why not keep The Power to Know? And R? why not? You just use two or more statistical languages to speak BE/BA. That's why this Forum is so amazing and so interesting because of the members like you and many others.

» Why should an old lazy SAS dog take the extra effort in learning

No switch, please. Be an old SAS dog AND also a new R dog. Thank you so much.

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
Helmut
★★★

Vienna, Austria,
2009-04-23 20:03

@ yjlee168
Posting: # 3599
Views: 56,177

## New Forum-Category 'bear'?

Dear Yung-jin,

what do you think about a special bear category in the forum?

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★

Kaohsiung, Taiwan,
2009-04-23 20:19

@ Helmut
Posting: # 3600
Views: 56,210

## New Forum-Category 'bear'?

Dear Helmut,

Thank you for your kind offer. How about the category R for BE/BA? I don't think bear should be the category name, because there are so many R tools applied to BE/BA or appearing in this forum; bear is just a small piece of them. Then we can expect or encourage more users to use R in the community in the future. What do you think? Many thanks.

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
Helmut
★★★

Vienna, Austria,
2009-04-23 20:37

@ yjlee168
Posting: # 3601
Views: 56,232

## New Forum-Category 'R for BE/BA'

Dear Yung-jin,

Yeah, that's a good idea. On to new horizons! I moved all respective threads from the category Software.
If you want to set a link at your homepage to the new category, use
https://forum.bebac.at/?category=19.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
yjlee168
★★

Kaohsiung, Taiwan,
2009-04-24 09:19

@ Helmut
Posting: # 3604
Views: 56,102

## New Forum-Category 'R for BE/BA'

Dear Helmut.

» If you want to set a link at your homepage to the new category, use
https://forum.bebac.at/?category=19.

Done! A very nice and informative category. Thank you so much.

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
d_labes
★★★

Berlin, Germany,
2009-04-24 08:27

@ yjlee168
Posting: # 3603
Views: 55,963

## Replicated again

Dear Yung-jin,

» » Ok, this is the code for a classical 2x2x2 cross-over, as I had stated above in my opening of this thread.
» Yes, but just let you know that we used the SAS code to validate bear when doing replicate data analysis. So… it is not appropriate to do so?

If we talk about the FDA view, definitely NO.
If we talk about reference scaling in ABE, also definitely NO.

If we talk about ABE in the strictest sense, OK.

Regards,

Detlew
yjlee168
★★

Kaohsiung, Taiwan,
2009-04-24 09:26

@ d_labes
Posting: # 3605
Views: 56,040

## Replicated again

Dear D. Labes,

» If we talk about FDA view, definitely NO.
» If we talk about reference scaling in ABE also definitely NO.
» If we talk about ABE in the strictest sense, Ok.

Thank you for your message. We will continue to search and figure out how to do REML and DDFM (as with SAS Proc MIXED and its RANDOM options) in R to comply with the FDA Guidance.

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
ElMaestro
★★★

Belgium?,
2010-06-03 05:21

@ d_labes
Posting: # 5419
Views: 55,458

## getting decimals

Ahoy d_labes,

there was still some low-hanging fruit to pick in this old thread…

» >getVarCov(model2)
» G matrix
» Random effects variance covariance matrix
»           tmtR      tmtT
» tmtR 0.0222050 0.0027286
» tmtT 0.0027286 0.0149240
»   Standard Deviations: 0.14901 0.12216
(and blah)
» Random effects:
»  Formula: ~tmt - 1 | subject
»  Structure: General positive-definite, Log-Cholesky parametrization
»          StdDev     Corr
» tmtR     0.14901302 tmtR
» tmtT     0.12216340 0.15 any body out there who knows how to get
» more decimals here?

Corr(A,B) = Cov(A,B)/sqrt(Var(A)*Var(B))
Here that is 0.0027286/sqrt(0.0222050*0.0149240); I guess the object returned by getVarCov() contains all the further decimals anyone can dream of.
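If it helps, a small sketch of pulling those decimals back out (the commented lines assume a fitted nlme model named model2 as in the first post, and row/column names as printed there; the live line uses only the rounded values quoted above):

```r
## getVarCov() returns a numeric matrix, so the correlation can be
## recomputed to full precision from it, e.g.:
# vc <- getVarCov(model2)     # between-subject G matrix
# vc["tmtR","tmtT"] / sqrt(vc["tmtR","tmtR"] * vc["tmtT","tmtT"])

## even the rounded values printed in the thread give a bit more:
r.est <- 0.0027286 / sqrt(0.0222050 * 0.0149240)
r.est   # about 0.1499, shown by print(model2) as 0.15
```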

Best regards
EM.
yjlee168
★★

Kaohsiung, Taiwan,
2010-06-03 08:37
(edited by yjlee168 on 2010-06-03 12:23)

@ ElMaestro
Posting: # 5421
Views: 55,508

## getting decimals

Dear ElMaestro,

Great. Your post reminds me to modify the R code in bear as suggested by D. Labes in the very first post of this thread (adding the option weights=varIdent(form = ~ 1 | drug)). Thank you so much.

» Corr(A,B)=cov(A,B)/sqrt(Var(A)*Var(B))
» Here = 0.0027286/(sqrt(0.0222050*0.0149240)), I guess the object returned by getVarCov() contains all the further decimals anyone can dream of.

It sounds possible only if we can modify the code of lme() from the nlme package. D. Labes' output was obtained from lme() directly.

note: the question for d_labes was deleted. Sorry about this. I forgot how to use the bbcode for strike-out. It should be used such as delete text.

All the best,
-- Yung-jin Lee
bear v2.8.7:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
ElMaestro
★★★

Belgium?,
2010-06-04 01:59

@ yjlee168
Posting: # 5433
Views: 55,424

## getting decimals

Dear Bears,

» It sounds possible only if can we modify the codes for lme() originated from nlme package. The d_labes' output was obtained from lme() directly.

Yes, the output has a default number of decimals that is a little too low, and nobody seems able to change it with easy commands. I meant to show how to get more decimals. Once done, I guess you can manipulate the lme object directly (e.g. manually replace "0.15" by "0.1498xxxxx") before printing. This does not require modifying the lme() internals.

Best regards
EM.

Le tits now.

Best regards,
ElMaestro
ElMaestro
★★★

Belgium?,
2011-01-27 21:50

@ d_labes
Posting: # 6502
Views: 54,392

## Freedom for replicate bears

Dear dlabes and others,

» After reading again tons of Websites (seems nobody evaluates x-over studies with R) I came up with the following attempt in R:
»   model2 <- lme(y~ drug + prd + seq,
»        random= ~ tmt-1|subj,
»        #different within variabilities
»        weights=varIdent(form = ~ 1 | drug),
»        data=BlaBla, method="REML")

I am starting to believe that lme in some cases does not really reach the goal, so I switched to lmer (package lme4). Syntactically something like:
 ElModel=lmer(logCmax ~ per+seq+trt + (trt-1|subj))
which achieves the same.
It will occasionally (depending on the dataset) actually fit the data better (that is, to a higher likelihood and thus with better sigma estimates) than lme can.

And also:
Who's got a 3-period dataset (reference replicated, but not test replicated) with a known sWR that I can look at?

Mucho gracias

Le tits now.

Best regards,
ElMaestro
d_labes
★★★

Berlin, Germany,
2011-01-28 09:02

@ ElMaestro
Posting: # 6503
Views: 54,554

## lmer vs. lme

Dear ElMaestro!

» I am starting to believe lme does in some cases not really reach the goal, so I switched to lmer (package lme4). Syntactically something like:
»  ElModel=lmer(logCmax ~ per+seq+trt + (trt-1|subj))
» -which achieves the same.
Really?

Using the newer lmer() instead of lme() is another possibility, which I also considered before coming up with my R implementation attempt of the FDA Proc MIXED code. But it lacks the separate estimability of the intra-subject error variance for Test and Reference, I think. As far as I know (from the R help lists) there are no plans to introduce somefink like weights in lmer().
Therefore it is not so useful in the framework of scABE, I think.

I could be wrong, but ...

Regards,

Detlew
ElMaestro
★★★

Belgium?,
2011-01-28 09:42

@ d_labes
Posting: # 6505
Views: 54,482

## lmer vs. lme

Ahoy d_labes,

» But it lacks the separate estimability of the intra-subject error variance for Test and Reference, I think. As far as I know (from the R help lists) there are no plans to introduce somefink like weights in lmer().

Here are two partial outputs from lme and lmer:

With lme:
 Log-restricted-likelihood: 98.12818
(...)
 Random effects:
  Formula: ~trt - 1 | subj
  Structure: General positive-definite, Log-Cholesky parametrization
          StdDev      Corr
 trt1     0.149013060 trt1
 trt2     0.122163414 0.15
 Residual 0.007643612

With lmer:
 AIC    BIC logLik deviance REMLdev
 -170.2 -150.0  95.12   -229.3  -190.2
(...)
 Random effects:
 Groups   Name Variance   Std.Dev.  Corr
 subj     trt1 2.2218e-02 0.1490555
         trt2 1.4913e-02 0.1221188 0.150
 Residual      3.3117e-05 0.0057548

Note the slightly better likelihood found by lmer. The residual sigma changes quite a bit (in relative terms, I mean).

Le tits now.

Best regards,
ElMaestro
d_labes
★★★

Berlin, Germany,
2011-01-28 11:37
(edited by d_labes on 2011-01-28 11:56)

@ ElMaestro
Posting: # 6506
Views: 54,357

## lmer vs. lme

Old pirate,

where are the intra-individual (error or residual) variances or sd's for Test and Reference?
You have only one residual.
The trt1 and trt2 variances (and corr) in your output are the between-subject covariance parameters.

Regards,

Detlew
ElMaestro
★★★

Belgium?,
2011-01-28 12:14

@ d_labes
Posting: # 6507
Views: 54,481

## V

Dear d_labes,

» where are the intra-individual (error or residual) variances or sd for Test and Reference?
» You have only one residual.
» The trt1 and trt2 variances (and corr) in your output are the between subject covariance parameter.

Interesting. Let us discuss for a moment the covariance matrix V (not Z and not G; at this point I am also not discussing which syntax SAS applies for the given problem, leaving discussions about CSH and FA(0) aside).
V holds the key.
Further, let us assume for simplicity that we only look at the first subject, i.e. the first 4x4 block of V, with the data ordered as TRTR (per1-per2-per3-per4 for the first subject; in the original dataset above they were ordered differently, but in this context the order does not change the point):

Data in per1 and per3 correlate within the subject as (s)he gets T (within variance VT).
Data in per2 and per4 correlate within the subject as (s)he gets R (within variance VR).
To this we add a common error variance on the diagonal (V0).

Having no particular mathematical understanding, I think the first 4x4 block of V will be:

  V0+VT  0      VT     0
  0      V0+VR  0      VR
  VT     0      V0+VT  0
  0      VR     0      V0+VR

Both of the models discussed above correspond to a fit with this V, as far as I can see.
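This 4x4 block can be written down numerically. A minimal sketch (Python for illustration, since the thread's R session is not reproduced here; V0, VT and VR are invented values, not estimates from any dataset):

```python
# Build ElMaestro's proposed 4x4 block of V for one TRTR subject.
# V0, VT, VR are hypothetical: common error, within-T, within-R variances.
V0, VT, VR = 0.01, 0.05, 0.03

trt = ["T", "R", "T", "R"]            # treatment in periods 1..4
V = [[0.0] * 4 for _ in range(4)]
for i in range(4):
    for j in range(4):
        if trt[i] == trt[j]:          # same treatment -> shared VT or VR
            V[i][j] = VT if trt[i] == "T" else VR
        if i == j:                    # common error only on the diagonal
            V[i][j] += V0

for row in V:
    print(row)
```

Same-treatment periods share VT (or VR) off the diagonal, cross-treatment entries are zero, and V0 appears only on the diagonal, matching the block above.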

Le tits now.

Best regards,
ElMaestro
d_labes
★★★

Berlin, Germany,
2011-01-28 14:12
(edited by d_labes on 2011-01-28 15:32)

@ ElMaestro
Posting: # 6508
Views: 54,438

## V corrected

Dear ElMaestro,

I think your V needs to be corrected to reflect the FDA model for replicate cross-over designs.
See Patterson et al
"REPLICATE DESIGNS AND AVERAGE, INDIVIDUAL, AND POPULATION BIOEQUIVALENCE"
GSK BDS Technical Report 2002 – 01
online resource

It has to be composed from these two parts:
(sequence TRTR; hope I got all these nasty subscripts right)
between-subject:

  vBT    covTR  vBT    covTR
  covTR  vBR    covTR  vBR
  vBT    covTR  vBT    covTR
  covTR  vBR    covTR  vBR

and within-subject diagonal matrix:

  vWT  0    0    0
  0    vWR  0    0
  0    0    vWT  0
  0    0    0    vWR

I'm not so familiar with the so-called V matrix, but I guess it's the sum of both?*see PS
Thus you have 5 variance-covariance parameters to estimate:
• vBT, vBR and covTR from the between-subject variation
• vWT and vWR from the within-subject part
In your model/abbreviated output I identify only 4 covariance parameters:
vBT, vBR and covTR (the latter given as a Corr(elation)), plus one residual error term, which is presumably the mean of vWT and vWR, in other words a common within-subject error (assuming vWT = vWR = vE).

Have a look at the very beginning of this thread to see that FDA's Proc MIXED and my lme() attempt give the same covariance parameter values (except that Proc MIXED gives variances/covariances while lme() gives stddevs for the between-subject part, and an error term plus factors for T/R for the within-subject part; i.e., you have to calculate e.g. sWR as the error (residual) term multiplied by the factor for R).
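The between-subject block plus the within-subject diagonal can be sketched numerically (Python for illustration; all five parameter values are invented, not estimates from any dataset):

```python
# Build the FDA-model V for one TRTR subject as the sum of the
# between-subject block and the within-subject diagonal.
# All five values below are hypothetical.
vBT, vBR, covTR = 0.04, 0.05, 0.02    # between-subject (co)variances
vWT, vWR = 0.01, 0.03                 # within-subject variances

trt = ["T", "R", "T", "R"]            # period order in sequence TRTR

def between(i, j):
    """Between-subject entry: vBT/vBR for same treatment, covTR across."""
    if trt[i] == trt[j]:
        return vBT if trt[i] == "T" else vBR
    return covTR

V = [[between(i, j) + (vWT if trt[i] == "T" else vWR) * (i == j)
      for j in range(4)] for i in range(4)]

for row in V:
    print(row)
```

In contrast to a model with only one residual term, the cross-treatment covariance covTR and the two distinct within-subject variances all appear as separate quantities here.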

*PS: Regarding the matratzes, we have discussed this here already.
----------------------------
link to technical report corrected

Regards,

Detlew
ElMaestro
★★★

Belgium?,
2011-01-28 15:20

@ d_labes
Posting: # 6509
Views: 54,351

## V corrected

Dear d_labes,

thanks for spending time to educate me.
You might be right, as you usually are, but it makes no sense to me. Let us for the time being forget the covariance of T and R; I am not disputing that there should be a term for it, though.

With the intra-subject sWR and sWT only on the diagonal of V, my alarm bells are ringing. Imagine a ref-replicated design: following your thinking we could fit an sWT on the diagonal of V even though Test isn't replicated. That sounds very wrong.
Apparently others have obtained fits and sWT values for 2-treatment, 3-sequence, 3-period designs (see e.g. here). This is fishy business.

V can be considered as ZGZ' + R for the model aficionados, but we need not consider that here. The intra-subject issue can be modelled in either of them, as long as the resulting V is the right one.
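For the TRTR subject the decomposition can be checked numerically: with Z indicating which treatment each period received, G the 2x2 between-subject matrix (vBT, vBR, covTR) and R the within-subject diagonal, Z G Z' + R reproduces the between-plus-within structure discussed above. A sketch in Python with invented values:

```python
# Check V = Z G Z' + R element-wise for one TRTR subject.
# All variance values are hypothetical, chosen only for illustration.
vBT, vBR, covTR = 0.04, 0.05, 0.02
vWT, vWR = 0.01, 0.03

Z = [[1, 0], [0, 1], [1, 0], [0, 1]]          # columns: T effect, R effect
G = [[vBT, covTR], [covTR, vBR]]              # between-subject covariance
Rm = [[(vWT if i % 2 == 0 else vWR) * (i == j)   # diag(vWT, vWR, vWT, vWR)
       for j in range(4)] for i in range(4)]

# V[i][j] = sum_ab Z[i][a] * G[a][b] * Z[j][b] + R[i][j]
V = [[sum(Z[i][a] * G[a][b] * Z[j][b] for a in range(2) for b in range(2))
      + Rm[i][j] for j in range(4)] for i in range(4)]

for row in V:
    print([round(v, 6) for v in row])
```

Whether the T/R-specific variances are placed in G or in R, only the resulting V matters for the likelihood, which is the point made here.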

The link you pasted doesn't work here. I'd be happy to read more about it though.

Edit: Link corrected (David changed the domain name and folder from www.boomer.org/pkin/ to www.pharmpk.com/) [Helmut]

Le tits now.

Best regards,
ElMaestro
d_labes
★★★

Berlin, Germany,
2011-01-28 16:10

@ ElMaestro
Posting: # 6510
Views: 54,338

## Peculiar partial replicate

Dear Öberster Größter Meister,

» With the intra-subject sWR and sWT only on the diagonal of V my alarm bells are ringing. Imagine a ref-replicated design; following your thinking we could fit an sWT on the diagonal of V even though Test isn't replicated. This sounds very wrong. Apparently others have got fits and sWT values for 2-trt, 3-seq, 3-per designs (see e.g. here). This is fishy business.

Well noticed. We had discussed this peculiarity of the partial replicate design here in the forum, if you remember. See f.i. this post and the following ones. But be warned: rather lengthy and nitpickin'.

As you can see there, we do get s2WT, but with rather different values from different software. And, as I have noticed, you also get different values if you change the optimization parameters in SAS Proc MIXED. It seems the REML optimisation stops more or less by chance. Thus that value is not real: the model is over-specified, but the REML or ML estimation cannot, of course, decide on its own that the model is mis-specified.
It's the same as attempting to fit a 2x2 design with distinct intra-subject error terms for Test and Reference: we could not succeed, because only (s2WT+s2WR)*0.5 is estimable.

Meanwhile I am of the opinion that we cannot handle this evaluation via such a detailed model of all the different sources of variation. At any rate, I was not successful in implementing a reasonable, simpler model in Proc MIXED.
Maybe this is one reason why the FDA code given in the progesterone guidance resorts to an evaluation via intra-subject contrasts?
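The idea behind the intra-subject contrasts can be sketched as follows: within each subject, the difference of the two Reference administrations cancels all between-subject effects and has variance 2·s2WR, so s2WR is estimable without the full mixed model. A toy example in Python with invented data (the guidance actually analyses the contrasts in a linear model with sequence as a factor; that step is omitted here for brevity):

```python
# Toy sketch: estimate s2WR from intra-subject contrasts.
# Within each subject, Var(R1 - R2) = 2 * s2WR, since the
# between-subject effects cancel in the difference.
from statistics import variance

# log-values of the two Reference administrations per subject (invented)
r1 = [4.10, 3.95, 4.30, 4.05, 4.20, 3.90]
r2 = [4.00, 4.05, 4.18, 4.11, 4.09, 3.98]

d = [a - b for a, b in zip(r1, r2)]   # per-subject contrast R1 - R2
s2wr = variance(d) / 2                # sample variance of contrasts, halved
print(round(s2wr, 6))
```

Ignoring the sequence factor inflates the contrast variance slightly when sequence effects exist; the principle of avoiding the over-specified REML model is the same either way.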