mittyri
★★  

Russia,
2015-04-14 10:35
(3271 d 11:40 ago)

Posting: # 14681
Views: 29,982
 

 Reproducing Bear results in Phoenix [🇷 for BE/BA]

Thread locked
Dear Yung-jin, Helmut and all,

Could you please guide me? I cannot reproduce the results of the bear demo sheet (SingleRep – full replicate design) in Phoenix.
I used the WNL reference template published on this forum.
For Bear the results are (lnCmax only for sake of brevity):
**************** Classical (Shortest) 90% C.I. for lnCmax ****************
        Point Estimate   CI90 lower   CI90 upper
Ratio          106.581       98.364      115.485


For Phoenix:
Source    Ratio_%Ref_  CI_90_Lower  CI_90_Upper
Method C  106.5814736   97.91422516 116.0159363
Method A  106.5814736  101.8775288  111.5026114
Method B  106.5814736  101.8775288  111.5026114


Exploring the dataset I found that the sequences aren't the common RTRT/TRTR but RTTR/TRRT. Is that important for the calculations?


Edit: Category changed. [Helmut]

Kind regards,
Mittyri
Helmut
★★★
Vienna, Austria,
2015-04-14 16:07
(3271 d 06:09 ago)

@ mittyri
Posting: # 14683
Views: 28,548
 

 Bug in bear?

 
Hi Mittyri,

good news: I could reproduce your results in bear v2.6.4 (on R v3.1.3, 64 bit) and Phoenix/WinNonlin v6.4 (64 bit).

❝ Exploring the dataset I found that the sequences aren't the common RTRT/TRTR but RTTR/TRRT. Is that important for the calculations?


In the Phoenix templates, no. Personally I always code sequences for clarity – in this case RTTR instead of 1 and TRRT instead of 2.

Bad news: Something seems to be strange in bear.
Phoenix reports for the log-transformed means:

Least squares means
Formulation_Cal  Estimate  StdError Denom_DF T_stat   P_value Conf  T_crit Lower_CI Upper_CI
--------------------------------------------------------------------------------------------
             R    7.32814 0.0398516    12   183.886    0.0000   90   1.782    7.257    7.399
             T    7.39188 0.0326557    12   226.358    0.0000   90   1.782    7.334    7.450

… with an estimated difference of 0.0637395 and SE 0.0475893.

In bear:

          MEAN-ref = 7.3261
         MEAN-test = 9.5346
                SE = 0.047587
Estimate(test-ref) = 0.06374

9.5346 – 7.3261 = 0.06374‽ :confused:

If I calculate adjusted means manually …

R = (R_RTTR + R_TRRT)/2 = (7.35801 + 7.29827)/2 = 7.32814
T = (T_RTTR + T_TRRT)/2 = (7.35934 + 7.42442)/2 = 7.39188

… confirming Phoenix’ results.

How the standard errors are calculated is beyond me. Manually (by √(s²/n)) I get 0.039174 (R), 0.032647 (T), and 0.048889 (T–R). Same if I send BE Method C log | Ratios Test=T to descriptive statistics in PHX.


PS: If you want to give it a try in your preferred software, here is the dataset:
subject treatment sequence period Cmax
1          T        TRRT      1   1633
1          R        TRRT      2   1739
1          R        TRRT      3   1700
1          T        TRRT      4   1641
2          R        RTTR      1   1481
2          T        RTTR      2   1837
2          T        RTTR      3   1847
2          R        RTTR      4   1461
3          T        TRRT      1   2073
3          R        TRRT      2   1780
3          R        TRRT      3   1790
3          T        TRRT      4   2083
4          R        RTTR      1   1374
4          T        RTTR      2   1629
4          T        RTTR      3   1619
4          R        RTTR      4   1364
5          T        TRRT      1   1385
5          R        TRRT      2   1555
5          R        TRRT      3   1545
5          T        TRRT      4   1386
6          R        RTTR      1   1756
6          T        RTTR      2   1522
6          T        RTTR      3   1512
6          R        RTTR      4   1746
7          T        TRRT      1   1643
7          R        TRRT      2   1566
7          R        TRRT      3   1576
7          T        TRRT      4   1653
8          R        RTTR      1   1939
8          T        RTTR      2   1615
8          T        RTTR      3   1625
8          R        RTTR      4   1949
9          T        TRRT      1   1759
9          R        TRRT      2   1475
9          R        TRRT      3   1485
9          T        TRRT      4   1769
10         R        RTTR      1   1388
10         T        RTTR      2   1483
10         T        RTTR      3   1493
10         R        RTTR      4   1398
11         T        TRRT      1   1682
11         R        TRRT      2   1127
11         R        TRRT      3   1117
11         T        TRRT      4   1692
12         R        RTTR      1   1542
12         T        RTTR      2   1247
12         T        RTTR      3   1257
12         R        RTTR      4   1532
13         T        TRRT      1   1605
13         R        TRRT      2   1235
13         R        TRRT      3   1245
13         T        TRRT      4   1615
14         R        RTTR      1   1598
14         T        RTTR      2   1718
14         T        RTTR      3   1728
14         R        RTTR      4   1588

The dataset looks artificial to me – in most cases exactly ±10 in the respective periods…
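The adjusted-means arithmetic above can be cross-checked from this dataset with a few lines of plain Python (used here only for illustration, since the thread's tooling is R and Phoenix; the variable names are made up): the marginal mean per treatment is the average of the two sequence-by-treatment cell means of ln(Cmax).

```python
from math import log
from statistics import mean

# Cmax by subject: (sequence, [period 1..4]); transcribed from the dataset above
data = {
    1: ("TRRT", [1633, 1739, 1700, 1641]),  2: ("RTTR", [1481, 1837, 1847, 1461]),
    3: ("TRRT", [2073, 1780, 1790, 2083]),  4: ("RTTR", [1374, 1629, 1619, 1364]),
    5: ("TRRT", [1385, 1555, 1545, 1386]),  6: ("RTTR", [1756, 1522, 1512, 1746]),
    7: ("TRRT", [1643, 1566, 1576, 1653]),  8: ("RTTR", [1939, 1615, 1625, 1949]),
    9: ("TRRT", [1759, 1475, 1485, 1769]), 10: ("RTTR", [1388, 1483, 1493, 1398]),
   11: ("TRRT", [1682, 1127, 1117, 1692]), 12: ("RTTR", [1542, 1247, 1257, 1532]),
   13: ("TRRT", [1605, 1235, 1245, 1615]), 14: ("RTTR", [1598, 1718, 1728, 1588]),
}

# Collect ln(Cmax) per sequence-by-treatment cell; the sequence string
# doubles as the period-to-treatment map (e.g. "TRRT" → T, R, R, T).
cells = {}  # (sequence, treatment) -> list of ln(Cmax)
for seq, cmax in data.values():
    for trt, c in zip(seq, cmax):
        cells.setdefault((seq, trt), []).append(log(c))

# Marginal mean = average of the two sequence-by-treatment cell means
mean_R = (mean(cells[("TRRT", "R")]) + mean(cells[("RTTR", "R")])) / 2
mean_T = (mean(cells[("TRRT", "T")]) + mean(cells[("RTTR", "T")])) / 2
print(round(mean_R, 5), round(mean_T, 5))  # matches 7.32814 (R) and 7.39188 (T) above
```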

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
mittyri
★★  

Russia,
2015-04-14 22:43
(3270 d 23:32 ago)

@ Helmut
Posting: # 14688
Views: 28,234
 

 Bug in bear?

 
Guten Abend!

❝ Bad news: Something seems to be strange in bear.


I'm a little bit surprised. Was no validation performed for the replicate design? I found a validation report only for the 2×2×2 crossover (dated 2009).
I know that some companies use bear for BE estimation during the registration procedure. No statistical validation after installation?

❝ The dataset looks artificial to me – in most cases exactly ±10 in the respective periods…


The author of the package mentioned that these demo sets aren't artificial.

Kind regards,
Mittyri
Helmut
★★★
Vienna, Austria,
2015-04-15 04:17
(3270 d 17:59 ago)

@ mittyri
Posting: # 14689
Views: 28,233
 

 Bug in bear?

 
Доброе Утро!

❝ No validation was performed for replicate design?


Dunno.

❝ I know that some companies use Bear for BE estimation during the registration procedure. No statistical validation after installation?


To quote myself:

“The ultimate responsibility [for the validation] in a controlled environ-
ment lies in the user’s hands.”


I tried EMA’s (full replicate) Data Set I in bear. I recoded DATA to Cmax and logDATA to lnCmax, and kept the columns AUC0t, AUC0INF, lnAUC0t, and lnAUC0INF (all cells set to NA). The import seems to work. Observations: AUC0t, AUC0INF, lnAUC0t, and lnAUC0INF are converted to character variables with empty cells. I had to change the variable type to “numeric” in order to get the NAs back. However, after closing the data editor bear told me:
<0 rows> (or 0-length row.names)
Error in Fdata[[1]] : subscript out of bounds

I suspected issues with incomplete subjects and deleted subjects 11, 20, 24, 31, 42, 67, 69, and 71. Same error.
I guess we have to wait until Yung-jin passes by. ;-)

Helmut Schütz
yjlee168
★★★
Kaohsiung, Taiwan,
2015-04-20 12:13
(3265 d 10:02 ago)

@ Helmut
Posting: # 14712
Views: 26,467
 

 use the same dataset?

 
Dear Helmut,

Finally, I don't think so. See the following:

❝ ...the dataset:

subject treatment sequence period Cmax

❝ 1          T        TRRT      1   1633

❝ 1          R        TRRT      2   1739

❝ 1          R        TRRT      3   1700 1650

❝ 1          T        TRRT      4   1641 1541

❝ 2          R        RTTR      1   1481

❝ 2          T        RTTR      2   1837

❝ 2          T        RTTR      3   1847

❝ 2          R        RTTR      4   1461

❝ 3          T        TRRT      1   2073

❝ 3          R        TRRT      2   1780

❝ 3          R        TRRT      3   1790 1800

❝ 3          T        TRRT      4   2083 2183

❝ 4          R        RTTR      1   1374

❝ 4          T        RTTR      2   1629

❝ 4          T        RTTR      3   1619

❝ 4          R        RTTR      4   1364 1564

❝ 5          T        TRRT      1   1385

❝ 5          R        TRRT      2   1555

❝ 5          R        TRRT      3   1545

❝ 5          T        TRRT      4   1386

❝ 6          R        RTTR      1   1756

❝ 6          T        RTTR      2   1522

❝ 6          T        RTTR      3   1512

❝ 6          R        RTTR      4   1746

❝ 7          T        TRRT      1   1643

❝ 7          R        TRRT      2   1566

❝ 7          R        TRRT      3   1576 1680

❝ 7          T        TRRT      4   1653

❝ 8          R        RTTR      1   1939

❝ 8          T        RTTR      2   1615

❝ 8          T        RTTR      3   1625 1725

❝ 8          R        RTTR      4   1949

❝ 9          T        TRRT      1   1759

❝ 9          R        TRRT      2   1475

❝ 9          R        TRRT      3   1485 1885

❝ 9          T        TRRT      4   1769

❝ 10         R        RTTR      1   1388

❝ 10         T        RTTR      2   1483

❝ 10         T        RTTR      3   1493 1693

❝ 10         R        RTTR      4   1398 1598

❝ 11         T        TRRT      1   1682

❝ 11         R        TRRT      2   1127

❝ 11         R        TRRT      3   1117 1317

❝ 11         T        TRRT      4   1692

❝ 12         R        RTTR      1   1542

❝ 12         T        RTTR      2   1247

❝ 12         T        RTTR      3   1257 1657

❝ 12         R        RTTR      4   1532 1632

❝ 13         T        TRRT      1   1605

❝ 13         R        TRRT      2   1235

❝ 13         R        TRRT      3   1245 1440

❝ 13         T        TRRT      4   1615

❝ 14         R        RTTR      1   1598

❝ 14         T        RTTR      2   1718

❝ 14         T        RTTR      3   1728 1828

❝ 14         R        RTTR      4   1588 1688


[edited] I think that the output dataset (SingleRep_stat_demo.csv, as in your original post) is not consistent with the one really used to run the demo. My mistake again – very sorry about that. :-( I will make it consistent later.

All the best,
-- Yung-jin Lee
bear v2.9.1:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
yjlee168
★★★
Kaohsiung, Taiwan,
2015-04-15 11:38
(3270 d 10:38 ago)

@ mittyri
Posting: # 14691
Views: 28,228
 

 Bugs in bear with replicated demo data set?

 
Dear mittyri and Helmut,

Thanks for reporting the errors in bear. Please allow me to quickly respond to all your questions first.
  1. Data source of the replicated BE demo study used in bear: yes, it is indeed an artificial data set, generated from a 2×2×2 study; the data set of the 2×2×2 demo itself is real. That's because we do not have real replicate data to test with. Some users previously suggested that data sets from textbooks or the literature could be used instead. But I have to admit that we validated the results manually with MS Excel for some steps only (not all steps; we have had many discussions in this forum about the differences between SAS and R in fitting linear mixed models). Very sorry about this.
  2. Differences between WNL and bear as the OP mentioned: I will check the code and the update history again and will report what I find here ASAP.
  3. Error in Fdata[[1]] : subscript out of bounds: Helmut, may I have your data file to reproduce this error? I have never seen that error before.
Please let me know if I still miss anything for this thread.

All the best,
-- Yung-jin Lee
yjlee168
★★★
Kaohsiung, Taiwan,
2015-04-17 03:21
(3268 d 18:55 ago)

(edited by yjlee168 on 2015-04-17 12:53)
@ mittyri
Posting: # 14693
Views: 27,880
 

 Bugs fixed in bear?

 
Dear mittyri and Helmut,

I would like to respond to all questions in this thread.
  • There is still no way (so far) to validate bear against WNL or SAS when analyzing replicate data sets, because of the differences between the mixed models applied by R (lme) and SAS (PROC MIXED) or WNL (same as SAS?). The issue has been discussed a lot in this forum.
  • The error reported by Helmut has been fixed. It was due to an error in parameter transfer between built-in functions. It affects replicate and parallel data analysis. The correct output should be

      Dependent Variable: log(Cmax)       
    --------------------------------------
              n1(seq 1)= 7
              n2(seq 2)= 7
              N(n1+n2) = 14
        Lower criteria = 80.000 %
        Upper criteria = 125.000 %
              MEAN-ref = 7.326071
             MEAN-test = 7.39
                    SE = 0.04758714
    Estimate(test-ref) = 0.06373952


  • The data loading error (Fdata[[1]] : subscript out of bounds): I am still working on it. Loading data is OK now, but it needs more tests. When omitting NAs, R deletes the entire row (i.e., the data of an entire period). The error was caused by applying NA omission after loading the data: everything was gone, since all AUC0-t and AUC0-inf cells are NAs. The other thing is that using a semicolon as the separator in a *.csv dataset may not work properly with bear's import; if so, consider using a comma as the separator instead. I still don't know why.
  • Hopefully all of this can be done for the next release (v2.6.5).
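The NA-omission failure mode described above is easy to reproduce in miniature (a plain-Python stand-in for R's na.omit, with made-up rows): if one column consists entirely of NAs, row-wise omission removes every row.

```python
# Toy rows mimicking the imported dataset: the AUC column is entirely missing,
# as in the EMA data set where only Cmax/lnCmax were filled in.
rows = [
    {"subj": 1, "lnCmax": 7.40, "AUC0t": None},
    {"subj": 2, "lnCmax": 7.32, "AUC0t": None},
]

# Row-wise "NA omit": drop any row that contains a missing value.
complete = [r for r in rows if None not in r.values()]
print(complete)  # [] – everything is gone, exactly the failure mode described above
```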
That's all. Have a nice weekend.

All the best,
-- Yung-jin Lee
Helmut
★★★
Vienna, Austria,
2015-04-17 16:25
(3268 d 05:51 ago)

@ yjlee168
Posting: # 14694
Views: 27,863
 

 Bugs fixed in bear?

 
Dear Yung-jin,

❝ … SAS (PROC MIXED) or WNL (same as SAS?) …


PHX/WNL uses a mixed-effects model with REML. In PROC MIXED you can use ML as well.

[…] The correct one should be


  Dependent Variable: log(Cmax)       

--------------------------------------

          MEAN-ref = 7.326071

         MEAN-test = 7.39

                SE = 0.04758714

Estimate(test-ref) = 0.06373952


Hhm. I got R 7.32814, T 7.39188, PE 0.0637395 (PHX and manually) and SE 0.0475893 (PHX). Both of bear’s means (R and T) are shifted by –0.03% compared to PHX/manual. It seems that the PE is correct, but how is it calculated from the means? IMHO 7.39 – 7.326071 = 0.063929 ≠ 0.06373952.

Helmut Schütz
yjlee168
★★★
Kaohsiung, Taiwan,
2015-04-17 20:50
(3268 d 01:25 ago)

(edited by yjlee168 on 2015-04-17 21:48)
@ Helmut
Posting: # 14695
Views: 27,876
 

 Bugs fixed in bear?

 
Dear Helmut,

Estimate(test-ref) here is obtained from lme() (the drug2 value in the output below),

  Statistical analysis (lme) - replicate BE study               
-------------------------------------------------
  Dependent Variable: ln(Cmax)                                           
Linear mixed-effects model fit by REML
 Data: inputdata
        AIC       BIC   logLik
  -174.2564 -153.2241 98.12818

Random effects:
 Formula: ~drug - 1 | subj
 Structure: General positive-definite, Log-Cholesky parametrization
         StdDev      Corr
drug1    0.149004554 drug1
drug2    0.122159679 0.15
Residual 0.007644172     

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | drug
 Parameter estimates:
        1         2
1.0000000 0.4391939
Fixed effects: log(Cmax) ~ seq + prd + drug
                Value  Std.Error DF   t-value p-value
(Intercept)  7.356274 0.05636146 38 130.51957  0.0000
seq2         0.002664 0.05517303 12   0.04829  0.9623
prd2        -0.061700 0.04760132 38  -1.29618  0.2027
prd3        -0.059641 0.04760132 38  -1.25293  0.2179
prd4         0.003482 0.00164306 38   2.11917  0.0407
drug2        0.063740 0.04758714 38   1.33943  0.1884
 Correlation:
      (Intr) seq2   prd2   prd3   prd4 
seq2  -0.573                           
prd2  -0.520  0.199                     
prd3  -0.520  0.199  0.999             
prd4  -0.015  0.000  0.017  0.017       
drug2 -0.519  0.000  0.000  0.000  0.000
...

The point estimate is calculated as 100·e^Estimate(test–ref).
The means are calculated directly from the dataset:
Fdata<-split(TotalData, list(TotalData$drug))
RefData<-Fdata[[1]]
TestData<-Fdata[[2]]
ref_lnCmax<-mean(RefData$lnCmax)   ### MEAN-ref
test_lnCmax<-mean(TestData$lnCmax) ### MEAN-test
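As a quick sanity check on the back-transformation (a plain-Python one-liner, just to illustrate the arithmetic): the drug2 estimate from the lme() output above does reproduce the point estimate reported in the first post.

```python
from math import exp

estimate = 0.06373952    # Estimate(test-ref), i.e. the drug2 fixed effect above
pe = 100 * exp(estimate) # back-transform the log-scale difference to a percent ratio
print(round(pe, 4))      # 106.5815 – matching Ratio_%Ref_ = 106.5814736 in Phoenix
```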

[edited] Do we use the same dataset? :confused:

All the best,
-- Yung-jin Lee
ElMaestro
★★★

Denmark,
2015-04-18 01:14
(3267 d 21:02 ago)

@ yjlee168
Posting: # 14696
Views: 27,693
 

 Bugs fixed in bear?

 
Hi yung-jin (and Helmut a bit),

I have no certain idea about what's going on there but:

❝ Fixed effects: log(Cmax) ~ seq + prd + drug

  1. Could you try and fit fixed effects as log(Cmax) ~ 0+ drug + seq + prd, and then extract your drug effects directly from that.
  2. Could you compare and check the presence and handling of na's in your comparisons?

Pass or fail!
ElMaestro
yjlee168
★★★
Kaohsiung, Taiwan,
2015-04-18 13:12
(3267 d 09:04 ago)

@ ElMaestro
Posting: # 14697
Views: 27,662
 

 lme() in bear

 
Dear Elmaestro,

❝ 1. Could you try and fit fixed effects as log(Cmax) ~ 0+ drug + seq + prd, and then extract your drug effects directly from that.


OK. add an intercept?

  Statistical analysis (lme) - replicate BE study               
-------------------------------------------------
  Dependent Variable: ln(Cmax)                                           
Linear mixed-effects model fit by REML
 Data: inputdata
        AIC       BIC   logLik
  -174.2564 -153.2241 98.12818

Random effects:
 Formula: ~drug - 1 | subj
 Structure: General positive-definite, Log-Cholesky parametrization
         StdDev      Corr
drug1    0.149004554 drug1
drug2    0.122159679 0.15
Residual 0.007644172     

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | drug
 Parameter estimates:
        1         2
1.0000000 0.4391939
Fixed effects: log(Cmax) ~ 0 + seq + prd + drug
          Value  Std.Error DF   t-value p-value
seq1   7.356274 0.05636146 12 130.51957  0.0000
seq2   7.358938 0.05152645 12 142.81866  0.0000
prd2  -0.061700 0.04760132 39  -1.29618  0.2025
prd3  -0.059641 0.04760132 39  -1.25293  0.2177
prd4   0.003482 0.00164306 39   2.11917  0.0405
drug2  0.063740 0.04758714 39   1.33943  0.1882


Looks like the same.

❝ 2. Could you compare and check the presence and handling of na's in your comparisons?

  • In this demo dataset there are no NAs involved.
  • Any NA must be removed first (as the entire period) before running lme(); otherwise it will crash.
So there is no way to compare the difference.

All the best,
-- Yung-jin Lee
ElMaestro
★★★

Denmark,
2015-04-18 13:28
(3267 d 08:48 ago)

@ yjlee168
Posting: # 14698
Views: 27,598
 

 lme() in bear

 
Hi yung-jin,

❝ ❝ 1. Could you try and fit fixed effects as log(Cmax) ~ 0+ drug + seq + prd, and then extract your drug effects directly from that.


❝ OK. add an intercept?


No, take away an intercept.

❝ Fixed effects: log(Cmax) ~ 0 + seq + prd + drug


In a sense this isn't what I suggested. Please try to fit it with drug specified as the first effect. Instead of
log(Cmax) ~ 0 + seq + prd + drug
see if you can make sure it somehow fits
log(Cmax) ~ 0 + drug + prd + seq
or equally good
log(Cmax) ~ 0 + drug + seq + prd

The order of effects has lexical importance and determines how many effect estimates you get per term. Accordingly, when you fit sequence as the first term, you will not see one effect estimate for each drug.
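The parameterization point can be seen in a toy one-factor example (a plain-Python sketch with made-up numbers; no mixed model needed): in a balanced one-factor model, the no-intercept OLS coefficients are simply the per-level means, whereas with an intercept (or with another factor listed first) one level is absorbed and the remaining drug term is a single contrast.

```python
from statistics import mean

# Made-up log-responses for two "drug" levels (illustration only)
y = {"drug1": [7.30, 7.33, 7.35], "drug2": [7.38, 7.40, 7.42]}

# y ~ 0 + drug: in a one-factor model without intercept, the OLS solution
# is one coefficient per level – the per-level mean.
coef_no_intercept = {lvl: mean(vals) for lvl, vals in y.items()}

# y ~ drug: with an intercept, the first level is absorbed into the
# intercept and the single remaining drug coefficient is the contrast.
intercept = mean(y["drug1"])
drug2_contrast = mean(y["drug2"]) - mean(y["drug1"])

print(coef_no_intercept)          # one estimate per drug level
print(intercept, drug2_contrast)  # reference mean + treatment difference
```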

Good luck and sorry if my initial suggestion was not too well described.

Pass or fail!
ElMaestro
yjlee168
★★★
Kaohsiung, Taiwan,
2015-04-18 14:02
(3267 d 08:14 ago)

@ ElMaestro
Posting: # 14699
Views: 27,708
 

 lme() in bear

 
Dear ElMaestro,

❝ No, take away an intercept.


I see.

❝ or equally good

log(Cmax) ~ 0 + drug + seq + prd <-- I pick this one.


  Statistical analysis (lme) - replicate BE study               
-------------------------------------------------
  Dependent Variable: ln(Cmax)                                           
Linear mixed-effects model fit by REML
 Data: inputdata
        AIC       BIC   logLik
  -174.2564 -153.2241 98.12818

Random effects:
 Formula: ~drug - 1 | subj
 Structure: General positive-definite, Log-Cholesky parametrization
         StdDev      Corr
drug1    0.149004555 drug1
drug2    0.122159678 0.15
Residual 0.007644172     

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | drug
 Parameter estimates:
        1         2
1.0000000 0.4391939
Fixed effects: log(Cmax) ~ 0 + drug + seq + prd
          Value  Std.Error DF   t-value p-value
drug1  7.356274 0.05636146 38 130.51957  0.0000
drug2  7.420013 0.05152645 38 144.00398  0.0000
seq2   0.002664 0.05517303 13   0.04829  0.9622
prd2  -0.061700 0.04760132 38  -1.29618  0.2027
prd3  -0.059641 0.04760132 38  -1.25293  0.2179
prd4   0.003482 0.00164306 38   2.11917  0.0407


❝ order of effects has lexical importance and determines how many effect estimates you get per term. Accordingly, when you fit sequence as the first term you will not see one effect estimate for each drug.


Aha! Now I remember that. Thank you so much for refreshing my memory. But now I don't know which one I should extract. :confused: What do you think?

All the best,
-- Yung-jin Lee
ElMaestro
★★★

Denmark,
2015-04-18 15:42
(3267 d 06:34 ago)

@ yjlee168
Posting: # 14701
Views: 27,582
 

 lme() in bear

 
Hi yung-jin,

Fixed effects: log(Cmax) ~ 0 + drug + seq + prd

          Value  Std.Error DF   t-value p-value

drug1  7.356274 0.05636146 38 130.51957  0.0000

drug2  7.420013 0.05152645 38 144.00398  0.0000


❝ Aha! Now I remember that. Thank you so much to refresh my memory. But now I don't know which one I should extract. :confused: What do you think?


Drug diff. = 7.420013 – 7.356274 ≈ 0.06374.

Perhaps we should turn to contrast coding to find out why Helmut's effects don't match. As long as the drug difference is good there is no genuine problem, right?

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2015-04-18 16:08
(3267 d 06:08 ago)

@ ElMaestro
Posting: # 14702
Views: 27,573
 

 Mean means

 
Hi ELMaestro,

❝ As long as the drug difference is good there is no genuine problem, right?


For BE, yes.
But do you think that assessors (re-calculating studies) would love to see treatment means differing from results obtained elsewhere (software, pocket calculator, paper/pencil/brain)?

Helmut Schütz
ElMaestro
★★★

Denmark,
2015-04-18 18:01
(3267 d 04:15 ago)

@ Helmut
Posting: # 14703
Views: 27,642
 

 Mean means

 
Good afternoon Helmut,

❝ But do you think that assessors (re-calculating studies) would love to see treatment means differing to other results (software, pocket-calculator, paper/pencil/brain)?


In all likelihood not. "If it doesn't match SAS then it must be wrong", right?
I must read up on the contrast-coding (c.c.) matter when I have the time. By the way, I wonder whether "LS Means" have any meaningful interpretation for mixed models when we are not working with least-squares minimisation? I can't say the definition of LS Means that I have seen in e.g. the SAS help files is particularly helpful to me. Perhaps this is just due to my walnut-sized brain that gives up too quickly…

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2015-04-19 02:59
(3266 d 19:16 ago)

@ ElMaestro
Posting: # 14705
Views: 27,397
 

 Mean means

 
Hi ElMaestro,

❝ "If it doesn't match SAS then it must be wrong", right?


Didn’t say that. ;-)
If we have a four-period, two-sequence replicate design, balanced, and complete data: marginal treatment means = adjusted means (SAS lingo: LSMs).

❝ I must read up on the c.c. matter when I have got the time. By the way I wonder if "LS Means" have any meaningful interpretation for mixed models when we are not working with least squares minimisation??


Why the heck? IMHO, for this dataset all means should agree, regardless of the model – which they do in Vienna (to 15 significant digits; not shown):
                                  R        T
Method A (EMA, subjects fixed)   7.328141 7.391880
Method B (EMA, subjects random)  7.328141 7.391880
Method C (FDA = mixed effects)   7.328141 7.391880
Marginal means (Gnumeric)        7.328141 7.391880

So why do we get…
bear                             7.356274 7.420013
… which are ~0.4% higher for both treatments? No, I’m not satisfied if only the PE agrees – I expect to get a nasty surprise once we start to deal with unbalanced data.

Helmut Schütz
ElMaestro
★★★

Denmark,
2015-04-19 11:14
(3266 d 11:01 ago)

@ Helmut
Posting: # 14706
Views: 27,296
 

 Mean means

 
Hi Helmut,

                                  R        T

❝ Method A (EMA, subjects fixed)   7.328141 7.391880

❝ Method B (EMA, subjects random)  7.328141 7.391880

❝ Method C (FDA = mixed efects)    7.328141 7.391880

❝ Marginal means (Gnumeric)        7.328141 7.391880

❝ So why do we get…

bear                             7.356274 7.420013

❝ … which are ~0.4% higher for both treatments? No, I’m not satisfied if only the PE agrees – I expect to get a nasty surprise once we start to deal with unbalanced data.


You're right, it does look surprising.
Can you ask Phoenix what the log likelihood was when it had identified the minimum and compare with Yung-jin's logLik above?

And what about those DF's?

Pass or fail!
ElMaestro
Helmut
★★★
Vienna, Austria,
2015-04-19 13:08
(3266 d 09:08 ago)

@ ElMaestro
Posting: # 14707
Views: 27,172
 

 LL and AIC

 
Hi ElMaestro,

❝ You're right, it does look surprising.


Since Gnumeric is my (non-)pocket calculator (no modeling!) I concur that PHX’ results are the “correct” ones.

❝ Can you ask Phoenix what the log likelihood was when it had identified the minimum and compare with Yung-jin's logLik above?


Maybe it’s better to look at the AIC (lower = better)
               AIC       LL
bear          -174.2564  98.12818
PHX Method A   -29.0235  33.51175
PHX Method B   -54.8339  35.41696
PHX Method C  -190.0907 106.04533


❝ And what about those DF's?


Identical.

Helmut Schütz
ElMaestro
★★★

Denmark,
2015-04-19 13:42
(3266 d 08:33 ago)

@ Helmut
Posting: # 14708
Views: 27,172
 

 Confused

 
Good morning Helmut,

❝ Maybe it’s better to look at the AIC (lower = better)

               AIC

❝ bear          -174.2564

❝ PHX Method A   -29.0235

❝ PHX Method B   -54.8339

❝ PHX Method C  -190.0907


I'd prefer to compare the log likelihoods to see which optimiser found the best peak, if they are not the same.
The 1:1 comparison here is Method C vs. bear using lme. The above AICs give the impression that bear did not find the optimum well – possibly something to do with the model specification – but I hope to see the log likelihoods and the DFs.
From the posts above I'd say I am not completely sure the DFs are equal all over the range, cf. the drug DFs used for the t-statistics.

Pass or fail!
ElMaestro
yjlee168
★★★
Kaohsiung, Taiwan,
2015-04-20 01:36
(3265 d 20:40 ago)

@ Helmut
Posting: # 14709
Views: 27,182
 

 lsmeans for mixed model in R

 
Dear Helmut and Elmaestro,

I also tried some other methods: I used the two R packages lsmeans and lmerTest to analyze the mixed model.
If I use lmerTest (which also requires the package pbkrtest),
modlnCmax<-lmer(log(Cmax) ~ seq +  prd + drug + (1|subj), data=TotalData, REML=TRUE)
lsm_lnCmax<-lsmeans(modlnCmax,test.effs="drug") ### for lmerTest, or
lsm_lnCmax<-lsmeans(modlnCmax,"drug")           ### for lsmeans
I got the results as
Least Squares Means table:
        seq prd drug Estimate Standard Error   DF t-value Lower CI Upper CI p-value   
drug  1  NA  NA  1.0   7.3633         0.0273 19.4     270     7.31     7.42  <2e-16 ***
drug  2  NA  NA  2.0   7.4098         0.0273 19.4     272     7.35     7.47  <2e-16 ***


If I used lsmeans package, I got
drug   lsmean         SE    df lower.CL upper.CL
 1    7.363255 0.02726719 19.38 7.306260 7.420250
 2    7.409809 0.02726719 19.38 7.352813 7.466804

Results are averaged over the levels of: seq, prd
Confidence level used: 0.95

Original results from bear are
  summary  Test   Ref Ratio
1  LSMEAN 7.411 7.365 1.006
2    S.D. 0.106 0.120 0.020
3  C.V(%) 1.424 1.633 1.992


Both lsmeans and lmerTest only work with lmer(), not lme(). What do you think about these results? I'm :confused:.

Ref. link: detlew's The SAS way in Rrrr land

❝ ...

                                  R        T

❝ Method A (EMA, subjects fixed)   7.328141 7.391880

❝ Method B (EMA, subjects random)  7.328141 7.391880

❝ Method C (FDA = mixed efects)    7.328141 7.391880

❝ Marginal means (Gnumeric)        7.328141 7.391880

❝ So why do we get…

bear                             7.356274 7.420013

❝ … which are ~3.8% higher for both treatments? No, I’m not satisfied if only the PE agrees – I expect to get a nasty surprise once we start to deal with un­ba­lanced data.


All the best,
-- Yung-jin Lee
Helmut
★★★
Vienna, Austria,
2015-04-20 03:28
(3265 d 18:47 ago)

@ yjlee168
Posting: # 14711
Views: 26,800
 

 lsmeans() & lme()

 
Hi Yung-jin,

funny – while you posted I experimented with lsmeans as well. Not so bad. ;-)

❝ lsmeans only work with lmer(), not lme().


That’s not correct – see help(models)!

Hint: if, in a replicate design evaluated by a mixed-effects model, the SEs of R and T are equal, something is likely wrong with the coding.

I hijacked your RepMIX() and renamed it to honor ElMaestro.
# Change to directory containing "SingleRep_stat_demo.csv" first!
# setwd("E:/Public/Documents/BEBAC/bear/")
library(nlme)
library(lsmeans)
cnames <- c("subj", "drug", "seq", "prd", "Cmax",
  "AUC0t","AUC0INF","lnCmax","lnAUC0t","lnAUC0INF")
TotalData <- read.csv("SingleRep_stat_demo.csv",
  header=T, row.names=NULL, col.names=cnames, sep=",", dec=".")
TotalData[, !(names(TotalData) %in% c("AUC0t", "AUC0INF", "lnCmax",
  "lnAUC0t", "lnAUC0INF"))]
muddle <- lme(log(Cmax) ~ drug+seq+prd, random = ~drug-1|subj,
  data=TotalData)
lsmeans(muddle, "drug", cov.reduce=F, weights="equal")

 drug   lsmean         SE df lower.CL upper.CL
    1 7.328141 0.02504047 12 7.273582 7.382699
    2 7.391880 0.03485385 12 7.315940 7.467820

Results are averaged over the levels of: seq, prd
Confidence level used: 0.95


Bingo for the means! The SEs are still off (compared to PHX).
Level Estimate StdError DF
  R   7.328141 0.039852 12
  T   7.391880 0.032656 12

At least something to start from.


Post #500 this year.

Helmut Schütz
yjlee168
★★★
Kaohsiung, Taiwan,
2015-04-20 12:23
(3265 d 09:52 ago)

@ Helmut
Posting: # 14713
Views: 26,421
 

 lsmeans() & lme()

 
Dear Helmut,

Yes, this message is great. :thumb up:

❝ That’s not correct – see help(models)!


Thanks. Got it.

❝ ...

❝ TotalData[, !(names(TotalData) %in% c("AUC0t", "AUC0INF", "lnCmax",

❝ "lnAUC0t","lnAUC0INF"))]

❝ muddle <- lme(log(Cmax) ~ drug+seq+prd, random = ~drug-1|subj,

❝ data=TotalData)


I need one more line to run lsmeans() here: muddle.rg1 <- ref.grid(muddle, data=TotalData); otherwise, it crashes complaining that there is no data. I don't know why.

lsmeans(muddle.rg1, "drug", cov.reduce=F, weights="equal")


❝ Bingo for the means! The SEs are still off (compared to PHX).

Level Estimate StdError DF

❝   R   7.328141 0.039852 12
❝   T   7.391880 0.032656 12

❝ At least something to start from.


OK.


❝ Post #500 this year.

Congratulations! :ok:

All the best,
-- Yung-jin Lee
ElMaestro
★★★

Denmark,
2015-04-20 12:35
(3265 d 09:41 ago)

@ Helmut
Posting: # 14714
Views: 26,548
 

 lsmeans() & lme()

 
Gentlemen,

❝ lsmeans(muddle, "drug", cov.reduce=F, weights="equal")


❝ drug   lsmean         SE df lower.CL upper.CL

❝    1 7.328141 0.02504047 12 7.273582 7.382699

❝    2 7.391880 0.03485385 12 7.315940 7.467820


❝ Results are averaged over the levels of: seq, prd

❝ Confidence level used: 0.95



Well, it seems you are finally getting there, although it really isn't pretty. I think all this is about the perception that treatment effects must be Least Squares Means, whatever that term truly means. I don't uncritically subscribe to that view. R actually got those treatment effects right, but it did not report what was expected or hoped for by those who had a love affair with SAS' invention.

Note the 'averaged over the levels' part. In a nutshell, this reminds me that one day I will add a package called 'marginal means' and it will do absolutely nothing except alias the lsmeans function to a function called marginal.means, and suddenly everything will make a lot more sense. I will receive the Fields Medal for it. At least. Plus 17 Michelin Stars and the Golden Palms.

Pass or fail!
ElMaestro
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2015-04-20 16:34
(3265 d 05:42 ago)

@ ElMaestro
Posting: # 14715
Views: 26,762
 

 keep it simple!

 
Hi ElMaestro,

❝ […] those who had a love affair with SAS' invention.


Statler & Waldorf-day today?

❝ […] one day I will add a package called 'marginal means' and it will do absolutely nothing except alias the lsmeans function to a function called marginal.means and suddenly everything will make a lot more sense. I will receive the Fields Medal for it. At least. Plus 17 Michelin Stars and the Golden Palms.


Go ahead. I have to repeat myself: here we are dealing with a balanced, replicated design and complete data. I’m not (!) dealing with any muddle. I’m averaging subjects’ responses for each treatment and later averaging the means.
Try it in OO Calc:

subj drug seq prd lnCmax   meanR   meanT    R(1)    R(2)    T(1)    T(2)
1      1   2   2  7.46107 7.44972 ------- ------- 7.44972 ------- -------
1      1   2   3  7.43838 ------- ------- ------- ------- ------- -------
1      2   2   1  7.39817 ------- 7.40062 ------- ------- ------- 7.40062
1      2   2   4  7.40306 ------- ------- ------- ------- ------- -------
2      1   1   1  7.30047 7.29367 ------- 7.29367 ------- ------- -------
2      1   1   4  7.28688 ------- ------- ------- ------- ------- -------
2      2   1   2  7.51589 ------- 7.51860 ------- ------- 7.51860 -------
2      2   1   3  7.52132 ------- ------- ------- ------- ------- -------
3      1   2   2  7.48437 7.48717 ------- ------- 7.48717 ------- -------
3      1   2   3  7.48997 ------- ------- ------- ------- ------- -------
3      2   2   1  7.63675 ------- 7.63916 ------- ------- ------- 7.63916
3      2   2   4  7.64156 ------- ------- ------- ------- ------- -------
4      1   1   1  7.22548 7.22183 ------- 7.22183 ------- ------- -------
4      1   1   4  7.21818 ------- ------- ------- ------- ------- -------
4      2   1   2  7.39572 ------- 7.39264 ------- ------- 7.39264 -------
4      2   1   3  7.38956 ------- ------- ------- ------- ------- -------
5      1   2   2  7.34923 7.34601 ------- ------- 7.34601 ------- -------
5      1   2   3  7.34278 ------- ------- ------- ------- ------- -------
5      2   2   1  7.23346 ------- 7.23382 ------- ------- ------- 7.23382
5      2   2   4  7.23418 ------- ------- ------- ------- ------- -------
6      1   1   1  7.47079 7.46794 ------- 7.46794 ------- ------- -------
6      1   1   4  7.46508 ------- ------- ------- ------- ------- -------
6      2   1   2  7.32778 ------- 7.32448 ------- ------- 7.32448 -------
6      2   1   3  7.32119 ------- ------- ------- ------- ------- -------
7      1   2   2  7.35628 7.35946 ------- ------- 7.35946 ------- -------
7      1   2   3  7.36265 ------- ------- ------- ------- ------- -------
7      2   2   1  7.40428 ------- 7.40731 ------- ------- ------- 7.40731
7      2   2   4  7.41035 ------- ------- ------- ------- ------- -------
8      1   1   1  7.56993 7.57250 ------- 7.57250 ------- ------- -------
8      1   1   4  7.57507 ------- ------- ------- ------- ------- -------
8      2   1   2  7.38709 ------- 7.39018 ------- ------- 7.39018 -------
8      2   1   3  7.39326 ------- ------- ------- ------- ------- -------
9      1   2   2  7.29641 7.29979 ------- ------- 7.29979 ------- -------
9      1   2   3  7.30317 ------- ------- ------- ------- ------- -------
9      2   2   1  7.47250 ------- 7.47534 ------- ------- ------- 7.47534
9      2   2   4  7.47817 ------- ------- ------- ------- ------- -------
10     1   1   1  7.23562 7.23921 ------- 7.23921 ------- ------- -------
10     1   1   4  7.24280 ------- ------- ------- ------- ------- -------
10     2   1   2  7.30182 ------- 7.30518 ------- ------- 7.30518 -------
10     2   1   3  7.30854 ------- ------- ------- ------- ------- -------
11     1   2   2  7.02731 7.02286 ------- ------- 7.02286 ------- -------
11     1   2   3  7.01840 ------- ------- ------- ------- ------- -------
11     2   2   1  7.42774 ------- 7.43070 ------- ------- ------- 7.43070
11     2   2   4  7.43367 ------- ------- ------- ------- ------- -------
12     1   1   1  7.34084 7.33758 ------- 7.33758 ------- ------- -------
12     1   1   4  7.33433 ------- ------- ------- ------- ------- -------
12     2   1   2  7.12850 ------- 7.13249 ------- ------- 7.13249 -------
12     2   1   3  7.13648 ------- ------- ------- ------- ------- -------
13     1   2   2  7.11883 7.12286 ------- ------- 7.12286 ------- -------
13     1   2   3  7.12689 ------- ------- ------- ------- ------- -------
13     2   2   1  7.38088 ------- 7.38398 ------- ------- ------- 7.38398
13     2   2   4  7.38709 ------- ------- ------- ------- ------- -------
14     1   1   1  7.37651 7.37337 ------- 7.37337 ------- ------- -------
14     1   1   4  7.37023 ------- ------- ------- ------- ------- -------
14     2   1   2  7.44892 ------- 7.45182 ------- ------- 7.45182 -------
14     2   1   3  7.45472 ------- ------- ------- ------- ------- -------
                     mean 7.32814 7.39188 7.35801 7.29827 7.35934 7.42442
                        n 14      14      7       7       7       7

(7.35801+7.29827)/2=7.32814
(7.35934+7.42442)/2=7.39188


… or:

cnames <- c("subj", "drug", "seq", "prd", "Cmax",
  "AUC0t","AUC0INF","lnCmax","lnAUC0t","lnAUC0INF")
TotalData <- read.csv("SingleRep_stat_demo.csv",
  header=T, row.names=NULL, col.names=cnames, sep=",", dec=".")
Cmax <- TotalData[, !(names(TotalData) %in% c("AUC0t", "AUC0INF",
  "lnCmax", "lnAUC0t","lnAUC0INF"))]
RSeq1 <- subset(Cmax, (drug == 1 & seq == 1))
RSeq2 <- subset(Cmax, (drug == 1 & seq == 2))
TSeq1 <- subset(Cmax, (drug == 2 & seq == 1))
TSeq2 <- subset(Cmax, (drug == 2 & seq == 2))
RSeq1[, "lnCmax"] <- log(RSeq1$Cmax)
RSeq2[, "lnCmax"] <- log(RSeq2$Cmax)
TSeq1[, "lnCmax"] <- log(TSeq1$Cmax)
TSeq2[, "lnCmax"] <- log(TSeq2$Cmax)
SubjInRSeq1 <- length(RSeq1[, 1])/2
SubjInRSeq2 <- length(RSeq2[, 1])/2
SubjInTSeq1 <- length(TSeq1[, 1])/2
SubjInTSeq2 <- length(TSeq2[, 1])/2
SubjMeansInRSeq1 <- vector("numeric", length=SubjInRSeq1)
SubjMeansInRSeq2 <- vector("numeric", length=SubjInRSeq2)
SubjMeansInTSeq1 <- vector("numeric", length=SubjInTSeq1)
SubjMeansInTSeq2 <- vector("numeric", length=SubjInTSeq2)
for(j in 1:SubjInRSeq1)
  SubjMeansInRSeq1[j] <- (RSeq1$lnCmax[j]+RSeq1$lnCmax[j+SubjInRSeq1])/2
for(j in 1:SubjInRSeq2)
  SubjMeansInRSeq2[j] <- (RSeq2$lnCmax[j]+RSeq2$lnCmax[j+SubjInRSeq2])/2
for(j in 1:SubjInTSeq1)
  SubjMeansInTSeq1[j] <- (TSeq1$lnCmax[j]+TSeq1$lnCmax[j+SubjInTSeq1])/2
for(j in 1:SubjInTSeq2)
  SubjMeansInTSeq2[j] <- (TSeq2$lnCmax[j]+TSeq2$lnCmax[j+SubjInTSeq2])/2
SubjMeansR <- c(SubjMeansInRSeq1, SubjMeansInRSeq2)
SubjMeansT <- c(SubjMeansInTSeq1, SubjMeansInTSeq2)
MeanR <- sum(SubjMeansR)/(SubjInRSeq1+SubjInRSeq2)
MeanT <- sum(SubjMeansT)/(SubjInTSeq1+SubjInTSeq2)
cat("R:", sprintf("%.6f", MeanR),
"\nT:", sprintf("%.6f", MeanT), "\n")

R: 7.328141
T: 7.391880
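The same two-step averaging (subject means first, then the per-treatment mean) can be condensed with aggregate() and tapply(). A sketch on a miniature dataset – only the lnCmax values of subjects 1 and 2 from the table above, so the means differ from the full-data ones – just to show that with complete, balanced data the mean of subject means equals the simple per-drug mean:

```r
## Toy subset: lnCmax of subjects 1-2 from the table above (2 drugs x 2 replicates each)
mini <- data.frame(
  subj   = rep(1:2, each = 4),
  drug   = rep(c(1, 1, 2, 2), times = 2),
  lnCmax = c(7.46107, 7.43838, 7.39817, 7.40306,   # subject 1
             7.30047, 7.28688, 7.51589, 7.52132))  # subject 2
## step 1: average the replicates within each subject-by-drug cell
cell <- aggregate(lnCmax ~ drug + subj, data = mini, FUN = mean)
## step 2: average the subject means per drug
two.step <- tapply(cell$lnCmax, cell$drug, mean)
## with complete, balanced data this equals the simple per-drug mean
simple <- tapply(mini$lnCmax, mini$drug, mean)
print(all.equal(as.vector(two.step), as.vector(simple)))  # TRUE
```

With unbalanced data the two would no longer agree – which is exactly the case we still have to check.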


Here (‼) you could even ignore the entire data structure:

R <- log(subset(Cmax, drug == 1)$Cmax)
T <- log(subset(Cmax, drug == 2)$Cmax)
MeanR <- mean(R)
MeanT <- mean(T)
cat("R:", sprintf("%.6f", MeanR),
"\nT:", sprintf("%.6f", MeanT), "\n")

R: 7.328141
T: 7.391880


I’m not promoting SAS. :-D We have to find a way to get the treatment-means from the model Yung-jin is using in bear – which must agree with the simple / marginal / however-you-like-to-name-them means in a balanced case first. Then we have to check what’s happening to unbalanced datasets.

ElMaestro
★★★

Denmark,
2015-04-20 17:39
(3265 d 04:37 ago)

@ Helmut
Posting: # 14716
Views: 26,322
 

 keep it simple!

 
Hi Helmut,

❝ Statler & Waldorf-day today?


Every day is a Statler & Waldorf day :-D Not good for my blood pressure. Today's major trouble is bacterial DNA, though. Why do people publish sensational papers about having sequenced a new bacterial species when they are not publishing the genome? :confused:

❝ cnames <- c("subj", "drug", "seq", "prd", "Cmax", (...)

❝ cat("R:", sprintf("%.6f", MeanR),

❝ (...) "\nT:", sprintf("%.6f", MeanT), "\n")


Ohdearohgoodnessmelordhavemercy. Yuck. :ponder:

❝ I’m not promoting SAS. :-D


Don't worry. I know that for a long time already.

There is still something that really bothers me, and that is the log-likelihood difference of R versus WNL. With REML I think optimisation switches back and forth between the two sets of estimates. You fiddle with the covariance matrix and read the log-likelihood. Then you fiddle with the fixed effects and read the log-likelihood. Then you fiddle with the covariance matrix, and so forth. Repeat until some convergence criterion is met. ML would only be plain and simple covariance-matrix fiddling, I think (?). Thus I have a vague and completely unsubstantiated feeling that this could possibly be fixed by proper inputs, and that you'd then see agreement about the likelihoods and probably the treatment effects too. It is a guess, and unfortunately I do not know my way around R or mixed models to an extent where I can do it.

Pass or fail!
ElMaestro
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2015-04-20 18:35
(3265 d 03:40 ago)

@ ElMaestro
Posting: # 14717
Views: 26,264
 

 ML vs. REML

 
Hi ElMaestro,

❝ There is still something that really bothers me, and that is the log-likelihood difference of R versus WNL. With REML I think optimisation switches back and forth between the two sets of estimates. […]



Maybe, maybe not. PHX’ manual only tells me:

The linear mixed effects model is:

y = Xβ + Zγ + ε,
V = Variance(y) = ZGZᵀ + R.

Let θ be a vector consisting of the variance parameters in G and R. The full maximum likelihood procedure (ML) would simultaneously estimate both the fixed effects parameters β and the variance parameters θ by maximizing the likelihood of the observations y with respect to these parameters. In contrast, restricted maximum likelihood estimation (REML) maximizes a likelihood that is only a function of the variance parameters θ and the observations y, and not a function of the fixed effects parameters. Hence for models that do not contain any fixed effects, REML would be the same as ML.


❝ ML would only be plain and simple covariance matrix fiddling, I think (?).


Well, we do have fixed effects, right? At least in PHX/WNL, only REML is implemented. IIRC, REML is recommended by Patterson/Jones somewhere.
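For completeness, the restricted likelihood hinted at above can be sketched in textbook notation (up to an additive constant; this is my sketch, not from the PHX manual). With the fixed effects profiled out,

```latex
% GLS estimate of the fixed effects for given variance parameters theta:
\hat{\beta}(\theta) = \bigl(X^{\top}V^{-1}X\bigr)^{-1}X^{\top}V^{-1}y
% REML maximizes the restricted log-likelihood:
l_{R}(\theta) = -\tfrac{1}{2}\log\lvert V\rvert
               -\tfrac{1}{2}\log\bigl\lvert X^{\top}V^{-1}X\bigr\rvert
               -\tfrac{1}{2}\,\bigl(y-X\hat{\beta}\bigr)^{\top}V^{-1}\bigl(y-X\hat{\beta}\bigr)
```

The extra log|XᵀV⁻¹X| term is why ML and REML log-likelihoods are not directly comparable.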


Thread closed. Please continue over there.

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2015-04-20 01:44
(3265 d 20:32 ago)

@ yjlee168
Posting: # 14710
Views: 27,566
 

 lme() with NA in R

 
Dear Elmaestro and Helmut,

I'd like to correct what I said about lme() with NA: we can use lme(..., na.action=na.exclude) if there are NAs in the dataset. Looks like this option avoids the crash even when we do not omit the NAs first. Sounds great. Earlier I claimed:

❝ any NA must be removed (as the entire period) first before doing lme(); otherwise, it will crash. So there is no way to compare the difference.
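A base-R toy example with lm() shows what na.action=na.exclude buys over na.omit – lme() uses the same na.action mechanism. The data here are made up:

```r
## na.omit drops the NA row entirely; na.exclude also drops it for fitting,
## but pads residuals/fitted values back to the original length with NA.
d <- data.frame(y = c(1.2, 2.3, NA, 4.1), x = 1:4)
f.omit    <- lm(y ~ x, data = d, na.action = na.omit)
f.exclude <- lm(y ~ x, data = d, na.action = na.exclude)
length(residuals(f.omit))     # 3
length(residuals(f.exclude))  # 4 (the 3rd element is NA)
```

So with na.exclude the residuals stay aligned with the original rows, which is handy for diagnostics.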


All the best,
-- Yung-jin Lee
Helmut
★★★
avatar
Homepage
Vienna, Austria,
2015-04-18 15:30
(3267 d 06:46 ago)

@ yjlee168
Posting: # 14700
Views: 27,676
 

 Dataset

 
Dear Yung-jin,

❝ do we use the same dataset? :confused:


I’m using bear’s SingleRep_stat_demo.csv (to confirm my numbers, see above). I worked with the raw data and performed the log-transformation internally in full precision.
In the dataset lnCmax is given with two decimal figures. If I use those I get R 7.3261, T 7.3900, PE 0.0639.

Seems that there are also some rounding issues in part of the data:
subj drug seq prd Cmax lnCmax round(log(Cmax), 2)
1     1    2   3  1700   7.43   7.44
2     1    1   4  1461   7.28   7.29
3     1    2   3  1790   7.48   7.49
4     2    1   3  1619   7.38   7.39
4     1    1   4  1364   7.21   7.22
6     1    1   4  1746   7.46   7.47
8     1    1   4  1949   7.57   7.58
9     2    2   4  1769   7.47   7.48
10    2    1   3  1493   7.30   7.31
11    1    2   3  1117   7.01   7.02
12    2    1   3  1257   7.13   7.14
13    1    2   3  1245   7.12   7.13
13    2    2   4  1615   7.38   7.39


… leading to R 7.3289, T 7.3918, PE 0.0629.
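A quick spot-check of three of the flagged rows (Cmax values copied from the table above) reproduces the discrepancy between the lnCmax given in the dataset and the freshly rounded log:

```r
## subjects 1, 2, and 11: dataset's lnCmax vs. round(log(Cmax), 2)
Cmax  <- c(1700, 1461, 1117)
given <- c(7.43, 7.28, 7.01)   # lnCmax as listed in the dataset
data.frame(Cmax, given, recomputed = round(log(Cmax), 2))
```

Every recomputed value is one unit in the second decimal higher than the value stored in the file.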

yjlee168
★★★
avatar
Homepage
Kaohsiung, Taiwan,
2015-04-19 01:14
(3266 d 21:02 ago)

@ Helmut
Posting: # 14704
Views: 27,482
 

 Dataset and lme() in bear

 
Dear Helmut and Elmaestro,

Thank you both so much, especially during the weekend.

@Helmut - Agree. Some rounding errors may occur if one starts from the raw data. However, when doing lme(), I use lme(log(Cmax)~...) (i.e., the raw data) instead of lme(lnCmax~...). The PE still comes very close. That is strange, too.

❝ I’m using bear’s SingleRep_stat_demo.csv (to confirm my numbers see above). I worked with the raw-data and made the log-transformation internally in full precision.


@Elmaestro - Great. Got it.

❝ Drug diff. = 7.420013-7.356274 = about 0.06374.


Have a nice weekend.

All the best,
-- Yung-jin Lee

Nothing shows a lack of mathematical education more
than an overly precise calculation.    Carl Friedrich Gauß

The Bioequivalence and Bioavailability Forum is hosted by
BEBAC Ing. Helmut Schütz