Bug in bear? [🇷 for BE/BA]
Hi Mittyri,
Good news: I could reproduce your results in bear v2.6.4 (on R v3.1.3, 64 bit) and Phoenix/WinNonlin v6.4 (64 bit).
❝ Exploring the dataset I've found that the sequences aren't common RTRT/TRTR, but RTTR/TRRT. Is it important for calculations?
In the Phoenix templates, no. Personally, I always code sequences for clarity: in this case RTTR instead of 1 and TRRT instead of 2.
Bad news: Something seems to be strange in bear.
Phoenix reports for the log-transformed data:
Least squares means
Formulation_Cal  Estimate  StdError   Denom_DF   T_stat  P_value  Conf  T_crit  Lower_CI  Upper_CI
----------------------------------------------------------------------------------------------------
R                 7.32814  0.0398516        12  183.886   0.0000    90   1.782     7.257     7.399
T                 7.39188  0.0326557        12  226.358   0.0000    90   1.782     7.334     7.450
… with an estimated difference of 0.0637395 and SE 0.0475893.
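As a quick sanity check, the confidence limits in the table are simply estimate ± t × SE with t = t(0.95, 12):

```r
# Sanity check of Phoenix's table: 90% CI = estimate ± t(0.95, df) × SE
qt(0.95, 12)                                   # 1.782
7.32814 + c(-1, 1) * qt(0.95, 12) * 0.0398516  # 7.257 … 7.399 (row R)
7.39188 + c(-1, 1) * qt(0.95, 12) * 0.0326557  # 7.334 … 7.450 (row T)
```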
In bear:
MEAN-ref = 7.3261
MEAN-test = 9.5346
SE = 0.047587
Estimate(test-ref) = 0.06374
9.5346 – 7.3261 = 0.06374‽

If I calculate adjusted means manually …
R = (R_RTTR + R_TRRT)/2 = (7.35801 + 7.29827)/2 = 7.32814
T = (T_RTTR + T_TRRT)/2 = (7.35934 + 7.42442)/2 = 7.39188
… confirming Phoenix's results.
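If you want to verify that in R, here is a minimal sketch of the same arithmetic (not bear's internals), assuming the dataset posted below sits in a data frame d with the column names as given:

```r
# Minimal sketch of the arithmetic above (not bear's code).
# Assumes the dataset posted below is in a data frame 'd'.
d$logCmax <- log(d$Cmax)
cm  <- tapply(d$logCmax, list(d$sequence, d$treatment), mean)  # 2x2 cell means
adj <- colMeans(cm)                # unweighted average over the two sequences
round(adj, 5)                      # should give R 7.32814, T 7.39188
round(adj[["T"]] - adj[["R"]], 7)  # should give 0.0637395
```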
How the standard errors are calculated is beyond me. Manually (by √(s²/n)) I get 0.039174 (R), 0.032647 (T), and 0.048889 (T–R). I get the same if I send
BE Method C log
| Ratios Test=T
to descriptive statistics in PHX (see the R sketch below the dataset).

PS: If you want to give it a try in your preferred software, here is the dataset:
subject treatment sequence period Cmax
1 T TRRT 1 1633
1 R TRRT 2 1739
1 R TRRT 3 1700
1 T TRRT 4 1641
2 R RTTR 1 1481
2 T RTTR 2 1837
2 T RTTR 3 1847
2 R RTTR 4 1461
3 T TRRT 1 2073
3 R TRRT 2 1780
3 R TRRT 3 1790
3 T TRRT 4 2083
4 R RTTR 1 1374
4 T RTTR 2 1629
4 T RTTR 3 1619
4 R RTTR 4 1364
5 T TRRT 1 1385
5 R TRRT 2 1555
5 R TRRT 3 1545
5 T TRRT 4 1386
6 R RTTR 1 1756
6 T RTTR 2 1522
6 T RTTR 3 1512
6 R RTTR 4 1746
7 T TRRT 1 1643
7 R TRRT 2 1566
7 R TRRT 3 1576
7 T TRRT 4 1653
8 R RTTR 1 1939
8 T RTTR 2 1615
8 T RTTR 3 1625
8 R RTTR 4 1949
9 T TRRT 1 1759
9 R TRRT 2 1475
9 R TRRT 3 1485
9 T TRRT 4 1769
10 R RTTR 1 1388
10 T RTTR 2 1483
10 T RTTR 3 1493
10 R RTTR 4 1398
11 T TRRT 1 1682
11 R TRRT 2 1127
11 R TRRT 3 1117
11 T TRRT 4 1692
12 R RTTR 1 1542
12 T RTTR 2 1247
12 T RTTR 3 1257
12 R RTTR 4 1532
13 T TRRT 1 1605
13 R TRRT 2 1235
13 R TRRT 3 1245
13 T TRRT 4 1615
14 R RTTR 1 1598
14 T RTTR 2 1718
14 T RTTR 3 1728
14 R RTTR 4 1588
The dataset looks artificial to me – in most cases the two administrations of the same treatment differ by exactly ±10 between the respective periods…
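If you prefer to stay in R, a sketch along the following lines reproduces the comparison together with the √(s²/n) SEs mentioned above; the lme() specification and the file name are my assumptions, not bear's actual code:

```r
# Sketch only: model specification and file name are assumptions.
library(nlme)
d <- read.table("replicate.txt", header = TRUE)  # the dataset above (hypothetical file)
d$logCmax   <- log(d$Cmax)
d$subject   <- factor(d$subject)
d$sequence  <- factor(d$sequence)
d$period    <- factor(d$period)
d$treatment <- factor(d$treatment, levels = c("R", "T"))

m <- lme(logCmax ~ sequence + period + treatment,
         random = ~ 1 | subject, data = d)
summary(m)$tTable["treatmentT", ]  # estimate and SE of T - R

# 'Manual' SEs by sqrt(s^2/n), taken over the subject-level treatment means:
sm <- aggregate(logCmax ~ subject + treatment, data = d, mean)
with(sm, tapply(logCmax, treatment, function(x) sqrt(var(x) / length(x))))
                                   # should give ~0.039174 (R), 0.032647 (T)
dif <- sm$logCmax[sm$treatment == "T"] - sm$logCmax[sm$treatment == "R"]
sqrt(var(dif) / length(dif))       # should give ~0.048889 for T - R
```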
—
Dif-tor heh smusma 🖖🏼 Long live Ukraine!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
