Pakawadee
2010-05-03 13:48
Posting: # 5273

 F value for sequence effect [🇷 for BE/BA]

Response: lnCmax
          Df   Sum Sq  Mean Sq F value Pr(>F)
seq        1 0.000017 0.000017  0.0011 0.9743
prd        1 0.027294 0.027294  1.7187 0.2144
drug       1 0.025583 0.025583  1.6110 0.2284
seq:subj  12 0.255270 0.021273  1.3395 0.3103
Residuals 12 0.190565 0.015880


The above ANOVA table is an example from this webpage. I am confused about the F value for sequence. Why is it 0.0011 (= MSseq/MSresidual)? I think it should be 0.0008 (= MSseq/MSsubj). Since my knowledge of statistics is very limited, I am afraid I may have misunderstood some concept of the ANOVA for BE. Can anyone help explain?
Thanks
Pakawadee


Edit: Not in this webpage, but in the example (second output) of bear's homepage. BTW, testing for a sequence effect (or unequal carry-over, if you like) in 2×2 cross-over studies is futile anyhow (see this post). [Helmut]
AA
Ahmedabad,
2010-05-03 15:38
@ Pakawadee
Posting: # 5274

 F value for sequence effect

Dear Pakawadee,

You are absolutely right. The F value for sequence has been calculated as MSseq/MSresidual, using Type I sums of squares.

It will be calculated as MSseq/MSsubj only if it has been predefined in the protocol that "the sequence effect will be tested using subject within sequence as an error term".
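With the mean squares from the posted ANOVA table, both versions of the F statistic can be recomputed in a couple of lines of R (the numbers are simply copied from that table):

```r
MSseq  <- 0.000017   # Mean Sq for seq
MSsubj <- 0.021273   # Mean Sq for seq:subj (subject within sequence)
MSres  <- 0.015880   # Mean Sq for Residuals

MSseq / MSres    # ~0.0011, the F value printed in the table
MSseq / MSsubj   # ~0.0008, F with subject(seq) as the error term
```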


Edit: Full quote removed. Please delete anything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Ohlbe]
Ohlbe
France,
2010-05-03 16:16
@ Pakawadee
Posting: # 5275

 F value for sequence effect

Dear Pakawadee,

Have a look at this thread.

Regards
Ohlbe
Pakawadee
2010-05-04 05:48
@ Ohlbe
Posting: # 5277

 F value for sequence effect

Dear all,

Many thanks for your answers. ^^ Does it mean that I need to recalculate the F value for the sequence effect by myself when I run the R program for BE? Is there any way to directly get the right result for sequence by running the program?

regards
Pakawadee
ElMaestro
Denmark,
2010-05-04 10:48
@ Pakawadee
Posting: # 5278

 F value for sequence effect

Dear Pakawadee,

❝ ^^ Does it mean that I need to recalculate the F value for the sequence effect by myself when I run the R program for BE? Is there any way to directly get the right result for sequence by running the program?


R's anova default is to give you type I sums of squares, with all terms tested against the residual model variation. Therefore, if you use plain R, you will have to test the sequence effect against the proper between-subject term manually. Also, if your dataset is unbalanced, you will notice a slight discordance with The Power To Know's results, as they are by default created with type III sums of squares.
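A minimal sketch of that manual test (assuming a data frame TotalData with the factors named as in the ANOVA table above):

```r
fit <- lm(lnCmax ~ seq + subj:seq + prd + drug, data = TotalData)
tab <- anova(fit)                      # type I SS; everything vs. residual

# rebuild the sequence test with subject(seq) as the error term
Fseq <- tab["seq", "Mean Sq"] / tab["seq:subj", "Mean Sq"]
pseq <- pf(Fseq, tab["seq", "Df"], tab["seq:subj", "Df"], lower.tail = FALSE)
```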

Best regards
EM.

Pass or fail!
ElMaestro
Pakawadee
2010-05-04 14:12
@ ElMaestro
Posting: # 5280

 F value for sequence effect

Dear ElMaestro,

Thank you very much. I do understand better. :-D

Sincerely
Pakawadee
Helmut
Vienna, Austria,
2010-05-04 15:00
@ ElMaestro
Posting: # 5282
 

 F value; rounding

Dear ElMaestro, Pakawadee and bears...

I hope this clarifies the confusion. The example was calculated by bear v2.3.0 (not 'plain' R). I checked v2.4.1 on R 2.9.0 and v2.4.2 on R 2.10.1 - both use the correct error term, i.e., MS(seq) / MS(subj(seq)).

Just discovered a little problem: bear log-transforms the raw values and rounds them to three significant digits. That's nasty, because the results will not agree with other software that takes raw values as input and uses logs in full precision.
From the 2×2 example dataset, Cmax:
   subj drug seq prd Cmax lnCmax
1     1    1   2   2 1739 7.46
...
28   14    2   1   2 1718 7.45


bear (all versions?):
106.184  [97.550  - 115.582 ]
WinNonlin 5.3; 4-digit Cmax-values, logtransformation:
106.2319 [97.5839 - 115.6462]
WinNonlin 5.3; 3-digit lnCmax-values:
106.1837 [97.5479 - 115.5839]

IMHO it's pretty to have values in a nicely formatted table, but log-transformed values should be treated internally in full precision.

Dif-tor heh smusma 🖖🏼 Довге життя Україна!
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. 🚮
Science Quotes
d_labes
Berlin, Germany,
2010-05-04 15:35
@ Helmut
Posting: # 5283
 

 log rounding machine

Dear Helmut,

❝ IMHO it's pretty to have values in a nice formatted table, but log-transformed values should be treated internally in full precision.


full ACK :ok:.

Moreover, there is no need to have lnAUC (or logAUC, or whatever it is named) or other log-transformed PK metrics as separate values, because in lm() or lme() the models can be stated as

  mlm <- lm(log(AUC) ~ whatever+effects+you+need, data=your.data.frame)

with only the AUC values in your.data.frame.
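As a self-contained sketch (the data frame and its values here are made up for illustration; the factor names mirror those used elsewhere in the thread):

```r
# hypothetical minimal 2x2 crossover data; 4 subjects, 2 periods
your.data.frame <- data.frame(
  subj = factor(rep(1:4, each = 2)),
  seq  = factor(rep(c("RT", "TR"), each = 4)),
  prd  = factor(rep(1:2, times = 4)),
  drug = factor(c("R","T", "R","T", "T","R", "T","R")),
  AUC  = c(100, 105, 98, 110, 102, 95, 108, 101))

# log() is applied inside the formula -- no separate lnAUC column needed
mlm <- lm(log(AUC) ~ seq + subj:seq + prd + drug, data = your.data.frame)
anova(mlm)
```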

Regards,

Detlew
yjlee168
Kaohsiung, Taiwan,
2010-05-04 23:27
@ Helmut
Posting: # 5287
 

 Rounding or not to rounding

Dear Helmut and D. Labes,

Sorry about this. I didn't notice that this thread was about bear until I saw the footnote edited in by Helmut. Thank you all for all the messages here.

❝ Just discovered a little problem. bear log-transforms raw values and rounds them to three significant digits. That's nasty, [...]



Yes. Then I tested bear (v2.5.0, not released yet!) to compare two runs: one with Cmax manually pre-transformed to 3 digits, and one transformed internally within the model (lnCmax <- lm(log(Cmax) ~ seq + subj:seq + prd + drug, data=TotalData)) as suggested by D. Labes, using the 2x2x2 crossover demo dataset in bear (single-dose). I got the same results, as follows. I think both methods take the same values for calculation under R, not manually pre-transformed values; that's why no difference is observed.

--- pre-transformed manually ---
  Statistical analysis (ANOVA(lm))   
--------------------------------------------------------------------------
  Dependent Variable: lnCmax                                             

Type I SS
Analysis of Variance Table

Response: lnCmax
          Df   Sum Sq  Mean Sq F value Pr(>F)
seq        1 0.000690 0.000690  0.0388 0.8472
prd        1 0.018169 0.018169  1.0205 0.3323
drug       1 0.036238 0.036238  2.0354 0.1792
subj(seq) 12 0.283676 0.023640  1.3278 0.3155
Residuals 12 0.213641 0.017803               

Type III SS
Single term deletions

Model:
lnCmax ~ seq + subj:seq + prd + drug
          Df Sum of Sq     RSS     AIC F value  Pr(F)
<none>                 0.21364 -104.52               
prd        1  0.018169 0.23181 -104.23  1.0205 0.3323
drug       1  0.036238 0.24988 -102.13  2.0354 0.1792
subj(seq) 12  0.283676 0.49732 -104.86  1.3278 0.3155

Tests of Hypothesis for SUBJECT(SEQUENCE) as an error term

Error: subj
          Df  Sum Sq   Mean Sq F value Pr(>F)
prd:drug   1 0.00069 0.0006903  0.0292 0.8672
Residuals 12 0.28368 0.0236397               

Error: Within
          Df   Sum Sq  Mean Sq F value Pr(>F)
prd        1 0.018169 0.018169  1.0205 0.3323
drug       1 0.036238 0.036238  2.0354 0.1792
Residuals 12 0.213641 0.017803               

Intra_subj. CV = 100*sqrt(abs(exp(MSResidual)-1)) = 13.4025 %
Inter_subj. CV = 100*sqrt(abs(exp((MSSubject(seq)-MSResidual)/2)-1)) = 5.4059 %
    MSResidual = 0.01780339
MSSubject(seq) = 0.0236397
[...] **************** Classical (Shortest) 90% C.I. for lnCmax ****************

  Point_estimate CI90_lower CI90_upper
1        107.460     98.223    117.566
[...]


--- internally transformed method ---

  Statistical analysis (ANOVA(lm))   
--------------------------------------------------------------------------
  Dependent Variable: lnCmax                                             

Type I SS
Analysis of Variance Table

Response: log(Cmax)
          Df   Sum Sq  Mean Sq F value Pr(>F)
seq        1 0.000690 0.000690  0.0388 0.8472
prd        1 0.018169 0.018169  1.0205 0.3323
drug       1 0.036238 0.036238  2.0354 0.1792
subj(seq) 12 0.283676 0.023640  1.3278 0.3155
Residuals 12 0.213641 0.017803               

Type III SS
Single term deletions

Model:
log(Cmax) ~ seq + subj:seq + prd + drug
          Df Sum of Sq     RSS     AIC F value  Pr(F)
<none>                 0.21364 -104.52               
prd        1  0.018169 0.23181 -104.23  1.0205 0.3323
drug       1  0.036238 0.24988 -102.13  2.0354 0.1792
subj(seq) 12  0.283676 0.49732 -104.86  1.3278 0.3155

Tests of Hypothesis for SUBJECT(SEQUENCE) as an error term

Error: subj
          Df  Sum Sq   Mean Sq F value Pr(>F)
prd:drug   1 0.00069 0.0006903  0.0292 0.8672
Residuals 12 0.28368 0.0236397               

Error: Within
          Df   Sum Sq  Mean Sq F value Pr(>F)
prd        1 0.018169 0.018169  1.0205 0.3323
drug       1 0.036238 0.036238  2.0354 0.1792
Residuals 12 0.213641 0.017803               

Intra_subj. CV = 100*sqrt(abs(exp(MSResidual)-1)) = 13.4025 %
Inter_subj. CV = 100*sqrt(abs(exp((MSSubject(seq)-MSResidual)/2)-1)) = 5.4059 %
    MSResidual = 0.01780339
MSSubject(seq) = 0.0236397               
[...] **************** Classical (Shortest) 90% C.I. for lnCmax ****************

  Point_estimate CI90_lower CI90_upper
1        107.460     98.223    117.566
[...]

All the best,
-- Yung-jin Lee
bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan https://www.pkpd168.com/bear
Download link (updated) -> here
ElMaestro
Denmark,
2010-05-05 01:23
@ yjlee168
Posting: # 5289
 

 Rounding or not to rounding

Hi Bears,

Re. the anova tables, where's the type III SS Seq effect :-D - did you finally decide to let SAS go its own way?

EM.

yjlee168
Kaohsiung, Taiwan,
2010-05-05 03:17
@ ElMaestro
Posting: # 5290
 

 Rounding or not to rounding

Dear ElMaestro,

Thank you for reminding me. You helped a lot last year with Type III SS. It's still there... I extract the Type III SS seq effect part from the above post as follows.

❝ Re. the anova tables, where's the type III SS Seq effect :-D - did you finally decide to let SAS go its own ways?

[...] Type III SS
Single term deletions

Model:
log(Cmax) ~ seq + subj:seq + prd + drug
          Df Sum of Sq     RSS     AIC F value  Pr(F)
<none>                 0.21364 -104.52               
prd        1  0.018169 0.23181 -104.23  1.0205 0.3323
drug       1  0.036238 0.24988 -102.13  2.0354 0.1792
subj(seq) 12  0.283676 0.49732 -104.86  1.3278 0.3155

All the best,
-- Yung-jin Lee
ElMaestro
Denmark,
2010-05-05 14:15
@ yjlee168
Posting: # 5295
 

 Rounding or not to rounding

Dear bears,

❝ It's still there... I extract the part of Type III SS seq effect from the above post as follows. (...)


A good swash of saltwater coming over the rail from a northwestern gale in the mid-Atlantic, hitting one square in the face, surely impairs the vision. I wonder if I am just subject to the chronic effects of such exposure, as I am still unable to see what is "still there".

As for d_labes' comment below, I fully support it. For verification purposes I'd stick to one dataset forever, or at least make sure future versions always have the datasets from older versions built into them.
In addition, it would do no harm if a built-in dataset were taken from a textbook somewhere, so that everyone is reasonably assured of dealing with apples and apples rather than apples and pears.

Best regards, and again: what a fine piece of work the bear package is.
EM.

yjlee168
Kaohsiung, Taiwan,
2010-05-05 14:59
@ ElMaestro
Posting: # 5297
 

 Rounding or not to rounding

Dear Elmaestro,

❝ exposure, as I am still unable to see what is "still there".


The above outputs were obtained from the following code:
...
lnCmax_ss<- lm(log(Cmax_ss) ~ seq + subj:seq + prd + drug , data=Data)
...
cat("Type III SS\n")
lnCmax_ss_drop1 <- drop1(lnCmax_ss, test="F")
row.names(lnCmax_ss_drop1)[4]="subj(seq)"
show(lnCmax_ss_drop1)

Are you saying that these outputs are not "Type III SS"? :confused:

❝ In addition, it would not harm if a built-in dataset was taken from a textbook somewhere, so that everyone is reasonably assured to be dealing with apples and apples rather than apples and pears.


Thank you. We DID borrow some datasets from the textbook - Chow SC and Liu JP, Design and Analysis of Bioavailability and Bioequivalence Studies, 3rd ed., CRC Press, 2009 - for example when developing the outlier-detection algorithms, for the purpose of convenient validation or comparison, since WNL and other software provide neither similar functions nor datasets for this part. The Chow & Liu textbook does.


Edit: Added backslash. At posting (or preview) single backslashes are stripped. If your code contains backslashes, please add a second one. Sorry for the inconvenience. [Helmut]

All the best,
-- Yung-jin Lee
ElMaestro
Denmark,
2010-05-05 15:15
@ yjlee168
Posting: # 5298
 

 Rounding or not to rounding

Dear Yung-jin

❝ Are you saying that these output are not "Type III SS"? :confused:


Not at all what I tried to say. There is no line in the anova tables specifying a p-value for the type III sequence effect; the tables include just Subject, Period and Treatment. You can find a (which is not the same as the) P-value for seq by using drop1 on a model without Subject.
If you follow the true definition of type III SS, there will be no reduction of SS by inclusion of Seq once Subj is in the model for a 2,2,2-BE study. This is the R way. You may see the SS reduction for this effect given as something effectively zero, such as 1.23456e-7, reflecting just machine precision at the stopping criterion of the fitting Al Gore Rhythm.
On the other hand, one can convince oneself of the possible presence of a sequence effect manually, using the approach shown in the book by Chow & Liu. SAS therefore - if I recall right - spits out a positive SS reduction, which can only be obtained by fiddling as described above (I think I posted how to do it in R some time ago, I just cannot find it now). I am not implying the SAS way is better or more correct.
Bear is fine as it is.
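For what it's worth, plain R can produce a sequence test against the between-subject term (the Chow & Liu style test, though not SAS's type III SS value) without fiddling, by declaring the between-subject stratum in aov(). A sketch, with TotalData and the factor names assumed from the earlier posts:

```r
# seq lands in the 'Error: subj' stratum, i.e. it is tested against
# between-subject variation rather than the within-subject residual
fit <- aov(log(Cmax) ~ seq + prd + drug + Error(subj), data = TotalData)
summary(fit)
```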

Best regards
EM.

yjlee168
Kaohsiung, Taiwan,
2010-05-05 15:28
@ ElMaestro
Posting: # 5299
 

 Rounding or not to rounding

Dear Elmaestro,

Could you find your old post? I am confused right now...

❝ Not at all what I tried to say. There is no line in the anova tables [...]



❝ [...]

❝ posted how to do it in R some time ago, just cannot find it now). I am not implying the SAS way is better or more correct.


I know what you mean. Thanks.

All the best,
-- Yung-jin Lee
yjlee168
Kaohsiung, Taiwan,
2010-05-06 01:58
@ ElMaestro
Posting: # 5306
 

 Type III SS sequence effect -revised

Dear ElMaestro,

What about the following outputs?
[...] Type III SS
Single term deletions

Model:
log(Cmax) ~ seq + subj + prd + drug
       Df Sum of Sq     RSS     AIC
<none>              0.21364 -104.52
seq     0  0.000000 0.21364 -104.52
subj   12  0.283676 0.49732 -104.86
prd     1  0.018169 0.23181 -104.23
drug    1  0.036238 0.24988 -102.13


❝ Not at all what I tried to say. There is no line in the anova tables

❝ [...]


All the best,
-- Yung-jin Lee
ElMaestro
Denmark,
2010-05-06 09:27
@ yjlee168
Posting: # 5307
 

 Type III SS sequence effect -revised

Hi again,

❝ seq 0 0.000000 0.21364 -104.52


Since you ask, I'd say this line is not informative; Seq will always have a zero (well, effectively zero) SS with type III when you are using R and drop1. SAS gives you another value, for the reasons explained above. So, as you ask, I'd suggest either removing the line completely or preparing a type III SS for Seq the way SAS does. This is not because I endorse the SAS way of looking at things; rather, it's just that as it stands the anova table entry conveys no information.

Best regards
EM.

yjlee168
Kaohsiung, Taiwan,
2010-06-03 15:04
@ ElMaestro
Posting: # 5425
 

 SAS-like Type III SS sequence with bear?

Dear ElMaestro and all,

I made some code modifications and got output like the following.

Statistical analysis (ANOVA(lm))   
-----------------------------------------------------------
   Dependent Variable: log(Cmax)                           

Analysis of Variance Table

Response: log(Cmax)
          Df   Sum Sq  Mean Sq F value Pr(>F)
prd        1 0.027294 0.027294  1.7187 0.2144
drug       1 0.025583 0.025583  1.6110 0.2284
subj(seq) 12 0.255270 0.021272  1.3395 0.3103
Residuals 12 0.190565 0.015880               

Analysis of Variance Table

Response: log(Cmax)
           DF    Type I SS  Mean Square  F Value  Pr > F
prd         1     0.027294     0.027294   1.7187 0.21439
drug        1     0.025583     0.025583   1.6110 0.22842
subj(seq)  12     0.255270     0.021272   1.3395 0.31028

Analysis of Variance Table

Response: log(Cmax)
           DF  Type III SS  Mean Square  F Value  Pr > F
prd         1     0.027294     0.027294   1.7187 0.21439
drug        1     0.025583     0.025583   1.6110 0.22842
subj(seq)  12     0.255270     0.021272   1.3395 0.31028
-----------------------------------------------------------

Tests of Hypothesis using the Type I MS for
SUBJECT(SEQUENCE) as an error term
            Df  Sum Sq   Mean Sq F value Pr(>F)
seq          1 0.00002 0.0000172   9e-04 0.9764
Residuals   26 0.49871 0.0191813               

Tests of Hypothesis using the Type III MS for
SUBJECT(SEQUENCE) as an error term
            Df  Sum Sq   Mean Sq F value Pr(>F)
seq          1 0.00002 0.0000172   9e-04 0.9764
Residuals   26 0.49871 0.0191813


❝ ❝ seq 0 0.000000 0.21364 -104.52


❝ Since you ask I'd say this line is not informative; Seq will always have a

❝ zero (well, effectively zero) SS with type III when you are using R and

❝ drop1. SAS gives you another value for reasons explained above. [...]



What do you think of this revision? Thanks.

All the best,
-- Yung-jin Lee
d_labes
Berlin, Germany,
2010-05-05 13:40
@ yjlee168
Posting: # 5293
 

 New dataset?

Dear Yung-jin,

If I look at your ANOVA results and the CIs, I guess your single-dose 2x2 cross-over dataset has changed?
Or is the difference solely due to rounding? :ponder:

IMHO it would be a very good idea to have the same demo datasets across the various versions. Given that, one can make a short test of the reliability of bear's results, and we would always be talking about the same numbers regardless of which bear version is currently installed (sometimes called backward compatibility :-D).

It would also be a very good idea to have the same datasets regardless of whether we start from NCA or later with the statistical evaluation, i.e. the concentration-time values of a demo dataset should correspond to the PK metrics used later.

BTW: I question your results! A full match of the results (with logs in full precision vs. logs rounded to 3 digits after the decimal separator) in the ANOVAs and the CIs, up to the last printed digit, has a probability of very nearly zero.

Regards,

Detlew
yjlee168
Kaohsiung, Taiwan,
2010-05-05 14:31
@ d_labes
Posting: # 5296
 

 New dataset?

Dear D. Labes,

Great and nice suggestions. Since bear has been developed, I have only changed the parallel raw dataset that is used when running the demo from "NCA -> Statistical analysis". It contains seq, per, trt, time, and drug conc.. However, in bear we also have independent demo datasets for NCA or statistical analysis ONLY. These datasets are mostly artificial and have nothing to do with the related raw dataset (the demo for "NCA -> Statistical analysis"), such as the 2x2x2 crossover. Helmut was confused by this before (see this post). I am very sorry about this. But I will keep in mind not to change these datasets later.

❝ IMHO it would be a very good idea to have the same demo datasets across the various versions.


❝ It would also a very good idea to have the same datasets regardless if we start from NCA or later with statistical evaluation, i.e. the concentration-time values of a demo dataset should correspond to the PK metrics used later.


This suggestion will definitely conflict with the previous one, because I would need to change some of the datasets. However, it's really a very good idea.

❝ BTW: I question your results! Full match of the results with log to full precision or with log rounded to 3 digits (after dec. separator), respectively, in the ANOVA's and the CI's up to the last printed digits has a probability of very near to zero.


Yes, it is possible, but most of the time (I think) it is unlikely. The transformations from the original data, such as Cmax and AUCs, are done under R, not manually. That's why it makes no difference: both transformations are done by the same machine under R. It could be different when comparing between different machines, such as a 32-bit vs. a 64-bit machine, or between different OSes.

All the best,
-- Yung-jin Lee
d_labes
Berlin, Germany,
2010-05-05 18:00
@ yjlee168
Posting: # 5301
 

 log to 3 decimals guess work

Dear Yung-jin,

❝ ❝ BTW: I question your results! Full match of the results with log to full precision or with log rounded to 3 digits (after dec. separator), respectively, in the ANOVA's and the CI's up to the last printed digits has a probability of very near to zero.


❝ Yes, it is possible, but most of the time (I think) it is unlikely. The transformations from the original data, such as Cmax and AUCs, are done under R, not manually. That's why it makes no difference: both transformations are done by the same machine under R. It could be different when comparing between different machines, such as a 32-bit vs. a 64-bit machine, or between different OSes.


Totally confused :confused:.

You have described in your original post that you compared

❝ ... the runs between the pre-transformed manually Cmax to 3 digits and internally transformed with the model
❝ (lnCmax <- lm(log(Cmax) ~ seq + subj:seq + prd + drug, data=TotalData))

(emphasis by me)

I guess that you compared the logs calculated to full precision
TotalData$lnCmax <- log(TotalData$Cmax)
lm(lnCmax ~ seq + subj:seq + prd + drug, data=TotalData)

versus my suggestion
lm(log(Cmax) ~ seq + subj:seq + prd + drug, data=TotalData).

To address Helmut's concern, change
TotalData$lnCmax <- round(log(TotalData$Cmax),3)
or even more drastically round to 2 decimals.
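A sketch of the full comparison described above (TotalData is the assumed data frame from the earlier posts; only the rounding line differs between the two fits):

```r
# full-precision logs
TotalData$lnCmax  <- log(TotalData$Cmax)
# logs rounded to 3 decimals -- Helmut's concern
TotalData$lnCmax3 <- round(log(TotalData$Cmax), 3)

m.full  <- lm(lnCmax  ~ seq + subj:seq + prd + drug, data = TotalData)
m.round <- lm(lnCmax3 ~ seq + subj:seq + prd + drug, data = TotalData)

anova(m.full)
anova(m.round)   # mean squares (and hence the CI) will differ slightly
```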

Regards,

Detlew
yjlee168
Kaohsiung, Taiwan,
2010-05-05 22:13
@ d_labes
Posting: # 5302
 

 log to 3 decimals guess work

Dear D, Labes,

Sorry to confuse you again. :-D Actually it is not rounding, but direct transformation in bear.

❝ I guess that you compared the logs calculated to full precision

TotalData$lnCmax <- log(TotalData$Cmax)

❝ lm(lnCmax ~ seq + subj:seq + prd + drug, data=TotalData)

❝ versus my suggestion

lm(log(Cmax) ~ seq + subj:seq + prd + drug, data=TotalData).


Yes, this is exactly what we do in bear, with the code as follows:
TotalData <- data.frame(subj = as.factor(Total$subj),
                        drug = as.factor(Total$drug),
                        Cmax = Total$Cmax,
                        AUC0t = Total$AUC0t,
                        AUC0INF = Total$AUC0INF,
                        lnCmax = log(Total$Cmax),
                        lnAUC0t = log(Total$AUC0t),
                        lnAUC0INF = log(Total$AUC0INF))

❝ To get Helmut's concern change

TotalData$lnCmax <- round(log(TotalData$Cmax),3)

❝ or even more drastically round to 2 decimals.


No! In bear, only "T/R ratio (%)" and "Power (%)" in "Sample size estimation" use the round() function right now. Later (v2.5.0), in the Welch t test, we will also use round() to present the point estimate and 90% CIs.

All the best,
-- Yung-jin Lee
Helmut
Vienna, Austria,
2010-05-05 22:29
@ yjlee168
Posting: # 5303
 

 rounding and presentation of data

Dear bears and D. Labes,

I have to cut in. ;-)
I would not round at all - and would simply delete the columns of log-transformed values from the output. It's quite unusual to present anything than data in raw-scale. From the headings of descriptive and comparative statistics it's clear enough, that the analysis was done either in raw-scale or log-scale.
When it comes to presenting the PE and CI (in percent), I would use round(x, digits = 2), because this format is exactly what FDA and EMA want to see. :pirate:

yjlee168
Kaohsiung, Taiwan,
2010-05-05 23:05
@ Helmut
Posting: # 5304
 

 rounding and presentation of data

Dear Helmut and D. Labes,

I think I missed one critical point here. In bear, we also use the function formatC(log(Cmaxi), format="f", digits=2 or 3) to present log-transformed values, where i is the subject ID. It is just for the purpose of a pretty table, as you said in a previous post. But the function formatC(.) does not do any real rounding at all.

❝ I would not round at all - and would simply delete the columns of log-transformed values from the output. [...]


O.k.. Thanks.

❝ When it comes to presenting the PE and CI (in percent), I would use round(x, digits = 2), because this format is exactly what FDA and EMA want to see. :pirate:


I see. The reason to keep 3 digits for the 90% CI is that in some countries the regulatory agency asks for it (they like to see more). Thus, rounding or not rounding is up to the user's choice. The output obtained from bear just serves as part of the attachments or appendix in the whole package of a BE final report. Thanks for your message.

All the best,
-- Yung-jin Lee
martin
Austria,
2010-05-06 09:53
@ yjlee168
Posting: # 5308
 

 rounding and presentation of data in R

dear all!

you may find the following option of interest; it is frequently used in R (e.g. the default of print.lm and print.glm):

digits = max(3, getOption("digits") - 3)
format(x, digits=digits)
signif(x, digits=digits)


best regards

martin
d_labes
Berlin, Germany,
2010-05-06 12:12
@ Helmut
Posting: # 5309
 

 Canadian logs

Dear Helmut,

❝ I would not round at all -


Full ACK.

❝ ... It's quite unusual to present anything than data in raw-scale.


But ... see Canada-A, Chapter 8, SAMPLE ANALYSIS FOR A COMPARATIVE BIOAVAILABILITY STUDY, e.g. Table 8.5. Note also logs with 2 decimals! :cool:
And this has survived into the new DRAFT.

Sincerely yours
The Guideline believer :pirate:
ElMaestro
Denmark,
2010-05-06 12:21
@ d_labes
Posting: # 5310
 

 Canadian logs

Hi all,

❝ And this has survived into the new DRAFT.

No big deal. There are much worse problems with improper handling of Logs out there.

Best regards
EM.

Helmut
Vienna, Austria,
2010-05-06 16:57
@ d_labes
Posting: # 5313
 

 Canadian logs

Dear D. Labes!

❝ And this has survived into the new DRAFT.


Wow!
Table A2-A: Subject Number 010**, ID I: the subject did not appear for either period.
Yet in the following tables, values are given for this subject. Spooky!

Table A2-I: Variance Estimates for ln(AUCT)
Subject(Seq) 0.2648
Residual 0.0729

Hhm. :confused:
I get 0.265081 and 0.0716621 for AUCT (transformation in full precision), and 0.264352 and 0.073161 for the 2-decimal transformed values. Strange.

Note also the simplified formula for CV in Table A2-I (also a survivor from 1992)...

d_labes
Berlin, Germany,
2010-05-06 17:38
@ Helmut
Posting: # 5314
 

 Canadian lost

Dear Helmut!

❝ Table A2-A: Subject Number 010** ID I: Subject did not appear for either period.

❝ In the following tables values are given for this subject. Spooky!


There is another one: Subject Number 009, ID T, who according to Table A2-A completed both periods but then abruptly vanished.

Perhaps kidnapped by pirates under the command of a well-known old sailor :-D.

Regards,

Detlew