Pakawadee ☆ 2010-05-03 13:48 (5474 d 03:41 ago) Posting: # 5273 Views: 26,173 |
|
Response: lnCmax The ANOVA table above is an example from this webpage. I am confused about the F value of the sequence effect. Why is it 0.0011 (= MSseq/MSresidual)? I think it should be 0.0008 (= MSseq/MSsubj). Because my knowledge of statistics is very limited, I am afraid I may have misunderstood some concept of the ANOVA for BE. Can anyone help to explain? Thanks Pakawadee Edit: Not in this webpage, but in the example (second output) of bear's homepage. BTW, testing for a sequence effect (or unequal carry-over, if you like) in 2×2 cross-over studies is futile anyhow (see this post). [Helmut] |
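For what it's worth, the two candidate F statistics can be computed directly from the mean squares of the ANOVA table. A minimal sketch in R, with purely hypothetical mean squares (illustrative numbers, not the values behind the 0.0011/0.0008 quoted above):

```r
# Hypothetical mean squares from a 2x2 crossover ANOVA of lnCmax
# (illustrative numbers only).
MS_seq      <- 0.0005  # sequence
MS_subj     <- 0.6250  # subject(sequence): the between-subject error term
MS_residual <- 0.4545  # within-subject residual

F_vs_residual <- MS_seq / MS_residual  # what plain R's anova() reports
F_vs_subject  <- MS_seq / MS_subj      # the appropriate test in a 2x2 design

c(F_vs_residual = F_vs_residual, F_vs_subject = F_vs_subject)
```

Because the between-subject mean square is usually much larger than the residual, the two ratios differ noticeably, which is exactly the discrepancy asked about here.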
AA ☆ Ahmedabad, 2010-05-03 15:38 (5474 d 01:51 ago) @ Pakawadee Posting: # 5274 Views: 24,558 |
|
Dear Pakawadee, You are absolutely right. The F value of sequence has been calculated as MSseq/MSresidual, using Type I SS. It will be calculated as MSseq/MSsubj only if it has been predefined in the protocol that "the sequence effect will be tested using subject within sequence as the error term". Edit: Full quote removed. Please delete anything from the text of the original poster which is not necessary in understanding your answer; see also this post! [Ohlbe] |
Ohlbe ★★★ France, 2010-05-03 16:16 (5474 d 01:12 ago) @ Pakawadee Posting: # 5275 Views: 24,352 |
|
Pakawadee ☆ 2010-05-04 05:48 (5473 d 11:41 ago) @ Ohlbe Posting: # 5277 Views: 24,144 |
|
Dear all, Many thanks for your answers. ^^ Does it mean that I need to recalculate the F value of the sequence effect myself when I run the R program for BE? Is there any way to get the correct result for sequence directly from the program? regards Pakawadee |
ElMaestro ★★★ Denmark, 2010-05-04 10:48 (5473 d 06:41 ago) @ Pakawadee Posting: # 5278 Views: 24,352 |
|
Dear Pakawadee, ❝ ^^ Does it mean that I need to recalculate F-value of sequence effect by myself, when I run R program for BE? Does it have any way to directly get a right result for sequence by running the program? R's anova default is to give you Type I sums of squares, with all terms tested against the residual model variation. Therefore, if you use plain R, you will have to test the sequence effect against the proper between-subject term manually. Also, if your dataset is unbalanced, you will notice a slight discordance with The Power To Know's (i.e. SAS's) results, as they by default are created with Type III sums of squares. Best regards EM. — Pass or fail! ElMaestro |
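The manual test ElMaestro describes might look like the following sketch (toy 2×2 data; the column names subj, seq, prd, drug and lnCmax mirror bear's but are assumptions here):

```r
# Toy balanced 2x2 crossover data (illustrative only).
set.seed(1)
d <- expand.grid(subj = factor(1:12), prd = factor(1:2))
d$seq    <- factor(ifelse(as.integer(d$subj) <= 6, "RT", "TR"))
d$drug   <- factor(ifelse((d$seq == "RT") == (d$prd == "1"), "R", "T"))
d$lnCmax <- 5 + rnorm(24, sd = 0.2)

# Plain lm(): Type I SS, every term -- including seq -- tested against the residual.
m   <- lm(lnCmax ~ seq + subj:seq + prd + drug, data = d)
tab <- anova(m)

# Manual fix: test seq against MS(subject within sequence) instead.
subj_row <- grep("subj", rownames(tab))  # the subj-within-seq line
F_seq <- tab["seq", "Mean Sq"] / tab[subj_row, "Mean Sq"]
p_seq <- pf(F_seq, tab["seq", "Df"], tab[subj_row, "Df"], lower.tail = FALSE)

# Alternative: let aov() define the error strata; seq is then tested
# within the between-subject (Error: subj) stratum automatically.
summary(aov(lnCmax ~ seq + prd + drug + Error(subj), data = d))
```

The aov()/Error() route is the more idiomatic R way to get the correct denominator without recomputing the ratio by hand.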
Pakawadee ☆ 2010-05-04 14:12 (5473 d 03:17 ago) @ ElMaestro Posting: # 5280 Views: 24,281 |
|
Dear ElMaestro, Thank you very much. I understand it better now. Sincerely Pakawadee |
Helmut ★★★ Vienna, Austria, 2010-05-04 15:00 (5473 d 02:29 ago) @ ElMaestro Posting: # 5282 Views: 24,398 |
|
Dear ElMaestro, Pakawadee and bears... I hope to clarify the confusion. The example was calculated by bear v2.3.0 (not 'plain' R). I checked v2.4.1 on R 2.9.0 and v2.4.2 on R 2.10.1 - both use the correct error term, i.e., MS(seq) / MS(subj(seq)). I just discovered a little problem: bear log-transforms raw values and rounds them to three significant digits. That's nasty, because the results will not agree with other software that takes raw values as input and uses logs in full precision. From the 2×2 example dataset, Cmax:
subj drug seq prd Cmax lnCmax

bear (all versions?):                                     106.184  [97.550  - 115.582 ]
WinNonlin 5.3; 4-digit Cmax values, log-transformation:   106.2319 [97.5839 - 115.6462]
WinNonlin 5.3; 3-digit lnCmax values:                     106.1837 [97.5479 - 115.5839]

IMHO it's pretty to have values in a nicely formatted table, but log-transformed values should be treated internally in full precision. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
d_labes ★★★ Berlin, Germany, 2010-05-04 15:35 (5473 d 01:54 ago) @ Helmut Posting: # 5283 Views: 24,301 |
|
Dear Helmut, ❝ IMHO it's pretty to have values in a nice formatted table, but log-transformed values should be treated internally in full precision. full ACK Moreover, there is no need to have lnAUC (or logAUC, or however it is named) or other log-transformed PK metrics as separate values, because in lm() or lme() the model can be stated as mlm <- lm(log(AUC) ~ whatever+effects+you+need, data=your.data.frame) with only the AUC values in your.data.frame. — Regards, Detlew |
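A quick sanity check of this point, with toy data (names hypothetical): transforming inside the formula is identical to fitting a precomputed full-precision log column.

```r
set.seed(2)
your.data.frame <- data.frame(AUC = exp(rnorm(12, mean = 5, sd = 0.3)),
                              trt = gl(2, 6, labels = c("R", "T")))

m1 <- lm(log(AUC) ~ trt, data = your.data.frame)   # transform in the formula
your.data.frame$lnAUC <- log(your.data.frame$AUC)  # full-precision column
m2 <- lm(lnAUC ~ trt, data = your.data.frame)

all.equal(coef(m1), coef(m2))  # TRUE: the fits are identical
```

The equivalence only breaks if the precomputed column has been rounded, which is precisely Helmut's concern.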
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-05-04 23:27 (5472 d 18:02 ago) @ Helmut Posting: # 5287 Views: 24,723 |
|
Dear Helmut and D. Labes, Sorry about this. I didn't notice that this thread was about bear until I saw the footnote edited in by Helmut. Thank you all for all the messages here. ❝ Just discovered a little problem. bear log-transforms raw values and rounds them to three significant digits. That's nasty, [...] Yes. I then tested bear (v2.5.0, not released yet!) to compare the runs between the pre-transformed and the internally transformed methods: - Pre-transformed Statistical analysis (ANOVA(lm)) --- internally transformed method --- Statistical analysis (ANOVA(lm)) — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
ElMaestro ★★★ Denmark, 2010-05-05 01:23 (5472 d 16:05 ago) @ yjlee168 Posting: # 5289 Views: 24,216 |
|
Hi Bears, Re. the anova tables: where's the Type III SS Seq effect? EM. — Pass or fail! ElMaestro |
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-05-05 03:17 (5472 d 14:11 ago) @ ElMaestro Posting: # 5290 Views: 24,314 |
|
Dear ElMaestro, Thank you for reminding me. You helped a lot last year with Type III SS. It's still there... I have extracted the part with the Type III SS seq effect from the above post as follows. ❝ Re. the anova tables, where's the type III SS Seq effect [...]
Type III SS — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
ElMaestro ★★★ Denmark, 2010-05-05 14:15 (5472 d 03:13 ago) @ yjlee168 Posting: # 5295 Views: 24,253 |
|
Dear bears, ❝ It's still there... I extract the part of Type III SS seq effect from the above post as follows. (...) A good swash of saltwater coming over the rail from a northwestern gale in the mid-Atlantic, hitting one square in the face, surely impairs the vision. I wonder if I am just subject to the chronic effects of such exposure, as I am still unable to see what is "still there". As for d_labes' comment below, I fully support it. For verification purposes I'd stick to one dataset forever, or at least make sure future versions always have the datasets from older versions built into them. In addition, it would not hurt if a built-in dataset were taken from a textbook somewhere, so that everyone is reasonably assured of dealing with apples and apples rather than apples and pears. Best regards, and again, what a fine piece of work the bear package is. EM. — Pass or fail! ElMaestro |
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-05-05 14:59 (5472 d 02:30 ago) @ ElMaestro Posting: # 5297 Views: 24,278 |
|
Dear ElMaestro, ❝ exposure, as I am still unable to see what is "still there". The above outputs were obtained from the following code: ... Are you saying that these outputs are not "Type III SS"? ❝ In addition, it would not harm if a built-in dataset was taken from a textbook somewhere, so that everyone is reasonably assured to be dealing with apples and apples rather than apples and pears. Thank you. We DID borrow some datasets from the textbook - Chow SC and Liu JP, Design and Analysis of Bioavailability and Bioequivalence Studies, 3rd ed., CRC Press, 2009 - for example when developing the outlier-detection algorithms, for the purpose of convenient validation and comparison, since WNL and other software do not provide similar functions or datasets; Chow & Liu's textbook, however, does. Edit: Added backslash. At posting (or preview) single backslashes are stripped. If your code contains backslashes, please add a second one. Sorry for the inconvenience. [Helmut] — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
ElMaestro ★★★ Denmark, 2010-05-05 15:15 (5472 d 02:13 ago) @ yjlee168 Posting: # 5298 Views: 24,391 |
|
Dear Yung-jin ❝ Are you saying that these output are not "Type III SS"? Not at all what I tried to say. There is no line in the anova tables specifying a p-value for the Type III sequence effect; the tables include just Subject, Period and Treatment. You can find a (which is not the same as the) p-value for seq by using drop1 on a model without Subject. If you follow the true definition of Type III SS, there will be no reduction of SS by inclusion of Seq once Subj is in the model for a 2,2,2-BE study. This is the R way. You may see the SS reduction for this effect reported as something effectively zero, such as 1.23456e-7, reflecting just machine precision at the stopping criterion of the fitting Al Gore Rhythm. On the other hand, one can convince oneself of the possible presence of a sequence effect manually, using the approach shown in the book by Chow & Liu. SAS therefore - if I recall right - spits out a positive SS reduction, which in R can only be obtained by fiddling as described above (I think I posted how to do it in R some time ago; I just cannot find it now). I am not implying the SAS way is better or more correct. bear is fine as it is. Best regards EM. — Pass or fail! ElMaestro |
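A sketch of the aliasing ElMaestro describes (toy 2×2 data; column names hypothetical): once subject-within-sequence is in the model, removing seq changes nothing, whereas drop1 on a model without Subject does yield a (different) p-value for seq.

```r
set.seed(3)
d <- expand.grid(subj = factor(1:12), prd = factor(1:2))
d$seq    <- factor(ifelse(as.integer(d$subj) <= 6, "RT", "TR"))
d$drug   <- factor(ifelse((d$seq == "RT") == (d$prd == "1"), "R", "T"))
d$lnCmax <- 5 + rnorm(24, sd = 0.2)

# With subj:seq in the model, seq is aliased with the subject dummies:
# removing it leaves the residual SS unchanged (SS reduction effectively zero).
full  <- lm(lnCmax ~ seq + subj:seq + prd + drug, data = d)
noseq <- lm(lnCmax ~ subj:seq + prd + drug, data = d)
all.equal(deviance(full), deviance(noseq))  # identical fits

# drop1 on a model *without* Subject gives a line for seq, but it is tested
# against the pooled residual -- not the Chow & Liu between-subject test.
drop1(lm(lnCmax ~ seq + prd + drug, data = d), test = "F")
```

This is why the "R way" shows an effectively zero Type III SS for seq, while SAS, which constructs the test differently, reports a positive SS reduction.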
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-05-05 15:28 (5472 d 02:00 ago) @ ElMaestro Posting: # 5299 Views: 24,248 |
|
Dear ElMaestro, Could you have a look at your old post? I am confused right now... ❝ Not at all what I tried to say. There is no line in the anova tables [...] ❝ [...] ❝ posted how to do it in R some time ago, just cannot find it now). I am not implying the SAS way is better or more correct. I know what you mean. Thanks. — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-05-06 01:58 (5471 d 15:30 ago) @ ElMaestro Posting: # 5306 Views: 24,553 |
|
Dear ElMaestro, What about the following outputs? [...]
Type III SS ❝ Not at all what I tried to say. There is no line in the anova tables ❝ [...] — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
ElMaestro ★★★ Denmark, 2010-05-06 09:27 (5471 d 08:01 ago) @ yjlee168 Posting: # 5307 Views: 24,116 |
|
Hi again, ❝ seq 0 0.000000 0.21364 -104.52 Since you ask, I'd say this line is not informative; Seq will always have a zero (well, effectively zero) SS with Type III when you are using R and drop1. SAS gives you another value for the reasons explained above. So, as you ask, I'd suggest either removing the line completely or preparing a Type III SS for Seq the way SAS does. This is not because I endorse the SAS way of looking at things; rather, it's just that, as it stands, this anova table entry conveys no information. Best regards EM. — Pass or fail! ElMaestro |
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-06-03 15:04 (5443 d 02:25 ago) @ ElMaestro Posting: # 5425 Views: 24,074 |
|
Dear ElMaestro and all, I made some code modifications and got output like the following. Statistical analysis (ANOVA(lm)) ❝ ❝ seq 0 0.000000 0.21364 -104.52 ❝ ❝ Since you ask I'd say this line is not informative; Seq will always have a ❝ zero (well, effectively zero) SS with type III when you are using R and ❝ drop1. SAS gives you another value for reasons explained above. [...] What do you think of this revision? Thanks. — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
d_labes ★★★ Berlin, Germany, 2010-05-05 13:40 (5472 d 03:48 ago) @ yjlee168 Posting: # 5293 Views: 24,209 |
|
Dear Yung-jin, if I look at your ANOVA results and the CIs, I guess your single-dose 2×2 cross-over dataset has changed? Or is the difference solely due to rounding? IMHO it would be a very good idea to have the same demo datasets across the various versions. Given that, one can make a quick test of the reliability of bear's results, and we would always be talking about the same numbers regardless of which bear version is currently installed (sometimes called backward compatibility). It would also be a very good idea to have the same datasets regardless of whether we start from NCA or later with the statistical evaluation, i.e., the concentration-time values of a demo dataset should correspond to the PK metrics used later. BTW: I question your results! A full match of the results with logs to full precision or with logs rounded to 3 digits (after the decimal separator), respectively, in the ANOVAs and the CIs up to the last printed digit has a probability of very near zero. — Regards, Detlew |
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-05-05 14:31 (5472 d 02:58 ago) @ d_labes Posting: # 5296 Views: 24,124 |
|
Dear D. Labes, Great and nice suggestions. Since bear was first developed, I have only changed the parallel raw dataset used when running the demo from "NCA -> Statistical analysis", which contains seq, per, trt, time, and drug conc. However, in bear we also have independent demo datasets for NCA or statistical analysis ONLY. These datasets are mostly artificial and have nothing to do with the related raw dataset (the demo for "NCA -> Statistical Analysis"), such as the 2×2×2 crossover. Helmut was confused by this before (see this post). I am very sorry about this, but I will keep in mind not to change these datasets later. ❝ IMHO it would be a very good idea to have the same demo datasets across the various versions. ❝ It would also a very good idea to have the same datasets regardless if we start from NCA or later with statistical evaluation, i.e. the concentration-time values of a demo dataset should correspond to the PK metrics used later. This suggestion will definitely conflict with the previous one, because I would need to change some of the datasets. However, it's really a very good idea. ❝ BTW: I question your results! Full match of the results with log to full precision or with log rounded to 3 digits (after dec. separator), respectively, in the ANOVA's and the CI's up to the last printed digits has a probability of very near to zero. Yes, it is possible, but most of the time (I think) unlikely. The transformations of the original data, such as Cmax and AUCs, are done in R, not manually. That's why it makes no difference: both transformations are done by the same machine under R. It could differ between machines, such as a 32-bit vs. a 64-bit machine, or between different OSes. — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
d_labes ★★★ Berlin, Germany, 2010-05-05 18:00 (5471 d 23:29 ago) @ yjlee168 Posting: # 5301 Views: 24,313 |
|
Dear Yung-jin, ❝ ❝ BTW: I question your results! Full match of the results with log to full precision or with log rounded to 3 digits (after dec. separator), respectively, in the ANOVA's and the CI's up to the last printed digits has a probability of very near to zero. ❝ ❝ Yes, it is possible, but most of time (I think) are unlikely. The transformed from original data, such as Cmax and AUCs, are done under R, not manually. That's why it makes no difference, because both transformations are done by the same machine under R. It could be different when comparing between different machines, such as a 32-bit vs. a 64-bit machine, or between different OS. Totally confused. You described in your original post that you compared ❝ ... the runs between the pre-transformed ❝ ❝ (lnCmax<- lm(log(Cmax) ~ seq + subj:seq + prd + drug , data=TotalData)) I guess that you compared the logs calculated to full precision TotalData$lnCmax <- log(TotalData$Cmax) versus my suggestion lm(log(Cmax) ~ seq + subj:seq + prd + drug, data=TotalData). To reproduce Helmut's concern, change to TotalData$lnCmax <- round(log(TotalData$Cmax),3) or, even more drastically, round to 2 decimals. — Regards, Detlew |
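d_labes' proposed check might be sketched like this (toy data; column names hypothetical), comparing full-precision logs against logs rounded to 3 and 2 decimals before the fit:

```r
set.seed(4)
TotalData <- data.frame(Cmax = exp(rnorm(24, mean = 5, sd = 0.25)),
                        drug = gl(2, 12, labels = c("R", "T")))

# T-minus-R effect on the log scale for a given response vector.
eff <- function(y) coef(lm(y ~ drug, data = TotalData))[["drugT"]]

pe_full <- eff(log(TotalData$Cmax))             # full precision
pe_r3   <- eff(round(log(TotalData$Cmax), 3))   # rounded to 3 decimals
pe_r2   <- eff(round(log(TotalData$Cmax), 2))   # rounded to 2 decimals

# Back-transformed point estimates (%): rounding before the fit shifts them.
100 * exp(c(full = pe_full, r3 = pe_r3, r2 = pe_r2))
```

With 3-decimal rounding the shift is tiny but almost never exactly zero in the last printed digits, which is the basis of the "probability very near zero" remark.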
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-05-05 22:13 (5471 d 19:16 ago) @ d_labes Posting: # 5302 Views: 24,935 |
|
Dear D. Labes, Sorry to confuse you again. ❝ I guess that you compared the logs calculated to full precision ❝ ❝ lm(lnCmax ~ seq + subj:seq + prd + drug, data=TotalData) ❝ versus my suggestion ❝ Yes, this is exactly the way we do it in bear, with the following code:
TotalData <- data.frame(subj      = as.factor(Total$subj),
                        drug      = as.factor(Total$drug),
                        Cmax      = Total$Cmax,
                        AUC0t     = Total$AUC0t,
                        AUC0INF   = Total$AUC0INF,
                        lnCmax    = log(Total$Cmax),
                        lnAUC0t   = log(Total$AUC0t),
                        lnAUC0INF = log(Total$AUC0INF)) ❝ To get Helmut's concern change ❝ ❝ or even more drastically round to 2 decimals. No! In bear, only "T/R ratio (%)" and "Power (%)" in "Sample size estimation" use the round() function right now. Later (v2.5.0), in the Welch t test, we will also use round() to present the point estimate and 90% CIs. — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
Helmut ★★★ Vienna, Austria, 2010-05-05 22:29 (5471 d 18:59 ago) @ yjlee168 Posting: # 5303 Views: 24,232 |
|
Dear bears and D. Labes, I have to cut in. I would not round at all, and would simply delete the columns of log-transformed values from the output. It's quite unusual to present anything other than data in raw scale. From the headings of the descriptive and comparative statistics it's clear enough whether the analysis was done in raw scale or log scale. When it comes to presenting the PE and CI (in percent), I would use round(x, digits = 2), because this format is exactly what FDA and EMA want to see. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
yjlee168 ★★★ Kaohsiung, Taiwan, 2010-05-05 23:05 (5471 d 18:24 ago) @ Helmut Posting: # 5304 Views: 24,213 |
|
Dear Helmut and D. Labes, I think I missed one critical point here. In bear, we also use the function formatC(log(Cmaxi), format="f", digits=2 or 3) to present log-transformed values, where i is the subject ID. It is just for the purpose of a pretty table, as you said in a previous post. But the function formatC() does not do any real rounding at all. ❝ I would not round at all - and would simply delete the columns of log-transformed values from the output. [...] O.k. Thanks. ❝ When it comes to presenting the PE and CI (in percent), I would use I see. The reason to keep 3 digits for the 90% CI is that the regulatory agencies in some countries ask for it (they like to see more). Thus, rounding or not is up to the user's choice. The outputs obtained from bear just serve as part of the attachments or appendix in the whole package of a BE final report. Thanks for your message. — All the best, -- Yung-jin Lee bear v2.9.2:- created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan https://www.pkpd168.com/bear Download link (updated) -> here |
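The distinction can be illustrated in a few lines (hypothetical value): formatC() only changes the printed string, while round() produces a new numeric that propagates into later calculations.

```r
x <- log(123.456)                      # full-precision value, approx. 4.8159

formatC(x, format = "f", digits = 3)   # "4.816": a character string, display only
identical(x, log(123.456))             # the underlying numeric is untouched

y <- round(x, 3)                       # 4.816 as a numeric: this *does* change
exp(x) - exp(y)                        # the value, so back-transforms differ
```

So formatting with formatC() for a pretty table is harmless, as long as the formatted strings never re-enter the computation.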
martin ★★ Austria, 2010-05-06 09:53 (5471 d 07:35 ago) @ yjlee168 Posting: # 5308 Views: 24,114 |
|
dear all! You may find the following option of interest, which is frequently used in R (e.g. the default of print.lm and print.glm): digits = max(3, getOption("digits") - 3) best regards martin |
d_labes ★★★ Berlin, Germany, 2010-05-06 12:12 (5471 d 05:16 ago) @ Helmut Posting: # 5309 Views: 24,163 |
|
Dear Helmut, ❝ I would not round at all - Full ACK. ❝ ... It's quite unusual to present anything than data in raw-scale. But ... see Canada-A, Chapter 8, SAMPLE ANALYSIS FOR A COMPARATIVE BIOAVAILABILITY STUDY, f.i. Table 8.5. Note also the logs with 2 decimals! And this has survived into the new DRAFT. Sincerely yours, The Guideline believer |
ElMaestro ★★★ Denmark, 2010-05-06 12:21 (5471 d 05:07 ago) @ d_labes Posting: # 5310 Views: 24,319 |
|
Hi all, ❝ And this has survived into the new DRAFT. Best regards EM. — Pass or fail! ElMaestro |
Helmut ★★★ Vienna, Austria, 2010-05-06 16:57 (5471 d 00:31 ago) @ d_labes Posting: # 5313 Views: 24,038 |
|
Dear D. Labes! ❝ And this has survived into the new DRAFT. Wow! Table A2-A: Subject Number 010** ID I: the subject did not appear for either period, yet in the following tables values are given for this subject. Spooky! Table A2-I: Variance Estimates for ln(AUCT): Subject(Seq) 0.2648, Residual 0.0729. Hmm. I get 0.265081 and 0.0716621 for AUCT (transformation in full precision), and 0.264352 and 0.073161 for the 2-decimal transformed values. Strange. Note also the simplified formula for CV in Table A2-I (also a survivor from 1992)... — Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
d_labes ★★★ Berlin, Germany, 2010-05-06 17:38 (5470 d 23:51 ago) @ Helmut Posting: # 5314 Views: 24,173 |
|
Dear Helmut! ❝ Table A2-A: Subject Number 010** ID I: Subject did not appear for either period. ❝ In the following tables values are given for this subject. Spooky! There is another one: Subject Number 009, ID T, which according to Table A2-A completed both periods but then abruptly vanished. Eventually kidnapped by pirates under the command of a well-known old sailor. — Regards, Detlew |