yjlee168 ★★ Kaohsiung, Taiwan, 20080712 20:53 (edited by yjlee168 on 20080713 02:31) Posting: # 2020 Views: 50,935 
Thread locked 
Hi all, We would like to announce in this forum the release of bear (GPL, free open source), a data-analytical tool for average bioequivalence (ABE) in R. Bear performs sample size estimation for ABE. It can also do non-compartmental analysis (NCA) followed by ANOVA (GLM). Data are imported into bear in .csv format, so all you need is a spreadsheet, such as MS Excel (if it's affordable) or OpenOffice Calc (a very nice free program!), or even a plain text editor to create the .csv file for importing your data. We have tested bear with one BE dataset and found all output results to be the same as those generated by commercial software (NCA with WinNonlin and ANOVA with SAS). Let us know if you have any questions or comments. Thank you. Visit its website at http://pkpd.kmu.edu.tw/bear. All the best, Hsinya Lee & Yungjin Lee, College of Pharmacy, KMU Kaohsiung, Taiwan 
Helmut ★★★ Vienna, Austria, 20080714 04:12 @ yjlee168 Posting: # 2021 Views: 47,528 

Dear Hsinya and Yungjin! Wonderful, thank you! I had a quick view already, some first impressions below:
For 20% CV, theta 95%, power 80% bear comes up with: n ≥ 9.13 per sequence, total sample size 18. I implemented Hauschke's approximation in Excel (sorry, a quick shot) and obtained: 9.1763 / sequence = 18.3526 / total → rounded up to the next even number = 20. David Dubins' FARTSSIE comes up with 19 / total. StudySize 2.01: 9.1876 / sequence = 18.3752 / total. My R code (modified from Patterson's/Jones' SAS) would give a total of 20 (power 83.47%). With bear's suggested n=18 I got a power of only 79.1% (StudySize). I failed in reading in the NCA test file. I tried it with tabulator and semicolon as field separators and got an error: Error in readChar(con, 5L, useBytes = TRUE) : cannot open connection
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
yjlee168 ★★ Kaohsiung, Taiwan, 20080714 10:26 @ Helmut Posting: # 2022 Views: 47,570 

Dear HS, Thank you for your testing. » I had a quick view already, some first impressions below: » – The plots for selection of data points in NCA should be in semi-log scale rather than linear. Definitely, you're correct! We will change this asap. » – Maybe it's possible to open only one graphics device and turn the history on, instead of opening a new device for every plot. To keep the screen clean, the plots should be displayed as you suggested. We will try and see if it can be done. » – In sample size estimation you should consider rounding up to the next even number: » Example: » For 20% CV, theta 95%, power 80% bear comes up with: »  » <<Suggestion>> »  » n >= 9.13 (sample size per sequence) » Total sample size 18 »  » I implemented Hauschke's approximation in Excel (sorry, a quick shot) and obtained: » 9.1763 / sequence = 18.3526 / total → rounded up to the next even number = 20 » David Dubins' FARTSSIE comes up with 19 / total » StudySize 2.01: 9.1876 / sequence = 18.3752 / total » My R code (modified from Patterson's/Jones' SAS) would give a total of 20 (power 83.47%). » With bear's suggested n=18 I got a power of only 79.1% (StudySize). Yes, we should not just round; we should round up. It should be 20 when we have an estimated sample size of 18.3. This will be fixed in the next release. » I failed in reading in the NCA test file. I tried it with tabulator and » semicolon as field separators and got an error: » Error in readChar(con, 5L, useBytes = TRUE) : » cannot open connection We have not tested a .csv file using a semicolon (;) before. Please try with a comma (,) to see if it works normally. I just tried with a semicolon as field separator; it does cause an error. » – I would avoid the term "Tests of Hypothese using the Type I MS for » SUBJECT(SEQUENCE) as an error term" and use "Tests of » Hypothesis for SUBJECT(SEQUENCE) as an error term" instead. 
There were some discussions on the R-list and also here about Types I–III; don't fall into the trap of using SAS's proprietary terminology… Of course, this will be fixed, too. » – IMHO a posteriori power is meaningless. It's just for the purpose of submission to the regulatory agency (at least in Taiwan). However, you're absolutely right. » Two points for the wishlist: » – CV_{intra} and CV_{inter} » – intra-subject residuals – something everybody misses in WinNonlin… These seem not too difficult to implement. We will add them as you suggest. » Again, many thanks; I will continue testing, time allowing. Thank you so much, HS. Best regards, Yungjin Lee 
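The rounding discussed above can be sketched in a couple of lines of R (a minimal illustration, not bear's actual code; `round_up_even` is an invented name):

```r
# Round an estimated total sample size up to the next even number, so that
# both sequences of a 2x2 cross-over stay balanced (invented helper).
round_up_even <- function(n_total) 2 * ceiling(n_total / 2)

round_up_even(18.3526)   # Hauschke's approximation above -> 20
round_up_even(18)        # already even -> 18
```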
Helmut ★★★ Vienna, Austria, 20080714 14:02 @ yjlee168 Posting: # 2025 Views: 47,491 

Dear Yungjin! » » I failed in reading in the NCA test file. I tried it with tabulator and semicolon as field separators and got an error: » » Error in readChar(con, 5L, useBytes = TRUE) : » » cannot open connection » » We have not tested a .csv file using a semicolon (;) before. Please try with a comma (,) to see if it works normally. I just tried with a semicolon as field separator; it does cause an error. Hhm, I failed with the comma as well (file: [path to file]/TEST.CSV). ,subj,drug,seq,prd,Cmax,AUC0t,AUC0INF,LnCmax,LnAUC0t,LnAUC0INF Importing the same file directly with R worked: > data <- read.table("[path to file]/TEST.CSV", header=TRUE, Is bear looking at a particular path, differently from the one set in R? BTW, somewhere it should be documented which formats of numbers and field separators are allowed/needed. Although everybody using R knows that the decimal separator is the point '.', converting CSV files from some localized Excel versions can be cumbersome. For instance the German standard for numbers uses the comma ',' as the decimal separator. I think the only 'safe' field separators would be blank or tabulator. » » – IMHO a posteriori power is meaningless. » » Just for the purpose of submission to the regulatory agency (at least in Taiwan). Interesting! The only country I knew before asking – sometimes – for a posteriori power was Greece. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
yjlee168 ★★ Kaohsiung, Taiwan, 20080714 18:12 (edited by yjlee168 on 20080714 18:32) @ Helmut Posting: # 2028 Views: 47,634 

Dear HS, » Is bear looking at a particular path, differently from the one set in R? Yes, R uses its accessible file path. You can type getwd() on the R console to see the R file path on your computer. Of course, you can change it yourself. Please see the R Windows FAQ or R Mac OS FAQ about "What are HOME and working directories?". We will show this directory in the next version to avoid this error. » BTW, somewhere it should be documented which formats of numbers and field separators are allowed/needed. Although everybody using R knows that the decimal separator is the point '.', converting CSV files from some localized Excel versions can be cumbersome. For instance the German standard for numbers uses the comma ',' as the decimal separator. » I think the only 'safe' field separators would be blank or tabulator. You're absolutely right. We will see what we can do about this. In this case, we have to leave it as an option to allow users to use semicolon (;) or comma (,) as the separator. Are there any other separators used in the .csv format? » Interesting! » The only country I knew before asking – sometimes – for a posteriori power was Greece. Even worse, in Taiwan, a BE study with an MR dosage form still requires a multiple-dose design. We seem to need more time to change. Thanks again, HS. Best regards, Yungjin 
Helmut ★★★ Vienna, Austria, 20080714 18:56 @ yjlee168 Posting: # 2029 Views: 47,407 

Dear Yungjin! » » Is bear looking at a particular path, differently from the one set in R? » Yes, R uses its accessible file path. Strange. Of course I changed the path to different locations (including the usual suspects: the desktop, the path of R, and R/bin) and placed the file in all these locations. In my example the path was set by R, and I was able to read the file from the console by read.table() – but got the error from bear. » [...] we have to leave it as an option to allow users to use semicolon (;) or comma (,) as the separator. Are there any other separators used in the .csv format? Actually there's not even a definition of what 'CSV' stands for. The original term was indeed 'Comma Separated Values', but it is better read as 'Character Separated Values'. Personally I have only got files with tabs, commas, blanks, semicolons, and – only once! – colons. Maybe it's possible to implement a question/answer game with the user about his/her file's content and format. Otherwise I expect you will get a lot of e-mails... Since everything is possible, read.table() has these two arguments:
sep – the field separator character. Values on each line of the file are separated by this character. If sep = "" (the default for read.table) the separator is 'white space', that is one or more spaces, tabs, newlines or carriage returns.
dec – the character used in the file for decimal points.
» Thanks again, HS. You are welcome. Please call me Helmut; stupid me has chosen my initials as the user name in registering to my own forum… — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
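The two read.table() arguments quoted above can be demonstrated with a small self-contained snippet (the file content is invented here: a 'German-style' CSV with ";" as field separator and "," as decimal mark):

```r
# Write a throw-away 'German-style' CSV and read it back with the sep and
# dec arguments of read.table() discussed above.
f <- tempfile(fileext = ".csv")
writeLines(c("time;conc", "0,5;1,25", "1,0;2,5"), f)

dat <- read.table(f, header = TRUE, sep = ";", dec = ",")
dat$conc   # numeric 1.25 and 2.5, not character
```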
Helmut ★★★ Vienna, Austria, 20080714 20:35 @ yjlee168 Posting: # 2032 Views: 47,539 

Dear Yungjin, » I see. I think you're talking about the NCA output. You first did NCA and saved the output file. Then you loaded this NCA output when you planned to do GLM. The error occurred at this moment. Exactly! I must confess, my description was not a proper bug report... » You have proposed a very nice way to fix this problem. Yes, we will try to make it more flexible. Does a user know what these options are for? No idea. Sometimes I believe that anybody doing PK should know his/her file's content. But on the other hand, importing a file with field sep ';' and dec '.' also isn't as straightforward as one might think (e.g., in WinNonlin):
Open > File type ASCII Data (*.dat;*.dta;*.prn;*.csv) > oops, everything in the first column…
Ah, there's an Options button; hhm, select X Automatically determine file options > same result…
Next try: Field delimiter ';'… working, but the header is in the first row, etc.
WinNonlin has no option to select the decimal separator anyway. Of course there are combinations which are impossible to resolve, e.g. the file nasty.csv:
x,y
where in the third row every import routine will end with an error. No program can 'guess' that it was produced with a badly configured German Excel (the two numbers 1.5 and 2.5 were meant)... If we change nasty.csv (assuming a German Excel) to:
t;c
WinNonlin would read in the file, but treat the third row as plain text. Writing =A1+5 to cell C1 we get 6, but for =A2+5 in cell C2 we get #VALUE! ...
» or still we are going to get lots of emails, anyway? Hard to tell. I wouldn't bother too much about making the input routines 'foolproof', but invest a little effort in the help files telling which format is acceptable – and which one is not. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
martin ★★ Austria, 20080714 12:18 @ yjlee168 Posting: # 2023 Views: 47,403 

dear hsinya and yungjin, I would like to acknowledge your work!! I suggest that you think about implementing a more sophisticated approach for estimation of the terminal elimination rate, as currently only the last 3 data points are used: [...] lambda, the terminal phase rate constant, will be estimated from the slope of linear regression with the last three data points [...] when using the OLS (ordinary least squares) criterion for estimation, the terminal elimination rate can be defined as a weighted sum of log-transformed concentrations where the weights are determined by the sampling schedule. in the case of a larger time gap between the last two measurement time points compared to the first two measurement time points, the concentration at the last time point is weighted more than the remaining two concentrations. this may lead to an inaccurate estimate of the terminal elimination rate when the last concentration is inaccurate for some reason (e.g. a higher inaccuracy of a bioanalytical method when dealing with values near the limit of detection). more sophisticated and/or robust approaches for estimation of the terminal elimination rate are for example discussed in this post. best regards martin 
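martin's point about schedule-determined weights can be verified directly in R (a sketch; the helper `ols_weights` and the numbers are invented for illustration, not part of bear):

```r
# Sketch of the weights martin describes: the OLS slope of log(C) vs. t is
# sum(w_i * log(C_i)) with w_i = (t_i - mean(t)) / sum((t_i - mean(t))^2),
# i.e. the weights are fixed by the sampling schedule alone.
ols_weights <- function(t) (t - mean(t)) / sum((t - mean(t))^2)

t <- c(8, 12, 24)            # last three sampling times, wide final gap
y <- log(c(5, 2.5, 0.3))     # log-transformed concentrations
w <- ols_weights(t)

# the weighted sum reproduces the regression slope exactly:
all.equal(sum(w * y), unname(coef(lm(y ~ t))[2]))

# and the 24 h point carries the largest absolute weight:
abs(w[3]) > abs(w[1])
```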
yjlee168 ★★ Kaohsiung, Taiwan, 20080714 19:02 @ martin Posting: # 2030 Views: 47,465 

Dear Martin, » I suggest that you may think about implementing a more sophisticated approach for estimation of the terminal elimination rate as currently only the last 3 data points are used: At the beginning, when we planned how to calculate lambda in bear, we looked at many options. The problem is that there seems to be no "standard" method so far to calculate lambda. That's why we chose the last-3-point regression method: it's simple and easy to implement. It's just like using the linear trapezoidal (LT) method to calculate AUC. Scientifically speaking, none of us thinks that LT is as accurate as many other methods. In order to validate the results obtained from bear, we chose WinNonlin as the "reference" to compare against. » [...] lambda, the terminal phase rate constant, will be estimated from the slope of linear regression with the last three data points [...] » » when using the OLS (ordinary least squares) criterion for estimation, the terminal elimination rate can be defined as a weighted sum of log-transformed concentrations where the weights are determined by the sampling schedule. in the case of a larger time gap between the last two measurement time points compared to the first two measurement time points, the concentration at the last time point is weighted more than the remaining two concentrations. this may lead to an inaccurate estimate of the terminal elimination rate when the last concentration is inaccurate for some reason (e.g. a higher inaccuracy of a bioanalytical method when dealing with values near the limit of detection). You're absolutely right. We will see what we can do about a more sophisticated method to calculate lambda. » more sophisticated and/or robust approaches for estimation of the terminal elimination rate are for example discussed in this post. A very good pointer, and references there. Many thanks. All the best, Yungjin 
Aceto81 ★ Belgium, 20080715 10:07 @ yjlee168 Posting: # 2035 Views: 47,360 

» » more sophisticated and/or robust approaches for estimation of the terminal elimination rate are for example discussed in this post. » A very good pointer, and references there. Many thanks. Yungjin, first of all thanks for sharing the bear package. For the estimation of the terminal elimination rate and the number of points to pick, I wrote a function some time ago which uses the WinNonlin approach as in the thread mentioned here before. Maybe it is of some help (with some adjustments for your code).
dat <- data.frame(time, conc) Best regards, Ace — Edit: I changed the tabs in the post to serial blanks; otherwise on some displays the output gets confusing. See also this post for updated code (preventing C_{max}/t_{max} from being included). [Helmut] 
yjlee168 ★★ Kaohsiung, Taiwan, 20080716 07:22 (edited by Jaime_R on 20080716 08:55) @ Aceto81 Posting: # 2042 Views: 47,255 

Dear Ace, Thank you very much for sharing, and sorry for the delay of this message. We will try to implement your algorithm in bear as another option for users to choose. Do you have a reference for the algorithm/code you provided? All the best, Yungjin — Edit: Full quote removed. Please see this post! [Jaime] 
Aceto81 ★ Belgium, 20080716 09:47 (edited by Aceto81 on 20080716 10:55) @ yjlee168 Posting: # 2045 Views: 47,201 

Dear Yungjin, I'm sorry, but the only reference I have was the thread mentioned above, and my comparison with WinNonlin. But the good news: maybe I've got a solution for the csv problem with the ',' and ';':
read.multi <- function(file) read.table(file, sep=ifelse(regexpr(";", scan(file, "character", 1, quiet=T)) > (-1), ";", ","), header=T)
Kind Regards, Ace — Edit: Full quote removed. Please see this post! [Ohlbe] Edit: Added the code [Ace] 
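The one-liner above can be checked against two throw-away files (the file contents are invented for the test; this restates the idea with the characters typically lost in forum posts, "<-" and "-1", restored):

```r
# Separator sniffing as in Ace's one-liner: peek at the first whitespace-
# delimited token of the file; regexpr() returns -1 when ";" is not found
# there, in which case we fall back to ",".
read.multi <- function(file)
  read.table(file,
             sep = ifelse(regexpr(";", scan(file, "character", 1, quiet = TRUE)) > -1,
                          ";", ","),
             header = TRUE)

f1 <- tempfile(); writeLines(c("a;b", "1;2"), f1)
f2 <- tempfile(); writeLines(c("a,b", "3,4"), f2)
read.multi(f1)   # parsed with ";"
read.multi(f2)   # parsed with ","
```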
yjlee168 ★★ Kaohsiung, Taiwan, 20080716 21:32 @ Aceto81 Posting: # 2049 Views: 47,160 

Dear Ace, Great! Thank you for sharing your code. We will try to make bear as friendly as possible. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
yjlee168 ★★ Kaohsiung, Taiwan, 20080923 12:13 @ Aceto81 Posting: # 2387 Views: 47,516 

Dear Ace, We've tried to implement your method (code) into bear now. But we found a difference between your method and WinNonlin. Here is your method as R code: b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) And we got Call: Your method picked 6 data points for this example. However, WinNonlin picked 5 data points, as follows: Model: Plasma Data, Extravascular Administration What are we missing here? Thanks. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
d_labes ★★★ Berlin, Germany, 20080923 14:41 @ yjlee168 Posting: # 2390 Views: 46,736 

Dear Yungjin Lee, I do not own WinNonlin, but I suggest that the point (t_{max}, C_{max}) should not be included in the terminal part. I further suggest that the TTT (two-times t_{max}) rule should be implemented to restrict the number of points in the terminal part, provided that at least 3 points are available. See this thread for the TTT rule. Maybe the Akaike information criterion AIC (especially the AICc, the small-sample formula) is a better choice than the adjusted R-square. — Regards, Detlew 
yjlee168 ★★ Kaohsiung, Taiwan, 20080924 22:41 @ d_labes Posting: # 2397 Views: 46,970 

Dear d_labes, Yes, Helmut also commented on this. He even said it should not include (T_{max}, C_{max}) even for a 1-compartment model. I read the previous threads you and Helmut pointed to. I think that it is not appropriate to include (T_{max}, C_{max}) in calculating lambda_{z} for a drug with a 2-compartment model. However, for a drug with a 1-compartment model, theoretically, it should be OK. But I will think about that later. Currently, we do not plan to include (T_{max}, C_{max}) in estimating lambda_{z}. Some automatic algorithms (such as that suggested by Ace, or WinNonlin's) may pick C_{max} to estimate lambda_{z}. We now allow users to pick these data points visually and manually in bear. We will try to implement the TTT rule in bear. Using AICs to choose appropriate data points to estimate lambda_{z} is very interesting. As far as I know, AIC is used for model selection or model discrimination: basically, we fit different models to the same observed data (points). I really don't know if we can use AICs to vary the data points fitted to the same model (linear regression). That's why, when doing PK/PD modeling, we do not abandon or pick some data points from the observed dataset to fit a model. But it is a very interesting and innovative approach. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
Aceto81 ★ Belgium, 20080925 10:56 @ d_labes Posting: # 2401 Views: 46,726 

» I do not own WinNonlin, but I suggest that the point (t_{max}, C_{max}) should not be included in the terminal part. » » I further suggest that the TTT (two-times t_{max}) rule should be implemented to restrict the number of points in the terminal part, provided that at least 3 points are available. » See this thread for the TTT rule. » » Maybe the Akaike information criterion AIC (especially the AICc, the small-sample formula) is a better choice than the adjusted R-square. If C_{max} may not be included, this explains the difference between the R code and WinNonlin. If you change for (i in (nrow(dat)-3):which.max(dat$conc)) to for (i in (nrow(dat)-3):(which.max(dat$conc)+1)) then C_{max} is excluded, and you would get the same result as WinNonlin. Ace 
yjlee168 ★★ Kaohsiung, Taiwan, 20080925 21:00 @ Aceto81 Posting: # 2414 Views: 46,635 

Dear Ace, Got it! We'll try it again and will report the result back here soon. Many thanks. » If C_{max} may not be included, this explains the difference between the R code and WinNonlin. » If you change » for (i in (nrow(dat)-3):which.max(dat$conc)) » to » for (i in (nrow(dat)-3):(which.max(dat$conc)+1)) » then C_{max} is excluded. » » And you would get the same result as WinNonlin. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
yjlee168 ★★ Kaohsiung, Taiwan, 20080926 08:03 @ Aceto81 Posting: # 2419 Views: 46,699 

Dear Ace, We tried three examples with codes as follows: Ex. 01 b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) Ex. 02 b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) Ex. 03 b <- c(0,0.5,0.75,1,1.5,2,3,4,8,12,24) The final result showed that only Ex. 01 got exactly the same data points (5) as WinNonlin. With Ex. 02 and Ex. 03, your method picked 5 and 4 data points, respectively, compared to 4 and 3 picked by WinNonlin. There is still something wrong with your method/code. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
Aceto81 ★ Belgium, 20080926 10:25 @ yjlee168 Posting: # 2421 Views: 46,666 

Dear Yungjin, Thanks for trying my code with different examples; I learn a lot too. So here we go: it looks like the round() was not a good idea; this was the reason for the failure of example 2. For example 3: (nrow(dat)-3):nrow(dat) gives 4 rows instead of 3, so the code wasn't able to pick fewer than 4 points. So here is the new code; I wrapped it up into a function and ran your samples through it:
f <- function(dat) { So for your examples: ex1: b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) ex2: b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) ex3: b <- c(0,0.5,0.75,1,1.5,2,3,4,8,12,24) This matches the WinNonlin results (finally....) Ace 
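Since the body of f() is not reproduced in full in this post, here is a hedged sketch of the selection rule as described in the thread (the function name `pick_lambda_z` and the example data are invented; WinNonlin's 0.0001 tie-breaking caveat is omitted for brevity):

```r
# Fit log(conc) ~ time to the last 3 points, then the last 4, ..., back to
# the point just after Cmax, and keep the fit with the largest adjusted R^2.
pick_lambda_z <- function(dat) {
  first <- which.max(dat$conc) + 1     # exclude the Cmax point itself
  start <- nrow(dat) - 2               # index where the last-3-point fit starts
  best <- NULL
  for (i in start:first) {
    fit   <- lm(log(conc) ~ time, dat[i:nrow(dat), ])
    r2adj <- summary(fit)$adj.r.squared
    if (is.null(best) || r2adj > best$r2adj)
      best <- list(n = nrow(dat) - i + 1, r2adj = r2adj,
                   lambda_z = -unname(coef(fit)[2]))
  }
  best
}

# invented example profile (times in h, concentrations made up):
time <- c(0, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 8, 12, 24)
conc <- c(0, 10, 18, 22, 24, 23, 20, 15, 11, 5, 2.5, 0.3)
pick_lambda_z(data.frame(time, conc))
```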
yjlee168 ★★ Kaohsiung, Taiwan, 20080928 00:08 @ Aceto81 Posting: # 2428 Views: 46,911 

Dear Ace, Thanks for your assistance. I looked through your code and found that it picks the correct numbers of data points this time. So I changed one line of your code from return(n_lambda) to summary(lm(log(conc)~time, dat[(nrow(dat)-n_lambda+1):nrow(dat),])) because I wanted to see if the final results are the same as those obtained from WinNonlin. Our original code for this line is summary(lm(log(conc)~time, dat[i:nrow(dat),])), which apparently does not include n_lambda. So the final results are the following (dumped from the R console): f <- function(dat) { Bingo! The final results are exactly the same as those obtained from WinNonlin with these three examples. We will test more examples to confirm this. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
yjlee168 ★★ Kaohsiung, Taiwan, 20080929 13:32 @ Aceto81 Posting: # 2437 Views: 46,580 

Dear Ace, This is my second reply to your previous post. In my first reply, I said that the three examples (Ex. 01–03) we tested before all matched the data points picked by WinNonlin (WNL). Then we went further and tested three more examples (Ex. 04–06), and we found that both your method and WNL may pick (Tmax, Cmax) to estimate lambda_{z} with Ex. 04 and Ex. 06. Please see the following results. Ace Ex. 04 b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) WNL Ex. 04 ...truncated here Ace Ex. 05 b <- c(0,0.5,0.75,1,1.5,2,3,4,8,12,24) Ace Ex. 06 b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) WNL Ex. 06 ...truncated here It looks like your method and WNL are quite consistent in data point selection for estimation of lambda_{z} now. The question is that both methods still cannot absolutely rule out the Cmax data point (e.g. Ex. 04 and Ex. 06). — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
Helmut ★★★ Vienna, Austria, 20080929 15:54 @ yjlee168 Posting: # 2438 Views: 46,933 

Dear Yungjin, wonderful! I don't know which version of WinNonlin you are using, but I could reproduce your results of Ex. 04 in the current version v5.2.1. This is contradictory to the manual and the online help. Now for the weird part: I tried the dataset also in WinNonlin v6 beta (Phoenix) and got:
Summary Table As you can see, only the last three data points were picked, although according to the manual the algorithm should be the same as in previous versions. For both cases we should consider contacting Pharsight's support to clarify the issue. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
yjlee168 ★★ Kaohsiung, Taiwan, 20080929 18:45 (edited by yjlee168 on 20080929 19:16) @ Helmut Posting: # 2439 Views: 47,048 

Dear Helmut, We used WNL v5.1.1 here. What WNL actually does in estimating lambda_{z} is to include the last three data points first and calculate the adjusted R^{2} for this step. Then it includes one more data point (i.e. the 4^{th} data point from the last) and calculates the adj. R^{2} again. If the difference between these two adj. R^{2}s is less than or equal to 0.0001, it stops including further points. If not, it includes the 4^{th} data point and adds one more (i.e. the 5^{th} data point from the last)... It will not stop until it reaches Cmax. That's why WNL may include Cmax in estimating lambda_{z}. WNL should stop at the data point next to Cmax. The result you got from WNL v6 beta seems not to meet its stop criterion. The adj. R^{2} for Ex. 04 including Cmax is 0.9941 with WNL v5.1.1 (or your WNL v5.2.1); however, v6 beta got only 0.9937. WNL v6 beta Ex. 04 (quoted from your previous post)
... WNL Ex. 04 Model: Plasma Data, Extravascular Administration » For both cases we should consider contacting Pharsight's support to clarify the issue. Yes, I agree with you. However, if you don't agree with including Cmax in estimating lambda_{z}, then WNL (v5.x.x) should not be used for this purpose. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
yjlee168 ★★ Kaohsiung, Taiwan, 20080930 08:10 @ Helmut Posting: # 2440 Views: 46,669 

Dear Helmut, I tried to eliminate Cmax from data point selection for lambda_{z} estimation with Ace's method (code/algorithm), and I found something interesting. WinNonlin (WNL) v6 beta does actually exclude Cmax from point selection! I got the same results from the modified Ace's method as WNL v6 beta did (your previous post) with Ex. 04 (and probably Ex. 06, too). In my previous post, the algorithm I described for data point selection by WNL may not be correct. As a matter of fact, WNL v5.x.x calculates each adj. R^{2} value starting with the last three data points, then the last 4 data points, the last 5 data points... until including Cmax. Finally it picks the dataset which has the maximum adj. R^{2} value to do the linear regression for estimation of lambda_{z}. However, with WNL v6 beta, the calculation or search stops at the data point next to Cmax. So I changed the line for (i in (nrow(dat)-2):(which.max(dat$conc+1))) in Ace's method to for (i in (nrow(dat)-2):(which.max(dat$conc))+1) (R scripts). I found that I could successfully eliminate Cmax from data point selection. Then the results surprised me! We had the same results as those you got from WNL v6 beta! See the following (pasted from the R console). Ex. 04 Call: Ex. 06 Call: Ex. 05 had the same result as that obtained from the previous version. That happens when the dataset including Cmax does not have the maximum adj. R^{2}. If you like, you can test Ex. 06 (WNL v5.x.x included Cmax with this example, too) with WNL v6 beta to see if you get exactly the same results as ours. It should, I guess. We will test more datasets. If there are no other errors, we will include this method in bear. Time to move on to TTT method coding... — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
d_labes ★★★ Berlin, Germany, 20080930 09:43 (edited by d_labes on 20080930 10:08) @ yjlee168 Posting: # 2441 Views: 46,662 

Dear folks, if you need an independent opinion (no R, no Far Side, but 'the power to know') here are the results of the log-linear regression with SAS for example 4, to 6 decimals, without the point (C_{max}, t_{max}):
N used lambda_{Z} Rsq Adj.Rsq If I understood the algorithm right, 3 points in the terminal phase should be the solution. From the WinNonlin user's guide (don't know which version; found it on the Indernet): "[...] WinNonlin repeats regressions using the last three points with non-zero concentrations, then the last four points, last five, etc. Points prior to C_{max} [...] are not used [...] For each regression, an adjusted R2 is computed [...] The regression with the largest adjusted R2 is selected to estimate lambda_{Z}, with these caveats: • If the adjusted R2 does not improve, but is within 0.0001 of the largest adjusted R2 value, the regression with the larger number of points is used. • lambda_{Z} must be positive, and calculated from at least three data points." If you use (C_{max}, t_{max}) one gets:
N used lambda_{Z} Rsq Adj.Rsq If the algorithm is not serial (stop if no increase in adj. R2), and this seems to be the case, then N used = 7. — Regards, Detlew 
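The tie-breaking caveat quoted from the user's guide can be sketched as a selection step on its own (a hypothetical helper, not WinNonlin itself; the candidate fits below are invented numbers):

```r
# Among candidate fits, take the largest adjusted R^2, but when another fit
# with MORE points lies within 0.0001 of it, prefer that one (the quoted rule).
select_by_adjr2 <- function(n_points, adjr2, tol = 1e-4) {
  ok <- adjr2 >= max(adjr2) - tol         # all fits within tol of the best
  n_points[ok][which.max(n_points[ok])]   # the one using the most points
}

# toy candidates (points used, adjusted R^2) -- invented for illustration:
n  <- c(3, 4, 5, 6, 7)
r2 <- c(0.99590, 0.99595, 0.9900, 0.9800, 0.9700)
select_by_adjr2(n, r2)   # 4: within 0.0001 of the maximum, but more points
```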
yjlee168 ★★ Kaohsiung, Taiwan, 20080930 12:23 @ d_labes Posting: # 2445 Views: 46,399 

Dear DLabes, You're correct about the method of data point selection by WNL. Thanks. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
yjlee168 ★★ Kaohsiung, Taiwan, 20080930 12:34 @ d_labes Posting: # 2446 Views: 46,649 

Dear DLabes, We have some initial results for the estimation of lambda_{z} using the TTT method. The following is the output obtained from the R console. b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) It seems not so difficult to implement the TTT method in R. Meanwhile, it is easy to validate visually. Thanks. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
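As the R output above is truncated, a minimal sketch of the TTT rule as described in this thread might look like this (function name and example data are invented, not bear's implementation):

```r
# TTT (two-times-tmax) rule: use every sampling point after 2 x tmax for
# the log-linear fit of lambda_z.
ttt_lambda_z <- function(dat) {
  tmax <- dat$time[which.max(dat$conc)]
  term <- dat[dat$time > 2 * tmax, ]     # terminal phase per the TTT rule
  stopifnot(nrow(term) >= 3)             # need at least 3 points to regress
  fit <- lm(log(conc) ~ time, term)
  list(n = nrow(term), lambda_z = -unname(coef(fit)[2]))
}

# invented example profile:
time <- c(0, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 8, 12, 24)
conc <- c(0, 10, 18, 22, 24, 23, 20, 15, 11, 5, 2.5, 0.3)
ttt_lambda_z(data.frame(time, conc))     # tmax = 1 h, so points after 2 h
```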
d_labes ★★★ Berlin, Germany, 20080930 14:48 @ yjlee168 Posting: # 2449 Views: 46,529 

Dear Yungjin Lee, » We have some initial results for the estimation of lambda_{z} using the TTT method. Congratulations! Because I'm not a useR! I cannot totally follow your code. But I think your implementation is the TTT rule as invented: take all points after 2 x t_{max} for estimating lambda_{Z}. Am I right? If yes, I wish to specify my suggestion further: the TTT rule is only valid if the concentration-time course resembles a one-compartmental curve (see the paper of the inventors for this^{1)}). Thus my suggestion is to restrict the maximum number of time points used for lambda_{Z} estimation to the TTT range, and in that range apply the search for points with adj. R2 or whatever fit criterion you wish to apply. I have no systematic evidence, but after some cursory case evaluations I intuitively feel (OK, this is not very scientific) that this method should choose a reasonable number of points for the terminal phase, even in cases where the curves do not resemble a one-compartmental shape. ^{1)} Scheerans C, Derendorf H and C Kloft. Proposal for a Standardised Identification of the Mono-Exponential Terminal Phase for Orally Administered Drugs. Biopharm Drug Dispos 29, 145–157 (2008) — Regards, Detlew 
yjlee168 ★★ Kaohsiung, Taiwan, 20080930 20:28 @ d_labes Posting: # 2453 Views: 46,522 

Dear DLabes, useR! < this is marvelous. I really like it very much. » But I think your implementation is the TTT rule as invented: Take all points after 2 x t_{max} for estimating lambda_{Z}. Am I right? Yes. We followed your suggestion to implement the TTT method. » Thus my suggestion is to restrict the maximum number of time points to use for lambda_{Z} estimation to the TTT range and in that range apply the search for points with adjR2 or whatever fit criterion you wish to apply. You're absolutely right. The question is how we are going to set the maximum number of time points? How to define a reasonable number of points? If there is an acceptable number, your suggestion should be easy to implement. It would be a combination of the TTT method with adj. R^{2} (Ace's proposed algorithm and the WinNonlin method). — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
d_labes ★★★ Berlin, Germany, 20081001 08:43 (edited by d_labes on 20081001 08:59) @ yjlee168 Posting: # 2457 Views: 46,518 

Dear Yungjin Lee, » useR! < this is marvelous. I really like it very much. Compensation for that is that I have 'The power to know'!!! This is, on whatever grounds, seen as the 'industry standard' in the pharmaceutical industry I'm employed in. » [...] Question is how we are going to set the maximum number of time points? How to define a reasonable number of points? Let us take our example 4. My suggested algorithm is: Take all the points from TTT to last as the maximum number to use, i.e. from t=3 to 24 h (points 8-12). Within that range apply the maximum adjR2 algorithm (or another best fit). For example 4 this would give the last 3 points. In that example the result coincides with the adjR2 method using all points after C_{max}. For other curves this need not be the case. The combination of TTT with a best-fit method avoids choosing too many points, a common flaw seen with the adjR2 method (see our thread here on TTT). Furthermore, it reduces the computation time of the best-fit method (adjR2 or whatever) if this is not implemented in a serial way. — Regards, Detlew 
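A hypothetical sketch of this combined algorithm in R; the function name and the minimum of three points are my assumptions, not from the thread:

```r
# Combination rule: restrict candidates to the TTT range, then within
# that range pick the tail subset (always ending at the last sampling
# point) that maximises adjusted R^2; at least 3 points are required.
best.fit.ttt <- function(time, conc) {
  tmax <- time[which.max(conc)]
  cand <- which(time > 2 * tmax & conc > 0)   # the TTT range
  best <- NULL
  best.adjr2 <- -Inf
  for (start in seq_along(cand)) {
    idx <- cand[start:length(cand)]
    if (length(idx) < 3) break                # need >= 3 points
    fit   <- lm(log(conc[idx]) ~ time[idx])
    adjr2 <- summary(fit)$adj.r.squared
    if (adjr2 > best.adjr2) {
      best.adjr2 <- adjr2
      best <- idx
    }
  }
  fit <- lm(log(conc[best]) ~ time[best])
  list(points = best, lambda.z = -coef(fit)[[2]], adj.r2 = best.adjr2)
}
```

Because the candidate set is capped at the TTT range, the search loop is short even for dense sampling schedules, which is the computation-time point made above.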
yjlee168 ★★ Kaohsiung, Taiwan, 20081002 12:40 @ d_labes Posting: # 2465 Views: 46,346 

Dear DLabes, » Within that range apply the maximum adjR2 algorithm (or an other best fit). » For example 4 this would give the last 3 points. I've got your point. Great. Now which criterion of best fit should we use for the combo method, AIC or ARS? It looks like more people will vote for AIC, I guess. It should be implementable in R, too. Or both, leaving the option for the users to decide? — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
d_labes ★★★ Berlin, Germany, 20081002 13:59 @ yjlee168 Posting: # 2466 Views: 46,242 

Dear Yungjin, » [...] Now which criterion of the best fit should we use for the combo method? AIC or ARS? Looks like more people will vote for AIC, I guess. It should be implementable, too, with R. or both? Leave the option to the users to decide? My personal choice is AIC, but with the full formula (see this message) to account for the varying number of data points. Perhaps the small-sample AICc should also be considered: AICc = AIC + 2*p*(p+1)/(n-p-1). But with that, n=3 is a problem (which I have dealt with in an empirical manner). But this is only an opinion. Therefore a choice by the user would be a good idea, I think . — Regards, Detlew 
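As a small sketch, the correction could be computed like this in R (the function name is mine; here p counts the estimated parameters including the residual variance, so p = 3 for a log-linear terminal fit):

```r
# AICc = AIC + 2*p*(p+1)/(n - p - 1); the correction blows up when
# n - p - 1 <= 0, which is exactly the n = 3 problem mentioned above
# (intercept + slope + residual variance already gives p = 3).
aicc <- function(fit) {
  n <- length(residuals(fit))
  p <- length(coef(fit)) + 1      # + 1 for the residual variance
  AIC(fit) + 2 * p * (p + 1) / (n - p - 1)
}
```

For any fit with n > p + 1 the correction is positive, so AICc is always more conservative than plain AIC for small samples.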
Aceto81 ★ Belgium, 20080930 15:11 @ yjlee168 Posting: # 2451 Views: 46,397 

» It seems not so difficult to implement TTT method using R. Meanwhile, it is easy to validate visually. Thanks. without the for loop:
b <- c(0,0.25,0.5,0.75,1,1.5,2,3,4,8,12,24) Ace 
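Ace's vectorised code did not survive in this archive; a loop-free equivalent of the TTT selection could look like this (the concentration values are invented for illustration):

```r
# the schedule from the thread plus made-up concentrations
b    <- c(0, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 8, 12, 24)
conc <- c(0, 1.2, 2.5, 3.1, 3.6, 3.2, 2.8, 2.0, 1.5, 0.6, 0.25, 0.04)
dat  <- data.frame(time = b, conc = conc)

# select the terminal points and fit in one pass, no explicit loop
term <- subset(dat, time > 2 * time[which.max(conc)] & conc > 0)
lambda.z <- -coef(lm(log(conc) ~ time, data = term))[[2]]
```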
yjlee168 ★★ Kaohsiung, Taiwan, 20080930 19:57 @ Aceto81 Posting: # 2452 Views: 46,373 

Dear Ace, Genius. It works well, too. A much, much better algorithm than ours. We learned a lot. Many thanks. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
Aceto81 ★ Belgium, 20081001 14:18 @ yjlee168 Posting: # 2460 Views: 46,363 

» Genius. It works well too. Much, much better algorithm than ours. We learn a lot. We all learn, I think. A suggestion for implementing the number of points used: let the user choose which method to go for (I like d_labes' suggestion of the combination), but it shouldn't be too hard to implement all of them by the use of some simple if statements:
method.chosen <- c("WNL","TTT","combination") This can only work when you make a function of my previous code which returns the number of points used (function.Ace) Ace 
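One way the if-statement dispatch could look; the three rule functions below are crude stand-ins I invented just to make the sketch self-contained (in practice, function.Ace from the post and the real WNL/combination rules would take their place):

```r
# placeholder selection rules, each returning indices of the
# points to use for the terminal fit
wnl.rule   <- function(time, conc) tail(which(conc > 0), 3)  # crude stand-in
ttt.rule   <- function(time, conc)
  which(time > 2 * time[which.max(conc)] & conc > 0)
combo.rule <- function(time, conc) ttt.rule(time, conc)      # stand-in

pick.points <- function(time, conc,
                        method = c("WNL", "TTT", "combination")) {
  method <- match.arg(method)
  if (method == "WNL") {
    wnl.rule(time, conc)
  } else if (method == "TTT") {
    ttt.rule(time, conc)
  } else {
    combo.rule(time, conc)
  }
}
```

`match.arg` makes `"combination"` the fall-through default-checked choice and rejects unknown method names, which keeps the user-facing option honest.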
yjlee168 ★★ Kaohsiung, Taiwan, 20081002 12:29 @ Aceto81 Posting: # 2464 Views: 46,388 

Dear Ace, Yes, we've been planning to do that just following what you've proposed here. Thank you so much for always being so helpful. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
Aceto81 ★ Belgium, 20081003 15:33 @ yjlee168 Posting: # 2473 Views: 46,381 

» Yes, we've been planning to do that just following what you've proposed here. » Thank you so much for always being so helpful. Hello, if I've got some time, I'm happy to help you. As for the selection of the number of points used, I wrote a quick function which gives you an overview:
f <- function(dat){ The code makes a comparison between TTT, the AIC criterion, ARS, the combination of TTT and ARS, and the lee method (although sometimes lee doesn't work, depending on the lt parameter set). You can adjust the lee parameters by passing them into the function, e.g. f(dat, lt=TRUE, method="ols") Maybe it's helpful for you Ace 
yjlee168 ★★ Kaohsiung, Taiwan, 20081003 21:22 @ Aceto81 Posting: # 2476 Views: 46,303 

Dear Ace, Thank you so much. Your code nicely concludes this thread's discussion of lambda_{z} estimation using the TTT method with a best-fit criterion (either ARS or AIC). We will try your code asap. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
ElMaestro ★★★ Denmark, 20080722 14:35 (edited by ElMaestro on 20080722 15:01) @ yjlee168 Posting: # 2071 Views: 47,165 

Hi Hsinya & Yungjin, this effort is much appreciated. Thanks a lot; it must have taken you a lot of time. Please allow me to ask a few questions or give a few comments:
Best regards, EM 
yjlee168 ★★ Kaohsiung, Taiwan, 20080723 09:55 (edited by yjlee168 on 20080723 12:57) @ ElMaestro Posting: # 2072 Views: 47,186 

Dear EM, » From this I get the impression that the ANOVA in fact is based on lm, not GLM; for example "lm(LnAUC0INF ~ seq + subj + prd + drug, data = TotalData)"?! Is this intended, shouldn't it be glm? » » 2. The intrinsic R glm function does afair not allow for random effects (in contrast to GLM in SAS); analysing datasets with a syntax like "glm(LnAUC0INF ~ seq + subj + prd + drug)" might treat subj as fixed. Exactly. You're right. We use the "lm" function to run the ANOVA, not "glm" from R. However, the outputs we have tested with one BE dataset are exactly the same as those generated by SAS with a proc glm; statement. Yes, we tried "glm" at the beginning and got some different results from it. Then we changed to "lm". Bingo! Your comments exactly helped to solve something that we didn't know before. » 3. As far as I remember, both lm and glm are unsafe in cases with imbalance for crossover studies (can't find a ref right now, though, sorry about it). I am under the impression that this type of analysis should be done explicitly with a mixed model (LME, using the REML approach) to ascertain you get the correct sigma. Our package is currently built only for the 2-trt x 2-per x 2-seq, balanced, non-replicated, crossover-designed BE study. The mixed model (such as proc mixed; in SAS or "lme" with REML) may be suitable for replicated, imbalanced, crossover-designed BE studies. We have planned to add more functions later. At that point, "lme" will be used. Ref.: » 4. Although Sums-of-Squares is pretty much a mine field, I think it would be wise to at least implement an option of getting type III SS. Some regulators seem unwilling to accept anything other than type III, while others couldn't care less (I have it on good account ). Yes, we will add "Type III SS" in the next release. We're fixing bear right now. Hopefully it can be done before the end of July. » ...I am no expert (neither in statistics nor in BE) I should add. You certainly are an expert in BE . Thank you for your comments. 
Edit: Link corrected for FDA’s new site. [Helmut] — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
ElMaestro ★★★ Denmark, 20080724 10:04 (edited by ElMaestro on 20080724 12:18) @ yjlee168 Posting: # 2076 Views: 47,464 

Dear Yungjin, I don't think the software should state it uses GLM if it doesn't. But that's of course only a personal view and I mean no offense at all. Regarding proc mixed and proc GLM in SAS, my impression is* that for balanced data they achieve the same thing, but proc mixed is more computationally intensive. Therefore, in the old days when computers were slow, GLM was preferred. Nowadays, when computational power is not an issue, it makes little difference. For data without balance, however, proc mixed is used and proc GLM may be unsafe. All in all, one can say it is always kind of safe to use proc mixed. Thus, I'd suggest you to use a mixed model the following way: "lmeYungjin <- LnAUC0INF ~ seq + prd + drug, random=~1|subj, method="REML") From the lmeYungjin fit you can extract the correct sigma (which is correct whether or not you have balance) and the treatment difference, and use this to construct the 90% CI. Next, you can do a glm as follows: "glmYungjin <- glm(LnAUC0INF ~ seq + prd + drug + subj)" followed by "drop1(glmYungjin, test="F")" to get a glm ANOVA with type III SS. Last, you just need to update the ANOVA with the correct error term for the sequence effect (between-subject error). Result: ANOVA and 90% CI done the way SASoholics like, and it is resistant to imbalance.... Best regards EM *: When I say "my impression is" then it of course means I could be completely wrong and I'd be happy to stand corrected accordingly. 
yjlee168 ★★ Kaohsiung, Taiwan, 20080725 20:38 @ ElMaestro Posting: # 2088 Views: 46,968 

Dear EM, Yes, so we've decided to use "ANOVA (lm)" instead of "GLM" in bear. The "lme" method from the package nlme will be implemented and tested. And "lme" will be used to replace the current "lm" if everything runs OK. Thank you for your suggestions. » "lmeYungjin <- LnAUC0INF ~ seq + prd + drug, random=~1|subj, method="REML") Should this be lmeYungjin <- lme(LnAUC0INF...) ? — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
ElMaestro ★★★ Denmark, 20080728 08:42 @ yjlee168 Posting: # 2089 Views: 46,900 

» » "lmeYungjin <- LnAUC0INF ~ seq + prd + drug, random=~1|subj, method="REML") » » Should this be lmeYungjin <- lme(LnAUC0INF...) ? Yes, that's correct. Sorry about the syntactical typo. EM. 
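With the typo fixed, the whole suggestion can be sketched end to end. The toy 2x2 crossover data below are invented for illustration (nlme ships with R); the factor level coding, hence the coefficient name "drugT", is an assumption of this sketch:

```r
library(nlme)

# invented, balanced 2x2 crossover data: 12 subjects, 6 per sequence
set.seed(1)
d <- expand.grid(subj = factor(1:12), prd = factor(1:2))
d$seq  <- factor(ifelse(as.integer(d$subj) <= 6, "RT", "TR"))
d$drug <- factor(ifelse((d$seq == "RT") == (d$prd == "2"), "T", "R"))
d$LnAUC0INF <- 4 + 0.05 * (d$drug == "T") +
               rep(rnorm(12, 0, 0.2), 2) + rnorm(24, 0, 0.1)

fit <- lme(LnAUC0INF ~ seq + prd + drug, random = ~ 1 | subj,
           method = "REML", data = d)

pe <- fixef(fit)[["drugT"]]                    # T - R on the log scale
se <- fit$sigma * sqrt((1/6 + 1/6) / 2)        # balanced: 6 per sequence
ci <- exp(pe + c(-1, 1) * qt(0.95, 10) * se)   # 90% CI, back-transformed
```

`fit$sigma` is the intra-subject (residual) standard deviation the post refers to, and it stays valid when the sequences are not equally sized (the standard error formula then uses the actual per-sequence counts).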
ElMaestro ★★★ Denmark, 20080923 16:10 @ yjlee168 Posting: # 2391 Views: 46,657 

» Yes, we tried "glm" at the beginning, and got some different results from it. Then we changed to "lm". Bingo! Your comments exactly help to solve something that we don't know before. Hi again, I am in principle a little surprised to hear that R's glm is not able to give you what you want, while lm is. glm and lm and the accompanying anova should give pretty much the same result as long as you don't play around with glm's functionality for the error distribution. Do you have a sample dataset that illustrates the problem you identified with glm? Best regards EM. 
yjlee168 ★★ Kaohsiung, Taiwan, 20080924 22:00 @ ElMaestro Posting: # 2396 Views: 46,743 

Dear EM, Thanks for your comments. We went back to test glm in bear again and find that you're correct about glm. Basically, both lm and glm generate the same results with the tested dataset. The following are the Type I SS and Type III SS obtained from lm and glm. lm, Type I SS: glmData <- lm(Cmax ~ sequence + period + drug + subject, data=KK) lm, Type III SS: drop1(glmData, test="F") glm, Type I SS: glmData <- glm(Cmax ~ sequence + period + drug + subject, data=KK) glm, Type III SS: drop1(glmData, test="F") It looks like only the AICs are different when comparing these two methods. Since SAS does not calculate AICs in its output, we thus cannot compare which one is correct. The reason we chose lm, instead of glm, might be that the output generated by lm is more similar to that of SAS. And we didn't quite understand what "Deviance Resid." (in the glm output) meant at the beginning. Again, both glm and lm generate almost the same results. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
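The equivalence is easy to demonstrate; the dataset KK from the post is not available here, so the data below are made up:

```r
# made-up balanced 2x2 crossover data, 8 subjects, 4 per sequence
set.seed(2)
KK <- expand.grid(subject = factor(1:8), period = factor(1:2))
KK$sequence <- factor(rep(rep(c("RT", "TR"), each = 4), 2))
KK$drug <- factor(ifelse((KK$sequence == "RT") == (KK$period == "2"),
                         "T", "R"))
KK$Cmax <- rlnorm(16, meanlog = 5, sdlog = 0.2)

fit.lm  <- lm(Cmax ~ sequence + period + drug + subject, data = KK)
fit.glm <- glm(Cmax ~ sequence + period + drug + subject, data = KK)

# identical estimates: glm with the default gaussian family IS lm
all.equal(coef(fit.lm), coef(fit.glm))
drop1(fit.lm, test = "F")   # single-term deletions, type III style
```

With the default `gaussian` family, `glm` fits exactly the least-squares model `lm` fits; only the reporting (deviance, AIC) differs, which matches the observation above.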
ElMaestro ★★★ Denmark, 20080925 09:03 @ yjlee168 Posting: # 2400 Views: 46,860 

Hi again, I am happy to hear that it is now working with glm. Note that subject is "nested within sequence"; that is why you get slightly odd results (zero deviation + convergence criterion) for your sequence effect after drop1. I think it goes like this*: First the model is fit with Subj, Treatm, Period and Sequence. Then, to find the SS for Sequence, a model with Subj, Treatm and Period is fit. The difference in residuals is supposed to be the type III Sequence effect. However, because of the subject being nested in sequence, removal of Sequence from the full model corresponds to removing nothing (i.e. Sequence is already there because of the inclusion of Subject). Thus, a compensation for this phenomenon is necessary. SAS apparently figures it out automatically, while R doesn't. I am not aware of any syntactical trick to make R compensate for the nesting. If others know it, it would be relevant to hear about it. Otherwise I guess there is room for improvement on the part of the R development team? EM. *: As usual I could be completely wrong. 
ElMaestro ★★★ Denmark, 20080925 13:52 @ ElMaestro Posting: # 2404 Views: 46,599 

» Otherwise I guess there is room for improvement on the part of the R development team? I have been thinking a little about this now... Perhaps one should consider it a feature rather than a bug that R's drop1 does not compensate for the nesting. After all, it does exactly what drop1 is supposed to do (which is dropping one factor at a time). Comments, anyone? EM. 
Helmut ★★★ Vienna, Austria, 20080925 14:39 @ ElMaestro Posting: # 2405 Views: 46,566 

Maybe this thread and the references will help… — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
ElMaestro ★★★ Denmark, 20080925 14:55 @ Helmut Posting: # 2406 Views: 46,556 

» Maybe this thread and the references will help… Dear HS, thanks for the help. However I think that link misses the point a little bit. The question I have been discussing with myself this morning (yes, ok, that perhaps sounds borderline schizophrenic) is not what type III SS are, rather how they should be presented for a nested situation; the R way of doing single term deletion is to do single term deletion, while the SAS way of doing single term deletion is to do single term deletion with the added bonus of rabbits out of a hat for the factor in which another factor is nested. You have an opinion perhaps? EM. 
Helmut ★★★ Vienna, Austria, 20080925 15:33 @ ElMaestro Posting: # 2407 Views: 46,607 

Dear ElMaestro! » You have an opinion perhaps? You gave a wonderful description of the differences between R and SAS . I prefer the R way, because IMHO it's more transparent; but I'm not a SAS user. Maybe we should wait for DLabes passing by to obtain more insights on — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
d_labes ★★★ Berlin, Germany, 20080925 16:45 @ Helmut Posting: # 2411 Views: 46,755 

Dear ElMaestro, dear HS, I cannot follow your discussion exactly because I have no experience with R up to now. Without entering combat in the war of religions about type 1, type 3 and so on 'Sums of squares' (Ssq) and their appropriateness: SAS proc GLM behaves like R, I think. If the model is specified as (1) log(PKparm) = sequence treatment period subject you will get DF=0 and type 3 Ssq = missing or zero (I do not remember exactly) in my first attempt, in which I tried this model also. Only if the model is specified as (2) log(PKparm) = sequence treatment period subject(sequence) where subject(sequence) means subject nested within sequence, an appropriate DF and a value for the type 3 Ssq are obtained. This is because Ssq(sequence) and Ssq(subject(sequence)) sum up to Ssq(subject), which means model (1) is over-specified. So I presume that R gives analogous results if the model is specified according to formula (2). I'm not sure (as already said, ≤ zero experience with R) but I think I found that the nesting operator in R is "|" (pipe). Thus do me the favor and try it in R. — Regards, Detlew 
ElMaestro ★★★ Denmark, 20080925 19:23 @ d_labes Posting: # 2412 Views: 46,673 

» Thus do me the favor and try it in R. Dear d_labes, just did that. Actually, I do not know of the "|" to specify nesting. I used "Subj %in% Seq" etc. but still Seq comes out null in a type III ANOVA. Feature, not bug. EM. 
yjlee168 ★★ Kaohsiung, Taiwan, 20080925 20:49 @ ElMaestro Posting: # 2413 Views: 46,608 

dear all, I don't know if I follow your discussion exactly or not. We use ':' to do nesting in R. For instance glmData <- lm(Cmax ~ seq + period + treatm + subject:seq, data=KK) Let me know if I don't follow your discussions well. Thank you all. » just did that. Actually, I do not know of the "|" to specify nesting. I used "Subj %in% Seq" etc. but still Seq comes out null in a type III ANOVA. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
Helmut ★★★ Vienna, Austria, 20080926 00:23 @ ElMaestro Posting: # 2416 Views: 46,910 

Dear ElMaestro! » Actually, I do not know of the "|" to specify nesting. » I used "Subj %in% Seq" I don't know either of these notations. See the documentation of lm under 'Details': Details Models for lm are specified symbolically. A typical model has the form response ~ terms where response is the (numeric) response vector and terms is a series of terms which specifies a linear predictor for response . A terms specification of the form first + second indicates all the terms in first together with all the terms in second with duplicates removed. A specification of the form first:second indicates the set of terms obtained by taking the interactions of all terms in first with all terms in second . The specification first*second indicates the cross of first and second . This is the same as first + second + first:second . — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 
ElMaestro ★★★ Denmark, 20080926 09:15 @ Helmut Posting: # 2420 Views: 46,494 

Hi, Help > Manuals (in PDF) > An Introduction to R (page 51 with v2.7.1):
y ~ A*B
y ~ A + B + A:B
y ~ B %in% A
y ~ A/B
Two-factor non-additive model of y on A and B. The first two specify the same crossed classification and the second two specify the same nested classification. In abstract terms all four specify the same model subspace. EM. 
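That the four forms span the same model subspace can be checked numerically: with toy data (invented here) the fitted values coincide even though the parameterisations differ:

```r
# two crossed factors, balanced toy data
set.seed(3)
A <- factor(rep(1:2, each = 6))
B <- factor(rep(1:3, times = 4))
y <- rnorm(12)

f1 <- fitted(lm(y ~ A * B))
f2 <- fitted(lm(y ~ A + B + A:B))
f3 <- fitted(lm(y ~ B %in% A))
f4 <- fitted(lm(y ~ A / B))

# all four reproduce the cell means, hence identical fitted values
all.equal(f1, f2)
all.equal(f1, f3)
all.equal(f1, f4)
```

The coefficients and their ANOVA tables differ between the four, which is exactly why drop1 behaves differently for the factor in which another factor is nested.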
Aceto81 ★ Belgium, 20080926 12:26 @ Helmut Posting: # 2422 Views: 47,117 

» » Actually, I do not know of the "|" to specify nesting. » » I used "Subj %in% Seq" I think the "|" is only for lme (but I can be wrong). It's also used for lattice plots, where it means "conditioned on" (see http://www.itc.nl/~rossiter/teach/R/RIntro_ov.pdf page 47) Ace 
yjlee168 ★★ Kaohsiung, Taiwan, 20081018 21:03 @ ElMaestro Posting: # 2557 Views: 46,306 

dear EM, Although bear v2.0.1 now calculates Type III SS with the ANOVA, the Type III SS are the same as the Type I SS when the BE study has a balanced, crossover design. Bear is currently only applicable to a balanced, crossover-designed BE study. So, can 'some regulators' agree with this, or are they willing to accept Type I SS in this situation? Thanks. » 4. Although Sums-of-Squares is pretty much a mine field, I think it would be wise to at least implement an option of getting type III SS. Some regulators seem unwilling to accept anything other than type III, while others couldn't care less (I have it on good account ). This means that the choice of type III is a safe choice for the time being. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
ElMaestro ★★★ Denmark, 20081021 13:09 @ yjlee168 Posting: # 2571 Views: 46,375 

» So, can 'some regulators' agree with this or are they willing to accept Type I SS in this situation? Hi, I am not entirely sure that I understand you. You should ask a regulator, by the way... You can do type III SS with "drop1", and it will work for unbalanced data, so I don't see a problem in any way. You can also do it by fitting models one by one and comparing residuals. In the end it will work out to exactly the same result. I see no reason why you would not do this in your package; please tell me why ("Bear is now only applicable to a balanced, crossover designed BE study"???). And remember, drop1 is good with both glm and lm. You can do type III's via a mixed model, too, using the approach I showed you earlier, and again it will give you valid results for balanced and unbalanced data. Whenever a study is balanced, regulators will / should accept everything, because type I = type III (= type other, too). 99.999% don't care anyway, because they haven't got the faintest clue about statistics, as far as I've heard. I hope this provided an answer to your question? Have a nice day. EM. 
yjlee168 ★★ Kaohsiung, Taiwan, 20081021 13:19 @ ElMaestro Posting: # 2572 Views: 46,215 

Dear EM, Since bear is currently designed to analyze only data obtained from a balanced, crossover BE study, that's why I asked whether it is necessary to display Type III SS alongside Type I SS for such a BE study at this moment. Currently, it seems not necessary to do that. Thanks for your message. » Whenever a study is balanced regulators will / should accept everything, because type I = type III (= type other, too). 99.999% don't care anyway, because they haven't got the faintest clue about statistics, as far as I've heard. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
d_labes ★★★ Berlin, Germany, 20081021 14:31 @ yjlee168 Posting: # 2573 Views: 46,115 

Dear Yungjin! » Since bear now is designed only applicable to analyze data obtained ^^^ » from a balanced, crossover BE study, ^^^ This would be a major flaw for bear because imbalanced BE studies (with respect to unequal numbers of volunteers in the sequences) are more the rule than the exception! This would mean that those studies cannot be analyzed with bear!? In view of all the goodies implemented until now, this would be a pity. — Regards, Detlew 
yjlee168 ★★ Kaohsiung, Taiwan, 20081021 19:01 @ d_labes Posting: # 2574 Views: 46,145 

dear DLabes, Indeed. We are currently implementing the lme function for bear. Once that is done, bear will be able to analyze data obtained from unbalanced/imbalanced BE studies. — All the best, Yungjin Lee bear v2.8.4: created by Hsinya Lee & Yungjin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here 
ElMaestro ★★★ Denmark, 20081024 11:26 @ yjlee168 Posting: # 2578 Views: 46,072 

Hi, I really don't understand this. You can use the exact same syntax to analyse balanced AND imbalanced data. So why not just do it? I see no point (justification) in (for) restricting the calculations or ambitions to balanced data only. Best regards EM. 
Helmut ★★★ Vienna, Austria, 20081024 11:58 @ ElMaestro Posting: # 2579 Views: 46,270 

Dear ElMaestro, Yungjin and Hsinya Lee! I agree with ElMaestro 100%. I will lock the thread with this post, because the depth of replies slowly blocks the entire main page of the forum. Please open a new thread for the actual version of bear (2.0.1 upwards). Thank you. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 