d_labes Hero Berlin, Germany, 20100422 09:43 Posting: # 5183 Views: 30,755 

Dear all, especially dear bears, since I'm very interested in using R for the evaluation of BE studies I had a closer look at the bear code for parallel group studies (code inspection as part of validation). If I got it right the code used is (f.i. AUC log-transformed):

lme(lnAUC0t ~ drug, random=~1|subj, data=TotalData, method="REML")

I am wondering where this code comes from, what this code does and why it works anyhow. IMHO this model, one (fixed) effect for the treatment and one (random) effect for the subjects, must be overspecified. We have only one value for a distinct subject treated with Test or Reference, and thus we are not able to separate this uniquely into 2 effects, one part for the treatment and one attributed to the subject. Nevertheless the lme() call produces a result. Try it out in bear. If I follow the strange and crude EMA suggestion and use all effects as fixed

lmModel <- lm(lnAUC0t ~ drug + subj, data=TotalData)

with the bear built-in dataset for parallel groups I get the anticipated result:
> summary(lmModel)

BTW: I would go for a parallel groups study with exactly 2 groups with the 'simple' t-test (Welch variant as described by Helmut long ago). Is anybody out there who knows a generalization of the Welch test to more than 2 groups? Any hint would be very appreciated! Or could we use pairwise Welch t-tests for that? — Regards, Detlew
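For readers following along in R, here is a minimal sketch of the overspecification (simulated stand-in data, not the bear dataset; all numbers are made up): with one observation per subject, adding `subj` to the model uses up every degree of freedom.

```r
# Hypothetical 10:10 parallel-group data, one observation per subject
set.seed(42)
dat <- data.frame(
  subj  = factor(1:20),
  drug  = factor(rep(1:2, each = 10)),
  lnAUC = rnorm(20, mean = 7.3, sd = 0.3)
)

# one (fixed) effect for treatment plus one (fixed) effect per subject:
fit <- lm(lnAUC ~ drug + subj, data = dat)

# no residual degrees of freedom remain ...
df.residual(fit)        # 0
# ... and the treatment effect is confounded with the subject effects
sum(is.na(coef(fit)))   # 1 coefficient not defined (singularity)
```

This reproduces the "ALL residuals are 0" and "1 not defined because of singularities" messages quoted later in the thread.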
ElMaestro Hero Denmark, 20100422 10:53 (edited by ElMaestro on 20100422 11:52) @ d_labes Posting: # 5184 Views: 28,612 

Ahoy d_labes!

» If I got it right the code used is (f.i. AUC log-transformed):
» lme(lnAUC0t ~ drug, random=~1|subj, data=TotalData, method="REML")
» I am wondering where this code comes from, what this code does and why it works anyhow.
» IMHO this model, one (fixed) effect for the treatment and one (random) effect for the subjects, must be overspecified.

With REML, the likelihood is optimised on the covariance matrix V (= ZGZ^{t} + R). With "random=~1|subj" I think we do the most simple specification of V, as we put in sigma on the diagonal, and zeros elsewhere. Thus, subjects introduce a random effect (think sigma on the diagonal); the nature of the treatment is not considered.

» We have only one value for a distinct subject treated with Test or Reference, and thus we are not able to separate this uniquely into 2 effects, one part for the treatment and one attributed to the subject.

In my opinion, and I could be wrong, with this specification of the model we are not trying to do exactly that through the proposed code. We are only saying that each subject is associated with a degree of variability which is described by a common sigma in V, regardless of the treatment given for any data point. Best regards from the seven seas, EM.

Correction: I mean sigma^{2} on the diagonal.

— I could be wrong, but… Best regards, ElMaestro — since June 2017 having an affair with the bootstrap.
d_labes Hero Berlin, Germany, 20100422 12:00 @ ElMaestro Posting: # 5186 Views: 28,582 

Ahoy ElMaestro!

» In my opinion, and I could be wrong, with this specification of the model we are not trying to do exactly that [...] (underlined by me)

There is a hint I meanwhile discovered: while the summary() function works on the lme model, intervals() does not:

> intervals(lmeModel, level=0.9)

Only the fixed-effects parameters can be obtained, astonishingly enough:

> intervals(lmeModel, which="fixed", level=0.9)

Seems to me as if there is something wrong with the random part. — Regards, Detlew
ElMaestro Hero Denmark, 20100422 19:47 @ d_labes Posting: # 5188 Views: 28,521 

Dear d_labes,

» Seems to me as if there is something wrong with the random part.

OK, then here's a potential theory: if we do lme(lnAUC0t ~ drug, random=~1|subj, data=TotalData, method="REML") on a dataset where we have parallel non-replicated groups (one observation per subject), then R might be trying to actually fit the error sigma^{2} on the diagonal of V (R matrix), and for ZGZ^{t} possibly ending up with a subject sigma^{2} on the diagonal. Since there is only one observation per subject, I guess this would amount to a V with error-sigma^{2} plus subject-sigma^{2} on the diagonal. Now error-sigma^{2} plus subject-sigma^{2} can be estimated, but the individual components cannot (which would explain the anomalies, at least). We would effectively only be working with a single sigma^{2}. In this case, I would be inclined to imagine that the error-sigma^{2} plus subject-sigma^{2} would equal the residual error from an ANOVA on lm(lnAUC0t ~ drug), or something like that. Best regards EM. — I could be wrong, but… Best regards, ElMaestro
d_labes Hero Berlin, Germany, 20100423 07:09 @ ElMaestro Posting: # 5192 Views: 28,521 

Dear Old saylor,

» In this case, I would be inclined to imagine that the error-sigma^{2} plus subject-sigma^{2} would equal the residual error from an ANOVA on lm(lnAUC0t ~ drug), or something like that.

Full ACK. To verify this (bear v2.4.1 parallel group dataset) compare the 'Mixed effects model' (random effects) output with the 'ElGrandeEM' model: note the totally identical results for the 'drug' effects. — Regards, Detlew
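A sketch of this check with simulated stand-in data (the bear dataset and its numbers are not reproduced here; variable and column names follow the thread): the sum of the two lme variance components should reproduce the residual variance of the plain lm fit.

```r
library(nlme)

set.seed(1)
TotalData <- data.frame(
  subj    = factor(1:20),
  drug    = factor(rep(1:2, each = 10)),
  lnAUC0t = rnorm(20, mean = 7.3, sd = 0.3)
)

# the 'overspecified' mixed model ...
lmeModel <- lme(lnAUC0t ~ drug, random = ~1 | subj,
                data = TotalData, method = "REML")
# ... and the plain fixed-effects model ('ElGrandeEM' model)
lmModel <- lm(lnAUC0t ~ drug, data = TotalData)

# between-subject and residual variance from the mixed model:
vc <- as.numeric(VarCorr(lmeModel)[, "Variance"])
# their sum equals the residual variance of the lm fit
c(sum.lme = sum(vc), lm.resid = summary(lmModel)$sigma^2)
```

Only the sum is identified; how lme splits it between the two components is arbitrary, which is exactly EM's point above.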
yjlee168 Senior Kaohsiung, Taiwan, 20100425 21:29 @ d_labes Posting: # 5218 Views: 28,498 

Dear d_labes,

» There is a hint I meanwhile discovered: while summary() works on the lme model, intervals() does not:
» > intervals(lmeModel, level=0.9)
» Error in intervals.lme(lmeModel) : ...

I got the following results for ln(Cmax) using the built-in dataset for a parallel study in bear v2.4.2 with the model lme(lnCmax ~ drug, random=~1|subj, data=TotalData, method="REML"):

Approximate 90% confidence intervals

— All the best, Yung-jin Lee bear v2.8.3: created by Hsin-ya Lee & Yung-jin Lee Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear Download link (updated) > here
yjlee168 Senior Kaohsiung, Taiwan, 20100422 21:09 @ d_labes Posting: # 5189 Views: 28,524 

Dear d_labes & ElMaestro,

» Call:
» lm(formula = lnAUC0t ~ drug + subj, data = TotalData)

Is this possibly caused by an unbalanced parallel dataset? Since bear v2.4.2 I have changed the built-in dataset in the demo of the parallel study to an unbalanced one (14 subjects in Test, 13 in Ref.), because an unbalanced parallel dataset could cause an error with the previous version when doing NCA. You can see the output file called Statistical_summaries.txt after running the demo dataset for more details. Therefore, I don't know if it is appropriate to use lm() to run an unbalanced dataset as you did here. The reason to do that was only for my convenience when coding bear and testing it. The unbalanced dataset is quite special for an unbalanced, parallel BE study in bear. I got the bug information from a Greek user.

» Residuals:
» ALL 20 residuals are 0: no residual degrees of freedom!
» (overspecified!)
»
» Coefficients: (1 not defined because of singularities)
»             Estimate Std. Error t value Pr(>|t|)
» (Intercept)   7.3199         NA      NA       NA
» drug2         0.0622         NA      NA       NA
» subj2         0.5366         NA      NA       NA
» [...]

— All the best, Yung-jin
d_labes Hero Berlin, Germany, 20100423 07:12 @ yjlee168 Posting: # 5193 Views: 28,510 

Dear Yung-jin,

» Is this possibly caused by an unbalanced parallel dataset?

I have used the parallel group dataset from bear version 2.4.1, 10 subjects in each treatment group.

» ... Therefore, I don't know if it is appropriate to use lm() to run an unbalanced dataset as you did here. (underlined by me)

IMHO definitely yes. BTW: My preferred lm() model would be EM's above. I have all my days compared 2 parallel groups (if only 2) with the t-test if only the treatment effect was under consideration. And this is a special case of a one-way ANOVA with treatment as factor. But if we consider different variabilities in the treatment groups it is another story! To ask it again: What is the idea behind your model, where does it come from? Do you have a reference for me? lm() or lme() is only a question of considering some effects as random or not. — Regards, Detlew
yjlee168 Senior Kaohsiung, Taiwan, 20100423 19:14 @ d_labes Posting: # 5207 Views: 28,456 

Dear d_labes, Thank you for your message.

» IMHO definitely yes.

O.k.

» To ask it again: What is the idea behind your model, where does it come from? Do you have a reference for me? lm() or lme() is only a question of considering some effects as random or not.

No reference for this. We simply tried to translate the code of "Power to know" into R. When we were developing bear for parallel BE studies, we tried to get exactly the same results as "Power to know" or WinNonlin. The "Power to know" code for a parallel BE study looks like
PROC GLM DATA=EXAMPLE;

So we used lme() (not lm()) to meet this requirement for validation purposes:

lmeCmax_ss <- lme(Cmax_ss ~ drug, random=~1|subj, data=Data, method="REML")

Just like analyzing a replicate crossover. I cannot find the results obtained from analyzing with WinNonlin using a linear mixed-effects model. As far as I can remember, we got pretty similar results as with WinNonlin at that moment. Then, when searching for the answer to your question in this thread, I found that we probably can simply use lm(), as we do in a non-replicate 2x2x2 crossover BE study, such as

lmCmax_ss <- lm(Cmax_ss ~ drug, data=TotalData)

Which model is more appropriate for analyzing a parallel BE study with R? — All the best, Yung-jin
Helmut Hero Vienna, Austria, 20100423 22:28 @ yjlee168 Posting: # 5208 Views: 28,648 

Dear bears!

» Just like analyzing a replicate crossover. I cannot find the results obtained from analyzing with WinNonlin using a linear mixed-effects model. As far as I can remember, we got pretty similar results as with WinNonlin at that moment.

D. Labes referred to one of my old posts: WinNonlin's method (assuming equal variances) is flawed, even in the current releases WinNonlin 5.3 and Phoenix/WinNonlin 6.1. The model is equivalent to the common one-way ANOVA (but, like everything in WinNonlin, based on maximum likelihood); only one fixed effect = treatment (no random subject term whatsoever). A random subject term does not make sense IMHO.

» Which model is more appropriate for analyzing a parallel BE study with R?

Agreeing with D. Labes in the top post I would also use a t-test for unequal variances. That's the default of t.test and in conformity with FDA's requirements. For simple code see the subsequent post in the quoted thread. — All the best, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes
yjlee168 Senior Kaohsiung, Taiwan, 20100424 17:36 @ Helmut Posting: # 5213 Views: 28,455 

Dear Helmut, d_labes and ElMaestro, Thank you all for your great input and suggestions in this thread. We have decided to rewrite the code of bear in the part for parallel BE studies.

» D. Labes referred to one of my old posts: WinNonlin's method (assuming equal variances) is flawed, even in the current releases.
»
» Agreeing with D. Labes in the top post I would also use a t-test for unequal variances. That's the default of t.test and in conformity with FDA's requirements. For simple code see the subsequent post in the quoted thread.

I see. Thanks again. — All the best, Yung-jin
yjlee168 Senior Kaohsiung, Taiwan, 20100425 22:09 @ Helmut Posting: # 5219 Views: 28,437 

Dear Helmut, BTW, may I ask if the analysis of a replicate BE study is acceptable with WNL v5.x or v6.x? WNL uses a mixed-effects model, too, to analyze a replicate BE study.

» D. Labes referred to one of my old posts: WinNonlin's method (assuming equal variances) is flawed, even in the current releases WinNonlin 5.3 and Phoenix/WinNonlin 6.1.

— All the best, Yung-jin
Helmut Hero Vienna, Austria, 20100425 23:29 @ yjlee168 Posting: # 5221 Views: 28,437 

Dear bears,

» BTW, may I ask if the analysis of a replicate BE study is acceptable with WNL v5.x or v6.x? WNL uses a mixed-effects model, too, to analyze a replicate BE study.

What do you mean by "acceptable"? — All the best, Helmut
yjlee168 Senior Kaohsiung, Taiwan, 20100426 06:59 @ Helmut Posting: # 5228 Views: 28,426 

Dear Helmut, One of bear's users found differences when he compared bear and WNL with his dataset obtained from a replicate BE study. I found that the mixed-effects model used in WNL v6.x was different from bear's. In WNL, the fixed terms included int + SEQUENCE + FORMULATION + PERIOD; and the random terms FORMULATION*SUBJECT + PERIOD*FORMULATION*SUBJECT. I don't know why WNL sets up its models like this. In bear, the fixed effects are SEQUENCE + PERIOD + FORMULATION; the random effect is only FORMULATION/SUBJECT. That's why I asked the question. I should have searched first, like this post of yours...

» What do you mean by "acceptable"?

— All the best, Yung-jin
Helmut Hero Vienna, Austria, 20100426 14:15 @ yjlee168 Posting: # 5236 Views: 28,408 

Dear bears, it's not that easy. To quote from the manual:

Replicated crossover designs
For replicated crossover designs, the default model used in the Bioequivalence Wizard is the example model given for replicated designs in Appendix E of the FDA guidance. The fixed effects model is:
Appendix E of FDA (2001) states: The following illustrates an example of program statements to run the average BE analysis using PROC MIXED in SAS version 6.12, with SEQ, SUBJ, PER, and TRT identifying sequence, subject, period, and treatment variables, respectively, and Y denoting the response measure (e.g., log(AUC), log(Cmax)) being analyzed:
The Estimate statement assumes that the code for the T formulation precedes the code for the R formulation in sort order (this would be the case, for example, if T were coded as 1 and R were coded as 2). If the R code precedes the T code in sort order, the coefficients in the Estimate statement would be changed to -1 1. In the Random statement, TYPE=FA0(2) could possibly be replaced by TYPE=CSH. This guidance recommends that TYPE=UN not be used, as it could result in an invalid (i.e., not non-negative definite) estimated covariance matrix.

We have already discovered differences between SAS and WinNonlin in the evaluation of replicate designs.

» I should have searched first, like this post of yours...

No, that's another pot of tea! I was referring to EMA's guideline asking for an ANOVA with fixed effects, which is not clever anyhow. — All the best, Helmut
yjlee168 Senior Kaohsiung, Taiwan, 20100425 17:34 @ yjlee168 Posting: # 5214 Views: 28,491 

Dear d_labes, When I revised bear today, I found that the following two models give the same results for the CIs (with the built-in dataset in bear v2.4.2):

» lmeCmax_ss <- lme(Cmax_ss ~ drug, random=~1|subj, data=Data, method="REML")

(dependent variable: lnCmax) and

» lmCmax_ss <- lm(Cmax_ss ~ drug, data=TotalData)

Just a quick response. I am studying Helmut's t-tests for further revision. — All the best, Yung-jin
ElMaestro Hero Denmark, 20100425 18:40 (edited by ElMaestro on 20100425 21:42) @ yjlee168 Posting: # 5215 Views: 28,558 

Hi Bears,

» When I revised bear today, I found that the following two models got the same results (...)

The results are similar for the fixed effect (here: formulation). The relationship between the lm's residual and the lme's variance components is described above. Now, just to add a little confusion: d_labes pointed out some confusion about the results from the lme, but in my opinion there is also funny stuff going on with the lm: a negative adjusted R squared. Another cosmic mindf#&@er from the realm of statistics. I love it. Best regards EM. PS: Yes, I am aware of the definitions, e.g. on Wikipedia.
Helmut Hero Vienna, Austria, 20100425 20:38 @ yjlee168 Posting: # 5216 Views: 28,559 

Dear bears,

» When I revised bear today, […] (with built-in dataset in bear v2.4.2) for CIs.

Are you trying to confuse me? The built-in dataset consists of 20 (10:10) observations. For comparisons can we please stick to this dataset, lnAUC0t only?

subj drug Cmax AUC0t AUC0INF lnCmax lnAUC0t lnAUC0INF

For non-bears, here is the dataset (balanced, 20 subjects) for download. With this code

# Change to folder containing data files in RGui!

I get

Welch Two Sample t-test

T/R [%] with alpha 0.05 (90% CI)
      Point Estimate CL90 lower CL90 upper
Ratio        101.607     82.858    124.598

As expected the CI is slightly wider than the ones obtained by the simple lm or the strange lme (83.312–123.920). Have a look at the plot.

Edit: Removed the stray 'yl' argument; the call should read boxplot(lnAUC0t ~ drug, col="lightgray"). THX to D. Labes for beta-testing. [Helmut]

— All the best, Helmut
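Since the code above is not fully reproduced, here is a self-contained sketch of the Welch approach on the log scale (the data are simulated stand-ins, not the bear dataset, so the numbers will differ from those quoted):

```r
set.seed(4222010)
# simulated stand-in for a 10:10 parallel-group dataset (log scale)
lnRef  <- rnorm(10, mean = 7.30, sd = 0.20)
lnTest <- rnorm(10, mean = 7.32, sd = 0.35)

# Welch t-test (var.equal = FALSE is t.test's default) at alpha = 0.05,
# i.e. a two-sided 90% CI
wt <- t.test(lnTest, lnRef, var.equal = FALSE, conf.level = 0.90)

# back-transform the difference of log means into T/R [%]
PE <- 100 * exp(wt$estimate[1] - wt$estimate[2])
CI <- 100 * exp(wt$conf.int)
round(c(PE = unname(PE), CL90.lower = CI[1], CL90.upper = CI[2]), 2)
```

Note the degrees of freedom reported by t.test() are the fractional Welch–Satterthwaite ones, not the pooled n1 + n2 − 2.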
yjlee168 Senior Kaohsiung, Taiwan, 20100425 20:44 @ Helmut Posting: # 5217 Views: 28,401 

Dear Helmut, In bear v2.4.2 I changed the dataset for the demo of the parallel study (see my previous post in this thread). The dataset should be

subj drug Cmax AUC0t AUC0INF lnCmax lnAUC0t lnAUC0INF

Sorry about this. — All the best, Yung-jin
Helmut Hero Vienna, Austria, 20100425 23:13 @ yjlee168 Posting: # 5220 Views: 28,358 

Dear bears, well, I'm also running v2.4.2 here. I got the 20-subjects dataset from the parallel-demo output 'lme_stat.txt'... — All the best, Helmut
yjlee168 Senior Kaohsiung, Taiwan, 20100426 06:16 (edited by yjlee168 on 20100426 06:45) @ Helmut Posting: # 5226 Views: 28,497 

Dear Helmut, I noticed that d_labes got the same dataset as yours (df = 18 in his post), but he used v2.4.1. I also downloaded the bear (v2.4.2) installation file (bear_2.4.2.tar.gz) from one of the CRAN ftp sites and unzipped it to view the dataset file (../bear/data/Paralleldata.r) for the parallel design. It is the new dataset with 27 subjects (test: 14, ref.: 13). So I don't know why your dataset was not updated simultaneously when you updated bear to v2.4.2. I'll try to find out what's going on. Thanks for your message.

» well, I'm also running v2.4.2 here. I got the 20-subjects dataset from the parallel-demo output 'lme_stat.txt'...

— All the best, Yung-jin
Helmut Hero Vienna, Austria, 20100426 11:12 @ yjlee168 Posting: # 5232 Views: 28,400 

Dear bears, OK, checked it again. Paralleldata.r contains 27 subjects, but the demo works with the old one of 20. Still no idea where bear 2.4.2 gets the data from if I run the parallel design demo. Strange. Only if I run the NCA demo first do I get the new dataset… — All the best, Helmut
yjlee168 Senior Kaohsiung, Taiwan, 20100426 16:43 @ Helmut Posting: # 5240 Views: 28,347 

Dear Helmut, I know what's going on. You're right about this. When you just do 'Statistical analysis', the parameter values are entered directly from one of the R-script files (demoBANOVA.r), not from a single data file. The data file, Paralleldata.r, is specifically used for the demo purpose from NCA > Statistical analysis. Nothing's wrong so far. Thank you for your feedback.

» works with the old one of 20. Still no idea where bear 2.4.2 gets the data from if I run the parallel design demo. Strange. Only if I run the NCA demo first, I get the new dataset…

— All the best, Yung-jin
yjlee168 Senior Kaohsiung, Taiwan, 20100426 06:41 @ Helmut Posting: # 5227 Views: 28,331 

Dear Helmut,

» T/R [%] with alpha 0.05 (90% CI)
»       Point Estimate CL90 lower CL90 upper
» Ratio        101.607     82.858    124.598
»
» As expected the CI is slightly wider than the ones obtained by the simple lm or the strange lme (83.312–123.920). Have a look at the plot.

Great. Thank you so much for the comparison. Interesting. — All the best, Yung-jin
d_labes Hero Berlin, Germany, 20100426 07:04 @ Helmut Posting: # 5229 Views: 28,330 

Dear Helmut,

» # Change to folder containing data files in RGui!
» ...
» boxplot(lnAUC0t ~ drug, col="lightgray", yl)

Ooops:

> Error in eval(expr, envir, enclos) : object 'yl' not found

» Have a look at the plot.

A very fine example of heteroscedasticity and eventually of non-normality. With a t-test assuming equal variances one gets:

      Point Estimate CL90 lower CL90 upper
Ratio         101.61      83.31     123.92

Homework: Compare this to the results of lm() or lme() in this thread. To repeat my question above: Any hint for more than 2 groups? Very seldom for BE studies, I know. But according to Murphy's law I have to deal with a 3-parallel-group design already tomorrow. — Regards, Detlew
yjlee168 Senior Kaohsiung, Taiwan, 20100426 07:22 @ d_labes Posting: # 5230 Views: 28,408 

Dear d_labes,

» To repeat my question above: Any hint for more than 2 groups?

Is the following "Power to know" code feasible (3-group parallel)? I have never tried it myself.
PROC GLM DATA=EXAMPLE;

» Very seldom for BE studies, I know. But according to Murphy's law I have to deal with a 3-parallel-group design already tomorrow.

— All the best, Yung-jin
d_labes Hero Berlin, Germany, 20100426 11:29 @ yjlee168 Posting: # 5233 Views: 28,371 

Dear Yung-jin,

» Is the following "Power to know" code feasible (3-group parallel)?

Yesss Sir. But only if we assume equal variances within the groups. I do not have the power to know any way to get the unequal-variance assumption into PROC GLM. And at least the mighty FDA does not want it this way from us. — Regards, Detlew
Helmut Hero Vienna, Austria, 20100426 12:45 @ d_labes Posting: # 5234 Views: 28,370 

Dear D. Labes and Yung-jin,

» Yesss Sir. But only if we assume equal variances within the groups.
» I do not have the power to know any way to get the unequal-variance assumption into PROC GLM.
»
» And at least the mighty FDA does not want it this way from us.

FDA's requirement is the justification for guideline-believers. I think it's not by chance that FDA calls for procedures for unequal variances where they are available (specifically for parallel groups and replicate designs). Bad luck for 2×2 crossovers (a common variance is part of the model). IMHO good statistical practice would call for the Welsh-test; it's also not by chance that var.equal = FALSE is the default in R's t.test. — All the best, Helmut
d_labes Hero Berlin, Germany, 20100426 13:58 @ Helmut Posting: # 5235 Views: 28,290 

Dear Helmut, I thought love was only true in fairy tales…

» FDA's requirement is the justification for guideline-believers.

Thanks for this 'nom de guerre'.

» IMHO good statistical practice would call for the Welsh-test

I would say more tentatively: some sort of ... Best regards, yours respectfully, Guideline-Believer
Helmut Hero Vienna, Austria, 20100426 14:31 @ d_labes Posting: # 5237 Views: 28,305 

Dear D. Labes, THX for bringing this song back to my memory (German: schlimmer Ohrwurm, a nasty earworm).

» » FDA's requirement is the justification for guideline-believers.
» Thanks for this 'nom de guerre'.

Well, of course I didn't mean you by that. Only a hint for people too lazy to go beyond guidelines.

» » IMHO good statistical practice would call for the Welsh-test
» I would say more tentatively: some sort of ...

OK, if you like: Welch, Satterthwaite, Aspin-Welch, Howe, Milliken-Johnson… A warning for M$ Excel users: TTEST(matrix1;matrix2;2;3) for unequal variances uses a crude method, i.e., integer degrees of freedom. Another reason to avoid this piece. — All the best, Helmut
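The 'crude method' refers to truncating the Welch–Satterthwaite degrees of freedom to an integer. For reference, a sketch of the fractional approximation that R's t.test reports (the helper function name is my own):

```r
# Welch-Satterthwaite approximation of the degrees of freedom;
# the result is fractional (Excel's TTEST truncates it to an integer)
welch.df <- function(s1, s2, n1, n2) {
  (s1^2 / n1 + s2^2 / n2)^2 /
    ((s1^2 / n1)^2 / (n1 - 1) + (s2^2 / n2)^2 / (n2 - 1))
}

# cross-check against the df reported by t.test() (Welch by default)
set.seed(1)
x <- rnorm(10, sd = 1)
y <- rnorm(13, sd = 2)
c(by.hand = welch.df(sd(x), sd(y), length(x), length(y)),
  t.test  = unname(t.test(x, y)$parameter))
```

The two values agree exactly; rounding the df down, as Excel does, makes the test slightly conservative in an uncontrolled way.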
Helmut Hero Vienna, Austria, 20100426 10:55 @ d_labes Posting: # 5231 Views: 28,383 

Dear D. Labes,

» » boxplot(lnAUC0t ~ drug, col="lightgray", yl)
» Ooops: > Error in eval(expr, envir, enclos) : object 'yl' not found

Shit! Of course, instead of

boxplot(lnAUC0t ~ drug, col="lightgray", yl)

use

boxplot(lnAUC0t ~ drug, col="lightgray")

» A very fine example of heteroscedasticity and eventually of non-normality.

Maybe. But we are not allowed to test that anyhow. For non-bears, here is the dataset (imbalanced, 27 subjects) for download.

» With a t-test assuming equal variances one gets:
»       Point Estimate CL90 lower CL90 upper
» Ratio         101.61      83.31     123.92
»
» Homework: Compare this to the results of lm() or lme() in this thread.

Yes, sure. What else? That's why we should avoid it.

» To repeat my question above: Any hint for more than 2 groups?
» Very seldom for BE studies, I know. But according to Murphy's law I have to deal with a 3-parallel-group design already tomorrow.

I went through the literature last week and did not come to a conclusion yet. Essentially there are two approaches:
Moser BK and GR Stevens

Depending on the study's target I would consider a Bonferroni adjustment (that's even the original application: independent tests). — All the best, Helmut
d_labes Hero Berlin, Germany, 20100426 14:36 @ Helmut Posting: # 5238 Views: 28,440 

Dear Helmut,

» ... I would go with Welch-tests, because ANOVA (or lm and lme) assume
» homoscedasticity.

What about:

require(nlme)  # HS: added, because not loaded by default

This gives (20-subjects bear data):

Generalized least squares fit by REML

But astonishing:

intervals(glsModel, level=0.9)

— Regards, Detlew
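The gls() call behind the quoted output is not shown above; presumably it was along these lines (a sketch with simulated stand-in data), fitting a separate residual variance per treatment group via varIdent():

```r
library(nlme)

set.seed(1)
TotalData <- data.frame(
  drug    = factor(rep(1:2, each = 10)),
  lnAUC0t = c(rnorm(10, mean = 7.30, sd = 0.2),   # Reference
              rnorm(10, mean = 7.32, sd = 0.4))   # Test
)

# generalized least squares with one residual variance per group
glsModel <- gls(lnAUC0t ~ drug, data = TotalData,
                weights = varIdent(form = ~1 | drug),
                method = "REML")

# 90% CI for the treatment effect on the log scale
intervals(glsModel, level = 0.90)$coef["drug2", ]
```

A possible explanation for the 'astonishing' match with the unweighted fit: with balanced groups the estimated standard error of the difference equals the pooled one, and intervals.gls() still uses the residual df (n − 2) rather than a Welch-type approximation, hence identical intervals.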
Helmut Hero Vienna, Austria, 20100426 15:00 @ d_labes Posting: # 5239 Views: 28,277 

Dear D. Labes,

» But astonishing:
»
» # back-transformed
» T vs. R  83.31  101.61  123.92

(same as without weights or lm()). Very strange... — All the best, Helmut
Helmut Hero Vienna, Austria, 20100426 23:36 @ d_labes Posting: # 5242 Views: 28,260 

Dear D. Labes, you have already outed yourself as a passionate wheel-reinventer, but why don't you simply go with the Welch-test (or whatsoever for unequal variances)? Since our Capt'n luuuvs simulations, I recycled some of his code to play with:

sims <- 10000

— All the best, Helmut
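The simulation code is not fully reproduced above; a minimal re-creation of the idea (all settings here are assumptions) compares the empirical coverage of the 90% CI from the pooled-variance t-test and the Welch test when the larger group has the smaller variance:

```r
# coverage of the 90% CI under heteroscedasticity with unbalanced
# groups: pooled-variance t-test vs Welch
sims <- 5000
set.seed(20100426)
nT <- 24; sdT <- 0.2     # larger group, smaller variance
nR <- 12; sdR <- 0.4     # smaller group, larger variance

covered <- matrix(FALSE, nrow = sims, ncol = 2,
                  dimnames = list(NULL, c("pooled", "Welch")))
for (i in seq_len(sims)) {
  lnT <- rnorm(nT, mean = 0, sd = sdT)   # true log(T/R) = 0
  lnR <- rnorm(nR, mean = 0, sd = sdR)
  for (j in 1:2) {
    ci <- t.test(lnT, lnR, var.equal = (j == 1),
                 conf.level = 0.90)$conf.int
    covered[i, j] <- ci[1] <= 0 && 0 <= ci[2]
  }
}
colMeans(covered)   # pooled clearly below 0.90, Welch close to 0.90
```

With balanced groups the two tests hardly differ; the imbalance is what makes the pooled test liberal here.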
d_labes Hero Berlin, Germany, 20100428 08:58 @ Helmut Posting: # 5253 Views: 28,328 

Dear Helmut,

– Sandwich –

» you have already outed yourself as a passionate wheel-reinventer, but why don't you simply go with the Welch-test (or whatsoever for unequal variances)?

To cite myself from near the beginning of this thread: "I would go for a parallel groups study with exactly 2 groups with the 'simple' t-test (Welch variant ...)". Simple was meant here with respect to the lme() model used in bear. But I'm seeking something that can be generalized easily to more than 2 groups. Pairwise tests I'm not so happy with. Eventually this (recently discovered) could be a way:

Herberich E, Sikorski J, Hothorn T (2010). "A Robust Procedure for Comparing Multiple Means under Heteroscedasticity in Unbalanced Designs." PLoS ONE 5(3): e9788. doi:10.1371/journal.pone.0009788 (online resource)

Easy to implement in R, only a couple of lines of code. But I was not a sandwich gourmet up to now.

– Simsalabim –

<nitpicking>
» ...
» lnPKT <- rnorm(n=nT, mean=MeanT, sd=CVT*MeanT)
» lnPKR <- rnorm(n=nR, mean=MeanR, sd=CVR*MeanR)
» ...

If you are really interested in dealing with log-transformed metrics, as lnPK suggests, I would suggest the following modification:

# short hand function

Of course this will not affect the comparison between the t-test with equal variances and the Welch t-test, I think.
</nitpicking>

— Regards, Detlew
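For the record, the procedure of Herberich et al. boils down to a few lines with the multcomp and sandwich packages (the data below are simulated stand-ins for a 3-group parallel design; group sizes and variances are made up):

```r
# all pairwise comparisons with a heteroscedasticity-consistent
# (sandwich) covariance estimate, as in Herberich et al. (2010)
library(multcomp)
library(sandwich)

set.seed(1)
dat <- data.frame(
  grp   = factor(rep(c("R", "T1", "T2"), times = c(12, 14, 13))),
  lnAUC = c(rnorm(12, 7.30, 0.2), rnorm(14, 7.35, 0.4),
            rnorm(13, 7.25, 0.3))
)

fit <- lm(lnAUC ~ grp, data = dat)
# Tukey-type all-pair contrasts, HC sandwich variance instead of
# the pooled residual variance:
ght <- glht(fit, linfct = mcp(grp = "Tukey"), vcov = vcovHC)
confint(ght, level = 0.90)
```

The simultaneous intervals take care of the multiplicity that pairwise Welch tests plus Bonferroni would handle more crudely.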
Helmut Hero Vienna, Austria, 20100428 12:19 @ d_labes Posting: # 5257 Views: 28,215 

Dear D. Labes!

» But I'm seeking something that can be generalized easily to more than 2 groups. Pairwise tests I'm not so happy with.

OK. Interesting reference, although the word "robust" in the title may trigger the non-parametric panic reflex in some regulators. Of course you are right about rlnorm(…); quick and dirty as always. — All the best, Helmut
martin Senior Austria, 20100502 16:22 @ d_labes Posting: # 5272 Views: 28,453 

dear d_labes! I just want to draw your attention to the parametrization of the R function rlnorm. When you want to generate log-normally distributed random variables with a given mean and CV on the observed scale, I suggest to use meanlog=log(mean)-0.5*log(1+cv^2) instead of meanlog=log(mean).

# example
set.seed(020510)
n <- 1E6
mue <- 5
cv <- 0.5

x <- rlnorm(n=n, meanlog=log(mue)-0.5*log(1+cv^2), sdlog=sqrt(log(1+cv^2)))
y <- rlnorm(n=n, meanlog=log(mue), sdlog=sqrt(log(1+cv^2)))

mean(x); sd(x)/mean(x)
mean(y); sd(y)/mean(y)

best regards martin
d_labes Hero Berlin, Germany, 20100503 14:22 @ martin Posting: # 5276 Views: 28,102 

dear martin!

» ... When you want to generate log-normally distributed random variables with a given mean and CV on the observed scale, I suggest to use meanlog=log(mean)-0.5*log(1+cv^2) instead of meanlog=log(mean) ...

Thanks for clarifying this. — Regards, Detlew
ElMaestro Hero Denmark, 20130726 19:42 (edited by ElMaestro on 20130726 20:00) @ martin Posting: # 11065 Views: 19,614 

Hi Martin and all, Sorry to embark on archaeology.

» When you want to generate log-normally distributed random variables with a given mean and CV on the observed scale, I suggest to use meanlog=log(mean)-0.5*log(1+cv^2) instead of meanlog=log(mean).
»
» # example
» set.seed(020510)
» n <- 1E6
» mue <- 5
» cv <- 0.5
»
» x <- rlnorm(n=n, meanlog=log(mue)-0.5*log(1+cv^2), sdlog=sqrt(log(1+cv^2)))
» y <- rlnorm(n=n, meanlog=log(mue), sdlog=sqrt(log(1+cv^2)))
»
» mean(x); sd(x)/mean(x)
» mean(y); sd(y)/mean(y)

Well.... Is that really something anyone wants to do? When we simulate BE, like in crossover trials, we most likely want to simulate with a known geometric mean ratio on the observed scale and a known CV on the observed scale. I could have misunderstood rlnorm entirely, but otherwise:

GeoMean <- function(a) exp(mean(log(a)))

Stuff like that really hurts. I would any day prefer to use rnorm to avoid the confusion:

logTRratios <- rnorm(n=1e6, mean=log(0.95), sd=sqrt(log(1+cv^2)))

Let me add that for those who simulate Test and Ref data individually (yes, there are still backward people who do that), the subtraction of something like 0.5*log(1+cv^2) will cancel out as long as a common variance term is used and balance is present, because log(T)-log(R)=log(T/R). — I could be wrong, but… Best regards, ElMaestro
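A quick numerical check of this point, using the same settings as Martin's example: with meanlog = log(mue) it is the geometric mean, not the arithmetic mean, that equals mue; Martin's correction shifts the target to the arithmetic mean.

```r
set.seed(020510)
n   <- 1E6
mue <- 5
cv  <- 0.5
sdl <- sqrt(log(1 + cv^2))

# Martin's corrected call: arithmetic mean on the observed scale = mue
x <- rlnorm(n = n, meanlog = log(mue) - 0.5 * log(1 + cv^2), sdlog = sdl)
# the 'naive' call: geometric mean = mue
y <- rlnorm(n = n, meanlog = log(mue), sdlog = sdl)

GeoMean <- function(a) exp(mean(log(a)))
c(arith.x = mean(x), geo.y = GeoMean(y))   # both close to 5
```

So both parametrizations are 'right'; they just fix different means, which is exactly the source of the confusion.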
Helmut Hero Vienna, Austria, 20130728 00:01 @ ElMaestro Posting: # 11081 Views: 19,319 

Hi ElMaestro & Martin,

» Sorry to embark on archaeology.

Great. Comes handy for some sim's I'm currently running!

meang <- function(z) { exp(mean(log(na.omit(z)))) }

(table of arithmetic means, CVs, and geometric means, each with its %RE, omitted)

Or alternatively:

boxplot(log(x), log(y), ylab="log(x, y)", …)

Sometimes R drives me nuts. Modified from help(rlnorm):

The log-normal distribution has density f(x) = 1/(√(2π)σx) e^{–(log x–μ)²/(2σ²)} where μ and σ are the mean and standard deviation of the logarithm. The mean is E(X) = e^{μ+σ²/2}, the median is med(X) = e^{μ}, the variance is Var(X) = e^{2μ+σ²}(e^{σ²}–1), and hence the coefficient of variation is √(e^{σ²}–1) […].

How I love the word "hence". — All the best, Helmut
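A numerical spot-check of the quoted moments (μ and σ chosen arbitrarily):

```r
# verify E(X) = exp(mu + sigma^2/2),
#        Var(X) = exp(2*mu + sigma^2) * (exp(sigma^2) - 1),
#        CV = sqrt(exp(sigma^2) - 1) by simulation
mu    <- 1.6
sigma <- 0.47

set.seed(1)
x <- rlnorm(1e6, meanlog = mu, sdlog = sigma)

rbind(simulated   = c(mean = mean(x),
                      var  = var(x),
                      cv   = sd(x) / mean(x)),
      theoretical = c(mean = exp(mu + sigma^2 / 2),
                      var  = exp(2 * mu + sigma^2) * (exp(sigma^2) - 1),
                      cv   = sqrt(exp(sigma^2) - 1)))
```

Note the CV on the observed scale depends on σ alone, which is why sdlog = sqrt(log(1 + cv^2)) is the same in both of Martin's rlnorm calls.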