ElMaestro Hero Denmark, 2010-02-06 17:10 (edited by ElMaestro on 2010-02-06 17:25) Posting: # 4724

Bong sure, referring to the other thread where the new BE guideline from the EMEA is discussed and where HS proposed a simulation. As usual I am several steps behind, as I am not sure I understand all details of the discussion. Nevertheless, the word anticonservative was right in my face. Can that claim be generalised in any way?

Following the philosophy of HS' single simulation experiment, I think we could simulate different 3-period, 3-treatment datasets under various conditions, analyse them with a mixed model, and record sigma = the residual error (this is the s used in the calculation of the confidence interval). Next, prune the dataset so that it is possible to analyse it as a 2,2,2-BE dataset with a normal linear model, then obtain the residual and compare. The philosophy here: if the pruning results in a sigma that is on average larger than the sigma obtained from the full dataset, then the approach is conservative; otherwise it is not.

Here's some elmaestrolophystic code (probably bugged and wrong):

library(nlme)

On my machine the code above is a win for anticonservatism. But that is dependent on the settings of the means and CVs in the treatment groups.

HS: I got the impression that you simulated your dataset on the ordinary scale and then log-transformed the data. At least, if I log-transform your data, I can reproduce your result (with the code above). Here I am simulating normally distributed data on the log scale. Seems more right to me.

Bugs galore....
EM.

Update a few mins after posting: seems escape characters do not go well on this forum?! The last two lines starting with "cat" were intended to finish with a letter n preceded by a downright slash ("\n") but I cannot paste that for some reason.

Edit: Yeah, that's a bug (see here). Corrected in the v.1.7.7beta scripts, hopefully available here soon. [HS]
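The code block of this post did not survive; the following is a minimal R sketch of the pruning experiment as described above (my reconstruction, not ElMaestro's original script; the means, CVs and n are arbitrary). One simulated 3-treatment, 3-period dataset on the log scale, with the residual standard error of the full analysis compared against the one after everything related to R2 is thrown away:

```r
# Sketch of the pruning idea (reconstruction; sample size and effects arbitrary)
set.seed(4724)
n    <- 12                                        # subjects (multiple of 3)
sub  <- factor(rep(1:n, each = 3))
per  <- factor(rep(1:3, n))
sq   <- rep(c("T.A.B", "A.B.T", "B.T.A"), n / 3)  # sequences; A = R1, B = R2
trt  <- factor(unlist(strsplit(sq, ".", fixed = TRUE)))
logy <- rep(rnorm(n, sd = 0.4), each = 3) +                       # subjects
        c(T = log(0.95), A = 0, B = log(1.05))[as.character(trt)] +
        rnorm(3 * n, sd = 0.2)                                    # residual
full <- lm(logy ~ sub + per + trt)                # sequence absorbed by sub
keep <- trt != "B"                                # prune everything with R2
reduced <- lm(logy[keep] ~ sub[keep] + droplevels(per[keep]) +
              droplevels(trt[keep]))
c(full = summary(full)$sigma, reduced = summary(reduced)$sigma)
```

Repeating this many times and checking whether the reduced sigma is on average larger (conservative) or smaller (anticonservative) than the full one is then a loop around this core.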
Helmut Hero Vienna, Austria, 2011-11-03 21:12 @ ElMaestro Posting: # 7598

Hi to all simulants! I modified your code according to Martin's suggestion. I tested three scenarios (10 000 sims each):
require(nlme)

Is this correct – or did I bungle the code?
T   R1   R2   Cons.   Anticons.

— Cheers, Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes
ElMaestro Hero Denmark, 2011-11-03 21:43 @ Helmut Posting: # 7599

Hi HS, two comments.

1: » ElMaestrosMixedModel <- lme(y1 ~ 0 + seq1 + per1 + trt1, random = ~1|sub1)

I am not sure why ElMaestro used a mixed model here; doesn't this specification just give the same as a linear model with seq1, per1, trt1, and sub1 as fixed factors?

2: I think we need to take degrees of freedom into consideration too. After all, the critical value of the t-distribution (and thus the width of the CI, which is the ultimate indicator) depends heavily on it. I think the df's differ between the two scenarios. I hope this does not lead into a discussion about how df's are calculated for a mixed model (R will not do Satterthwaite etc.), a discussion to which I have absolutely nothing to contribute.

I am sorry to question my own sanity.

— if (3) 4 Best regards, ElMaestro
"(...) targeted cancer therapies will benefit fewer than 2 percent of the cancer patients they're aimed at. That reality is often lost on consumers, who are being fed a steady diet of winning anecdotes about miracle cures." New York Times (ed.), June 9, 2018.
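Point 1 can be checked on a toy balanced dataset (my own construction; names and effect sizes are arbitrary): with balanced data, the REML residual standard error from lme() with a random subject intercept should coincide with the residual standard error from lm() with subjects as a fixed factor (this can differ for unbalanced data or when the between-subject variance estimate hits its boundary):

```r
# Toy check: random- vs. fixed-subject residual sigma on balanced data
library(nlme)
set.seed(1)
n   <- 12
sub <- factor(rep(1:n, each = 3))
per <- factor(rep(1:3, n))
# Latin-square sequences so treatment is not aliased with period
trt <- factor(unlist(strsplit(rep(c("abc", "bca", "cab"), n / 3), "")))
y   <- rep(rnorm(n, sd = 0.5), each = 3) +      # between-subject scatter
       rnorm(3 * n, sd = 0.2)                   # within-subject error
m.fix <- lm(y ~ sub + per + trt)                # subjects fixed
m.mix <- lme(y ~ per + trt, random = ~ 1 | sub) # subjects random
c(fixed = summary(m.fix)$sigma, mixed = m.mix$sigma)  # essentially equal
```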
Helmut Hero Vienna, Austria, 2011-11-04 01:56 @ ElMaestro Posting: # 7601

Salute!

» I am not sure why ElMaestro used a mixed model here; doesn't this specification just give the same as a linear model with seq1, per1, trt1, and sub1 as fixed factors?

In terms of the residual variance – yes. Good ol' days of subjects as random. Last week I met another statistical pro who was upset about the 'all fixed' story.

» I think we need to take degrees of freedom into consideration too. […] I think the df's differ between the two scenarios.

Right.

» I hope this does not lead into a discussion about how df's are calculated for a mixed model (R will not do Satterthwaite etc.),

No, no. Everything is balanced here: 20 dfs in the Williams' design and 10 after R2 is thrown out. Another try:

set.seed(12041958) # keep this line to compare results only

Sorry for the lengthy code; couldn't do better.

T   R1   R2   Cons. (CV_{intra})   Liberal (CV_{intra})

Hhm. Kannitverstan.

— Cheers, Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes
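The "20 and 10" can be verified by simple df bookkeeping, assuming n = 12 subjects (a guess consistent with the numbers above; the counts scale with n): residual df = observations minus estimated parameters.

```r
# df bookkeeping for n = 12 subjects (assumed n; counts scale with n)
n <- 12
df.full <- 3 * n - (1 + (n - 1) + (3 - 1) + (3 - 1))  # 3x3 Williams' design
df.red  <- 2 * n - (1 + (n - 1) + (2 - 1) + (2 - 1))  # after pruning R2
c(df.full, df.red)                                    # 20 and 10
# the penalty in the critical value (wider CI with fewer df):
c(qt(0.95, df.full), qt(0.95, df.red))
```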
ElMaestro Hero Denmark, 2011-11-04 08:41 @ Helmut Posting: # 7602

Good morning HS,

I might add, at initialisation:

FM.be <- 0
RM.be <- 0

at evaluation:

if ((FMlo >= 80) && (FMhi <= 125)) FM.be <- FM.be + 1
if ((RMlo >= 80) && (RMhi <= 125)) RM.be <- RM.be + 1

...and then compare FM.be and RM.be at exit. And now it's time for the second cup of coffee. Have a great day.
EM.
d_labes Hero Berlin, Germany, 2011-11-04 09:40 @ Helmut Posting: # 7603

Dear Helmut!

One fundamental question: if you simulate with rlnorm you get lognormally distributed values, like those for AUC and Cmax in BE studies, where a lognormal distribution is also assumed. The logs of the values simulated with rlnorm will be normally distributed. So far so good; nothing new told here. But now: how to analyse them?

Your code, f.i. the line

» FullModel <- lm(y1 ~ 0 + seq1 + sub1 %in% seq1 + per1 + trt1)

or what I would do:

FullModel <- lm(log(y1) ~ 0 + seq1 + sub1 %in% seq1 + per1 + trt1)

— Regards, Detlew
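A minimal toy illustration of the point (a two-group example for brevity, not the full crossover model; names and numbers are arbitrary): data simulated with rlnorm() are lognormal, so the model should be fitted to log(y), and exp() of the treatment coefficient then estimates the T/R ratio.

```r
# Toy example: rlnorm() data must be analysed on the log scale
set.seed(2)
trt <- factor(rep(c("R", "T"), each = 50))
y   <- rlnorm(100, meanlog = ifelse(trt == "T", log(95), log(100)),
              sdlog = 0.2)
fit <- lm(log(y) ~ trt)           # analysis on the log scale
exp(coef(fit)[["trtT"]])          # point estimate of T/R; true value 0.95
```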
Helmut Hero Vienna, Austria, 2011-11-04 16:05 @ d_labes Posting: # 7607

Dear simulants!

» or what I would do
» FullModel <- lm(log(y1) ~ 0 + seq1 + sub1 %in% seq1 + per1 + trt1)

Oh THX! Affects also the T/R. Latest version (flexible sample sizes, corrected T/R1 and T/R2):

set.seed(12041958) # keep this line to compare results only

Results:

T   R1   R2   Full BE (CV_{intra})   Red. BE (CV_{intra})

I have no idea how to simulate CV_{intra}; CV_{total} is not what I really want (to see whether the reference's different variances have an influence on the result).

— Cheers, Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes
ElMaestro Hero Denmark, 2011-11-04 16:22 (edited by ElMaestro on 2011-11-04 22:36) @ Helmut Posting: # 7610

Hi HS,

» I have no idea how to simulate CV_{intra}; CV_{total} is not what I really want (to see if reference's different variances have an influence on the result).

Neither had I when I started this thread, but now I have, and I don't think it is difficult. Our model (simplest case, but not satisfying d_labes' concern):

y ~ Seq + Subj + Treat + Per + error

There is one error in play unless we do a mixed model. So we sample the error (intra-subject) from a normal distribution and add it to the fixed effects. We can set the fixed effects to zero for Per and Seq, and tweak the Treat's as per our wishes for T/R ratios.

Edit: The following code possibly shows what I suggest (and, as always, I am probably wrong):
##..and analyse from here

I have not debugged it, so it is prolly full of errors.

— if (3) 4 Best regards, ElMaestro
"(...) targeted cancer therapies will benefit fewer than 2 percent of the cancer patients they're aimed at. That reality is often lost on consumers, who are being fed a steady diet of winning anecdotes about miracle cures." New York Times (ed.), June 9, 2018.
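Since the code block of this post was largely lost, here is one possible reading of the recipe above as a sketch (my reconstruction; the treatment effects and CV are arbitrary placeholders): zero Seq/Per/Subj fixed effects, treatment shifts set via the desired T/R ratios, one common intra-subject error on the log scale.

```r
# Sketch: single-error fixed-effects simulation on the log scale
set.seed(7)
n       <- 12
trt     <- rep(c("T", "R1", "R2"), n)
theta   <- c(T = log(0.95), R1 = 0, R2 = log(1.05)) # treatment fixed effects
CVintra <- 0.20
sdlog   <- sqrt(log(CVintra^2 + 1))      # intra-subject sd of the logs
logy    <- theta[trt] + rnorm(3 * n, sd = sdlog)    # Seq = Per = Subj = 0
round(exp(tapply(logy, trt, mean)), 3)   # geometric means, scattering
                                         # around 0.95 / 1 / 1.05
```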
d_labes Hero Berlin, Germany, 2011-11-07 11:16 @ ElMaestro Posting: # 7631

Dear Ol' Pirate, dear Helmut,

» » I have no idea how to simulate CV_{intra}; CV_{total} is not what I really want (to see if reference's different variances have an influence on the result).
»
» Neither had I when I started this thread, but now I have and I don't think it is difficult. Our model (simplest case but not satisfying d_labes' concern):
»
» y ~ Seq + Subj + Treat + Per + error
»
» There is one error in play unless we do a mixed model. So we sample the error (intra-subject) from a normal distribution and add it to the fixed effects. We can set the fixed effects to zero for Per and Seq, and tweak the Treat's as per our wishes for T/R ratios.

Totally right if you think in terms of the EMA mantra 'all effects fixed'. But then you have to specify the subject effects in some reasonable way. What you do is assume subject effects of zero, i.e. Dolly-clones of one subject, all having the same subject effect. I don't know how this affects your goal, and I also don't know what a reasonable way to specify fixed subject effects would be. At least you have many, many scenarios beside those you like to consider.

The way out would be to consider the subject effects as random, with a CV associated. Of course this is a mixed model; no one but EMA has doubts about it. Refer f.i. to Chow, Liu "Design and Analysis of Bioavailability ..." page 42 for the model of the classical 2×2 crossover. To generalise this to the 3×3 study one could simulate the logs of a PK metric via a multivariate normal distribution with variance-covariance matrix
( var_{T}+var_{S}   var_{S}            var_{S}          )
( var_{S}           var_{R1}+var_{S}   var_{S}          )
( var_{S}           var_{S}            var_{R2}+var_{S} )

where var_{T}, var_{R1} and var_{R2} are the intra-subject variances and var_{S} is the variability of the subject effect (between-subject variability). This model of course neglects any subject-by-treatment interaction. Together with a proper specification of the mean vector (µ_{T}, µ_{R1}, µ_{R2}) you get, from the multivariate normal distribution (R package mvtnorm), vectors of simulated logs for T, R1 and R2.

In this notation one also neglects period effects. If you like to deal with them, you have to write down the above variance-covariance matrix for the period values of each sequence, analogous to Chow, Liu for the 2×2 crossover. Another possibility would be simulation with a zero mean vector, adding the necessary fixed effects (treatment, period) afterwards.

Hope this gibberish sounds reasonable to you and I have not made a big mistake. Code follows if I have tested it.

— Regards, Detlew
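A base-R sketch of this simulation (mvtnorm::rmvnorm would do the same job; the CVs and means below are arbitrary, and a large n is used only so that the empirical covariance can be checked against the target):

```r
# Multivariate-normal simulation of the logs with the covariance above
CV2var <- function(CV) log(CV^2 + 1)
varT <- CV2var(0.20); varR1 <- CV2var(0.15); varR2 <- CV2var(0.25)
varS <- CV2var(0.40)                                 # between-subject
Sigma <- matrix(varS, 3, 3) + diag(c(varT, varR1, varR2))
mu    <- log(c(T = 95, R1 = 100, R2 = 100))
set.seed(5)
n <- 5000                                            # large n: check only
Z <- matrix(rnorm(n * 3), n, 3)
X <- Z %*% chol(Sigma) + matrix(mu, n, 3, byrow = TRUE)  # simulated logs
round(cov(X), 3)                                     # ~ Sigma
```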
ElMaestro Hero Denmark, 2011-11-08 11:02 (edited by ElMaestro on 2011-11-08 17:16) @ d_labes Posting: # 7634

All hands on deck!

I hope this illustrates the issue of no treatment differences when the fixed effects between subjects are zeroed or when some scatter in the fixed effects between subjects exists:

Per = c(1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3, 1,2,3)

Have a good day. As always it is prolly wrong and bugged.
EM

— if (3) 4 Best regards, ElMaestro
"(...) targeted cancer therapies will benefit fewer than 2 percent of the cancer patients they're aimed at. That reality is often lost on consumers, who are being fed a steady diet of winning anecdotes about miracle cures." New York Times (ed.), June 9, 2018.
d_labes Hero Berlin, Germany, 2011-11-25 13:54 @ Helmut Posting: # 7723

Dear Helmut!

Quite a time ago I had promised to come up with a simulation including subject variability and intra-subject correlation. For lack of spare time I could not consider this until now. Before posting my results I would like to compare them to your code. Inspecting it, I have some doubts:

» # massacrating out everything related to R2
» # means that when T was before R1 we now have a seq 1 otherwise a seq 2
» seq2 <- as.factor(rep(c(1,1, 2,2, 1,1, 1,1, 2,2, 2,2), size/6))
» sub2 <- as.factor(rep(1:size, each = 2))
» per2 <- as.factor(rep(1:2, size))
» trt2 <- as.factor(rep(c(1,2, 2,1, 1,2, 1,2, 2,1, 2,1), size/6))

Must the EMA guidance be interpreted in this way, i.e. analysing the dataset with 'pseudo' periods and sequences? Or should it be interpreted the way employed in the Q&A for the replicate design (intra-subject variability of the reference), i.e. only massacre out the data concerning R2 but retain the original periods and sequences?

» FullModel <- lm(log(y1) ~ 0 + seq1 + sub1 %in% seq1 + per1 + trt1)
» FMDelta <- mean(log(y1)[trt1==1]) - mean(log(y1)[trt1==2])
» FMdf <- aov(FullModel)$df.residual
» FMMSE <- summary(FullModel)$sigma^2/FMdf
» FMlo <- 100*exp(FMDelta - qt(1-0.05, FMdf)*sqrt(2*FMMSE/size))
» FMhi <- 100*exp(FMDelta + qt(1-0.05, FMdf)*sqrt(2*FMMSE/size))
» FMCI <- FMhi - FMlo
» FMCVintra <- c(FMCVintra, sqrt(exp(FMMSE)-1))

At least the code for FMMSE has stolen my sleep. According to the help page of summary.lm(), "sigma: the square root of the estimated variance of the random error", and IMHO the MSE is then sigma^2 without the division by the degrees of freedom! Check this out via

anova(FullModel)["Residuals","Mean Sq"] == summary(FullModel)$sigma^2

Or did I miss sumfink here?

— Regards, Detlew
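The check can be replayed on any toy fit (arbitrary data below, not the thread's simulation): summary(fit)$sigma^2 already is the residual mean square (SSE/df), so dividing it by the df again would be wrong.

```r
# Toy fit: summary()$sigma^2 equals the residual Mean Sq from anova()
set.seed(3)
g   <- factor(rep(1:4, each = 5))
y   <- rnorm(20)
fit <- lm(y ~ g)
mse.anova <- anova(fit)["Residuals", "Mean Sq"]
mse.sigma <- summary(fit)$sigma^2
isTRUE(all.equal(mse.anova, mse.sigma))    # TRUE
```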
Helmut Hero Vienna, Austria, 2011-11-25 14:23 @ d_labes Posting: # 7724

Dear Detlew!

» Inspecting it I have some doubts:
» […] some nonsense code
» Must the EMA guidance be interpreted in this way, i.e. analysing the dataset with 'pseudo' periods and sequences? Or should it be interpreted the way employed in the Q&A for the replicate design (intra-subject variability of the reference), i.e. only massacre out the data concerning R2 but retain the original periods and sequences?

Exactly! I should forget copy-pasting old code when new 'knowledge' has become available in the meantime.

» FMMSE <- summary(FullModel)$sigma^2/FMdf
» At least the code for FMMSE has stolen my sleep. According to the help page of summary.lm(), "sigma: the square root of the estimated variance of the random error", and IMHO the MSE is then sigma^2 without the division by the degrees of freedom!

Oh s**t!

» Check this out via anova(FullModel)["Residuals","Mean Sq"] == summary(FullModel)$sigma^2.
» Or did I miss sumfink here?

No, no. Sorry!

— Cheers, Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes
d_labes Hero Berlin, Germany, 2011-11-04 15:46 @ ElMaestro Posting: # 7606

Dear EM,

» library(nlme)
» nConservative = 0

Really? I would suggest: if you consider 95, 90 and 110 as the arithmetic means on the untransformed scale, use log(x) - sd^{2}/2 as the mean of the logs. If your numbers 0.15, 0.10 and 0.15 are the CVs, use sd = sqrt(log(CV*CV+1)) as the sd of the logs.

The manner in which you have written the simulation for the three treatments suggests you consider the log(metric) for the three treatments as independent. Is this really the case for a crossover where the 3 treatments were applied to each subject? IMHO we have to consider some correlation between them.

— Regards, Detlew
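The suggested conversion can be verified against a large rlnorm() sample; a sketch with one of the mean/CV pairs mentioned above: for a lognormal with arithmetic mean m and coefficient of variation CV, sdlog = sqrt(log(CV^2 + 1)) and meanlog = log(m) - sdlog^2/2.

```r
# Check: arithmetic mean/CV -> lognormal meanlog/sdlog
m <- 95; CV <- 0.15
sdlog   <- sqrt(log(CV^2 + 1))
meanlog <- log(m) - sdlog^2 / 2
set.seed(1)
y <- rlnorm(1e6, meanlog, sdlog)
c(mean(y), sd(y) / mean(y))      # ~ 95 and ~ 0.15
```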
ElMaestro Hero Denmark, 2011-11-04 16:06 @ d_labes Posting: # 7608

Hi d_labes,

» Really? I would suggest:
» If you consider 95, 90 and 110 as the arithmetic means on the untransformed scale, use log(x) - sd^{2}/2 as the mean of the logs.
» If your numbers 0.15, 0.10 and 0.15 are the CVs, use sd = sqrt(log(CV*CV+1)) as the sd of the logs.

I think Martin raised this before. I don't understand this aspect. Feel free to educate me, you great scientist!

» The manner in which you have written the simulation for the three treatments suggests you consider the log(metric) for the three treatments as independent.
» Is this really the case for a crossover where the 3 treatments were applied to each subject? IMHO we have to consider some correlation between them.

Probably a good point. But let's take the simple 2,2,2-BE scenario as an example. Treatments T and R are probably correlated within patients, but we still only work with a covariance matrix with a single sigma squared on the diagonal and zeros elsewhere. I don't know how to take any such correlation into account, but if you wish to do it, a way forward might be a mixed model.

As you can see above, the original code is something I wrote 20 months ago, and today I am questioning my sanity back then. Actually I am still bonkers, but I guess that's another matter.
EM.
Helmut Hero Vienna, Austria, 2011-11-04 16:09 @ d_labes Posting: # 7609

Dear Detlew!

» The manner in which you have written the simulation for the three treatments suggests you consider the log(metric) for the three treatments as independent.
» Is this really the case for a crossover where the 3 treatments were applied to each subject? IMHO we have to consider some correlation between them.

Exactly. Saw your post after submitting mine. See its end…

— Cheers, Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes
d_labes Hero Berlin, Germany, 2011-11-08 11:31 @ ElMaestro Posting: # 7635

Moin moin, SimulAnts!

Sitting here, being bored, wondering about the meaning of liberal (anticonservative) and conservative within the context discussed here. Sure, we don't do dirty politics here. I have found this online resource and this one on the topic.

I think this has a direct influence on the scenarios to simulate here: namely, we have to simulate ratios on the acceptance borders (H0: inequivalence) to get an estimate of the type I error alpha. Otherwise we simulate some sort of power at the set ratios (near 1), I think. And higher power may be an indication of a liberal test, but need not be: it can also be that the alpha is <5% (conservative) while the test has more power.

BTW: post 499.

— Regards, Detlew
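The distinction can be sketched with a paired-t caricature of the 2,2,2-BE analysis (my simplification; n and CV are arbitrary, and period/sequence effects are ignored): simulating with the true ratio sitting exactly on the acceptance border estimates the type I error, which lands near the nominal 0.05, whereas ratios near 1 would estimate power instead.

```r
# Empirical alpha at the acceptance border (paired-t caricature of 2x2x2 BE)
set.seed(11)
n <- 24; CV <- 0.25
sdlog <- sqrt(log(CV^2 + 1))
nsims <- 2000; pass <- 0
for (i in 1:nsims) {
  # within-subject log-differences, true ratio at the lower border 0.80
  d  <- rnorm(n, mean = log(0.80), sd = sqrt(2) * sdlog)
  se <- sd(d) / sqrt(n)
  lo <- exp(mean(d) - qt(0.95, n - 1) * se)
  hi <- exp(mean(d) + qt(0.95, n - 1) * se)
  if (lo >= 0.80 && hi <= 1.25) pass <- pass + 1   # 90% CI within 80-125%
}
pass / nsims          # empirical alpha, close to 0.05
```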
martin Senior Austria, 2011-11-08 20:50 @ d_labes Posting: # 7640

Dear d_labes!

You may find the following paper of interest, where the terms "liberal" and "conservative" are used in the evaluation of the simulation results:

Jaki T, Wolfsegger MJ, Lawo JP (2010). Establishing bioequivalence in complete and incomplete data designs using AUCs. Journal of Biopharmaceutical Statistics, 20(4):803–820. DOI: 10.1080/10543401003618835. A preprint of this article can be found here.

Hope this helps,
Martin
Helmut Hero Vienna, Austria, 2011-11-08 22:56 @ martin Posting: # 7641

Dear Martin!

THX for reminding me of this paper – I also remember your presentation at the Inst. for Biostat. quite well. I had overlooked this sentence in the discussion:

"The total sample size in the complete data design is smaller than for the serial sampling design, with the advantage disappearing with increasing intrasubject correlation with little effect of the number of measured time points to estimate the AUC." (my emphasis)

I looked into my database and evaluated the within-subject correlation in 130+ studies and close to 3,000 subjects. I considered only studies showing BE (I can hear you shout: "Selection bias!!"). Not surprisingly, the correlations were quite high. Forgot to bring the data to Berlin; I will post some results ASAP.

— Cheers, Helmut Schütz
The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes
martin Senior Austria, 2011-11-09 09:01 @ Helmut Posting: # 7642

Dear HS!

This is extremely interesting. I would be happy if you could provide some summary regarding the correlation between time points (e.g. was the correlation constant over time, ...).

Best regards,
Martin

PS: As far as I know, Thomas will give a presentation on flexible batch designs at the Inst. for Biostat. in the near future. I hope that you will find some time to participate, as your input is highly appreciated!! I suppose that there will also be a debriefing at the old AKH.
Helmut Hero Vienna, Austria, 2011-11-25 17:16 @ martin Posting: # 7725

Dear Martin!

» This is extremely interesting. I would be happy if you can provide some summary regarding the correlation between time points (e.g. was the correlation constant over time, …).

As I told you last Wednesday at the 'debriefing', I would need a month of spare time to retrieve them (~100,000 data pairs in various formats). I can only serve with an overview: 133 crossover studies, 2843 subjects, AUC_{t} or AUC_{τ}:

min   2.5%   med   97.5%   max

R² of my sample has an interesting distribution… Of course, correlation depends on the sample size* – but you get an impression. Studies with low correlations contained one 'outlier' each but were still large enough to show BE (well, that was my selection bias).

* Sponsors: n=24.
— Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. ☼ Science Quotes 