Bioequivalence and Bioavailability Forum

d_labes
Hero

Berlin, Germany,
2010-04-22 11:43

Posting: # 5183

 Parallel bears meeting at random in infinity [R for BE/BA]

Dear all, especially dear bears,

Since I'm very interested in using R for the evaluation of BE studies, I had a closer look at the bear code for parallel-group studies (code inspection as part of validation :-D).

If I got it right, the code used is (e.g., for log-transformed AUC):
lme(lnAUC0t ~ drug, random=~1|subj, data=TotalData, method="REML" )
I am wondering where this code comes from, what it does, and why it works at all :ponder:.

IMHO this model, with one (fixed) effect for the treatment and one (random) effect for the subjects, must be over-specified. We have only one value per subject, treated with either Test or Reference, and thus we are not able to separate it uniquely into 2 effects, one part for the treatment and one attributed to the subject.

Nevertheless the lme() call produces a result.
Try it out in bear.

If I follow the strange and crude EMA suggestion and use all effects as fixed
lmModel <- lm(lnAUC0t ~ drug + subj, data=TotalData)
with the bear built-in dataset for parallel groups I get the anticipated result:
>summary(lmModel)
Call:
lm(formula = lnAUC0t ~ drug + subj, data = TotalData)

Residuals:
ALL 20 residuals are 0: no residual degrees of freedom! (over-specified!)

Coefficients: (1 not defined because of singularities)
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   7.3199         NA      NA       NA
drug2        -0.0622         NA      NA       NA
subj2        -0.5366         NA      NA       NA
[...]
>anova(lmModel)
Analysis of Variance Table

Response: lnAUC0t
          Df  Sum Sq  Mean Sq F value Pr(>F)
drug       1 0.00127 0.001270               
subj      18 1.17959 0.065533               
Residuals  0 0.00000


BTW: For a parallel-group study with exactly 2 groups I would go for the 'simple' t-test (Welch variant, as described by Helmut long ago) :cool:.
Does anybody out there know a generalization of the Welch test to more than 2 groups? Any hint would be much appreciated!
Or could we use pairwise Welch t-tests for that?
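A hedged pointer for the generalization question: base R's stats package ships oneway.test(), which implements Welch's heteroscedastic one-way ANOVA for k groups (the data below are invented for illustration only):

```r
# Welch's generalization of the unequal-variances t-test to k groups is
# available in base R as oneway.test() with var.equal = FALSE (the default).
# Hypothetical log-AUC values for three parallel groups:
lnAUC <- c(7.32, 6.78, 7.41, 6.92, 7.35,   # group 1
           7.16, 7.24, 6.77, 7.10, 7.12,   # group 2
           7.21, 7.27, 7.17, 6.33, 7.06)   # group 3
group <- factor(rep(1:3, each = 5))

# Global test of equal group means without assuming equal variances:
res <- oneway.test(lnAUC ~ group, var.equal = FALSE)
print(res)
```

With var.equal = TRUE the same call reduces to the classical one-way ANOVA F-test.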

Regards,

Detlew
ElMaestro
Hero

Denmark,
2010-04-22 12:53
(edited by ElMaestro on 2010-04-22 13:52)

@ d_labes
Posting: # 5184

 Parallel bears meeting at random in infinity

Ahoy d_labes!

» If I got it right the code used is (f.i. AUC log-transformed):
» lme(lnAUC0t ~ drug, random=~1|subj, data=TotalData, method="REML" )
» I am wondering where this code comes from, what it does, and why it works at all :ponder:.
» IMHO this model, one (fixed) effect for the treatment and one (random) effect for the subjects must be over-specified.

With REML, the likelihood is optimised with respect to the covariance matrix V (= ZGZ' + R). With "random=~1|subj" I think we use the simplest possible specification of V, as we put sigma on the diagonal and zeros elsewhere. Thus, subjects introduce a random effect (think sigma on the diagonal); the nature of the treatment is not considered.

» We have only one value for a distinct subject treated with Test or Reference and thus we are not able to separate this uniquely into 2 effects, one part for treatment and one attributed to the subject.

In my opinion, and I could be wrong, with this specification of the model we are not trying to do exactly that through the proposed code. We are only saying that each subject is associated with a degree of variability which is described by a common sigma in V, regardless of the treatment given for any data point.

Best regards from the seven seas,
EM.

Correction: I mean sigma2 on the diagonal.
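As a toy sketch of this identifiability point (the variance values are made up, not from any study): with one observation per subject, V = ZGZ' + R collapses to a diagonal matrix that carries only the sum of the two variance components.

```r
# Build V = Z G Z' + R for a parallel design with one observation per
# subject (4 subjects). The variance values are made up.
n        <- 4
sigma2_s <- 0.05              # hypothetical between-subject variance
sigma2_e <- 0.01              # hypothetical residual variance

Z <- diag(n)                  # one random intercept per subject
G <- sigma2_s * diag(n)       # independent subject effects
R <- sigma2_e * diag(n)       # independent residuals

V <- Z %*% G %*% t(Z) + R     # = (sigma2_s + sigma2_e) * I

# Any other split of the same total gives an identical V, i.e. only the
# sum of the two components is identifiable from the data:
V2 <- Z %*% (0.02 * diag(n)) %*% t(Z) + 0.04 * diag(n)
isTRUE(all.equal(V, V2))
```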

I could be wrong, but…


Best regards,
ElMaestro


GCP lecturer: "And who do we consider the most important person in a clinical trial?"
Principal Investigator: "Me!".
d_labes
Hero

Berlin, Germany,
2010-04-22 14:00

@ ElMaestro
Posting: # 5186

 Parallel groups in bear - CIs

Ahoy ElMaestro!

» In my opinion, and I could be wrong, with this specification of the model we are not trying to do exactly that [...]
(underlined by me ;-))

There is a hint I have discovered in the meantime:
While the summary() function works on the lme model, intervals() does not:
> intervals(lmeModel, level=0.9)
Error in intervals.lme(lmeModel) :
  Cannot get confidence intervals on var-cov components: Non-positive definite approximate variance-covariance

Only the fixed-effects parameters can be obtained, astonishingly enough :confused::
> intervals(lmeModel, which="fixed", level=0.9)
Approximate 90% confidence intervals

 Fixed effects:
                 lower    est.     upper
(Intercept)  6.9615133 7.10189 7.2422667
drug2       -0.1825826 0.01594 0.2144626

Seems to me as if there is something wrong with the random part.

Regards,

Detlew
ElMaestro
Hero

Denmark,
2010-04-22 21:47

@ d_labes
Posting: # 5188

 Parallel groups in bear - CIs

Dear d_labes,

» Seems to me as there is something wrong with the random part.

OK, then here's a potential theory:
if we do lme(lnAUC0t ~ drug, random=~1|subj, data=TotalData, method="REML" )
on a dataset with parallel non-replicated groups (one observation per subject), then R might actually be trying to fit the error sigma2 on the diagonal of V (the R matrix), and for ZGZt possibly ending up with a subject sigma2 on the diagonal. Since there is only one observation per subject, I guess this would amount to a V with error-sigma2 plus subject-sigma2 on the diagonal.
Now error-sigma2 plus subject-sigma2 can be estimated, but the individual components cannot (which would explain the anomalies, at least). We would effectively only be working with a single sigma2.
In this case, I would be inclined to imagine that the error-sigma2 plus subject-sigma2 would equal the residual error from an ANOVA on lm(lnAUC0t~drug) or something like that.

Best regards
EM.

d_labes
Hero

Berlin, Germany,
2010-04-23 09:09

@ ElMaestro
Posting: # 5192

 Parallel groups in bear - CIs

Dear old sailor,

» In this case, I would be inclined to imagine that the error-sigma2 plus subject-sigma2 would equal the residual error from an anova on lm(lnAUC0t~drug) or something like that.

Full ACK.

To verify this (bear v 2.4.1 parallel group dataset):
'Mixed effects model'
Random effects:
 Formula: ~1 | subj
        (Intercept)  Residual
StdDev:   0.2396947 0.0898856   sqrt(0.2396947^2+ 0.0898856^2)=0.2559941

Fixed effects: lnAUC0t ~ drug
              Value  Std.Error DF  t-value p-value
(Intercept) 7.10189 0.08095245 18 87.72916  0.0000
drug2       0.01594 0.11448405 18  0.13923  0.8908


ElGrandeEM model:
Call:
lm(formula = lnAUC0t ~ drug, data = TotalData)

...

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept)  7.10189    0.08095  87.729   <2e-16 ***
drug2        0.01594    0.11448   0.139     0.89   

Residual standard error: 0.256 on 18 degrees of freedom
Multiple R-squared: 0.001076,   Adjusted R-squared: -0.05442
F-statistic: 0.01939 on 1 and 18 DF,  p-value: 0.8908


Note the totally identical results for the 'drug' effects.

Regards,

Detlew
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-25 23:29

@ d_labes
Posting: # 5218

 Parallel groups in bear - CIs

Dear d_labes,

» There is a hint I meanwhile discovered:
» Also the summary() function works on the lme-model intervals() does not:
» > intervals(lmeModel, level=0.9)
» Error in intervals.lme(lmeModel) :
»   Cannot get confidence intervals on var-cov components: Non-positive
» definite approximate variance-covariance

I got the following results for ln(Cmax) using the built-in dataset for a parallel study in bear v2.4.2 with the model lme(lnCmax ~ drug, random=~1|subj, data=TotalData, method="REML").
Approximate 90% confidence intervals

 Fixed effects:
                  lower       est.     upper
(Intercept)  7.28570851 7.35203371 7.4183589
drug2       -0.08362165 0.00848616 0.1005940
attr(,"label")
[1] "Fixed effects:"

 Random Effects:
  Level: subj
                       lower      est.        upper
sd((Intercept)) 1.185598e-13 0.1310857 144934894939

 Within-group standard error:
       lower         est.        upper
1.124716e-87 4.915712e-02 2.148474e+84

All the best,
---Yung-jin Lee
bear v2.8.3 - created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
Download link (updated) -> here
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-22 23:09

@ d_labes
Posting: # 5189

 Parallel bears meeting at random in infinity

Dear d_labes & ElMaestro,

» Call:
» lm(formula = lnAUC0t ~ drug + subj, data = TotalData)

Could this possibly be caused by an unbalanced parallel dataset? Since bear v2.4.2 I have changed the built-in demo dataset for the parallel study to an unbalanced one (14 subjects in Test, 13 in Ref.), because previous versions could throw an error during NCA when a parallel study was unbalanced. You can see the output file called Statistical_summaries.txt after running the demo dataset for more details. Therefore, I don't know whether it is appropriate to use lm() on an unbalanced dataset as you did here. The only reason for the change was my convenience when coding and testing bear. The unbalanced dataset is quite special for an unbalanced, parallel BE study in bear. I got the bug report from a Greek user.

» Residuals:
» ALL 20 residuals are 0: no residual degrees of freedom!
» (over-specified!)
»
» Coefficients: (1 not defined because of singularities)
» Estimate Std. Error t value Pr(>|t|)
» (Intercept) 7.3199 NA NA NA
» drug2 -0.0622 NA NA NA
» subj2 -0.5366 NA NA NA
» [...]

All the best,
---Yung-jin Lee
d_labes
Hero

Berlin, Germany,
2010-04-23 09:12

@ yjlee168
Posting: # 5193

 Modelling Parallel bears

Dear Yung-jin,

» Is this possible caused by unbalanced parallel data set?

:no: I have used the parallel-group dataset from bear version 2.4.1, with 10 subjects in each treatment group.

» ... Therefore, I don't know if it is appropriate to use lm() to run unbalanced dataset as you did here.
(underlined by me)

IMHO definitely yes.

BTW: My preferred lm() model would be EM's above :-D. I have always compared 2 parallel groups (if there are only 2) with the t-test if only the treatment effect was under consideration. And this is a special case of a one-way ANOVA with treatment as the factor.
But if we consider different variabilities in the treatment groups, it is another story!

To ask it again: What is the idea behind your model, where does it come from? Do you have a reference for me? lm() vs. lme() is only a question of considering some effects as random or not.

Regards,

Detlew
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-23 21:14

@ d_labes
Posting: # 5207

 Modelling Parallel bears

Dear d_labes,

Thank you for your message.

» IMHO definitely yes.

O.k.

» To ask it again: What is the idea behind your model, where does it come from? Do you have a reference for me? lm() or lme() is only a question of considering some effects as random or not.

No reference for this. We simply tried to translate the 'Power to know' code into R. When we were developing bear for parallel BE studies, we tried to get exactly the same results as 'Power to know' or WinNonlin. The 'Power to know' code for a parallel BE study looks like
PROC GLM DATA=EXAMPLE;
CLASS SUBJ TRT;
MODEL LAUCT LAUCI LCMAX=TRT;
ESTIMATE 'A vs. B' TRT 1 -1;
LSMEANS TRT;
RUN;

So we used lme() (not lm()) to meet this requirement for validation purposes.
lmeCmax_ss<-lme(Cmax_ss ~ drug, random=~1|subj, data=Data, method="REML" )
Just like analyzing a replicate crossover. I cannot find the results obtained from the analysis with WinNonlin using a linear mixed-effects model; as far as I remember, we got pretty similar results to WinNonlin at that time. Then, while searching for the answer to your question in this thread, I found that we can probably simply use lm(), as we do for the non-replicated 2x2x2 crossover BE study, such as
lmCmax_ss<- lm(Cmax_ss ~ drug , data=TotalData)
Which model is more appropriate for analyzing a parallel BE study with R?

All the best,
---Yung-jin Lee
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-24 00:28

@ yjlee168
Posting: # 5208

 Validating vs. WinNonlin...

Dear bears!

» Just like analyzing a replicate crossover. I cannot find the results obtained from the analysis with WinNonlin using a linear mixed-effects model; as far as I remember, we got pretty similar results to WinNonlin at that time.

D. Labes referred to one of my old posts: WinNonlin's method (assuming equal variances) is flawed, even in the current releases WinNonlin 5.3 and Phoenix/WinNonlin 6.1. The model is equivalent to the common one-way ANOVA (but, like everything in WinNonlin, based on maximum likelihood); only one fixed effect = treatment (no random subject term whatsoever). A random subject term does not make sense IMHO.

» Which model is more appropriate for analyzing parallel BE study with R?

Agreeing with D. Labes in the top post I would also use a t-test for unequal variances. That's the default of t.test and in conformity with FDA's requirements. For simple code see the subsequent post in the quoted thread.

All the best,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-24 19:36

@ Helmut
Posting: # 5213

 Validating vs. WinNonlin...

Dear Helmut, d_labes and ElMaestro,

Thank you all for your great input and suggestions in this thread. We have decided to rewrite the part of bear's code dealing with parallel BE studies.

» D. Labes referred to one of my old posts: WinNonlin's method (assuming equal variances) is flawed - even in the current releases
»
» Agreeing with D. Labes in the top post I would also use a t-test for unequal variances. That's the default of t.test and in conformity with FDA's requirements. For simple code see the subsequent post in the quoted thread.

I see. Thanks again.

All the best,
---Yung-jin Lee
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-26 00:09

@ Helmut
Posting: # 5219

 Validating vs. WinNonlin...

Dear Helmut,

BTW, may I ask whether the analysis of a replicate BE study with WNL v5.x or v6.x is acceptable? WNL also uses a mixed-effects model to analyze a replicate BE study.

» D. Labes referred to one of my old posts: WinNonlin's method (assuming equal variances) is flawed - even in the current releases WinNonlin 5.3 and Phoenix/WinNonlin 6.1.

All the best,
---Yung-jin Lee
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-26 01:29

@ yjlee168
Posting: # 5221

 Validating vs. WinNonlin...

Dear bears,

» BTW, may I ask if the analysis of a replicate BE study is acceptable with WNL v5.x or v6.x? WNL uses a mixed effect model, too, to analyze a replicate BE.

What do you mean by "acceptable"?

yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-26 08:59

@ Helmut
Posting: # 5228

 WNL in replicate BE

Dear Helmut,

One of bear's users found differences when he compared bear and WNL with his dataset from a replicate BE study. I found that the mixed-effects model used in WNL v6.x differs from the one in bear. In WNL, the fixed terms are int + SEQUENCE + FORMULATION + PERIOD, and the random terms FORMULATION*SUBJECT + PERIOD*FORMULATION*SUBJECT. I don't know why WNL specifies its model like this. In bear, the fixed effects are SEQUENCE + PERIOD + FORMULATION; the only random effect is FORMULATION/SUBJECT. That's why I asked the question. I should have searched first, like this post of yours... :-(

» What do you mean by "acceptable"?

All the best,
---Yung-jin Lee
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-26 16:15

@ yjlee168
Posting: # 5236

 WNL in replicate BE

Dear bears,

it's not that easy. To quote from the manual:


Replicated crossover designs

For replicated crossover designs, the default model used in the Bioequivalence Wizard is the example model given for replicated designs in Appendix E of the FDA guidance:

Fixed effects model is:
  • DependentVariable = Intercept + Sequence + Formulation + Period
Random effects model is:
  • Treatment
  • Variance Blocking Variable set to Subject
  • Type set to Banded No-Diagonal Factor Analytic with 2 factors
Repeated specification is:
  • Period
  • Variance Blocking Variables set to Subject
  • Group set to Treatment
  • Type set to Variance Components
Rather than using the implicit repeated model as in Appendix E, the time variable is made explicit as Period.


Appendix E to FDA (2001) states:

The following illustrates an example of program statements to run the average BE analysis using PROC MIXED in SAS version 6.12, with SEQ, SUBJ, PER, and TRT identifying sequence, subject, period, and treatment variables, respectively, and Y denoting the response measure (e.g., log(AUC), log(Cmax)) being analyzed:

PROC MIXED;
CLASSES SEQ SUBJ PER TRT;
MODEL Y = SEQ PER TRT/ DDFM=SATTERTH;
RANDOM TRT/TYPE=FA0(2) SUB=SUBJ G;
REPEATED/GRP=TRT SUB=SUBJ;
ESTIMATE 'T vs. R' TRT 1 -1/CL ALPHA=0.1;


The Estimate statement assumes that the code for the T formulation precedes the code for the R formulation in sort order (this would be the case, for example, if T were coded as 1 and R were coded as 2). If the R code precedes the T code in sort order, the coefficients in the Estimate statement would be changed to -1 1.

In the Random statement, TYPE=FA0(2) could possibly be replaced by TYPE=CSH. This guidance recommends that TYPE=UN not be used, as it could result in an invalid (i.e., not nonnegative definite) estimated covariance matrix.


We have already discovered differences between SAS and WinNonlin in the evaluation of replicate designs.

» I should have searched first, like this post of yours... :-(

No, that's another pot of tea! I was referring to EMA's guideline asking for ANOVA with fixed effects - which is not clever anyhow.

yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-25 19:34

@ yjlee168
Posting: # 5214

 Modelling Parallel bears

Dear d_labes,

When I revised bear today, I found that the following two models give the same results for the CIs (with the built-in dataset in bear v2.4.2).
» lmeCmax_ss<-lme(Cmax_ss ~ drug, random=~1|subj, data=Data,
» method="REML" )
Dependent Variable: lnCmax                                             

Linear mixed-effects model fit by REML
 Data: TotalData
        AIC       BIC   logLik
  -14.15488 -9.279372 11.07744

Random effects:
 Formula: ~1 | subj
        (Intercept)   Residual
StdDev:   0.1310857 0.04915712

Fixed effects: lnCmax ~ drug
               Value  Std.Error DF   t-value p-value
(Intercept) 7.352034 0.03882889 25 189.34445  0.0000
drug2       0.008486 0.05392284 25   0.15738  0.8762
 Correlation:
      (Intr)
drug2 -0.72

Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max
-0.83569001 -0.14504896  0.05106144  0.21037957  0.69279954

Number of Observations: 27
Number of Groups: 27

and
» lmCmax_ss<- lm(Cmax_ss ~ drug , data=TotalData)
Call:
lm(formula = lnCmax ~ drug, data = TotalData)

Residuals:
     Min       1Q   Median       3Q      Max
-0.33321 -0.05783  0.02036  0.08388  0.27623

Coefficients:
            Estimate Std. Error t value Pr(>|t|)   
(Intercept) 7.352034   0.038829 189.344   <2e-16 ***
drug2       0.008486   0.053923   0.157    0.876   
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.14 on 25 degrees of freedom
Multiple R-squared: 0.0009897,  Adjusted R-squared: -0.03897
F-statistic: 0.02477 on 1 and 25 DF,  p-value: 0.8762

Just a quick response. I am studying Helmut's t tests for further revision.

All the best,
---Yung-jin Lee
ElMaestro
Hero

Denmark,
2010-04-25 20:40
(edited by ElMaestro on 2010-04-25 23:42)

@ yjlee168
Posting: # 5215

 Modelling Parallel bears

Hi Bears,

» When I revised bear today, I found that the following two models got the same results (...)

The results are similar for the fixed effect (here: formulation). The relationship between the lm's residual and the lme's variance components is described above.

Now, just to add a little confusion:
D_labes pointed out some confusion about the results from the lme, but in my opinion there is also funny stuff going on with the lm: a negative adjusted R². Another cosmic mindf#&@er from the realm of statistics. I love it.
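A minimal made-up example of how the adjusted R² goes negative: since adjusted R² = 1 - (1 - R²)(n - 1)/(n - p - 1), any fit whose R² is essentially zero is pushed below zero by the degrees of freedom spent on the model.

```r
# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1). Made-up data:
# two groups of 5 with identical means, so the model explains nothing
# (R^2 = 0) and the adjustment pushes the value negative.
y <- c(1, 2, 3, 4, 5,
       5, 4, 3, 2, 1)
g <- factor(rep(1:2, each = 5))

fit <- summary(lm(y ~ g))
fit$r.squared       # 0 (both group means are 3)
fit$adj.r.squared   # 1 - (1 - 0) * 9 / 8 = -0.125
```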

Best regards
EM.

PS: Yes, I am aware of the definitions, e.g. Wikipedia.
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-25 22:38

@ yjlee168
Posting: # 5216

 Dataset

Dear bears,

» When I revised bear today, […] (with built-in dataset in bear v2.4.2) for CIs.

Are you trying to confuse me? The built-in dataset consists of 20 (10:10) observations. For comparisons, can we please stick to this dataset, lnAUC0t only?
   subj drug Cmax AUC0t AUC0INF lnCmax lnAUC0t lnAUC0INF
1     1    1 88.6  1510    1530 4.4841  7.3199    7.3330
2     2    1 52.5   883     890 3.9608  6.7833    6.7912
3     3    1 92.0  1650    1670 4.5218  7.4085    7.2226
4     4    1 56.0  1015    1050 4.0254  6.9226    6.9565
5     5    1 84.0  1556    1570 4.4308  7.3499    7.3588
6     6    1 84.8  1412    1432 4.4403  7.2528    7.2668
7     7    1 83.0  1353    1356 4.4188  7.2101    7.2123
8     8    1 96.4  1443    1450 4.5685  7.2745    7.2793
9     9    1 68.1  1299    1305 4.2210  7.1694    7.1740
10   10    1 33.5   560     570 3.5115  6.3279    6.3456
11   11    2 70.3  1284    1290 4.2528  7.1577    7.1624
12   12    2 73.5  1391    1340 4.2973  7.2378    7.2004
13   13    2 50.2   873     890 3.9160  6.7719    6.7920
14   14    2 62.2  1211    1230 4.1304  7.0992    7.1148
15   15    2 74.1  1233    1255 4.3054  7.1172    7.1349
16   16    2 60.4  1172    1182 4.1010  7.0665    7.0750
17   17    2 60.4  1172    1185 4.1010  7.0665    7.0775
18   18    2 75.3  1336    1355 4.3215  7.1974    7.2116
19   19    2 76.8  1348    1355 4.3412  7.2064    7.2116
20   20    2 82.9  1419    1425 4.4176  7.2577    7.2619


For non-bears, here is the dataset (balanced, 20 subjects) for download.

With this code

# Change to the folder containing the data files in RGui!
TotalData <- read.csv("PAR-BEAR.txt", header=T, sep="\t", dec=".")
attach(TotalData)
subj <- as.factor(subj)
drug <- as.factor(drug)
boxplot(lnAUC0t ~ drug, col="lightgray", yl)
points(jitter(as.numeric(drug), factor=0.25), lnAUC0t, col="blue")
result <- t.test(lnAUC0t ~ drug, var.equal = FALSE, conf.level = 0.90)
result
cat("T/R [%] with alpha 0.05 (90% CI)", sep=" ", fill=TRUE)
tbldiff <- matrix(
  c(as.numeric(exp(diff(result$estimate))),
  sort(as.numeric(exp(-result$conf.int)))),
  byrow = TRUE, nrow = 1)
dimnames(tbldiff) <- list("Ratio",
                     c("  Point Estimate",
                       "  CL90 lower",
                       "  CL90 upper" ))
round(tbldiff*100,3)


I get

        Welch Two Sample t-test

data:  lnAUC0t by drug
t = -0.1392, df = 12.043, p-value = 0.8916
alternative hypothesis: true difference in means is not equal to 0
90 percent confidence interval:
 -0.2199223  0.1880423
sample estimates:
mean in group 1 mean in group 2
        7.10189         7.11783

T/R [%] with alpha 0.05 (90% CI)
        Point Estimate   CL90 lower   CL90 upper
Ratio          101.607       82.858      124.598


As expected, the CI is slightly wider than the ones obtained by the simple lm or the strange lme (83.312-123.920). Have a look at the plot. ;-)


Edit: Removed the erroneous ", yl" from the code. The line should read
boxplot(lnAUC0t ~ drug, col="lightgray")
THX to D. Labes for beta-testing. [Helmut]

yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-25 22:44

@ Helmut
Posting: # 5217

 Dataset

Dear Helmut,

In bear v2.4.2, I changed the demo dataset for the parallel study (see my previous post in this thread). The dataset should be
   subj drug Cmax    AUC0t   AUC0INF   lnCmax  lnAUC0t lnAUC0INF
1     2    1 1481 12516.14 13184.818 7.300473 9.434774  9.486821
2     4    1 1374 11063.10 11668.332 7.225481 9.311371  9.364634
3     6    1 1756 15376.35 15964.184 7.470794 9.640586  9.678103
4     8    1 1939 13421.72 14000.997 7.569928 9.504630  9.546884
5    10    1 1388 13309.95 13984.567 7.235619 9.496267  9.545710
6    12    1 1542 15015.32 15757.848 7.340836 9.616827  9.665094
7    14    1 1598 14977.16 15915.711 7.376508 9.614282  9.675062
8    16    1 1837 15298.65 16209.441 7.515889 9.635520  9.693349
9    18    1 1629 13981.65 14650.054 7.395722 9.545501  9.592199
10   20    1 1522 13837.73 14342.880 7.327781 9.535154  9.571009
11   22    1 1615 14346.65 14686.419 7.387090 9.571272  9.594678
12   24    1 1483 11711.38 12544.205 7.301822 9.368316  9.437014
13   26    1 1247 10608.50 11085.836 7.128496 9.269411  9.313424
14    1    2 1739 14445.27 14925.656 7.461066 9.578123  9.610837
15    3    2 1780 15371.40 16031.970 7.484369 9.640264  9.682340
16    5    2 1555 13971.45 14556.963 7.349231 9.544771  9.585825
17    7    2 1566 13441.55 14068.450 7.356280 9.506106  9.551690
18    9    2 1475 12410.03 12916.097 7.296413 9.426260  9.466230
19   11    2 1127  9353.00  9764.323 7.027315 9.143452  9.186491
20   13    2 1235  9723.45 10332.892 7.118826 9.182296  9.243088
21   15    2 1633 12293.90 12974.543 7.398174 9.416858  9.470745
22   17    2 2073 15184.00 15677.310 7.636752 9.627998  9.659970
23   19    2 1385 11851.62 12550.153 7.233455 9.380220  9.437488
24   21    2 1643 12361.30 12979.350 7.404279 9.422326  9.471115
25   23    2 1759 15804.40 16565.390 7.472501 9.668044  9.715071
26   25    2 1682 15370.93 16052.139 7.427739 9.640233  9.683597
27   27    2 1605 15427.60 16303.724 7.380879 9.643913  9.699149


Sorry about this.

All the best,
---Yung-jin Lee
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-26 01:13

@ yjlee168
Posting: # 5220

 Dataset

Dear bears,

Well, I'm also running v2.4.2 here. I got the 20-subject dataset from the parallel-demo output 'lme_stat.txt'...

yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-26 08:16
(edited by yjlee168 on 2010-04-26 08:45)

@ Helmut
Posting: # 5226

 Dataset

Dear Helmut,

I noticed that d_labes got the same dataset as yours (df = 18 in his post), but he used v2.4.1. I also downloaded the bear v2.4.2 installation file (bear_2.4.2.tar.gz) from one of the CRAN ftp sites and unzipped it to view the dataset file (../bear/data/Paralleldata.r) for the parallel study. It is the new dataset with 27 subjects (test: 14, ref.: 13). So I don't know why your dataset was not updated when you upgraded bear to v2.4.2. I'll try to find out what is going on. Thanks for your message.

» well I'm also running v2.4.2 here. I got the 20 subjects data-set from the parallel-demo output 'lme_stat.txt'...

All the best,
---Yung-jin Lee
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-26 13:12

@ yjlee168
Posting: # 5232

 NCA → Statistical analysis for parallel study

Dear bears,

OK, checked it again. Paralleldata.r contains 27 subjects, but the demo works with the old one of 20. Still no idea where bear 2.4.2 gets the data from when I run the parallel-design demo. Strange.
Only if I run the NCA demo first do I get the new dataset…

yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-26 18:43

@ Helmut
Posting: # 5240

 NCA → Statistical analysis for parallel study

Dear Helmut,

Now I know what is going on. You're right about this. When you just run 'Statistical analysis', the parameter values are read directly from one of the R-script files (demoBANOVA.r), not from a data file. The data file, Paralleldata.r, is specifically used for the demo running from NCA -> Statistical analysis. Nothing's wrong so far. Thank you for your feedback.

» works with the old one of 20. Still no idea where bear 2.4.2 gets the data from if I run the parallel design demo. Strange. Only if I run the NCA-demo first, I get the new dataset…

All the best,
---Yung-jin Lee
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-26 08:41

@ Helmut
Posting: # 5227

 dilemma

Dear Helmut,

» T/R [%] with alpha 0.05 (90% CI)
» Point Estimate CL90 lower CL90 upper
» Ratio 101.607 82.858 124.598
»
» As expected the CI is slightly wider as the ones obtained by simple lm or the strange lme (83.312-123.920). Have a look at the plot. ;-)

Great. Thank you so much for the comparison. Interesting.

All the best,
---Yung-jin Lee
d_labes
Hero

Berlin, Germany,
2010-04-26 09:04

@ Helmut
Posting: # 5229

 Equal variances

Dear Helmut,

» # Change to folder containg data files in RGui!
» ...
» boxplot(lnAUC0t ~ drug, col="lightgray", yl)
Ooops: > Error in eval(expr, envir, enclos) : object 'yl' not found

» Have a look at the plot. ;-)

A very fine example of heteroscedasticity and possibly of non-normality :cool:.

With t-test assuming equal variances one gets:
         Point Estimate   CL90 lower   CL90 upper
Ratio           101.61        83.31       123.92


Homework: Compare this to the results of lm() or lme() in this thread :-D.

To repeat my question above: Any hint for more than 2 groups?
Very seldom in BE studies, I know. But according to Murphy's law I have to deal with a 3-parallel-group design as soon as tomorrow.
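Two base-R sketches for the more-than-2-groups case under unequal variances (the numbers below are invented): a global Welch-type one-way ANOVA via oneway.test(), and all pairwise Welch t-tests via pairwise.t.test() with pool.sd = FALSE.

```r
# Made-up log-AUC values for a 3-parallel-group design:
lnAUC <- c(9.43, 9.31, 9.64, 9.50, 9.50,   # group A
           9.58, 9.64, 9.54, 9.51, 9.43,   # group B
           9.14, 9.18, 9.42, 9.63, 9.38)   # group C
grp   <- factor(rep(c("A", "B", "C"), each = 5))

# Global Welch-type one-way ANOVA (no equal-variance assumption):
ow <- oneway.test(lnAUC ~ grp, var.equal = FALSE)

# All pairwise Welch t-tests; pick the p-value adjustment to match the
# study plan (here: none, i.e. each comparison at its nominal level):
pw <- pairwise.t.test(lnAUC, grp, pool.sd = FALSE, p.adjust.method = "none")
print(ow); print(pw)
```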

Regards,

Detlew
yjlee168
Senior
Homepage
Kaohsiung, Taiwan,
2010-04-26 09:22

@ d_labes
Posting: # 5230

 Equal variances

Dear d_labes,

» To repeat my question above: any hint for more than 2 groups?

Is the following 'Power to Know' code feasible (3-group parallel)? I have never tried it myself.
PROC GLM DATA=EXAMPLE;
  CLASS SUBJ TRT;
  MODEL LAUCT LAUCI LCMAX = TRT;
  ESTIMATE 'A vs. B' TRT 1 -1  0;  /* difference between trt A and trt B */
  ESTIMATE 'A vs. C' TRT 1  0 -1;  /* difference between trt A and trt C */
  ESTIMATE 'B vs. C' TRT 0  1 -1;  /* difference between trt B and trt C */
  LSMEANS TRT;
RUN;
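For completeness, a hedged R sketch of the same equal-variance analysis (toy data; the column names drug and lnAUC0t mimic bear's convention used earlier in this thread, and the pairwise estimates correspond to the ESTIMATE statements above):

```r
# Hypothetical 3-group parallel dataset; equal variances assumed,
# matching what PROC GLM does above.
set.seed(42)
TotalData <- data.frame(
  drug    = factor(rep(c("A", "B", "C"), each = 8)),
  lnAUC0t = rnorm(24, mean = 7.1, sd = 0.3)
)
fit <- lm(lnAUC0t ~ drug, data = TotalData)
coef(summary(fit))          # drugB = B - A, drugC = C - A (log scale)
confint(fit, level = 0.90)  # 90% CIs based on the pooled variance
exp(coef(fit)[-1])          # back-transformed pairwise ratios vs. A
```

The B vs. C contrast can be obtained the same way after releveling, e.g. with relevel(TotalData$drug, ref = "B").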

» Very seldom for BE studies, I know. But according to Murphy's law I have to deal with a 3-parallel-group design already tomorrow.

All the best,
---Yung-jin Lee
[image]bear v2.8.3:- created by Hsin-ya Lee & Yung-jin Lee
Kaohsiung, Taiwan http://pkpd.kmu.edu.tw/bear
Download link (updated) -> here
d_labes
Hero

Berlin, Germany,
2010-04-26 13:29

@ yjlee168
Posting: # 5233
Views: 28,211
 

 GLM = Equal variances

Dear Yung-jin,

» Are the following Power to know codes feasible (3-group Parallel)?

Yesss Sir. But only if we assume equal variances within the groups.
I do not have the power to know any way of getting an unequal-variances assumption into PROC GLM.

And at least the mighty FDA does not want it this way from us :no:.

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-26 14:45

@ d_labes
Posting: # 5234
Views: 28,205
 

 GLM = Equal variances

Dear D. Labes and Yung-jin,

» Yesss Sir. But only if we assume equal variances within the groups.
» I do not have the power to know any way to get unequal variance assumption into Proc GLM.
»
» And at least the mighty FDA does not want it this way from us :no:.

FDA’s requirement is the justification for guideline-believers. I think it’s not by chance that FDA calls for procedures for unequal variances if they are available (specifically for parallel groups and replicate designs). Bad luck for 2×2 cross-overs (common variance part of the model).
IMHO good statistical practice would call for the Welch-test; it's also not by chance that var.equal = FALSE is the default in R's t.test.
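As a minimal sketch (toy numbers, not a dataset from this thread): t.test() performs the Welch variant unless one sets var.equal = TRUE, and the back-transformed 90% CI follows from exponentiating the interval for the difference of log-means.

```r
# Welch test on hypothetical log-transformed AUCs; var.equal = FALSE
# is t.test()'s default.
set.seed(1)
lnT <- rnorm(12, mean = log(95),  sd = 0.25)  # Test group (toy data)
lnR <- rnorm(10, mean = log(100), sd = 0.40)  # Reference group (toy data)
welch <- t.test(lnT, lnR, conf.level = 0.90)  # Welch variant by default
welch$parameter      # Satterthwaite df, generally non-integer
exp(welch$conf.int)  # 90% CI for the T/R ratio of geometric means
```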

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
d_labes
Hero

Berlin, Germany,
2010-04-26 15:58

@ Helmut
Posting: # 5235
Views: 28,121
 

 I'm a believer

Dear Helmut,

I thought love was only true in fairy tales
meant for someone else but not for me
love was out to get me that's the way it seemed
disappointment haunted all my dreams

Then I saw her face now I'm a believer not a trace of doubt in my mind
I'm in love I'm a believer I couldn't leave her if I tried
Neil Diamond / The Monkees, 1966


» FDA’s requirement is the justification for guideline-believers.
Thanks for this 'nom du guerre' :pirate:.

» IMHO good statistical practice would call for the Welch-test
:ponder: I would say more tentatively: some sort of ...

Best regards,
yours respectfully
Guideline-Believer
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-26 16:31

@ d_labes
Posting: # 5237
Views: 28,145
 

 I'm a believer

Dear D. Labes,

THX for bringing this song back to my memory (German: schlimmer Ohrwurm, i.e., a terrible earworm).

» » FDA’s requirement is the justification for guideline-believers.
» Thanks for this 'nom du guerre' :pirate:.

Well, of course I didn't mean you by that. Only a hint for people too lazy to go beyond guidelines. ;-)

» » IMHO good statistical practice would call for the Welch-test
» :ponder: I would say more tentatively: some sort of ...

OK, if you like: Welch, Satterthwaite, Aspin-Welch, Howe, Milliken-Johnson…

A warning for M$-Excel users: TTEST(matrix1;matrix2;2;3) for unequal variances uses a crude approximation, i.e., it rounds the degrees of freedom to an integer. Another reason to avoid this piece of software.
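For comparison, a sketch of the Welch-Satterthwaite approximation that R keeps fractional (toy data; welch_df is a hypothetical helper, not a base R function):

```r
# Welch-Satterthwaite degrees of freedom, kept fractional (unlike Excel).
welch_df <- function(x, y) {
  vx <- var(x) / length(x)
  vy <- var(y) / length(y)
  (vx + vy)^2 / (vx^2 / (length(x) - 1) + vy^2 / (length(y) - 1))
}
set.seed(7)
x <- rnorm(12)
y <- rnorm(10, sd = 2)
welch_df(x, y)           # fractional df
t.test(x, y)$parameter   # R's t.test reports the same value
```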

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-26 12:55

@ d_labes
Posting: # 5231
Views: 28,219
 

 Equal variances

Dear D. Labes,

» » boxplot(lnAUC0t ~ drug, col="lightgray", yl)
» Ooops: > Error in eval(expr, envir, enclos) : object 'yl' not found

Shit! Of course instead of
boxplot(lnAUC0t ~ drug, col="lightgray", yl)
use
boxplot(lnAUC0t ~ drug, col="lightgray")

» A very fine example of heteroscedasticity and possibly of non-normality :cool:.

Maybe. But we are not allowed to test that anyhow. ;-)
For non-bears, here is the dataset (imbalanced, 27 subjects) for download.

» With t-test assuming equal variances one gets:
»          Point Estimate   CL90 lower   CL90 upper
» Ratio           101.61        83.31       123.92
»
» Homework: Compare this to the results of lm() or lme() in this thread
» :-D.

Yes, sure. What else? That's why we should avoid it.

» To repeat my question above: any hint for more than 2 groups?
» Very seldom in BE studies, I know. But according to Murphy's law I have to deal with a 3-group parallel design as early as tomorrow.

I went through the literature last week but have not come to a conclusion yet. Essentially there are two approaches:
  • one-way ANOVA with post-hoc tests
  • between group t-tests
So if you want to get an opinion for tomorrow: I would go with Welch-tests, because ANOVA (or lm and lme) assume homoscedasticity. See also:

Moser BK and GR Stevens
Homogeneity of variance in the two-sample means test
Amer Statist 46, 19-21 (1992)


Depending on the study’s target I would consider a Bonferroni-adjustment (that’s even the original application: independent tests).
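A hedged sketch of that suggestion (toy data with three unequal groups): pairwise Welch tests, each at the Bonferroni-adjusted confidence level 1 - 0.10/3.

```r
# Pairwise Welch tests with Bonferroni adjustment (hypothetical data).
set.seed(3)
dat <- data.frame(
  drug    = factor(rep(c("A", "B", "C"), times = c(10, 9, 8))),
  lnAUC0t = c(rnorm(10, 7.1, 0.3), rnorm(9, 7.0, 0.5), rnorm(8, 7.2, 0.4))
)
cmp  <- combn(levels(dat$drug), 2)   # the three pairwise comparisons
conf <- 1 - 0.10 / ncol(cmp)         # Bonferroni-adjusted level
for (j in seq_len(ncol(cmp))) {
  a  <- dat$lnAUC0t[dat$drug == cmp[1, j]]
  b  <- dat$lnAUC0t[dat$drug == cmp[2, j]]
  ci <- t.test(a, b, conf.level = conf)$conf.int   # Welch by default
  cat(cmp[1, j], "vs.", cmp[2, j], ":",
      sprintf("%.2f%% - %.2f%%\n", 100 * exp(ci[1]), 100 * exp(ci[2])))
}
```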

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
d_labes
Hero

Berlin, Germany,
2010-04-26 16:36

@ Helmut
Posting: # 5238
Views: 28,283
 

 gls() for unequal variances?

Dear Helmut,

» ... I would go with Welch-tests, because ANOVA (or lm and lme) assume
» homoscedasticity.

What about:
require(nlme) # HS: added, because not loaded by default
# must use gls() because lme() does not allow for no random effect
glsModel <- gls(lnAUC0t ~ drug, weights=varIdent(form=~1|drug), data=TotalData)
summary(glsModel)


This gives (20 subjects bear data) :
Generalized least squares fit by REML
  Model: lnAUC0t ~ drug
  Data: TotalData
       AIC      BIC    logLik
  8.491722 12.05321 -0.245861

Variance function:
 Structure: Different standard deviations per stratum
 Formula: ~1 | drug
 Parameter estimates:
        1         2
1.0000000 0.4173818  (seems to give us different standard deviations per stratum, relative to the first, hurrrrah)

Coefficients:
              Value Std.Error  t-value p-value
(Intercept) 7.10189 0.1056507 67.22045  0.0000
drug2       0.01594 0.1144841  0.13923  0.8908


But astonishing :confused::
intervals(glsModel, level=0.9)
Approximate 90% confidence intervals

 Coefficients:
                 lower    est.     upper
(Intercept)  6.9186849 7.10189 7.2850951
drug2       -0.1825826 0.01594 0.2144626

#back-transformed
T vs. R       83.31   101.61   123.92 (same as without weights or lm() :-()
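A possible explanation (my sketch, assuming the 20-subject dataset is balanced with 10 per arm): with equal group sizes the heteroscedastic standard error of the difference equals the pooled one, and gls() still uses the residual n1+n2-2 degrees of freedom, so the interval must coincide with the equal-variance t-test; a difference would only show up with unbalanced groups (or a Satterthwaite-type df).

```r
# Toy check: for balanced groups the two standard errors are identical.
n  <- 10
s1 <- 0.30   # hypothetical SD in group 1
s2 <- 0.13   # hypothetical SD in group 2
se_welch  <- sqrt(s1^2 / n + s2^2 / n)
s2_pooled <- ((n - 1) * s1^2 + (n - 1) * s2^2) / (2 * n - 2)
se_pooled <- sqrt(s2_pooled * (1 / n + 1 / n))
all.equal(se_welch, se_pooled)   # TRUE whenever the groups are balanced
```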

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-26 17:00

@ d_labes
Posting: # 5239
Views: 28,110
 

 gls() for unequal variances?

Dear D. Labes,

» But astonishing:
»
» #back-transformed
» T vs. R       83.31   101.61   123.92 (same as without weights or lm()

Very strange...

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-27 01:36

@ d_labes
Posting: # 5242
Views: 28,097
 

 Sims

Dear D. Labes,

you have already outed yourself as a passionate wheel-reinventer, but why don't you simply go with the Welch-test (or whatsoever for unequal variances)? Since our Capt'n luuuvs simulations, I recycled some of his code to play with:
sims   <- 10000
MeanT  <- 95
MeanR  <- 100
CVT    <- 0.25
CVR    <- 0.40
nCons  <- 0
nAnti  <- 0
nT     <- 12
nR     <- 10
for (iter in 1:sims)
  {
    lnPKT  <- rnorm(n=nT,mean=MeanT,sd=CVT*MeanT)
    lnPKR  <- rnorm(n=nR,mean=MeanR,sd=CVR*MeanR)
    WelchRes <- t.test(lnPKT,lnPKR,conf.level=0.9)
    TtestRes <- t.test(lnPKT,lnPKR,var.equal=TRUE,conf.level=0.9)
    WidthW <- abs(WelchRes$conf.int[1]-WelchRes$conf.int[2])
    WidthT <- abs(TtestRes$conf.int[1]-TtestRes$conf.int[2])
    if (WidthT<WidthW){
      nAnti <- nAnti+1
      }else{
      nCons <- nCons+1
    }
  }
result  <- paste(paste("t-test compared to Welch-test\n"),
          paste("Conservative = ", 100*nCons/sims, "%\n"),
          paste("Liberal = ", 100*nAnti/sims, "%\n"))
cat(result)

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
d_labes
Hero

Berlin, Germany,
2010-04-28 10:58

@ Helmut
Posting: # 5253
Views: 28,164
 

 Sandwich - Simsalabim

Dear Helmut,

----- Sandwich -----
» you have already outed yourself as a passionate wheel-reinventer, but why don't you simply go with the Welch-test (or whatsoever for unequal variances)?

To quote myself from near the beginning of this thread: "I would go for a parallel groups study with exactly 2 groups with the 'simple' t-test (Welch variant ...)". 'Simple' was meant here in contrast to the lme() model used in bear.

But I'm looking for something that can be generalized easily to more than 2 groups. Pairwise tests I'm not so happy with.

Possibly this (recently discovered) could be a way:
Herberich E, Sikorski J, Hothorn T (2010)
"A Robust Procedure for Comparing Multiple Means under Heteroscedasticity in Unbalanced Designs"
PLoS ONE 5(3): e9788. doi:10.1371/journal.pone.0009788
online resource

Easy to implement in R, only a couple of code lines.
But I was not a sandwich gourmet up to now :-D.

----- Simsalabim -----
<nitpicking>
» ...
» lnPKT <- rnorm(n=nT,mean=MeanT,sd=CVT*MeanT)
» lnPKR <- rnorm(n=nR,mean=MeanR,sd=CVR*MeanR)
» ...
:ponder:

If you are really interested in dealing with log-transformed metrics, as the name lnPK suggests :-D, I would suggest the following modification:
# short hand function
   CV2sd <- function(CV) return(sqrt(log(1.0 + CV^2)))
...
     lnPKT  <- rlnorm(n=nT, meanlog=log(MeanT), sdlog=CV2sd(CVT))
     lnPKR  <- rlnorm(n=nR, meanlog=log(MeanR), sdlog=CV2sd(CVR))


Of course this will not affect the comparison between t-test with equal variances and Welch t-test I think.
</nitpicking>

Regards,

Detlew
Helmut
Hero
Homepage
Vienna, Austria,
2010-04-28 14:19

@ d_labes
Posting: # 5257
Views: 28,053
 

 Sandwich - Simsalabim

Dear D. Labes!

» But I'm looking for something that can be generalized easily to more than 2 groups. Pairwise tests I'm not so happy with.

OK. Interesting reference, although the word "robust" in the title may trigger the non-parametric-panic-reflex in some regulators. :-D

Of course you are right about rlnorm(…) - quick and dirty as always.

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
martin
Senior

Austria,
2010-05-02 18:22

@ d_labes
Posting: # 5272
Views: 28,285
 

 parametrization of R function rlnorm

dear d_labes!

I just want to draw your attention to the parametrization of the R function rlnorm. When you want to generate log-normally distributed random variables with a given mean and CV on the observed scale, I suggest using meanlog=log(mean)-0.5*log(1+cv^2) instead of meanlog=log(mean).

# example
set.seed(020510)
n <- 1E6
mue <- 5
cv <- 0.5

x <- rlnorm(n=n, meanlog=log(mue)-0.5*log(1+cv^2), sdlog=sqrt(log(1+cv^2)))
y <- rlnorm(n=n, meanlog=log(mue), sdlog=sqrt(log(1+cv^2)))

mean(x); sd(x)/mean(x)
mean(y); sd(y)/mean(y)


best regards

martin
d_labes
Hero

Berlin, Germany,
2010-05-03 16:22

@ martin
Posting: # 5276
Views: 27,936
 

 Mean of log-normal

dear martin!

» ... When you want to generate log-normal distributed random variables with a given mean and CV on the observed scale - I suggest to use meanlog=log(mean)-0.5*log(1+cv^2) instead of meanlog=log(mean) ...

Thanks for clarifying this.

Regards,

Detlew
ElMaestro
Hero

Denmark,
2013-07-26 21:42
(edited by ElMaestro on 2013-07-26 22:00)

@ martin
Posting: # 11065
Views: 19,449
 

 parametrization of R function rlnorm

Hi Martin and all,

Sorry to embark on archaeology.

» When you want to generate log-normal distributed random variables with a given mean and CV on the observed scale - I suggest to use meanlog=log(mean)-0.5*log(1+cv^2) instead of meanlog=log(mean).
»
» # example
» set.seed(020510)
» n <- 1E6
» mue <- 5
» cv <- 0.5
»
» x <- rlnorm(n=n, meanlog=log(mue)-0.5*log(1+cv^2), sdlog=sqrt(log(1+cv^2)))
» y <- rlnorm(n=n, meanlog=log(mue), sdlog=sqrt(log(1+cv^2)))
»
» mean(x); sd(x)/mean(x)
» mean(y); sd(y)/mean(y)

Well.... Is that really something anyone wants to do?
When we simulate BE, like in crossover trials, we most likely want to simulate with a known geometric mean ratio on the observed scale and a known CV on the observed scale.
I could have misunderstood rlnorm entirely, but otherwise:

GeoMean <- function(a) {
  exp(mean(log(a)))
}


set.seed(020510)
n <- 1E6
GMR <- 0.95
cv <- 0.5

x <- rlnorm(n=n, meanlog=log(GMR)-0.5*log(1+cv^2), sdlog=sqrt(log(1+cv^2)))
y <- rlnorm(n=n, meanlog=log(GMR), sdlog=sqrt(log(1+cv^2)))

mean(x); sd(x)/mean(x)
mean(y); sd(y)/mean(y)

GeoMean(x)
GeoMean(y)


Stuff like that really hurts.
I would any day prefer to use rnorm to avoid the confusion:

logTRratios=rnorm(n=1e6, mean=log(0.95), sd=sqrt(log(1+cv^2)))
TRratios=exp(logTRratios)

GeoMean(TRratios) ##Geometric mean observed scale
mean(TRratios) ##useless?
mean(logTRratios) ## close to log(0.95)
sd(TRratios)/mean(TRratios)


Let me add that for those who simulate Test and Ref data individually (yes, there are still backward people who do that :-D), the subtraction of something like 0.5*log(1+cv^2) will cancel out, as long as a common variance term is used and balance is present, because log(T)-log(R) = log(T/R).
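That cancellation can be sketched numerically (toy GMR and CV, my assumptions): the common offset -0.5*log(1+cv^2) shifts both meanlog terms and drops out of log(T) - log(R).

```r
# The common offset cancels in the ratio of geometric means.
set.seed(11)
n   <- 1e6
cv  <- 0.5
off <- 0.5 * log(1 + cv^2)       # the offset in question
sdl <- sqrt(log(1 + cv^2))
T_  <- rlnorm(n, meanlog = log(95)  - off, sdlog = sdl)   # toy Test data
R_  <- rlnorm(n, meanlog = log(100) - off, sdlog = sdl)   # toy Reference data
exp(mean(log(T_)) - mean(log(R_)))  # close to 95/100 = 0.95
```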

I could be wrong, but…


Best regards,
ElMaestro


GCP lecturer: "And who do we consider the most important person in a clinical trial?"
Principal Investigator: "Me!".
Helmut
Hero
Homepage
Vienna, Austria,
2013-07-28 02:01

@ ElMaestro
Posting: # 11081
Views: 19,159
 

 Martin‽

Hi ElMaestro & Martin,

» Sorry to embark on archaeology.

Great. Comes in handy for some sims I'm currently running!

meang <- function(z) { exp(mean(log(na.omit(z)))) }
mu <- 1
CV <- 0.3
set.seed(20130728)
n  <- 1e7
x  <- rlnorm(n=n, meanlog=log(mu)-0.5*log(1+CV^2), sdlog=sqrt(log(CV^2+1)))
y  <- rlnorm(n=n, meanlog=log(mu), sdlog=sqrt(log(CV^2+1)))
res<- matrix(nrow=2, ncol=6)
colnames(res) <- c("a. mean", "%RE a. mean", "CV", "%RE CV", "g. mean", "%RE g. mean")
rownames(res) <- c("Par. 1 (x)", "Par. 2 (y)")
split.screen(c(2, 1))
  screen(1)
    hist(log(x), freq=F, xlim=c(-1, 1), las=1,
      main="meanlog=log(mu)-0.5*log(1+CV^2)", cex.main=0.9)
    abline(v=log(meang(x)), lwd=2, col="red")
  screen(2)
    hist(log(y), freq=F, xlim=c(-1, 1), las=1,
      main="meanlog=log(mu)", cex.main=0.9)
    abline(v=log(meang(y)), lwd=2, col="green")
close.screen(all=TRUE)
res[1, 1] <- mean(x)
res[1, 2] <- 100*(mean(x)-mu)/mu
res[1, 3] <- sd(x)/mean(x)
res[1, 4] <- 100*(sd(x)/mean(x)-CV)/CV
res[1, 5] <- meang(x)
res[1, 6] <- 100*(meang(x)-mu)/mu
res[2, 1] <- mean(y)
res[2, 2] <- 100*(mean(y)-mu)/mu
res[2, 3] <- sd(y)/mean(y)
res[2, 4] <- 100*(sd(y)/mean(y)-CV)/CV
res[2, 5] <- meang(y)
res[2, 6] <-100*(meang(y)-mu)/mu
round(res, 5)


[image]

           a. mean %RE a. mean      CV   %RE CV g. mean %RE g. mean
Par. 1 (x) 1.00008     0.00773 0.29994 -0.01999 0.95792    -4.20770
Par. 2 (y) 1.04414     4.41438 0.29992 -0.02725 1.00011     0.01147


Or alternatively:
boxplot(log(x), log(y), ylab="log(x, y)",
  names=c("Par. 1", "Par. 2"), las=1)
abline(h=0, lty=3)


[image]

Sometimes R drives me nuts. Modified from help(rlnorm):
The log normal distribution has density

f(x) = 1/(√(2π) σ x) e^(−(log x − μ)² / (2σ²))

where μ and σ are the mean and standard deviation of the logarithm.
The mean is E(X) = e^(μ + σ²/2), the median is med(X) = e^μ, and the variance is Var(X) = e^(2μ + σ²) (e^(σ²) − 1), and hence the coefficient of variation is √(e^(σ²) − 1) […].

How I love the word “hence”.

[image]All the best,
Helmut Schütz 
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes

Only dead fish go with the current.    Scuba divers' proverb

The BIOEQUIVALENCE / BIOAVAILABILITY FORUM is hosted by
BEBAC Ing. Helmut Schütz
XHTML/CSS RSS Feed