Bioequivalence and Bioavailability Forum

AngusMcLean
Senior

USA,
2016-05-11 16:55

Posting: # 16294
Views: 14,212
 

 Dose Proportionality and Variance [Study Assessment]

We have completed a dose proportionality study using the lowest and the highest strength of a modified-release formulation of a Class I drug. The approach used was the usual 2×2 cross-over design with 20 subjects (AB, BA); the PK parameters Cmax and AUC0-t were dose-normalized prior to the bioequivalence assessment. No problem: we are comfortably within the BE limits (0.80–1.25). I use Phoenix WinNonlin 6.4, so the within-subject and between-subject variance values for Cmax and AUC0-t appear in the output.

Another worker has used the power model with the same data set for dose proportionality, as described by Brian Smith in Pharmaceutical Research in 2000. The paper is entitled "Confidence Interval Criteria for Assessment of Dose Proportionality". The point estimates are much the same as with my approach, but he quotes a 98% CI. His CI values are also within the limits.

This worker has also calculated within-subject and between-subject variance values for the parameters. There are large differences in the within-subject variance values compared with my approach, but the between-subject variance values are very similar for both Cmax and AUC0-t. Should the within-subject and between-subject variance values of Cmax and AUC0-t be very similar for both approaches?

Angus


Edit: Category changed; see also this post #1. Please don’t shout! [Helmut]
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-12 14:34

@ AngusMcLean
Posting: # 16299
Views: 13,442
 

 More information, please

Hi Angus,

» We have completed a dose proportionality study using the lowest and the highest strength of a modified-release formulation of a Class I drug. The approach used was the usual 2×2 cross-over design with 20 subjects (AB, BA); the PK parameters Cmax and AUC0-t were dose-normalized prior to the bioequivalence assessment. No problem: we are comfortably within the BE limits (0.80–1.25).

Testing only two levels for dose proportionality is somewhat unconventional.
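For reference, Smith's criterion translates the BE limits into acceptance bounds for the power-model slope β: the CI for β must lie within 1 + ln(θ)/ln(r), where r is the ratio of the highest to the lowest dose and θ = 0.8 or 1.25. A minimal sketch (Python used only for illustration; the function name is mine):

```python
import math

def smith_slope_bounds(dose_ratio, theta_lo=0.8, theta_hi=1.25):
    """Acceptance bounds for the power-model slope beta per Smith et al. (2000):
    the CI for beta must lie within [1 + ln(theta_lo)/ln(r), 1 + ln(theta_hi)/ln(r)],
    where r is the ratio of the highest to the lowest dose."""
    log_r = math.log(dose_ratio)
    return (1 + math.log(theta_lo) / log_r, 1 + math.log(theta_hi) / log_r)

lo, hi = smith_slope_bounds(8)             # high dose = 8x low dose
print(round(lo, 4), round(hi, 4))          # 0.8927 1.1073
```

Note how the bounds tighten as the dose range shrinks: for a twofold range the lower bound is about 0.678, for an eightfold range about 0.893.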

» Another worker has used the power model with the same data set for dose proportionality […]

Does it look like this (high dose = 8× low dose)?

[image]


» The point estimates are much the same as my approach, […]

Numbers? AUC only – Cmax is of limited value in DP.

» […] but he quotes 98% CI.

Smells of Bonferroni’s adjustment for four simultaneous (and independent) tests in order to control the familywise error rate: 100(1 − 2α/4)% = 97.5% CI and FWER = 1 − (1 − α/4)^4 ≈ 0.0491.
In the power model we have only two parameters (the coefficient α and the exponent β): E[Y] = α·D^β, independent of the number of dose levels tested, and we are interested only in β. I don’t see why a multiplicity adjustment was done.
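The arithmetic behind that guess, as a quick sketch (Python only for illustration):

```python
alpha = 0.05                       # overall two-sided BE alpha
k = 4                              # number of simultaneous tests (guessed)
adj = alpha / k                    # Bonferroni-adjusted per-test alpha
ci_level = 100 * (1 - 2 * adj)     # adjusted confidence level of the interval
fwer = 1 - (1 - adj) ** k          # familywise error rate, assuming independence
print(round(ci_level, 1), round(fwer, 4))   # 97.5 0.0491
```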

» His CI values are also within the limits.

How did he get a CI of β? With two dose levels we have zero degrees of freedom for a model with two parameters.

Before we can discuss variances, we need more information.

Cheers,
Helmut Schütz
[image]

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
AngusMcLean
Senior

USA,
2016-05-13 16:40

@ Helmut
Posting: # 16301
Views: 13,345
 

 More information, please

Hello: It is a very well-known drug, and many MR formulations are on the market (you have experience with this drug). All except one formulation show dose proportionality from the lowest to the highest strength. The one which does not is almost dose proportional. We used that in our clinical protocol to justify our study design of using just two doses. We do have intermediate doses between the low and the high.


There are only two dose levels to plot. The relationship is as follows:

LnPK = B0 + B1·Ln(dose), where LnPK pertains to Cmax or AUC.

So we have a regression line going through the points: we evaluate the slope (B1), the intercept, and the confidence interval about the slope to evaluate dose proportionality. Brian Smith in Pharmaceutical Research (2000) extended the approach of the original UK working party. It seems that you can calculate intrasubject and intersubject variances, e.g., for AUC and partial AUC, from this approach. I do not follow how to do it. I use the usual intrasubject and intersubject values from Phoenix WinNonlin 6.4 and I am happy with that.
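For what it's worth, with only two dose levels the point estimate of B1 reduces to a ratio of log-differences, and it maps directly onto the dose-normalized ratio. A sketch (Python only for illustration; the doses and PK values below are hypothetical, not from our study):

```python
import math

def slope_from_two_doses(pk_low, pk_high, dose_low, dose_high):
    """Point estimate of the power-model slope B1 from two dose levels:
    B1 = ln(PK_high / PK_low) / ln(dose_high / dose_low)."""
    return math.log(pk_high / pk_low) / math.log(dose_high / dose_low)

# hypothetical geometric-mean AUCs at 10 mg and 80 mg
b1 = slope_from_two_doses(100.0, 760.0, 10.0, 80.0)
# dose-normalized high/low ratio implied by that slope: r**(B1 - 1)
dn_ratio = (80.0 / 10.0) ** (b1 - 1)
print(round(b1, 4), round(dn_ratio, 4))   # 0.9753 0.95
```

The second number is exactly (760/80)/(100/10) = 0.95, i.e., the dose-normalized ratio used in the BE-style assessment; the two views of dose proportionality carry the same point estimate.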

Angus
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-14 02:26

@ AngusMcLean
Posting: # 16302
Views: 13,534
 

 Setup in Phoenix/WinNonlin

Hi Angus,

» It is a very well known drug and many MR formulations are on the market. (You have experience with this drug).

If we are talking about the same goody: Watch out for polymorphism… Sometimes you have poor metabolizers in the study, where enzymes get saturated at higher doses. In one subject I once got a slope of 1.54 for AUC and 2.16 for Cmax over only a twofold dose range! For the other subjects I got 1.05 and 1.02 with very narrow CIs.

» There are only two dose levels to plot. The relationship is as follows:
»
» LnPK=B0+B1*Ln(dose) where LnPK pertains to Cmax or AUC.
»
» So we have a regression line going through the points: we evaluate the slope (B1), intercept

OK, so far.

» and the confidence intervals about the slope to evaluate dose proportionality.

This is beyond me. df = n − p, where n is the number of data points and p the number of parameters. How can you calculate a CI with df = 0?

» Brian Smith in Pharm Research year 2000 has extended the approach from the original UK working party.

Yep. Smith et al. use a mixed-effects model, where subjects are a random effect. Thus we increase n. Now a CI is possible even for p = 2.

» It seems that you can calculate intrasubject and intersubject variance e.g. for AUC and partial AUC from this approach.

Correct.

» I do not follow how to do it. I use the usual intrasubject and intersubject values from Phoenix WinNonlin 6.4 and I am happy with that.

If you are happy with that, what is your question? :-D

If you want to reproduce Smith’s results in Phoenix/WinNonlin: Start with a worksheet (columns subject, dose, Cmax, AUC, whatsoever). Log-transform dose, Cmax, … and set weight = 1/logCmax, … Send to LME. Map Subject as Classification, logCmax as Regressor, and logCmax as Dependent.
Model Specification: logCmax
Fixed Effects Confidence Level: 90%
Variance Structure / Random 1: Subject
With Smith’s Cmax-data of Table 1 I got for the slope:
0.7617 (90% CI: 0.6696, 0.8539), slightly different from the reported 0.7615 (0.679, 0.844). Why? Dunno.

[image]
(jitter added to doses)


See also chapter 18.3 in Chow/Liu. Without explanation they recommend a 95% CI in general but a 90% CI when elaborating Smith’s approach. I generally prefer a weighted model (hence the transformation above); it fits much better.
                 SSQ      AIC   Var(Subject)  Var(Res)
w = 1          0.07389   9.247     0.1120     0.01456 
w = 1/logCmax  0.01548  -8.917     0.1108     0.003076
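As a cross-check of why the random subject effect matters here: a naive ordinary least-squares fit of logCmax on logDose that ignores the repeated measurements per subject gives a noticeably different slope than the mixed model's 0.7617. A sketch (Python only for illustration; data transcribed from Smith's Table 1, also posted later in this thread):

```python
import math

# Smith's Table 1 Cmax data; a plain OLS fit of log(Cmax) on log(Dose) that
# IGNORES the repeated measurements per subject -- for comparison only,
# this is NOT Smith's mixed-effects model.
dose = [25, 25, 50, 50, 50, 250, 250, 250, 75, 75, 75, 250, 250, 250]
cmax = [64.82, 67.35, 104.15, 143.12, 243.63, 451.44, 393.45,
        796.57, 145.13, 166.77, 296.90, 313.00, 387.00, 843.00]

x = [math.log(d) for d in dose]
y = [math.log(c) for c in cmax]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx
print(round(slope, 4), round(intercept, 4))  # slope ~0.82 vs 0.7617 mixed
```

The OLS slope comes out around 0.82, presumably pulled by the six 250 mg observations from re-dosed subjects, which the random subject effect accounts for.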



PS: Can you ask “the other worker” why he/she calculated the 98% CI?

Cheers,
Helmut Schütz
AngusMcLean
Senior

USA,
2016-05-14 18:54
(edited by AngusMcLean on 2016-05-14 22:06)

@ Helmut
Posting: # 16303
Views: 13,247
 

 Setup in Phoenix/WinNonlin

Helmut: My apologies, it is a typo: it is 95%, not 98%.

Yes; I am happy with the Phoenix WinNonlin calculations for within-subject and between-subject variance. The reason for my interest is that the other worker, following Smith's method, is producing values which are much lower than my Phoenix values for within-subject variance. It seems to me that the values from the two methods should be much the same?

We will be speaking next week and I am going to ask him exactly how he derived his values for the data set. We each have the same data set; he got the data from me. Obviously the other worker is producing a set of values that are much lower and make the formulation appear to have lower within-subject variability.

Thank you for the above steps. I do see that LnCmax cannot be both the dependent and the regressor (I think LnDose is the regressor). I have tried to run the linear mixed-effects model, but I cannot repeat your results. The program ran, but my residual variance was 0.154. I am thinking that maybe my input file does not have the structure needed; e.g., subjects 4, 5, and 6 in the Smith data were treated at 50 mg and 250 mg, so do you need to differentiate by including period 1 and period 2 variables in the input file?
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-15 14:47

@ AngusMcLean
Posting: # 16305
Views: 13,285
 

 Setup in Phoenix/WinNonlin

Hi Angus,

» My apologies, it is a typo: it is 95%, not 98%.

Whew! Confused me. ;-)

» I am happy with the Phoenix WinNonlin calculations for within-subject and between-subject variance. The reason for my interest is that the other worker, following Smith's method, is producing values which are much lower than my Phoenix values for within-subject variance. It seems to me that the values from the two methods should be much the same?

Variances should be similar. Smith’s data don’t help because the information is incomplete (cross-over or paired? sequences? periods?). Here is one of my studies (6×3 Williams’ design):
                      dose-norm.       power-model       
                          BE       w = 1   w = 1/ln(dose)
Var(sequence*subject)  0.1593     0.1645      0.1595     
Var(Residual)          0.02284    0.02330     0.007113   


» I do see that LnCmax cannot be both the dependent and the regressor (I think LnDose is the regressor).

Exactly. I uploaded a project-file at Certara’s forum. Compare it to your setup.

» I have tried to run the linear mixed-effects model, but I cannot repeat your results. The program ran, but my residual variance was 0.154. I am thinking that maybe my input file does not have the structure needed; e.g., subjects 4, 5, and 6 in the Smith data were treated at 50 mg and 250 mg, so do you need to differentiate by including period 1 and period 2 variables in the input file?

I have no clue how the subjects were treated… Maybe it was a dose escalation?
      period  
    1   2   3 
1  25         
2  25         
4      50  250
5      50  250
6      50  250
7      75  250
8      75  250
9      75  250

Cheers,
Helmut Schütz
AngusMcLean
Senior

USA,
2016-05-15 15:17

@ Helmut
Posting: # 16306
Views: 13,232
 

 Setup in Phoenix/WinNonlin

Helmut: I am having difficulty with Phoenix: I get an error message when the program loads relating to a missing framework. It tells me that the program may not work properly. However, I was eventually able to get your file to run OK. It is a dose-escalation study… very early development. No study design per se is given.

After this success I then tried the data set I had already studied for dose proportionality by dose-normalized BE in Phoenix. There are only two doses, given to 20 subjects in a cross-over design. So, using your structure, I set up the input for the LME model, with lndose in the model as given by you. I was able to get an evaluation of the residual error for both Cmax and AUC0-t. The values were indeed very similar to what I had obtained in Phoenix (e.g., the Cmax CV(%) was 19.9 compared with 20.06 by the BE approach; the CV(%) for AUC0-t was 12.2 compared with 12.6% by the BE approach).

The other worker has reported a CV(%) of 0.1 for AUC0-t. I am wondering if using SAS you can get a result of 0.1. For Cmax the other worker reports a CV(%) of 10.1. I do not think using SAS can give such a difference?
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-15 15:56

@ AngusMcLean
Posting: # 16307
Views: 13,265
 

 Phoenix 64 Warning

Hi Angus,

» I am having difficulty with Phoenix: I get an error message when the program loads relating to a missing framework. It tells me that the program may not work properly.

I’ve been getting this message for a few months as well (only in the 64-bit version of Phoenix):

[image]

Environment: Windows 7 Pro SP1 (64 bit) build 7601, Phoenix 64 build 6.4.0.768, .NET Framework 4.6.1 (last security update 2016-05-11).
Since I didn’t change my Phoenix installation I suspect Billy-boy is causing the trouble… I guess one of these February/March security updates is the reason: KB3122661, KB3127233, KB3136000.

» I was able to get an evaluation of the residual error for both Cmax and AUC0-t. The values were indeed very similar to what I had obtained in Phoenix (e.g., the Cmax CV(%) was 19.9 compared with 20.06 by the BE approach; the CV(%) for AUC0-t was 12.2 compared with 12.6% by the BE approach).

Congratulations!

» The other worker has reported a CV(%) of 0.1 for AUC0-t. I am wondering if using SAS you can get a result of 0.1. For Cmax the other worker reports a CV(%) of 10.1. I do not think using SAS can give such a difference?

Never seen that. We cross-validated Phoenix/WinNonlin and SAS many times. I think that he/she screwed up completely. I love “push-the-button” statisticians reporting a 0.1% CV (‼) without hesitation. How plausible is that? Good morning!
Body height can be measured pretty precisely. Ask the worker whether he/she thinks that repeated measurements can be done to ±1/16″. If the answer is yes, ask which variability he/she expects for AUC.
:ponder:
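To put numbers on the plausibility question, here is a quick sketch (Python only for illustration) of the standard conversion between the log-scale residual variance and the within-subject CV, CV = 100·√(exp(s²) − 1):

```python
import math

def cv_from_var(s2):
    """Within-subject CV (%) from the log-scale residual variance:
    CV = 100 * sqrt(exp(s2) - 1)."""
    return 100 * math.sqrt(math.exp(s2) - 1)

def var_from_cv(cv_pct):
    """Inverse: log-scale variance implied by a CV given in percent."""
    return math.log(1 + (cv_pct / 100) ** 2)

print(round(cv_from_var(0.0146), 1))  # 12.1 (Smith's Cmax residual variance)
print(var_from_cv(0.1))               # a 0.1% CV implies a tiny variance
```

A 0.1% CV corresponds to a log-scale variance of about 10⁻⁶, roughly four orders of magnitude below the residual variances discussed in this thread.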

Cheers,
Helmut Schütz
AngusMcLean
Senior

USA,
2016-05-15 20:11

@ Helmut
Posting: # 16309
Views: 13,144
 

 Phoenix 64 Warning

» […] We cross-validated Phoenix/WinNonlin and SAS many times. […] I love “push-the-button” statisticians reporting a 0.1% CV (‼) without hesitation. How plausible is that? […]
» Body height can be measured pretty precisely. Ask the worker whether he/she thinks that repeated measurements can be done to ±1/16″. If the answer is yes, ask which variability he/she expects for AUC.

Yes; we are still in 1/16″ units here. They are difficult to work with. It is highly unlikely for 0.1% to be correct. Sometimes people will believe whatever they find most favorable. I will find out more about the other data soon. Meanwhile I wonder if Pharsight can come up with a SAS comparison.

Meanwhile I am off to visit a German store, “Aldi”.


Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut]
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-16 16:26

@ AngusMcLean
Posting: # 16316
Views: 13,060
 

 OT: imperial vs. metric units

Hi Angus,

» Yes; we are still in 1/16" units here. They are difficult to work with.

All scientific journals I know demand SI (i.e., metric) units. Imperial units are a mess.

:offtopic!:
One story from my past: When I was already a diving instructor I went through my cave-diving courses in Mexico according to a US-system (NACD).
Calculating the maximum dive time (leaving air consumption for descent/ascent aside) in the metric system is bloody easy. Say the tank’s volume is 12 L and it is pressurized to 200 bar. The volume of expanded air is 12 × 200 = 2,400 L. If the breathing rate at the surface is 20 L/min, the tank’s air lasts for 2,400 / 20 = 120 min (ambient pressure 1 bar). If you dive, the pressure increases by 1 bar every 10 metres. Thus at 10 m the tank’s air will last for 2,400 / [20 × (1 + 1)] = 60 min, at 20 m for 2,400 / [20 × (1 + 2)] = 40 min, and so on. No pocket calculator needed.
How is this stuff done in the US? To start the confusion, tanks are not classified by their volume but by the volume of expanded air if the tank is filled to its rated pressure (which commonly is 3,000 psi). A standard tank is called an “80 ft³”. A common “surface breathing rate” is 0.7 ft³/min. The surface pressure is 14.5 psi and increases by another 14.5 psi every 33 ft. Good luck calculating the dive time.
In cave diving tanks are regularly “overfilled”, e.g., to 3,333 psi. Then your 80 ft³ tank contains 89 ft³. There is also special equipment (compressors, regulators, pressure gauges) rated for 300 bar (~4,350 psi). In the metric system the 12 L tank is still a 12 L tank (what else?). In the imperial system it’s a 116 ft³ tank although the dimensions are the same. Crazy. I wonder why more US divers don’t drown. Maybe they are better at math than me.
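The metric calculation above, as a tiny sketch (Python only for illustration):

```python
def dive_time_metric(tank_volume_l, fill_pressure_bar, surface_rate_l_min, depth_m):
    """Available time at depth, metric style: expanded air volume divided by the
    surface breathing rate scaled to ambient pressure (1 bar + 1 bar per 10 m)."""
    expanded_l = tank_volume_l * fill_pressure_bar
    ambient_bar = 1 + depth_m / 10
    return expanded_l / (surface_rate_l_min * ambient_bar)

print(dive_time_metric(12, 200, 20, 0))    # 120.0 min at the surface
print(dive_time_metric(12, 200, 20, 10))   # 60.0 min at 10 m
print(dive_time_metric(12, 200, 20, 20))   # 40.0 min at 20 m
```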

Cheers,
Helmut Schütz
ElMaestro
Hero

Denmark,
2016-05-15 20:54

@ AngusMcLean
Posting: # 16310
Views: 13,083
 

 Setup in Phoenix/WinNonlin

Hi Angus,

» Other worker has reported CV(%) of 0.1 for AUC0-t. I am wondering if using SAS you can get a result of 0.1. For Cmax other worker reports CV% 10.1. I do not think using SAS can give such a difference?

And this isn't just a case of someone confusing raw figures with percentages?

if (3) 4

x=c("Foo", "Bar")
b=data.frame(x)
typeof(b[,1]) ##aha, integer?
b[,1]+1 ##then let me add 1



Best regards,
ElMaestro

"(...) targeted cancer therapies will benefit fewer than 2 percent of the cancer patients they’re aimed at. That reality is often lost on consumers, who are being fed a steady diet of winning anecdotes about miracle cures." New York Times (ed.), June 9, 2018.
AngusMcLean
Senior

USA,
2016-05-15 22:30

@ ElMaestro
Posting: # 16313
Views: 13,013
 

 Setup in Phoenix/WinNonlin

» And this isn't just a case of someone confusing raw figures with percentages?

Thank you for your remarks: we will hopefully find out soon.
AngusMcLean
Senior

USA,
2016-05-16 21:00

@ Helmut
Posting: # 16317
Views: 13,049
 

 Setup in Phoenix/WinNonlin

» With Smith’s Cmax-data of Table 1 I got for the slope:
» 0.7617 (90% CI: 0.6696, 0.8539), slightly different from the reported 0.7615 (0.679, 0.844).

I have repeated the above calculation in NCSS as described by Jerry; here are the results for Brian Smith's example.
                                     90.0%       90.0%
          Effect   Effect            Lower       Upper
Effect    Estimate Standard Prob     Conf. Limit Conf. Limit     Effect
Name      (Beta)   Error    Level    of Beta     of Beta     DF  No.
Intercept  1.9414  0.2496   0.000025 1.4849      2.3978      9.2 1
logDose    0.7617  0.0492   0.000005 0.6659      0.8576      5.9 2


The within-subject variance was 0.0146 (the same as Phoenix).


Edit: Full quote removed, tabulators changed to spaces and BBcoded. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 and #6! [Helmut]
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-17 01:50

@ AngusMcLean
Posting: # 16318
Views: 12,918
 

 NCSS vs. PHX/WNL vs. SAS

Hi Angus,

» I have repeated the above calculation in NCSS:
Intercept  1.9414  0.2496   0.000025 1.4849      2.3978      9.2
logDose    0.7617  0.0492   0.000005 0.6659      0.8576      5.9


Phoenix/WinNonlin:
           1.9414  0.2431   0.000020 1.4968      2.3860      9.2
           0.7617  0.0473   0.000004 0.6696      0.8539      5.9


Reported by Smith et al. (SAS Proc Mixed):
           1.94                      1.54        2.35
           0.7615                    0.679       0.844


Results by NCSS and Phoenix/WinNonlin are similar but don’t match SAS (whose CIs are narrower). I love software.

Cheers,
Helmut Schütz
mittyri
Senior

Russia,
2016-05-18 08:23

@ Helmut
Posting: # 16321
Views: 12,853
 

 NCSS vs. PHX/WNL vs. SAS - Validation?

Hi Helmut,

» Results by NCSS and Phoenix/WinNonlin are similar but don’t match SAS (whose CIs are narrower). I love software.

I'm surprised by the results.
Is it possible to make a dataset for validation? Do we have a reference dataset with the known correct result anywhere?

Kind regards,
Mittyri
ElMaestro
Hero

Denmark,
2016-05-18 09:20
(edited by ElMaestro on 2016-05-18 09:58)

@ mittyri
Posting: # 16322
Views: 12,777
 

 Diagnostics

Hi all,

» » Results by NCSS and Phoenix/WinNonlin are similar but don’t match SAS (whose CIs are narrower). I love software.
»
» I'm surprised with the results
» Is it possible to make a dataset for validation? Do we have anywhere the reference dataset and the accurate result?

Extract some model diagnostics (DFs and log-likelihood) and compare to find out which result is the better candidate.

Life is good.

Best regards,
ElMaestro
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-18 15:14

@ ElMaestro
Posting: # 16324
Views: 12,908
 

 Diagnostics: R and Phoenix

Hi ElMaestro et al.,

» Extract some model diagnostics (DFs and log-likelihood) and compare to find out which result is the better candidate.

I can only provide the results of R and Phoenix:

R 3.2.5
library(nlme)
Subj   <- c(1, 2, 4, 5, 6, 4, 5, 6, 7, 8, 9, 7, 8, 9)
Dose   <- c(25, 25, 50, 50, 50, 250, 250, 250, 75, 75, 75, 250, 250, 250)
AUC    <- c(326.40, 437.82, 557.47, 764.85, 943.59, 2040.84, 2989.29,
            4107.58, 1562.42, 982.02, 1359.68, 3848.86, 4333.10, 3685.55)
Cmax   <- c(64.82, 67.35, 104.15, 143.12, 243.63, 451.44, 393.45,
            796.57, 145.13, 166.77, 296.90, 313.00, 387.00, 843.00)
resp   <- data.frame(Subj, Dose, Cmax, AUC)
resp$Subj <- factor(resp$Subj)
muddle <- lme(log(Cmax) ~ log(Dose), data=resp, random=~1|Subj)
sum.muddle <- summary(muddle)
CI.muddle  <- intervals(muddle, level=0.9, which="fixed")
print(sum.muddle); CI.muddle$fixed[, ]
Linear mixed-effects model fit by REML
 Data: resp
       AIC      BIC    logLik
  14.24355 16.18317 -3.121774

Random effects:
 Formula: ~1 | Subj
        (Intercept)  Residual
StdDev:   0.3347319 0.1206792

Fixed effects: log(Cmax) ~ log(Dose)
                Value  Std.Error DF   t-value p-value
(Intercept) 1.9413858 0.24314072  7  7.984618   1e-04
log(Dose)   0.7617406 0.04727976  5 16.111347   0e+00
 Correlation:
          (Intr)
log(Dose) -0.863

Standardized Within-Group Residuals:
        Min          Q1         Med          Q3         Max
-1.07547728 -0.35579449 -0.03301391  0.45088601  0.91853654

Number of Observations: 14
Number of Groups: 8
                lower      est.     upper
(Intercept) 1.4807366 1.9413858 2.4020350
log(Dose)   0.6664696 0.7617406 0.8570116


Phoenix 64 (6.4.0.768)
Model Specification and User Settings
       Dependent variable : logCmax
                Transform : None
              Fixed terms : int+logDose
    Random/repeated terms : Subject
    Denominator df option : satterthwaite

Class variables and their levels
                  Subject :    1   2   4   5   6   7   8   9

Final variance parameter estimates:
             Var(Subject)    0.112045
            Var(Residual)    0.0145635

     REML log(likelihood)   -0.623363
 -2* REML log(likelihood)    1.24673
 Akaike Information Crit.    9.24673
   Schwarz Bayesian Crit.   11.1864

   Effect:Level Estimate   StdError Denom_DF  T_stat  P_value Conf T_crit  Lower_CI  Upper_CI
---------------------------------------------------------------------------------------------
           int  1.9413858 0.2431407   9.2    7.98462 1.980E-5   90  1.829 1.4967592 2.3860125
logDose:logDose 0.7617406 0.0472798   5.9   16.11135 4.241E-6   90  1.949 0.6695783 0.8539029


Estimates and their SEs are exactly the same. CIs are not (due to different DFs?).


PS: Any ideas how to weight by 1/log(Dose) in lme()? It is suggested by Chow/Liu and gives me a better fit in Phoenix.

Cheers,
Helmut Schütz
zizou
Regular

Plzeň, Czech Republic,
2016-05-22 19:07

@ Helmut
Posting: # 16348
Views: 12,478
 

 Diagnostics: R

Dear Helmut,
» Estimates and their SEs are exactly the same. CIs are not (due to different DFs?).
Exactly, different DFs.

# equivalent of your code:
library(lme4)
Subj   <- c(1, 2, 4, 5, 6, 4, 5, 6, 7, 8, 9, 7, 8, 9)
Dose   <- c(25, 25, 50, 50, 50, 250, 250, 250, 75, 75, 75, 250, 250, 250)
AUC    <- c(326.40, 437.82, 557.47, 764.85, 943.59, 2040.84, 2989.29,
            4107.58, 1562.42, 982.02, 1359.68, 3848.86, 4333.10, 3685.55)
Cmax   <- c(64.82, 67.35, 104.15, 143.12, 243.63, 451.44, 393.45,
            796.57, 145.13, 166.77, 296.90, 313.00, 387.00, 843.00)
resp   <- data.frame(Subj, Dose, Cmax, AUC)
muddle <- lmer(log(Cmax) ~ log(Dose) + (1|Subj), data=resp)
summary(muddle)


DFs, p-values, and CIs are not reported by lmer for the reasons stated there.

For more decimal places:
print(muddle, digits=7, ranef.comp=c("Var","Std.Dev."))
# REML criterion at convergence: 6.2435 (same as in SPSS 6.243548)
# note: -2 (Restricted) Log Likelihood from R lme: (-2)*(-3.121774) = 6.243548


Some reference for lmer can be found in this PDF.

For DFs, p-values, ...: library lmerTest can be used, see this PDF (page 2 - description, details, references to SAS).
library(lmerTest) # Library includes modification of function lmer (if I am not mistaken).
muddle <- lmer(log(Cmax) ~ log(Dose) + (1|Subj), data=resp)
summary(muddle)


Results:
Linear mixed model fit by REML t-tests use Satterthwaite approximations to
  degrees of freedom [lmerMod] Formula: log(Cmax) ~ log(Dose) + (1 | Subj)
   Data: resp

REML criterion at convergence: 6.2

Scaled residuals:
     Min       1Q   Median       3Q      Max
-1.07548 -0.35579 -0.03301  0.45089  0.91854

Random effects:
 Groups   Name        Variance Std.Dev.
 Subj     (Intercept) 0.11205  0.3347 
 Residual             0.01456  0.1207 
Number of obs: 14, groups:  Subj, 8

Fixed effects:
            Estimate Std. Error      df t value Pr(>|t|)   
(Intercept)  1.94139    0.24314 9.19600   7.985 1.98e-05 ***
log(Dose)    0.76174    0.04728 5.89600  16.111 4.24e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
          (Intr)
log(Dose) -0.863

I think there is a little bug in the df display: rounded to 3 decimal places but visible to 5 (admittedly I do not have the latest version of R).
summary(muddle)$coefficients["(Intercept)","df"]  # [1] 9.195607

For calculation of the 90% CIs (below is only an example for the intercept's lower and upper limits):
alpha=0.05
summary(muddle)$coefficients["(Intercept)","Estimate"] - qt(1-alpha,summary(muddle)$coefficients["(Intercept)","df"]) * summary(muddle)$coefficients["(Intercept)","Std. Error"]
summary(muddle)$coefficients["(Intercept)","Estimate"] + qt(1-alpha,summary(muddle)$coefficients["(Intercept)","df"]) * summary(muddle)$coefficients["(Intercept)","Std. Error"]


Best regards,
zizou
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-23 01:22

@ zizou
Posting: # 16349
Views: 12,394
 

 Diagnostics: R

Hi zizou,

» DF, p-values, CIs not reported in lmer for the reasons stated there.

Yep, we know. :-D

» library lmerTest can be used,

THX; I forgot!

» I think there is a little bug in the df display: rounded to 3 decimal places but visible to 5.

Same in R 3.2.5 and lmerTest 2.0-30.

» summary(muddle)$coefficients["(Intercept)","df"]
»  # [1] 9.195607


Or easier:
summary(muddle)$coefficients[, "df"]
(Intercept)   log(Dose)
   9.195607    5.896110


summary(muddle, digits=8) doesn’t help. Or as Carl Witthoft wrote: IMHO the proper use of summary is indicated by its name: a way to get a quick-look at your data.

» For calculation of 90% CIs: (below is only example for Intercept lower and upper limit)
» alpha=0.05
» summary(muddle)$coefficients["(Intercept)","Estimate"] - qt(1-alpha,summary(muddle)$coefficients["(Intercept)","df"]) * summary(muddle)$coefficients["(Intercept)","Std. Error"]

Perfect! Little bit shorter:
summary(muddle)$coefficients[1, 1] +c(-1,+1)*qt(1-0.05, summary(muddle)$coefficients[1, 3]) * summary(muddle)$coefficients[1, 2]
[1] 1.496758 2.386013
summary(muddle)$coefficients[2, 1] +c(-1,+1)*qt(1-0.05, summary(muddle)$coefficients[2, 3]) * summary(muddle)$coefficients[2, 2]
[1] 0.6695768 0.8539044


So what do we have? A fair agreement across software – except SAS.

Software               Estimate       90% CI       width
NCSS               B0 1.9414    1.4849    2.3978    0.9129
                   B1 0.7617    0.6659    0.8576    0.1917
Phoenix/WinNonlin  B0 1.9413858 1.4967592 2.3860125 0.8893
                   B1 0.7617406 0.6695783 0.8539029 0.1843
R lmerTest         B0 1.9413858 1.4967584 2.3860132 0.8893
                   B1 0.7617406 0.6695768 0.8539044 0.1843
SAS (code?)        B0 1.94      1.54      2.35      0.8100
                   B1 0.7615    0.679     0.844     0.1650

I trust Phoenix and R most. Given the widths of the CIs: is NCSS too conservative and SAS too liberal? Unfortunately we don’t have Smith’s code. Would be great if one of our SASians could jump in.

Cheers,
Helmut Schütz
d_labes
Hero

Berlin, Germany,
2016-05-24 12:02
(edited by d_labes on 2016-05-24 12:49)

@ Helmut
Posting: # 16358
Views: 12,326
 

 SASian potpourri

Dear All!

» ... Would be great if one of our SASians could jump in.

Here we go:

Software               Estimate       90% CI
NCSS               B0 1.9414    1.4849    2.3978   
                   B1 0.7617    0.6659    0.8576   
Phoenix/WinNonlin  B0 1.9413858 1.4967592 2.3860125
                   B1 0.7617406 0.6695783 0.8539029
R lmerTest         B0 1.9413858 1.4967584 2.3860132
                   B1 0.7617406 0.6695768 0.8539044
SAS (ML)           B0 1.9413856 1.5174574 2.3675030
                   B1 0.7614780 0.6738351 0.8491210
SAS (REML)         B0 1.9413856 1.4807360 2.4020351
                   B1 0.7617407 0.6664695 0.8570118
Smith et al.       B0 1.94      1.54      2.35     
                   B1 0.7615    0.679     0.844    
SAS (ML/satterth)  B0 1.9425    1.5391    2.3458   
                   B1 0.7615    0.6789    0.8441   

Smith et al. used ML (the usual maximum likelihood) as the estimation method! The degrees-of-freedom method is by default “containment” (7 for the intercept, 5 for the regression slope). Astonishingly enough, I couldn't reproduce their 90% CIs to a sufficient degree of accuracy with this ddfm method, at least not sufficiently for me.
Choosing ddfm=SATTERTHWAITE gives the desired results.

SAS code:
Proc mixed data=dp method=ML;
  class subject;
  model lnCmax=lnd /s cl alpha=0.1 ddfm=SAT;
  random intercept /subject=subject;
run;



edit: the missing REML+SATTERTH
SAS (REML/satterth) B0 1.9414    1.4968    2.3860     
                    B1 0.7617    0.6696    0.8539   

Bingo! Same as R lmer() and Phoenix/WinNonlin, at least to 4 decimals, the SAS default output format. Too lazy to tease out more digits :cool:.

Regards,

Detlew
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-24 14:27

@ d_labes
Posting: # 16359
Views: 12,363
 

 Compilation

Dear all,

in R’s lmerTest one can get maximum-likelihood estimation by setting the argument REML=FALSE in the model statement (the default is REML=TRUE). Satterthwaite’s DFs are 10.858 for the intercept and 6.884 for the slope. In lme() use method="ML" (the default is method="REML"). Below is a compilation (in analogy to BE, rounded to four decimal figures and grouped by “similarity”):

Software        Method      Estimate      90% CI   
═══════════════════════════════════════════════════
SAS (Smith)   ML/Satter.   B0 1.94    1.54   2.35  
                           B1 0.7615  0.679  0.844 
SAS           ML/Satter.   B0 1.9425  1.5391 2.3458
                           B1 0.7615  0.6789 0.8441
R lmerTest    ML/Satter.   B0 1.9425  1.5391 2.3458
                           B1 0.7615  0.6789 0.8441
───────────────────────────────────────────────────
R lme         ML           B0 1.9425  1.5175 2.3675
                           B1 0.7615  0.6738 0.8491
───────────────────────────────────────────────────
SAS           ML/contain.  B0 1.9414  1.5175 2.3675
                           B1 0.7615  0.6738 0.8491
───────────────────────────────────────────────────
SAS           REML/Satter. B0 1.9414  1.4968 2.3860
                           B1 0.7617  0.6696 0.8539
PHX/WNL       REML/Satter. B0 1.9414  1.4968 2.3860
                           B1 0.7617  0.6696 0.8539
R lmerTest    REML/Satter. B0 1.9414  1.4968 2.3860
                           B1 0.7617  0.6696 0.8539
───────────────────────────────────────────────────
NCSS          method?      B0 1.9414  1.4849 2.3978
                           B1 0.7617  0.6659 0.8576
───────────────────────────────────────────────────
SAS           ML/contain.  B0 1.9414  1.4807 2.4020
                           B1 0.7617  0.6665 0.8570
R lme         REML         B0 1.9414  1.4807 2.4020
                           B1 0.7617  0.6665 0.8570
───────────────────────────────────────────────────
PHX/WNL       REML/resid.  B0 1.9414  1.4515 2.4313
                           B1 0.7617  0.6665 0.8570


My preference is REML/Satterthwaite because one could reproduce results in three different software packages. I would avoid NCSS; no idea how the calculation is done.

Cheers,
Helmut Schütz

The quality of responses received is directly proportional to the quality of the question asked. ☼
Science Quotes
d_labes
Hero

Berlin, Germany,
2016-05-24 16:33

@ Helmut
Posting: # 16360
Views: 12,244
 

 REML or not

Dear Helmut,

» My preference is REML/Satterthwaite because one could reproduce results in three different software packages.
Emphasis by me.

That's not really a reason. Five SAS implementations are as correct as one ;-).

From a description of Proc MIXED:
"For balanced data the REML method of PROC MIXED provides estimators and hypotheses test results that are identical to ANOVA (OLS method of GLM), provided that the ANOVA estimators of variance components are not negative. The estimators, as in GLM, are unbiased and have minimum variance properties. The ML estimators are biased in that case. In general case of unbalanced data neither the ML nor the REML estimators are unbiased and they do not have to be equal to those obtained from PROC GLM."

The first sentences seem to point to an advantage of REML over ML estimation, leaving the question of the ddfm aside.

Regards,

Detlew
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-24 16:57

@ d_labes
Posting: # 16361
Views: 12,192
 

 complete or not

Dear Detlew,

» » My preference is REML/Satterthwaite because one could reproduce results in three different software packages.
» Emphasis by me.
»
» That's not really a reason.

I stand corrected. ;-)

» The first sentences seem to point to an advantage of REML over ML estimation, leaving the question of the ddfm aside.

Yep. In dose proportionality, quite often (many) more dose levels than in BE come into play. The most I have seen so far was five. Unbalanced and incomplete data are more the rule than the exception. That’s why I would prefer REML.

Cheers,
Helmut Schütz
AngusMcLean
Senior

USA,
2016-05-26 16:46

@ Helmut
Posting: # 16364
Views: 12,042
 

 Compilation

» » My preference is REML/Satterthwaite because one could reproduce results in three different software packages. I would avoid NCSS; no idea how the calculation is done.

Jerry from NCSS says that the default Likelihood Type is REML.
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-26 19:13

@ AngusMcLean
Posting: # 16365
Views: 11,963
 

 doubts about NCSS

Hi Angus,

» Jerry from NCSS says that the default Likelihood Type is REML.

OK; REML is a subset of maximum likelihood. Since in this post you reported 9.2 degrees of freedom for the intercept and 5.9 for the slope, why do NCSS’s 90% CIs not agree with the other packages (only the PEs do)?
Maybe Jerry could register here instead of playing Chinese whispers. ;-)

Cheers,
Helmut Schütz
zizou
Regular

Plzeň, Czech Republic,
2016-05-26 23:38

@ Helmut
Posting: # 16366
Views: 12,017
 

 Doubts about NCSS

Dear Helmut.

» Since in this post you reported 9.2 degrees of freedom for the intercept and 5.9 for the slope, why do NCSS’s 90% CIs not agree with the other packages (only the PEs do)?

According to the provided results, there are differences in the standard errors, so I guess the differences in the 90% CIs are due to the SEs. You know [Lower Limit, Upper Limit] = PE ∓ SE*t(1-alpha, df). It seems that only the SEs on the right side of the equation differ from the other packages. :confused:

From the compilation post: according to the PEs, NCSS uses REML, and according to the degrees of freedom (9.2 and 5.9) NCSS uses Satterthwaite's method (if it isn't just a lucky coincidence).
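For illustration, one can back-solve that CI equation for the SE that NCSS must have used (a quick sketch in Python; the t-quantiles for the fractional dfs are looked-up approximations, and all other figures are the ones quoted in this thread):

```python
# Back-calculate the SEs NCSS must have used from its reported PEs,
# 90% upper confidence limits and dfs (values quoted in this thread).
# The t-quantiles for fractional dfs are approximate look-ups (assumption).
t_095_df_9_2 = 1.829   # ~ t(0.95, df = 9.2)
t_095_df_5_9 = 1.950   # ~ t(0.95, df = 5.9)

se_b0 = (2.3978 - 1.9414) / t_095_df_9_2   # intercept: ~0.2495
se_b1 = (0.8576 - 0.7617) / t_095_df_5_9   # slope:     ~0.0492
# Both match NCSS's reported SEs (0.2496, 0.0492) rather than the
# 0.2431 / 0.0473 obtained by the other packages -- supporting the
# conclusion that the SEs, not the dfs, are where NCSS differs.
```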

Best regards,
zizou

REML, it's restricted!
AngusMcLean
Senior

USA,
2016-05-28 00:51

@ Helmut
Posting: # 16368
Views: 11,790
 

 Compilation

» My preference is REML/Satterthwaite because one could reproduce results in three different software packages. I would avoid NCSS; no idea how the calculation is done.

Jerry from NCSS reports the following:

We have looked into this and resolved it in our own minds. NCSS uses the Kenwood-Rogers method for degrees of freedom which is an extension of the Satterthwaite method.

This NCSS method is:

NCSS REML/extension of Satterthwaite: B0 1.9414 (1.4849–2.3978); B1 0.7617 (0.6659–0.8576)
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-28 15:59

@ AngusMcLean
Posting: # 16369
Views: 11,943
 

 Kenward-Roger?

Hi Angus,

» Jerry from NCSS reports the following:
»
» We have looked into this and resolved it in our own minds. NCSS uses the Kenwood-Rogers method for degrees of freedom which is an extension of the Satterthwaite method.

Really? Still, I can’t reproduce NCSS’s results in R (data prepared as in this post).

library(lmerTest)
library(pbkrtest)
muddle   <- lmer(log(Cmax) ~ log(Dose) + (1|Subj), data=resp)
sum.Satt <- summary(muddle, ddf="Satterthwaite")
res.Satt <- data.frame(0:1, sum.Satt$coeff[ , "Estimate"],
                       sum.Satt$coeff[, 2],
                       sum.Satt$coeff[, 3],
                       sum.Satt$coeff[, 1]-
                       qt(1-0.05, sum.Satt$coeff[, 3])*
                       sum.Satt$coeff[, 2],
                       sum.Satt$coeff[, 1]+
                       qt(1-0.05, sum.Satt$coeff[, 3])*
                       sum.Satt$coeff[, 2],
                       row.names=NULL)
sum.KR <- summary(muddle, ddf="Kenward-Roger")
res.KR <- data.frame(0:1, sum.KR$coeff[ , "Estimate"],
                     sum.KR$coeff[, 2],
                     sum.KR$coeff[, 3],
                     sum.KR$coeff[, 1]-
                     qt(1-0.05, sum.KR$coeff[, 3])*
                     sum.KR$coeff[, 2],
                     sum.KR$coeff[, 1]+
                     qt(1-0.05, sum.KR$coeff[, 3])*
                     sum.KR$coeff[, 2],
                     row.names=NULL)
names(res.KR) <- names(res.Satt) <- c("B", "PE", "SE", "df",
                                      "CLlower", "CLupper")
print(round(res.Satt, 4), row.names=FALSE)
print(round(res.KR, 4), row.names=FALSE)


Satterthwaite
B   PE     SE     df   CLlower CLupper
0 1.9414 0.2431 9.1956  1.4968  2.3860
1 0.7617 0.0473 5.8961  0.6696  0.8539

Kenward-Roger
B   PE     SE     df   CLlower CLupper
0 1.9414 0.2431 9.0915  1.4962  2.3866
1 0.7617 0.0473 5.7527  0.6692  0.8543


Modified results of NCSS from a previous post:
B   PE     SE     df   CLlower CLupper
0 1.9414 0.2496 9.2     1.4849  2.3978
1 0.7617 0.0492 5.9     0.6659  0.8576


As zizou suspected in this post, there are differences in the SEs (therefore we get different CIs even if the DFs are identical) – which leaves open the question why they differ from the other packages. NCSS seems to use Satterthwaite’s DFs and not Kenward-Roger’s (contrary to the documentation and what Jerry told you).

Cheers,
Helmut Schütz
Shuanghe
Regular

Spain,
2019-01-04 17:45

@ Helmut
Posting: # 19728
Views: 959
 

 90% confidence interval for R_dnm

Dear all,

Man, I should have checked here before I started my work. It could have saved me a lot of time...

Recently I was helping one of my colleagues with a dose-proportionality study, and the power model in Smith's article is the preferred method. While I did figure out the "correct" degrees-of-freedom method and reproduced all reported results such as the intercept, slope, and the 90% CIs of those values, ρ1, ρ2, the ratio of dose-normalised geometric mean values Rdnm, ..., I could not figure out how Smith obtained the 90% confidence interval for Rdnm (0.477, 0.698).

According to Smith, testing θL < Rdnm < θU to draw a conclusion of dose proportionality is equivalent to testing 1 + ln(θL)/ln(r) < β1 < 1 + ln(θU)/ln(r). The latter is what we do, and obtaining the 90% CI for the slope is easy enough, so we can judge dose proportionality based on that. However, it would also be interesting to be able to reproduce all of Smith's results.
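The equivalence is easy to verify numerically (a little sketch in Python, purely illustrative, with r = 10 as in Smith's Cmax example):

```python
import math

r = 10.0                  # ratio of highest to lowest dose (250/25)
theta_L, theta_U = 0.8, 1.25

# Slope bounds implied by theta_L < R_dnm < theta_U with R_dnm = r**(beta1 - 1)
lo = 1 + math.log(theta_L) / math.log(r)   # ~0.90309
hi = 1 + math.log(theta_U) / math.log(r)   # ~1.09691

def r_dnm(beta1):
    return r ** (beta1 - 1)

# The two criteria agree for any slope away from the boundary values
for beta1 in (0.85, 0.92, 1.0, 1.05, 1.15):
    assert (lo < beta1 < hi) == (theta_L < r_dnm(beta1) < theta_U)
```

With Smith's Cmax slope of 0.7615 both criteria fail, as reported in the paper.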

In his article (p. 1282, 2nd paragraph), Smith wrote that "The 90% CI for the difference in log-transformed means was calculated within the MIXED procedure. Exponentiation of each limit and division by r gave the 90% CI for Rdnm. This CI lay completely outside (0.80, 1.25), indicating a disproportionate increase in Cmax."


What limit was he talking about? My SAS code is basically identical to Detlew's above, and there is no apparent "limit" in the output similar to what Smith mentioned. I also tried adding an ESTIMATE or LSMEANS statement with various codes, but couldn't get it at all. Any help?

Many thanks.

All the best,
Shuanghe
d_labes
Hero

Berlin, Germany,
2019-01-05 14:01

@ Shuanghe
Posting: # 19731
Views: 923
 

 90% confidence interval for R_dnm

Dear Shuanghe,

First: Happy New Year to You and to All.

» Man, I should have checked here before I started my work. It could have saved me a lot of time...
:cool: A late but hopefully not too late insight. As I sometimes state: all answers (to questions asked or not asked) are here. You only have to dig out what you are interested in.

» Recently I was helping one of my colleagues with a dose-proportionality study, and the power model in Smith's article is the preferred method. While I did figure out the "correct" degrees-of-freedom method and reproduced all reported results such as the intercept, slope, and the 90% CIs of those values, ρ1, ρ2, the ratio of dose-normalised geometric mean values Rdnm, ..., I could not figure out how Smith obtained the 90% confidence interval for Rdnm (0.477, 0.698).

Can you please elaborate on where your difficulties arose? Were you able to obtain a point estimate of Rdnm but not a 90% CI thereof?

» ...
» In his article (p. 1282, 2nd paragraph), Smith wrote that "The 90% CI for the difference in log-transformed means was calculated within the MIXED procedure. Exponentiation of each limit and division by r gave the 90% CI for Rdnm ...

For me this is a dubious description (the whole paragraph) which I don't understand at all. The difference in log-transformed means of what?

I would go with the formula of Rdnm

R_dnm = r^(beta1 - 1)
with r= ratio of highest to lowest dose and beta1=slope


Use this with the point estimate of beta1 from your model and its 90% CI limits and you obtain the 90% CI of Rdnm if I'm correct.
Example Cmax in Table 2 of the Smith et al. paper:
beta1 = 0.7615 (0.679, 0.844)
R console:
c(10^(0.7615-1), 10^(0.679-1), 10^(0.844-1))
[1] 0.5774309 0.4775293 0.6982324

Smith reported in Table 2:
    0.577    (0.477,    0.698)
Good enough?

Hope this helps.

Regards,

Detlew
mittyri
Senior

Russia,
2019-01-06 17:00

@ d_labes
Posting: # 19735
Views: 875
 

 Visualizing lmer and limits

Dear Shuanghe, Dear Detlew,

Happy New Year!

Here's my attempt to visualize the results of lmer.
I tried to add the acceptance criteria to the plot.
What do you think? Is that suitable? Did I understand the article correctly?

library(lme4)
# the next one has A LOT of dependencies
# I added it to visualize confidence limits for each observation, by the way for the ribbon only max and min are used for each dose level
library(merTools)
library(ggplot2)
library(dplyr)
lowBind <- 0.8
Subj   <- c(1, 2, 4, 5, 6, 4, 5, 6, 7, 8, 9, 7, 8, 9)
Dose   <- c(25, 25, 50, 50, 50, 250, 250, 250, 75, 75, 75, 250, 250, 250)
AUC    <- c(326.40, 437.82, 557.47, 764.85, 943.59, 2040.84, 2989.29,
            4107.58, 1562.42, 982.02, 1359.68, 3848.86, 4333.10, 3685.55)
Cmax   <- c(64.82, 67.35, 104.15, 143.12, 243.63, 451.44, 393.45,
            796.57, 145.13, 166.77, 296.90, 313.00, 387.00, 843.00)
resp   <- data.frame(Subj, Dose, Cmax, AUC)
resp$Subj <- factor(resp$Subj)

muddle <- lmer(log(Cmax) ~ log(Dose) + (1|Subj), data=resp)

coefs <- data.frame(summary(muddle)$coefficients)
pred <- cbind(resp, predictInterval(muddle, resp, level=0.9)) %>%
  mutate(minDose = min(Dose),
         lowL = exp(coefs['(Intercept)','Estimate'])*Dose^(1+log(lowBind)/log(Dose/minDose)),
         upL = exp(coefs['(Intercept)','Estimate'])*Dose^(1+log(1/lowBind)/log(Dose/minDose))
         ) %>%
  group_by(Dose) %>%
  mutate(ymax = max(upr),
            ymin = min(lwr))

ggplot(pred,aes(x = log(Dose), y = log(Cmax))) +
  geom_point(shape = Subj, size = 2)+
  geom_point(aes(y = fit), shape = Subj, size = 2, colour = 'blue')+
  geom_ribbon(aes(ymin = ymin, ymax = ymax), fill = 'blue', alpha = .2) +
  geom_abline(intercept = coefs['(Intercept)','Estimate'], slope = coefs['log(Dose)','Estimate'], size = 2, color = 'blue', linetype = 'dashed') +
  geom_abline(intercept = coefs['(Intercept)','Estimate'] - qt(1-0.05, coefs['(Intercept)', 't.value']) * coefs['(Intercept)', 'Std..Error'],
              slope = coefs['log(Dose)', 'Estimate'] - qt(1-0.05, coefs['log(Dose)', 't.value']) * coefs['log(Dose)', 'Std..Error'], size = 2, color = 'blue', linetype = 'dotdash' ) +
  geom_abline(intercept = coefs['(Intercept)','Estimate'] + qt(1-0.05, coefs['(Intercept)', 't.value']) * coefs['(Intercept)', 'Std..Error'],
              slope = coefs['log(Dose)', 'Estimate'] + qt(1-0.05, coefs['log(Dose)', 't.value']) * coefs['log(Dose)', 'Std..Error'], size = 2, color = 'blue', linetype = 'dotdash') +
  geom_line(aes(y=log(lowL)), colour = 'red', size = 2) +
  geom_line(aes(y=log(upL)), colour = 'red', size = 2) +
  scale_y_continuous(labels = function(x) paste0(signif(x,4), "(", signif(exp(x), 4), ")")) +
  scale_x_continuous(breaks=log(Dose), labels = function(x) paste0(signif(x,4), "(", signif(exp(x), 4), ")")) 

[image]
Black dots: Observed values
Blue dots: Predicted values
Blue area: 90% prediction area for all observations
Blue dashed line: fitted regression line
Blue dot-dashed lines: 90% limits built using 90% CIs for the slope and intercept
Red lines: 80-125% acceptance limits

Kind regards,
Mittyri
Shuanghe
Regular

Spain,
2019-01-07 11:05

@ mittyri
Posting: # 19740
Views: 834
 

 Visualizing lmer and limits

Dear Mittyri,

Happy New Year!

» What do you think? Is that suitable? Did I understand the article correctly?

Very nice-looking figure, but I don't think I understand the red-line criteria. As I mentioned earlier, with my R skills I'm not really in a position to judge; I'll leave it to the other R gurus to comment.

All the best,
Shuanghe
d_labes
Hero

Berlin, Germany,
2019-01-07 15:08

@ mittyri
Posting: # 19745
Views: 812
 

 Visualizing lmer and limits

Dear Mittyri

» [image]
» Black dots: Observed values
» Blue dots: Predicted values
» Blue area: 90% prediction area for all observations
» Blue dashed line: fitted regression line
» Blue dot-dashed lines: 90% limits built using 90% CIs for the slope and intercept
» Red lines: 80-125% acceptance limits

I must confess that I don't understand what you do here with the last two points.
Dose dependent 90% limits for what using 90% CIs for the slope and intercept :confused:
Dose dependent acceptance limits :confused:.

Could you please elaborate and enlighten me?
With simple words please, not with complex sophisticated code.

The prediction area is calculated based on the dose values used in the study. Should it not be calculated on interpolated dose values to get smoother area borders?
And why did you use the prediction interval as a fit visualization? AFAIK prediction is for future observations, but I think we had the goal of visualizing the fit of our current observations. So would it not be better to use a 90% confidence interval instead?

Regards,

Detlew
mittyri
Senior

Russia,
2019-01-13 23:53

@ d_labes
Posting: # 19774
Views: 446
 

 Visualizing lmer and limits

Dear Detlew,

» » Blue dot-dashed lines: 90% limits built using 90% CIs for the slope and intercept
» » Red lines: 80-125% acceptance limits
» I must confess that I don't understand what you do here with the last two points.
From my experience: if Detlew does not understand my intentions, the intentions point in the wrong direction :-D
» Dose dependent 90% limits for what using 90% CIs for the slope and intercept :confused:
These are the lines built using the lower/upper limits of the 90% CIs for both the slope and the intercept.
For now I don't think that's correct, since the final conclusion should be made using the slope only. So I removed the intercept uncertainty from the calculation of the 90% CI prediction line.
As Dr. Smith mentioned, 'The estimate of the “intercept” parameter ... with a 90% CI of ... and its between-subject variability are not of interest here'

» Dose dependent acceptance limits :confused:.
» Could you please elaborate and enlighten me?
» With simple words please, not with complex sophisticated code.
I tried to reformulate eq. 4, which is used for the acceptance criteria.
If β1 should be less than 1 + ln(θ2)/ln(r), then what is the PK level at the current dose point which is still acceptable?
From my graph one can see that Dose = 50 is the last acceptable point.

» The prediction area is calculated based on the dose values used in the study. Should it not be calculated on interpolated dose values to get smoother area borders?
Agree!

» And why did you use the prediction interval as a fit visualization? AFAIK prediction is for future observations, but I think we had the goal of visualizing the fit of our current observations. So would it not be better to use a 90% confidence interval instead?
I stand corrected. I removed the CIs for individual observations since they are nonsense here.

library(lme4)
library(lmerTest)
library(dplyr)
library(ggplot2)
lowBind <- 0.8
Subj   <- c(1, 2, 4, 5, 6, 4, 5, 6, 7, 8, 9, 7, 8, 9)
Dose   <- c(25, 25, 50, 50, 50, 250, 250, 250, 75, 75, 75, 250, 250, 250)
Cmax   <- c(64.82, 67.35, 104.15, 143.12, 243.63, 451.44, 393.45,
            796.57, 145.13, 166.77, 296.90, 313.00, 387.00, 843.00)
resp   <- data.frame(Subj, Dose, Cmax)
resp$Subj <- factor(resp$Subj)

muddle <- lmer(log(Cmax) ~ log(Dose) + (1|Subj), data=resp, REML = FALSE)

coefs <- data.frame(summary(muddle)$coefficients)

DoseSequence <- data.frame(DoseSequence = seq(sort(unique(Dose))[1], max(Dose), length.out = 100))

LinesDF <-
  DoseSequence %>%
  mutate(Fit =  exp(coefs['(Intercept)','Estimate'] + log(DoseSequence) * coefs['log(Dose)','Estimate']),
         LowCI = exp(coefs['(Intercept)','Estimate'] + log(DoseSequence) * (coefs['log(Dose)', 'Estimate'] - qt(1-0.05, coefs['log(Dose)', 'df']) * coefs['log(Dose)', 'Std..Error'])),
         UpCI = exp(coefs['(Intercept)','Estimate'] + log(DoseSequence) * (coefs['log(Dose)', 'Estimate'] + qt(1-0.05, coefs['log(Dose)', 'df']) * coefs['log(Dose)', 'Std..Error'])),           
         LowLi = exp(coefs['(Intercept)','Estimate'] + log(DoseSequence) * (1+log(lowBind)/log(DoseSequence/min(Dose)))),
         UpLi = exp(coefs['(Intercept)','Estimate'] + log(DoseSequence) * (1-log(lowBind)/log(DoseSequence/min(Dose))))
         )

ggplot() +
  geom_point(data = resp,aes(x = Dose, y = Cmax), size = 3, fill = 'black')+
  geom_line(data = LinesDF, aes(x = DoseSequence, y = Fit), size = 2, color = 'blue', linetype = 'dashed') +
  geom_line(data = LinesDF, aes(x = DoseSequence, y = LowCI), size = 2, color = 'blue', linetype = 'dotdash' ) +
  geom_line(data = LinesDF, aes(x = DoseSequence, y = UpCI), size = 2, color = 'blue', linetype = 'dotdash' ) +
  geom_line(data = LinesDF%>%filter(DoseSequence >= sort(unique(Dose))[2]), aes(x = DoseSequence, y = LowLi), size = 2, color = 'red') +
  geom_line(data = LinesDF%>%filter(DoseSequence >= sort(unique(Dose))[2]), aes(x = DoseSequence, y = UpLi), size = 2, color = 'red') +
  scale_y_continuous(trans = 'log10') +
  theme_bw()

[image]

Kind regards,
Mittyri
Shuanghe
Regular

Spain,
2019-01-07 10:53
(edited by Shuanghe on 2019-01-07 12:51)

@ d_labes
Posting: # 19739
Views: 832
 

 90% confidence interval for R_dnm

Dear Detlew and all,

Happy New Year!

» :cool: Late but hopefully not too late insight. As I sometimes stated: All answers (of asked or not asked questions) are here. You only have to dig out what you are interested in.

Reproducing Smith's results in SAS is easy enough (except for the 90% CI of Rdnm, which is not really needed for judging dose proportionality), but I struggled for a loooooong loooooooooong time with R, as I was playing with lm and glm and, to make it worse, the random effect was coded wrongly... Obviously, my R skills need to be improved much more :crying:
Anyway, not too late for me. I'll check the R code mentioned by zizou and Helmut et al. later.

» Can you please elaborate on where your difficulties arose? Were you able to obtain a point estimate of Rdnm but not a 90% CI thereof?

yes.

» I would go with the formula of Rdnm
»

R_dnm = r^(beta1 - 1)
» with r= ratio of highest to lowest dose and beta1=slope


» Use this with the point estimate of beta1 from your model and its 90% CI limits and you obtain the 90% CI of Rdnm if I'm correct.
» R console:
» c(10^(0.7615-1), 10^(0.679-1), 10^(0.844-1))
» [1] 0.5774309 0.4775293 0.6982324
» Smith reported in Table 2:
»     0.577    (0.477,    0.698)
» Good enough?

This must be it!
According to his article, Smith used the formula EXP(beta0 + beta1*ln(dose)) to calculate each mean to get the ratio Rdnm, so the value is Rdnm = 0.577402 (red digits from my SAS). But this is equivalent to what you wrote, so using the full-precision figures we obtain the same values (90% CI of (0.477381, 0.698378)). I checked with the AUC data as well; they match what's reported in Table 2.


» Hope this helps.
Definitely helps! Thanks.

All the best,
Shuanghe
d_labes
Hero

Berlin, Germany,
2019-01-07 15:17

@ Shuanghe
Posting: # 19746
Views: 807
 

 90% confidence interval for R_dnm

Dear Shuanghe,

» » I would go with the formula of Rdnm
» » [indent]R_dnm = r^(beta1 - 1)

» ...
» This must be it!
» According to his article, Smith used the formula EXP(beta0 + beta1*ln(dose)) to calculate each mean to get the ratio Rdnm, so the value is Rdnm = 0.577402 (red digits from my SAS). But this is equivalent to what you wrote, so using the full-precision figures we obtain the same values (90% CI of (0.477381, 0.698378)). I checked with the AUC data as well; they match what's reported in Table 2.

Could you please give a detailed example for what you did here?

Regards,

Detlew
Shuanghe
Regular

Spain,
2019-01-07 17:11

@ d_labes
Posting: # 19751
Views: 800
 

 90% confidence interval for R_dnm

Dear Detlew,

» Could you please give a detailed example for what you did here?

Sorry. It seems I didn't explain it clearly. My SAS code is almost the same as yours:

PROC MIXED DATA = smith METHOD = ML;
  CLASS subj ;
  MODEL lcmax = ldose / DDFM=SATTERTHWAITE CL ALPHA = 0.1 ;
  RANDOM INTERCEPT / SUBJECT = subj;
  ODS OUTPUT solutionf = out1;
RUN;


I just take the output and calculated some of the numbers according to Smith's article to reproduce his result.

PROC SQL;
  CREATE TABLE result AS
    SELECT
      d2.estimate AS beta1,
      d2.lower AS beta1_l,
      d2.upper AS beta1_u,
      1 + LOG(0.8)/LOG(10) AS dpcrit_l,
      1 + LOG(1.25)/LOG(10) AS dpcrit_u,
      1.25**(1/MAX(1-d2.lower, d2.upper-1)) AS roh1,
      1.25**(1/MAX(d2.lower-1, 1-d2.upper)) AS roh2,
      EXP(d3.estimate + d2.estimate*LOG(250))/EXP(d3.estimate + d2.estimate*LOG(25)) * (25/250) AS R_dnm,     
      10**(d2.estimate-1) AS Rdnm,
      10**(d2.lower-1) AS Rdnm_l,
      10**(d2.upper-1) AS Rdnm_u,
      EXP(d3.estimate + d2.estimate*(LOG(25))) AS dosepred_l,
      EXP(d3.estimate + d2.estimate*(LOG(250))) AS dosepred_h
    FROM out1 AS d2, out1 AS d3
    WHERE UPCASE(d2.effect) EQ "LDOSE" AND
          UPCASE(d3.effect) EQ "INTERCEPT";
  SELECT * FROM result;
QUIT;

This gives (Smith's reported value in blue):

beta1:    0.7615 (0.7615)
beta1_l:  0.6789 (0.679)
beta1_u:  0.8441 (0.844)
dpcrit_l: 0.90309 (0.903)
dpcrit_u: 1.09691 (1.097)

Those are the slope and its 90% CI, and the corresponding criteria calculated with a dose ratio r = 10 and θL and θU of 0.8 and 1.25, respectively. The rest is for information purposes only.

roh1:        2.003428 (2.0)
roh2:        4.183885 (4.2)
R_dnm:       0.577402 (0.577)
Rdnm:        0.577402 (0.577)
Rdnm_l:      0.477381 (0.477)
Rdnm_u:      0.698378 (0.698)
dosepred_l:  80.92991 (80.9)
dosepred_h:  467.2906 (467)


where R_dnm is the one I used previously, since Smith mentioned in the article that each mean PK value was calculated as exp(beta0 + beta1*ln(dose)). R_dnm is the dose-normalised mean ratio, hence the long line in PROC SQL:
EXP(d3.estimate + d2.estimate*LOG(250))/EXP(d3.estimate + d2.estimate*LOG(25)) * (25/250).
Rdnm is the same thing but calculated with your code, which is the one I use now since it's equivalent to the previous one but much shorter. I guess my explanation in the previous post in this regard was not clear, so I added both of them here. :-D

The last two are the predicted geometric mean PK values at the dose levels of 25 and 250, as given in the 1st column of Table 2 in Smith's article. This 2nd part of the results is not really necessary to judge dose proportionality (though roh1 and roh2 are useful to know), but as I said, I prefer to reproduce all results as a kind of "validation".
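As a quick numerical cross-check of the equivalence of the two Rdnm formulas (a sketch in Python; the rounded estimates beta0 = 1.9425 and beta1 = 0.7615 are taken from the output quoted above, so small rounding differences vs. the full-precision SAS figures are expected):

```python
import math

beta0, beta1 = 1.9425, 0.7615   # rounded ML estimates quoted above

# Predicted geometric means at the lowest and highest dose
pred_25  = math.exp(beta0 + beta1 * math.log(25))    # Smith: 80.9
pred_250 = math.exp(beta0 + beta1 * math.log(250))   # Smith: 467

# Rdnm via the ratio of dose-normalised predicted means ...
r_dnm_means = (pred_250 / 250) / (pred_25 / 25)
# ... and via the shortcut r**(beta1 - 1) with r = 10
r_dnm_slope = 10 ** (beta1 - 1)
# Both give ~0.5774: the intercept cancels in the ratio, so the two
# formulas are algebraically identical.
```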

By the way, Helmut, I don't know how to make a table (e.g. with one row) with a heading here, so I manually entered all the values above; also, I copy/paste Greek letters from elsewhere. Is there any helper section with BBCode for special symbols, Greek letters and table making? I vaguely recall there used to be a section with BBCode examples, but I couldn't find it now.

All the best,
Shuanghe
d_labes
Hero

Berlin, Germany,
2019-01-07 18:24

@ Shuanghe
Posting: # 19755
Views: 795
 

 90% confidence interval for R_dnm

Dear Shuanghe,

» » Could you please give a detailed example for what you did here?
»
» Sorry. It seems I didn't explain it clearly. My SAS code is almost the same as yours:
»
» PROC MIXED DATA = smith METHOD = ML;
»   CLASS subj ;
»   MODEL lcmax = ldose / DDFM=SATTERTHWAITE CL ALPHA = 0.1 ;
»   RANDOM INTERCEPT / SUBJECT = subj;
»   ODS OUTPUT solutionf = out1;
» RUN;

» ...

Thanks for your explanation. I didn't really understand it, but ...

I see that you only used the doses 25 and 250 for Rdnm although there are entries with doses of 50 and 75 mg. But since you are using the fitted (predicted) PK metrics it doesn't matter, and it is equivalent to my suggestion.

» By the way, Helmut, I don't know how to make a table ...

Don't expect any reaction from Helmut before February. He is down under (New Zealand) and will return at the beginning of February at the earliest...

Regards,

Detlew
mittyri
Senior

Russia,
2019-01-08 00:19

@ Shuanghe
Posting: # 19758
Views: 771
 

 offtop: greek letters and tables

Dear Shuanghe,

» By the way, Helmut, I don't know how to make a table (e.g. with one row) with a heading here, so I manually entered all the values above; also, I copy/paste Greek letters from elsewhere. Is there any helper section with BBCode for special symbols, Greek letters and table making? I vaguely recall there used to be a section with BBCode examples, but I couldn't find it now.

I'm not Helmut, but I'll try to help you ;-)

Regarding tables:
please see the Note 1

An example (you can see the BBCode by posting a reply to the current message):

Dependent      Ratio[%Ref] CI_90_Lower CI_90_Upper Power   CV(%)  n
Ln(Cmax)           87.07     67.52      113.11      41.0   37.8  14


Regarding Greek letters: you can find the table of supported Greek letters via the link above and copy/paste from it. Or, to generate a symbol, make sure Num Lock is on and press the ALT key as you type the number on the numeric keypad:
ALT+224 -> α
AFAIK, with this method you are limited by Code Page 437, where ρ and some other letters are not present and have to be pasted from another page/app.

Kind regards,
Mittyri
Helmut
Hero
avatar
Homepage
Vienna, Austria,
2016-05-18 14:44

@ mittyri
Posting: # 16323
Views: 12,785
 

 Smith’s paper

Hi Mittyri,

» Is it possible to make a dataset for validation? Do we have anywhere the reference dataset

You can download Smith’s paper here. We were exploring the Cmax data of Table I.

» and the accurate result?

Define accurate. ;-)

Cheers,
Helmut Schütz
d_labes
Hero

Berlin, Germany,
2019-01-05 15:00

@ Helmut
Posting: # 19732
Views: 917
 

 Smith’s paper

Dear all,

» You can download Smith’s paper here.

The link doesn't work anymore. Use this one instead.

Regards,

Detlew

The BIOEQUIVALENCE / BIOAVAILABILITY FORUM is hosted by
BEBAC Ing. Helmut Schütz