AngusMcLean ★★ USA, 20160511 16:55 Posting: # 16294 Views: 20,512 

We have completed a dose-proportionality study using the lowest and the highest strength of a modified-release formulation of a class I drug. The approach was the usual crossover design with 20 subjects (AB, BA); the PK parameters Cmax and AUC0–t were dose-normalized prior to the bioequivalence analysis. No problem there: we are comfortably within the BE limits (0.80–1.25). I use Phoenix WinNonlin V6.4, so the within-subject and between-subject variance values for Cmax and AUC0–t appear in the output. Another worker has used the power model with the same data set for dose proportionality, as described by Brian Smith in Pharmaceutical Research in the year 2000; the paper is entitled "Confidence Interval Criteria for Assessment of Dose Proportionality". His point estimates are much the same as with my approach, but he quotes a 98% CI. His CI values are also within the limits. This worker has also calculated within-subject and between-subject variance values for the parameters. There are large differences in the within-subject variances compared with my approach, but the between-subject variances are very similar for both Cmax and AUC. Should the within-subject and between-subject variances of Cmax and AUC0–t be very similar for both approaches? Angus
Edit: Category changed; see also this post #1. Please don’t shout! [Helmut]
Helmut ★★★ Vienna, Austria, 20160512 14:34 @ AngusMcLean Posting: # 16299 Views: 19,417 

Hi Angus,
» We have completed a dose proportionality study […] No problem we are BE comfortably within the limits (0.80–1.25).
Testing only two levels for dose proportionality is somewhat unconventional.
» Another worker has used the power model with the same data set for dose proportionality […]
Does it look like this (high dose = 8× low dose)?
» The point estimates are much the same as my approach, […]
Numbers? AUC only – C_{max} is of limited value in DP.
» […] but he quotes 98% CI.
Smells of Bonferroni’s adjustment for four simultaneous (and independent) tests in order to control the familywise error rate: a 100(1–2α/4) = 97.5% CI and FWER 1–(1–α/4)^{4} = 0.0491. In the power model we have only two parameters (the coefficient α and the exponent β): E[PK] = α·D^{β} (independent of the number of dose levels tested), and we are interested only in β. I don’t see why a multiplicity adjustment was done.
» His CI values are also within the limits.
How did he get a CI of β? With two dose levels we have zero degrees of freedom for a model with two parameters. Before we can discuss variances, we need more information. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
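The multiplicity arithmetic above can be checked numerically (a minimal sketch in Python; α = 0.05 and four tests, as in the post – note the adjusted per-test level is α/4 = 0.0125):

```python
# Bonferroni adjustment for m = 4 simultaneous tests at overall alpha = 0.05
alpha = 0.05
m = 4
alpha_adj = alpha / m                 # per-test level: 0.0125
ci_level = 100 * (1 - 2 * alpha_adj)  # two-sided CI level: 97.5%
fwer = 1 - (1 - alpha_adj) ** m       # FWER of m independent tests at alpha_adj

print(ci_level)        # 97.5
print(round(fwer, 4))  # 0.0491
```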
AngusMcLean ★★ USA, 20160513 16:40 @ Helmut Posting: # 16301 Views: 19,340 

Hello: It is a very well-known drug and many MR formulations are on the market (you have experience with this drug). All except one formulation show dose proportionality from the lowest to the highest strength; the one which does not is almost dose-proportional. We used that in our clinical protocol to justify our study design of using just two doses. We do have intermediate doses between the low and the high. There are only two dose levels to plot. The relationship is as follows:
LnPK = B0 + B1*Ln(dose), where LnPK pertains to Cmax or AUC.
So we have a regression line going through the points: we evaluate the slope (B1), the intercept, and the confidence interval about the slope to evaluate dose proportionality. Brian Smith in Pharm Research (year 2000) has extended the approach of the original UK working party. It seems that you can also calculate intrasubject and intersubject variance, e.g., for AUC and partial AUC, from this approach. I do not follow how to do it. I use the usual intrasubject and intersubject values from Phoenix WinNonlin 6.4 and I am happy with that. Angus
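With only two dose levels the power-model fit reduces to a line through two points, so the slope can be written down directly. A minimal sketch (Python for illustration; the doses and AUC values below are hypothetical, not from the study):

```python
import math

# Power model: ln(PK) = B0 + B1*ln(dose); with two dose levels the slope
# is simply the ratio of the log differences.
d_lo, d_hi = 25.0, 200.0        # hypothetical 8-fold dose range
auc_lo, auc_hi = 100.0, 760.0   # hypothetical AUC values

b1 = math.log(auc_hi / auc_lo) / math.log(d_hi / d_lo)  # slope B1
b0 = math.log(auc_lo) - b1 * math.log(d_lo)             # intercept B0

# Dose-normalised ratio; algebraically identical to r^(B1-1), r = dose ratio
r_dnm = (auc_hi / d_hi) / (auc_lo / d_lo)

print(round(b1, 3))     # 0.975 (slightly less than proportional)
print(round(r_dnm, 2))  # 0.95
```

Because the line passes exactly through both points, there are no residual degrees of freedom left for a CI of B1 – which is Helmut's point about needing the mixed-effects extension.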
Helmut ★★★ Vienna, Austria, 20160514 02:26 @ AngusMcLean Posting: # 16302 Views: 19,590 

Hi Angus,
» It is a very well known drug and many MR formulations are on the market. (You have experience with this drug).
If we are talking about the same goody: watch out for polymorphism… Sometimes you have poor metabolizers in the study where enzymes get saturated at higher doses. In one subject I once got a slope of 1.54 for AUC and 2.16 for C_{max} over an only twofold dose range! For the other subjects I got 1.05 and 1.02 with very narrow CIs.
» There are only two dose levels to plot. The relationship is as follows:
» LnPK=B0+B1*Ln(dose) where LnPK pertains to Cmax or AUC.
» So we have a regression line going through the points: we evaluate the slope (B1), intercept
OK, so far.
» and the confidence intervals about the slope to evaluate dose proportionality.
This is beyond me. df = n – p, where n is the number of data points and p the number of parameters. How can you calculate a CI with df = 0?
» Brian Smith in Pharm Research year 2000 has extended the approach from the original UK working party.
Yep. Smith et al. use a mixed-effects model, where subjects are a random effect. Thus we increase n. Now a CI is possible even for p = 2.
» It seems that you can calculate intrasubject and intersubject variance e.g. for AUC and partial AUC from this approach.
Correct.
» I do not follow how to do it. I use the usual intrasubject and intersubject values from Phoenix WinNonlin 6.4 and I am happy with that.
If you are happy with that, what is your question? If you want to reproduce Smith’s results in Phoenix/WinNonlin: Start with a worksheet (columns subject, dose, Cmax, AUC, whatsoever). Log-transform dose, Cmax, … and weight=1/logCmax, … Send to LME. Map Subject as Classification, logCmax as Regressor, and logCmax as Dependent.
Model Specification: logCmax
Fixed Effects Confidence Level: 90%
Variance Structure / Random 1: Subject
With Smith’s C_{max} data of Table 1 I got for the slope: 0.7617 (90% CI: 0.6696, 0.8539), slightly different from the reported 0.7615 (0.679, 0.844). Why? Dunno.
See also chapter 18.3 in Chow/Liu. Without explanation they recommend a 95% CI there, but a 90% CI when elaborating Smith’s approach. In general I prefer a weighted model (hence the transformation above); it fits much better (compare SSQ, AIC, Var(Subject), and Var(Residual)). PS: Can you ask “the other worker” why he/she calculated the 98% CI? — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
AngusMcLean ★★ USA, 20160514 18:54 (edited by AngusMcLean on 20160514 22:06) @ Helmut Posting: # 16303 Views: 19,231 

Helmut: My apologies – it is a typo; it is 95%, not 98%. Yes, I am happy with the Phoenix WinNonlin calculations for within-subject and between-subject variance. The reason for my interest is that the other worker, following Smith's method, is producing values which are much lower than my Phoenix values for the within-subject variance. It seems to me that the values from the two methods should be much the same? We will be speaking next week and I am going to ask him exactly how he derived his values for the data set. We each have the same data set; he got the data from me. Obviously the other worker is producing a set of values that are much lower and make the formulation appear to have lower within-subject variability. Thank you for the steps above: I do see that LnCmax cannot be both the dependent and the regressor (I think LnDose is the regressor). I have tried to run the linear mixed-effects model, but I cannot repeat your results. The program ran, but my residual variance was 0.154. I am thinking that maybe my input file does not have the structure needed; e.g., subjects 4, 5, 6 in the Smith data were treated at 50 mg and 250 mg, so do you need to differentiate by including period 1 and period 2 variables in the input file?
Helmut ★★★ Vienna, Austria, 20160515 14:47 @ AngusMcLean Posting: # 16305 Views: 19,314 

Hi Angus,
» My apologies it is a typo it is 95% not 98%.
Whew! Confused me.
» I am happy with the Phoenix WinNonlin calculations […] It seems to me that the values from the two methods should be much the same?
Variances should be similar. Smith’s data don’t help because the information is incomplete (crossover or paired? sequences, periods?). Here is one of my studies (6×3 Williams’ design), dose-normalized vs. the power model [table not reproduced].
» I do see that LnCmax cannot be both the dependent and the regressor (I think LnDose is the regressor).
Exactly. I uploaded a project file at Certara’s forum. Compare it to your setup.
» I have tried to run the linear mixed effects model, but I cannot repeat your results. […] do you need to differentiate by including a period 1 and period 2 variable in the input file?
I have no clue how the subjects were treated… Maybe it was a dose escalation? — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
AngusMcLean ★★ USA, 20160515 15:17 @ Helmut Posting: # 16306 Views: 19,236 

Helmut: I am having difficulty with Phoenix: I get an error message when the program loads, relating to a missing framework. It tells me that the program may not work properly. However, I have eventually been able to get your file to run OK. It is a dose-escalation study … very early development. No study design per se is given. After this success I then tried the data set I had already studied for dose proportionality by dose-normalized BE in Phoenix. There are only two doses, given to 20 subjects in a crossover design. So, using your structure, I set up the input for the LME model, with lndose as given by you. I was able to get an estimate of the residual error for both Cmax and AUC0–t. The values were indeed very similar to what I had obtained in Phoenix (e.g., the CV(%) of Cmax was 19.9 compared with 20.06 by the BE approach; the CV(%) of AUC0–t was 12.2 compared with 12.6% by the BE approach). The other worker has reported a CV(%) of 0.1 for AUC0–t. I am wondering if using SAS you can get a result of 0.1. For Cmax the other worker reports a CV(%) of 10.1. I do not think using SAS can give such a difference?
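As an aside on comparing such figures: for log-normal data a variance on the log scale converts to a CV via CV = √(exp(σ²) − 1), the standard relation in BE work. A quick sketch (Python for illustration; 0.0146 is the within-subject variance for Smith's Cmax data quoted later in the thread, the other numbers are for orientation only):

```python
import math

def cv_from_logvar(var_w):
    """CV for log-normal data, given the variance on the log scale."""
    return math.sqrt(math.exp(var_w) - 1.0)

# A within-subject variance of 0.0146 corresponds to ~12.1% CV,
# while a CV of 0.1% would imply an implausibly tiny log-scale variance:
print(round(100 * cv_from_logvar(0.0146), 1))  # 12.1
print(math.log(1 + 0.001 ** 2))                # ~1e-06
```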
Helmut ★★★ Vienna, Austria, 20160515 15:56 @ AngusMcLean Posting: # 16307 Views: 19,252 

Hi Angus,
» I am having difficulty with Phoenix: I get an error message when the program loads relating to a missing framework. It tells me that the program may not work properly.
I have been getting this message for a few months as well (only in the 64-bit version of Phoenix). Environment: Windows 7 Pro SP1 (64-bit) build 7601, Phoenix 64 build 6.4.0.768, .NET Framework 4.6.1 (last security update 2016-05-11). Since I didn’t change my Phoenix installation, I suspect Billy-boy is causing the trouble… I guess one of these February/March security updates is the reason: KB3122661, KB3127233, KB3136000.
» I was able to get an evaluation of the residual error for both Cmax and AUC0–t. The values were indeed very similar to what I had obtained in Phoenix (e.g., the CV(%) of Cmax was 19.9 compared with 20.06 by the BE approach; the CV(%) of AUC0–t was 12.2 compared with 12.6% by the BE approach).
Congratulations!
» Other worker has reported CV(%) of 0.1 for AUC0–t. I am wondering if using SAS you can get a result of 0.1. For Cmax other worker reports CV% 10.1. I do not think using SAS can give such a difference?
Never seen that. We cross-validated Phoenix/WinNonlin and SAS many times. I think that he/she screwed up completely. I love “push-the-button” statisticians reporting a 0.1% CV (‼) without hesitation. How plausible is that? Good morning! Body height can be measured pretty precisely. Ask the worker whether he/she thinks that repeated measurements can be done to ±1/16″. If the answer is yes, ask which variability he/she expects in AUC. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
AngusMcLean ★★ USA, 20160515 20:11 @ Helmut Posting: # 16309 Views: 19,159 

» […] We cross-validated Phoenix/WinNonlin and SAS many times. […] I love “push-the-button” statisticians reporting a 0.1% CV (‼) without hesitation. How plausible is that? […]
» Body height can be measured pretty precisely. Ask the worker whether he/she thinks that repeated measurements can be done to ±1/16″. If the answer is yes, ask which variability he/she expects in AUC.
Yes, we are still in 1/16″ units here; they are difficult to work with. It is highly unlikely for 0.1% to be correct. Sometimes people will believe whatever they find most favorable. I will be finding out more about the other data soon. Meanwhile I wonder if Pharsight can come up with a SAS comparison. Meanwhile I am off to visit a German store, “Aldi”.
Edit: Full quote removed. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5! [Helmut]
Helmut ★★★ Vienna, Austria, 20160516 16:26 @ AngusMcLean Posting: # 16316 Views: 19,059 

Hi Angus,
» Yes; we are still in 1/16" units here. They are difficult to work with.
All scientific journals I know demand SI (i.e., metric) units. Imperial units are a mess. One story from my past: when I was already a diving instructor I went through my cave-diving courses in Mexico according to a US system (NACD). Calculating the maximum dive time (leaving air consumption for descent/ascent aside) in the metric system is bloody easy. Say the tank’s volume is 12 L and it is pressurized to 200 bar. The volume of expanded air is 12 × 200 = 2,400 L. If the breathing rate at the surface is 20 L/min, the tank’s air lasts for 2,400 / 20 = 120 min (ambient pressure 1 bar). If you dive, the pressure increases by 1 bar for every 10 meters. Thus at 10 m the tank’s air will last for 2,400 / [20 × (1 + 1)] = 60 min, at 20 m for 2,400 / [20 × (1 + 2)] = 40 min, and so on. No pocket calculator needed. How is this stuff done in the US? To start the confusion, tanks are not classified by their volume, but by the volume of expanded air if the tank is filled to its rated pressure (which commonly is 3,000 psi). A standard tank is called “80 ft^{3}”. A common “surface breathing rate” is 0.7 ft^{3}/min. The surface pressure is 14.5 psi and increases by 14.5 psi every additional 33 ft. Good luck calculating the dive time. In cave diving tanks are regularly “overfilled”, e.g., to 3,333 psi. Then your 80 ft^{3} tank contains 89 ft^{3}. There is also special equipment (compressors, regulators, pressure gauges) rated for 300 bar (~4,350 psi). In the metric system the 12 L tank is still a 12 L tank (what else). In the imperial system it’s a 116 ft^{3} tank even though the dimensions are the same. Crazy. I wonder why not more US divers drown. Maybe they are better at math than me. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
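The metric calculation above can be written down directly (a toy sketch in Python; numbers taken from the post):

```python
def dive_minutes(tank_volume_l, fill_bar, surface_rate_l_min, depth_m):
    """Air time ignoring descent/ascent: expanded air volume divided by
    the breathing rate scaled to ambient pressure (1 bar + 1 bar per 10 m)."""
    expanded_air = tank_volume_l * fill_bar   # e.g. 12 L x 200 bar = 2400 L
    ambient_bar = 1 + depth_m / 10            # pressure at depth
    return expanded_air / (surface_rate_l_min * ambient_bar)

print(dive_minutes(12, 200, 20, 0))   # 120.0 min at the surface
print(dive_minutes(12, 200, 20, 10))  # 60.0 min at 10 m
print(dive_minutes(12, 200, 20, 20))  # 40.0 min at 20 m
```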
ElMaestro ★★★ Belgium?, 20160515 20:54 @ AngusMcLean Posting: # 16310 Views: 19,087 

Hi Angus, » Other worker has reported CV(%) of 0.1 for AUC0t. I am wondering if using SAS you can get a result of 0.1. For Cmax other worker reports CV% 10.1. I do not think using SAS can give such a difference? And this isn't just a case of someone confusing raw figures with percentages? — I could be wrong, but... Best regards, ElMaestro 
AngusMcLean ★★ USA, 20160515 22:30 @ ElMaestro Posting: # 16313 Views: 18,967 

» And this isn't just a case of someone confusing raw figures with percentages? Thank you for your remarks: we will hopefully find out soon. 
AngusMcLean ★★ USA, 20160516 21:00 @ Helmut Posting: # 16317 Views: 19,042 

» With Smith’s C_{max} data of Table 1 I got for the slope:
» 0.7617 (90% CI: 0.6696, 0.8539), slightly different from the reported 0.7615 (0.679, 0.844).
I have repeated the above calculation in NCSS as described by Jerry; here are the results for Brian Smith’s example.
[NCSS output table with the two 90% confidence-limit columns not reproduced] The within-subject variance was 0.0146 (the same as Phoenix).
Edit: Full quote removed, tabulators changed to spaces and BB-coded. Please delete everything from the text of the original poster which is not necessary in understanding your answer; see also this post #5 and #6! [Helmut]
Helmut ★★★ Vienna, Austria, 20160517 01:50 @ AngusMcLean Posting: # 16318 Views: 18,883 

Hi Angus,
» I have repeated the above calculation in NCSS:
Intercept                       Estimate  SE      p         90% CI          df
NCSS                            1.9414    0.2496  0.000025  1.4849, 2.3978  9.2
Phoenix/WinNonlin               1.9414    0.2431  0.000020  1.4968, 2.3860  9.2
Smith et al. (SAS Proc Mixed)   1.94                        1.54, 2.35
Results by NCSS and Phoenix/WinNonlin are similar but don’t match SAS (whose CIs are wider). I love software. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
mittyri ★★ Russia, 20160518 08:23 @ Helmut Posting: # 16321 Views: 18,863 

Hi Helmut,
» Results by NCSS and Phoenix/WinNonlin are similar but don’t match SAS (whose CIs are wider). I love software.
I’m surprised by the results. Is it possible to make a dataset for validation? Do we have anywhere a reference dataset and the accurate result? — Kind regards, Mittyri
ElMaestro ★★★ Belgium?, 20160518 09:20 (edited by ElMaestro on 20160518 09:58) @ mittyri Posting: # 16322 Views: 18,741 

Hi all,
» » Results by NCSS and Phoenix/WinNonlin are similar but don’t match SAS (whose CIs are wider). I love software.
» I’m surprised by the results.
» Is it possible to make a dataset for validation? Do we have anywhere the reference dataset and the accurate result?
Extract some model diagnostics – DFs and log-likelihood – and compare to find out which result is the better candidate. Life is good. — I could be wrong, but... Best regards, ElMaestro
Helmut ★★★ Vienna, Austria, 20160518 15:14 @ ElMaestro Posting: # 16324 Views: 18,915 

Hi ElMaestro et al.,
» Extract some model diagnostics: DF's and LogLikelihood, and compare to find out which result is the better candidate.
I can only provide the results of R 3.2.5 (library nlme) and Phoenix build 6.4.0.768 [model specifications, user settings, and output not reproduced]. Estimates and their SEs are exactly the same; the CIs are not (due to different DFs?). PS: Any ideas how to weight by 1/log(Dose) in lme()? Suggested by Chow/Liu and gives me a better fit in Phoenix. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
zizou ★ Plzeň, Czech Republic, 20160522 19:07 @ Helmut Posting: # 16348 Views: 18,457 

Dear Helmut,
» Estimates and their SEs are exactly the same. CIs are not (due to different DFs?).
Exactly – different DFs. In the equivalent of your code, DFs, p-values, and CIs are not reported by lmer for the reasons stated there. For more decimal places: print(muddle, digits=7, ranef.comp=c("Var","Std.Dev.")). Some reference for lmer can be found in this PDF. For DFs, p-values, …, the library lmerTest can be used; see this PDF (page 2: description, details, references to SAS).
library(lmerTest)  # includes a modification of the function lmer (if I am not mistaken)
Results: linear mixed model fit by REML; t-tests use Satterthwaite approximations to degrees of freedom. I think there is a little bug in the df display – rounded to 3 decimal places, though visible on 5 (actually I do not have the latest version of R):
summary(muddle)$coefficients["(Intercept)","df"]
# [1] 9.195607
For calculation of the 90% CI (below only the example for the intercept’s lower limit; the upper limit analogously with +):
alpha=0.05
summary(muddle)$coefficients["(Intercept)","Estimate"] - qt(1-alpha, summary(muddle)$coefficients["(Intercept)","df"]) * summary(muddle)$coefficients["(Intercept)","Std. Error"]
Best regards, zizou
Helmut ★★★ Vienna, Austria, 20160523 01:22 @ zizou Posting: # 16349 Views: 18,391 

Hi zizou,
» DF, p-values, CIs not reported in lmer for the reasons stated there.
Yep, we know.
» library lmerTest can be used,
THX; I forgot!
» I think there is a little bug in the df visualization – rounded to 3 decimal places, visible on 5.
Same in R 3.2.5 and lmerTest 2.0-30.
» summary(muddle)$coefficients["(Intercept)","df"]
» # [1] 9.195607
Or easier: summary(muddle)$coefficients[, "df"]
summary(muddle, digits=8) doesn’t help. Or, as Carl Witthoft wrote: IMHO the proper use of summary is indicated by its name: a way to get a quick look at your data.
» For calculation of 90% CIs: (below is only example for Intercept lower and upper limit)
» alpha=0.05
» summary(muddle)$coefficients["(Intercept)","Estimate"] - qt(1-alpha, summary(muddle)$coefficients["(Intercept)","df"]) * summary(muddle)$coefficients["(Intercept)","Std. Error"]
Perfect! A little bit shorter: summary(muddle)$coefficients[1, 1] + c(-1, +1) * qt(1-0.05, summary(muddle)$coefficients[1, 3]) * summary(muddle)$coefficients[1, 2]
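As a sanity check of the figures reported earlier in the thread (Python for illustration; the intercept values are those of the NCSS/Phoenix comparison): the CI must be symmetric around the PE, and the implied t-quantile should be roughly qt(0.95, df ≈ 9.2) ≈ 1.83.

```python
# Phoenix/WinNonlin intercept from the comparison post:
pe, se = 1.9414, 0.2431
lo, hi = 1.4968, 2.3860

half_lo = pe - lo          # lower half-width
half_hi = hi - pe          # upper half-width
t_implied = half_lo / se   # should be ~ qt(0.95, df = 9.2)

print(round(half_lo, 4), round(half_hi, 4))  # 0.4446 0.4446 (symmetric)
print(round(t_implied, 2))                   # 1.83
```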
So what do we have? A fair agreement across software – except SAS… [compilation of software, estimates, and 90% CI widths not reproduced] Since I trust Phoenix and R most, and given the widths of the CIs: is NCSS too conservative and SAS liberal? Unfortunately we don’t have Smith’s code. Would be great if one of our SASians could jump in. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
d_labes ★★★ Berlin, Germany, 20160524 12:02 (edited by d_labes on 20160524 12:49) @ Helmut Posting: # 16358 Views: 18,299 

Dear All!
» ... Would be great if one of our SASians could jump in.
Here we go: [software comparison of estimates and 90% CIs not reproduced] Smith et al. used ML (usual maximum likelihood) as the estimation method! The degrees-of-freedom method is by default “containment” (7 for the intercept, 5 for the regression slope). Astonishingly enough, I couldn’t reproduce their results w.r.t. the 90% CIs to a sufficient degree of accuracy with this ddfm method, at least not sufficiently for me. Choosing ddfm=SATTERTHWAITE gives the desired results. SAS code:
Proc mixed data=dp method=ML;
  class subject;
  model logCmax = logDose / s cl alpha=0.1 ddfm=satterthwaite;
  random subject;
run;
edit: the missing REML+SATTERTH results:
SAS (REML/satterth): B_{0} 1.9414 (1.4968, 2.3860). Bingo! The same as R lmer() and Phoenix/WinNonlin, at least to 4 decimals (the SAS default output format). Too lazy to tease out more numbers. — Regards, Detlew
Helmut ★★★ Vienna, Austria, 20160524 14:27 @ d_labes Posting: # 16359 Views: 18,390 

Dear all, in R / lmerTest one can get maximum-likelihood estimation by setting the argument REML=FALSE in the model statement (the default is REML=TRUE). Satterthwaite’s DFs are 10.858 for the intercept and 6.884 for the slope. In lme() use method="ML" (the default is method="REML"). Below a compilation (in analogy to BE rounded to four decimal figures and grouped by “similarity”): [table of software, method, estimates, and 90% CIs not reproduced] My preference is REML/Satterthwaite because one could reproduce the results in three different software packages. I would avoid NCSS; no idea how the calculation is done. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
d_labes ★★★ Berlin, Germany, 20160524 16:33 @ Helmut Posting: # 16360 Views: 18,244 

Dear Helmut, » My preference is REML/Satterthwaite because one could reproduce results in three different software packages. Emphasis by me. That's not really a reason. Five SAS implementations are as correct as one . From a description of Proc MIXED: "For balanced data the REML method of PROC MIXED provides estimators and hypotheses test results that are identical to ANOVA (OLS method of GLM), provided that the ANOVA estimators of variance components are not negative. The estimators, as in GLM, are unbiased and have minimum variance properties. The ML estimators are biased in that case. In general case of unbalanced data neither the ML nor the REML estimators are unbiased and they do not have to be equal to those obtained from PROC GLM." The first sentences seem to point to an advantage of REML over ML estimation, left the question of the ddfm aside. — Regards, Detlew 
Helmut ★★★ Vienna, Austria, 20160524 16:57 @ d_labes Posting: # 16361 Views: 18,176 

Dear Detlew,
» » My preference is REML/Satterthwaite because one could reproduce results in three different software packages.
» Emphasis by me.
» » That's not really a reason.
I stand corrected.
» The first sentences seem to point to an advantage of REML over ML estimation, leaving the question of the ddfm aside.
Yep. In dose proportionality, quite often (many) more dose levels than in BE come into play; the highest I have seen so far was five. Unbalanced and incomplete data are more the rule than the exception. That’s why I would prefer REML. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
AngusMcLean ★★ USA, 20160526 16:46 @ Helmut Posting: # 16364 Views: 17,992 

» » My preference is REML/Satterthwaite because one could reproduce results in three different software packages. I would avoid NCSS; no idea how the calculation is done.
Jerry from NCSS says that the default Likelihood Type is REML.
Helmut ★★★ Vienna, Austria, 20160526 19:13 @ AngusMcLean Posting: # 16365 Views: 17,924 

Hi Angus,
» Jerry from NCSS says that the default Likelihood Type is REML.
OK; REML is a variant of maximum likelihood. Since in this post you reported 9.2 degrees of freedom for the intercept and 5.9 for the slope, why do NCSS’ 90% CIs not agree with the other packages (only the PEs do)? Maybe Jerry could register here instead of playing Chinese whispers. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
zizou ★ Plzeň, Czech Republic, 20160526 23:38 @ Helmut Posting: # 16366 Views: 18,040 

Dear Helmut,
» Since in this post you reported 9.2 degrees of freedom for the intercept and 5.9 for the slope, why do NCSS’ 90% CIs not agree with the other packages (only the PEs)?
According to the provided results, there are differences in the standard errors, so I guess the differences in the 90% CIs are due to the SEs. You know: [Lower Limit, Upper Limit] = PE ∓ t(1−α, df) · SE. It seems that only the SEs differ from the other software packages on the right side of the equation. From the post with the compilation of results: according to the PEs, NCSS uses REML, and according to the degrees of freedom (9.2 and 5.9) NCSS uses Satterthwaite’s method (unless it is a lucky coincidence). Best regards, zizou REML, it's restricted!
AngusMcLean ★★ USA, 20160528 00:51 @ Helmut Posting: # 16368 Views: 17,740 

» My preference is REML/Satterthwaite because one could reproduce results in three different software packages. I would avoid NCSS; no idea how the calculation is done.
Jerry from NCSS reports the following: We have looked into this and resolved it in our own minds. NCSS uses the Kenward–Roger method for degrees of freedom, which is an extension of the Satterthwaite method. The NCSS result (REML / Kenward–Roger) is:
B0 1.9414 (1.4849–2.3978); B1 0.7617 (0.6659–0.8576)
Helmut ★★★ Vienna, Austria, 20160528 15:59 @ AngusMcLean Posting: # 16369 Views: 18,000 

Hi Angus,
» Jerry from NCSS reports the following:
» We have looked into this and resolved it in our own minds. NCSS uses the Kenward–Roger method for degrees of freedom, which is an extension of the Satterthwaite method.
Really? Still I can’t reproduce the results of NCSS in R (data prepared as in this post) with library(lmerTest). [Tables of PE, SE, df, and confidence limits for Satterthwaite and Kenward–Roger DFs, and the modified NCSS results from a previous post, not reproduced.] As zizou suspected in this post, there are differences in the SEs (therefore we get different CIs even if the DFs are identical) – which leaves open the question why they differ from the other packages. NCSS seems to use Satterthwaite’s DFs and not Kenward–Roger’s (contrary to the documentation and to what Jerry told you). — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
Shuanghe ★★ Spain, 20190104 17:45 @ Helmut Posting: # 19728 Views: 6,903 

Dear all, Man, I should have checked here before I started my work. It could have saved me a lot of time... Recently I was helping one of my colleagues with a dose-proportionality study, and the power model of Smith's article is the preferred method. While I did figure out the "correct" degrees-of-freedom method and reproduced all reported results – the intercept, the slope, the 90% CIs of those values, ρ_{1}, ρ_{2}, the ratio of dose-normalised geometric means R_{dnm}, … – I could not figure out how Smith obtained the 90% confidence interval for R_{dnm} (0.477, 0.698). According to Smith, testing θ_{L} < R_{dnm} < θ_{U} to conclude dose proportionality is equivalent to testing 1 + ln(θ_{L})/ln(r) < β_{1} < 1 + ln(θ_{U})/ln(r). The latter is what we do; obtaining the 90% CI for the slope is easy enough and we can judge dose proportionality based on that. However, it would also be interesting to reproduce all of Smith's results. In his article (p. 1282, 2nd paragraph), Smith wrote: "The 90% CI for the difference in log-transformed means was calculated within the MIXED procedure. Exponentiation of each limit and division by r gave the 90% CI for R_{dnm}. This CI lay completely outside (0.80, 1.25), indicating a disproportionate increase in C_{max}." What limit was he talking about? My SAS code is basically identical to Detlew's above and there is no apparent "limit" in the output similar to what Smith mentioned. I also tried adding ESTIMATE or LSMEANS statements with various codes, but couldn't get it at all. Any help? Many thanks. — All the best, Shuanghe
d_labes ★★★ Berlin, Germany, 20190105 14:01 @ Shuanghe Posting: # 19731 Views: 6,863 

Dear Shuanghe, First: Happy New Year to you and to all.
» Man, I should have checked here before I started my work. It could have saved me a lot of time...
Late, but hopefully not too late, insight. As I sometimes state: all answers (to asked or unasked questions) are here; you only have to dig out what you are interested in.
» Recently I was helping one of my colleagues with a dose-proportionality study […] I could not figure out how Smith obtained the 90% confidence interval for R_{dnm} (0.477, 0.698).
Can you please elaborate where your difficulties arose? Were you able to obtain a point estimate of R_{dnm} but no 90% CI thereof?
» ...
» In his article (p. 1282, 2nd paragraph), Smith wrote that "The 90% CI for the difference in log-transformed means was calculated within the MIXED procedure. Exponentiation of each limit and division by r gave the 90% CI for R_{dnm}" ...
For me this is a dubious description (the whole paragraph) which I don't understand at all. Difference in log-transformed means of what? I would go with the formula for R_{dnm}:
R_dnm = r^(beta1 - 1)
Use this with the point estimate of beta1 from your model and its 90% CI limits, and you obtain the 90% CI of R_{dnm}, if I'm correct. Example, Cmax in Table 2 of the Smith et al. paper: beta1 = 0.7615 (0.679, 0.844). R console:
c(10^(0.7615-1), 10^(0.679-1), 10^(0.844-1))
Smith reported in Table 2: 0.577 (0.477, 0.698). Good enough? Hope this helps. — Regards, Detlew
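Detlew's formula and Smith's equivalent slope criterion can be checked numerically (a sketch in Python; r = 10 and the Cmax values are those of Smith's Table 2 quoted above – the last decimal of the lower limit can differ slightly because the published slope CI is rounded):

```python
import math

r = 10.0                                      # ratio of highest to lowest dose
beta1, ci_lo, ci_hi = 0.7615, 0.679, 0.844    # slope and its 90% CI (Smith, Table 2)

# R_dnm = r^(beta1 - 1), applied to the PE and both CI limits
rdnm = [r ** (b - 1) for b in (beta1, ci_lo, ci_hi)]
print([round(x, 3) for x in rdnm])  # [0.577, 0.478, 0.698] vs reported (0.477, 0.698)

# Equivalent criterion on the slope: 1 + ln(theta)/ln(r) for theta = 0.80, 1.25
lo_bound = 1 + math.log(0.80) / math.log(r)
hi_bound = 1 + math.log(1.25) / math.log(r)
print(round(lo_bound, 4), round(hi_bound, 4))  # 0.9031 1.0969
```

With a tenfold dose range, dose proportionality thus requires the 90% CI of the slope to lie within (0.9031, 1.0969); Smith's Cmax CI (0.679, 0.844) clearly fails this.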
mittyri ★★ Russia, 20190106 17:00 @ d_labes Posting: # 19735 Views: 6,829 

Dear Shuanghe, dear Detlew, Happy New Year! Here is my attempt to visualize the results of lmer; I tried to add the acceptance criteria to the plot. What do you think? Is that suitable? Did I understand the article correctly? library(lme4) [code and figure not reproduced]
Black dots: observed values
Blue dots: predicted values
Blue area: 90% prediction area for all observations
Blue dashed line: fitted regression line
Blue dot-dashed lines: 90% limits built using the 90% CIs for the slope and intercept
Red lines: 80–125% acceptance limits
— Kind regards, Mittyri
Shuanghe ★★ Spain, 20190107 11:05 @ mittyri Posting: # 19740 Views: 6,769 

Dear Mittyri, Happy New Year! » What do you think? Is it suitable? Did I understand the article correctly? A very nice-looking figure, but I don't think I understand the red-line criteria. As I mentioned earlier, with my R skills I'm not really in a position to judge. I'll leave it to other R gurus to comment. — All the best, Shuanghe 
d_labes ★★★ Berlin, Germany, 20190107 15:08 @ mittyri Posting: # 19745 Views: 6,743 

Dear Mittyri, » Black dots: observed values » Blue dots: predicted values » Blue area: 90% prediction area for all observations » Blue dashed line: fitted regression line » Blue dot-dashed lines: 90% limits built using 90% CIs for the slope and intercept » Red lines: 80-125% acceptance limits I must confess that I don't understand what you did with the last two points: dose-dependent 90% limits (for what?) built using the 90% CIs for the slope and intercept, and dose-dependent acceptance limits. Could you please elaborate and enlighten me? In simple words please, not with complex, sophisticated code. The prediction area is calculated based on the dose values used in the study. Should it not be calculated on interpolated dose values to get smoother area borders? And why did you use the prediction interval as a visualization of the fit? AFAIK a prediction interval is for future observations, but I think our goal was to visualize the fit of the current observations. So would it not be better to use a 90% confidence interval instead? — Regards, Detlew 
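Detlew's distinction between the two interval types can be illustrated with a toy power-model fit in base R. This is a sketch with made-up data (not the study's), and a fixed-effects lm instead of the mixed model, purely to show the difference between the bands:

```r
# Toy data on the log scale: log(PK) = b0 + b1*log(dose) + noise
set.seed(1)
dose  <- rep(c(25, 50, 75, 250), each = 5)
logpk <- 1 + 0.76 * log(dose) + rnorm(length(dose), sd = 0.2)
fit   <- lm(logpk ~ log(dose))

# Interpolated doses give smooth band borders, as Detlew suggests
newd <- data.frame(dose = seq(25, 250, length.out = 50))

conf <- predict(fit, newd, interval = "confidence", level = 0.90)  # fit of the mean
pred <- predict(fit, newd, interval = "prediction", level = 0.90)  # future observations

# The prediction band is always wider: it adds the residual variability
all(pred[, "upr"] - pred[, "lwr"] > conf[, "upr"] - conf[, "lwr"])
```

Plotting both bands over the observed points makes Detlew's objection visible at a glance: the prediction band is about assessing where a new observation would fall, not how well the current data are fitted.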
mittyri ★★ Russia, 20190113 23:53 @ d_labes Posting: # 19774 Views: 6,397 

Dear Detlew, » » Blue dot-dashed lines: 90% limits built using 90% CIs for the slope and intercept » » Red lines: 80-125% acceptance limits » I must confess that I don't understand what you do here with the last two points. From my experience: if Detlew does not understand my intentions, the intentions are headed in the wrong direction » Dose-dependent 90% limits for what, using 90% CIs for the slope and intercept These are the lines built using the lower/upper limits of the 90% CIs for both the slope and the intercept. By now I don't think that's correct, since the final conclusion should be made using the slope only. So I removed the intercept's uncertainty from the calculation of the 90% CI line. As Dr. Smith mentioned, 'The estimate of the "intercept" parameter ... with a 90% CI of ... and its between-subject variability are not of interest here'. » Dose-dependent acceptance limits. » Could you please elaborate and enlighten me? » With simple words please, not with complex sophisticated code. I tried to reformulate eq. 4, which is used for the acceptance criteria. If beta1 should be less than 1 + ln(Theta2)/ln(r), then what is the PK level at a given dose which is still acceptable? From my graph one can see that dose = 50 is the last acceptable point. » The prediction area is calculated based on the dose values used in the study. Should it not be calculated on interpolated dose values to get smoother area borders? Agreed! » And why did you use the prediction interval as a fit visualization? AFAIK prediction is for future observations, but I think we had the goal to visualize the fit of our current observations. So would it not be better to use a 90% confidence interval instead? I stand corrected. I removed the CIs for individual observations since they are nonsense here. library(lme4) — Kind regards, Mittyri 
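The slope-based acceptance region Mittyri refers to (Smith's eq. 4) can be computed directly; a sketch with the conventional limits θ_L = 0.80, θ_U = 1.25 and this example's dose ratio r = 10:

```r
# Dose proportionality is concluded if the 90% CI of the slope beta1 lies
# entirely within 1 + ln(theta_L)/ln(r) ... 1 + ln(theta_U)/ln(r)
theta_L <- 0.80
theta_U <- 1.25
r       <- 10  # ratio of highest to lowest dose (250/25)

limits <- 1 + log(c(theta_L, theta_U)) / log(r)
round(limits, 4)  # 0.9031 1.0969
```

Note how the region tightens as r grows: the wider the dose range studied, the closer to 1 the slope's CI has to be.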
Shuanghe ★★ Spain, 20190107 10:53 (edited by Shuanghe on 20190107 12:51) @ d_labes Posting: # 19739 Views: 6,768 

Dear Detlew and all, Happy New Year! » Late but hopefully not too late insight. As I sometimes stated: All answers (of asked or not asked questions) are here. You only have to dig out what you are interested in. Reproducing Smith's results in SAS is easy enough (except the 90% CI of R_{dnm}, which is not really needed for judging dose proportionality), but I struggled for a loooooong loooooooooong time with R as I was playing with lm and glm and, to make it worse, the random effect was coded wrongly.... Obviously, my R skills need to be improved much more Anyway, not too late for me. I'll check the R code mentioned by Zizou and Helmut et al. later. » Can you please elaborate where your difficulties arose? Able to obtain a point estimate of R_{dnm} but no 90% CI thereof? Yes. » I would go with the formula of R_{dnm} » R_dnm = r^(beta1 - 1) » Use this with the point estimate of beta1 from your model and its 90% CI limits and you obtain the 90% CI of R_{dnm} if I'm correct. » R console: » c(10^(0.7615 - 1), 10^(0.679 - 1), 10^(0.844 - 1)) » [1] 0.5774309 0.4775293 0.6982324 » Smith reported in Table 2: » 0.577 (0.477, 0.698) » Good enough? This must be it! According to his article, Smith used the formula EXP(beta0 + beta1*ln(dose)) to calculate each mean to get the ratio R_{dnm}, so the value is R_{dnm} = 0.577402 (red digits from my SAS). But this is equivalent to what you wrote, so if you use the full-precision figures we would have obtained the same values (90% CI of (0.477381, 0.698378)). I checked with the AUC data as well; they match what's reported in Table 2. » Hope this helps. Definitely helps! Thanks. — All the best, Shuanghe 
d_labes ★★★ Berlin, Germany, 20190107 15:17 @ Shuanghe Posting: # 19746 Views: 6,733 

Dear Shuanghe, » » I would go with the formula of R_{dnm} » » R_dnm = r^(beta1 - 1) » ... » This must be it! » According to his article, Smith used the formula EXP(beta0 + beta1*ln(dose)) to calculate each mean to get the ratio R_{dnm}, so the value is R_{dnm} = 0.577402 (red digits from my SAS). But this is equivalent to what you wrote, so if you use the full-precision figures we would have obtained the same values (90% CI of (0.477381, 0.698378)). I checked with the AUC data as well; they match what's reported in Table 2. Could you please give a detailed example of what you did here? — Regards, Detlew 
Shuanghe ★★ Spain, 20190107 17:11 @ d_labes Posting: # 19751 Views: 6,723 

Dear Detlew, » Could you please give a detailed example of what you did here? Sorry, it seems I didn't explain it clearly. My SAS code is almost the same as yours: PROC MIXED DATA = smith METHOD = ML; I just took the output and calculated some of the numbers according to Smith's article to reproduce his results.
PROC SQL; This gives (Smith's reported values in blue): beta1: 0.7615 (0.7615) These are the slope and its 90% CI, and the corresponding criteria calculated with dose ratio r = 10 and θ_{L} and θ_{U} of 0.8 and 1.25, respectively. The rest is for information purposes only. roh1: 2.003428 (2.0) where R_dnm is the one I used previously, since Smith mentioned in the article that each mean PK value was calculated as exp(beta0 + beta1*ln(dose)). R_dnm is the dose-normalised mean ratio, hence the long line in PROC SQL: EXP(d3.estimate + d2.estimate*LOG(250))/EXP(d3.estimate + d2.estimate*LOG(25)) * (25/250). Rdnm is the same thing but calculated with your code, which is the one I use now since it's equivalent to the previous one but much shorter. I guess my explanation in the previous post in this regard was not clear, so I added both of them here. The last 2 are the predicted geometric mean PK values at the dose levels of 25 and 250, as given in the 1st column of Table 2 in Smith's article. This 2nd part of the results is not really necessary to judge dose proportionality (though roh1 and roh2 are useful to know) but, as I said, I prefer to reproduce all results as a kind of "validation". By the way, Helmut, I don't know how to make a table (e.g. with 1 row) with a heading here, so I manually entered all values above; also, I copy/paste Greek letters from elsewhere. Is there any helper section with BBCode for special symbols, Greek letters, and table making? I vaguely recall there used to be a section with BBCode examples but couldn't find it now. — All the best, Shuanghe 
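The equivalence Shuanghe describes (the ratio of predicted means versus Detlew's shortcut) is quickly confirmed numerically; a sketch with an arbitrary, made-up intercept b0 (it cancels out of the ratio):

```r
b0 <- 2.5     # hypothetical intercept; irrelevant, cancels in the ratio
b1 <- 0.7615  # slope for Cmax from Table 2

# Shuanghe's long PROC SQL expression, transcribed to R
long  <- exp(b0 + b1 * log(250)) / exp(b0 + b1 * log(25)) * (25 / 250)
# Detlew's shortcut: R_dnm = r^(beta1 - 1)
short <- (250 / 25)^(b1 - 1)

all.equal(long, short)  # TRUE
```

Algebraically, exp(b0 + b1·ln(250)) / exp(b0 + b1·ln(25)) · (25/250) = (250/25)^b1 · (250/25)^(-1) = r^(b1 - 1), so the two forms must agree for any intercept.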
d_labes ★★★ Berlin, Germany, 20190107 18:24 @ Shuanghe Posting: # 19755 Views: 6,727 

Dear Shuanghe, » » Could you please give a detailed example for what you did here? » » Sorry. It seems I didn't explain it clearly. My SAS code is almost the same as yours: » » PROC MIXED DATA = smith METHOD = ML;
» CLASS subj ; » ... Thanks for your explanation. I didn't really understand it but ... I've seen that you only used the doses 25 and 250 for R_{dnm} although there are entries with doses of 50 and 75 mg. But since you are using the fitted (predicted) PK metrics it doesn't matter, and it is equivalent to my suggestion. » By the way, Helmut, I don't know how to make a table ... Don't expect any reaction from Helmut before February. He is down under (New Zealand) and will return at the beginning of February at the earliest... — Regards, Detlew 
mittyri ★★ Russia, 20190108 00:19 @ Shuanghe Posting: # 19758 Views: 6,732 

Dear Shuanghe, » By the way, Helmut, I don't know how to make a table (e.g. with 1 row) with a heading here, so I manually entered all values above; also, I copy/paste Greek letters from elsewhere. Is there any helper section with BBCode for special symbols, Greek letters, and table making? I vaguely recall there used to be a section with BBCode examples but couldn't find it now. I'm not Helmut, but I will try to help you. Regarding tables: please see Note 1. An example (you can see the BBCode by posting a reply to the current message): Dependent Ratio[%Ref] CI_90_Lower CI_90_Upper Power CV(%) n Regarding Greek letters: you can find the table of supported Greek letters using the link above and copy/paste them. Or, to generate the symbol, make sure Num Lock is on and press the ALT key as you type the number on the numeric keypad: ALT+224 > α. AFAIK, with this method you are limited to Code Page 437, where ρ and some other letters are not present and have to be pasted from another page/app. — Kind regards, Mittyri 
Helmut ★★★ Vienna, Austria, 20190202 16:04 @ mittyri Posting: # 19847 Views: 5,548 

Dear Mittyri & Shuanghe, » » Is there any helper section with BBCode for special symbols and Greek letters […]? Copy/paste them from there. You can try this experimental page as well (once I have mastered generating JavaScript within PHP it will be directly available in the posting form). — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 
Helmut ★★★ Vienna, Austria, 20160518 14:44 @ mittyri Posting: # 16323 Views: 18,757 

Hi Mittyri, » Is it possible to make a dataset for validation? Do we have the reference dataset anywhere You can download Smith’s paper here. We were exploring the C_{max} data of Table I. » and the accurate result? Define accurate. — Cheers, Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes 