yicaoting ★ NanKing, China, 2011-10-03 16:37 (4950 d 09:39 ago) Posting: # 7409 Views: 16,800 |
|
Dear all,

As far as I know, when the number of subjects in sequence RT equals that in sequence TR, the LSM of R or T equals the geometric mean of all PK metrics of R or T. I want to calculate Least Squares Means (LSM) for an unbalanced-sequence 2×2 crossover BE study, in order to then calculate Diff(T-R) of the LSMs, the 90% CI of that difference, and finally the 90% CI of the GMR (geometric means ratio).

Let me show my calculation in detail. My data are extracted from Hauschke D's Bioequivalence Studies in Drug Development: Methods and Applications, page 70 (the data of Subjects 16 and 18 were deleted to generate unequal sequences):

sub sequence auc period formulation ln(auc)

in which Treatment 1 was considered as Reference.

First, I used the original data (no transformation) for the BE analysis with WinNonlin 5.2; some of the results are:

Treatment LSM SE CI level T_critical LowerCI UpperCI

My questions are:
1. How do I calculate the LSM and SE for each treatment? I want to calculate them manually; can anyone give me the exact equations? I have read WNL's user guide, but it doesn't give the equations; it only says "(computed by LinMix)" on page 332 of WNL 5.1's User Guide.
2. Why are the SEs of the two treatments identical?

Second, I set the "Ln(x)" transformation in WinNonlin's BE Wizard; some of the results are:

Treatment LSM SE CI level T_critical LowerCI UpperCI

I have the same questions as above.

I have tried to analyse these unequal-sequence data in SAS and Stata; both failed.

In SAS, I used the following code:
proc glm data=dose_equivalence;
It returned:
ERROR: One or more variables are missing or freq or weight is zero on every observation.

In Stata, I used the following code:
pkequiv auc formulation period sequence sub
It returned:
must specify an equivalence comparison

Is SAS or Stata not able to deal with unequal sequences, or did I not use the right code?

Thank you very much for your help.

Edit: Category changed. Please don't paste tabs into your post; use the Preview to check. [Helmut]
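For reference, the kind of PROC GLM call I have in mind is sketched below (essentials only, using the column names listed above, with sub as the subject identifier; the ln-scale analysis simply uses ln(auc) as the response instead of auc):

  proc glm data=dose_equivalence;
    class sequence sub period formulation;
    model auc = sequence sub(sequence) period formulation;  /* all effects fixed, subject nested in sequence */
    lsmeans formulation / stderr pdiff cl alpha=0.1;        /* LSMs, their SEs, the T-R difference and 90% CIs */
  run;
  quit;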
ElMaestro ★★★ Denmark, 2011-10-03 16:51 (4950 d 09:24 ago) @ yicaoting Posting: # 7410 Views: 15,285 |
|
Dear yicaoting,

❝ 1. How do I calculate the LSM and SE for each treatment? I want to calculate them manually; can anyone give me the exact equations?
❝ I have read WNL's user guide, but it doesn't give the equations; it only says "(computed by LinMix)" on page 332 of WNL 5.1's User Guide.
❝ 2. Why are the SEs of the two treatments identical?

1. You take the mean of T in Seq TR and the mean of T in Seq RT. You add them and divide by two. Do the same for R.

2. The crucial variability for a 2,2,2-BE design, from which the 90% CI is derived, is the residual sigma (which is your [pseudo-]within variability). Since there is no true replication of either T or R, you cannot derive a within-subject variability separately for T or R. Think of it in matrix terms: your error matrix is just a bunch of zeros with the common sigma on the diagonal.

— Pass or fail! ElMaestro
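PS: In symbols (a sketch of the same rule; $\bar{y}_{T,\mathrm{TR}}$ denotes the arithmetic mean of the, possibly ln-transformed, T observations of the subjects in sequence TR, and so on):

$$\mathrm{LSM}_T = \tfrac{1}{2}\left(\bar{y}_{T,\mathrm{TR}} + \bar{y}_{T,\mathrm{RT}}\right), \qquad \mathrm{LSM}_R = \tfrac{1}{2}\left(\bar{y}_{R,\mathrm{TR}} + \bar{y}_{R,\mathrm{RT}}\right)$$

i.e. the unweighted average of the two sequence-by-treatment cell means, regardless of whether the sequences are balanced.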
yicaoting ★ NanKing, China, 2011-10-03 17:28 (4950 d 08:47 ago) @ ElMaestro Posting: # 7412 Views: 14,478 |
|
Dear ElMaestro,

❝ 1. You take the mean of T in Seq TR and the mean of T in Seq RT. You add them and divide by two. Do the same for R.

Thank you for your guidance; your equation works well for calculating the LSMs for unequal-sequence data (both original and ln-transformed). Thank you again. But how is the SE calculated in WinNonlin? And why are the SEs the same for R and T?

❝ 2. The crucial variability for a 2,2,2-BE design, from which the 90% CI is derived, is the residual sigma (which is your [pseudo-]within variability). Since there is no true replication of either T or R, you cannot derive a within-subject variability separately for T or R. Think of it in matrix terms: your error matrix is just a bunch of zeros with the common sigma on the diagonal.

Thank you for your explanation of the residual sigma.
ElMaestro ★★★ Denmark, 2011-10-03 17:36 (4950 d 08:39 ago) @ yicaoting Posting: # 7413 Views: 14,484 |
|
Dear yicaoting, ❝ But how to calculate SE in WinNonlin? The same SEs for R and T, why? I don't use WinNonlin so can't be of help. Answer to the other aspect (same SEs for R and T) is in my pt. 2 above. best regards, EM. |
Helmut ★★★ ![]() ![]() Vienna, Austria, 2011-10-03 18:14 (4950 d 08:01 ago) @ yicaoting Posting: # 7415 Views: 14,441 |
|
Dear yicaoting!

❝ But how is the SE calculated in WinNonlin?

After reading the entire LinMix section of the manual I'm confused as well. Consider registering at Pharsight's Extranet and asking there. Maybe they come up with a more comprehensible description.

BTW, you are not alone. A similar question did not get a single answer…

— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
ElMaestro ★★★ Denmark, 2011-10-03 22:10 (4950 d 04:05 ago) @ Helmut Posting: # 7418 Views: 14,471 |
|
Dear Both, perhaps the apps mean to calculate the between-subject SE's (can be calculated for both T and R and assumed to be equal [not a word about Welch!])? EM |
Pankaj Bhangale ☆ India, 2011-10-04 16:52 (4949 d 09:23 ago) @ ElMaestro Posting: # 7422 Views: 14,396 |
|
❝ Dear ElMaestro,
❝
❝ ❝ 1. How do I calculate the LSM and SE for each treatment? I want to calculate them manually; can anyone give me the exact equations?
❝
❝ 1. You take the mean of T in Seq TR and the mean of T in Seq RT. You add them and divide by two. Do the same for R.

For example: suppose n1 (no. of subjects in sequence TR) = 8 and n2 (no. of subjects in sequence RT) = 6 (i.e. unequal sequences), with
M1 = mean of T in sequence TR (n1 = 8) = 86.62
M2 = mean of T in sequence RT (n2 = 6) = 67.73
According to your formula, the combined mean is (86.62 + 67.73)/2 = 77.175. This formula is correct when n1 = n2, but here n1 and n2 are different. I suggest the formula
combined mean = ((n1*M1) + (n2*M2))/(n1 + n2)
so the mean of T over sequences TR and RT = ((8*86.62) + (6*67.73))/(8 + 6) = 78.52.

Thanks, Best Regards, Pankaj Bhangale
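Put side by side (same notation as above, sketched for treatment T):

$$\mathrm{LSM}_T = \frac{M_1 + M_2}{2}, \qquad \text{weighted mean}_T = \frac{n_1 M_1 + n_2 M_2}{n_1 + n_2}$$

The two coincide only when $n_1 = n_2$: the LSM gives both sequence means equal weight, whereas the weighted mean weights them by the number of subjects.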
Helmut ★★★ ![]() ![]() Vienna, Austria, 2011-10-05 03:53 (4948 d 22:22 ago) @ Pankaj Bhangale Posting: # 7425 Views: 14,346 |
|
Dear Pankaj,

personally I have great sympathy for your procedure (suggesting a weighted mean). With yicaoting's data of formulation 1 we would get (238.92·7 + 231.38·9)/(7+9) = 234.68. Unfortunately both SAS and Phoenix/WinNonlin report only the LSM (238.92 + 231.38)/2 = 235.15 (which is required in many guidelines). No big deal if sequences are not too unbalanced. I think your suggestion is reasonable, but you open a can of worms (see also this post and ElMaestro's reply).

— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
d_labes ★★★ Berlin, Germany, 2011-10-04 11:39 (4949 d 14:36 ago) @ yicaoting Posting: # 7420 Views: 15,349 |
|
Dear yicaoting,

❝ I have tried to analyse these unequal-sequence data in SAS and Stata; both failed.
❝
❝ In SAS, I used the following code:
❝ ...
❝ It returned:
❝
❝ 182 quit;

Sorry, but I can't reproduce your Error(neous) result utilizing SAS 9.2. Using your data and your code (exactly as given above by you) all goes fine, and I get for the LSM section:
Least Squares Means

So, what did you do here? Check your variables. The error that occurred hints at a left-hand variable in the model which has only missing values everywhere.

BTW: We have already had several discussions here on unbalanced / incomplete data. See for instance this thread. But be warned: rather lengthy and nitpicking.

— Regards, Detlew
yicaoting ★ NanKing, China, 2011-10-04 21:37 (4949 d 04:38 ago) @ d_labes Posting: # 7423 Views: 14,475 |
|
Dear d_labes,

Thanks for testing my SAS code. Now I can manually calculate the LSMs and obtain the same results as WinNonlin and SAS. But a new problem arises: both WNL and SAS give the same SE for R and T; why? And the SEs and 90% CIs for R and T are not equal between WNL and SAS (even for balanced-sequence data); which is reliable?

Let me take the results for my unequal-sequence data as an example.

In WNL:
For the original data, I get:
Treatment LSM SE CI level T_critical LowerCI UpperCI
For the "Ln(x)" transformed data, I get:
Treatment LSM SE CI level T_critical LowerCI UpperCI

In SAS:
For the original data, I get:
H0:LSMean1=
For the "Ln(x)" transformed data, I get:
Least Squares Means

It really puzzles me; I eagerly need help from d_labes, HS, yjlee168, ElMaestro or others who are expert in this problem. My greatest thanks to you.
Helmut ★★★ ![]() ![]() Vienna, Austria, 2011-10-05 03:01 (4948 d 23:14 ago) @ yicaoting Posting: # 7424 Views: 14,510 |
|
Dear yicaoting, yeah, that’s funny. I’ve compared WNL to SAS numerous times – but only looking at LSM, the PE and CI. Never checked the SE / CI of the formulations. I’ll send Simon an invitation to join the party. — Dif-tor heh smusma 🖖🏼 Довге життя Україна! ![]() Helmut Schütz ![]() The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes |
d_labes ★★★ Berlin, Germany, 2011-10-05 14:45 (4948 d 11:30 ago) @ yicaoting Posting: # 7426 Views: 14,324 |
|
Dear yicaoting!

❝ Both WNL and SAS give the same SE for R and T; why?

I myself have accepted this as a fact within the 2×2 crossover design evaluation without really understanding the 'Why'. Since the term 'least squares mean' is ascribed to SAS …

❝ The SEs and 90% CIs for R and T are not equal between WNL and SAS (even for balanced-sequence data); which is reliable?

Wow! That's curious. But it does not matter too much, because we are not interested in the LSMeans themselves but in the treatment difference (in the log-transformed domain) and the 90% CI for that. As long as these results are identical, all is right with the world, I think.

— Regards, Detlew
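PS: One way to see the 'Why', sketched under the standard all-fixed 2×2 model with $n_1$ subjects in sequence TR, $n_2$ in RT and residual mean square $\hat{\sigma}^2$: the reported standard errors work out to

$$\mathrm{SE}(\mathrm{LSM}_T) = \mathrm{SE}(\mathrm{LSM}_R) = \sqrt{\frac{\hat{\sigma}^2}{4}\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}, \qquad \mathrm{SE}(\widehat{T-R}) = \sqrt{\frac{\hat{\sigma}^2}{2}\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}$$

Both treatment SEs depend only on $n_1$, $n_2$ and the single pooled variance estimate, which is why they are always identical, and $\mathrm{SE}(\mathrm{LSM}) = \mathrm{SE}(\widehat{T-R})/\sqrt{2}$. If subject(sequence) is treated as random, the between-subject variance is added to $\hat{\sigma}^2$ in the LSM SEs (but not in the SE of the difference), so the LSMs get larger SEs while the CI of the difference is unchanged.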
ElMaestro ★★★ Denmark, 2011-10-05 15:23 (4948 d 10:52 ago) @ d_labes Posting: # 7427 Views: 14,315 |
|
Very well said, d_labes. LSMeans and type III SS border a little on religion.

For LSMeans, one potential argument for using them is that they are directly extractable as the model effects (b) from the standard linear model y = Xb when it has been fit by least squares and proper contrasts were used in X. The difference is directly extractable regardless of the contrasts used.

Least-squares effects might be a better term?

EM
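As an illustration of the 'proper contrasts' point (a sketch, assuming sum-to-zero effect coding for every factor, so that the period, sequence and subject effects each average to zero in the LS-mean contrast):

$$\mathrm{LSM}_T = \hat{\beta}_0 + \hat{\tau}_T, \qquad \mathrm{LSM}_T - \mathrm{LSM}_R = \hat{\tau}_T - \hat{\tau}_R$$

With reference (treatment-contrast) coding the individual LSMs are no longer single coefficients, but the difference is still just a single treatment coefficient, which matches the point that the difference is coding-invariant.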
Helmut ★★★ ![]() ![]() Vienna, Austria, 2011-10-08 04:21 (4945 d 21:54 ago) @ d_labes Posting: # 7449 Views: 14,031 |
|
Comrades!

❝ ❝ The SEs and 90% CIs for R and T are not equal between WNL and SAS (even for balanced-sequence data); which is reliable?
❝ ❝ Wow! That's curious.

Mystery solved (?) after this post.
If you want to get the same LSMs/SEs/CIs of the treatments in PHX/WNL as in SAS, you have to set all effects as fixed. Maybe you can try SAS' Proc MIXED instead of Proc GLM and tweak it in order to get LSMEANS similar to PHX/WNL's. But I'm not equipped with SAS.

— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
d_labes ★★★ Berlin, Germany, 2011-10-10 17:46 (4943 d 08:29 ago) @ Helmut Posting: # 7459 Views: 14,272 |
|
Dear Helmut, dear All!

Here are the results with real mixed-model software:

Proc mixed data=dose_equivalence;

(only the least squares means part shown, for the ln-transformed data)
It seems there is perfect consistency of the results.

Remember (from our previous discussions about this topic) that SAS Proc GLM fits all effects as fixed effects. The random statement handles the named effects as random only in a "post-hoc manner", whatever this really means. That's the reason why our captain calls this statement within Proc GLM bogus. WNL, on the other hand, obviously uses the real mixed-model solution in its default evaluation.

So far so good. To conform with the holy scripture (term invented by EM), page 15, "The terms to be used in the ANOVA model are usually sequence, subject within sequence, period and formulation. Fixed effects, rather than random effects, should be used for all terms.", the PHX/WNL user should abandon the default and define all effects as fixed.

That answers yicaoting's question "Which is reliable?": for the great oracle EMA, the SAS results using Proc GLM (random statement or not) or the WNL results after setting all effects fixed.

From a scientific point of view there are good reasons to use the real mixed-effects solution. Fortunately the outcome we are interested in at the end - the difference between formulations and its 90% CI - doesn't depend on the choice.

— Regards, Detlew
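PS: For anyone wanting to reproduce this, a Proc MIXED call along these lines should do (a sketch; variable names as used elsewhere in the thread, with lnauc standing for the ln-transformed AUC):

  proc mixed data=dose_equivalence;
    class sequence subject period formulation;
    model lnauc = sequence period formulation;  /* only the fixed effects go into the MODEL statement */
    random subject(sequence);                   /* subject within sequence as a true random effect */
    lsmeans formulation / diff cl alpha=0.1;    /* LSMs with SEs/CIs plus the 90% CI of the difference */
  run;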
Helmut ★★★ ![]() ![]() Vienna, Austria, 2011-10-10 18:30 (4943 d 07:45 ago) @ d_labes Posting: # 7460 Views: 13,910 |
|
Dear D. Labes,

THX for the comparison! I guess Pharsight will add a note to the manual. PHX/WNL users will learn to live with many lines of "not estimable" in the output if all effects are fixed.
Effect:Level Estimate StdError Denom_DF T_stat P_value Conf T_crit Lower_CI Upper_CI

— Dif-tor heh smusma 🖖🏼 Довге життя Україна! Helmut Schütz The quality of responses received is directly proportional to the quality of the question asked. 🚮 Science Quotes
ElMaestro ★★★ Denmark, 2011-10-10 21:53 (4943 d 04:23 ago) @ Helmut Posting: # 7463 Views: 14,070 |
|
Dear HS,

❝ THX for the comparison! I guess Pharsight will add a note to the manual. PHX/WNL users will learn to live with many lines of "not estimable" in the output if all effects are fixed.

Fortunately you can get rid of a lot of those "not estimable"s. There are two kinds of "not estimable" in play: those effects which exist but whose addition does not increase the rank of the model matrix (e.g. the column for treatment 2 is the intercept minus treatment 1, hence the loss of a treatment df), and those which simply don't exist. The former is trivial; the latter should be trivial and easily avoided because subjects are uniquely coded, but it seems certain bad habits are difficult to get rid of.

EM.
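To picture the first kind (a sketch): with an intercept column and indicator columns for both treatments the model matrix satisfies

$$\mathbf{1} = x_T + x_R$$

so one treatment column adds nothing to the rank and its coefficient is flagged as not estimable, although the treatment difference (and the LSMs) remain estimable. The second kind corresponds to factor-level combinations that never occur in the data, e.g. a uniquely numbered subject paired with the sequence he was not randomised to.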
yicaoting ★ NanKing, China, 2011-10-13 20:39 (4940 d 05:36 ago) @ Helmut Posting: # 7481 Views: 14,335 |
|
Dear HS, d_labes and ElMaestro,

Thank you all for your great discussion on the LSM comparison between WNL and SAS. The concept of "real mixed effects" in WNL or SAS is the key factor that leads to different SEs, and thus different 90% CIs, for the LSMs.

Now let me go on with the story of Proc GLM and Proc Mixed in SAS, beginning with my test results. All tests were performed on untransformed data.

----GLM 1 and GLM 2----Start----

proc glm data=dose_equivalence;
  class subject sequence period formulation;
  model AUC=sequence subject(sequence) period formulation;
  random subject(sequence) / test;
  lsmeans formulation/stderr pdiff cl alpha=0.1;
run; quit;

proc glm data=dose_equivalence;
  class subject sequence period formulation;
  model AUC=sequence subject(sequence) period formulation;
  lsmeans formulation/stderr pdiff cl alpha=0.1;
run; quit;

The GLM Procedure

----GLM 1 and GLM 2----End----

----GLM 3 and GLM 4----Start----

proc glm data=dose_equivalence;
  class subject sequence period formulation;
  model AUC=sequence period formulation;
  random subject(sequence) / test;
  lsmeans formulation/stderr pdiff cl alpha=0.1;
run; quit;

proc glm data=dose_equivalence;
  class subject sequence period formulation;
  model AUC=sequence period formulation;
  lsmeans formulation/stderr pdiff cl alpha=0.1;
run; quit;

The GLM Procedure

----GLM 3 and GLM 4----End----

----Mixed 1----Start----

proc mixed data=dose_equivalence;
  class subject sequence period formulation;
  model AUC=sequence subject(sequence) period formulation;
  random subject(sequence) / subject=subject;
  lsmeans formulation/cl diff alpha=0.1;
run; quit;

The Mixed Procedure

----Mixed 1----End----
yicaoting ★ NanKing, China, 2011-10-13 20:52 (4940 d 05:23 ago) @ Helmut Posting: # 7482 Views: 13,912 |
|
----Mixed 2----Start----

proc mixed data=dose_equivalence;
  class subject sequence period formulation;
  model AUC=sequence subject(sequence) period formulation;
  lsmeans formulation/cl diff alpha=0.1;
run; quit;

The Mixed Procedure

----Mixed 2----End----

----Mixed 3----Start----

proc mixed data=dose_equivalence;
  class subject sequence period formulation;
  model AUC=sequence period formulation;
  random subject(sequence) / subject=subject;
  lsmeans formulation/cl diff alpha=0.1;
run; quit;

The Mixed Procedure

----Mixed 3----End----

----Mixed 4----Start----

proc mixed data=dose_equivalence;
  class subject sequence period formulation;
  model AUC=sequence period formulation;
  lsmeans formulation/cl diff alpha=0.1;
run; quit;

The Mixed Procedure

----Mixed 4----End----
yicaoting ★ NanKing, China, 2011-10-13 21:05 (4940 d 05:10 ago) @ Helmut Posting: # 7485 Views: 14,077 |
|
❝ untransformed
❝ PHX/WNL 6.2
❝
❝ random: sub(sequence)
❝ Treatment      LSM         SE        LowerCI     UpperCI
❝ 1          235.1521   12.7463    213.1332    257.1711
❝ 2          231.8667   12.7463    209.8477    253.8856
❝ --------------------------------------------------------
❝ 1 - 2    3.2854762  10.387277   0.75644   -15.009737   21.580689
❝
❝ fixed: sequence+formulation+period+sub(sequence)
❝ Treatment      LSM         SE        LowerCI     UpperCI
❝ 1          235.1521    7.34491    222.2155    248.0888
❝ 2          231.8667    7.34491    218.9300    244.8033
❝ --------------------------------------------------------
❝ 1 - 2    3.2854762  10.387277   0.75644   -15.009737   21.580689

As HS calculated, WNL's "fixed: sequence+formulation+period, random: sub(sequence)" and "fixed: sequence+formulation+period+sub(sequence)" generate identical results for Diff 1-2, its SE 10.387277 and the 90% CI -15.009737 to 21.580689, and this is what the BE analysis is truly concerned with, so let's use this result as a temporary "gold standard".

It can be seen that when Proc GLM is used (GLM 3 and GLM 4), one should never use model AUC=sequence period formulation, even if random subject(sequence) / test; is added. However, when Proc Mixed is used (Mixed 3), you can use model AUC=sequence period formulation; but remember to specify random subject(sequence) / subject=subject; as the random effect.

Now let's consider WNL's "fixed: sequence+formulation+period+sub(sequence)" as a true fixed-effects analysis. It can be seen that both GLM 1 and GLM 2 are in fixed mode even if random subject(sequence) / test; is added (see GLM 2); this was previously discussed as the so-called "… post hoc fashion …". When Proc Mixed is used, once you specify model AUC=sequence subject(sequence) period formulation; SAS will consider subject(sequence) as a random effect (Mixed 1 and 2); regardless of whether random subject(sequence) / subject=subject; is specified or not (Mixed 2), the results are identical.

Since the results of SAS's GLM 1, GLM 2, Mixed 1 and Mixed 2 are all identical (the SEs for the LSMs of R and T are both 7.344914, the 90% CI for 1 and 2 is 222.215471 to 248.088815, and the 90% CI for the difference is -15.009741 to 21.580693), can we conclude that this result is reliable?

Besides, as shown in Mixed 2, can we manually obtain SE = 7.344914 from the result of the Type 3 Tests of Fixed Effects? I have tried, but failed.

Mixed 3 gives the right 90% CI for the difference, but strange 90% CIs for the LSMs of R (212.70 to 257.60) and T (209.42 to 254.32).

The results of WNL's default setting (fixed: sequence+formulation+period) were never obtained by any of my attempts in SAS with Proc GLM or Proc Mixed and many optional settings. So maybe it is time to suspect WNL's default setting in the BE Wizard? Do you agree?

Thanks to HS, d_labes and ElMaestro for your kind patience on this topic.

Edit: Sorry yicaoting, I tried to edit your post in order to get a more compact style. [Helmut]
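A quick cross-check of the SEs quoted above, using only the numbers in the output and the relation expected under the all-fixed model (SE of a treatment LSM = SE of the difference divided by √2):

$$10.387277/\sqrt{2} = 7.3449 \approx 7.34491$$

which matches the all-fixed run, whereas the default (mixed) run reports 12.7463 because the between-subject variance enters the LSM SEs but not the SE of the difference.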
ElMaestro ★★★ Denmark, 2011-10-14 02:02 (4940 d 00:13 ago) @ yicaoting Posting: # 7489 Views: 13,741 |
|
Dear yicaoting,

impressive amount of work. I must admit you lost me completely quite early here. The reason might be that I do not speak WNL or SAS and/or that my brain is walnut-sized. My problem is I cannot see what you are trying to achieve. For my learning purposes and/or your consideration:

1. Why would you use PROC MIXED when not specifying a random effect?
2. Do you know what the documented behaviour of PROC MIXED is when no random effect is specified? There is an example in the online manual, but it does not tell what the general behaviour is.
3. Fitting a mixed model with just a lone sigma2 on the diagonal of the covariance matrix (= sigma2·I) is conceptually similar to the linear model, the difference being just that missing values do not mean discarded subjects with PROC MIXED.
4. What is the documented behaviour of PROC MIXED when you specify the same effect as both random and fixed? If I get you right, in this case [subject(sequence)] it just defaulted to sigma2 on the diagonal of the covariance matrix, but would anyone really specify a mixed model that way? I speculate the inner workings might simply skip the (or better: a) random effect if it has already been specified as fixed, which in your case just leads to pt. 2 above. If it does exactly the opposite (skips the fixed effect when it is specified as random) it would lead to the same.

— Pass or fail! ElMaestro
yicaoting ★ NanKing, China, 2011-10-14 08:30 (4939 d 17:46 ago) @ ElMaestro Posting: # 7491 Views: 13,702 |
|
Dear ElMaestro,

Thank you for your continued attention to my post.

❝ My problem is I cannot see what you are trying to achieve.

My only and final purpose is to manually calculate the SEs and CIs of R and T in a BE analysis, both for balanced and unbalanced sequences (the situation of incomplete data is beyond my ability); maybe we can call it "post-BE estimation".

❝ For my learning purposes and/or your consideration:
❝ 1. Why would you use PROC MIXED when not specifying a random effect?

I used PROC MIXED without a random effect only to try and see what we get with such an unusual method, not to recommend this in a BE analysis.

BTW: After my attempts I know that if we use WNL's default settings in the BE Wizard, it is impossible to get identical results from SAS; here "identical results" means the LSMs and their SEs and CIs for R and T, and the 90% CI of the difference. Thus, maybe it is time to modify WNL's default setting, or to suspect SAS? I am really puzzled.

❝ 2. Do you know what the documented behaviour of PROC MIXED is when no random effect is specified? There is an example in the online manual, but it does not tell what the general behaviour is.

I really want to know, but Google gives me no concrete answer. Maybe it is too complex to list all the calculation steps of Proc Mixed, but I really want to manually calculate the SE and CI. Many programs are able to handle matrices and variance-covariance matrices, so I think it is possible, though it might be time-consuming; unfortunately, no one can tell me how to do it.

All in all, even if we stop the game of Proc Mixed vs. Proc GLM: is it possible to manually calculate the CIs of the PK metrics for R and T in a 2×2 crossover design? Without WNL, SAS or other software with similar functionality, can't we get them?

Another issue: I know that in NCSS 2007's TOST analysis the SEs for R and T are different, and I have derived its calculation steps; it uses a pooled SE from the data sets of the two sequences for each treatment. Although it does not give a CI, one can easily be calculated as LSMean ± t·SE. Personally, I think different SEs for R and T are more reasonable than identical SEs. What's your opinion? Thus, which CI is true or acceptable: SAS's (same as WNL's) or NCSS's?

Again, thank you for your attention.
ElMaestro ★★★ Denmark, 2011-10-14 15:23 (4939 d 10:53 ago) @ yicaoting Posting: # 7494 Views: 13,776 |
|
Hi yicaoting,

❝ BTW: After my attempts I know that if we use WNL's default settings in the BE Wizard, it is impossible to get identical results from SAS; here "identical results" means the LSMs and their SEs and CIs for R and T, and the 90% CI of the difference. Thus, maybe it is time to modify WNL's default setting, or to suspect SAS? I am really puzzled.

Helmut informed us that WNL uses a default mixed model even for a 2,2,2-BE evaluation. That could be why you don't get the same result. At the end of the day you need to ask yourself: do I wish to include subjects in my 2,2,2-BE analysis which have a missing period? If your answer is yes, then use a mixed model. If your answer is no, use a linear model (or delete the subjects in question and do the mixed model, same thing).

❝ All in all, even if we stop the game of Proc Mixed vs. Proc GLM: is it possible to manually calculate the CIs of the PK metrics for R and T in a 2×2 crossover design? Without WNL, SAS or other software with similar functionality, can't we get them?

No, it is perfectly possible to calculate a CI 'manually', as long as you want to reproduce the CI you get from GLM (but not MIXED; the case of missing values in a period makes the difference). Look up the equations in Chow & Liu's book; I don't have it here.

Edit: You can also look up the equations in Potvin et al. Pharm. Stat. 7:245–262.

❝ Another issue: I know that in NCSS 2007's TOST analysis the SEs for R and T are different, and I have derived its calculation steps; it uses a pooled SE from the data sets of the two sequences for each treatment. Although it does not give a CI, one can easily be calculated as LSMean ± t·SE. Personally, I think different SEs for R and T are more reasonable than identical SEs. What's your opinion?

My opinion is that in a 2,2,2-design we do not have true replication of Test or Reference. Therefore we cannot calculate within-subject variabilities for T or R. We can calculate it for the difference. We can also derive the between-subject variability for T and R, and we can possibly even do that individually if we have a reason to do so (but don't ask me how; I don't know). So if you are looking for SEs corresponding to the intra-subject variability of T or R, look no further until you have a replicated study.

❝ Thus, which CI is true or acceptable: SAS's (same as WNL's) or NCSS's?

There is no true or false. As another example, look up on the www the heated discussions around type I, II and III sums of squares. There is a lot of personal preference and religion involved here. Some of it is written in the form of guidelines. As you saw from an earlier post by Helmut, the EU guideline now asks for subject as a fixed factor. If you can accept that as your golden standard then there's your answer.

— Pass or fail! ElMaestro
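PS: For the 'manual' calculation itself, the textbook (Chow & Liu type) formulas for the all-fixed evaluation of a 2×2 crossover on ln-transformed data are, as a sketch (with $n_1$, $n_2$ subjects in sequences TR and RT and MSE the residual mean square of the ANOVA):

$$\hat{\Delta} = \mathrm{LSM}_T - \mathrm{LSM}_R, \qquad 90\%\ \mathrm{CI} = \hat{\Delta} \pm t_{0.05,\,n_1+n_2-2}\sqrt{\frac{\mathrm{MSE}}{2}\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}$$

Exponentiating the point estimate and the limits gives the GMR and its 90% CI. This reproduces the GLM (all-effects-fixed) result; with missing periods the mixed model no longer reduces to these closed forms.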