ElMaestro ★★★ Belgium?, 2019-01-30 11:07 (533 d 06:55 ago) (edited by ElMaestro on 2019-01-30 13:56) Posting: # 19826 Views: 2,810

Hi all,

I am getting back to a good old topic, the LS Mean. I am asking this in very general terms, without specific reference to the typical BE model, and I really, really hope we can get beyond "if there's balance then so-and-so…". I still have pretty much no idea what an LS Mean really is. I am aware that this SAS invention is described e.g. here and its calculation here. But I am still quite unable to see what an LS Mean achieves. For example, "LS-means are predicted population margins; that is, they estimate the marginal means over a balanced population." … what does that mean, and why does this make the LS Mean more relevant than a basic model effect from the b vector of y = Xb + e? Can someone with stats knowledge tell me:
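For what it's worth, here is a minimal numeric sketch of the "marginal means over a balanced population" phrase, with entirely made-up numbers (not from any real trial): the LS Mean of a treatment level averages the cell means with equal weight per cell, whereas the raw mean weights cells by their sample sizes.

```python
# Hypothetical, made-up data: treatment T observed in two sequences
# (cells) of unequal size. The raw mean weights cells by n; the LS mean
# gives each cell equal weight, i.e. the mean over a *balanced* population.

t_in_s1 = [10.0, 12.0]               # cell 1, n = 2
t_in_s2 = [20.0, 22.0, 24.0, 26.0]   # cell 2, n = 4

raw_mean = (sum(t_in_s1) + sum(t_in_s2)) / (len(t_in_s1) + len(t_in_s2))

cell_mean_s1 = sum(t_in_s1) / len(t_in_s1)   # 11.0
cell_mean_s2 = sum(t_in_s2) / len(t_in_s2)   # 23.0
ls_mean = (cell_mean_s1 + cell_mean_s2) / 2  # equal weight per cell

print(raw_mean)  # 19.0 -> pulled toward the larger cell
print(ls_mean)   # 17.0 -> marginal mean over a balanced layout
```

With balance (equal cell sizes) the two coincide, which is exactly why the distinction only bites in unbalanced designs.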
To put it in perspective, and pardon this little provocation: Kinetica also calculated certain things on the basis of an assumption of balance, whether or not there truly was balance. And that was… well… possibly not considered widely relevant or optimal.

Please: we do not need to discuss balance and when LS Means are equal to ordinary means or treatment effects in the b vector for a BE trial. While the two are tightly related, this latter aspect is not per se what I am asking about.

Edit: Category changed; see also this post #1. [Mittyri]

— I could be wrong, but… Best regards, ElMaestro "Pass or fail" (D. Potvin et al., 2008)
Obinoscopy ★ USA, 2019-02-03 15:35 (529 d 02:27 ago) (edited by Obinoscopy on 2019-02-03 15:45) @ ElMaestro Posting: # 19850 Views: 2,434

Hi ElMaestro,

I guess LS Means still give a lot of people headaches, which explains why your thread has been avoided like a plague. Well, I am not an expert, but lemme try and say what I know… my widow's mite. I am responding to questions (1) and (3).

I think the LS Mean is a more accurate estimate of the mean of Ln[T] and Ln[R] in crossover studies. This is because the other mean estimates do not take cognizance of the fact that var(Ln[T]) in sequence TR is different from var(Ln[T]) in sequence RT. The same goes for Ln[R]. In effect, the two populations (sequences TR and RT) are different.

I usually imagine a situation where a drug has a lower AUC in period II than in period I (perhaps due to the action of antibodies developed during period I). In such a case it would not be advisable to find the means of Ln[T] and Ln[R] without accounting for the period effect. If that is not done, the mean Ln[T] might be underestimated relative to the mean Ln[R] if the study was not balanced (RT having more subjects than TR).

The Model Residual is the variance for the difference of LS Means between Ln[T] and Ln[R]. Lemme do some little mathematical manipulations (I might be wrong though) to buttress my point:

Diff of LSM = {(Ln[T] in TR) + (Ln[T] in RT)}/2 − {(Ln[R] in TR) + (Ln[R] in RT)}/2
Diff of LSM = ½ {(Ln[T] in TR) + (Ln[T] in RT) − (Ln[R] in TR) − (Ln[R] in RT)}
Diff of LSM = ½ {(Ln[T] in TR) − (Ln[R] in TR) + (Ln[T] in RT) − (Ln[R] in RT)}
Diff of LSM = {(Ln[T] in TR) − (Ln[R] in TR)}/2 + {(Ln[T] in RT) − (Ln[R] in RT)}/2
Diff of LSM = Residuals in TR + Residuals in RT

Thus the Mean Sum of Squares for the Residuals is the variance for the difference of LS Means. This is my arithmetolophystics and Al Jabra; I know there is a lot of magic in it. For your other questions, I guess the headaches remain until someone offers us some Panadol or Tylenol.

Regards — Scopy
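The rearrangement in steps 1–4 above can be checked numerically. A quick sketch with hypothetical log-scale data and deliberately unbalanced sequences (3 subjects in TR, 2 in RT): the difference of LS Means equals the simple average of the two within-sequence mean differences (T − R), each sequence weighted ½ regardless of its size.

```python
# Hypothetical log-scale data (made up for illustration), unbalanced:
ln_t_tr, ln_r_tr = [4.1, 4.3, 4.5], [4.0, 4.2, 4.6]   # sequence TR, n = 3
ln_t_rt, ln_r_rt = [4.4, 4.8], [4.3, 4.5]             # sequence RT, n = 2

mean = lambda x: sum(x) / len(x)

# LS means: average of per-sequence means, each sequence weighted 1/2
lsm_t = (mean(ln_t_tr) + mean(ln_t_rt)) / 2
lsm_r = (mean(ln_r_tr) + mean(ln_r_rt)) / 2
diff_lsm = lsm_t - lsm_r

# steps 3-4 of the derivation: average of within-sequence differences
diff_by_sequence = ((mean(ln_t_tr) - mean(ln_r_tr))
                    + (mean(ln_t_rt) - mean(ln_r_rt))) / 2

print(abs(diff_lsm - diff_by_sequence) < 1e-12)  # True
```

This confirms the algebra up to step 4; the jump to "Residuals" in step 5 is a separate question (see the follow-up posts).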
ElMaestro ★★★ Belgium?, 2019-02-03 18:39 (528 d 23:23 ago) @ Obinoscopy Posting: # 19851 Views: 2,416

Hello Obi,

and thanks for helping towards an understanding. I am a bit confused: what is Ln[T] in your terminology? Is it individual measurements, or effects from the b vector, or is it the average on the ln scale? And what is the starting point for this? Also, residuals, once a linear model has been fit to the least-squares criterion, sum to zero, right? I got confused when you went from step 4 to 5.

» The Model Residual is the variance for the difference of LSMeans between Ln[T] and Ln[R].

Why? I would think it is the plain uncertainty for a (within) effect estimate in the b vector when we talk crossovers. Multiply by two and you have the variance of the difference, or something like that.

— I could be wrong, but… Best regards, ElMaestro "Pass or fail" (D. Potvin et al., 2008)
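ElMaestro's point about residuals can be illustrated with a minimal least-squares sketch on simulated data (all numbers made up): once y = Xb + e is fit by ordinary least squares with an intercept column in X, the residuals themselves sum to zero, so it is their *sum of squares* that carries the variance information.

```python
import numpy as np

# Simulated data, purely for illustration of the OLS property.
rng = np.random.default_rng(42)
n = 20
x = rng.normal(size=n)
y = 1.5 + 2.0 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])        # design matrix with intercept
b, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimate of b
resid = y - X @ b

print(abs(resid.sum()) < 1e-10)             # True: residuals sum to ~0
mse = resid @ resid / (n - X.shape[1])      # residual mean square (variance)
print(mse > 0)                              # True: the MSE is what is > 0
```

So step 5 of the derivation cannot literally be "sums of residuals" (those vanish); the residual mean square is the quantity that feeds the variance of an effect estimate.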
Obinoscopy ★ USA, 2019-02-03 19:43 (528 d 22:19 ago) @ ElMaestro Posting: # 19852 Views: 2,408

» Hello Obi,
»
» thanks for helping towards an understanding.

No harm in trying.

» I am a bit confused: what is Ln[T] in your terminology? Is it individual measurements, or effects from the b vector, or is it the average on the ln scale? And what is the starting point for this?

It's the individual measurements.

» Also, residuals, once a linear model has been fit to the least-squares criterion, sum to zero, right? I got confused when you went from step 4 to 5.

Yeah, that's true: the residuals should sum to zero. The variance, however, is based on the sum of squares of the residuals, which in this case will be greater than zero. I was just trying to show the relationship between the residuals and the difference of the LS Means. The maths isn't perfect one bit.

» » The Model Residual is the variance for the difference of LSMeans between Ln[T] and Ln[R].
»
» Why? I would think it is the plain uncertainty for a (within) effect estimate in the b vector when we talk crossovers. Multiply by two and you have the variance of the difference or something like that.

Oh yes! Multiplying it by two gives the variance of the difference of LS Means. Hope I have not worsened the headache with my statistically incoherent post.

Regards — Scopy
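A hedged sketch of where the "multiply by two" rule of thumb comes from, using simulated data rather than any crossover model: for two independent sample means, Var(m1 − m2) = Var(m1) + Var(m2), which with equal group sizes n and common variance s2 gives 2·s2/n, i.e. twice the variance of a single mean.

```python
import random

# Simulation check: variance of the difference of two independent means
# of n standard-normal draws should be close to 2 * s2 / n.
random.seed(1)

n, s2, trials = 20, 1.0, 20000
diffs = []
for _ in range(trials):
    m1 = sum(random.gauss(0, 1) for _ in range(n)) / n
    m2 = sum(random.gauss(0, 1) for _ in range(n)) / n
    diffs.append(m1 - m2)

emp_var = sum(d * d for d in diffs) / trials   # mean of diffs is ~0
print(emp_var / (2 * s2 / n))                  # close to 1.0
```

In a real crossover analysis the two means are not independent and the residual MSE enters with design-specific weights, so this is only the intuition behind the factor of two, not the exact BE formula.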