## Huge gap in my understanding [General Statistics]

Dear Martin,

It is really kind of you to try to answer, but I am afraid I must have completely failed to explain my question: your post addresses a question I did not ask and does not, I think, address the issue I hoped to get an answer for. I apologise for not expressing myself clearly.

I will try and rephrase.

We fit a model with factors and a covariate and derive treatment effects (treatment being a fixed factor). Let us say treatment has two levels, A and B.

If we look at the difference in raw treatment means for A and B, that difference may differ from the difference in LSMeans for A and B. That is a potential worry, or a confusion, on which I would like a comment. The reason is that the model's treatment effects are maximum likelihood estimates, so if the difference in maximum likelihood treatment estimates is not the same as the LSMean difference, then the LSMean difference is not a maximum likelihood difference. This, in a nutshell, is the cosmic mindf%cker I am asking about.
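The divergence described above is easy to reproduce numerically. Below is a minimal sketch (the data are made up purely for illustration): two treatment groups unbalanced in a covariate, fitted by ordinary least squares. The raw difference in group means absorbs the covariate imbalance, while the covariate-adjusted (LSMeans-style) difference, evaluated at the overall covariate mean, recovers the treatment coefficient from the least-squares fit.

```python
import numpy as np

# Hypothetical toy data: two treatments (0 = A, 1 = B) and a covariate x.
# Group B happens to sit at larger x values, so the groups are unbalanced.
t = np.array([0] * 6 + [1] * 6)
x = np.array([1.0, 2, 3, 4, 5, 6, 5, 6, 7, 8, 9, 10])
# Response depends on treatment and covariate; true treatment effect = 5.
# Noiseless for clarity, so the fit is exact.
y = 10 + 5 * t + 2 * x

# Raw difference of group means (confounded by the covariate imbalance).
raw_diff = y[t == 1].mean() - y[t == 0].mean()

# Least-squares fit of y ~ intercept + treatment + x.
X = np.column_stack([np.ones_like(x), t, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# LSMeans-style estimates: predicted means for each treatment with the
# covariate held at its overall mean.
x_bar = x.mean()
lsmean_A = beta[0] + beta[2] * x_bar
lsmean_B = beta[0] + beta[1] + beta[2] * x_bar
lsmean_diff = lsmean_B - lsmean_A

print(raw_diff)     # 13.0: true effect 5 plus 2 * (covariate gap of 4)
print(lsmean_diff)  # 5.0: equals the fitted treatment coefficient beta[1]
```

Note that in this no-interaction model the LSMeans difference coincides with the least-squares treatment coefficient; it is the raw mean difference that drifts away under imbalance.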

Would we ever, regardless of how LSMeans are otherwise defined (blah blah the LSMEANS statement produces means adjusted to the average value of the specified covariate(s) blah etc. etc.), prefer them over maximum likelihood differences?

One thing is 'adjusted for' and 'more relevant in case of imbalance' and whatnot. But that does not in itself explain why anyone would ever deviate from a conclusion based on the maximum likelihood difference, regardless of imbalance or any other model phenomenon, am I wrong? The whole point of a linear model, and of most other models, is maximum likelihood. At least in my little world.

In a nutshell: if the most likely difference between A and B (maximum likelihood by way of least squares) is 5 and the LSMean difference is 10, why would I ever prefer the less likely difference of 10? Or, more generally, why would I minimise sums of squares to generate maximum likelihood treatment differences that I then do not use, and instead minimise sums of squares to generate another type of treatment difference that sounds fancier and sexier but is less likely? More realistic is still not the same as more likely.

I am curious to hear your view on exactly and solely this aspect of *the LSMean difference not being equal to the maximum likelihood difference*. I hope the question is now reworded clearly. Again, I apologise for not being well able to express myself. Many thanks in advance.

Pass or fail!

ElMaestro

### Complete thread:

- LSMeans ElMaestro 2018-02-27 09:02 [General Statistics]
- Understanding (!) LSMeans d_labes 2018-02-28 09:49
- Huge gap in my understanding ElMaestro 2018-03-04 10:03
- Huge gap in my understanding martin 2018-03-06 14:40
- Huge gap in my understanding ElMaestro 2018-03-06 17:30
- Huge gap in my understanding martin 2018-03-06 19:58
- Huge gap in my understanding ElMaestro 2018-03-06 20:53
- Huge gap in my understanding martin 2018-03-07 08:42
