# Bioequivalence and Bioavailability Forum

ElMaestro
Hero

Denmark,
2019-01-30 11:07
(edited by ElMaestro on 2019-01-30 13:56)

Posting: # 19826
Views: 632

## LSMeans, still causing me headaches [General Statistics]

Hi all,

I am getting back to a good old topic, the LS Mean. I am asking this in very general terms, without specific reference to the typical BE model, and I really, really hope we can get beyond "if there's balance then so-and-so...".

I still have pretty much no idea what an LSMean really is. I am aware that this SAS invention is described e.g. here and its calculation here.

But I am still quite unable to see what an LS Mean achieves. For example: "LS-means are predicted population margins; that is, they estimate the marginal means over a balanced population." What does that mean, and why does it make the LS Mean more relevant than a basic model effect from the b vector of y = Xb + e?

Can someone with stats knowledge tell me:
1. Generally, in which situations, in scientific (not regulatory) terms, are LSMeans relevant?
2. Why (in which situations, generally) is a least squares treatment effect (i.e. the coefficients in b) less relevant than an LSMean when the two may differ?
3. On what basis would we say that the model residual "always" can be used to generate a CI for LSMean differences?
In other words, can you convince me, using arithmetolophystics and Al Jabra, that the standard error of a treatment effect is the same as the standard error for an LSMean (for the given level of the factor of interest)?
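To make question 1 concrete, here is a toy illustration (my own made-up numbers; plain averaging, not any particular software's algorithm) of how a raw marginal mean and an "average of cell means" can differ under imbalance. The raw mean weights cells by their sample sizes; the LS-Mean-style estimate gives each cell equal weight, i.e. it predicts the margin over a balanced population:

```python
# Hypothetical data: treatment T observed in two sequences, unbalanced (4 vs 2).
cells = {
    ("T", "seq1"): [10.0, 11.0, 10.5, 10.5],  # n = 4
    ("T", "seq2"): [14.0, 15.0],              # n = 2
}

# Raw marginal mean: pool all observations, so seq1 dominates by head count.
obs_T = [y for (trt, seq), ys in cells.items() if trt == "T" for y in ys]
raw_mean = sum(obs_T) / len(obs_T)

# LS-Mean-style estimate: average the cell means with equal weight,
# as if the two sequences had equal size.
cell_means = [sum(ys) / len(ys) for (trt, seq), ys in cells.items() if trt == "T"]
ls_mean = sum(cell_means) / len(cell_means)

print(raw_mean)  # 11.833... : weighted toward the larger sequence
print(ls_mean)   # 12.5      : each sequence contributes equally
```

The two estimates answer different questions; which one is "relevant" is precisely what I am asking.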

In perspective, and pardon this little provocation: Kinetica also calculated certain things on the basis of an assumption of balance, whether or not there truly was balance. And that was... well... possibly not considered widely relevant or optimal.

Please: we do not need to discuss balance and when LSMeans equal ordinary means or treatment effects in the b vector for a BE trial. While the two are tightly related, this latter aspect is not per se what I am asking about.

```r
if (3) 4
x=c("Foo", "Bar")
b=data.frame(x)
typeof(b[,1]) ##aha, integer?
b[,1]+1 ##then let me add 1
```

Best regards,
ElMaestro

"(...) targeted cancer therapies will benefit fewer than 2 percent of the cancer patients they’re aimed at. That reality is often lost on consumers, who are being fed a steady diet of winning anecdotes about miracle cures." New York Times (ed.), June 9, 2018.
Obinoscopy
Regular

Nigeria,
2019-02-03 15:35
(edited by Obinoscopy on 2019-02-03 15:45)

@ ElMaestro
Posting: # 19850
Views: 478

## LSMeans, still causing lots of people headaches

Hi ElMaestro,

I guess LSMeans still give a lot of people headaches, which explains why your thread has been avoided like the plague.

Well, I am not an expert, but lemme try and say what I know... my widow's mite. I am responding to questions (1) and (3).

I think the LSMean is a more accurate estimate of the means of Ln[T] and Ln[R] in crossover studies. This is because the other mean estimates do not take cognizance of the fact that the var(Ln[T]) in the sequence TR is different from the var(Ln[T]) in the sequence RT. The same goes for Ln[R]. In effect, the two populations (sequences TR and RT) are different.

I usually imagine a situation where a drug has a lower AUC in period II than in period I (perhaps due to antibodies that were developed during period I). In such a case, it would not be advisable to find the means for Ln[T] and Ln[R] without accounting for the period effect. If this is not done, the mean Ln[T] might be underestimated relative to the mean Ln[R] if the study was not balanced (RT having more subjects than TR).
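To put numbers on that imagined situation (my own hypothetical values, purely for illustration): suppose the true ln-scale mean is 5.0 for T in both sequences, but a period effect subtracts 0.5 in period II, and RT has more subjects than TR. Then the pooled mean of T is pulled down, while averaging the per-sequence cell means is not:

```python
# Hypothetical scenario: period II lowers every observation by 0.5,
# and the sequences are unbalanced (TR: 2 subjects, RT: 4 subjects).
mu = 5.0            # assumed true ln-scale mean for T
period_effect = -0.5

T_in_TR = [mu] * 2                   # T given in period I (no shift)
T_in_RT = [mu + period_effect] * 4   # T given in period II (shifted)

# Pooled (raw) mean: most T values sit in the penalised period.
obs = T_in_TR + T_in_RT
raw_mean_T = sum(obs) / len(obs)

# Equal-weight average of the two sequence cell means.
cellmean_TR = sum(T_in_TR) / len(T_in_TR)
cellmean_RT = sum(T_in_RT) / len(T_in_RT)
ls_mean_T = (cellmean_TR + cellmean_RT) / 2

print(raw_mean_T)  # 4.666... : biased down by the larger RT group
print(ls_mean_T)   # 4.75     : each sequence counts once
```

The same happens to R in the opposite direction, so a raw T − R comparison picks up the period effect while the equal-weight one does not.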

The Model Residual is the variance for the difference of LSMeans between Ln[T] and Ln[R]. Lemme do a little mathematical manipulation (I might be wrong though) to buttress my point:

(1) Diff of LSM = {(Ln[T] in TR) + (Ln[T] in RT)}/2 - {(Ln[R] in TR) + (Ln[R] in RT)}/2
(2) Diff of LSM = 1/2{(Ln[T] in TR) + (Ln[T] in RT) - (Ln[R] in TR) - (Ln[R] in RT)}
(3) Diff of LSM = 1/2{(Ln[T] in TR) - (Ln[R] in TR) + (Ln[T] in RT) - (Ln[R] in RT)}
(4) Diff of LSM = {(Ln[T] in TR) - (Ln[R] in TR)}/2 + {(Ln[T] in RT) - (Ln[R] in RT)}/2
(5) Diff of LSM = Residuals in TR + Residuals in RT

Thus the Mean Sum of Squares for the Residuals is the variance for the difference of LSMeans. This is my arithmetolophystics and Al Jabra. I know there is a lot of magic in it.
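As a sanity check of the regrouping (steps 1 through 4 are pure algebra), here is a quick numeric confirmation with made-up cell means, interpreting "Ln[T] in TR" etc. as per-sequence cell means:

```python
# Assumed cell means, for illustration only.
T_TR, T_RT = 4.9, 4.6
R_TR, R_RT = 5.1, 4.4

lhs = (T_TR + T_RT) / 2 - (R_TR + R_RT) / 2   # step 1
rhs = (T_TR - R_TR) / 2 + (T_RT - R_RT) / 2   # step 4
print(abs(lhs - rhs) < 1e-12)  # True: the regrouping of terms is equivalent
```

The leap from step 4 to step 5 is a different matter, as I admit below.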

For your other questions, I guess the headache remains until someone offers us some Panadol or Tylenol.

Regards

Scopy
ElMaestro
Hero

Denmark,
2019-02-03 18:39

@ Obinoscopy
Posting: # 19851
Views: 457

## LSMeans, still causing lots of people headaches

Hellobi,

and thanks for helping towards an understanding.

I am a bit confused, what is ln[T] in your terminology?
Is it individual measurements, or effects from the b-vector, or is it the average on the ln scale?
And what is the starting point for this?

Also, residuals (once a linear model has been fit by the least-squares criterion) sum to zero, right? I got confused when you went from step 4 to 5.

» The Model Residual is the variance for the difference of LSMeans between Ln[T] and Ln[R].

Why?
I would think it is the plain uncertainty for a (within-)effect estimate in the b vector when we talk crossovers. Multiply by two and you have the variance of the difference, or something like that.
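If I recall the textbook 2×2 crossover result correctly (treat this as a recollection, e.g. from Chow & Liu, not a derivation), with MSE the residual mean square from the ANOVA and n1, n2 the subjects per sequence:

```latex
\widehat{\operatorname{Var}}\!\left(\mathrm{LSM}_T - \mathrm{LSM}_R\right)
  = \frac{\mathrm{MSE}}{2}\left(\frac{1}{n_1} + \frac{1}{n_2}\right)
```

With equal sequence sizes n1 = n2 = n/2 this reduces to 2·MSE/n, which is where the "multiply by two" hand-waving comes from. What I am asking is on what basis this same MSE serves for the LSMeans.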


Best regards,
ElMaestro

Obinoscopy
Regular

Nigeria,
2019-02-03 19:43

@ ElMaestro
Posting: # 19852
Views: 452

## LSMeans, still causing lots of people headaches

» Hellobi,
»
» thanks for helping towards an understanding.

No harm in trying

» I am a bit confused, what is ln[T] in your terminology?
» Is it individual measurements, or effects from the b-vector, or is it the average on the ln scale?
» And what is the starting point for this?

It's the individual measurements.

» Also, residuals (once a linear model has been fit by the least-squares criterion) sum to zero, right? I got confused when you went from step 4 to 5.

Yeah, that's true, the residuals should sum to zero. The variance, however, is based on the sum of squares of the residuals, which in this case will be greater than zero. I was just trying to show the relationship between the residuals and the difference of the LSMeans. The maths isn't perfect, not one bit.

» » The Model Residual is the variance for the difference of LSMeans between Ln[T] and Ln[R].
»
» Why?
» I would think it is the plain uncertainty for an (within-) effect estimate in the b vector when we talk crossovers. Multiply by two and you have the variance of the difference or something like that.

Oh yes! Multiplying it by two gives the variance of the difference of LSMeans. I hope I have not worsened the headache with my statistically incoherent post.

Regards,

Scopy


The BIOEQUIVALENCE / BIOAVAILABILITY FORUM is hosted by
Ing. Helmut Schütz