Mean as intercept; model matrices [R for BE/BA]
Dear Bears
"Contrast coding", is my guess.
I think R uses a different method of contrasts in the model matrix. I don't have SAS so I have idea if this is correct. Try and look at SAS' model matrices yourself. Contrasts are the secret to how the fit is done; one model can be fit with different contasts, and the model coefficients depend on the contrast method. In the default setting R puts a one in the m.m. in a given column when the corresponding dataline contains this factor at the level indicated by the column. So, for example, in our example the columns corresponding to subjects will sum to two because we have two observations from each subject.
Another way of contrasting is make them sum to zero. If we want to be sure we get the model is fit with the mean as intercept, I am now inclined (bear in mind I am just an amateur equipped with a brain having the size of a walnut) to think that we need the contrasts to sum to zero; all errors in any model by definition must sum to zero. In Chow&Liu's model, mean plus all coefficients then must sum to the mean, so all coefficient must sum to zero.
EM.
❝ We just don't know the term yet.
"Contrast coding", is my guess.
I think R uses a different method of contrasts in the model matrix. I don't have SAS, so I have no idea whether this is correct; try looking at SAS's model matrices yourself. Contrasts are the secret to how the fit is done: the same model can be fit with different contrasts, and the model coefficients depend on the contrast method. With its default setting, R puts a one in the model matrix in a given column when the corresponding data line has that factor at the level indicated by the column. So, for example, in our example the columns corresponding to subjects will sum to two, because we have two observations from each subject.
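A minimal sketch of this in R (the factor names subj and trt and the 3-subject, 2-treatment layout are made up for illustration, not taken from any particular dataset):

```r
# A small balanced design: 3 subjects, each observed under 2 treatments
d <- data.frame(subj = factor(rep(1:3, each = 2)),
                trt  = factor(rep(c("R", "T"), times = 3)))

# With R's default contrasts (contr.treatment), a column gets a 1 whenever
# the data line is at that factor level, so each subject column sums to the
# number of observations on that subject (here 2)
mm <- model.matrix(~ subj + trt, data = d)
colSums(mm)
```

Here colSums() shows subj2 and subj3 each summing to 2, exactly as described above.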
Another way of contrasting is to make them sum to zero. If we want to be sure the model is fit with the mean as intercept, I am now inclined (bear in mind I am just an amateur equipped with a brain the size of a walnut) to think that we need the contrasts to sum to zero; all errors in any model by definition must sum to zero. In Chow & Liu's model, the mean plus all coefficients must then sum to the mean, so all coefficients must sum to zero.
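As a sketch of that idea in R (again with made-up data): contr.sum codes each factor so that its effects sum to zero, and in a balanced design the intercept then comes out as the grand mean.

```r
d <- data.frame(subj = factor(rep(1:3, each = 2)),
                trt  = factor(rep(c("R", "T"), times = 3)),
                y    = c(10, 12, 20, 24, 30, 36))

# Sum-to-zero ("effect") coding: each factor's coefficients sum to zero,
# so with this balanced design the intercept equals the grand mean of y
fit <- lm(y ~ subj + trt, data = d,
          contrasts = list(subj = "contr.sum", trt = "contr.sum"))

coef(fit)[["(Intercept)"]]   # equals mean(d$y)
mean(d$y)
```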
EM.
Complete thread:
- Mean as intercept; model matrices ElMaestro 2009-05-20 20:35
- Mean as intercept; model matrices yjlee168 2009-05-21 13:35
- Mean as intercept; model matrices ElMaestro 2009-05-21 14:30
- Mean as intercept; model matrices ElMaestro 2009-05-22 14:39
- Mean as intercept; model matrices yjlee168 2009-05-23 20:06
- Mean as intercept; model matrices Aceto81 2009-05-26 10:28
- Mean as intercept; model matrices yjlee168 2009-05-23 19:30
- Mean as intercept; model matrices ElMaestro 2009-05-23 19:53
- Reparameterization d_labes 2009-05-25 11:47
- Reparameterization yjlee168 2009-05-25 14:17
- Reparameterization ElMaestro 2009-05-25 20:29
- Reparameterized brain d_labes 2009-05-26 08:01